[jira] [Updated] (GEODE-4363) Add new distributed destroy action configuration
[ https://issues.apache.org/jira/browse/GEODE-4363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-4363: -- Description: Problem: eviction-action must currently be either 'local-destroy' or 'overflow-to-disk' ex: Solution: This story is to add the additional option "distributed-destroy" as an action setting for expiration-attributes. This will enable a local destroy action to be distributed across the cluster (currently this does not exist) Acceptance: can set this on region configuration via cache.xml, API or gfsh; gfsh help and docs have been updated was: Eviction-action must currently be either 'local-destroy' or 'overflow-to-disk' ex: This story is to add the additional option "distributed-destroy" as an action setting for expiration-attributes. This will enable a local destroy action to be distributed across the cluster (currently this does not exist) Acceptance: can set this on region configuration via cache.xml, API or gfsh; gfsh help and docs have been updated > Add new distributed destroy action configuration > > > Key: GEODE-4363 > URL: https://issues.apache.org/jira/browse/GEODE-4363 > Project: Geode > Issue Type: Sub-task >Reporter: Fred Krone >Priority: Major > > > Problem: eviction-action must currently be either 'local-destroy' > or 'overflow-to-disk' > ex: > Solution: This story is to add the additional option "distributed-destroy" as > an action setting for expiration-attributes. > This will enable a local destroy action to be distributed across the cluster > (currently this does not exist) > > Acceptance: can set this on region configuration via cache.xml, API or gfsh > gfsh help and docs have been updated > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-4363) Add new distributed destroy action configuration
[ https://issues.apache.org/jira/browse/GEODE-4363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-4363: -- Description: Currently eviction-action must be either 'local-destroy' or 'overflow-to-disk' ex: This story is to add the additional option "distributed-destroy" as an action setting for expiration-attributes. This will enable a local destroy action to be distributed across the cluster (currently this does not exist) Acceptance: can set this on region configuration via cache.xml, API or gfsh; gfsh help and docs have been updated was: Should add action="distributed-destroy" eviction-action currently must be either 'local-destroy' or 'overflow-to-disk' Acceptance: can set this on region configuration via cache.xml, API or gfsh; gfsh help and docs have been updated > Add new distributed destroy action configuration > > > Key: GEODE-4363 > URL: https://issues.apache.org/jira/browse/GEODE-4363 > Project: Geode > Issue Type: Sub-task >Reporter: Fred Krone >Priority: Major > > > Currently eviction-action must be either 'local-destroy' or > 'overflow-to-disk' > ex: > This story is to add the additional option "distributed-destroy" as an action > setting for expiration-attributes. > This will enable a local destroy action to be distributed across the cluster > (currently this does not exist) > > Acceptance: can set this on region configuration via cache.xml, API or gfsh > gfsh help and docs have been updated > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
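[Editor's note] The "ex:" placeholder in the ticket above was left blank. For context, a minimal cache.xml sketch of where the eviction-action setting lives and the two values allowed today might look like the following. The region name and maximum are illustrative, not from the ticket; note the ticket text mentions both eviction-action and expiration-attributes, and this sketch shows only the eviction form:

```xml
<region name="exampleRegion">
  <region-attributes>
    <eviction-attributes>
      <!-- action is currently limited to "local-destroy" or "overflow-to-disk";
           the ticket proposes adding "distributed-destroy" -->
      <lru-entry-count maximum="1000" action="local-destroy"/>
    </eviction-attributes>
  </region-attributes>
</region>
```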
[jira] [Updated] (GEODE-5057) Remove 'experimental' from JDBC Connector code
[ https://issues.apache.org/jira/browse/GEODE-5057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-5057: -- Description: AC: 'experimental' added back into JDBC Connector code. was:Remove 'experimental' from JDBC Connector code. > Remove 'experimental' from JDBC Connector code > -- > > Key: GEODE-5057 > URL: https://issues.apache.org/jira/browse/GEODE-5057 > Project: Geode > Issue Type: Task > Components: extensions >Affects Versions: 1.6.0 >Reporter: Anilkumar Gingade >Priority: Major > Labels: pull-request-available > Time Spent: 20m > Remaining Estimate: 0h > > > > AC: 'experimental' added back into JDBC Connector code. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (GEODE-5180) Update documentation for JDBC Connector using jndi binding
Fred Krone created GEODE-5180: - Summary: Update documentation for JDBC Connector using jndi binding Key: GEODE-5180 URL: https://issues.apache.org/jira/browse/GEODE-5180 Project: Geode Issue Type: Sub-task Reporter: Fred Krone [https://cwiki.apache.org/confluence/display/GEODE/Simple+JDBC+Connector] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (GEODE-5179) Add --data-source=value attribute to gfsh create jdbc-mapping
Fred Krone created GEODE-5179: - Summary: Add --data-source=value attribute to gfsh create jdbc-mapping Key: GEODE-5179 URL: https://issues.apache.org/jira/browse/GEODE-5179 Project: Geode Issue Type: Sub-task Reporter: Fred Krone This is the name for JNDI data source for GPDB JDBC connection. Required: true -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (GEODE-5178) Remove gfsh create-jdbc connection
Fred Krone created GEODE-5178: - Summary: Remove gfsh create-jdbc connection Key: GEODE-5178 URL: https://issues.apache.org/jira/browse/GEODE-5178 Project: Geode Issue Type: Sub-task Components: regions Reporter: Fred Krone This should be removed. The user will now use create jndi-binding and that will be referenced in create jdbc-mapping -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-4707) Have jdbc connector look for configured jndi binding
[ https://issues.apache.org/jira/browse/GEODE-4707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-4707: -- Description: Given I want to configure JDBC Connector WHEN I use the jndi-binding command to create a datasource connection THEN JDBC Connector should use this configuration I should be able to reference this data-source name in jdbc-mapping I should not see or have the option to use create jdbc-connection Acceptance: gfsh create jdbc-connection is not available to the user anymore Create jdbc-mapping has --data-source=value attribute which references the name for JNDI data source for GPDB JDBC connection. Creating jndi-binding and referencing it in jdbc-mapping works as expected (reads/writes from database) Tests are updated accordingly Documentation is updated was: A user should be able to create a datasource using the gfsh command {{create jndi-binding }} Then a datasource will be created with the supplied options and the binding will be created without the user having to restart the existing server(s) > Have jdbc connector look for configured jndi binding > -- > > Key: GEODE-4707 > URL: https://issues.apache.org/jira/browse/GEODE-4707 > Project: Geode > Issue Type: Improvement > Components: regions >Reporter: Fred Krone >Priority: Major > Labels: jdbc_connector > > Given I want to configure JDBC Connector > WHEN I use the jndi-binding command to create a datasource connection > THEN JDBC Connector should use this configuration > I should be able to reference this data-source name in jdbc-mapping > I should not see or have the option to use create jdbc-connection > > Acceptance: gfsh create jdbc-connection is not available to the user anymore > Create jdbc-mapping has --data-source=value attribute which references the > name for JNDI data source for GPDB JDBC connection. 
> Creating jndi-binding and referencing it in jdbc-mapping works as expected > (reads/writes from database) > Tests are updated accordingly > Documentation is updated > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-4707) Have jdbc connector look for configured jndi binding
[ https://issues.apache.org/jira/browse/GEODE-4707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-4707: -- Component/s: (was: extensions) regions > Have jdbc connector look for configured jndi binding > -- > > Key: GEODE-4707 > URL: https://issues.apache.org/jira/browse/GEODE-4707 > Project: Geode > Issue Type: Improvement > Components: regions >Reporter: Fred Krone >Priority: Major > Labels: jdbc_connector > > A user should be able to create a datasource using the gfsh command {{create > jndi-binding }} > Then a datasource will be created with the supplied options and the binding > will be created without the user having to restart the existing server(s) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (GEODE-5111) show missing-disk-stores sometimes does not show the missing disk stores
[ https://issues.apache.org/jira/browse/GEODE-5111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone resolved GEODE-5111. --- Resolution: Fixed Fix Version/s: 1.7.0 > show missing-disk-stores sometimes does not show the missing disk stores > > > Key: GEODE-5111 > URL: https://issues.apache.org/jira/browse/GEODE-5111 > Project: Geode > Issue Type: Bug >Reporter: Lynn Gallinat >Assignee: Lynn Gallinat >Priority: Major > Labels: persistence, pull-request-available > Fix For: 1.7.0 > > Time Spent: 1h > Remaining Estimate: 0h > > When the Geode logs show there is in fact a missing disk store, running > the show missing-disk-stores command sometimes reports that there are no > missing disk stores -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (GEODE-4966) Add Security Permissions for JDBC gfsh commands
[ https://issues.apache.org/jira/browse/GEODE-4966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431351#comment-16431351 ] Fred Krone edited comment on GEODE-4966 at 5/1/18 9:49 PM: --- We've updated the two wiki pages. [http://geode.apache.org/docs/guide/14/managing/security/implementing_authorization.html] should go into the same release with the JDBC release notes (currently targeting 1.8). was (Author: fkrone): We've updated the two wiki pages. We will assign to docs once we remove experimental tag. > Add Security Permissions for JDBC gfsh commands > --- > > Key: GEODE-4966 > URL: https://issues.apache.org/jira/browse/GEODE-4966 > Project: Geode > Issue Type: Task > Components: docs >Reporter: Barbara Pruijn >Assignee: Joey McAllister >Priority: Major > > Please make sure security permissions are documented for the jdbc commands on > these pages: > [http://geode.apache.org/docs/guide/14/managing/security/implementing_authorization.html] > [https://cwiki.apache.org/confluence/display/GEODE/Geode+Integrated+Security] > [https://cwiki.apache.org/confluence/display/GEODE/Finer%20grained%20security] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (GEODE-4966) Add Security Permissions for JDBC gfsh commands
[ https://issues.apache.org/jira/browse/GEODE-4966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone reassigned GEODE-4966: - Assignee: Joey McAllister > Add Security Permissions for JDBC gfsh commands > --- > > Key: GEODE-4966 > URL: https://issues.apache.org/jira/browse/GEODE-4966 > Project: Geode > Issue Type: Task > Components: docs >Reporter: Barbara Pruijn >Assignee: Joey McAllister >Priority: Major > > Please make sure security permissions are documented for the jdbc commands on > these pages: > [http://geode.apache.org/docs/guide/14/managing/security/implementing_authorization.html] > [https://cwiki.apache.org/confluence/display/GEODE/Geode+Integrated+Security] > [https://cwiki.apache.org/confluence/display/GEODE/Finer%20grained%20security] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-4966) Add Security Permissions for JDBC gfsh commands
[ https://issues.apache.org/jira/browse/GEODE-4966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-4966: -- Component/s: (was: regions) > Add Security Permissions for JDBC gfsh commands > --- > > Key: GEODE-4966 > URL: https://issues.apache.org/jira/browse/GEODE-4966 > Project: Geode > Issue Type: Task > Components: docs >Reporter: Barbara Pruijn >Priority: Major > > Please make sure security permissions are documented for the jdbc commands on > these pages: > [http://geode.apache.org/docs/guide/14/managing/security/implementing_authorization.html] > [https://cwiki.apache.org/confluence/display/GEODE/Geode+Integrated+Security] > [https://cwiki.apache.org/confluence/display/GEODE/Finer%20grained%20security] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-5063) AbstractRegionMap.basicPut may get old value more than once
[ https://issues.apache.org/jira/browse/GEODE-5063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-5063: -- Labels: AbstractRegionMap (was: arm) > AbstractRegionMap.basicPut may get old value more than once > --- > > Key: GEODE-5063 > URL: https://issues.apache.org/jira/browse/GEODE-5063 > Project: Geode > Issue Type: Improvement > Components: regions >Reporter: Darrel Schneider >Priority: Major > Labels: AbstractRegionMap > > AbstractRegionMap.doPutOnRegionEntry calls both of the following: > setOldValueForDelta(putInfo); > setOldValueInEvent(putInfo); > I think the logic in setOldValueForDelta can be moved into setOldValueInEvent. > As it is now we might fetch the old value twice when using delta. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-5059) AbstractRegionMap.basicPut should be refactored
[ https://issues.apache.org/jira/browse/GEODE-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-5059: -- Labels: AbstractRegionMap (was: ) > AbstractRegionMap.basicPut should be refactored > --- > > Key: GEODE-5059 > URL: https://issues.apache.org/jira/browse/GEODE-5059 > Project: Geode > Issue Type: Improvement > Components: regions >Reporter: Darrel Schneider >Priority: Major > Labels: AbstractRegionMap > > Recently the AbstractRegionMap.basicPut method was refactored into many > smaller methods that all take an instance of RegionMapPutContext. > These methods should be moved to another class (it could be named > RegionMapPut) and the RegionMapPutContext could go away since the > RegionMapPut instance could keep all the state of the put operation. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-5063) AbstractRegionMap.basicPut may get old value more than once
[ https://issues.apache.org/jira/browse/GEODE-5063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-5063: -- Labels: arm (was: ) > AbstractRegionMap.basicPut may get old value more than once > --- > > Key: GEODE-5063 > URL: https://issues.apache.org/jira/browse/GEODE-5063 > Project: Geode > Issue Type: Improvement > Components: regions >Reporter: Darrel Schneider >Priority: Major > Labels: arm > > AbstractRegionMap.doPutOnRegionEntry calls both of the following: > setOldValueForDelta(putInfo); > setOldValueInEvent(putInfo); > I think the logic in setOldValueForDelta can be moved into setOldValueInEvent. > As it is now we might fetch the old value twice when using delta. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-5062) AbstractRegionMap.basicPut does not use lastModified parameter
[ https://issues.apache.org/jira/browse/GEODE-5062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-5062: -- Labels: AbstractRegionMap (was: ) > AbstractRegionMap.basicPut does not use lastModified parameter > -- > > Key: GEODE-5062 > URL: https://issues.apache.org/jira/browse/GEODE-5062 > Project: Geode > Issue Type: Improvement > Components: regions >Reporter: Darrel Schneider >Priority: Major > Labels: AbstractRegionMap > > The callers of AbstractRegionMap.basicPut do lots of work to pass the > lastModifiedTime to it. > Most, if not all callers, seems to set the lastModifiedTime parameter to > zero. If none of them ever set it to a non-zero value then the parameter > should be removed. > basicPut itself does not pass the parameter on to LocalRegion.basicPutPart2. > So it never uses the parameter. If we do have callers with non-zero then > basicPut needs to pass the parameter to basicPutPart2. If we don't then we > should simplify the code by removing the parameter. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-2375) GemFireException should not inherit from RuntimeException
[ https://issues.apache.org/jira/browse/GEODE-2375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-2375: -- Labels: geode2 (was: ) > GemFireException should not inherit from RuntimeException > - > > Key: GEODE-2375 > URL: https://issues.apache.org/jira/browse/GEODE-2375 > Project: Geode > Issue Type: Improvement > Components: core, general >Reporter: Galen O'Sullivan >Priority: Major > Labels: geode2 > > {{GemFireException}} inherits from {{RuntimeException}}, which means that the > majority of exceptions in Geode are unchecked. This means that we don't have > the type system helping us to check potential failure conditions of our code, > and it's not clear which functions may throw exceptions as a part of their > normal failure modes -- for example, {{ReplyException}} has a > {{handleAsUnexpected}} method that seems to indicate that a normal > {{ReplyException}} is not unexpected -- but that's not what the type > inheritance says. {{GemFireException}} accounts for most of the exceptions in > the codebase. > Even if we were to convert most of the existing instances of > {{GemFireException}} to {{GemFireRuntimeException}}, developers (especially > new ones) would still be tempted to use {{GemFireException}} for new > exceptions. > Perhaps the best way to solve this (if we want all our exceptions to inherit > from a central exception type, which I'm not entirely sold on) would be to > create a new {{GeodeUncheckedException}} and {{GeodeCheckedException}}, and > deprecate both kinds of {{GemFireException}}? Then we could convert old > exceptions as time permits. > There's a significant amount of work involved here whatever way we decide to > change it. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
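[Editor's note] The ticket above hinges on the checked vs. unchecked distinction in Java. A minimal sketch of that distinction follows; GeodeCheckedException and GeodeUncheckedException are the hypothetical names floated in the ticket, not real Geode classes:

```java
// Checked exceptions extend Exception; the compiler forces callers to
// catch or declare them. Unchecked exceptions extend RuntimeException
// and carry no compile-time signal at all.
class GeodeCheckedException extends Exception {
    GeodeCheckedException(String message) { super(message); }
}

class GeodeUncheckedException extends RuntimeException {
    GeodeUncheckedException(String message) { super(message); }
}

public class ExceptionDemo {
    // Checked: callers must handle or re-declare this failure mode.
    static void mayFailChecked() throws GeodeCheckedException {
        throw new GeodeCheckedException("expected failure mode");
    }

    // Unchecked: nothing in the signature warns callers this can fail.
    static void mayFailUnchecked() {
        throw new GeodeUncheckedException("surprise at runtime");
    }

    static String run() {
        StringBuilder outcome = new StringBuilder();
        try {
            mayFailChecked();
        } catch (GeodeCheckedException e) {
            outcome.append("checked-handled");
        }
        try {
            mayFailUnchecked();
        } catch (GeodeUncheckedException e) {
            outcome.append(",unchecked-caught");
        }
        return outcome.toString();
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```

This is why the ticket notes that most Geode exceptions being unchecked leaves the type system unable to flag a function's normal failure modes.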
[jira] [Updated] (GEODE-5054) I want to be able to use gfsh jdbc list and describe commands if CC is disabled
[ https://issues.apache.org/jira/browse/GEODE-5054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-5054: -- Description: GIVEN I am using JDBC Connector WHEN cluster config is disabled (intentionally or not) THEN I should still be able to list and describe these mappings or connections. Their behavior should be consistent with other jdbc commands. list and describe are only looking at cluster config for that information. Background This came up in a conversation where, for example, what would happen in the future if an SDG user disabled cluster config, would all jdbc commands work? SDG was just the hypothetical scenario in this conversation that revealed there is an inconsistency in how list and describe work compared to other jdbc commands. was: GIVEN I am using SDG WHEN Creating or updating JDBC mappings or connections THEN I should be able to list and describe these mappings or connections Background JDBC gfsh create, alter and destroy will work as expected in SDG, but not describe and list since they are only looking into CC for that information. If we still want list and desc to work when cluster config service is disabled, can we require that the command would have a "--member" option, i.e. which member you want this information from. > I want to be able to use gfsh jdbc list and describe commands if CC is disabled > > > Key: GEODE-5054 > URL: https://issues.apache.org/jira/browse/GEODE-5054 > Project: Geode > Issue Type: Bug > Components: extensions >Reporter: Fred Krone >Priority: Major > Labels: jdbc_connector > > GIVEN > I am using JDBC Connector > WHEN > cluster config is disabled (intentionally or not) > THEN > I should still be able to list and describe these mappings or connections. > Their behavior should be consistent with other jdbc commands. list and > describe are only looking at cluster config for that information. 
> > Background > This came up in a conversation where, for example, what would happen in the > future if an SDG user disabled cluster config, would all jdbc commands work? > SDG was just the hypothetical scenario in this conversation that revealed > there is an inconsistency in how list and describe work compared to other > jdbc commands. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-5054) I want to be able to use gfsh jdbc list and describe commands if CC is disabled
[ https://issues.apache.org/jira/browse/GEODE-5054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-5054: -- Summary: I want to be able to use gfsh jdbc list and describe commands if CC is disabled (was: As a SDG user I want to be able to list and describe for my JDBC settings in gfsh) > I want to be able to use gfsh jdbc list and describe commands if CC is disabled > > > Key: GEODE-5054 > URL: https://issues.apache.org/jira/browse/GEODE-5054 > Project: Geode > Issue Type: Bug > Components: extensions >Reporter: Fred Krone >Priority: Major > Labels: jdbc_connector > > GIVEN > I am using SDG > WHEN > Creating or updating JDBC mappings or connections > THEN > I should be able to list and describe these mappings or connections > > Background > JDBC gfsh create, alter and destroy will work as expected in SDG, but not > describe and list since they are only looking into CC for that information. > If we still want list and desc to work when cluster config service is > disabled, can we require that the command would have a "--member" option, > i.e. which member you want this information from. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (GEODE-5054) As a SDG user I want to be able to list and describe for my JDBC settings in gfsh
[ https://issues.apache.org/jira/browse/GEODE-5054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16447001#comment-16447001 ] Fred Krone commented on GEODE-5054: --- Yeah, this is just a rough placeholder story that needs refining. The background is: There is a proposed effort to convert all the extension commands to use the public cluster configuration service to retrieve/update the cluster configuration, and while going through the jdbc commands, we are making those assumptions. Please let us know if these are OK or not. And one of the concerns was (maybe falsely) "SDG users tend to disable CC. If spring users disable CC then down the line alter and list wouldn't be able to work." Keeping in mind that JDBC Connector is not currently supported in SDG (possibly never), I think the goal here was still to have the commands behave consistently with CC disabled. I'll reword the description. > As a SDG user I want to be able to list and describe for my JDBC settings in > gfsh > - > > Key: GEODE-5054 > URL: https://issues.apache.org/jira/browse/GEODE-5054 > Project: Geode > Issue Type: Bug > Components: extensions >Reporter: Fred Krone >Priority: Major > Labels: jdbc_connector > > GIVEN > I am using SDG > WHEN > Creating or updating JDBC mappings or connections > THEN > I should be able to list and describe these mappings or connections > > Background > JDBC gfsh create, alter and destroy will work as expected in SDG, but not > describe and list since they are only looking into CC for that information. > If we still want list and desc to work when cluster config service is > disabled, can we require that the command would have a "--member" option, > i.e. which member you want this information from. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-4707) Have jdbc connector look for configured jndi binding
[ https://issues.apache.org/jira/browse/GEODE-4707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-4707: -- Labels: jdbc_connector (was: jdbc) > Have jdbc connector look for configured jndi binding > -- > > Key: GEODE-4707 > URL: https://issues.apache.org/jira/browse/GEODE-4707 > Project: Geode > Issue Type: Improvement > Components: extensions >Reporter: Fred Krone >Priority: Major > Labels: jdbc_connector > > A user should be able to create a datasource using the gfsh command {{create > jndi-binding }} > Then a datasource will be created with the supplied options and the binding > will be created without the user having to restart the existing server(s) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-5057) Remove 'experimental' from JDBC Connector code
[ https://issues.apache.org/jira/browse/GEODE-5057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-5057: -- Component/s: (was: regions) > Remove 'experimental' from JDBC Connector code > -- > > Key: GEODE-5057 > URL: https://issues.apache.org/jira/browse/GEODE-5057 > Project: Geode > Issue Type: Task > Components: extensions >Affects Versions: 1.6.0 >Reporter: Anilkumar Gingade >Priority: Major > Labels: pull-request-available > Time Spent: 20m > Remaining Estimate: 0h > > Remove 'experimental' from JDBC Connector code. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-5054) As a SDG user I want to be able to list and describe for my JDBC settings in gfsh
[ https://issues.apache.org/jira/browse/GEODE-5054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-5054: -- Component/s: (was: regions) extensions > As a SDG user I want to be able to list and describe for my JDBC settings in > gfsh > - > > Key: GEODE-5054 > URL: https://issues.apache.org/jira/browse/GEODE-5054 > Project: Geode > Issue Type: Bug > Components: extensions >Reporter: Fred Krone >Priority: Major > Labels: jdbc_connector > > GIVEN > I am using SDG > WHEN > Creating or updating JDBC mappings or connections > THEN > I should be able to list and describe these mappings or connections > > Background > JDBC gfsh create, alter and destroy will work as expected in SDG, but not > describe and list since they are only looking into CC for that information. > If we still want list and desc to work when cluster config service is > disabled, can we require that the command would have a "--member" option, > i.e. which member you want this information from. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-5054) As a SDG user I want to be able to list and describe for my JDBC settings in gfsh
[ https://issues.apache.org/jira/browse/GEODE-5054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-5054: -- Labels: jdbc_connector (was: ) > As a SDG user I want to be able to list and describe for my JDBC settings in > gfsh > - > > Key: GEODE-5054 > URL: https://issues.apache.org/jira/browse/GEODE-5054 > Project: Geode > Issue Type: Bug > Components: regions >Reporter: Fred Krone >Priority: Major > Labels: jdbc_connector > > GIVEN > I am using SDG > WHEN > Creating or updating JDBC mappings or connections > THEN > I should be able to list and describe these mappings or connections > > Background > JDBC gfsh create, alter and destroy will work as expected in SDG, but not > describe and list since they are only looking into CC for that information. > If we still want list and desc to work when cluster config service is > disabled, can we require that the command would have a "--member" option, > i.e. which member you want this information from. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-5113) Update docs for EvictionAttributes.getMaximum() no longer throwing UnsupportedOperationException for LRU Heap
[ https://issues.apache.org/jira/browse/GEODE-5113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-5113: -- Summary: Update docs for EvictionAttributes.getMaximum() no longer throwing UnsupportedOperationException for LRU Heap (was: EvictionAttributes.getMaximum() no longer throws UnsupportedOperationException for LRU Heap) > Update docs for EvictionAttributes.getMaximum() no longer throwing > UnsupportedOperationException for LRU Heap > - > > Key: GEODE-5113 > URL: https://issues.apache.org/jira/browse/GEODE-5113 > Project: Geode > Issue Type: Bug > Components: eviction >Reporter: Fred Krone >Priority: Major > > TL;DR: I think we can just document this change. I didn't have much > time to think about it earlier today but thinking about it now I can see why > we changed this. > Previously, EvictionAttributes.getMaximum() threw an > UnsupportedOperationException if the user tried to configure a Maximum on an > LRU Heap Eviction Policy (Apache Geode 1.4). Now Geode (and by extension, > GemFire) will just silently return 0. > If this change is intentional, the docs should be updated accordingly. > [http://geode.apache.org/releases/latest/javadoc/org/apache/geode/cache/EvictionAttributes.html#getMaximum--] > > in 1.4 > [https://github.com/apache/geode/blob/rel/v1.4.0/geode-core/src/main/java/org/apache/geode/internal/cache/EvictionAttributesImpl.java#L138-L144] > > in 1.5 > [https://github.com/apache/geode/blob/rel/v1.5.0/geode-core/src/main/java/org/apache/geode/internal/cache/EvictionAttributesImpl.java#L95-L101] > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-5113) EvictionAttributes.getMaximum() no longer throws UnsupportedOperationException for LRU Heap
[ https://issues.apache.org/jira/browse/GEODE-5113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-5113: -- Description: Previously, EvictionAttributes.getMaximum() threw an UnsupportedOperationException if the user tried to configure a Maximum on an LRU Heap Eviction Policy (Apache Geode 1.4). Now Geode (and by extension, GemFire) will just silently return 0. If this change is intentional, the docs should be updated accordingly. We should avoid API changes like this, however, so I think we should revert it for now if possible. If not, I think we can update the docs and move on. http://geode.apache.org/releases/latest/javadoc/org/apache/geode/cache/EvictionAttributes.html#getMaximum-- in 1.4 [https://github.com/apache/geode/blob/rel/v1.4.0/geode-core/src/main/java/org/apache/geode/internal/cache/EvictionAttributesImpl.java#L138-L144] in 1.5 [https://github.com/apache/geode/blob/rel/v1.5.0/geode-core/src/main/java/org/apache/geode/internal/cache/EvictionAttributesImpl.java#L95-L101] was: Previously, EvictionAttributes.getMaximum() threw an UnsupportedOperationException if the user tried to configure a Maximum on an LRU Heap Eviction Policy (Apache Geode 1.4). Now Geode (and by extension, GemFire) will just silently return 0. 
in 1.4 [https://github.com/apache/geode/blob/rel/v1.4.0/geode-core/src/main/java/org/apache/geode/internal/cache/EvictionAttributesImpl.java#L138-L144] in 1.5 [https://github.com/apache/geode/blob/rel/v1.5.0/geode-core/src/main/java/org/apache/geode/internal/cache/EvictionAttributesImpl.java#L95-L101] > EvictionAttributes.getMaximum() no longer throws > UnsupportedOperationException for LRU Heap > --- > > Key: GEODE-5113 > URL: https://issues.apache.org/jira/browse/GEODE-5113 > Project: Geode > Issue Type: Bug > Components: eviction >Reporter: Fred Krone >Priority: Major > > > Previously, EvictionAttributes.getMaximum() threw an > UnsupportedOperationException if the user tried to configure a Maximum on an > LRU Heap Eviction Policy (Apache Geode 1.4). Now Geode (and by extension, > GemFire) will just silently return 0. > If this change is intentional, the docs should be updated accordingly. We > should avoid API changes like this, however, so I think we should revert it for > now if possible. If not, I think we can update the docs and move on. > http://geode.apache.org/releases/latest/javadoc/org/apache/geode/cache/EvictionAttributes.html#getMaximum-- > > in 1.4 > [https://github.com/apache/geode/blob/rel/v1.4.0/geode-core/src/main/java/org/apache/geode/internal/cache/EvictionAttributesImpl.java#L138-L144] > > in 1.5 > [https://github.com/apache/geode/blob/rel/v1.5.0/geode-core/src/main/java/org/apache/geode/internal/cache/EvictionAttributesImpl.java#L95-L101] > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
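[Editor's note] The behavior change the ticket describes (1.4: throw; 1.5: silently return 0) can be sketched in plain Java. These methods are illustrative stand-ins for the two versions described in the ticket, not the actual Geode EvictionAttributesImpl code, and the 1000 entry-count limit is a made-up example value:

```java
// Contrast of the two API designs discussed in GEODE-5113: failing loudly
// when a value is meaningless vs. silently returning a default.
public class MaximumDemo {
    // 1.4-style: heap-LRU eviction has no meaningful maximum, so asking
    // for one fails loudly with UnsupportedOperationException.
    static int getMaximumV14(boolean isLruHeap) {
        if (isLruHeap) {
            throw new UnsupportedOperationException(
                "LRU heap eviction does not use a maximum");
        }
        return 1000; // e.g. a configured entry-count limit
    }

    // 1.5-style: the same call silently reports 0 instead of throwing.
    static int getMaximumV15(boolean isLruHeap) {
        return isLruHeap ? 0 : 1000;
    }

    public static void main(String[] args) {
        try {
            getMaximumV14(true);
        } catch (UnsupportedOperationException e) {
            System.out.println("1.4: threw " + e.getMessage());
        }
        System.out.println("1.5: returned " + getMaximumV15(true));
    }
}
```

The ticket's concern is exactly this contrast: the silent-0 form is friendlier but hides a misconfiguration the old form surfaced, which is why it reads as an unintended API change.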
[jira] [Updated] (GEODE-5113) EvictionAttributes.getMaximum() no longer throws UnsupportedOperationException for LRU Heap
[ https://issues.apache.org/jira/browse/GEODE-5113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-5113: -- Summary: EvictionAttributes.getMaximum() no longer throws UnsupportedOperationException for LRU Heap (was: EvictionAttributes.getMaximum() no longer throws UnsupportedOperationException for LRU Heap Eviction) > EvictionAttributes.getMaximum() no longer throws > UnsupportedOperationException for LRU Heap > --- > > Key: GEODE-5113 > URL: https://issues.apache.org/jira/browse/GEODE-5113 > Project: Geode > Issue Type: Bug > Components: eviction >Reporter: Fred Krone >Priority: Major > > > Previously, the EvictionAttributes.getMaximum() used to throw an > UnsupportedOperationException if the user tried to configure a Maximum on an > LRU Heap Eviction Policy (Apache Geode 1.4). Well, now, Geode (and by > extension, GemFire) will just silently return 0 (Apache Geode 1.5). > > in 1.4 > [https://github.com/apache/geode/blob/rel/v1.4.0/geode-core/src/main/java/org/apache/geode/internal/cache/EvictionAttributesImpl.java#L138-L144] > > in 1.5 > https://github.com/apache/geode/blob/rel/v1.5.0/geode-core/src/main/java/org/apache/geode/internal/cache/EvictionAttributesImpl.java#L95-L101 > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-5113) EvictionAttributes.getMaximum() no longer throws UnsupportedOperationException for LRU Heap
[ https://issues.apache.org/jira/browse/GEODE-5113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-5113: -- Description: Previously, the EvictionAttributes.getMaximum() used to throw an UnsupportedOperationException if the user tried to configure a Maximum on an LRU Heap Eviction Policy (Apache Geode 1.4). Now Geode (and by extension, GemFire) will just silently return 0. in 1.4 [https://github.com/apache/geode/blob/rel/v1.4.0/geode-core/src/main/java/org/apache/geode/internal/cache/EvictionAttributesImpl.java#L138-L144] in 1.5 [https://github.com/apache/geode/blob/rel/v1.5.0/geode-core/src/main/java/org/apache/geode/internal/cache/EvictionAttributesImpl.java#L95-L101] was: Previously, the EvictionAttributes.getMaximum() used to throw an UnsupportedOperationException if the user tried to configure a Maximum on an LRU Heap Eviction Policy (Apache Geode 1.4). Well, now, Geode (and by extension, GemFire) will just silently return 0 (Apache Geode 1.5). in 1.4 [https://github.com/apache/geode/blob/rel/v1.4.0/geode-core/src/main/java/org/apache/geode/internal/cache/EvictionAttributesImpl.java#L138-L144] in 1.5 https://github.com/apache/geode/blob/rel/v1.5.0/geode-core/src/main/java/org/apache/geode/internal/cache/EvictionAttributesImpl.java#L95-L101 > EvictionAttributes.getMaximum() no longer throws > UnsupportedOperationException for LRU Heap > --- > > Key: GEODE-5113 > URL: https://issues.apache.org/jira/browse/GEODE-5113 > Project: Geode > Issue Type: Bug > Components: eviction >Reporter: Fred Krone >Priority: Major > > > Previously, the EvictionAttributes.getMaximum() used to throw an > UnsupportedOperationException if the user tried to configure a Maximum on an > LRU Heap Eviction Policy (Apache Geode 1.4). Now Geode (and by extension, > GemFire) will just silently return 0. 
> > in 1.4 > [https://github.com/apache/geode/blob/rel/v1.4.0/geode-core/src/main/java/org/apache/geode/internal/cache/EvictionAttributesImpl.java#L138-L144] > > in 1.5 > [https://github.com/apache/geode/blob/rel/v1.5.0/geode-core/src/main/java/org/apache/geode/internal/cache/EvictionAttributesImpl.java#L95-L101] > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
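To make the contract change concrete, here is a minimal pure-Java sketch contrasting the two behaviors. These are stand-in classes, not Geode's actual `EvictionAttributesImpl` (see the 1.4 and 1.5 source links above for the real code): the 1.4-style accessor fails loudly for heap LRU, while the 1.5-style accessor silently returns a sentinel.

```java
// Stand-in types only; Geode's real implementation is in the
// EvictionAttributesImpl links above.
enum EvictionAlgorithm { LRU_ENTRY, LRU_HEAP }

class ThrowingAttributes {
    private final EvictionAlgorithm algorithm;
    private final int maximum;

    ThrowingAttributes(EvictionAlgorithm algorithm, int maximum) {
        this.algorithm = algorithm;
        this.maximum = maximum;
    }

    // 1.4-style contract: heap LRU has no meaningful maximum, so fail loudly.
    int getMaximum() {
        if (algorithm == EvictionAlgorithm.LRU_HEAP) {
            throw new UnsupportedOperationException(
                "LRU heap eviction does not support a maximum");
        }
        return maximum;
    }
}

class SilentAttributes {
    private final EvictionAlgorithm algorithm;
    private final int maximum;

    SilentAttributes(EvictionAlgorithm algorithm, int maximum) {
        this.algorithm = algorithm;
        this.maximum = maximum;
    }

    // 1.5-style contract: return 0 as a sentinel instead of throwing.
    int getMaximum() {
        return algorithm == EvictionAlgorithm.LRU_HEAP ? 0 : maximum;
    }
}
```

The breaking-change concern in the ticket follows directly: callers that previously caught the `UnsupportedOperationException` now read a silent 0 and never notice.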
[jira] [Updated] (GEODE-5113) EvictionAttributes.getMaximum() no longer throws UnsupportedOperationException for LRU Heap Eviction
[ https://issues.apache.org/jira/browse/GEODE-5113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-5113: -- Description: Previously, the EvictionAttributes.getMaximum() used to throw an UnsupportedOperationException if the user tried to configure a Maximum on an LRU Heap Eviction Policy (Apache Geode 1.4). Well, now, Geode (and by extension, GemFire) will just silently return 0 (Apache Geode 1.5). in 1.4 [https://github.com/apache/geode/blob/rel/v1.4.0/geode-core/src/main/java/org/apache/geode/internal/cache/EvictionAttributesImpl.java#L138-L144] in 1.5 https://github.com/apache/geode/blob/rel/v1.5.0/geode-core/src/main/java/org/apache/geode/internal/cache/EvictionAttributesImpl.java#L95-L101 was: Previously, the EvictionAttributes.getMaximum() used to throw an UnsupportedOperationException if the user tried to configure a Maximum on an LRU Heap Eviction Policy (Apache Geode 1.4). Well, now, Geode (and by extension, GemFire) will just silently return 0 (Apache Geode 1.5). > EvictionAttributes.getMaximum() no longer throws > UnsupportedOperationException for LRU Heap Eviction > > > Key: GEODE-5113 > URL: https://issues.apache.org/jira/browse/GEODE-5113 > Project: Geode > Issue Type: Bug > Components: eviction >Reporter: Fred Krone >Priority: Major > > > Previously, the EvictionAttributes.getMaximum() used to throw an > UnsupportedOperationException if the user tried to configure a Maximum on an > LRU Heap Eviction Policy (Apache Geode 1.4). Well, now, Geode (and by > extension, GemFire) will just silently return 0 (Apache Geode 1.5). > > in 1.4 > [https://github.com/apache/geode/blob/rel/v1.4.0/geode-core/src/main/java/org/apache/geode/internal/cache/EvictionAttributesImpl.java#L138-L144] > > in 1.5 > https://github.com/apache/geode/blob/rel/v1.5.0/geode-core/src/main/java/org/apache/geode/internal/cache/EvictionAttributesImpl.java#L95-L101 > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (GEODE-5113) EvictionAttributes.getMaximum() no longer throws UnsupportedOperationException for LRU Heap Eviction
Fred Krone created GEODE-5113: - Summary: EvictionAttributes.getMaximum() no longer throws UnsupportedOperationException for LRU Heap Eviction Key: GEODE-5113 URL: https://issues.apache.org/jira/browse/GEODE-5113 Project: Geode Issue Type: Bug Components: eviction Reporter: Fred Krone Previously, the EvictionAttributes.getMaximum() used to throw an UnsupportedOperationException if the user tried to configure a Maximum on an LRU Heap Eviction Policy (Apache Geode 1.4). Well, now, Geode (and by extension, GemFire) will just silently return 0 (Apache Geode 1.5). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (GEODE-5054) As an SDG user I want to be able to list and describe my JDBC settings in gfsh
Fred Krone created GEODE-5054: - Summary: As an SDG user I want to be able to list and describe my JDBC settings in gfsh Key: GEODE-5054 URL: https://issues.apache.org/jira/browse/GEODE-5054 Project: Geode Issue Type: Improvement Components: regions Reporter: Fred Krone GIVEN I am using SDG WHEN Creating or updating JDBC mappings or connections THEN I should be able to list and describe these mappings or connections Background JDBC gfsh create, alter and destroy will work as expected in SDG, but not describe and list since they are only looking into CC for that information. If we still want list and desc to work when cluster config service is disabled, can we require that the command would have a "--member" option, i.e. which member you want this information from. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-5054) As an SDG user I want to be able to list and describe my JDBC settings in gfsh
[ https://issues.apache.org/jira/browse/GEODE-5054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-5054: -- Issue Type: Bug (was: Improvement) > As an SDG user I want to be able to list and describe my JDBC settings in > gfsh > - > > Key: GEODE-5054 > URL: https://issues.apache.org/jira/browse/GEODE-5054 > Project: Geode > Issue Type: Bug > Components: regions >Reporter: Fred Krone >Priority: Major > > GIVEN > I am using SDG > WHEN > Creating or updating JDBC mappings or connections > THEN > I should be able to list and describe these mappings or connections > > Background > JDBC gfsh create, alter and destroy will work as expected in SDG, but not > describe and list since they are only looking into CC for that information. > If we still want list and desc to work when cluster config service is > disabled, can we require that the command would have a "--member" option, > i.e. which member you want this information from. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-5039) EvictionAttributesMutator.setMaximum does not work
[ https://issues.apache.org/jira/browse/GEODE-5039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-5039: -- Description: EvictionAttributesMutator.setMaximum does not change the lru count. Given I am configuring eviction When setting EvictionAttributesMutator.setMaximum Then the lru count should update accordingly was:EvictionAttributesMutator.setMaximum does not change the lru count. > EvictionAttributesMutator.setMaximum does not work > -- > > Key: GEODE-5039 > URL: https://issues.apache.org/jira/browse/GEODE-5039 > Project: Geode > Issue Type: Bug > Components: eviction >Affects Versions: 1.5.0 >Reporter: Eric Shu >Assignee: Eric Shu >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > EvictionAttributesMutator.setMaximum does not change the lru count. > > Given I am configuring eviction > When setting EvictionAttributesMutator.setMaximum > Then the lru count should update accordingly -- This message was sent by Atlassian JIRA (v7.6.3#76005)
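The Given/When/Then above can be sketched with hypothetical stand-in classes (not Geode's actual `EvictionAttributesMutator` internals): the mutator must push the new maximum through to the live eviction controller, not merely record it locally.

```java
// Hypothetical stand-ins illustrating the expected contract.
class LruController {
    private int lruLimit;

    LruController(int initialLimit) {
        this.lruLimit = initialLimit;
    }

    int getLruLimit() {
        return lruLimit;
    }

    void setLruLimit(int limit) {
        this.lruLimit = limit;
    }
}

class AttributesMutator {
    private final LruController controller;

    AttributesMutator(LruController controller) {
        this.controller = controller;
    }

    // The behavior the ticket asks for: setMaximum updates the live lru count.
    void setMaximum(int maximum) {
        controller.setLruLimit(maximum);
    }
}
```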
[jira] [Updated] (GEODE-4245) Support for Tombstone GC setting at region level
[ https://issues.apache.org/jira/browse/GEODE-4245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-4245: -- Component/s: docs > Support for Tombstone GC setting at region level > > > Key: GEODE-4245 > URL: https://issues.apache.org/jira/browse/GEODE-4245 > Project: Geode > Issue Type: Improvement > Components: docs, regions >Reporter: Fred Krone >Priority: Major > > The Tombstone GC setting is at cache level. Which is applied across all the > regions in the cache. > Having these at region gives a better control on managing Tombstone in the > system. They can be configured based on their usage and consistency > requirement. > Also, Tombstone GC settings are time based (default 10minutes). Adding > Tombstone GC configuration based on number of tombstones will also help > managing tombstones and its impact on memory. > The proposal is to: > # Have Tombstone GC setting at region level. > # Add count based Tombstone GC setting. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (GEODE-4708) Users need the ability to tune tombstone GC on a per region basis
[ https://issues.apache.org/jira/browse/GEODE-4708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone resolved GEODE-4708. --- Resolution: Duplicate Same as https://issues.apache.org/jira/browse/GEODE-4245 > Users need the ability to tune tombstone GC on a per region basis > - > > Key: GEODE-4708 > URL: https://issues.apache.org/jira/browse/GEODE-4708 > Project: Geode > Issue Type: Improvement > Components: docs, regions >Reporter: Fred Krone >Priority: Major > > We need to reduce memory requirements for a Lucene index. > > Most of the extra memory is consumed by tombstones created by Apache Lucene > generating many temporary 'files' (up to 10 per entry). > > We need a way to provide accelerated GC for these tombstones. It has been > suggested that, rather than doing something specific for Lucene, a more > general solution is to add the ability to tune GC on a per region basis. > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-4245) Support for Tombstone GC setting at region level
[ https://issues.apache.org/jira/browse/GEODE-4245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-4245: -- Labels: (was: geode-150) > Support for Tombstone GC setting at region level > > > Key: GEODE-4245 > URL: https://issues.apache.org/jira/browse/GEODE-4245 > Project: Geode > Issue Type: Improvement > Components: docs, regions >Reporter: Fred Krone >Priority: Major > > The Tombstone GC setting is at cache level. Which is applied across all the > regions in the cache. > Having these at region gives a better control on managing Tombstone in the > system. They can be configured based on their usage and consistency > requirement. > Also, Tombstone GC settings are time based (default 10minutes). Adding > Tombstone GC configuration based on number of tombstones will also help > managing tombstones and its impact on memory. > The proposal is to: > # Have Tombstone GC setting at region level. > # Add count based Tombstone GC setting. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (GEODE-4966) Add Security Permissions for JDBC gfsh commands
[ https://issues.apache.org/jira/browse/GEODE-4966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431351#comment-16431351 ] Fred Krone commented on GEODE-4966: --- We've updated the two wiki pages. We will assign to docs once we remove experimental tag. > Add Security Permissions for JDBC gfsh commands > --- > > Key: GEODE-4966 > URL: https://issues.apache.org/jira/browse/GEODE-4966 > Project: Geode > Issue Type: Task > Components: docs, regions >Reporter: Barbara Pruijn >Priority: Major > > Please make sure security permissions are documented for the jdbc commands on > these pages: > [http://geode.apache.org/docs/guide/14/managing/security/implementing_authorization.html] > [https://cwiki.apache.org/confluence/display/GEODE/Geode+Integrated+Security] > [https://cwiki.apache.org/confluence/display/GEODE/Finer%20grained%20security] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (GEODE-4947) Add tests that use MySQL and Postgres for JDBC connector
[ https://issues.apache.org/jira/browse/GEODE-4947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone resolved GEODE-4947. --- Resolution: Fixed Fix Version/s: 1.6.0 > Add tests that use MySQL and Postgres for JDBC connector > - > > Key: GEODE-4947 > URL: https://issues.apache.org/jira/browse/GEODE-4947 > Project: Geode > Issue Type: Test > Components: regions >Reporter: Nick Reich >Assignee: Nick Reich >Priority: Major > Labels: pull-request-available > Fix For: 1.6.0 > > Time Spent: 20m > Remaining Estimate: 0h > > Current tests only validate that the in-memory Derby database is compatible with > the connector. Need to add tests that validate support for popular databases. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (GEODE-4833) JdbcWriter and JdbcAsyncWriter may fail to write null fields to database
[ https://issues.apache.org/jira/browse/GEODE-4833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone resolved GEODE-4833. --- Resolution: Fixed > JdbcWriter and JdbcAsyncWriter may fail to write null fields to database > > > Key: GEODE-4833 > URL: https://issues.apache.org/jira/browse/GEODE-4833 > Project: Geode > Issue Type: Bug > Components: extensions, regions >Affects Versions: 1.4.0 >Reporter: Darrel Schneider >Assignee: Nick Reich >Priority: Major > Labels: pull-request-available > Fix For: 1.6.0 > > Time Spent: 0.5h > Remaining Estimate: 0h > > Both JdbcWriter and JdbcAsyncWriter end up calling the JDBC method > PreparedStatement.setObject with a value of "null" if the pdx field contains > "null". > This will work with jdbc drivers that support sending "non-typed Null" to the > backend database. > But some drivers do not support this and these puts will fail with a > SQLException. > For portability the jdbc connector should be changed to not pass "null" to > setObject without a type. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
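The portability fix described above can be sketched as follows. This is an illustrative helper, not the connector's actual code: when the PDX field is null, bind it with a typed `setNull` using the column's JDBC type, instead of `setObject(idx, null)`, which drivers without non-typed-null support reject.

```java
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Illustrative sketch of typed-null binding for JDBC portability.
final class NullSafeBinder {

    // Pure decision helper, split out so it can be checked without a live statement.
    static boolean usesTypedNull(Object value) {
        return value == null;
    }

    static void bind(PreparedStatement stmt, int index, Object value, int columnSqlType)
            throws SQLException {
        if (usesTypedNull(value)) {
            stmt.setNull(index, columnSqlType); // portable across drivers
        } else {
            stmt.setObject(index, value);
        }
    }
}
```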
[jira] [Resolved] (GEODE-4922) JDBC connector does not handle java.util.Date
[ https://issues.apache.org/jira/browse/GEODE-4922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone resolved GEODE-4922. --- Resolution: Fixed Fix Version/s: 1.6.0 > JDBC connector does not handle java.util.Date > - > > Key: GEODE-4922 > URL: https://issues.apache.org/jira/browse/GEODE-4922 > Project: Geode > Issue Type: Bug > Components: extensions, regions >Affects Versions: 1.4.0 >Reporter: Darrel Schneider >Assignee: Darrel Schneider >Priority: Major > Labels: pull-request-available > Fix For: 1.6.0 > > Time Spent: 20m > Remaining Estimate: 0h > > Pdx types can have java.util.Date fields or object fields that contain > java.util.Date. > When these are written that java.util.Date may cause a failure from the jdbc > driver if it does not support java.util.Date. Jdbc drivers must support > java.sql.Date, java.sql.Time, and java.sql.Timestamp but may not support > java.util.Date. > The JDBC connector should convert java.util.Date to one of the java.sql > interfaces using the data type of the column to determine which one to > convert it to. > When reading from jdbc back into geode if the pdx field is java.util.Date > then we should convert the java.sql.* instance to java.util.Date. It is also > possible we should do this conversion if the pdx field is of type object. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
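The conversion described above can be sketched with an illustrative helper (the class and method names are assumptions, not the connector's API): coerce `java.util.Date` to the `java.sql` type matching the target column before handing it to the driver, and widen back on read.

```java
import java.sql.Time;
import java.sql.Timestamp;
import java.sql.Types;
import java.util.Date;

// Illustrative sketch of the java.util.Date <-> java.sql.* coercion.
final class DateCoercion {

    static Object toJdbcValue(Date value, int columnSqlType) {
        if (value == null) {
            return null;
        }
        switch (columnSqlType) {
            case Types.DATE:
                return new java.sql.Date(value.getTime());
            case Types.TIME:
                return new Time(value.getTime());
            case Types.TIMESTAMP:
                return new Timestamp(value.getTime());
            default:
                return value; // let the driver decide for other column types
        }
    }

    // Reading back: java.sql.Date/Time/Timestamp all extend java.util.Date,
    // so widen them to a plain java.util.Date for a PDX field of that type.
    static Object toPdxValue(Object jdbcValue) {
        if (jdbcValue instanceof Date) {
            return new Date(((Date) jdbcValue).getTime());
        }
        return jdbcValue;
    }
}
```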
[jira] [Updated] (GEODE-4935) DiskRecoveryWithVersioningGiiRegressionTest does not appear to cover specified scenario
[ https://issues.apache.org/jira/browse/GEODE-4935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-4935: -- Priority: Minor (was: Major) > DiskRecoveryWithVersioningGiiRegressionTest does not appear to cover > specified scenario > --- > > Key: GEODE-4935 > URL: https://issues.apache.org/jira/browse/GEODE-4935 > Project: Geode > Issue Type: Bug > Components: persistence, tests >Reporter: Kirk Lund >Assignee: Kirk Lund >Priority: Minor > > Here's the original bug description: > Occurs when a crash and recovery from disk in one VM while another one is > spooling up its cache. The one that is starting attempts a GII from the one > that recovers and throws an exception when the initial image version tags do > not contain membership IDs. > Unfortunately DiskRecoveryWithVersioningGiiRegressionTest (the test > previously known as Bug45934DUnitTest) does not appear to cover the scenario > described in the original bug. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-4894) JDBC connector needs to work with databases that support mixed case identifiers
[ https://issues.apache.org/jira/browse/GEODE-4894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-4894: -- Fix Version/s: 1.6.0 > JDBC connector needs to work with databases that support mixed case > identifiers > - > > Key: GEODE-4894 > URL: https://issues.apache.org/jira/browse/GEODE-4894 > Project: Geode > Issue Type: Bug > Components: docs, extensions, regions >Affects Versions: 1.4.0 >Reporter: Darrel Schneider >Assignee: Darrel Schneider >Priority: Major > Labels: pull-request-available > Fix For: 1.6.0 > > Time Spent: 20m > Remaining Estimate: 0h > > Currently, the jdbc connector will convert a database column name to all > lower case if a region mapping for that column does not exist. But it is > possible that the database supports mixed case identifiers, in which case the > connector should honor the mixed case column name. > Also, when the connector is creating the prepared statement, the string it > builds does not quote the column names. They should be quoted > since the database may support mixed case quoted identifiers. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (GEODE-4894) JDBC connector needs to work with databases that support mixed case identifiers
[ https://issues.apache.org/jira/browse/GEODE-4894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone resolved GEODE-4894. --- Resolution: Fixed > JDBC connector needs to work with databases that support mixed case > identifiers > - > > Key: GEODE-4894 > URL: https://issues.apache.org/jira/browse/GEODE-4894 > Project: Geode > Issue Type: Bug > Components: docs, extensions, regions >Affects Versions: 1.4.0 >Reporter: Darrel Schneider >Assignee: Darrel Schneider >Priority: Major > Labels: pull-request-available > Time Spent: 20m > Remaining Estimate: 0h > > Currently, the jdbc connector will convert a database column name to all > lower case if a region mapping for that column does not exist. But it is > possible that the database supports mixed case identifiers, in which case the > connector should honor the mixed case column name. > Also, when the connector is creating the prepared statement, the string it > builds does not quote the column names. They should be quoted > since the database may support mixed case quoted identifiers. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (GEODE-4907) Rebalance refusal reason broken in analyze serializables
[ https://issues.apache.org/jira/browse/GEODE-4907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone resolved GEODE-4907. --- Resolution: Fixed Fix Version/s: 1.6.0 > Rebalance refusal reason broken in analyze serializables > > > Key: GEODE-4907 > URL: https://issues.apache.org/jira/browse/GEODE-4907 > Project: Geode > Issue Type: Bug > Components: regions >Affects Versions: 1.5.0 >Reporter: Nick Reich >Assignee: Nick Reich >Priority: Major > Labels: pull-request-available > Fix For: 1.6.0 > > Time Spent: 20m > Remaining Estimate: 0h > > Moved class was not properly updated in > sanctioned-geode-core-serializables.txt -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-4907) Rebalance refusal reason broken in analyze serializables
[ https://issues.apache.org/jira/browse/GEODE-4907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-4907: -- Affects Version/s: 1.5.0 > Rebalance refusal reason broken in analyze serializables > > > Key: GEODE-4907 > URL: https://issues.apache.org/jira/browse/GEODE-4907 > Project: Geode > Issue Type: Bug > Components: regions >Affects Versions: 1.5.0 >Reporter: Nick Reich >Assignee: Nick Reich >Priority: Major > Labels: pull-request-available > Fix For: 1.6.0 > > Time Spent: 20m > Remaining Estimate: 0h > > Moved class was not properly updated in > sanctioned-geode-core-serializables.txt -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-4769) Serialize region entry before putting in local cache
[ https://issues.apache.org/jira/browse/GEODE-4769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-4769: -- Fix Version/s: (was: 1.5.0) 1.6.0 > Serialize region entry before putting in local cache > > > Key: GEODE-4769 > URL: https://issues.apache.org/jira/browse/GEODE-4769 > Project: Geode > Issue Type: Improvement > Components: regions >Reporter: Kirk Lund >Assignee: Kirk Lund >Priority: Major > Labels: pull-request-available > Fix For: 1.6.0 > > Time Spent: 2h 20m > Remaining Estimate: 0h > > This will prevent cache inconsistency when dealing with key or value objects > that have broken serialization. > Test should create two members with a REPLICATE region. If one member > performs a put, but the serialization fails then the region entry currently > ends up existing in only the cache of the member that performed the put > locally. The change will prevent this from occurring. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
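The idea in the ticket can be sketched in plain Java (standard serialization as a stand-in for Geode's wire format; not Geode's implementation): serialize the value before touching any cache state, so a serialization failure leaves every member's cache unchanged.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

// Sketch: serialize first, store only on success.
final class SerializeFirstCache {
    private final Map<String, byte[]> store = new HashMap<>();

    void put(String key, Serializable value) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buffer)) {
            out.writeObject(value); // a broken writeObject fails here, before any store
        }
        store.put(key, buffer.toByteArray()); // reached only on successful serialization
    }

    boolean contains(String key) {
        return store.containsKey(key);
    }

    // A value with deliberately broken serialization, for demonstration.
    static class Broken implements Serializable {
        private void writeObject(java.io.ObjectOutputStream out) throws IOException {
            throw new IOException("broken serialization");
        }
    }
}
```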
[jira] [Updated] (GEODE-4266) JMX manager should not deserialize exceptions from server
[ https://issues.apache.org/jira/browse/GEODE-4266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-4266: -- Labels: jdbc (was: ) > JMX manager should not deserialize exceptions from server > - > > Key: GEODE-4266 > URL: https://issues.apache.org/jira/browse/GEODE-4266 > Project: Geode > Issue Type: Bug > Components: regions >Reporter: Fred Krone >Priority: Major > Labels: jdbc > > Steps > Created a jdbc-connection, jdbc-mapping, region in gfsh > Tried a .get from the region which attempted to read through with the > JdbcLoader > However it could not connect to the endpoint and threw an exception (fine). > But the exception wasn't handled in gfsh -- gfsh appeared hung. > [info 2018/01/09 13:42:41.338 PST s1 > tid=0x4b] Exception occurred: > java.lang.IllegalStateException: Could not connect ... > at > org.apache.geode.connectors.jdbc.internal.SqlHandler.getConnection(SqlHandler.java:59) > at > org.apache.geode.connectors.jdbc.internal.SqlHandler.read(SqlHandler.java:73) > at org.apache.geode.connectors.jdbc.JdbcLoader.load(JdbcLoader.java:52) > at > org.apache.geode.internal.cache.SearchLoadAndWriteProcessor.doLocalLoad(SearchLoadAndWriteProcessor.java:791) > at > org.apache.geode.internal.cache.SearchLoadAndWriteProcessor.load(SearchLoadAndWriteProcessor.java:602) > at > org.apache.geode.internal.cache.SearchLoadAndWriteProcessor.searchAndLoad(SearchLoadAndWriteProcessor.java:461) > at > org.apache.geode.internal.cache.SearchLoadAndWriteProcessor.doSearchAndLoad(SearchLoadAndWriteProcessor.java:177) > at > org.apache.geode.internal.cache.DistributedRegion.findUsingSearchLoad(DistributedRegion.java:2338) > at > org.apache.geode.internal.cache.DistributedRegion.findObjectInSystem(DistributedRegion.java:2208) > at > org.apache.geode.internal.cache.LocalRegion.nonTxnFindObject(LocalRegion.java:1477) > at > org.apache.geode.internal.cache.LocalRegionDataView.findObject(LocalRegionDataView.java:176) > at > 
org.apache.geode.internal.cache.LocalRegion.get(LocalRegion.java:1366) > at > org.apache.geode.internal.cache.LocalRegion.get(LocalRegion.java:1300) > at > org.apache.geode.internal.cache.LocalRegion.get(LocalRegion.java:1285) > at > org.apache.geode.internal.cache.AbstractRegion.get(AbstractRegion.java:320) > at > org.apache.geode.management.internal.cli.functions.DataCommandFunction.get(DataCommandFunction.java:443) > at > org.apache.geode.management.internal.cli.functions.DataCommandFunction.get(DataCommandFunction.java:151) > at > org.apache.geode.management.internal.cli.functions.DataCommandFunction.execute(DataCommandFunction.java:116) > at > org.apache.geode.internal.cache.MemberFunctionStreamingMessage.process(MemberFunctionStreamingMessage.java:187) > at > org.apache.geode.distributed.internal.DistributionMessage.scheduleAction(DistributionMessage.java:382) > at > org.apache.geode.distributed.internal.DistributionMessage$1.run(DistributionMessage.java:448) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at > org.apache.geode.distributed.internal.ClusterDistributionManager.runUntilShutdown(ClusterDistributionManager.java:1117) > at > org.apache.geode.distributed.internal.ClusterDistributionManager.access$000(ClusterDistributionManager.java:108) > at > org.apache.geode.distributed.internal.ClusterDistributionManager$9$1.run(ClusterDistributionManager.java:987) > at java.lang.Thread.run(Thread.java:745) > Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: > Communications link failure > The last packet sent successfully to the server was 0 milliseconds ago. The > driver has not received any packets from the server. 
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:423) > at com.mysql.jdbc.Util.handleNewInstance(Util.java:425) > at > com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:990) > at com.mysql.jdbc.MysqlIO.(MysqlIO.java:341) > at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2186) > at > com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2219) > at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2014) >
[jira] [Updated] (GEODE-4266) JMX manager should not deserialize exceptions from server
[ https://issues.apache.org/jira/browse/GEODE-4266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-4266: -- Component/s: (was: jmx) (was: gfsh) regions > JMX manager should not deserialize exceptions from server > - > > Key: GEODE-4266 > URL: https://issues.apache.org/jira/browse/GEODE-4266 > Project: Geode > Issue Type: Bug > Components: regions >Reporter: Fred Krone >Priority: Major > > Steps > Created a jdbc-connection, jdbc-mapping, region in gfsh > Tried a .get from the region which attempted to read through with the > JdbcLoader > However it could not connect to the endpoint and threw an exception (fine). > But the exception wasn't handled in gfsh -- gfsh appeared hung. > [info 2018/01/09 13:42:41.338 PST s1 > tid=0x4b] Exception occurred: > java.lang.IllegalStateException: Could not connect ... > at > org.apache.geode.connectors.jdbc.internal.SqlHandler.getConnection(SqlHandler.java:59) > at > org.apache.geode.connectors.jdbc.internal.SqlHandler.read(SqlHandler.java:73) > at org.apache.geode.connectors.jdbc.JdbcLoader.load(JdbcLoader.java:52) > at > org.apache.geode.internal.cache.SearchLoadAndWriteProcessor.doLocalLoad(SearchLoadAndWriteProcessor.java:791) > at > org.apache.geode.internal.cache.SearchLoadAndWriteProcessor.load(SearchLoadAndWriteProcessor.java:602) > at > org.apache.geode.internal.cache.SearchLoadAndWriteProcessor.searchAndLoad(SearchLoadAndWriteProcessor.java:461) > at > org.apache.geode.internal.cache.SearchLoadAndWriteProcessor.doSearchAndLoad(SearchLoadAndWriteProcessor.java:177) > at > org.apache.geode.internal.cache.DistributedRegion.findUsingSearchLoad(DistributedRegion.java:2338) > at > org.apache.geode.internal.cache.DistributedRegion.findObjectInSystem(DistributedRegion.java:2208) > at > org.apache.geode.internal.cache.LocalRegion.nonTxnFindObject(LocalRegion.java:1477) > at > org.apache.geode.internal.cache.LocalRegionDataView.findObject(LocalRegionDataView.java:176) > at > 
org.apache.geode.internal.cache.LocalRegion.get(LocalRegion.java:1366) > at > org.apache.geode.internal.cache.LocalRegion.get(LocalRegion.java:1300) > at > org.apache.geode.internal.cache.LocalRegion.get(LocalRegion.java:1285) > at > org.apache.geode.internal.cache.AbstractRegion.get(AbstractRegion.java:320) > at > org.apache.geode.management.internal.cli.functions.DataCommandFunction.get(DataCommandFunction.java:443) > at > org.apache.geode.management.internal.cli.functions.DataCommandFunction.get(DataCommandFunction.java:151) > at > org.apache.geode.management.internal.cli.functions.DataCommandFunction.execute(DataCommandFunction.java:116) > at > org.apache.geode.internal.cache.MemberFunctionStreamingMessage.process(MemberFunctionStreamingMessage.java:187) > at > org.apache.geode.distributed.internal.DistributionMessage.scheduleAction(DistributionMessage.java:382) > at > org.apache.geode.distributed.internal.DistributionMessage$1.run(DistributionMessage.java:448) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at > org.apache.geode.distributed.internal.ClusterDistributionManager.runUntilShutdown(ClusterDistributionManager.java:1117) > at > org.apache.geode.distributed.internal.ClusterDistributionManager.access$000(ClusterDistributionManager.java:108) > at > org.apache.geode.distributed.internal.ClusterDistributionManager$9$1.run(ClusterDistributionManager.java:987) > at java.lang.Thread.run(Thread.java:745) > Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: > Communications link failure > The last packet sent successfully to the server was 0 milliseconds ago. The > driver has not received any packets from the server. 
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:423) > at com.mysql.jdbc.Util.handleNewInstance(Util.java:425) > at > com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:990) > at com.mysql.jdbc.MysqlIO.(MysqlIO.java:341) > at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2186) > at > com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2219) > at
[jira] [Updated] (GEODE-4748) Geode put may result in inconsistent cache if network problem occurs or serialization of key or value class fails
[ https://issues.apache.org/jira/browse/GEODE-4748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-4748: -- Component/s: (was: regions) serialization > Geode put may result in inconsistent cache if network problem occurs or > serialization of key or value class fails > - > > Key: GEODE-4748 > URL: https://issues.apache.org/jira/browse/GEODE-4748 > Project: Geode > Issue Type: Bug > Components: membership, serialization >Affects Versions: 1.0.0-incubating, 1.1.0, 1.1.1, 1.2.0, 1.3.0, 1.2.1, > 1.4.0 >Reporter: Vadim Lotarev >Priority: Critical > Attachments: clumsy.jpg, geode-4748.log > > > The Geode cache becomes inconsistent if networking or serialization > problems occur at commit time. How to reproduce: > # create any simple _replicated_ region > # run two nodes > # put some value in the region (within a transaction or not) > # execute a query on both nodes to check that the same value is returned (I > used JMX for that) > # somehow emulate a temporary networking or serialization error (throw > IOException from toData() or use [clumsy|https://jagt.github.io/clumsy/] to > emulate a network interruption) > # repeat [#3], an exception should occur > # repeat [#4] - you should see different values on different nodes > It looks like errors that occur after {{TXState.applyChanges}} produce > inconsistency - it is impossible to roll back the applied local changes, which > leads to a state where the local cache contains the changed data but the other > node(s) hold the old data (from before the changes made in the transaction). > To me, consistency is a key property for systems like Geode, so I would > consider this bug critical. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
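Step 5 of the reproduction above (throwing IOException from toData()) can be sketched as follows. This is a minimal, self-contained sketch: FlakyValue and its failure flag are hypothetical, and a real Geode test would declare `implements org.apache.geode.DataSerializable` rather than the bare method used here (the signature below matches that interface).

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;

// Hypothetical value class whose serialization can be made to fail on
// demand, emulating a serialization error at commit time (step 5).
class FlakyValue {
    static volatile boolean failSerialization = false;

    private final String payload;

    FlakyValue(String payload) {
        this.payload = payload;
    }

    // Same signature as DataSerializable.toData(DataOutput).
    public void toData(DataOutput out) throws IOException {
        if (failSerialization) {
            // Simulated failure: thrown while the value is serialized for
            // distribution, i.e. after local changes were already applied.
            throw new IOException("simulated serialization failure");
        }
        out.writeUTF(payload);
    }

    public static void main(String[] args) throws IOException {
        FlakyValue v = new FlakyValue("hello");
        DataOutputStream out = new DataOutputStream(new ByteArrayOutputStream());
        v.toData(out);                // succeeds while the flag is off
        failSerialization = true;     // step 5: flip the flag, then repeat the put
        try {
            v.toData(out);
            throw new AssertionError("expected IOException");
        } catch (IOException expected) {
            System.out.println("serialization failed as intended");
        }
    }
}
```

With a value class like this in a replicated region, the failing put surfaces the ticket's scenario: the local member has applied the change but distribution fails mid-flight.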
[jira] [Updated] (GEODE-3968) Document how rebalance actually works
[ https://issues.apache.org/jira/browse/GEODE-3968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-3968: -- Component/s: (was: docs) > Document how rebalance actually works > - > > Key: GEODE-3968 > URL: https://issues.apache.org/jira/browse/GEODE-3968 > Project: Geode > Issue Type: Sub-task > Components: regions >Reporter: Fred Krone >Priority: Major > Labels: rebalance > > There are a lot of user questions around how rebalance works, configuration, > etc. > Example from Gideon: I still think we need to make important improvements to > the rebalancing documentation. One "big picture" item is to explain the > resource manager's role in rebalancing activity (the RM isn't mentioned in the > docs in this context). > We should also add more detail explaining how to optimize multi-threaded > rebalancing. > What exactly happens with multi-threaded rebalancing, and what are the limits > and/or consequences? For example, could too high a degree of parallelism > defeat the rebalancing algorithm (given the original design targets reaching > the right end-state by moving buckets one at a time)? Or is the reverse > true, and we might get better final results with more threads? Is there some > guideline we can devise based on the number of nodes in the cluster, the > number of cores per server, and the configured number of buckets for a PR? > Is multi-threading applied on each host individually, or are the extra threads > only running on the rebalance "coordinator" node? > I wasn't able to explain any of the nuances of rebalancing based on our docs > and a cursory review of the Geode codebase (although for the latter I'm sure I > could eventually . . .). This actually screams for a dedicated page in > the Geode Wiki's "Geode Internal Architecture" section . . . I would be happy > to help write this; I can get started with answers to the above questions. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Closed] (GEODE-3968) Document how rebalance actually works
[ https://issues.apache.org/jira/browse/GEODE-3968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone closed GEODE-3968. - > Document how rebalance actually works > - > > Key: GEODE-3968 > URL: https://issues.apache.org/jira/browse/GEODE-3968 > Project: Geode > Issue Type: Sub-task > Components: regions >Reporter: Fred Krone >Priority: Major > Labels: rebalance > > There are a lot of user questions around how rebalance works, configuration, > etc. > Example from Gideon: I still think we need to make important improvements to > the rebalancing documentation. One "big picture" item is to explain the > resource manager's role in rebalancing activity (the RM isn't mentioned in the > docs in this context). > We should also add more detail explaining how to optimize multi-threaded > rebalancing. > What exactly happens with multi-threaded rebalancing, and what are the limits > and/or consequences? For example, could too high a degree of parallelism > defeat the rebalancing algorithm (given the original design targets reaching > the right end-state by moving buckets one at a time)? Or is the reverse > true, and we might get better final results with more threads? Is there some > guideline we can devise based on the number of nodes in the cluster, the > number of cores per server, and the configured number of buckets for a PR? > Is multi-threading applied on each host individually, or are the extra threads > only running on the rebalance "coordinator" node? > I wasn't able to explain any of the nuances of rebalancing based on our docs > and a cursory review of the Geode codebase (although for the latter I'm sure I > could eventually . . .). This actually screams for a dedicated page in > the Geode Wiki's "Geode Internal Architecture" section . . . I would be happy > to help write this; I can get started with answers to the above questions. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (GEODE-3968) Document how rebalance actually works
[ https://issues.apache.org/jira/browse/GEODE-3968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone resolved GEODE-3968. --- Resolution: Fixed > Document how rebalance actually works > - > > Key: GEODE-3968 > URL: https://issues.apache.org/jira/browse/GEODE-3968 > Project: Geode > Issue Type: Sub-task > Components: docs, regions >Reporter: Fred Krone >Priority: Major > Labels: rebalance > > There are a lot of user questions around how rebalance works, configuration, > etc. > Example from Gideon: I still think we need to make important improvements to > the rebalancing documentation. One "big picture" item is to explain the > resource manager's role in rebalancing activity (the RM isn't mentioned in the > docs in this context). > We should also add more detail explaining how to optimize multi-threaded > rebalancing. > What exactly happens with multi-threaded rebalancing, and what are the limits > and/or consequences? For example, could too high a degree of parallelism > defeat the rebalancing algorithm (given the original design targets reaching > the right end-state by moving buckets one at a time)? Or is the reverse > true, and we might get better final results with more threads? Is there some > guideline we can devise based on the number of nodes in the cluster, the > number of cores per server, and the configured number of buckets for a PR? > Is multi-threading applied on each host individually, or are the extra threads > only running on the rebalance "coordinator" node? > I wasn't able to explain any of the nuances of rebalancing based on our docs > and a cursory review of the Geode codebase (although for the latter I'm sure I > could eventually . . .). This actually screams for a dedicated page in > the Geode Wiki's "Geode Internal Architecture" section . . . I would be happy > to help write this; I can get started with answers to the above questions. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (GEODE-4721) Region.values() (and all iteration-related operations) returns an empty collection when invoked within JTA
[ https://issues.apache.org/jira/browse/GEODE-4721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone reassigned GEODE-4721: - Assignee: (was: Eric Shu) > Region.values() (and all iteration-related > operations) returns an empty collection when invoked within JTA > > > Key: GEODE-4721 > URL: https://issues.apache.org/jira/browse/GEODE-4721 > Project: Geode > Issue Type: Bug > Components: regions, transactions >Reporter: Vadim Lotarev >Priority: Critical > Labels: pull-request-available > Time Spent: 1h > Remaining Estimate: 0h > > {{Region.values()}} returns an empty collection when invoked within JTA. Other > operations return data; for example, this workaround works (though less > efficient): {{region.getAll(region.keySet()).values()}}. Also, > {{Region.size()}} returns the correct value. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
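The shape of the workaround quoted in the ticket can be sketched as below. RegionStub is a plain HashMap-backed stand-in for the small slice of `org.apache.geode.cache.Region` used here, so the call is runnable without a cluster; it does not reproduce the JTA bug itself, it only shows why `getAll(keySet()).values()` returns the data that `values()` fails to.

```java
import java.util.*;

// Minimal stand-in for the subset of Geode's Region API used by the
// workaround expression region.getAll(region.keySet()).values().
class RegionStub<K, V> {
    private final Map<K, V> data = new HashMap<>();

    void put(K key, V value) { data.put(key, value); }

    Set<K> keySet() { return data.keySet(); }

    // Mirrors Region.getAll(Collection): returns a Map of key -> value.
    Map<K, V> getAll(Collection<K> keys) {
        Map<K, V> result = new HashMap<>();
        for (K key : keys) {
            result.put(key, data.get(key));
        }
        return result;
    }

    public static void main(String[] args) {
        RegionStub<String, Integer> region = new RegionStub<>();
        region.put("a", 1);
        region.put("b", 2);

        // The workaround expression from the ticket. It is less efficient
        // than values() because every entry is fetched by key.
        Collection<Integer> values = region.getAll(region.keySet()).values();
        System.out.println(values.size()); // prints 2
    }
}
```

Inside a JTA transaction the real fix would be for `Region.values()` to consult the transactional view, the same view that `getAll` and `size()` evidently already use.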
[jira] [Assigned] (GEODE-4641) CI Failure: PersistentColocatedPartitionedRegionDUnitTest.testHierarchyOfColocatedChildPRsMissingGrandchild
[ https://issues.apache.org/jira/browse/GEODE-4641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone reassigned GEODE-4641: - Assignee: Kirk Lund > CI Failure: > PersistentColocatedPartitionedRegionDUnitTest.testHierarchyOfColocatedChildPRsMissingGrandchild > --- > > Key: GEODE-4641 > URL: https://issues.apache.org/jira/browse/GEODE-4641 > Project: Geode > Issue Type: Bug > Components: regions >Reporter: Eric Shu >Assignee: Kirk Lund >Priority: Major > > {noformat} > org.apache.geode.internal.cache.partitioned.PersistentColocatedPartitionedRegionDUnitTest > > testHierarchyOfColocatedChildPRsMissingGrandchild FAILED > java.lang.AssertionError: An exception occurred during asynchronous > invocation. > at > org.apache.geode.test.dunit.AsyncInvocation.checkException(AsyncInvocation.java:150) > at > org.apache.geode.test.dunit.AsyncInvocation.get(AsyncInvocation.java:424) > at > org.apache.geode.internal.cache.partitioned.PersistentColocatedPartitionedRegionDUnitTest.testHierarchyOfColocatedChildPRsMissingGrandchild(PersistentColocatedPartitionedRegionDUnitTest.java:1143) > Caused by: > org.mockito.exceptions.verification.NoInteractionsWanted: > No interactions wanted here: > -> at > org.apache.geode.internal.cache.partitioned.PersistentColocatedPartitionedRegionDUnitTest$10.call(PersistentColocatedPartitionedRegionDUnitTest.java:521) > But found this interaction on mock 'appender': > -> at > org.apache.logging.log4j.core.config.AppenderControl.tryCallAppender(AppenderControl.java:156) > *** > For your reference, here is the list of all invocations ([?] - means > unverified). > 1. -> at > org.apache.logging.log4j.core.config.AbstractConfiguration.addLoggerAppender(AbstractConfiguration.java:704) > 2. -> at > org.apache.logging.log4j.core.config.AppenderControl.(AppenderControl.java:51) > 3. -> at > org.apache.logging.log4j.core.config.AppenderControl.ensureAppenderStarted(AppenderControl.java:134) > 4. 
-> at > org.apache.logging.log4j.core.config.AppenderControl.tryCallAppender(AppenderControl.java:156) > 5. -> at > org.apache.logging.log4j.core.config.AppenderControl.ensureAppenderStarted(AppenderControl.java:134) > 6. -> at > org.apache.logging.log4j.core.config.AppenderControl.tryCallAppender(AppenderControl.java:156) > 7. -> at > org.apache.logging.log4j.core.config.AppenderControl.ensureAppenderStarted(AppenderControl.java:134) > 8. -> at > org.apache.logging.log4j.core.config.AppenderControl.tryCallAppender(AppenderControl.java:156) > 9. -> at > org.apache.logging.log4j.core.config.AppenderControl.ensureAppenderStarted(AppenderControl.java:134) > 10. -> at > org.apache.logging.log4j.core.config.AppenderControl.tryCallAppender(AppenderControl.java:156) > 11. -> at > org.apache.logging.log4j.core.config.AppenderControl.ensureAppenderStarted(AppenderControl.java:134) > 12. -> at > org.apache.logging.log4j.core.config.AppenderControl.tryCallAppender(AppenderControl.java:156) > 13. -> at > org.apache.logging.log4j.core.config.AppenderControl.ensureAppenderStarted(AppenderControl.java:134) > 14. -> at > org.apache.logging.log4j.core.config.AppenderControl.tryCallAppender(AppenderControl.java:156) > 15. -> at > org.apache.logging.log4j.core.config.AppenderControl.ensureAppenderStarted(AppenderControl.java:134) > 16. [?]-> at > org.apache.logging.log4j.core.config.AppenderControl.tryCallAppender(AppenderControl.java:156) > 17. -> at > org.apache.logging.log4j.core.config.AppenderControl.ensureAppenderStarted(AppenderControl.java:134) > 18. [?]-> at > org.apache.logging.log4j.core.config.AppenderControl.tryCallAppender(AppenderControl.java:156) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-4707) Have jdbc connector look for configured jndi binding
[ https://issues.apache.org/jira/browse/GEODE-4707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-4707: -- Labels: jdbc (was: ) > Have jdbc connector look for configured jndi binding > -- > > Key: GEODE-4707 > URL: https://issues.apache.org/jira/browse/GEODE-4707 > Project: Geode > Issue Type: Improvement > Components: extensions >Reporter: Fred Krone >Priority: Major > Labels: jdbc > > A user should be able to create a datasource using the gfsh command {{create > jndi-binding }} > Then a datasource will be created with the supplied options and the binding > will be created without the user having to restart the existing server(s) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-4707) Have jdbc connector look for configured jndi binding
[ https://issues.apache.org/jira/browse/GEODE-4707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-4707: -- Component/s: (was: regions) > Have jdbc connector look for configured jndi binding > -- > > Key: GEODE-4707 > URL: https://issues.apache.org/jira/browse/GEODE-4707 > Project: Geode > Issue Type: Improvement > Components: extensions >Reporter: Fred Krone >Priority: Major > Labels: jdbc > > A user should be able to create a datasource using the gfsh command {{create > jndi-binding }} > Then a datasource will be created with the supplied options and the binding > will be created without the user having to restart the existing server(s) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-4708) Users need the ability to tune tombstone GC on a per region basis
[ https://issues.apache.org/jira/browse/GEODE-4708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-4708: -- Summary: Users need the ability to tune tombstone GC on a per region basis (was: Users need the ability to tune GC on a per region basis.) > Users need the ability to tune tombstone GC on a per region basis > - > > Key: GEODE-4708 > URL: https://issues.apache.org/jira/browse/GEODE-4708 > Project: Geode > Issue Type: Improvement > Components: regions >Reporter: Fred Krone >Priority: Major > > We need to reduce memory requirements for a Lucene index. > > Most of the extra memory is consumed by tombstones created by Apache Lucene > generating many temporary 'files' (up to 10 per entry). > > We need a way to provide accelerated GC for these tombstones. It has been > suggested that, rather than doing something specific for Lucene, a more > general solution is to add the ability to tune GC on a per region basis. > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (GEODE-4708) Users need the ability to tune GC on a per region basis.
Fred Krone created GEODE-4708: - Summary: Users need the ability to tune GC on a per region basis. Key: GEODE-4708 URL: https://issues.apache.org/jira/browse/GEODE-4708 Project: Geode Issue Type: Improvement Components: regions Reporter: Fred Krone We need to reduce memory requirements for a Lucene index. Most of the extra memory is consumed by tombstones created by Apache Lucene generating many temporary 'files' (up to 10 per entry). We need a way to provide accelerated GC for these tombstones. It has been suggested that, rather than doing something specific for Lucene, a more general solution is to add the ability to tune GC on a per region basis. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-4658) Expose how much time it takes to write to disk and what is the disk store size
[ https://issues.apache.org/jira/browse/GEODE-4658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-4658: -- Description: *Given* I have a persistent region with a default disk store *When* I write data on the region *Then* I can see 2 new metrics {{average write time to disk}} ns time and {{size of disk store}} in bytes h3. Documentation Add this new metric to the docs This should be a matter of altering the ShowMetricsCommand.java was: *Given* I have a persistent region with a default disk store *When* I write data on the region *Then* I can see 2 new metrics {{average write time to disk}} ns time and {{size of disk store}} in bytes h3. Documentation Add this new metric to the docs > Expose how much time it takes to write to disk and what is the disk store size > -- > > Key: GEODE-4658 > URL: https://issues.apache.org/jira/browse/GEODE-4658 > Project: Geode > Issue Type: Improvement > Components: regions >Reporter: Fred Krone >Priority: Major > > *Given* I have a persistent region with a default disk store > *When* I write data on the region > *Then* I can see 2 new metrics {{average write time to disk}} ns time and > {{size of disk store}} in bytes > h3. Documentation > Add this new metric to the docs > > This should be a matter of altering the ShowMetricsCommand.java -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-4597) RemoteRemoveAllMessage and RemotePutAllMessage should not use a partitioned region class
[ https://issues.apache.org/jira/browse/GEODE-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-4597: -- Labels: (was: refactoring) > RemoteRemoveAllMessage and RemotePutAllMessage should not use a partitioned > region class > > > Key: GEODE-4597 > URL: https://issues.apache.org/jira/browse/GEODE-4597 > Project: Geode > Issue Type: Improvement > Components: transactions >Reporter: Darrel Schneider >Priority: Major > > RemoteRemoveAllMessage and RemotePutAllMessage implement > removeAll/putAll for transactions on non-partitioned regions, but they call a > static method (getEventForEntry) on RemoveAllPRMessage/PutAllPRMessage, which > are messages used for partitioned regions. > This code should be refactored so that the common getEventForEntry is not in > a class dedicated to a specific type of region. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-4597) RemoteRemoveAllMessage and RemotePutAllMessage should not use a partitioned region class
[ https://issues.apache.org/jira/browse/GEODE-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-4597: -- Labels: refactoring (was: ) > RemoteRemoveAllMessage and RemotePutAllMessage should not use a partitioned > region class > > > Key: GEODE-4597 > URL: https://issues.apache.org/jira/browse/GEODE-4597 > Project: Geode > Issue Type: Improvement > Components: transactions >Reporter: Darrel Schneider >Priority: Major > > RemoteRemoveAllMessage and RemotePutAllMessage implement > removeAll/putAll for transactions on non-partitioned regions, but they call a > static method (getEventForEntry) on RemoveAllPRMessage/PutAllPRMessage, which > are messages used for partitioned regions. > This code should be refactored so that the common getEventForEntry is not in > a class dedicated to a specific type of region. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (GEODE-4707) Have jdbc connector look for configured jndi binding
Fred Krone created GEODE-4707: - Summary: Have jdbc connector look for configured jndi binding Key: GEODE-4707 URL: https://issues.apache.org/jira/browse/GEODE-4707 Project: Geode Issue Type: Improvement Components: extensions, regions Reporter: Fred Krone A user should be able to create a datasource using the gfsh command {{create jndi-binding }} Then a datasource will be created with the supplied options and the binding will be created without the user having to restart the existing server(s) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (GEODE-4693) Convert initial JDBC read with empty region to correct pdx-type
Fred Krone created GEODE-4693: - Summary: Convert initial JDBC read with empty region to correct pdx-type Key: GEODE-4693 URL: https://issues.apache.org/jira/browse/GEODE-4693 Project: Geode Issue Type: Improvement Components: extensions, regions Reporter: Fred Krone AC _Given_ I have an empty region with a pdx-type mapped to a jdbc connection _When_ I get an entry from the region _Then_ it should pull a dataset from the backing db and convert it to the pdx-type associated with the region. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
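The Given/When/Then above describes read-through conversion, which can be sketched as below. ReadThroughRegion, the in-memory "backing db", and the toPdx converter are all hypothetical stand-ins; in Geode the JdbcLoader plays this role and the conversion target is a PdxInstance of the region's mapped pdx-type.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch of read-through behavior: a get() on an empty
// region falls back to a backing store and converts the row into the
// region's mapped type before caching it.
class ReadThroughRegion<K, V> {
    private final Map<K, V> cache = new HashMap<>();
    private final Map<K, Map<String, Object>> backingDb;  // stands in for the JDBC table
    private final Function<Map<String, Object>, V> toPdx; // stands in for row -> pdx-type conversion

    ReadThroughRegion(Map<K, Map<String, Object>> backingDb,
                      Function<Map<String, Object>, V> toPdx) {
        this.backingDb = backingDb;
        this.toPdx = toPdx;
    }

    V get(K key) {
        // On a miss, fetch the row and convert it; a null row stays absent.
        return cache.computeIfAbsent(key, k -> {
            Map<String, Object> row = backingDb.get(k);
            return row == null ? null : toPdx.apply(row);
        });
    }

    public static void main(String[] args) {
        Map<String, Map<String, Object>> db = new HashMap<>();
        db.put("emp1", Map.of("name", "Ada", "id", 1));

        ReadThroughRegion<String, String> region =
            new ReadThroughRegion<>(db, row -> "Employee(" + row.get("name") + ")");

        System.out.println(region.get("emp1")); // prints Employee(Ada)
    }
}
```

The AC's point is the converter step: without it, an initial read into an empty region would surface raw result-set data rather than the pdx-type associated with the region.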
[jira] [Assigned] (GEODE-788) Provide region.clear() implementation for Partitioned Regions
[ https://issues.apache.org/jira/browse/GEODE-788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone reassigned GEODE-788: Assignee: Kirk Lund > Provide region.clear() implementation for Partitioned Regions > - > > Key: GEODE-788 > URL: https://issues.apache.org/jira/browse/GEODE-788 > Project: Geode > Issue Type: New Feature > Components: core >Reporter: William Markito Oliveira >Assignee: Kirk Lund >Priority: Major > > The current PartitionedRegion API doesn't offer a clear operation. > {code} > // from PartitionedRegion.java > /** >* @since 5.0 >* @throws UnsupportedOperationException >* OVERRIDES >*/ > @Override > public void clear() { > throw new UnsupportedOperationException(); > } > @Override > void basicClear(RegionEventImpl regionEvent, boolean cacheWrite) { > throw new UnsupportedOperationException(); > } > @Override > void basicLocalClear(RegionEventImpl event) { > throw new UnsupportedOperationException(); > } > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
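The quoted PartitionedRegion code throws UnsupportedOperationException from clear(), so callers today must work around it. A minimal stand-alone sketch of that caller-side fallback is below: NoClearMap is a hypothetical HashMap whose clear() throws like the quoted code, and the fallback removes entries key by key. A real Geode workaround would call region.remove(key) over a snapshot of keySet(), and unlike a true clear() it is neither atomic nor a single distributed operation.

```java
import java.util.HashMap;
import java.util.HashSet;

// HashMap stand-in mimicking the quoted PartitionedRegion behavior.
class NoClearMap<K, V> extends HashMap<K, V> {
    @Override
    public void clear() {
        throw new UnsupportedOperationException(); // as in PartitionedRegion.clear()
    }

    public static void main(String[] args) {
        NoClearMap<String, Integer> region = new NoClearMap<>();
        region.put("a", 1);
        region.put("b", 2);
        try {
            region.clear();
        } catch (UnsupportedOperationException e) {
            // Snapshot the keys first: removing while iterating the live
            // keySet() risks ConcurrentModificationException.
            for (String key : new HashSet<>(region.keySet())) {
                region.remove(key);
            }
        }
        System.out.println(region.size()); // prints 0
    }
}
```

The per-key fallback is exactly why the feature request matters: on a large partitioned region it issues one remove per entry instead of one bulk operation.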
[jira] [Resolved] (GEODE-4120) Improvements to JDBC connector
[ https://issues.apache.org/jira/browse/GEODE-4120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone resolved GEODE-4120. --- Resolution: Won't Fix > Improvements to JDBC connector > -- > > Key: GEODE-4120 > URL: https://issues.apache.org/jira/browse/GEODE-4120 > Project: Geode > Issue Type: Wish > Components: regions >Reporter: Fred Krone >Priority: Major > > This is the overarching epic for next improvements to JDBC connector as we > move from MVP to GA. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Closed] (GEODE-4120) Improvements to JDBC connector
[ https://issues.apache.org/jira/browse/GEODE-4120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone closed GEODE-4120. - > Improvements to JDBC connector > -- > > Key: GEODE-4120 > URL: https://issues.apache.org/jira/browse/GEODE-4120 > Project: Geode > Issue Type: Wish > Components: regions >Reporter: Fred Krone >Priority: Major > > This is the overarching epic for next improvements to JDBC connector as we > move from MVP to GA. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (GEODE-4217) gfsh create jdbc-connection command allows a password to be set without a user name
[ https://issues.apache.org/jira/browse/GEODE-4217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone reassigned GEODE-4217: - Assignee: Kirk Lund > gfsh create jdbc-connection command allows a password to be set without a > user name > > > Key: GEODE-4217 > URL: https://issues.apache.org/jira/browse/GEODE-4217 > Project: Geode > Issue Type: Bug > Components: regions >Reporter: Darrel Schneider >Assignee: Kirk Lund >Priority: Major > > The gfsh create jdbc-connection command allows a password to be set without > a user name. > If you configure a password you must also configure a user name and this > should be validated. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-4182) JDBC connector exception handling needs improvement
[ https://issues.apache.org/jira/browse/GEODE-4182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-4182: -- Component/s: (was: regions) > JDBC connector exception handling needs improvement > --- > > Key: GEODE-4182 > URL: https://issues.apache.org/jira/browse/GEODE-4182 > Project: Geode > Issue Type: Improvement > Components: extensions >Reporter: Darrel Schneider >Assignee: Kirk Lund >Priority: Major > > The JDBC connector currently has multiple places it catches SQLException and > turns around and throws IllegalStateException. It should instead throw a new > exception that is dedicated to the jdbc connector. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (GEODE-4182) JDBC connector exception handling needs improvement
[ https://issues.apache.org/jira/browse/GEODE-4182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone reassigned GEODE-4182: - Assignee: Kirk Lund > JDBC connector exception handling needs improvement > --- > > Key: GEODE-4182 > URL: https://issues.apache.org/jira/browse/GEODE-4182 > Project: Geode > Issue Type: Improvement > Components: extensions, regions >Reporter: Darrel Schneider >Assignee: Kirk Lund >Priority: Major > > The JDBC connector currently has multiple places it catches SQLException and > turns around and throws IllegalStateException. It should instead throw a new > exception that is dedicated to the jdbc connector. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-4182) JDBC connector exception handling needs improvement
[ https://issues.apache.org/jira/browse/GEODE-4182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-4182: -- Component/s: extensions > JDBC connector exception handling needs improvement > --- > > Key: GEODE-4182 > URL: https://issues.apache.org/jira/browse/GEODE-4182 > Project: Geode > Issue Type: Improvement > Components: extensions, regions >Reporter: Darrel Schneider >Priority: Major > > The JDBC connector currently has multiple places it catches SQLException and > turns around and throws IllegalStateException. It should instead throw a new > exception that is dedicated to the jdbc connector. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-4266) JMX manager should not deserialize exceptions from server
[ https://issues.apache.org/jira/browse/GEODE-4266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-4266: -- Component/s: (was: regions) jmx gfsh > JMX manager should not deserialize exceptions from server > - > > Key: GEODE-4266 > URL: https://issues.apache.org/jira/browse/GEODE-4266 > Project: Geode > Issue Type: Bug > Components: gfsh, jmx >Reporter: Fred Krone >Priority: Major > > Steps > Created a jdbc-connection, jdbc-mapping, region in gfsh > Tried a .get from the region which attempted to read through with the > JdbcLoader > However it could not connect to the endpoint and threw an exception (fine). > But the exception wasn't handled in gfsh -- gfsh appeared hung. > [info 2018/01/09 13:42:41.338 PST s1 > tid=0x4b] Exception occurred: > java.lang.IllegalStateException: Could not connect ... > at > org.apache.geode.connectors.jdbc.internal.SqlHandler.getConnection(SqlHandler.java:59) > at > org.apache.geode.connectors.jdbc.internal.SqlHandler.read(SqlHandler.java:73) > at org.apache.geode.connectors.jdbc.JdbcLoader.load(JdbcLoader.java:52) > at > org.apache.geode.internal.cache.SearchLoadAndWriteProcessor.doLocalLoad(SearchLoadAndWriteProcessor.java:791) > at > org.apache.geode.internal.cache.SearchLoadAndWriteProcessor.load(SearchLoadAndWriteProcessor.java:602) > at > org.apache.geode.internal.cache.SearchLoadAndWriteProcessor.searchAndLoad(SearchLoadAndWriteProcessor.java:461) > at > org.apache.geode.internal.cache.SearchLoadAndWriteProcessor.doSearchAndLoad(SearchLoadAndWriteProcessor.java:177) > at > org.apache.geode.internal.cache.DistributedRegion.findUsingSearchLoad(DistributedRegion.java:2338) > at > org.apache.geode.internal.cache.DistributedRegion.findObjectInSystem(DistributedRegion.java:2208) > at > org.apache.geode.internal.cache.LocalRegion.nonTxnFindObject(LocalRegion.java:1477) > at > org.apache.geode.internal.cache.LocalRegionDataView.findObject(LocalRegionDataView.java:176) > at > 
org.apache.geode.internal.cache.LocalRegion.get(LocalRegion.java:1366) > at > org.apache.geode.internal.cache.LocalRegion.get(LocalRegion.java:1300) > at > org.apache.geode.internal.cache.LocalRegion.get(LocalRegion.java:1285) > at > org.apache.geode.internal.cache.AbstractRegion.get(AbstractRegion.java:320) > at > org.apache.geode.management.internal.cli.functions.DataCommandFunction.get(DataCommandFunction.java:443) > at > org.apache.geode.management.internal.cli.functions.DataCommandFunction.get(DataCommandFunction.java:151) > at > org.apache.geode.management.internal.cli.functions.DataCommandFunction.execute(DataCommandFunction.java:116) > at > org.apache.geode.internal.cache.MemberFunctionStreamingMessage.process(MemberFunctionStreamingMessage.java:187) > at > org.apache.geode.distributed.internal.DistributionMessage.scheduleAction(DistributionMessage.java:382) > at > org.apache.geode.distributed.internal.DistributionMessage$1.run(DistributionMessage.java:448) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at > org.apache.geode.distributed.internal.ClusterDistributionManager.runUntilShutdown(ClusterDistributionManager.java:1117) > at > org.apache.geode.distributed.internal.ClusterDistributionManager.access$000(ClusterDistributionManager.java:108) > at > org.apache.geode.distributed.internal.ClusterDistributionManager$9$1.run(ClusterDistributionManager.java:987) > at java.lang.Thread.run(Thread.java:745) > Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: > Communications link failure > The last packet sent successfully to the server was 0 milliseconds ago. The > driver has not received any packets from the server. 
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:423) > at com.mysql.jdbc.Util.handleNewInstance(Util.java:425) > at > com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:990) > at com.mysql.jdbc.MysqlIO.(MysqlIO.java:341) > at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2186) > at > com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2219) > at
[jira] [Updated] (GEODE-4266) JMX manager should not deserialize exceptions from server
[ https://issues.apache.org/jira/browse/GEODE-4266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-4266: -- Summary: JMX manager should not deserialize exceptions from server (was: jdbc exception not handled in gfsh) > JMX manager should not deserialize exceptions from server > - > > Key: GEODE-4266 > URL: https://issues.apache.org/jira/browse/GEODE-4266 > Project: Geode > Issue Type: Bug > Components: gfsh, jmx >Reporter: Fred Krone >Priority: Major > > Steps > Created a jdbc-connection, jdbc-mapping, region in gfsh > Tried a .get from the region which attempted to read through with the > JdbcLoader > However it could not connect to the endpoint and threw an exception (fine). > But the exception wasn't handled in gfsh -- gfsh appeared hung. > [info 2018/01/09 13:42:41.338 PST s1 > tid=0x4b] Exception occurred: > java.lang.IllegalStateException: Could not connect ... > at > org.apache.geode.connectors.jdbc.internal.SqlHandler.getConnection(SqlHandler.java:59) > at > org.apache.geode.connectors.jdbc.internal.SqlHandler.read(SqlHandler.java:73) > at org.apache.geode.connectors.jdbc.JdbcLoader.load(JdbcLoader.java:52) > at > org.apache.geode.internal.cache.SearchLoadAndWriteProcessor.doLocalLoad(SearchLoadAndWriteProcessor.java:791) > at > org.apache.geode.internal.cache.SearchLoadAndWriteProcessor.load(SearchLoadAndWriteProcessor.java:602) > at > org.apache.geode.internal.cache.SearchLoadAndWriteProcessor.searchAndLoad(SearchLoadAndWriteProcessor.java:461) > at > org.apache.geode.internal.cache.SearchLoadAndWriteProcessor.doSearchAndLoad(SearchLoadAndWriteProcessor.java:177) > at > org.apache.geode.internal.cache.DistributedRegion.findUsingSearchLoad(DistributedRegion.java:2338) > at > org.apache.geode.internal.cache.DistributedRegion.findObjectInSystem(DistributedRegion.java:2208) > at > org.apache.geode.internal.cache.LocalRegion.nonTxnFindObject(LocalRegion.java:1477) > at > 
org.apache.geode.internal.cache.LocalRegionDataView.findObject(LocalRegionDataView.java:176) > at > org.apache.geode.internal.cache.LocalRegion.get(LocalRegion.java:1366) > at > org.apache.geode.internal.cache.LocalRegion.get(LocalRegion.java:1300) > at > org.apache.geode.internal.cache.LocalRegion.get(LocalRegion.java:1285) > at > org.apache.geode.internal.cache.AbstractRegion.get(AbstractRegion.java:320) > at > org.apache.geode.management.internal.cli.functions.DataCommandFunction.get(DataCommandFunction.java:443) > at > org.apache.geode.management.internal.cli.functions.DataCommandFunction.get(DataCommandFunction.java:151) > at > org.apache.geode.management.internal.cli.functions.DataCommandFunction.execute(DataCommandFunction.java:116) > at > org.apache.geode.internal.cache.MemberFunctionStreamingMessage.process(MemberFunctionStreamingMessage.java:187) > at > org.apache.geode.distributed.internal.DistributionMessage.scheduleAction(DistributionMessage.java:382) > at > org.apache.geode.distributed.internal.DistributionMessage$1.run(DistributionMessage.java:448) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at > org.apache.geode.distributed.internal.ClusterDistributionManager.runUntilShutdown(ClusterDistributionManager.java:1117) > at > org.apache.geode.distributed.internal.ClusterDistributionManager.access$000(ClusterDistributionManager.java:108) > at > org.apache.geode.distributed.internal.ClusterDistributionManager$9$1.run(ClusterDistributionManager.java:987) > at java.lang.Thread.run(Thread.java:745) > Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: > Communications link failure > The last packet sent successfully to the server was 0 milliseconds ago. The > driver has not received any packets from the server. 
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:423) > at com.mysql.jdbc.Util.handleNewInstance(Util.java:425) > at > com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:990) > at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:341) > at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2186) > at > com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2219) > at
[jira] [Created] (GEODE-4658) Expose how much time it takes to write to disk and what is the disk store size
Fred Krone created GEODE-4658: - Summary: Expose how much time it takes to write to disk and what is the disk store size Key: GEODE-4658 URL: https://issues.apache.org/jira/browse/GEODE-4658 Project: Geode Issue Type: Improvement Components: regions Reporter: Fred Krone *Given* I have a persistent region with a default disk store *When* I write data on the region *Then* I can see 2 new metrics: {{average write time to disk}} in ns and {{size of disk store}} in bytes h3. Documentation Add these new metrics to the docs -- This message was sent by Atlassian JIRA (v7.6.3#76005)
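The two metrics proposed above can be illustrated with a self-contained sketch. This is an illustration only, not Geode's statistics implementation: it times a single write in nanoseconds and reads back the file size in bytes, using a plain temp file as a stand-in for a disk store. The class name `DiskMetricsSketch` is hypothetical.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustration only: computes the two proposed metrics -- write time in
// nanoseconds and store size in bytes -- against a plain temp file standing
// in for a disk store. Geode's real statistics layer is not shown here.
public class DiskMetricsSketch {
    /** Returns {writeTimeNs, sizeBytes} for one 64 KiB write. */
    public static long[] measure() throws IOException {
        Path store = Files.createTempFile("disk-store-sketch", ".bin");
        byte[] payload = new byte[64 * 1024];

        long start = System.nanoTime();
        Files.write(store, payload);                  // the timed write
        long writeTimeNs = System.nanoTime() - start; // "average write time to disk" (single sample)

        long sizeBytes = Files.size(store);           // "size of disk store"
        Files.deleteIfExists(store);
        return new long[] {writeTimeNs, sizeBytes};
    }

    public static void main(String[] args) throws IOException {
        long[] m = measure();
        System.out.println(m[0] + " ns, " + m[1] + " bytes");
    }
}
```

A real implementation would average the write time over many samples rather than report a single measurement.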
[jira] [Updated] (GEODE-4629) CI Failure: DiskRegionDUnitTest.testNoFaults FAILED
[ https://issues.apache.org/jira/browse/GEODE-4629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-4629: -- Fix Version/s: 1.5.0 > CI Failure: DiskRegionDUnitTest.testNoFaults FAILED > --- > > Key: GEODE-4629 > URL: https://issues.apache.org/jira/browse/GEODE-4629 > Project: Geode > Issue Type: Bug > Components: eviction >Reporter: Eric Shu >Assignee: Nick Reich >Priority: Major > Labels: pull-request-available > Fix For: 1.5.0 > > Time Spent: 40m > Remaining Estimate: 0h > > {noformat} > org.apache.geode.cache30.DiskRegionDUnitTest > testNoFaults FAILED > java.lang.AssertionError: Key 98 caused an eviction expected:<21> but > was:<22> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:834) > at org.junit.Assert.assertEquals(Assert.java:645) > at > org.apache.geode.cache30.DiskRegionDUnitTest.testNoFaults(DiskRegionDUnitTest.java:320) > {noformat} > This is possibly caused by enabling the new LRU algorithm. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-4435) DiskStoreImpl.FlusherThread.run() should increment queue size stat after writing to disk
[ https://issues.apache.org/jira/browse/GEODE-4435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-4435: -- Fix Version/s: 1.5.0 > DiskStoreImpl.FlusherThread.run() should increment queue size stat after > writing to disk > > > Key: GEODE-4435 > URL: https://issues.apache.org/jira/browse/GEODE-4435 > Project: Geode > Issue Type: Bug >Reporter: Lynn Gallinat >Assignee: Lynn Gallinat >Priority: Major > Labels: pull-request-available > Fix For: 1.5.0 > > Time Spent: 50m > Remaining Estimate: 0h > > com.gemstone.gemfire.internal.cache.DiskStoreImpl.FlusherThread.run() > should move this call: > stats.incQueueSize(-drainCount); > to the end of the while loop that drains the queue. It currently calls it at > the beginning before it actually writes to async ops to disk. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
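The ordering bug described in GEODE-4435 can be sketched with a self-contained simulation. This is not the actual `DiskStoreImpl.FlusherThread` code; `FlusherSketch` and its fields are hypothetical stand-ins showing the fix: decrement the queue-size stat only after the drained ops have been written, so readers of the stat never see the queue as empty while writes are still in flight.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical stand-in for DiskStoreImpl.FlusherThread and its stats.
// The fix: decrement the queue-size stat AFTER writing, not before.
public class FlusherSketch {
    private final Queue<String> asyncQueue = new ArrayDeque<>();
    private int queueSizeStat;                    // stands in for stats queue-size

    void enqueue(String op) {
        asyncQueue.add(op);
        queueSizeStat++;
    }

    int drainAndFlush(StringBuilder disk) {
        int drainCount = 0;
        String op;
        while ((op = asyncQueue.poll()) != null) {
            disk.append(op);                      // write the op to "disk" first...
            drainCount++;
        }
        queueSizeStat -= drainCount;              // ...then decrement the stat (the fix)
        return drainCount;
    }

    int queueSize() {
        return queueSizeStat;
    }
}
```

The buggy version called the decrement before the write loop, so the stat dropped to zero while async ops were still pending on disk.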
[jira] [Updated] (GEODE-4573) Query execution within a transaction (JTA) produces a ClassCastException in version 1.4.0
[ https://issues.apache.org/jira/browse/GEODE-4573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-4573: -- Affects Version/s: 1.4.0 > Query execution within a transaction (JTA) produces a ClassCastException in > version 1.4.0 > - > > Key: GEODE-4573 > URL: https://issues.apache.org/jira/browse/GEODE-4573 > Project: Geode > Issue Type: Bug > Components: transactions >Affects Versions: 1.4.0 >Reporter: Vadim Lotarev >Priority: Major > > The stack trace is as follows: > {code:java} > Caused by: java.lang.ClassCastException: > org.apache.geode.internal.cache.TXEntry cannot be cast to > org.apache.geode.internal.cache.LocalRegion$NonTXEntry > at > org.apache.geode.internal.cache.EntriesSet$EntriesIterator.moveNext(EntriesSet.java:179) > at > org.apache.geode.internal.cache.EntriesSet$EntriesIterator.<init>(EntriesSet.java:118) > at org.apache.geode.internal.cache.EntriesSet.iterator(EntriesSet.java:83) > at > org.apache.geode.cache.query.internal.ResultsCollectionWrapper.iterator(ResultsCollectionWrapper.java:184) > at org.apache.geode.cache.query.internal.QRegion.iterator(QRegion.java:244) > at > org.apache.geode.cache.query.internal.CompiledSelect.doNestedIterations(CompiledSelect.java:834) > at > org.apache.geode.cache.query.internal.CompiledSelect.doIterationEvaluate(CompiledSelect.java:701) > at > org.apache.geode.cache.query.internal.CompiledSelect.evaluate(CompiledSelect.java:545) > at > org.apache.geode.cache.query.internal.CompiledSelect.evaluate(CompiledSelect.java:55) > at > org.apache.geode.cache.query.internal.DefaultQuery.executeUsingContext(DefaultQuery.java:557) > at > org.apache.geode.cache.query.internal.DefaultQuery.execute(DefaultQuery.java:384) > {code} > When property {{-Dgemfire.restoreSetOperationTransactionBehavior=true}} is > set then everything works without errors. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-4573) Query execution within a transaction (JTA) produces a ClassCastException in version 1.4.0
[ https://issues.apache.org/jira/browse/GEODE-4573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-4573: -- Component/s: transactions > Query execution within a transaction (JTA) produces a ClassCastException in > version 1.4.0 > - > > Key: GEODE-4573 > URL: https://issues.apache.org/jira/browse/GEODE-4573 > Project: Geode > Issue Type: Bug > Components: transactions >Affects Versions: 1.4.0 >Reporter: Vadim Lotarev >Priority: Major > > The stack trace is as follows: > {code:java} > Caused by: java.lang.ClassCastException: > org.apache.geode.internal.cache.TXEntry cannot be cast to > org.apache.geode.internal.cache.LocalRegion$NonTXEntry > at > org.apache.geode.internal.cache.EntriesSet$EntriesIterator.moveNext(EntriesSet.java:179) > at > org.apache.geode.internal.cache.EntriesSet$EntriesIterator.<init>(EntriesSet.java:118) > at org.apache.geode.internal.cache.EntriesSet.iterator(EntriesSet.java:83) > at > org.apache.geode.cache.query.internal.ResultsCollectionWrapper.iterator(ResultsCollectionWrapper.java:184) > at org.apache.geode.cache.query.internal.QRegion.iterator(QRegion.java:244) > at > org.apache.geode.cache.query.internal.CompiledSelect.doNestedIterations(CompiledSelect.java:834) > at > org.apache.geode.cache.query.internal.CompiledSelect.doIterationEvaluate(CompiledSelect.java:701) > at > org.apache.geode.cache.query.internal.CompiledSelect.evaluate(CompiledSelect.java:545) > at > org.apache.geode.cache.query.internal.CompiledSelect.evaluate(CompiledSelect.java:55) > at > org.apache.geode.cache.query.internal.DefaultQuery.executeUsingContext(DefaultQuery.java:557) > at > org.apache.geode.cache.query.internal.DefaultQuery.execute(DefaultQuery.java:384) > {code} > When property {{-Dgemfire.restoreSetOperationTransactionBehavior=true}} is > set then everything works without errors. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
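The workaround named in the GEODE-4573 report can be applied on the JVM command line or programmatically. A minimal sketch of the programmatic form follows; the class name `TxQueryWorkaround` is hypothetical, and the property must be set before the cache is created for it to take effect.

```java
// Minimal sketch of the workaround named in GEODE-4573. Equivalent to
// passing -Dgemfire.restoreSetOperationTransactionBehavior=true on the
// JVM command line; must run before the Cache/ClientCache is created.
public class TxQueryWorkaround {
    public static void apply() {
        System.setProperty("gemfire.restoreSetOperationTransactionBehavior", "true");
        // ... then create the cache and run the query inside the JTA
        // transaction as before.
    }
}
```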
[jira] [Updated] (GEODE-4157) Expiration and eviction have static thread pools which should instead belong to the cache
[ https://issues.apache.org/jira/browse/GEODE-4157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-4157: -- Labels: (was: geode-150) > Expiration and eviction have static thread pools which should instead belong > to the cache > - > > Key: GEODE-4157 > URL: https://issues.apache.org/jira/browse/GEODE-4157 > Project: Geode > Issue Type: Bug > Components: eviction, expiration >Reporter: Darrel Schneider >Priority: Major > > Expiration has a static thread pool: > org.apache.geode.internal.cache.ExpiryTask.executor > LRUListWithAsyncSorting in Eviction also has a static thread pool. > The life of these executors should be tied to the cache. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (GEODE-4405) Pass information for BackupDestination in the prepare message of backup command
[ https://issues.apache.org/jira/browse/GEODE-4405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone resolved GEODE-4405. --- Resolution: Fixed > Pass information for BackupDestination in the prepare message of backup > command > --- > > Key: GEODE-4405 > URL: https://issues.apache.org/jira/browse/GEODE-4405 > Project: Geode > Issue Type: Sub-task > Components: persistence >Reporter: Nick Reich >Assignee: Nick Reich >Priority: Major > Labels: pull-request-available > Fix For: 1.5.0 > > Time Spent: 20m > Remaining Estimate: 0h > > Gfsh executes backup in 2 phases/messages. 1. prepare 2. doBackup > The target location is passed in the doBackup phase. If instead, all > information about the backup task was provided in the first (prepare) > message, more aspects of the task could be set in its constructor and thus be > final. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-4405) Pass information for BackupDestination in the prepare message of backup command
[ https://issues.apache.org/jira/browse/GEODE-4405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-4405: -- Fix Version/s: 1.5.0 > Pass information for BackupDestination in the prepare message of backup > command > --- > > Key: GEODE-4405 > URL: https://issues.apache.org/jira/browse/GEODE-4405 > Project: Geode > Issue Type: Sub-task > Components: persistence >Reporter: Nick Reich >Assignee: Nick Reich >Priority: Major > Labels: pull-request-available > Fix For: 1.5.0 > > Time Spent: 20m > Remaining Estimate: 0h > > Gfsh executes backup in 2 phases/messages. 1. prepare 2. doBackup > The target location is passed in the doBackup phase. If instead, all > information about the backup task was provided in the first (prepare) > message, more aspects of the task could be set in its constructor and thus be > final. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-4295) Setting domain class on region-mapping but initial instance is always PdxInstance
[ https://issues.apache.org/jira/browse/GEODE-4295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-4295: -- Component/s: (was: extensions) regions > Setting domain class on region-mapping but initial instance is always > PdxInstance > - > > Key: GEODE-4295 > URL: https://issues.apache.org/jira/browse/GEODE-4295 > Project: Geode > Issue Type: Bug > Components: regions >Reporter: Fred Krone >Priority: Major > > For JDBC Connector > Read serialized on the cache is set to false. Region mapping was created > with pdx-instance-type set to a specific domain class (example: Employee). > This means it should return this class on reads using the JdbcLoader. > However, the initial instance is a PdxInstance, which results in a collision. > Something like Employee employee = region.get(2); <-- blows up > The type is correct (Employee) but won't cast. > [vm1] PDX Instance: > PDX[14933766,org.apache.geode.connectors.jdbc.Employee]{first_name=Fred, > hire_date=2018-01-12 00:00:00.0, id=3, last_name=Krone} read serialized: false -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-4266) jdbc exception not handled in gfsh
[ https://issues.apache.org/jira/browse/GEODE-4266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-4266: -- Component/s: (was: extensions) regions > jdbc exception not handled in gfsh > -- > > Key: GEODE-4266 > URL: https://issues.apache.org/jira/browse/GEODE-4266 > Project: Geode > Issue Type: Bug > Components: regions >Reporter: Fred Krone >Priority: Major > > Steps > Created a jdbc-connection, jdbc-mapping, region in gfsh > Tried a .get from the region which attempted to read through with the > JdbcLoader > However it could not connect to the endpoint and threw an exception (fine). > But the exception wasn't handled in gfsh -- gfsh appeared hung. > [info 2018/01/09 13:42:41.338 PST s1 > tid=0x4b] Exception occurred: > java.lang.IllegalStateException: Could not connect ... > at > org.apache.geode.connectors.jdbc.internal.SqlHandler.getConnection(SqlHandler.java:59) > at > org.apache.geode.connectors.jdbc.internal.SqlHandler.read(SqlHandler.java:73) > at org.apache.geode.connectors.jdbc.JdbcLoader.load(JdbcLoader.java:52) > at > org.apache.geode.internal.cache.SearchLoadAndWriteProcessor.doLocalLoad(SearchLoadAndWriteProcessor.java:791) > at > org.apache.geode.internal.cache.SearchLoadAndWriteProcessor.load(SearchLoadAndWriteProcessor.java:602) > at > org.apache.geode.internal.cache.SearchLoadAndWriteProcessor.searchAndLoad(SearchLoadAndWriteProcessor.java:461) > at > org.apache.geode.internal.cache.SearchLoadAndWriteProcessor.doSearchAndLoad(SearchLoadAndWriteProcessor.java:177) > at > org.apache.geode.internal.cache.DistributedRegion.findUsingSearchLoad(DistributedRegion.java:2338) > at > org.apache.geode.internal.cache.DistributedRegion.findObjectInSystem(DistributedRegion.java:2208) > at > org.apache.geode.internal.cache.LocalRegion.nonTxnFindObject(LocalRegion.java:1477) > at > org.apache.geode.internal.cache.LocalRegionDataView.findObject(LocalRegionDataView.java:176) > at > 
org.apache.geode.internal.cache.LocalRegion.get(LocalRegion.java:1366) > at > org.apache.geode.internal.cache.LocalRegion.get(LocalRegion.java:1300) > at > org.apache.geode.internal.cache.LocalRegion.get(LocalRegion.java:1285) > at > org.apache.geode.internal.cache.AbstractRegion.get(AbstractRegion.java:320) > at > org.apache.geode.management.internal.cli.functions.DataCommandFunction.get(DataCommandFunction.java:443) > at > org.apache.geode.management.internal.cli.functions.DataCommandFunction.get(DataCommandFunction.java:151) > at > org.apache.geode.management.internal.cli.functions.DataCommandFunction.execute(DataCommandFunction.java:116) > at > org.apache.geode.internal.cache.MemberFunctionStreamingMessage.process(MemberFunctionStreamingMessage.java:187) > at > org.apache.geode.distributed.internal.DistributionMessage.scheduleAction(DistributionMessage.java:382) > at > org.apache.geode.distributed.internal.DistributionMessage$1.run(DistributionMessage.java:448) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at > org.apache.geode.distributed.internal.ClusterDistributionManager.runUntilShutdown(ClusterDistributionManager.java:1117) > at > org.apache.geode.distributed.internal.ClusterDistributionManager.access$000(ClusterDistributionManager.java:108) > at > org.apache.geode.distributed.internal.ClusterDistributionManager$9$1.run(ClusterDistributionManager.java:987) > at java.lang.Thread.run(Thread.java:745) > Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: > Communications link failure > The last packet sent successfully to the server was 0 milliseconds ago. The > driver has not received any packets from the server. 
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:423) > at com.mysql.jdbc.Util.handleNewInstance(Util.java:425) > at > com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:990) > at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:341) > at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2186) > at > com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2219) > at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2014) > at
[jira] [Comment Edited] (GEODE-4434) Need an API to confirm recovery is finished
[ https://issues.apache.org/jira/browse/GEODE-4434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347643#comment-16347643 ] Fred Krone edited comment on GEODE-4434 at 2/1/18 6:40 PM: --- I haven't been able to get any feedback from the field on what this command should look like for the user. Any suggestions? gfsh --check-redundancy ? was (Author: fkrone): I haven't been able to get any feedback from the field on what this command should look like for the user. Any suggestions? gfsh --check-redundancy ? > Need an API to confirm recovery is finished > --- > > Key: GEODE-4434 > URL: https://issues.apache.org/jira/browse/GEODE-4434 > Project: Geode > Issue Type: New Feature > Components: persistence >Reporter: xiaojian zhou >Priority: Major > > This feature is expected especially when we need to decide if it's safe to > shut down one member. So far we were using the following workarounds instead: > > 1) wait until the cacheserver is listening on the port (to make sure all > replicated regions have finished recovery) > 2) call rebalance() to make sure the defined redundancy is satisfied (for PR) > > We need to introduce a boolean parameter into rebalance() to only check > redundancy, not to do a real rebalance. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-3800) Create backup service
[ https://issues.apache.org/jira/browse/GEODE-3800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-3800: -- Fix Version/s: 1.5.0 > Create backup service > - > > Key: GEODE-3800 > URL: https://issues.apache.org/jira/browse/GEODE-3800 > Project: Geode > Issue Type: Sub-task > Components: persistence >Reporter: Nick Reich >Assignee: Anilkumar Gingade >Priority: Major > Labels: pull-request-available > Fix For: 1.5.0 > > Time Spent: 40m > Remaining Estimate: 0h > > Instead of creating a backup management class for each backup request, create > a service to which the requests are made and handles the logic of launching > the backup and what to do if more than one backup is requested at a time. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (GEODE-4434) Need an API to confirm recovery is finished
[ https://issues.apache.org/jira/browse/GEODE-4434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347643#comment-16347643 ] Fred Krone commented on GEODE-4434: --- I haven't been able to get any feedback from the field on what this command should look like for the user. Any suggestions? gfsh --check-redundancy ? > Need an API to confirm recovery is finished > --- > > Key: GEODE-4434 > URL: https://issues.apache.org/jira/browse/GEODE-4434 > Project: Geode > Issue Type: New Feature > Components: persistence >Reporter: xiaojian zhou >Priority: Major > > This feature is expected especially when we need to decide if it's safe to > shut down one member. So far we were using the following workarounds instead: > > 1) wait until the cacheserver is listening on the port (to make sure all > replicated regions have finished recovery) > 2) call rebalance() to make sure the defined redundancy is satisfied (for PR) > > We need to introduce a boolean parameter into rebalance() to only check > redundancy, not to do a real rebalance. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (GEODE-4250) Users would like a command to re-establish redundancy without rebalancing
[ https://issues.apache.org/jira/browse/GEODE-4250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338213#comment-16338213 ] Fred Krone commented on GEODE-4250: --- [~gideonlow] we'll probably have a community proposal for this ... but was wondering if you had an opinion on what the command should look like? quickest route: It can be added in as a parameter onto the current rebalance (basically set --moveBuckets and --movePrimary to false; it'll check for redundancy but won't rebalance). Or the cleanest route, but a little more work: it can be a new gfsh command 'gfsh redundancy --check' > Users would like a command to re-establish redundancy without rebalancing > - > > Key: GEODE-4250 > URL: https://issues.apache.org/jira/browse/GEODE-4250 > Project: Geode > Issue Type: Improvement > Components: docs, regions >Reporter: Fred Krone >Priority: Major > > Acceptance criteria: > -- There is a way for a user to detect that redundancy is restored > -- There is a way to check current redundancy > -- Can set moveBuckets and movePrimary to false and run rebalance > > Command would only succeed when the system is fully redundant. > Re-establishing Redundancy after the loss of a peer node is typically far > more urgent and important than achieving better balance. The operational > impact of rebalancing is also much higher, forcing impacted buckets' updates > to be distributed to _redundancy-copies + 1_ peer processes and potentially > spiking p2p connections/threads (and thus load) far beyond normal operations. > If the system is already close to exhausting available capacity for some > hardware component, this can be enough to push it over-the-edge (and may > force the original fault to recur). This problem is exacerbated when the > cluster's overall capacity has been reduced due to the loss of a physical > server. 
Without the ability to separate the operational tasks of > re-establishing full data redundancy and rebalancing bucket partitions (that > are already safely redundant), system administrators may be forced to > provision replacement capacity _before_ they can restore full service, thus > increasing downtime unnecessarily. > For these reasons, we must add the option to execute these operational tasks > separately. > It still makes sense for _rebalancing_ ops to first re-establish redundancy, > so we can keep the existing GFSH command/behavior (it would still be useful > to clearly log completion of one step before the next one begins). We need a > new GFSH command/ResourceManager API to execute re-establishment of > redundancy _without_ rebalancing. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (GEODE-4330) Move logic for temporary files during backups out of BackupManager
[ https://issues.apache.org/jira/browse/GEODE-4330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone resolved GEODE-4330. --- Resolution: Fixed Fix Version/s: 1.5.0 > Move logic for temporary files during backups out of BackupManager > -- > > Key: GEODE-4330 > URL: https://issues.apache.org/jira/browse/GEODE-4330 > Project: Geode > Issue Type: Sub-task > Components: persistence >Reporter: Nick Reich >Priority: Major > Fix For: 1.5.0 > > > Keep track of files to be removed after backup is complete in their own class. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (GEODE-4363) Add new distributed destroy action configuration
Fred Krone created GEODE-4363: - Summary: Add new distributed destroy action configuration Key: GEODE-4363 URL: https://issues.apache.org/jira/browse/GEODE-4363 Project: Geode Issue Type: Sub-task Reporter: Fred Krone Should add action="distributed-destroy" eviction-action currently must be either 'local-destroy' or 'overflow-to-disk' Acceptance: can set this on region configuration via cache.xml, API or gfsh gfsh help and docs have been updated -- This message was sent by Atlassian JIRA (v7.6.3#76005)
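The configuration change proposed in GEODE-4363 can be sketched in cache.xml terms. The first fragment uses the existing `lru-entry-count` eviction element with one of the two currently valid actions; the second shows the proposed `distributed-destroy` value, which is hypothetical and not yet implemented.

```xml
<!-- Today, eviction-attributes accepts only local-destroy or overflow-to-disk: -->
<region name="example">
  <region-attributes>
    <eviction-attributes>
      <lru-entry-count maximum="1000" action="local-destroy"/>
    </eviction-attributes>
  </region-attributes>
</region>

<!-- Proposed by this ticket (hypothetical, not yet implemented): the destroy
     would be distributed across the cluster rather than applied locally. -->
<!-- <lru-entry-count maximum="1000" action="distributed-destroy"/> -->
```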
[jira] [Resolved] (GEODE-3554) Document not invoking CacheFactory.getAnyInstance() from user callbacks
[ https://issues.apache.org/jira/browse/GEODE-3554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone resolved GEODE-3554. --- Resolution: Fixed Fix Version/s: 1.5.0 > Document not invoking CacheFactory.getAnyInstance() from user callbacks > --- > > Key: GEODE-3554 > URL: https://issues.apache.org/jira/browse/GEODE-3554 > Project: Geode > Issue Type: New Feature > Components: core >Reporter: Fred Krone >Assignee: Anilkumar Gingade >Priority: Major > Labels: cache, geode-150, pull-request-available > Fix For: 1.5.0 > > > As long as the product continues to invoke user plug-in callbacks during > startup (and with the main thread), we probably need to document a warning > not to use CacheFactory.getAnyInstance() as the API call is common in > plug-ins and this will freeze startup. > We are planning on deprecating and removing CacheFactory.getAnyInstance() > anyway. > Documentation should recommend using preferred APIs: > * FunctionContext.getCache() > * Event.getRegion().getCache() > We could expose Event.getCache() to make it more obvious to users. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-4250) Users would like a command to re-establish redundancy without rebalancing
[ https://issues.apache.org/jira/browse/GEODE-4250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-4250: -- Description: Acceptance criteria: -- There is a way for a user to detect that redundancy is restored -- There is a way to check current redundancy -- Can set moveBuckets and movePrimary to false and run rebalance Command would only succeed when the system is fully redundant. Re-establishing Redundancy after the loss of a peer node is typically far more urgent and important than achieving better balance. The operational impact of rebalancing is also much higher, forcing impacted buckets' updates to be distributed to _redundancy-copies + 1_ peer processes and potentially spiking p2p connections/threads (and thus load) far beyond normal operations. If the system is already close to exhausting available capacity for some hardware component, this can be enough to push it over-the-edge (and may force the original fault to recur). This problem is exacerbated when the cluster's overall capacity has been reduced due to the loss of a physical server. Without the ability to separate the operational tasks of re-establishing full data redundancy and rebalancing bucket partitions (that are already safely redundant), system administrators may be forced to provision replacement capacity _before_ they can restore full service, thus increasing downtime unnecessarily. For these reasons, we must add the option to execute these operational tasks separately. It still makes sense for _rebalancing_ ops to first re-establish redundancy, so we can keep the existing GFSH command/behavior (it would still be useful to clearly log completion of one step before the next one begins). We need a new GFSH command/ResourceManager API to execute re-establishment of redundancy _without_ rebalancing. was: Command would only succeed when the system is fully redundant. 
Re-establishing Redundancy after the loss of a peer node is typically far more urgent and important than achieving better balance. The operational impact of rebalancing is also much higher, forcing impacted buckets' updates to be distributed to _redundancy-copies + 1_ peer processes and potentially spiking p2p connections/threads (and thus load) far beyond normal operations. If the system is already close to exhausting available capacity for some hardware component, this can be enough to push it over-the-edge (and may force the original fault to recur).This problem is exacerbated when the cluster's overall capacity has been reduced due to the loss of a physical server. Without the ability to separate the operational tasks of re-establishing full data redundancy and rebalancing bucket partitions (that are already safely redundant), system administrators may be forced to provision replacement capacity _before_ they can restore full service, thus increasing downtime unnecessarily. For these reasons, we must add the option to execute these operational tasks separately. It still makes sense for _rebalancing_ ops to first re-establish redundancy, so we can keep the existing GFSH command/behavior (it would still be useful to clearly log completion of one step before the next one begins). We need a new GFSH command/ResourceManager API to execute re-establishment of redundancy _without_ rebalancing. > Users would like a command to re-establish redundancy without rebalancing > - > > Key: GEODE-4250 > URL: https://issues.apache.org/jira/browse/GEODE-4250 > Project: Geode > Issue Type: Improvement > Components: docs, regions >Reporter: Fred Krone >Priority: Major > > Acceptance criteria: > -- There is a way for a user to detect that redundancy is restored > -- There is a way to check current redundancy > -- Can set moveBuckets and movePrimary to false and run rebalance > > Command would only succeed when the system is fully redundant. 
> Re-establishing redundancy after the loss of a peer node is typically far > more urgent and important than achieving better balance. The operational > impact of rebalancing is also much higher, forcing impacted buckets' updates > to be distributed to _redundancy-copies + 1_ peer processes and potentially > spiking p2p connections/threads (and thus load) far beyond normal operations. > If the system is already close to exhausting available capacity for some > hardware component, this can be enough to push it over the edge (and may > force the original fault to recur). This problem is exacerbated when the > cluster's overall capacity has been reduced due to the loss of a physical > server. Without the ability to separate the operational tasks of > re-establishing full data redundancy and rebalancing bucket partitions (that > are already
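[Editorial note] The acceptance criterion "set moveBuckets and movePrimary to false and run rebalance" can be sketched as an options object. This is a hypothetical, self-contained illustration only — none of these class or method names are real Geode API; the point is simply that disabling both movement flags turns a rebalance into a pure redundancy-recovery pass.

```java
// Hypothetical sketch of the requested configuration surface (illustrative
// names only, not Geode API). With both movement flags false, a "rebalance"
// run would only re-establish redundancy, without moving already-redundant
// buckets or primaries.
class RebalanceOptionsSketch {
    private boolean moveBuckets = true;  // defaults model today's full rebalance
    private boolean movePrimary = true;

    RebalanceOptionsSketch moveBuckets(boolean v) { moveBuckets = v; return this; }
    RebalanceOptionsSketch movePrimary(boolean v) { movePrimary = v; return this; }

    /** True when a run with these options only re-establishes redundancy. */
    boolean restoresRedundancyOnly() { return !moveBuckets && !movePrimary; }
}

public class RedundancyOnlySketch {
    public static void main(String[] args) {
        RebalanceOptionsSketch opts = new RebalanceOptionsSketch()
            .moveBuckets(false)
            .movePrimary(false);
        System.out.println(opts.restoresRedundancyOnly()); // prints true
    }
}
```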
[jira] [Resolved] (GEODE-4025) Write functional tests with external DB
[ https://issues.apache.org/jira/browse/GEODE-4025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone resolved GEODE-4025. --- Resolution: Won't Fix > Write functional tests with external DB > --- > > Key: GEODE-4025 > URL: https://issues.apache.org/jira/browse/GEODE-4025 > Project: Geode > Issue Type: Sub-task > Components: regions >Reporter: Fred Krone >Priority: Major > > We will need an operational test infrastructure to do integration testing > with an external database. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Closed] (GEODE-4025) Write functional tests with external DB
[ https://issues.apache.org/jira/browse/GEODE-4025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone closed GEODE-4025. - > Write functional tests with external DB > --- > > Key: GEODE-4025 > URL: https://issues.apache.org/jira/browse/GEODE-4025 > Project: Geode > Issue Type: Sub-task > Components: regions >Reporter: Fred Krone >Priority: Major > > We will need an operational test infrastructure to do integration testing > with an external database. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (GEODE-3981) Cache should not provide access to internal regions from public APIs
[ https://issues.apache.org/jira/browse/GEODE-3981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone resolved GEODE-3981. --- Resolution: Won't Fix > Cache should not provide access to internal regions from public APIs > > > Key: GEODE-3981 > URL: https://issues.apache.org/jira/browse/GEODE-3981 > Project: Geode > Issue Type: Improvement > Components: regions >Reporter: Fred Krone >Priority: Major > Labels: geode-150 > > We should protect the user from accidentally modifying an internal region. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-4273) org.apache.geode.internal.cache.DiskRegionJUnitTest.testEntryDestructionInSynchPersistOnlyForIOExceptionCase fails with IllegalStateException
[ https://issues.apache.org/jira/browse/GEODE-4273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-4273: -- Labels: flaky (was: ) > org.apache.geode.internal.cache.DiskRegionJUnitTest.testEntryDestructionInSynchPersistOnlyForIOExceptionCase > fails with IllegalStateException > - > > Key: GEODE-4273 > URL: https://issues.apache.org/jira/browse/GEODE-4273 > Project: Geode > Issue Type: Bug > Components: persistence >Reporter: Lynn Gallinat >Priority: Major > Labels: flaky > > org.apache.geode.internal.cache.DiskRegionJUnitTest > > testEntryDestructionInSynchPersistOnlyForIOExceptionCase FAILED > java.lang.IllegalStateException: A connection to a distributed system > already exists in this VM. It has the following configuration: > ack-severe-alert-threshold="0" > ack-wait-threshold="15" > archive-disk-space-limit="0" > archive-file-size-limit="0" > async-distribution-timeout="0" > async-max-queue-size="8" > async-queue-timeout="6" > bind-address="" > cache-xml-file="cache.xml" > cluster-configuration-dir="" > cluster-ssl-ciphers="any" > cluster-ssl-enabled="false" > cluster-ssl-keystore="" > cluster-ssl-keystore-password="" > cluster-ssl-keystore-type="" > cluster-ssl-protocols="any" > cluster-ssl-require-authentication="true" > cluster-ssl-truststore="" > cluster-ssl-truststore-password="" > conflate-events="server" > conserve-sockets="true" > delta-propagation="true" > > deploy-working-dir="/tmp/build/ae3c03f4/built-geode/test/geode-core/build/integrationTest" > disable-auto-reconnect="false" > disable-tcp="false" > distributed-system-id="-1" > distributed-transactions="false" > durable-client-id="" > durable-client-timeout="300" > enable-cluster-configuration="true" > enable-network-partition-detection="true" > enable-time-statistics="true" > enforce-unique-host="false" > gateway-ssl-ciphers="any" > gateway-ssl-enabled="false" > gateway-ssl-keystore="" > gateway-ssl-keystore-password="" > gateway-ssl-keystore-type="" > 
gateway-ssl-protocols="any" > gateway-ssl-require-authentication="true" > gateway-ssl-truststore="" > gateway-ssl-truststore-password="" > groups="" > http-service-bind-address="" > http-service-port="7070" > http-service-ssl-ciphers="any" > http-service-ssl-enabled="false" > http-service-ssl-keystore="" > http-service-ssl-keystore-password="" > http-service-ssl-keystore-type="" > http-service-ssl-protocols="any" > http-service-ssl-require-authentication="false" > http-service-ssl-truststore="" > http-service-ssl-truststore-password="" > jmx-manager="false" > jmx-manager-access-file="" > jmx-manager-bind-address="" > jmx-manager-hostname-for-clients="" > jmx-manager-http-port="7070" > jmx-manager-password-file="" > jmx-manager-port="1099" > jmx-manager-ssl-ciphers="any" > jmx-manager-ssl-enabled="false" > jmx-manager-ssl-keystore="" > jmx-manager-ssl-keystore-password="" > jmx-manager-ssl-keystore-type="" > jmx-manager-ssl-protocols="any" > jmx-manager-ssl-require-authentication="true" > jmx-manager-ssl-truststore="" > jmx-manager-ssl-truststore-password="" > jmx-manager-start="false" > jmx-manager-update-rate="2000" > load-cluster-configuration-from-dir="false" > locator-wait-time="0" > locators="" > lock-memory="false" > log-disk-space-limit="0" > log-file="" > log-file-size-limit="0" > log-level="config" > max-num-reconnect-tries="3" > max-wait-time-reconnect="6" > mcast-address="/239.192.81.1" > mcast-flow-control="1048576, 0.25, 5000" > mcast-port="0" > mcast-recv-buffer-size="1048576" > mcast-send-buffer-size="65535" > mcast-ttl="32" > member-timeout="5000" > membership-port-range="[1024,65535]" > memcached-bind-address="" > memcached-port="0" > memcached-protocol="ASCII" > name="" > off-heap-memory-size="" > redis-bind-address="" > redis-password="" > redis-port="0" > redundancy-zone="" > remote-locators="" > remove-unresponsive-client="false" > roles="" > security-client-accessor="" > security-client-accessor-pp="" > security-client-auth-init="" > 
security-client-authenticator="" > security-client-dhalgo="" >
[jira] [Updated] (GEODE-4295) Setting domain class on region-mapping but initial instance is always PdxInstance
[ https://issues.apache.org/jira/browse/GEODE-4295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-4295: -- Component/s: (was: regions) > Setting domain class on region-mapping but initial instance is always > PdxInstance > - > > Key: GEODE-4295 > URL: https://issues.apache.org/jira/browse/GEODE-4295 > Project: Geode > Issue Type: Bug > Components: extensions >Reporter: Fred Krone >Priority: Major > > For JDBC Connector > Read serialized on the cache is set to false. Region mapping was created > with pdx-instance-type set to a specific domain class (example: Employee). > This means it should return this class on reads using the JdbcLoader. > However, the initial instance is a PdxInstance, which results in a collision. > Something like Employee employee = region.get(2); <-- blows up > The type is correct (Employee) but won't cast. > [vm1] PDX Instance: > PDX[14933766,org.apache.geode.connectors.jdbc.Employee]{first_name=Fred, > hire_date=2018-01-12 00:00:00.0, id=3, last_name=Krone} read serialized: false -- This message was sent by Atlassian JIRA (v7.6.3#76005)
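[Editorial note] The failure mode in GEODE-4295 can be modeled without Geode: the value the read-through hands back is a generic wrapper that correctly *names* the domain type but is not an instance of it, so the direct cast from the ticket throws ClassCastException. `FakePdxInstance` below is a stand-in for `org.apache.geode.pdx.PdxInstance` (whose real `getObject()` method does materialize the domain object); everything else is illustrative only.

```java
// Self-contained model of the reported collision (not Geode code).
class Employee {
    String firstName = "Fred";
}

class FakePdxInstance { // stand-in for org.apache.geode.pdx.PdxInstance
    String className = "org.apache.geode.connectors.jdbc.Employee"; // type name is right...
    Object getObject() { return new Employee(); }                   // ...and deserializes fine
}

public class CastCollision {
    public static void main(String[] args) {
        Object fromRegion = new FakePdxInstance(); // what region.get(...) handed back
        boolean castFailed = false;
        try {
            Employee employee = (Employee) fromRegion; // "blows up": ClassCastException
        } catch (ClassCastException ex) {
            castFailed = true;
        }
        // Asking the wrapper to materialize the domain object does work:
        Employee employee = (Employee) ((FakePdxInstance) fromRegion).getObject();
        System.out.println(castFailed + " " + employee.firstName); // prints "true Fred"
    }
}
```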
[jira] [Updated] (GEODE-4266) jdbc exception not handled in gfsh
[ https://issues.apache.org/jira/browse/GEODE-4266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-4266: -- Component/s: (was: regions) extensions > jdbc exception not handled in gfsh > -- > > Key: GEODE-4266 > URL: https://issues.apache.org/jira/browse/GEODE-4266 > Project: Geode > Issue Type: Bug > Components: extensions >Reporter: Fred Krone >Priority: Major > > Steps > Created a jdbc-connection, jdbc-mapping, region in gfsh > Tried a .get from the region which attempted to read through with the > JdbcLoader > However it could not connect to the endpoint and threw an exception (fine). > But the exception wasn't handled in gfsh -- gfsh appeared hung. > [info 2018/01/09 13:42:41.338 PST s1 > tid=0x4b] Exception occurred: > java.lang.IllegalStateException: Could not connect ... > at > org.apache.geode.connectors.jdbc.internal.SqlHandler.getConnection(SqlHandler.java:59) > at > org.apache.geode.connectors.jdbc.internal.SqlHandler.read(SqlHandler.java:73) > at org.apache.geode.connectors.jdbc.JdbcLoader.load(JdbcLoader.java:52) > at > org.apache.geode.internal.cache.SearchLoadAndWriteProcessor.doLocalLoad(SearchLoadAndWriteProcessor.java:791) > at > org.apache.geode.internal.cache.SearchLoadAndWriteProcessor.load(SearchLoadAndWriteProcessor.java:602) > at > org.apache.geode.internal.cache.SearchLoadAndWriteProcessor.searchAndLoad(SearchLoadAndWriteProcessor.java:461) > at > org.apache.geode.internal.cache.SearchLoadAndWriteProcessor.doSearchAndLoad(SearchLoadAndWriteProcessor.java:177) > at > org.apache.geode.internal.cache.DistributedRegion.findUsingSearchLoad(DistributedRegion.java:2338) > at > org.apache.geode.internal.cache.DistributedRegion.findObjectInSystem(DistributedRegion.java:2208) > at > org.apache.geode.internal.cache.LocalRegion.nonTxnFindObject(LocalRegion.java:1477) > at > org.apache.geode.internal.cache.LocalRegionDataView.findObject(LocalRegionDataView.java:176) > at > 
org.apache.geode.internal.cache.LocalRegion.get(LocalRegion.java:1366) > at > org.apache.geode.internal.cache.LocalRegion.get(LocalRegion.java:1300) > at > org.apache.geode.internal.cache.LocalRegion.get(LocalRegion.java:1285) > at > org.apache.geode.internal.cache.AbstractRegion.get(AbstractRegion.java:320) > at > org.apache.geode.management.internal.cli.functions.DataCommandFunction.get(DataCommandFunction.java:443) > at > org.apache.geode.management.internal.cli.functions.DataCommandFunction.get(DataCommandFunction.java:151) > at > org.apache.geode.management.internal.cli.functions.DataCommandFunction.execute(DataCommandFunction.java:116) > at > org.apache.geode.internal.cache.MemberFunctionStreamingMessage.process(MemberFunctionStreamingMessage.java:187) > at > org.apache.geode.distributed.internal.DistributionMessage.scheduleAction(DistributionMessage.java:382) > at > org.apache.geode.distributed.internal.DistributionMessage$1.run(DistributionMessage.java:448) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at > org.apache.geode.distributed.internal.ClusterDistributionManager.runUntilShutdown(ClusterDistributionManager.java:1117) > at > org.apache.geode.distributed.internal.ClusterDistributionManager.access$000(ClusterDistributionManager.java:108) > at > org.apache.geode.distributed.internal.ClusterDistributionManager$9$1.run(ClusterDistributionManager.java:987) > at java.lang.Thread.run(Thread.java:745) > Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: > Communications link failure > The last packet sent successfully to the server was 0 milliseconds ago. The > driver has not received any packets from the server. 
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:423) > at com.mysql.jdbc.Util.handleNewInstance(Util.java:425) > at > com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:990) > at com.mysql.jdbc.MysqlIO.(MysqlIO.java:341) > at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2186) > at > com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2219) > at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2014) > at
[jira] [Updated] (GEODE-4194) DiskDistributedNoAckAsyncOverflowRegionDUnitTest testNBRegionDestructionDuringGetInitialImage hangs
[ https://issues.apache.org/jira/browse/GEODE-4194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-4194: -- Component/s: (was: regions) extensions > DiskDistributedNoAckAsyncOverflowRegionDUnitTest > testNBRegionDestructionDuringGetInitialImage hangs > --- > > Key: GEODE-4194 > URL: https://issues.apache.org/jira/browse/GEODE-4194 > Project: Geode > Issue Type: Bug > Components: extensions >Reporter: Lynn Gallinat >Priority: Major > > This test caused concourse to timeout when it hung: > Started @ 2018-01-03 18:49:16.700 + > 2018-01-03 19:21:00.165 + > org.apache.geode.cache30.DiskDistributedNoAckAsyncOverflowRegionDUnitTest > testNBRegionDestructionDuringGetInitialImage > Ended @ 2018-01-03 20:55:26.951 + -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-3554) Document not invoking CacheFactory.getAnyInstance() from user callbacks
[ https://issues.apache.org/jira/browse/GEODE-3554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-3554: -- Description: As long as the product continues to invoke user plug-in callbacks during startup (and with the main thread), we probably need to document a warning not to use CacheFactory.getAnyInstance() as the API call is common in plug-ins and this will freeze startup. We are planning on deprecating and removing CacheFactory.getAnyInstance() anyway. was: As long as the product continues to invoke user plug-in callbacks during startup (and with the main thread), we probably need to document a warning not to use CacheFactory.getAnyInstance() as the API call is common in plug-ins and this will freeze startup. A better fix could be to make changes to cache initialization that would prevent this deadlock. > Document not invoking CacheFactory.getAnyInstance() from user callbacks > --- > > Key: GEODE-3554 > URL: https://issues.apache.org/jira/browse/GEODE-3554 > Project: Geode > Issue Type: New Feature > Components: core >Reporter: Fred Krone >Priority: Major > Labels: cache, geode-150 > > As long as the product continues to invoke user plug-in callbacks during > startup (and with the main thread), we probably need to document a warning > not to use CacheFactory.getAnyInstance() as the API call is common in > plug-ins and this will freeze startup. > We are planning on deprecating and removing CacheFactory.getAnyInstance() > anyway. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-3554) Document not invoking CacheFactory.getAnyInstance() from user callbacks
[ https://issues.apache.org/jira/browse/GEODE-3554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-3554: -- Description: As long as the product continues to invoke user plug-in callbacks during startup (and with the main thread), we probably need to document a warning not to use CacheFactory.getAnyInstance() as the API call is common in plug-ins and this will freeze startup. A better fix could be to make changes to cache initialization that would prevent this deadlock. was: As long as the product continues to invoke user plug-in callbacks during startup (and with the main thread), we probably need to document a warning not to use CacheFactory.getAnyInstance() as the API call is common in plug-ins and this will freeze startup. A better fix could be to make changes to cache initialization that would prevent this deadlock. > Document not invoking CacheFactory.getAnyInstance() from user callbacks > --- > > Key: GEODE-3554 > URL: https://issues.apache.org/jira/browse/GEODE-3554 > Project: Geode > Issue Type: New Feature > Components: core >Reporter: Fred Krone >Priority: Major > Labels: cache, geode-150 > > As long as the product continues to invoke user plug-in callbacks during > startup (and with the main thread), we probably need to document a warning > not to use CacheFactory.getAnyInstance() as the API call is common in > plug-ins and this will freeze startup. > > > A better fix could be to make changes to cache initialization that would > prevent this deadlock. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-3554) Document not invoking CacheFactory.getAnyInstance() from user callbacks
[ https://issues.apache.org/jira/browse/GEODE-3554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fred Krone updated GEODE-3554: -- Summary: Document not invoking CacheFactory.getAnyInstance() from user callbacks (was: Fix startup freeze at callback to CacheFactory.getAnyInstance() ) > Document not invoking CacheFactory.getAnyInstance() from user callbacks > --- > > Key: GEODE-3554 > URL: https://issues.apache.org/jira/browse/GEODE-3554 > Project: Geode > Issue Type: New Feature > Components: core >Reporter: Fred Krone >Priority: Major > Labels: cache, geode-150 > > As long as the product continues to invoke user plug-in callbacks during > startup (and with the main thread), we probably need to document a warning > not to use CacheFactory.getAnyInstance() as the API call is common in > plug-ins and this will freeze startup. > A better fix could be to make changes to cache initialization that would > prevent this deadlock. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
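[Editorial note] The GEODE-3554 freeze can be modeled without Geode: the factory blocks until startup completes, but startup runs user callbacks on the creating thread, so a callback that calls the factory waits on work that can never finish. The sketch below is purely illustrative (not Geode code); to keep it terminating, the re-entrant call is detected and reported where the real product would simply block forever.

```java
// Self-contained model of the startup freeze (not Geode code).
import java.util.concurrent.CountDownLatch;

class CacheFactoryModel {
    static final CountDownLatch started = new CountDownLatch(1);
    static volatile Thread startupThread;

    static String getAnyInstance() throws InterruptedException {
        if (Thread.currentThread() == startupThread && started.getCount() > 0) {
            // The real product would block forever on started.await() here.
            throw new IllegalStateException("re-entrant call from a startup callback would deadlock");
        }
        started.await(); // safe from any other thread, once startup completes
        return "cache";
    }

    static void startup(Runnable userCallback) {
        startupThread = Thread.currentThread();
        userCallback.run();  // plug-in callbacks run on the startup thread...
        started.countDown(); // ...and startup only completes after they return
    }
}

public class StartupFreezeModel {
    public static void main(String[] args) {
        try {
            CacheFactoryModel.startup(() -> {
                try {
                    CacheFactoryModel.getAnyInstance(); // the anti-pattern being documented
                } catch (InterruptedException ignored) {
                }
            });
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```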