[ https://issues.apache.org/jira/browse/GEODE-3845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Nick Vallely updated GEODE-3845:
--------------------------------
Description:

I use entry-idle-time expiration and eviction together in a partition region that holds one redundant copy. The settings are as follows:

{code:xml}
<region name="Data" refid="PARTITION">
  <region-attributes>
    <entry-idle-time>
      <expiration-attributes timeout="60" action="destroy" />
    </entry-idle-time>
    <partition-attributes redundant-copies="1" />
    <eviction-attributes>
      <lru-entry-count maximum="10" action="local-destroy" />
    </eviction-attributes>
  </region-attributes>
</region>
{code}

With this configuration, the data held by each cache server diverges, so inconsistent results are returned depending on which server the client connects to. Eviction on a partition region only allows local-destroy or overflow-to-disk. The expiration chapter of the documentation, on the other hand, states that local-destroy and local-invalidate cannot be used on a partition region. Likewise, I think data inconsistency occurs with a configuration such as this one.

Below is the test code:
[https://github.com/masaki-yamakawa/geode/blob/bug-partition-local-destroy/geode-core/src/test/java/org/apache/geode/internal/cache/partitioned/BugExpireAndEvictionDUnitTest.java]

I think a check needs to be added at region creation time, or the behavior needs to be documented.

Problem: currently the eviction action must be either 'local-destroy' or 'overflow-to-disk', ex:
<expiration-attributes timeout="60" action="local-destroy" />

Solution: This story is to add the additional option "distributed-destroy" as an action setting for expiration-attributes. This will enable a local destroy action to be distributed across the cluster (currently this does not exist).

Acceptance:
* can set this on region configuration via cache.xml, API, or gfsh
* gfsh help and docs have been updated
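For reference, a minimal sketch of the same region configuration built through the Java API instead of cache.xml (the class name DataRegionConfig and the String/Object value types are placeholders; the timeout, redundancy, and eviction values mirror the cache.xml above). This is illustrative only and is not part of the linked DUnit test:

{code:java}
import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.EvictionAction;
import org.apache.geode.cache.EvictionAttributes;
import org.apache.geode.cache.ExpirationAction;
import org.apache.geode.cache.ExpirationAttributes;
import org.apache.geode.cache.PartitionAttributesFactory;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.RegionFactory;
import org.apache.geode.cache.RegionShortcut;

public class DataRegionConfig {
  public static void main(String[] args) {
    Cache cache = new CacheFactory().create();

    RegionFactory<String, Object> factory =
        cache.createRegionFactory(RegionShortcut.PARTITION);

    // Entry expiration requires statistics to be enabled on the region.
    factory.setStatisticsEnabled(true);

    // entry-idle-time: timeout=60, action=destroy (a distributed action)
    factory.setEntryIdleTimeout(
        new ExpirationAttributes(60, ExpirationAction.DESTROY));

    // partition-attributes: redundant-copies=1
    factory.setPartitionAttributes(
        new PartitionAttributesFactory<String, Object>()
            .setRedundantCopies(1)
            .create());

    // eviction-attributes: lru-entry-count maximum=10, action=local-destroy
    factory.setEvictionAttributes(
        EvictionAttributes.createLRUEntryAttributes(10, EvictionAction.LOCAL_DESTROY));

    // This is the combination described above: distributed expiration plus
    // local-only eviction on a redundant partition region.
    Region<String, Object> data = factory.create("Data");
  }
}
{code}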
> As a user, I want to be able to do a distributed destroy action from a local region
> -------------------------------------------------------------------------------------
>
>           Key: GEODE-3845
>           URL: https://issues.apache.org/jira/browse/GEODE-3845
>       Project: Geode
>    Issue Type: New Feature
>    Components: docs, eviction
>      Reporter: Masaki Yamakawa
>      Priority: Major
>

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)