Hi Eric

Thanks for clearing that up. I suspected that was what was happening.

I guess I misread the documentation and thought it would just evict from
memory.

Looking forward to 1.7

@John. Thanks for the link.

Kindly
Pieter


On Tue, May 22, 2018 at 7:30 PM, Eric Shu <[email protected]> wrote:

> If you set a TTL action of invalidate or destroy, the data will be reflected
> in the persistent layer as well. It is the same as performing an invalidate
> or destroy on an entry yourself.
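>
> In other words, the TTL action behaves like these explicit calls (a rough
> sketch via the Java API; the region name and keys are just placeholders):
>
> import org.apache.geode.cache.Region;
>
> Region<String, Object> region = cache.getRegion("ClassID-ClassName-LookUp");
> // Value is removed in memory AND in the disk store; the key itself remains.
> region.invalidate("someKey");
> // Entry (key and value) is removed in memory AND in the disk store.
> region.destroy("otherKey");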
>
> The original issue has been fixed in 1.7 (see
> https://issues.apache.org/jira/browse/GEODE-5173). Transactions will work
> with eviction overflow on restart.
>
>
> On Tue, May 22, 2018 at 8:57 AM, Pieter van Zyl <[email protected]
> > wrote:
>
>> Hi guys,
>>
>> Just a question with regard to this topic.
>>
>> I can see the main issue has been fixed in 1.7.0 according to Jira....
>>
>> https://issues.apache.org/jira/browse/GEODE-5173
>> I tried to get the snapshot but cannot get it to work, as it seems to only
>> allow clients of version 1.5.0 while the spring-data-geode version still
>> requires 1.6.0.
>> But this is off-topic and another question for later today.
>>
>> In the meantime I have tried to use *expiration* with a *persistent*
>> region and *transactions*.
>>
>> Currently we are trying to import data from our old database into Geode.
>>
>> So the region was:
>>
>> <gfe:replicated-region id="ClassID-ClassName-LookUp"
>>                        disk-store-ref="tauDiskStore"
>>                        persistent="true">
>>     <gfe:eviction type="HEAP_PERCENTAGE" action="OVERFLOW_TO_DISK"/>
>> </gfe:replicated-region>
>>
>> *Changed to:*
>>
>> <gfe:replicated-region id="ClassName-ClassID-LookUp"
>>                        disk-store-ref="tauDiskStore"
>>                        statistics="true"
>>                        persistent="true">
>>     <gfe:region-ttl timeout="60" action="INVALIDATE"/>
>> </gfe:replicated-region>
>>
>> <gfe:disk-store id="tauDiskStore">
>>     <gfe:disk-dir location="geode/tauDiskStore"/>
>> </gfe:disk-store>
>>
>>
>> After running the import, I tested that we can read the data, and it is all
>> there. But as soon as I restart the server and check again, the data is
>> gone.
>>
>> I would have thought that after the TTL fires, the data would be
>> invalidated/destroyed in the in-memory region/cache but would still be on
>> disk, as this is a persistent region?
>>
>> Am I wrong to expect that this combination should still have ALL the
>> data persisted on disk after a restart?
>>
>> https://geode.apache.org/docs/guide/11/developing/eviction/configuring_data_eviction.html
>> https://geode.apache.org/docs/guide/11/developing/storing_data_on_disk/how_persist_overflow_work.html
>>
>>
>> Kindly
>> Pieter
>>
>> On Fri, May 4, 2018 at 7:32 PM, Anilkumar Gingade <[email protected]>
>> wrote:
>>
>>> Setting eviction overflow helps keep the system from running out of memory
>>> in critical situations. That is true for both persistent and non-persistent
>>> regions. In the case of a persistent region, if overflow is not set, the
>>> data is kept both in memory and on disk.
>>>
>>> One way to handle the memory situation is through the resource manager,
>>> but if the system is under memory pressure, it may impact system
>>> performance.
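>>>
>>> For example, both knobs can be set programmatically (a rough sketch using
>>> the Java API; the percentages, disk store, and region name are
>>> placeholders):
>>>
>>> import org.apache.geode.cache.Cache;
>>> import org.apache.geode.cache.CacheFactory;
>>> import org.apache.geode.cache.RegionShortcut;
>>>
>>> Cache cache = new CacheFactory().create();
>>>
>>> // Persistent region with overflow: values evict to disk under heap
>>> // pressure, so the JVM avoids running out of memory while all data
>>> // remains safely on disk.
>>> cache.createRegionFactory(RegionShortcut.REPLICATE_PERSISTENT_OVERFLOW)
>>>     .setDiskStoreName("tauDiskStore")
>>>     .create("ClassID-ClassName-LookUp");
>>>
>>> // Resource manager: treat 90% heap as critical, start eviction at 75%.
>>> cache.getResourceManager().setCriticalHeapPercentage(90.0f);
>>> cache.getResourceManager().setEvictionHeapPercentage(75.0f);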
>>>
>>> -Anil
>>>
>>>
>>> On Fri, May 4, 2018 at 4:10 AM, Pieter van Zyl <
>>> [email protected]> wrote:
>>>
>>>> Good day.
>>>>
>>>> Thanks again for all the feedback.
>>>>
>>>> I hope the bug will get sorted out.
>>>>
>>>> For now I have removed the eviction policies and the error no longer
>>>> occurs after a restart.
>>>>
>>>> I assume that if one uses persistent regions, then eviction+overflow is
>>>> not that critical, as the data will be "backed" by the store/disk; one
>>>> just needs enough memory.
>>>> Eviction+overflow, I suspect, is quite critical when one has a fully
>>>> in-memory grid, where running out of memory could cause issues if there
>>>> is no overflow to disk?
>>>>
>>>> I am thinking that for now I could look at *expiration* on the region
>>>> instead? That would keep only *relevant* data in the in-memory regions to
>>>> prevent running out of memory, while still keeping data in memory for as
>>>> long as possible.
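>>>>
>>>> Roughly what I have in mind, via the Java API (a sketch only; the
>>>> timeout and region name are placeholders, and an entry idle timeout is
>>>> just one option — note statistics must be enabled for expiration to run):
>>>>
>>>> import org.apache.geode.cache.ExpirationAction;
>>>> import org.apache.geode.cache.ExpirationAttributes;
>>>> import org.apache.geode.cache.RegionShortcut;
>>>>
>>>> cache.createRegionFactory(RegionShortcut.REPLICATE_PERSISTENT)
>>>>     .setStatisticsEnabled(true)
>>>>     // Invalidate entries not read or written for 10 minutes.
>>>>     .setEntryIdleTimeout(
>>>>         new ExpirationAttributes(600, ExpirationAction.INVALIDATE))
>>>>     .create("ClassName-ClassID-LookUp");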
>>>>
>>>> Currently we cannot remove the transactions that we use with the
>>>> persistent regions. We might in the future.
>>>>
>>>> Kindly
>>>> Pieter
>>>>
>>>>
>>>> On Thu, May 3, 2018 at 1:16 AM, Dan Smith <[email protected]> wrote:
>>>>
>>>>> > I assume this will happen on partitioned regions as well, as the
>>>>> issue is the combination of transactions on persistent regions and
>>>>> overflow.
>>>>>
>>>>> Unfortunately yes, this bug also affects partitioned regions.
>>>>>
>>>>> > Also I see this bug is marked as *major* so is there any chance
>>>>> this will be fixed in the next couple of months?
>>>>>
>>>>> I'm not sure. Geode is an open source project, so we don't really
>>>>> promise fixes in any specific timeframe.
>>>>>
>>>>> > If I do change the region to not use overflow what will happen when
>>>>> it reaches the "heap percentage"?
>>>>>
>>>>> The data will stay in memory. Overflow lets you avoid running out of
>>>>> memory by overflowing data to disk. Without it, you could end up running
>>>>> out of memory if your region gets too large.
>>>>>
>>>>> -Dan
>>>>>
>>>>> On Wed, May 2, 2018 at 2:07 PM, Pieter van Zyl <
>>>>> [email protected]> wrote:
>>>>>
>>>>>> Hi Dan,
>>>>>>
>>>>>> Thanks for tracking this down!
>>>>>>
>>>>>> Much appreciated.
>>>>>>
>>>>>> This might also be why I didn't see it at first as we didn't activate
>>>>>> the transactions on the persistent regions when we started with this
>>>>>> evaluation.
>>>>>>
>>>>>> Based on this discussion
>>>>>>
>>>>>> https://markmail.org/message/jsabcdvyzsdrkvba?q=list:org%2Eapache%2Egeode%2Euser+order:date-backward+pieter#query:list%3Aorg.apache.geode.user%20order%3Adate-backward%20pieter+page:1+mid:n25nznu7zur4xmar+state:results
>>>>>>
>>>>>> We are currently using -Dgemfire.ALLOW_PERSISTENT_TRANSACTIONS=true
>>>>>>
>>>>>> Once we have the basics up and running we will still look at the
>>>>>> TransactionWriter as recommended.
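>>>>>>
>>>>>> My understanding is that it is a before-commit veto hook, roughly like
>>>>>> this (a skeleton only; the class name and the check are made up):
>>>>>>
>>>>>> import org.apache.geode.cache.TransactionEvent;
>>>>>> import org.apache.geode.cache.TransactionWriter;
>>>>>> import org.apache.geode.cache.TransactionWriterException;
>>>>>>
>>>>>> public class ImportTransactionWriter implements TransactionWriter {
>>>>>>   @Override
>>>>>>   public void beforeCommit(TransactionEvent event)
>>>>>>       throws TransactionWriterException {
>>>>>>     // Throwing here vetoes the commit.
>>>>>>     if (event.getOperations().isEmpty()) {
>>>>>>       throw new TransactionWriterException("refusing an empty commit");
>>>>>>     }
>>>>>>   }
>>>>>>
>>>>>>   @Override
>>>>>>   public void close() {} // nothing to release
>>>>>> }
>>>>>>
>>>>>> // Registered via:
>>>>>> // cache.getCacheTransactionManager().setWriter(new ImportTransactionWriter());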
>>>>>>
>>>>>> We are currently trying to import our old data from Berkeley into
>>>>>> Geode and for now I have one node locally with a replicated region.
>>>>>> But we are planning to move to more nodes and partition/sharded
>>>>>> regions.
>>>>>>
>>>>>> I assume this will happen on partitioned regions as well, as the issue
>>>>>> is the combination of transactions on persistent regions and overflow.
>>>>>>
>>>>>> Also, I see this bug is marked as *major*, so is there any chance this
>>>>>> will be fixed in the next couple of months?
>>>>>> Or is our use of transactions across persistent regions just too out of
>>>>>> the norm?
>>>>>>
>>>>>> If I do change the region to not use overflow what will happen when
>>>>>> it reaches the "heap percentage"?
>>>>>>
>>>>>> Kindly
>>>>>> Pieter
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Wed, May 2, 2018 at 10:14 PM, Dan Smith <[email protected]> wrote:
>>>>>>
>>>>>>> I created GEODE-5173 for this issue.
>>>>>>>
>>>>>>> Thanks,
>>>>>>> -Dan
>>>>>>>
>>>>>>> On Wed, May 2, 2018 at 12:17 PM, Dan Smith <[email protected]>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Hi Pieter,
>>>>>>>>
>>>>>>>> I was able to reproduce this problem. It looks like it is an issue
>>>>>>>> with doing a get inside of a transaction along with a replicated
>>>>>>>> region using persistence and overflow. The value is still on disk,
>>>>>>>> and for whatever reason, if you do the get inside of a transaction it
>>>>>>>> returns this bogus NOT_AVAILABLE token instead of reading the value
>>>>>>>> off disk.
>>>>>>>>
>>>>>>>> I'll create a JIRA and attach my test. In the meantime, you could do
>>>>>>>> the get outside of a transaction, or you could change your region to
>>>>>>>> not use overflow. If you try changing the region to not use overflow,
>>>>>>>> I think you'll also have to set the system property
>>>>>>>> gemfire.disk.recoverValuesSync to true to make sure that in all cases
>>>>>>>> you never have to read from disk.
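>>>>>>>>
>>>>>>>> In code, the first workaround would look roughly like this (a sketch
>>>>>>>> only; the key and value handling are placeholders):
>>>>>>>>
>>>>>>>> import org.apache.geode.cache.CacheTransactionManager;
>>>>>>>>
>>>>>>>> // Fault the value in from disk with a plain get, OUTSIDE any transaction.
>>>>>>>> Object value = region.get(key);
>>>>>>>>
>>>>>>>> CacheTransactionManager txm = cache.getCacheTransactionManager();
>>>>>>>> txm.begin();
>>>>>>>> try {
>>>>>>>>   // Transactional writes are fine; only the transactional get trips the bug.
>>>>>>>>   region.put(key, value);
>>>>>>>>   txm.commit();
>>>>>>>> } finally {
>>>>>>>>   if (txm.exists()) { // still active => the commit never happened
>>>>>>>>     txm.rollback();
>>>>>>>>   }
>>>>>>>> }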
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> -Dan
>>>>>>>>
>>>>>>>> On Mon, Apr 30, 2018 at 3:47 AM, Pieter van Zyl <
>>>>>>>> [email protected]> wrote:
>>>>>>>>
>>>>>>>>> Good day.
>>>>>>>>>
>>>>>>>>> I am constantly seeing the error below when we stop and start the
>>>>>>>>> Geode server after a data import.
>>>>>>>>>
>>>>>>>>> When the client connects the second time after the restart we get 
>>>>>>>>> NotSerializableException:
>>>>>>>>> org.apache.geode.internal.cache.Token$NotAvailable
>>>>>>>>>
>>>>>>>>> Any ideas why we are getting this error or why it would state
>>>>>>>>> "NotAvailable"?
>>>>>>>>>
>>>>>>>>> *Versions:*
>>>>>>>>>
>>>>>>>>> compile 'org.springframework.data:spring-data-geode:2.1.0.M2'
>>>>>>>>> compile group: 'org.apache.geode', name: 'geode-core', version: '1.5.0'
>>>>>>>>>
>>>>>>>>> Trying to access this region on startup:
>>>>>>>>>
>>>>>>>>> <gfe:replicated-region id="ClassID-ClassName-LookUp"
>>>>>>>>>                        disk-store-ref="tauDiskStore"
>>>>>>>>>                        persistent="true">
>>>>>>>>>     <gfe:eviction type="HEAP_PERCENTAGE" action="OVERFLOW_TO_DISK"/>
>>>>>>>>> </gfe:replicated-region>
>>>>>>>>>
>>>>>>>>> *Server config:*
>>>>>>>>>
>>>>>>>>> <util:properties id="gemfire-props">
>>>>>>>>>     <prop key="log-level">info</prop>
>>>>>>>>>     <prop key="locators">pvz-dell[10334]</prop>
>>>>>>>>>     <prop key="start-locator">pvz-dell[10334]</prop>
>>>>>>>>>     <prop key="mcast-port">0</prop>
>>>>>>>>>     <prop key="http-service-port">0</prop>
>>>>>>>>>     <prop key="jmx-manager">true</prop>
>>>>>>>>>     <prop key="jmx-manager-port">1099</prop>
>>>>>>>>>     <prop key="jmx-manager-start">true</prop>
>>>>>>>>> </util:properties>
>>>>>>>>>
>>>>>>>>> <gfe:cache properties-ref="gemfire-props"
>>>>>>>>>            pdx-serializer-ref="pdxSerializer"
>>>>>>>>>            pdx-persistent="true"
>>>>>>>>>            pdx-disk-store="pdx-disk-store"/>
>>>>>>>>>
>>>>>>>>> <gfe:cache-server port="40404" max-connections="300"
>>>>>>>>>                   socket-buffer-size="65536" max-threads="200"/>
>>>>>>>>>
>>>>>>>>> <gfe:transaction-manager id="txManager"/>
>>>>>>>>>
>>>>>>>>> <bean id="pdxSerializer" class="org.rdb.geode.mapping.RDBGeodeSerializer">
>>>>>>>>>     <constructor-arg value="org.rdb.*,net.lautus.*"/>
>>>>>>>>> </bean>
>>>>>>>>>
>>>>>>>>> The server seems to be up and running:
>>>>>>>>>
>>>>>>>>> Cache server connection listener bound to address
>>>>>>>>> pvz-dell-/0:0:0:0:0:0:0:0:40404 with backlog 1,000.
>>>>>>>>>
>>>>>>>>> [info 2018/04/30 12:32:30.483 SAST <main> tid=0x1]
>>>>>>>>> ClientHealthMonitorThread maximum allowed time between pings: 60,000
>>>>>>>>>
>>>>>>>>> [warn 2018/04/30 12:32:30.485 SAST <main> tid=0x1] Handshaker max
>>>>>>>>> Pool size: 4
>>>>>>>>>
>>>>>>>>> [info 2018/04/30 12:32:30.486 SAST <Cache Server Selector
>>>>>>>>> /0:0:0:0:0:0:0:0:40404 local port: 40404> tid=0x4f] SELECTOR enabled
>>>>>>>>>
>>>>>>>>> [info 2018/04/30 12:32:30.491 SAST <main> tid=0x1] CacheServer
>>>>>>>>> Configuration:   port=40404 max-connections=300 max-threads=200
>>>>>>>>> notify-by-subscription=true socket-buffer-size=65536
>>>>>>>>> maximum-time-between-pings=60000 maximum-message-count=230000
>>>>>>>>> message-time-to-live=180 eviction-policy=none capacity=1 overflow
>>>>>>>>> directory=. groups=[] loadProbe=ConnectionCountProbe loadPollInterval=5000
>>>>>>>>> tcpNoDelay=true
>>>>>>>>>
>>>>>>>>> server running on port 40404
>>>>>>>>> Press <Enter> to terminate the server
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Exception in thread "main" org.apache.geode.cache.client.ServerOperationException:
>>>>>>>>> remote server on pvz-dell(23128:loner):38042:2edf1c16:
>>>>>>>>> org.apache.geode.SerializationException: failed serializing object
>>>>>>>>>   at org.apache.geode.cache.client.internal.OpExecutorImpl.handleException(OpExecutorImpl.java:669)
>>>>>>>>>   at org.apache.geode.cache.client.internal.OpExecutorImpl.handleException(OpExecutorImpl.java:742)
>>>>>>>>>   at org.apache.geode.cache.client.internal.OpExecutorImpl.handleException(OpExecutorImpl.java:611)
>>>>>>>>>   at org.apache.geode.cache.client.internal.OpExecutorImpl.executeOnServer(OpExecutorImpl.java:373)
>>>>>>>>>   at org.apache.geode.cache.client.internal.OpExecutorImpl.executeWithServerAffinity(OpExecutorImpl.java:220)
>>>>>>>>>   at org.apache.geode.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:129)
>>>>>>>>>   at org.apache.geode.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:116)
>>>>>>>>>   at org.apache.geode.cache.client.internal.PoolImpl.execute(PoolImpl.java:774)
>>>>>>>>>   at org.apache.geode.cache.client.internal.GetOp.execute(GetOp.java:91)
>>>>>>>>>   at org.apache.geode.cache.client.internal.ServerRegionProxy.get(ServerRegionProxy.java:113)
>>>>>>>>>   at org.apache.geode.internal.cache.tx.ClientTXRegionStub.findObject(ClientTXRegionStub.java:72)
>>>>>>>>>   at org.apache.geode.internal.cache.TXStateStub.findObject(TXStateStub.java:453)
>>>>>>>>>   at org.apache.geode.internal.cache.TXStateProxyImpl.findObject(TXStateProxyImpl.java:496)
>>>>>>>>>   at org.apache.geode.internal.cache.LocalRegion.get(LocalRegion.java:1366)
>>>>>>>>>   at org.apache.geode.internal.cache.LocalRegion.get(LocalRegion.java:1300)
>>>>>>>>>   at org.apache.geode.internal.cache.LocalRegion.get(LocalRegion.java:1285)
>>>>>>>>>   at org.apache.geode.internal.cache.AbstractRegion.get(AbstractRegion.java:320)
>>>>>>>>>   ......
>>>>>>>>> Caused by: org.apache.geode.SerializationException: failed serializing object
>>>>>>>>>   at org.apache.geode.internal.cache.tier.sockets.Message.serializeAndAddPart(Message.java:399)
>>>>>>>>>   at org.apache.geode.internal.cache.tier.sockets.Message.addPartInAnyForm(Message.java:360)
>>>>>>>>>   at org.apache.geode.internal.cache.tier.sockets.command.Get70.writeResponse(Get70.java:424)
>>>>>>>>>   at org.apache.geode.internal.cache.tier.sockets.command.Get70.cmdExecute(Get70.java:211)
>>>>>>>>>   at org.apache.geode.internal.cache.tier.sockets.BaseCommand.execute(BaseCommand.java:157)
>>>>>>>>>   at org.apache.geode.internal.cache.tier.sockets.ServerConnection.doNormalMsg(ServerConnection.java:797)
>>>>>>>>>   at org.apache.geode.internal.cache.tier.sockets.LegacyServerConnection.doOneMessage(LegacyServerConnection.java:85)
>>>>>>>>>   at org.apache.geode.internal.cache.tier.sockets.ServerConnection.run(ServerConnection.java:1148)
>>>>>>>>>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>>>>>>>>>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>>>>>>>>>   at org.apache.geode.internal.cache.tier.sockets.AcceptorImpl$4$1.run(AcceptorImpl.java:641)
>>>>>>>>>   at java.lang.Thread.run(Thread.java:748)
>>>>>>>>> Caused by: java.io.NotSerializableException: org.apache.geode.internal.cache.Token$NotAvailable
>>>>>>>>>   at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1184)
>>>>>>>>>   at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
>>>>>>>>>   at org.apache.geode.internal.InternalDataSerializer.writeSerializableObject(InternalDataSerializer.java:2341)
>>>>>>>>>   at org.apache.geode.internal.InternalDataSerializer.basicWriteObject(InternalDataSerializer.java:2216)
>>>>>>>>>   at org.apache.geode.DataSerializer.writeObject(DataSerializer.java:2936)
>>>>>>>>>   at org.apache.geode.internal.util.BlobHelper.serializeTo(BlobHelper.java:66)
>>>>>>>>>   at org.apache.geode.internal.cache.tier.sockets.Message.serializeAndAddPart(Message.java:397)
>>>>>>>>>
>>>>>>>>> Kindly
>>>>>>>>> Pieter
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
