[jira] [Commented] (GEODE-8999) When max-threads is specified for a cache server its reader threads may be reported as Stuck

2021-03-03 Thread Darrel Schneider (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17294880#comment-17294880
 ] 

Darrel Schneider commented on GEODE-8999:
-

On second thought, it seems like this should work. When max-threads is set, 
ServerConnection.run should not be called until we have detected that the 
client socket has something to read. It then handles only a single message 
and returns itself to the Selector, waiting to be run again. Is it possible, 
in the above stack, that the client started to write a message and for some 
reason did not finish writing it? If that happened, the Selector would have 
detected a read event on the socket and asked the thread pool to execute the 
ServerConnection, which would then be stuck trying to read the complete message.
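The partial-message scenario is easy to reproduce in miniature with plain java.nio (a standalone sketch, not Geode code): the Selector reports the channel readable as soon as the first byte arrives, yet a reader expecting a fixed-size header finds only that byte, so a blocking read-the-rest loop at that point would hang exactly like the stack trace in this ticket.

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class PartialMessageDemo {
    // Simulates a client that writes only 1 byte of a 4-byte message
    // header and then stalls. Returns how many header bytes were
    // actually readable after the selector reported readiness.
    static int readableHeaderBytes() throws Exception {
        Pipe pipe = Pipe.open();
        pipe.source().configureBlocking(false);
        try (Selector selector = Selector.open()) {
            pipe.source().register(selector, SelectionKey.OP_READ);

            pipe.sink().write(ByteBuffer.wrap(new byte[] {0x2A})); // partial write

            selector.select(); // fires: the channel *is* readable...

            ByteBuffer header = ByteBuffer.allocate(4);
            pipe.source().read(header); // ...but only 1 of 4 bytes has arrived
            return header.position();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("header bytes available: " + readableHeaderBytes());
    }
}
```

A reader that insists on the full header here would have to block for the remaining bytes, which is exactly the state the thread monitor then reports as stuck.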


> When max-threads is specified for a cache server its reader threads may be 
> reported as Stuck
> 
>
> Key: GEODE-8999
> URL: https://issues.apache.org/jira/browse/GEODE-8999
> Project: Geode
>  Issue Type: Bug
>  Components: client/server, membership
>Affects Versions: 1.14.0
>Reporter: Bruce J Schuchardt
>Priority: Major
>
> We noticed this report of a stuck thread in a test that enabled max-threads 
> in a cache server:
> {noformat}
> [warn 2021/03/02 19:54:31.041 PST bridgep2_host2_17822  
> tid=0x1b] Thread <104> (0x68) that was executed at <02 Mar 2021 19:53:44 PST> 
> has been stuck for <46.356 seconds> and number of thread monitor iteration <1>
> Thread Name  state 
> Executor Group 
> Monitored metric 
> Thread stack:
> sun.nio.ch.FileDispatcherImpl.read0(Native Method)
> sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
> sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
> sun.nio.ch.IOUtil.read(IOUtil.java:192)
> sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:378)
> org.apache.geode.internal.cache.tier.sockets.Message.readWrappedHeaders(Message.java:1237)
> org.apache.geode.internal.cache.tier.sockets.Message.fetchHeader(Message.java:859)
> org.apache.geode.internal.cache.tier.sockets.Message.readHeaderAndBody(Message.java:698)
> org.apache.geode.internal.cache.tier.sockets.Message.receive(Message.java:1213)
> org.apache.geode.internal.cache.tier.sockets.Message.receive(Message.java:1229)
> org.apache.geode.internal.cache.tier.sockets.BaseCommand.readRequest(BaseCommand.java:816)
> org.apache.geode.internal.cache.tier.sockets.ServerConnection.doNormalMessage(ServerConnection.java:777)
> org.apache.geode.internal.cache.tier.sockets.OriginalServerConnection.doOneMessage(OriginalServerConnection.java:73)
> org.apache.geode.internal.cache.tier.sockets.ServerConnection.run(ServerConnection.java:1185)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> org.apache.geode.internal.cache.tier.sockets.AcceptorImpl.lambda$initializeServerConnectionThreadPool$3(AcceptorImpl.java:710)
> org.apache.geode.internal.cache.tier.sockets.AcceptorImpl$$Lambda$351/1357226696.invoke(Unknown Source)
> org.apache.geode.logging.internal.executors.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:120)
> org.apache.geode.logging.internal.executors.LoggingThreadFactory$$Lambda$88/1800187767.run(Unknown Source)
> java.lang.Thread.run(Thread.java:748)
> {noformat}
> The cache server should suspend thread monitoring before reading from a 
> socket and resume monitoring afterward.  An example of this can be found in 
> org.apache.geode.internal.tcp.Connection.java.
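The suspend-before-read, resume-after pattern suggested above can be sketched generically. The ThreadMonitoringHooks interface and its method names below are invented for this sketch and are not the actual Geode API; the description points at org.apache.geode.internal.tcp.Connection for the real example.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical monitoring hooks; the names are invented for this sketch.
interface ThreadMonitoringHooks {
    void suspendMonitoring();
    void resumeMonitoring();
}

public class MonitoredReader {
    private final ThreadMonitoringHooks monitoring;

    MonitoredReader(ThreadMonitoringHooks monitoring) {
        this.monitoring = monitoring;
    }

    // Tell the monitor to ignore this thread while it is parked in a
    // blocking socket read; resume monitoring once bytes arrive so the
    // message-processing time is still watched.
    int readOneByte(InputStream socketInput) throws IOException {
        monitoring.suspendMonitoring();
        try {
            return socketInput.read(); // may block indefinitely on an idle client
        } finally {
            monitoring.resumeMonitoring();
        }
    }

    public static void main(String[] args) throws IOException {
        int[] calls = new int[2]; // {suspends, resumes}
        MonitoredReader reader = new MonitoredReader(new ThreadMonitoringHooks() {
            public void suspendMonitoring() { calls[0]++; }
            public void resumeMonitoring() { calls[1]++; }
        });
        int b = reader.readOneByte(new ByteArrayInputStream(new byte[] {42}));
        System.out.println("read " + b + " (suspended " + calls[0]
            + "x, resumed " + calls[1] + "x)");
    }
}
```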



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-8999) When max-threads is specified for a cache server its reader threads may be reported as Stuck

2021-03-03 Thread Darrel Schneider (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17294872#comment-17294872
 ] 

Darrel Schneider commented on GEODE-8999:
-

I think this bug should be fixed by no longer monitoring these threads.
We have another ticket to improve the monitoring by including all server 
connection threads (see GEODE-8761).
For this ticket (which may have been around since thread monitoring was added) 
I think we should just change initializeServerConnectionThreadPool to pass null 
instead of getThreadMonitorObj() when it creates the selector thread pool.
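The proposed change can be sketched in standalone form. Every name below is a simplified stand-in for the real AcceptorImpl code; the only point being illustrated is that passing null for the monitor leaves the selector pool's threads unwatched.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Standalone sketch of the proposed fix; not the real Geode code.
public class SelectorPoolSketch {
    interface ThreadsMonitoring { /* register/unregister hooks elided */ }

    static ExecutorService createThreadPool(int maxThreads, ThreadsMonitoring monitoring) {
        // The real code would wrap submitted tasks so the monitor can watch
        // them; with a null monitor the threads simply run unwatched.
        return Executors.newFixedThreadPool(maxThreads);
    }

    static ExecutorService initializeServerConnectionThreadPool(int maxThreads) {
        // before: createThreadPool(maxThreads, getThreadMonitorObj());
        return createThreadPool(maxThreads, null); // proposed: no stuck-thread monitoring
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = initializeServerConnectionThreadPool(2);
        pool.submit(() -> System.out.println("selector task ran unmonitored")).get();
        pool.shutdown();
    }
}
```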






[jira] [Assigned] (GEODE-8997) remove protobuf client server code

2021-03-03 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider reassigned GEODE-8997:
---

Assignee: Bill Burcham

> remove protobuf client server code
> --
>
> Key: GEODE-8997
> URL: https://issues.apache.org/jira/browse/GEODE-8997
> Project: Geode
>  Issue Type: Improvement
>  Components: client/server
>Reporter: Darrel Schneider
>Assignee: Bill Burcham
>Priority: Major
>
> The protobuf-based client/server project is essentially dead, but code for 
> it is still part of Geode.
> This complicates the implementation. For example I was working on an 
> improvement to have the thread monitor detect stuck server connection threads 
> and found myself trying to figure out how to make this work for 
> ProtobufServerConnection.
> I think it would be best to remove the dead protobuf code. I'm not sure what 
> all of it is but here is what I have found so far:
> ProtobufServerConnection
> package org.apache.geode.internal.cache.client.protocol
> package org.apache.geode.internal.protocol.protobuf.v1





[jira] [Updated] (GEODE-8998) setting thread-monitoring-enabled to false causes NullPointerException

2021-03-03 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider updated GEODE-8998:

Labels: GeodeOperationAPI  (was: )

> setting thread-monitoring-enabled to false causes NullPointerException
> --
>
> Key: GEODE-8998
> URL: https://issues.apache.org/jira/browse/GEODE-8998
> Project: Geode
>  Issue Type: Bug
>  Components: core
>Reporter: Darrel Schneider
>Assignee: Darrel Schneider
>Priority: Major
>  Labels: GeodeOperationAPI
>
> If you set the geode property thread-monitoring-enabled to false then any 
> geode cluster messaging is broken. As cluster messages are read the p2p 
> reader thread throws a NullPointerException.
> This bug was introduced in GEODE-8521 so it has not yet been released.
> I have a test that reproduces the NPE and this fix will be simple.





[jira] [Created] (GEODE-8998) setting thread-monitoring-enabled to false causes NullPointerException

2021-03-03 Thread Darrel Schneider (Jira)
Darrel Schneider created GEODE-8998:
---

 Summary: setting thread-monitoring-enabled to false causes 
NullPointerException
 Key: GEODE-8998
 URL: https://issues.apache.org/jira/browse/GEODE-8998
 Project: Geode
  Issue Type: Bug
  Components: core
Reporter: Darrel Schneider


If you set the Geode property thread-monitoring-enabled to false then any Geode 
cluster messaging is broken. As cluster messages are read, the p2p reader thread 
throws a NullPointerException.

This bug was introduced in GEODE-8521, so it has not yet been released.
I have a test that reproduces the NPE, and the fix will be simple.
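The actual fix is not shown in this thread, but one common way to make a disabled monitor safe is the null-object pattern: hand callers a no-op implementation instead of null. The sketch below uses invented names for illustration only.

```java
// Sketch of one defensive fix (names invented for illustration): when
// thread monitoring is disabled, hand out a no-op implementation rather
// than null, so callers such as the p2p reader never hit an NPE.
public class NoOpMonitorSketch {
    interface ThreadsMonitoring {
        void startMonitor();
        void endMonitor();
    }

    // Null-object implementation used when monitoring is disabled.
    static final ThreadsMonitoring NO_OP = new ThreadsMonitoring() {
        public void startMonitor() { /* intentionally empty */ }
        public void endMonitor() { /* intentionally empty */ }
    };

    static class RealMonitoring implements ThreadsMonitoring {
        boolean active;
        public void startMonitor() { active = true; }
        public void endMonitor() { active = false; }
    }

    static ThreadsMonitoring create(boolean threadMonitoringEnabled) {
        return threadMonitoringEnabled ? new RealMonitoring() : NO_OP;
    }

    public static void main(String[] args) {
        ThreadsMonitoring m = create(false); // thread-monitoring-enabled=false
        m.startMonitor(); // safe: no NullPointerException
        m.endMonitor();
        System.out.println("disabled monitoring handled without NPE");
    }
}
```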





[jira] [Assigned] (GEODE-8998) setting thread-monitoring-enabled to false causes NullPointerException

2021-03-03 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider reassigned GEODE-8998:
---

Assignee: Darrel Schneider

> setting thread-monitoring-enabled to false causes NullPointerException
> --
>
> Key: GEODE-8998
> URL: https://issues.apache.org/jira/browse/GEODE-8998
> Project: Geode
>  Issue Type: Bug
>  Components: core
>Reporter: Darrel Schneider
>Assignee: Darrel Schneider
>Priority: Major
>





[jira] [Created] (GEODE-8997) remove protobuf client server code

2021-03-03 Thread Darrel Schneider (Jira)
Darrel Schneider created GEODE-8997:
---

 Summary: remove protobuf client server code
 Key: GEODE-8997
 URL: https://issues.apache.org/jira/browse/GEODE-8997
 Project: Geode
  Issue Type: Improvement
  Components: client/server
Reporter: Darrel Schneider


The protobuf-based client/server project is essentially dead, but code for it 
is still part of Geode.
This complicates the implementation. For example I was working on an 
improvement to have the thread monitor detect stuck server connection threads 
and found myself trying to figure out how to make this work for 
ProtobufServerConnection.
I think it would be best to remove the dead protobuf code. I'm not sure what 
all of it is but here is what I have found so far:
ProtobufServerConnection
package org.apache.geode.internal.cache.client.protocol
package org.apache.geode.internal.protocol.protobuf.v1





[jira] [Updated] (GEODE-8977) Thread monitoring service should also show locked monitors and synchronizers

2021-03-01 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider updated GEODE-8977:

Labels: GeodeOperationAPI  (was: )

> Thread monitoring service should also show locked monitors and synchronizers
> 
>
> Key: GEODE-8977
> URL: https://issues.apache.org/jira/browse/GEODE-8977
> Project: Geode
>  Issue Type: Improvement
>  Components: core
>Reporter: Darrel Schneider
>Assignee: Darrel Schneider
>Priority: Major
>  Labels: GeodeOperationAPI
>
> The thread monitoring service shows the call stack of a hung thread but it 
> does not show the synchronizations obtained by the frames in the call stack 
> like a normal stack dump does.
> It looks like this is available from the ThreadInfo class that the service is 
> already using by calling getLockedMonitors and getLockedSynchronizers. The 
> getLockedMonitors returns a MonitorInfo which has information in it about 
> which frame of the stack obtained it. MonitorInfo subclasses LockInfo which 
> is what getLockedSynchronizers returns so it is possible that 
> getLockedSynchronizers does not provide any additional information to be 
> logged.





[jira] [Assigned] (GEODE-8977) Thread monitoring service should also show locked monitors and synchronizers

2021-02-26 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider reassigned GEODE-8977:
---

Assignee: Darrel Schneider

> Thread monitoring service should also show locked monitors and synchronizers
> 
>
> Key: GEODE-8977
> URL: https://issues.apache.org/jira/browse/GEODE-8977
> Project: Geode
>  Issue Type: Improvement
>  Components: core
>Reporter: Darrel Schneider
>Assignee: Darrel Schneider
>Priority: Major
>





[jira] [Created] (GEODE-8977) Thread monitoring service should also show locked monitors and synchronizers

2021-02-26 Thread Darrel Schneider (Jira)
Darrel Schneider created GEODE-8977:
---

 Summary: Thread monitoring service should also show locked 
monitors and synchronizers
 Key: GEODE-8977
 URL: https://issues.apache.org/jira/browse/GEODE-8977
 Project: Geode
  Issue Type: Improvement
  Components: core
Reporter: Darrel Schneider


The thread monitoring service shows the call stack of a hung thread but it does 
not show the synchronizations obtained by the frames in the call stack like a 
normal stack dump does.
It looks like this is available from the ThreadInfo class that the service is 
already using by calling getLockedMonitors and getLockedSynchronizers. The 
getLockedMonitors returns a MonitorInfo which has information in it about which 
frame of the stack obtained it. MonitorInfo subclasses LockInfo which is what 
getLockedSynchronizers returns so it is possible that getLockedSynchronizers 
does not provide any additional information to be logged.
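The ThreadInfo APIs mentioned above are standard java.lang.management. A minimal demonstration of pulling locked monitors for the current thread, assuming the JVM supports object-monitor usage (HotSpot does):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MonitorInfo;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class LockedMonitorsDemo {
    // Returns the object monitors held by the current thread; the second
    // and third arguments ask for locked monitors and locked ownable
    // synchronizers respectively.
    static MonitorInfo[] monitorsHeldByCurrentThread() {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        long id = Thread.currentThread().getId();
        ThreadInfo info = mx.getThreadInfo(new long[] {id}, true, true)[0];
        return info.getLockedMonitors();
    }

    public static void main(String[] args) {
        Object lock = new Object();
        synchronized (lock) {
            for (MonitorInfo m : monitorsHeldByCurrentThread()) {
                // getLockedStackDepth/getLockedStackFrame identify the
                // stack frame that acquired each monitor.
                System.out.println("holds " + m.getClassName()
                    + " acquired at frame depth " + m.getLockedStackDepth());
            }
        }
    }
}
```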





[jira] [Created] (GEODE-8976) Change thread monitoring service to detect hung redis threads

2021-02-26 Thread Darrel Schneider (Jira)
Darrel Schneider created GEODE-8976:
---

 Summary: Change thread monitoring service to detect hung redis 
threads
 Key: GEODE-8976
 URL: https://issues.apache.org/jira/browse/GEODE-8976
 Project: Geode
  Issue Type: Improvement
  Components: redis
Reporter: Darrel Schneider


In the same way that GEODE-8521 enhanced the thread monitoring service to 
detect hung p2p reader threads, if a redis thread is hung processing a request 
from a redis client then the thread monitoring service should issue a warning. 
The monitoring should be suspended while the thread is waiting for another 
request from a client. 
Most of the work for this has been done in GEODE-8521 (see 
https://github.com/apache/geode/pull/5763), so this improvement should be 
pretty easy.





[jira] [Assigned] (GEODE-8776) Log warning messages for threads waiting for local resources (locks read/write) after certain time interval

2021-02-26 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider reassigned GEODE-8776:
---

Assignee: Darrel Schneider

> Log warning messages for threads waiting for local resources (locks 
> read/write) after certain time interval
> ---
>
> Key: GEODE-8776
> URL: https://issues.apache.org/jira/browse/GEODE-8776
> Project: Geode
>  Issue Type: Improvement
>  Components: regions
>Affects Versions: 1.14.0
>Reporter: Anilkumar Gingade
>Assignee: Darrel Schneider
>Priority: Major
>  Labels: GeodeOperationAPI
>
> Similar to how threads stuck for certain periods are logged, it would be 
> useful to log warning messages for threads waiting for local resources 
> (read/write locks), including information about the thread holding that 
> resource.





[jira] [Assigned] (GEODE-8777) Log warning messages for threads waiting for remote resources (dlocks) after certain time interval

2021-02-26 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider reassigned GEODE-8777:
---

Assignee: Darrel Schneider

> Log warning messages for threads waiting for remote resources (dlocks) after 
> certain time interval
> --
>
> Key: GEODE-8777
> URL: https://issues.apache.org/jira/browse/GEODE-8777
> Project: Geode
>  Issue Type: Improvement
>  Components: regions
>Affects Versions: 1.14.0
>Reporter: Anilkumar Gingade
>Assignee: Darrel Schneider
>Priority: Major
>  Labels: GeodeOperationAPI
>
> Similar to how threads stuck for certain periods are logged, it would be 
> useful to log warning messages for threads waiting for remote resources 
> (dlocks), including information about the thread and the remote JVM (node) 
> holding that resource.





[jira] [Updated] (GEODE-8859) Redis data structures may not accurately reflect their size in Geode stats

2021-01-25 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider updated GEODE-8859:

Issue Type: Bug  (was: Improvement)

> Redis data structures may not accurately reflect their size in Geode stats
> --
>
> Key: GEODE-8859
> URL: https://issues.apache.org/jira/browse/GEODE-8859
> Project: Geode
>  Issue Type: Bug
>  Components: redis, statistics
>Reporter: Jens Deppe
>Priority: Major
>
> Here is a comment from Darrel regarding this issue. For some background, the 
> Redis structures implement {{Delta}}.
>  
> {quote}I was playing around with RedisInsight and was able to get most of 
> the overview dashboard and the data browser working with geode redis. But I 
> found a problem with how we are using geode that causes the geode stats that 
> track how much data is stored in a partitioned region to be wrong and the 
> bucket sizes used for rebalancing are also wrong. Basically when we do create 
> ops on the region the stats track it okay. But when we do updates then geode 
> always thinks that nothing (size wise) changed. So for example I created a 
> string by doing a redis “set” command. I saw the size of the string accounted 
> for in dataStoreBytesInUse. But then I kept doing redis “append” commands on 
> that key and the dataStoreBytesInUse did not change at all. I think the 
> problem is in how we are updating the data structure in place instead of 
> getting a copy, modifying it, and then putting the copy into the region. 
> Avoiding this copy gives us MUCH better performance but it messes up geode 
> when it is trying to calculate the memory increase or decrease. It is 
> possible that this is only an issue on the primary and that the secondary 
> sizing may be correct. If so that could lead to other problems because for a 
> given bucket our primary size would be different than the secondary. The 
> bucket sizes are used when you do a rebalance but basically we can have a 
> bunch of memory that is “untracked” so we might see the JVM heaps unbalanced 
> but geode will think the buckets are balanced. I’m not sure what we should do 
> about this.
> {quote}
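The accounting problem described in the quoted comment boils down to size-delta bookkeeping: an update records newSize - oldSize, and an in-place mutation makes the two equal, so the growth is never tracked. A toy illustration (not Geode code):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy region that tracks bytes-in-use with size-delta accounting:
// each put() adds (newSize - oldSize) for the key being written.
public class SizeDeltaDemo {
    final Map<String, List<String>> region = new HashMap<>();
    long bytesInUse;

    void put(String key, List<String> value) {
        long oldSize = sizeOf(region.get(key));
        region.put(key, value);
        bytesInUse += sizeOf(value) - oldSize;
    }

    static long sizeOf(List<String> value) {
        if (value == null) return 0;
        long total = 0;
        for (String element : value) total += element.length();
        return total;
    }

    public static void main(String[] args) {
        SizeDeltaDemo r = new SizeDeltaDemo();
        List<String> value = new ArrayList<>();
        value.add("hello");                       // 5 bytes
        r.put("k", value);
        System.out.println("after create: " + r.bytesInUse);          // 5

        // In-place mutation: the stored list grows by 6 bytes, but no
        // put() runs, so the delta accounting never sees the change.
        value.add("world!");
        System.out.println("after in-place append: " + r.bytesInUse); // still 5

        // Copy-modify-put: the accounting sees this update, but the 6
        // bytes added in place above remain untracked forever.
        List<String> copy = new ArrayList<>(r.region.get("k"));
        copy.add("again");                        // 5 more bytes
        r.put("k", copy);
        System.out.println("after copy+put: " + r.bytesInUse);        // 10, actual is 16
    }
}
```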





[jira] [Updated] (GEODE-8859) Redis data structures may not accurately reflect their size in Geode stats

2021-01-21 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider updated GEODE-8859:

Component/s: redis

> Redis data structures may not accurately reflect their size in Geode stats
> --
>
> Key: GEODE-8859
> URL: https://issues.apache.org/jira/browse/GEODE-8859
> Project: Geode
>  Issue Type: Improvement
>  Components: redis, statistics
>Reporter: Jens Deppe
>Priority: Major
>





[jira] [Commented] (GEODE-8859) Redis data structures may not accurately reflect their size in Geode stats

2021-01-21 Thread Darrel Schneider (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17269462#comment-17269462
 ] 

Darrel Schneider commented on GEODE-8859:
-

Setting the system property "gemfire.DELTAS_RECALCULATE_SIZE" to "true" would 
fix this issue. The only problem with doing so is that it is a global setting 
for the whole JVM in which it is set. It would be better if geode offered 
something that changes only how redis deltas are sized. One option would be to 
add a new default method on the Delta interface that returns false by default 
but that implementors of Delta could override to return true. This new method 
could be named "isSizeCalculatedOnUpdate". The redis Delta classes would 
implement the method to return true. The only places that need to call this 
new method are the same places in geode that currently check the system 
property (two places). 
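The proposed default method can be sketched against a simplified Delta interface (the real org.apache.geode.Delta also declares hasDelta/toDelta/fromDelta, which are elided here):

```java
// Simplified sketch of the proposed Delta change; not the real interface.
public class DeltaSizingSketch {
    interface Delta {
        // Proposed new method: defaults to false so existing Delta
        // implementations keep today's sizing behavior.
        default boolean isSizeCalculatedOnUpdate() {
            return false;
        }
    }

    // A redis value class would opt in by overriding the default.
    static class RedisString implements Delta {
        @Override
        public boolean isSizeCalculatedOnUpdate() {
            return true;
        }
    }

    static class LegacyValue implements Delta {}

    // Stand-in for the two places that currently consult the
    // gemfire.DELTAS_RECALCULATE_SIZE system property.
    static boolean shouldRecalculateSize(Delta value) {
        return Boolean.getBoolean("gemfire.DELTAS_RECALCULATE_SIZE")
            || value.isSizeCalculatedOnUpdate();
    }

    public static void main(String[] args) {
        System.out.println("redis value: " + shouldRecalculateSize(new RedisString()));
        System.out.println("legacy value: " + shouldRecalculateSize(new LegacyValue()));
    }
}
```

The default method keeps the change backward compatible: only classes that override it change behavior, and the global system property still works as an override.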

> Redis data structures may not accurately reflect their size in Geode stats
> --
>
> Key: GEODE-8859
> URL: https://issues.apache.org/jira/browse/GEODE-8859
> Project: Geode
>  Issue Type: Improvement
>  Components: statistics
>Reporter: Jens Deppe
>Priority: Major
>





[jira] [Reopened] (GEODE-8719) CI Failure: org.apache.geode.redis.internal.executor.CrashAndNoRepeatDUnitTest > givenServerCrashesDuringAPPEND_thenDataIsNotLost

2021-01-08 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider reopened GEODE-8719:
-

This reproduced here: 
https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/DistributedTestOpenJDK11/builds/698

{noformat}
org.apache.geode.redis.internal.executor.CrashAndNoRepeatDUnitTest > 
givenServerCrashesDuringAPPEND_thenDataIsNotLost FAILED
org.apache.geode.test.dunit.RMIException: While invoking 
org.apache.geode.test.dunit.internal.IdentifiableCallable.call in VM 3 running 
on Host 56886cab3c85 with 4 VMs
at org.apache.geode.test.dunit.VM.executeMethodOnObject(VM.java:623)
at org.apache.geode.test.dunit.VM.invoke(VM.java:460)
at 
org.apache.geode.test.dunit.rules.ClusterStartupRule.startServerVM(ClusterStartupRule.java:268)
at 
org.apache.geode.test.dunit.rules.ClusterStartupRule.startServerVM(ClusterStartupRule.java:261)
at 
org.apache.geode.redis.internal.executor.CrashAndNoRepeatDUnitTest.startRedisVM(CrashAndNoRepeatDUnitTest.java:131)
at 
org.apache.geode.redis.internal.executor.CrashAndNoRepeatDUnitTest.givenServerCrashesDuringAPPEND_thenDataIsNotLost(CrashAndNoRepeatDUnitTest.java:160)

Caused by:
org.apache.geode.management.ManagementException: Could not start Redis 
Server using bind address: localhost/127.0.0.1 and port: 42809. Please make 
sure nothing else is running on this address/port combination.

Caused by:
java.net.BindException: Address already in use

org.apache.geode.redis.internal.executor.CrashAndNoRepeatDUnitTest > 
classMethod FAILED
java.lang.AssertionError: Suspicious strings were written to the log during 
this run.
Fix the strings or use IgnoredException.addIgnoredException to ignore.
---
Found suspect string in 'dunit_suspect-vm3.log' at line 6657

[error 2021/01/08 21:34:13.724 GMT  
tid=35] org.apache.geode.management.ManagementException: Could not start Redis 
Server using bind address: localhost/127.0.0.1 and port: 42809. Please make 
sure nothing else is running on this address/port combination.
{noformat}


> CI Failure: 
> org.apache.geode.redis.internal.executor.CrashAndNoRepeatDUnitTest > 
> givenServerCrashesDuringAPPEND_thenDataIsNotLost
> -
>
> Key: GEODE-8719
> URL: https://issues.apache.org/jira/browse/GEODE-8719
> Project: Geode
>  Issue Type: Bug
>  Components: redis
>Reporter: Sarah Abbey
>Assignee: John Hutchison
>Priority: Minor
>
> CI failure: https://concourse.apachegeode-ci.info/builds/207449
> {code:java}
> org.apache.geode.redis.internal.executor.CrashAndNoRepeatDUnitTest > 
> givenServerCrashesDuringAPPEND_thenDataIsNotLost FAILED
> org.apache.geode.test.dunit.RMIException: While invoking 
> org.apache.geode.test.dunit.internal.IdentifiableCallable.call in VM 2 
> running on Host e0e2f6af9445 with 4 VMs
> at org.apache.geode.test.dunit.VM.executeMethodOnObject(VM.java:623)
> at org.apache.geode.test.dunit.VM.invoke(VM.java:460)
> at 
> org.apache.geode.test.dunit.rules.ClusterStartupRule.startServerVM(ClusterStartupRule.java:268)
> at 
> org.apache.geode.test.dunit.rules.ClusterStartupRule.startServerVM(ClusterStartupRule.java:261)
> at 
> org.apache.geode.redis.internal.executor.CrashAndNoRepeatDUnitTest.startRedisVM(CrashAndNoRepeatDUnitTest.java:131)
> at 
> org.apache.geode.redis.internal.executor.CrashAndNoRepeatDUnitTest.givenServerCrashesDuringAPPEND_thenDataIsNotLost(CrashAndNoRepeatDUnitTest.java:164)
> Caused by:
> org.apache.geode.management.ManagementException: Could not start 
> Redis Server using bind address: localhost/127.0.0.1 and port: 44579. Please 
> make sure nothing else is running on this address/port combination.   
>  Caused by:
> java.net.BindException: Address already in use
> {code}





[jira] [Commented] (GEODE-6504) CI: RegionExpirationIntegrationTest > increaseRegionTtl[EMPTY] FAILED

2021-01-08 Thread Darrel Schneider (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-6504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17261461#comment-17261461
 ] 

Darrel Schneider commented on GEODE-6504:
-

again: 
https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/WindowsCoreIntegrationTestOpenJDK11/builds/658

> CI: RegionExpirationIntegrationTest > increaseRegionTtl[EMPTY] FAILED
> -
>
> Key: GEODE-6504
> URL: https://issues.apache.org/jira/browse/GEODE-6504
> Project: Geode
>  Issue Type: Bug
>  Components: expiration
>Reporter: Mark Hanson
>Priority: Major
>  Labels: GeodeOperationAPI
> Attachments: RegionExpirationIntegrationTest.log
>
>
> CI failure:  see attached log.
>  
> Failed on WindowsIntegrationTestOpenJDK11  
> [https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/WindowsIntegrationTestOpenJDK11/builds/272]
> {noformat}
> org.apache.geode.cache.RegionExpirationIntegrationTest > 
> increaseRegionTtl[EMPTY] FAILED
> java.lang.AssertionError: 
> Expecting:
>  <7L>
> to be greater than or equal to:
>  <8L> 
> at 
> org.apache.geode.cache.RegionExpirationIntegrationTest.increaseRegionTtl(RegionExpirationIntegrationTest.java:88)
> {noformat}
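The failed assertion above compares a measured elapsed time (7 seconds) against the configured TTL (8 seconds), so the measurement came up short. One common cause of this on Windows is coarse wall-clock granularity; measuring with the monotonic clock is one way to harden such checks. A hedged sketch (hypothetical class name, not the actual test):

```java
// Hypothetical sketch: measure elapsed time with System.nanoTime(), which
// is monotonic, instead of System.currentTimeMillis(), whose granularity
// on Windows can be 10-16 ms per tick and can make measurements fall short.
public class ElapsedTimeExample {

    public static long elapsedMillis(Runnable task) {
        long start = System.nanoTime(); // monotonic clock
        task.run();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        long elapsed = elapsedMillis(() -> {
            try {
                Thread.sleep(50);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        // sleep() may overshoot slightly; assert with a tolerance
        System.out.println(elapsed >= 40);
    }
}
```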





[jira] [Resolved] (GEODE-8521) Add P2P message reader threads to thread monitoring

2020-12-03 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider resolved GEODE-8521.
-
Fix Version/s: 1.14.0
   Resolution: Fixed

> Add P2P message reader threads to thread monitoring
> ---
>
> Key: GEODE-8521
> URL: https://issues.apache.org/jira/browse/GEODE-8521
> Project: Geode
>  Issue Type: Wish
>  Components: messaging
>Reporter: Kirk Lund
>Assignee: Darrel Schneider
>Priority: Major
>  Labels: GeodeOperationAPI, pull-request-available
> Fix For: 1.14.0
>
>
> Add P2P message reader threads to thread monitoring to help with identifying 
> stuck P2P message readers.





[jira] [Created] (GEODE-8730) CI failure: DualServerSNIAcceptanceTest fails to start server because port is in use

2020-11-19 Thread Darrel Schneider (Jira)
Darrel Schneider created GEODE-8730:
---

 Summary: CI failure: DualServerSNIAcceptanceTest fails to start 
server because port is in use
 Key: GEODE-8730
 URL: https://issues.apache.org/jira/browse/GEODE-8730
 Project: Geode
  Issue Type: Test
  Components: membership
Reporter: Darrel Schneider


The run is here: 
[https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/AcceptanceTestOpenJDK8/builds/587]
{noformat}
org.apache.geode.client.sni.DualServerSNIAcceptanceTest > classMethod FAILED
com.palantir.docker.compose.execution.DockerExecutionException: 
'docker-compose exec -T geode gfsh run 
--file=/geode/scripts/geode-starter-2.gfsh' returned exit code 1
The output was:
1. Executing - start locator --name=locator-maeve --connect=false 
--redirect-output --hostname-for-clients=locator-maeve 
--properties-file=/geode/config/gemfire.properties 
--security-properties-file= 
--J=-Dgemfire.ssl-keystore=/geode/config/locator-maeve-keystore.jks

...
Locator in /locator-maeve on geode[10334] as locator-maeve is currently 
online.
Process ID: 47
Uptime: 16 seconds
Geode Version: 1.14.0-build.0
Java Version: 11.0.9.1
Log File: /locator-maeve/locator-maeve.log
JVM Arguments: -DgemfirePropertyFile=/geode/config/gemfire.properties 
-DgemfireSecurityPropertyFile=/geode/config/gfsecurity.properties 
-Dgemfire.enable-cluster-configuration=true 
-Dgemfire.load-cluster-configuration-from-dir=false 
-Dgemfire.ssl-keystore=/geode/config/locator-maeve-keystore.jks 
-Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true 
-Dsun.rmi.dgc.server.gcInterval=9223372036854775806 
-Dgemfire.OSProcess.DISABLE_REDIRECTION_CONFIGURATION=true
Class-Path: 
/geode/lib/geode-core-1.14.0-build.0.jar:/geode/lib/geode-dependencies.jar

2. Executing - start server --name=server-dolores --group=group-dolores 
--hostname-for-clients=server-dolores --locators=geode[10334] 
--properties-file=/geode/config/gemfire.properties 
--security-properties-file= 
--J=-Dgemfire.ssl-keystore=/geode/config/server-dolores-keystore.jks

...
Server in /server-dolores on geode[40404] as server-dolores is currently 
online.
Process ID: 199
Uptime: 5 seconds
Geode Version: 1.14.0-build.0
Java Version: 11.0.9.1
Log File: /server-dolores/server-dolores.log
JVM Arguments: -DgemfirePropertyFile=/geode/config/gemfire.properties 
-DgemfireSecurityPropertyFile=/geode/config/gfsecurity.properties 
-Dgemfire.start-dev-rest-api=false -Dgemfire.locators=geode[10334] 
-Dgemfire.use-cluster-configuration=true -Dgemfire.groups=group-dolores 
-Dgemfire.ssl-keystore=/geode/config/server-dolores-keystore.jks 
-Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true 
-Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: 
/geode/lib/geode-core-1.14.0-build.0.jar:/geode/lib/geode-dependencies.jar

3. Executing - start server --name=server-clementine 
--group=group-clementine --hostname-for-clients=server-clementine 
--server-port=40405 --locators=geode[10334] 
--properties-file=/geode/config/gemfire.properties 
--security-properties-file= 
--J=-Dgemfire.ssl-keystore=/geode/config/server-clementine-keystore.jks

..The Cache Server process terminated unexpectedly with exit status 1. 
Please refer to the log file in /server-clementine for full details.

Exception in thread "main" java.lang.RuntimeException: An IO error occurred 
while starting a Server in /server-clementine on geode[40405]: Network is 
unreachable; port (40405) is not available on localhost.

at 
org.apache.geode.distributed.ServerLauncher.start(ServerLauncher.java:852)

at 
org.apache.geode.distributed.ServerLauncher.run(ServerLauncher.java:737)

at 
org.apache.geode.distributed.ServerLauncher.main(ServerLauncher.java:256)

Caused by: java.net.BindException: Network is unreachable; port (40405) is 
not available on localhost.

at 
org.apache.geode.distributed.AbstractLauncher.assertPortAvailable(AbstractLauncher.java:142)

at 
org.apache.geode.distributed.ServerLauncher.start(ServerLauncher.java:794)

... 2 more
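The `BindException` above comes from a pre-bind availability check. A rough sketch of such a probe (this is not Geode's actual `assertPortAvailable` implementation; the class name is hypothetical):

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;

// Hypothetical sketch of a port-availability probe like the one that
// produced the BindException above. Note the inherent race: the port can
// be taken by another process between this probe and the real bind, which
// is one way tests like this fail intermittently in CI.
public class PortProbe {

    public static boolean isPortAvailable(int port) {
        try (ServerSocket socket =
                new ServerSocket(port, 50, InetAddress.getLoopbackAddress())) {
            return true;   // bind succeeded; port was free at probe time
        } catch (IOException e) {
            return false;  // e.g. java.net.BindException: Address already in use
        }
    }

    public static void main(String[] args) throws IOException {
        // Hold a port open, then probe it: the probe should report "in use".
        try (ServerSocket held =
                new ServerSocket(0, 50, InetAddress.getLoopbackAddress())) {
            System.out.println(!isPortAvailable(held.getLocalPort()));
        }
    }
}
```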



* Execution Summary ***
Script file: /geode/scripts/geode-starter-2.gfsh

Command-1 : start locator --name=locator-maeve --connect=false 
--redirect-output --hostname-for-clients=locator-maeve 
--properties-file=/geode/config/gemfire.properties 
--security-properties-file=/geode/config/gfsecurity.properties 
--J=-Dgemfire.ssl-keystore=/geode/config/locator-maeve-keystore.jks
Status: PASSED

Command-2 : start server --name=server-dolores --group=group-dolores 
--hostname-for-clients=server-dolores --locators=geode[10334] 

[jira] [Assigned] (GEODE-8730) CI failure: DualServerSNIAcceptanceTest fails to start server because port is in use

2020-11-19 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider reassigned GEODE-8730:
---

Assignee: Bill Burcham

> CI failure: DualServerSNIAcceptanceTest fails to start server because port is 
> in use
> 
>
> Key: GEODE-8730
> URL: https://issues.apache.org/jira/browse/GEODE-8730
> Project: Geode
>  Issue Type: Test
>  Components: membership
>Reporter: Darrel Schneider
>Assignee: Bill Burcham
>Priority: Major
>
> The run is here: 
> [https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/AcceptanceTestOpenJDK8/builds/587]
> {noformat}
> org.apache.geode.client.sni.DualServerSNIAcceptanceTest > classMethod FAILED
> com.palantir.docker.compose.execution.DockerExecutionException: 
> 'docker-compose exec -T geode gfsh run 
> --file=/geode/scripts/geode-starter-2.gfsh' returned exit code 1
> The output was:
> 1. Executing - start locator --name=locator-maeve --connect=false 
> --redirect-output --hostname-for-clients=locator-maeve 
> --properties-file=/geode/config/gemfire.properties 
> --security-properties-file= 
> --J=-Dgemfire.ssl-keystore=/geode/config/locator-maeve-keystore.jks
> ...
> Locator in /locator-maeve on geode[10334] as locator-maeve is currently 
> online.
> Process ID: 47
> Uptime: 16 seconds
> Geode Version: 1.14.0-build.0
> Java Version: 11.0.9.1
> Log File: /locator-maeve/locator-maeve.log
> JVM Arguments: -DgemfirePropertyFile=/geode/config/gemfire.properties 
> -DgemfireSecurityPropertyFile=/geode/config/gfsecurity.properties 
> -Dgemfire.enable-cluster-configuration=true 
> -Dgemfire.load-cluster-configuration-from-dir=false 
> -Dgemfire.ssl-keystore=/geode/config/locator-maeve-keystore.jks 
> -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true 
> -Dsun.rmi.dgc.server.gcInterval=9223372036854775806 
> -Dgemfire.OSProcess.DISABLE_REDIRECTION_CONFIGURATION=true
> Class-Path: 
> /geode/lib/geode-core-1.14.0-build.0.jar:/geode/lib/geode-dependencies.jar
> 2. Executing - start server --name=server-dolores --group=group-dolores 
> --hostname-for-clients=server-dolores --locators=geode[10334] 
> --properties-file=/geode/config/gemfire.properties 
> --security-properties-file= 
> --J=-Dgemfire.ssl-keystore=/geode/config/server-dolores-keystore.jks
> ...
> Server in /server-dolores on geode[40404] as server-dolores is currently 
> online.
> Process ID: 199
> Uptime: 5 seconds
> Geode Version: 1.14.0-build.0
> Java Version: 11.0.9.1
> Log File: /server-dolores/server-dolores.log
> JVM Arguments: -DgemfirePropertyFile=/geode/config/gemfire.properties 
> -DgemfireSecurityPropertyFile=/geode/config/gfsecurity.properties 
> -Dgemfire.start-dev-rest-api=false -Dgemfire.locators=geode[10334] 
> -Dgemfire.use-cluster-configuration=true -Dgemfire.groups=group-dolores 
> -Dgemfire.ssl-keystore=/geode/config/server-dolores-keystore.jks 
> -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true 
> -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
> Class-Path: 
> /geode/lib/geode-core-1.14.0-build.0.jar:/geode/lib/geode-dependencies.jar
> 3. Executing - start server --name=server-clementine 
> --group=group-clementine --hostname-for-clients=server-clementine 
> --server-port=40405 --locators=geode[10334] 
> --properties-file=/geode/config/gemfire.properties 
> --security-properties-file= 
> --J=-Dgemfire.ssl-keystore=/geode/config/server-clementine-keystore.jks
> ..The Cache Server process terminated unexpectedly with exit status 
> 1. Please refer to the log file in /server-clementine for full details.
> Exception in thread "main" java.lang.RuntimeException: An IO error 
> occurred while starting a Server in /server-clementine on geode[40405]: 
> Network is unreachable; port (40405) is not available on localhost.
>   at 
> org.apache.geode.distributed.ServerLauncher.start(ServerLauncher.java:852)
>   at 
> org.apache.geode.distributed.ServerLauncher.run(ServerLauncher.java:737)
>   at 
> org.apache.geode.distributed.ServerLauncher.main(ServerLauncher.java:256)
> Caused by: java.net.BindException: Network is unreachable; port (40405) 
> is not available on localhost.
>   at 
> org.apache.geode.distributed.AbstractLauncher.assertPortAvailable(AbstractLauncher.java:142)
>   at 
> org.apache.geode.distributed.ServerLauncher.start(ServerLauncher.java:794)
>   ... 2 more
> * Execution Summary ***
> Script file: /geode/scripts/geode-starter-2.gfsh
> Command-1 : start locator --name=locator-maeve --connect=false 

[jira] [Resolved] (GEODE-8685) Exporting data causes a ClassNotFoundException

2020-11-17 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider resolved GEODE-8685.
-
Fix Version/s: 1.14.0
   Resolution: Fixed

This bug appears to have been in every release of Geode. It would only happen 
when a partitioned region is being exported. Perhaps the export that was 
working in a previous release was on a replicate region (the replicate export 
code did not deserialize).

> Exporting data causes a ClassNotFoundException
> --
>
> Key: GEODE-8685
> URL: https://issues.apache.org/jira/browse/GEODE-8685
> Project: Geode
>  Issue Type: Task
>  Components: regions
>Affects Versions: 1.10.0, 1.11.0, 1.12.0, 1.13.0
>Reporter: Anthony Baker
>Assignee: Darrel Schneider
>Priority: Major
>  Labels: GeodeOperationAPI, pull-request-available
> Fix For: 1.14.0
>
>
> See 
> [https://lists.apache.org/thread.html/rfa4fc47eb4cb4e75c39d7cb815416bebf2ec233d4db24e37728e922e%40%3Cuser.geode.apache.org%3E.]
>  
> Report is that exporting data whose values are Classes defined in a deployed 
> jar result in a ClassNotFound exception:
> {noformat}
> [error 2020/10/30 08:54:29.317 PDT  tid=0x41] 
> org.apache.geode.cache.execute.FunctionException: 
> org.apache.geode.SerializationException: A ClassNotFoundException was thrown 
> while trying to deserialize cached value.
> java.io.IOException: org.apache.geode.cache.execute.FunctionException: 
> org.apache.geode.SerializationException: A ClassNotFoundException was thrown 
> while trying to deserialize cached value.
> at 
> org.apache.geode.internal.cache.snapshot.WindowedExporter.export(WindowedExporter.java:106)
> at 
> org.apache.geode.internal.cache.snapshot.RegionSnapshotServiceImpl.exportOnMember(RegionSnapshotServiceImpl.java:361)
> at 
> org.apache.geode.internal.cache.snapshot.RegionSnapshotServiceImpl.save(RegionSnapshotServiceImpl.java:161)
> at 
> org.apache.geode.internal.cache.snapshot.RegionSnapshotServiceImpl.save(RegionSnapshotServiceImpl.java:146)
> at 
> org.apache.geode.management.internal.cli.functions.ExportDataFunction.executeFunction(ExportDataFunction.java:62)
> at 
> org.apache.geode.management.cli.CliFunction.execute(CliFunction.java:37)
> at 
> org.apache.geode.internal.cache.MemberFunctionStreamingMessage.process(MemberFunctionStreamingMessage.java:201)
> at 
> org.apache.geode.distributed.internal.DistributionMessage.scheduleAction(DistributionMessage.java:376)
> at 
> org.apache.geode.distributed.internal.DistributionMessage$1.run(DistributionMessage.java:441)
> at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> at 
> org.apache.geode.distributed.internal.ClusterOperationExecutors.runUntilShutdown(ClusterOperationExecutors.java:442)
> at 
> org.apache.geode.distributed.internal.ClusterOperationExecutors.doFunctionExecutionThread(ClusterOperationExecutors.java:377)
> at 
> org.apache.geode.logging.internal.executors.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:119)
> at java.base/java.lang.Thread.run(Thread.java:834)
> Caused by: org.apache.geode.cache.execute.FunctionException: 
> org.apache.geode.SerializationException: A ClassNotFoundException was thrown 
> while trying to deserialize cached value.
> at 
> org.apache.geode.internal.cache.snapshot.WindowedExporter$WindowedExportCollector.setException(WindowedExporter.java:383)
> at 
> org.apache.geode.internal.cache.snapshot.WindowedExporter$WindowedExportCollector.addResult(WindowedExporter.java:346)
> at 
> org.apache.geode.internal.cache.execute.PartitionedRegionFunctionResultSender.lastResult(PartitionedRegionFunctionResultSender.java:195)
> at 
> org.apache.geode.internal.cache.execute.AbstractExecution.handleException(AbstractExecution.java:502)
> at 
> org.apache.geode.internal.cache.execute.AbstractExecution.executeFunctionLocally(AbstractExecution.java:353)
> at 
> org.apache.geode.internal.cache.execute.AbstractExecution.lambda$executeFunctionOnLocalPRNode$0(AbstractExecution.java:273)
> ... 6 more
> Caused by: org.apache.geode.SerializationException: A ClassNotFoundException 
> was thrown while trying to deserialize cached value.
> at 
> org.apache.geode.internal.cache.EntryEventImpl.deserialize(EntryEventImpl.java:2046)
> at 
> org.apache.geode.internal.cache.EntryEventImpl.deserialize(EntryEventImpl.java:2032)
> at 
> org.apache.geode.internal.cache.VMCachedDeserializable.getDeserializedValue(VMCachedDeserializable.java:135)
>  

[jira] [Updated] (GEODE-8685) Exporting data causes a ClassNotFoundException

2020-11-12 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider updated GEODE-8685:

Affects Version/s: 1.10.0
   1.11.0
   1.12.0

> Exporting data causes a ClassNotFoundException
> --
>
> Key: GEODE-8685
> URL: https://issues.apache.org/jira/browse/GEODE-8685
> Project: Geode
>  Issue Type: Task
>  Components: regions
>Affects Versions: 1.10.0, 1.11.0, 1.12.0, 1.13.0
>Reporter: Anthony Baker
>Assignee: Darrel Schneider
>Priority: Major
>  Labels: GeodeOperationAPI, pull-request-available
>
> See 
> [https://lists.apache.org/thread.html/rfa4fc47eb4cb4e75c39d7cb815416bebf2ec233d4db24e37728e922e%40%3Cuser.geode.apache.org%3E.]
>  
> Report is that exporting data whose values are Classes defined in a deployed 
> jar result in a ClassNotFound exception:
> {noformat}
> [error 2020/10/30 08:54:29.317 PDT  tid=0x41] 
> org.apache.geode.cache.execute.FunctionException: 
> org.apache.geode.SerializationException: A ClassNotFoundException was thrown 
> while trying to deserialize cached value.
> java.io.IOException: org.apache.geode.cache.execute.FunctionException: 
> org.apache.geode.SerializationException: A ClassNotFoundException was thrown 
> while trying to deserialize cached value.
> at 
> org.apache.geode.internal.cache.snapshot.WindowedExporter.export(WindowedExporter.java:106)
> at 
> org.apache.geode.internal.cache.snapshot.RegionSnapshotServiceImpl.exportOnMember(RegionSnapshotServiceImpl.java:361)
> at 
> org.apache.geode.internal.cache.snapshot.RegionSnapshotServiceImpl.save(RegionSnapshotServiceImpl.java:161)
> at 
> org.apache.geode.internal.cache.snapshot.RegionSnapshotServiceImpl.save(RegionSnapshotServiceImpl.java:146)
> at 
> org.apache.geode.management.internal.cli.functions.ExportDataFunction.executeFunction(ExportDataFunction.java:62)
> at 
> org.apache.geode.management.cli.CliFunction.execute(CliFunction.java:37)
> at 
> org.apache.geode.internal.cache.MemberFunctionStreamingMessage.process(MemberFunctionStreamingMessage.java:201)
> at 
> org.apache.geode.distributed.internal.DistributionMessage.scheduleAction(DistributionMessage.java:376)
> at 
> org.apache.geode.distributed.internal.DistributionMessage$1.run(DistributionMessage.java:441)
> at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> at 
> org.apache.geode.distributed.internal.ClusterOperationExecutors.runUntilShutdown(ClusterOperationExecutors.java:442)
> at 
> org.apache.geode.distributed.internal.ClusterOperationExecutors.doFunctionExecutionThread(ClusterOperationExecutors.java:377)
> at 
> org.apache.geode.logging.internal.executors.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:119)
> at java.base/java.lang.Thread.run(Thread.java:834)
> Caused by: org.apache.geode.cache.execute.FunctionException: 
> org.apache.geode.SerializationException: A ClassNotFoundException was thrown 
> while trying to deserialize cached value.
> at 
> org.apache.geode.internal.cache.snapshot.WindowedExporter$WindowedExportCollector.setException(WindowedExporter.java:383)
> at 
> org.apache.geode.internal.cache.snapshot.WindowedExporter$WindowedExportCollector.addResult(WindowedExporter.java:346)
> at 
> org.apache.geode.internal.cache.execute.PartitionedRegionFunctionResultSender.lastResult(PartitionedRegionFunctionResultSender.java:195)
> at 
> org.apache.geode.internal.cache.execute.AbstractExecution.handleException(AbstractExecution.java:502)
> at 
> org.apache.geode.internal.cache.execute.AbstractExecution.executeFunctionLocally(AbstractExecution.java:353)
> at 
> org.apache.geode.internal.cache.execute.AbstractExecution.lambda$executeFunctionOnLocalPRNode$0(AbstractExecution.java:273)
> ... 6 more
> Caused by: org.apache.geode.SerializationException: A ClassNotFoundException 
> was thrown while trying to deserialize cached value.
> at 
> org.apache.geode.internal.cache.EntryEventImpl.deserialize(EntryEventImpl.java:2046)
> at 
> org.apache.geode.internal.cache.EntryEventImpl.deserialize(EntryEventImpl.java:2032)
> at 
> org.apache.geode.internal.cache.VMCachedDeserializable.getDeserializedValue(VMCachedDeserializable.java:135)
> at 
> org.apache.geode.internal.cache.EntrySnapshot.getRawValue(EntrySnapshot.java:111)
> at 
> org.apache.geode.internal.cache.EntrySnapshot.getRawValue(EntrySnapshot.java:99)
> at 
> 

[jira] [Comment Edited] (GEODE-8685) Exporting data causes a ClassNotFoundException

2020-11-12 Thread Darrel Schneider (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17230293#comment-17230293
 ] 

Darrel Schneider edited comment on GEODE-8685 at 11/12/20, 9:59 PM:


*Why is the value being deserialized at all?*

_The following is based on code on the develop branch (post 1.13)._

In this particular case it is because the Entry instances being iterated over 
are instances of EntrySnapshot (only used by partitioned regions) instead of 
NonTXEntry instances.
 This code would have saved us and passed the serialized CachedDeserializable 
to "convertToBytes" if the entry had been a NonTXEntry:
{code:java}
    public  SnapshotRecord(LocalRegion region, Entry entry) throws 
IOException {
      key = BlobHelper.serializeToBlob(entry.getKey());
      if (entry instanceof NonTXEntry && region != null) {
        @Released
        Object v =
            ((NonTXEntry) 
entry).getRegionEntry().getValueOffHeapOrDiskWithoutFaultIn(region);
        try {
          value = convertToBytes(v);
        } finally {
          OffHeapHelper.release(v);
        }
      } else {
        value = convertToBytes(entry.getValue());
      }
    }
 
{code}
 But because it was an EntrySnapshot, it falls through to the else branch and 
just calls entry.getValue(), which on an EntrySnapshot always returns the 
deserialized value. This is the getValue() call we see fail because the class is not found.
 I could not find any evidence that we have changed the code that iterates the 
region entries or that we changed the implementation of the entry iteration on 
a partitioned region. It looks like it has used EntrySnapshot instances since 
geode existed. We probably have export tests for partitioned regions but they 
may not check that the value is not being deserialized.
 It would be rather easy to add a method on EntrySnapshot that exposes the 
CachedDeserializable and pass that to convertToBytes which already does the 
right thing with a CachedDeserializable.
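The proposed change can be sketched with a minimal stand-in model (these are not Geode's actual classes; the interface body and `getRawValue()` accessor here are simplified illustrations of the idea, not the real API):

```java
// Hypothetical model of the proposed fix: when the entry can expose its
// value in still-serialized form, pass that wrapper to convertToBytes()
// instead of calling getValue(), which forces deserialization and fails
// with ClassNotFoundException when the class lives only in a deployed jar.
public class SnapshotFixSketch {

    interface CachedDeserializable {
        byte[] getSerializedValue();
    }

    static class EntrySnapshot {
        private final CachedDeserializable raw;

        EntrySnapshot(CachedDeserializable raw) {
            this.raw = raw;
        }

        // Proposed accessor: expose the serialized wrapper directly.
        CachedDeserializable getRawValue() {
            return raw;
        }

        Object getValue() {
            // In the real code this deserializes the cached value, which
            // is what throws when the class cannot be resolved.
            throw new IllegalStateException("would deserialize the value");
        }
    }

    static byte[] convertToBytes(Object v) {
        if (v instanceof CachedDeserializable) {
            // Already-serialized path: no class loading needed.
            return ((CachedDeserializable) v).getSerializedValue();
        }
        throw new IllegalStateException("deserialized path not modeled here");
    }

    public static void main(String[] args) {
        byte[] blob = {1, 2, 3};
        EntrySnapshot entry = new EntrySnapshot(() -> blob);
        byte[] out = convertToBytes(entry.getRawValue());
        System.out.println(out.length == 3);
    }
}
```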

_Does 1.10 have this same issue?_

Jens and I checked out the git tag "rel/v1.10.0" and ran the test that was 
added while working on this fix, and found that 1.10 has the same issue: it 
also deserializes entry values stored in a partitioned region on export. As 
far as we could tell from code inspection, this issue goes back all the way to 
the first release of Geode. 

*1) Why is the class not resolving?*

Jens and I verified that export will attempt to load the class from the 
deployed jar. We used the given test case to do this. What we found is that if 
we deployed the small *.jar.original jar, it did find the class but then 
failed to initialize it due to dependencies on other classes. If instead we 
deployed the larger *.jar, it would not even find "Class1" because that jar 
puts Class1 under something like BOOT-INF. Jens said it was a different type 
of jar: an executable one. But we were able to verify that export honors 
deployed jars and will load classes from them.


was (Author: dschneider):
Why is the value being deserialized at all?

_The following is based on code on the develop branch (post 1.13)._


 In this particular case it is because the Entry instances being iterated over 
are instances of EntrySnapshot (only used by partitioned regions) instead of 
NonTXEntry instances.
 This code would have saved us and passed the serialized CachedDeserializable 
to "convertToBytes" if the entry had been a NonTXEntry:
{code:java}
    public  SnapshotRecord(LocalRegion region, Entry entry) throws 
IOException {
      key = BlobHelper.serializeToBlob(entry.getKey());
      if (entry instanceof NonTXEntry && region != null) {
        @Released
        Object v =
            ((NonTXEntry) 
entry).getRegionEntry().getValueOffHeapOrDiskWithoutFaultIn(region);
        try {
          value = convertToBytes(v);
        } finally {
          OffHeapHelper.release(v);
        }
      } else {
        value = convertToBytes(entry.getValue());
      }
    }
 
{code}
 But because it was an EntrySnapshot, it falls through to the else branch and 
just calls entry.getValue(), which on an EntrySnapshot always returns the 
deserialized value. This is the getValue() call we see fail because the class is not found.
 I could not find any evidence that we have changed the code that iterates the 
region entries or that we changed the implementation of the entry iteration on 
a partitioned region. It looks like it has used EntrySnapshot instances since 
geode existed. We probably have export tests for partitioned regions but they 
may not check that the value is not being deserialized.
 It would be rather easy to add a method on EntrySnapshot that exposes the 
CachedDeserializable and pass that to convertToBytes which already does the 
right thing with a CachedDeserializable.

> Exporting data causes a ClassNotFoundException
> --
>
> Key: 

[jira] [Comment Edited] (GEODE-8685) Exporting data causes a ClassNotFoundException

2020-11-11 Thread Darrel Schneider (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17230293#comment-17230293
 ] 

Darrel Schneider edited comment on GEODE-8685 at 11/11/20, 11:42 PM:
-

Why is the value being deserialized at all?

_The following is based on code on the develop branch (post 1.13)._


 In this particular case it is because the Entry instances being iterated over 
are instances of EntrySnapshot (only used by partitioned regions) instead of 
NonTXEntry instances.
 This code would have saved us and passed the serialized CachedDeserializable 
to "convertToBytes" if the entry had been a NonTXEntry:
{code:java}
    public  SnapshotRecord(LocalRegion region, Entry entry) throws 
IOException {
      key = BlobHelper.serializeToBlob(entry.getKey());
      if (entry instanceof NonTXEntry && region != null) {
        @Released
        Object v =
            ((NonTXEntry) 
entry).getRegionEntry().getValueOffHeapOrDiskWithoutFaultIn(region);
        try {
          value = convertToBytes(v);
        } finally {
          OffHeapHelper.release(v);
        }
      } else {
        value = convertToBytes(entry.getValue());
      }
    }
 
{code}
 But because it was an EntrySnapshot, it falls through to the else branch and 
just calls entry.getValue(), which on an EntrySnapshot always returns the 
deserialized value. This is the getValue() call we see fail because the class is not found.
 I could not find any evidence that we have changed the code that iterates the 
region entries or that we changed the implementation of the entry iteration on 
a partitioned region. It looks like it has used EntrySnapshot instances since 
geode existed. We probably have export tests for partitioned regions but they 
may not check that the value is not being deserialized.
 It would be rather easy to add a method on EntrySnapshot that exposes the 
CachedDeserializable and pass that to convertToBytes which already does the 
right thing with a CachedDeserializable.


was (Author: dschneider):
Why is the value being deserialized at all?
In this particular case it is because the Entry instances being iterated over 
are instances of EntrySnapshot (only used by partitioned regions) instead of 
NonTXEntry instances.
This code would have saved us and passed the serialized CachedDeserializable to 
"convertToBytes" if the entry had been a NonTXEntry:
{code:java}
    public  SnapshotRecord(LocalRegion region, Entry entry) throws 
IOException {
      key = BlobHelper.serializeToBlob(entry.getKey());
      if (entry instanceof NonTXEntry && region != null) {
        @Released
        Object v =
            ((NonTXEntry) 
entry).getRegionEntry().getValueOffHeapOrDiskWithoutFaultIn(region);
        try {
          value = convertToBytes(v);
        } finally {
          OffHeapHelper.release(v);
        }
      } else {
        value = convertToBytes(entry.getValue());
      }
    }
 
{code}
 But because it was an EntrySnapshot, it falls through to the else branch and 
just calls entry.getValue(), which on an EntrySnapshot always returns the 
deserialized value. This is the getValue() call we see fail because the class is not found.
I could not find any evidence that we have changed the code that iterates the 
region entries or that we changed the implementation of the entry iteration on 
a partitioned region. It looks like it has used EntrySnapshot instances since 
geode existed. We probably have export tests for partitioned regions but they 
may not check that the value is not being deserialized.
It would be rather easy to add a method on EntrySnapshot that exposes the 
CachedDeserializable and pass that to convertToBytes which already does the 
right thing with a CachedDeserializable.

> Exporting data causes a ClassNotFoundException
> --
>
> Key: GEODE-8685
> URL: https://issues.apache.org/jira/browse/GEODE-8685
> Project: Geode
>  Issue Type: Task
>  Components: regions
>Affects Versions: 1.13.0
>Reporter: Anthony Baker
>Assignee: Darrel Schneider
>Priority: Major
>  Labels: GeodeOperationAPI
>
> See 
> [https://lists.apache.org/thread.html/rfa4fc47eb4cb4e75c39d7cb815416bebf2ec233d4db24e37728e922e%40%3Cuser.geode.apache.org%3E.]
>  
> Report is that exporting data whose values are Classes defined in a deployed 
> jar result in a ClassNotFound exception:
> {noformat}
> [error 2020/10/30 08:54:29.317 PDT  tid=0x41] 
> org.apache.geode.cache.execute.FunctionException: 
> org.apache.geode.SerializationException: A ClassNotFoundException was thrown 
> while trying to deserialize cached value.
> java.io.IOException: org.apache.geode.cache.execute.FunctionException: 
> org.apache.geode.SerializationException: A ClassNotFoundException was thrown 
> while trying to deserialize cached value.
> at 
> 

[jira] [Commented] (GEODE-8685) Exporting data causes a ClassNotFoundException

2020-11-11 Thread Darrel Schneider (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17230293#comment-17230293
 ] 

Darrel Schneider commented on GEODE-8685:
-

Why is the value being deserialized at all?
In this particular case it is because the Entry instances being iterated over 
are instances of EntrySnapshot (only used by partitioned regions) instead of 
NonTXEntry instances.
This code would have saved us and passed the serialized CachedDeserializable to 
"convertToBytes" if the entry had been a NonTXEntry:
{code:java}
    public  SnapshotRecord(LocalRegion region, Entry entry) throws 
IOException {
      key = BlobHelper.serializeToBlob(entry.getKey());
      if (entry instanceof NonTXEntry && region != null) {
        @Released
        Object v =
            ((NonTXEntry) 
entry).getRegionEntry().getValueOffHeapOrDiskWithoutFaultIn(region);
        try {
          value = convertToBytes(v);
        } finally {
          OffHeapHelper.release(v);
        }
      } else {
        value = convertToBytes(entry.getValue());
      }
    }
 
{code}
 But because it was an EntrySnapshot, it falls through to the else branch and 
just calls entry.getValue(), which on an EntrySnapshot always returns the 
deserialized value. This is the getValue() call we see fail because the class is not found.
I could not find any evidence that we have changed the code that iterates the 
region entries or that we changed the implementation of the entry iteration on 
a partitioned region. It looks like it has used EntrySnapshot instances since 
geode existed. We probably have export tests for partitioned regions but they 
may not check that the value is not being deserialized.
It would be rather easy to add a method on EntrySnapshot that exposes the 
CachedDeserializable and pass that to convertToBytes which already does the 
right thing with a CachedDeserializable.
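The idea can be illustrated with a self-contained sketch (all class and method 
names below are ours for illustration, not Geode's actual API): when the 
conversion step receives the still-serialized form, it can pass the bytes 
through without ever loading the value's class.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.util.Arrays;

// Minimal sketch (names hypothetical) of why exporting the serialized form
// avoids the ClassNotFoundException: the bytes are copied as-is, so the
// value's class never needs to be on the exporting member's classpath.
public class SnapshotSketch {
  // Stand-in for Geode's CachedDeserializable: holds the serialized bytes.
  static final class CachedValue {
    final byte[] serializedBytes;
    CachedValue(byte[] b) { serializedBytes = b; }
  }

  // Stand-in for convertToBytes: a CachedValue is copied through untouched;
  // any other object would have to be re-serialized (which upstream implies
  // it was deserialized first).
  static byte[] convertToBytes(Object v) throws IOException {
    if (v instanceof CachedValue) {
      return ((CachedValue) v).serializedBytes; // no class loading needed
    }
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
      oos.writeObject(v);
    }
    return bos.toByteArray();
  }

  public static void main(String[] args) throws IOException {
    byte[] blob = {1, 2, 3};
    byte[] out = convertToBytes(new CachedValue(blob));
    // The serialized form passes through untouched.
    System.out.println(Arrays.equals(out, blob)); // prints "true"
  }
}
```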

> Exporting data causes a ClassNotFoundException
> --
>
> Key: GEODE-8685
> URL: https://issues.apache.org/jira/browse/GEODE-8685
> Project: Geode
>  Issue Type: Task
>  Components: regions
>Affects Versions: 1.13.0
>Reporter: Anthony Baker
>Assignee: Darrel Schneider
>Priority: Major
>  Labels: GeodeOperationAPI
>
> See 
> [https://lists.apache.org/thread.html/rfa4fc47eb4cb4e75c39d7cb815416bebf2ec233d4db24e37728e922e%40%3Cuser.geode.apache.org%3E.]
>  
> Report is that exporting data whose values are Classes defined in a deployed 
> jar result in a ClassNotFound exception:
> {noformat}
> [error 2020/10/30 08:54:29.317 PDT  tid=0x41] 
> org.apache.geode.cache.execute.FunctionException: 
> org.apache.geode.SerializationException: A ClassNotFoundException was thrown 
> while trying to deserialize cached value.
> java.io.IOException: org.apache.geode.cache.execute.FunctionException: 
> org.apache.geode.SerializationException: A ClassNotFoundException was thrown 
> while trying to deserialize cached value.
> at 
> org.apache.geode.internal.cache.snapshot.WindowedExporter.export(WindowedExporter.java:106)
> at 
> org.apache.geode.internal.cache.snapshot.RegionSnapshotServiceImpl.exportOnMember(RegionSnapshotServiceImpl.java:361)
> at 
> org.apache.geode.internal.cache.snapshot.RegionSnapshotServiceImpl.save(RegionSnapshotServiceImpl.java:161)
> at 
> org.apache.geode.internal.cache.snapshot.RegionSnapshotServiceImpl.save(RegionSnapshotServiceImpl.java:146)
> at 
> org.apache.geode.management.internal.cli.functions.ExportDataFunction.executeFunction(ExportDataFunction.java:62)
> at 
> org.apache.geode.management.cli.CliFunction.execute(CliFunction.java:37)
> at 
> org.apache.geode.internal.cache.MemberFunctionStreamingMessage.process(MemberFunctionStreamingMessage.java:201)
> at 
> org.apache.geode.distributed.internal.DistributionMessage.scheduleAction(DistributionMessage.java:376)
> at 
> org.apache.geode.distributed.internal.DistributionMessage$1.run(DistributionMessage.java:441)
> at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> at 
> org.apache.geode.distributed.internal.ClusterOperationExecutors.runUntilShutdown(ClusterOperationExecutors.java:442)
> at 
> org.apache.geode.distributed.internal.ClusterOperationExecutors.doFunctionExecutionThread(ClusterOperationExecutors.java:377)
> at 
> org.apache.geode.logging.internal.executors.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:119)
> at java.base/java.lang.Thread.run(Thread.java:834)
> Caused by: org.apache.geode.cache.execute.FunctionException: 
> org.apache.geode.SerializationException: A ClassNotFoundException 

[jira] [Resolved] (GEODE-8541) ReplicateWithExpirationClearIntegrationTest should be in integrationTest instead of test

2020-09-25 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider resolved GEODE-8541.
-
Fix Version/s: 1.14.0
   Resolution: Fixed

> ReplicateWithExpirationClearIntegrationTest should be in integrationTest 
> instead of test
> 
>
> Key: GEODE-8541
> URL: https://issues.apache.org/jira/browse/GEODE-8541
> Project: Geode
>  Issue Type: Test
>  Components: tests
>Reporter: Darrel Schneider
>Assignee: Darrel Schneider
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> ReplicateWithExpirationClearIntegrationTest is currently in the "test" folder 
> which is for unit tests. It should instead be in "integrationTest".



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (GEODE-8541) ReplicateWithExpirationClearIntegrationTest should be in integrationTest instead of test

2020-09-25 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider reassigned GEODE-8541:
---

Assignee: Darrel Schneider

> ReplicateWithExpirationClearIntegrationTest should be in integrationTest 
> instead of test
> 
>
> Key: GEODE-8541
> URL: https://issues.apache.org/jira/browse/GEODE-8541
> Project: Geode
>  Issue Type: Test
>  Components: tests
>Reporter: Darrel Schneider
>Assignee: Darrel Schneider
>Priority: Major
>
> ReplicateWithExpirationClearIntegrationTest is currently in the "test" folder 
> which is for unit tests. It should instead be in "integrationTest".



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (GEODE-8541) ReplicateWithExpirationClearIntegrationTest should be in integrationTest instead of test

2020-09-25 Thread Darrel Schneider (Jira)
Darrel Schneider created GEODE-8541:
---

 Summary: ReplicateWithExpirationClearIntegrationTest should be in 
integrationTest instead of test
 Key: GEODE-8541
 URL: https://issues.apache.org/jira/browse/GEODE-8541
 Project: Geode
  Issue Type: Test
  Components: tests
Reporter: Darrel Schneider


ReplicateWithExpirationClearIntegrationTest is currently in the "test" folder 
which is for unit tests. It should instead be in "integrationTest".



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (GEODE-8515) Redis PING should respond appropriately when called from within a SUBSCRIBE/PSUBSCRIBE

2020-09-24 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider resolved GEODE-8515.
-
Fix Version/s: 1.14.0
   Resolution: Fixed

> Redis PING should respond appropriately when called from within a 
> SUBSCRIBE/PSUBSCRIBE
> --
>
> Key: GEODE-8515
> URL: https://issues.apache.org/jira/browse/GEODE-8515
> Project: Geode
>  Issue Type: Improvement
>  Components: redis
>Reporter: Sarah Abbey
>Assignee: Sarah Abbey
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> From PING documentation (https://redis.io/commands/ping):
> If the client is subscribed to a channel or a pattern, it will instead return 
> a multi-bulk with a "pong" in the first position and an empty bulk in the 
> second position, unless an argument is provided in which case it returns a 
> copy of the argument.
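The documented reply rules can be illustrated with a small self-contained 
sketch (the helper names here are ours, not Geode's implementation):

```java
import java.util.Arrays;
import java.util.List;

// Sketch of the PING reply logic described above (names hypothetical):
// outside subscribe mode PING returns "PONG" (or echoes its argument);
// while subscribed it returns a two-element multi-bulk reply.
public class PingReply {
  static List<String> pingWhileSubscribed(String arg) {
    // multi-bulk: "pong" first, then the argument (empty bulk when absent)
    return Arrays.asList("pong", arg == null ? "" : arg);
  }

  static String pingNormal(String arg) {
    return arg == null ? "PONG" : arg;
  }

  public static void main(String[] args) {
    System.out.println(pingWhileSubscribed(null)); // prints "[pong, ]"
    System.out.println(pingWhileSubscribed("hi")); // prints "[pong, hi]"
  }
}
```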



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (GEODE-8504) implement (but not support) Redis Info command

2020-09-22 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider resolved GEODE-8504.
-
Fix Version/s: 1.14.0
   Resolution: Fixed

> implement (but not support) Redis Info command 
> ---
>
> Key: GEODE-8504
> URL: https://issues.apache.org/jira/browse/GEODE-8504
> Project: Geode
>  Issue Type: Improvement
>Reporter: John Hutchison
>Assignee: Darrel Schneider
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> create a no-op command to support apps that use this command 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (GEODE-8504) implement (but not support) Redis Info command

2020-09-22 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider reassigned GEODE-8504:
---

Assignee: Darrel Schneider

> implement (but not support) Redis Info command 
> ---
>
> Key: GEODE-8504
> URL: https://issues.apache.org/jira/browse/GEODE-8504
> Project: Geode
>  Issue Type: Improvement
>Reporter: John Hutchison
>Assignee: Darrel Schneider
>Priority: Major
>  Labels: pull-request-available
>
> create a no-op command to support apps that use this command 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (GEODE-8500) Tests for Redis QUIT command

2020-09-17 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider resolved GEODE-8500.
-
Fix Version/s: 1.14.0
   Resolution: Fixed

> Tests for Redis QUIT command
> 
>
> Key: GEODE-8500
> URL: https://issues.apache.org/jira/browse/GEODE-8500
> Project: Geode
>  Issue Type: Test
>  Components: redis
>Reporter: Sarah Abbey
>Assignee: Raymond Ingles
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (GEODE-8500) Tests for Redis QUIT command

2020-09-17 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider reassigned GEODE-8500:
---

Assignee: Raymond Ingles  (was: Sarah Abbey)

> Tests for Redis QUIT command
> 
>
> Key: GEODE-8500
> URL: https://issues.apache.org/jira/browse/GEODE-8500
> Project: Geode
>  Issue Type: Test
>  Components: redis
>Reporter: Sarah Abbey
>Assignee: Raymond Ingles
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (GEODE-8492) Redis "clients" statistic goes negative

2020-09-16 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider resolved GEODE-8492.
-
Fix Version/s: 1.14.0
   Resolution: Fixed

> Redis "clients" statistic goes negative
> ---
>
> Key: GEODE-8492
> URL: https://issues.apache.org/jira/browse/GEODE-8492
> Project: Geode
>  Issue Type: Bug
>  Components: redis
>Reporter: Sarah Abbey
>Assignee: Sarah Abbey
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> When running a long-running app, we noticed that our "clients" statistic was 
> negative. It should always be greater than or equal to 0.
> It seems the call to decrement the number of clients was being invoked 
> multiple times for each client.
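One common fix for this kind of bug is to make the decrement idempotent per 
client. A minimal self-contained sketch (all names here are ours, not the 
actual Geode code):

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicLong;

// Sketch (names hypothetical): guard the per-client decrement so that
// multiple disconnect notifications for the same client only count once,
// keeping the "clients" gauge from going negative.
public class ClientStats {
  private static final AtomicLong clients = new AtomicLong();
  private final AtomicBoolean removed = new AtomicBoolean(false);

  public ClientStats() {
    clients.incrementAndGet(); // one increment per connected client
  }

  public void onDisconnect() {
    // Channel-closed and error paths may both fire; only the first wins.
    if (removed.compareAndSet(false, true)) {
      clients.decrementAndGet();
    }
  }

  public static long clientCount() {
    return clients.get();
  }
}
```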



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (GEODE-8493) idle clients can cause server stuck thread warnings

2020-09-14 Thread Darrel Schneider (Jira)
Darrel Schneider created GEODE-8493:
---

 Summary: idle clients can cause server stuck thread warnings
 Key: GEODE-8493
 URL: https://issues.apache.org/jira/browse/GEODE-8493
 Project: Geode
  Issue Type: Bug
  Components: redis
Reporter: Darrel Schneider


Idle connection threads may produce warning messages like:

{{[vm1] [warn 2020/09/02 14:31:36.580 PDT  tid=0x1c] Thread 
<87> (0x57) that was executed at <02 Sep 2020 14:29:12 PDT> has been stuck for 
<144.113 seconds> and number of thread monitor iteration <2> 
[vm1] Thread Name  state 
[vm1] Waiting on 

[vm1] Executor Group 
[vm1] Monitored metric 
[vm1] Thread stack:
[vm1] sun.misc.Unsafe.park(Native Method)
[vm1] java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
[vm1] 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
[vm1] 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
[vm1] 
org.apache.geode.redis.internal.netty.ExecutionHandlerContext.takeCommandFromQueue(ExecutionHandlerContext.java:139)
[vm1] 
org.apache.geode.redis.internal.netty.ExecutionHandlerContext.processCommandQueue(ExecutionHandlerContext.java:125)
[vm1] 
org.apache.geode.redis.internal.netty.ExecutionHandlerContext$$Lambda$320/28815321.run(Unknown
 Source)
[vm1] java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)}}

if the thread (client) becomes idle for some time. These messages will probably 
worry users. We should be able to safely switch to having the 
{{ExecutionHandlerContext}} simply run its own thread to process the command 
queue.
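A minimal self-contained sketch of that approach (class and method names are 
assumptions, not the actual Geode code): instead of parking a pooled, 
thread-monitored executor thread on the queue, which trips the stuck-thread 
detector whenever a client goes idle, give each context its own unmonitored 
thread that blocks on take().

```java
import java.util.concurrent.LinkedBlockingQueue;

// Sketch (names hypothetical): a dedicated per-connection worker thread
// draining a command queue. Blocking in take() while the client is idle is
// expected here, so no thread-monitor warning applies.
public class CommandQueueRunner {
  private final LinkedBlockingQueue<Runnable> commandQueue = new LinkedBlockingQueue<>();
  private final Thread worker;

  public CommandQueueRunner(String clientName) {
    worker = new Thread(this::processCommandQueue, "redis-commands-" + clientName);
    worker.setDaemon(true);
    worker.start();
  }

  public void submit(Runnable command) {
    commandQueue.add(command);
  }

  private void processCommandQueue() {
    try {
      while (true) {
        commandQueue.take().run(); // blocks while the client is idle
      }
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt(); // shutdown path
    }
  }
}
```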



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (GEODE-8393) CI: CrashAndNoRepeatDUnitTest > givenServerCrashesDuringAPPEND_thenDataIsNotLost FAILED

2020-08-13 Thread Darrel Schneider (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17177353#comment-17177353
 ] 

Darrel Schneider edited comment on GEODE-8393 at 8/13/20, 10:33 PM:


This test has been disabled (in 4637498e1f933d13f298b10b65dbc2af776c5c98) until 
we can figure out why it fails intermittently.


was (Author: dschneider):
This test has been disabled until we can figure out why it fails intermittently

> CI: CrashAndNoRepeatDUnitTest > 
> givenServerCrashesDuringAPPEND_thenDataIsNotLost FAILED
> ---
>
> Key: GEODE-8393
> URL: https://issues.apache.org/jira/browse/GEODE-8393
> Project: Geode
>  Issue Type: Bug
>  Components: redis
>Reporter: Jinmei Liao
>Assignee: Darrel Schneider
>Priority: Major
>  Labels: CI, pull-request-available
>
> https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/DistributedTestOpenJDK11/builds/384
> org.apache.geode.redis.internal.executor.CrashAndNoRepeatDUnitTest > 
> givenServerCrashesDuringAPPEND_thenDataIsNotLost FAILED
> java.util.concurrent.ExecutionException: java.lang.AssertionError: 
> unexpected 0 at index 3967 in string 
> 

[jira] [Commented] (GEODE-8393) CI: CrashAndNoRepeatDUnitTest > givenServerCrashesDuringAPPEND_thenDataIsNotLost FAILED

2020-08-13 Thread Darrel Schneider (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17177353#comment-17177353
 ] 

Darrel Schneider commented on GEODE-8393:
-

This test has been disabled until we can figure out why it fails intermittently

> CI: CrashAndNoRepeatDUnitTest > 
> givenServerCrashesDuringAPPEND_thenDataIsNotLost FAILED
> ---
>
> Key: GEODE-8393
> URL: https://issues.apache.org/jira/browse/GEODE-8393
> Project: Geode
>  Issue Type: Bug
>  Components: redis
>Reporter: Jinmei Liao
>Assignee: Darrel Schneider
>Priority: Major
>  Labels: CI, pull-request-available
>
> https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/DistributedTestOpenJDK11/builds/384
> org.apache.geode.redis.internal.executor.CrashAndNoRepeatDUnitTest > 
> givenServerCrashesDuringAPPEND_thenDataIsNotLost FAILED
> java.util.concurrent.ExecutionException: java.lang.AssertionError: 
> unexpected 0 at index 3967 in string 
> 

[jira] [Resolved] (GEODE-8427) change redis to disconnect client instead of failing with memberDeparted

2020-08-13 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider resolved GEODE-8427.
-
Fix Version/s: 1.14.0
   Resolution: Fixed

> change redis to disconnect client instead of failing with memberDeparted
> 
>
> Key: GEODE-8427
> URL: https://issues.apache.org/jira/browse/GEODE-8427
> Project: Geode
>  Issue Type: Improvement
>  Components: redis
>Reporter: Darrel Schneider
>Assignee: Darrel Schneider
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> Currently, when a redis operation fails because one of the servers departed 
> while it was in progress, an exception is thrown that contains 
> "memberDeparted". Clients are required to handle this non-standard error 
> message and decide whether the operation should be retried.
> A better approach is to have the server close the client connection that the 
> operation came in on. Some clients already handle this and support features 
> like auto reconnect. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (GEODE-8379) CI failure: TimeIntegrationTest.timeCommandRespondsWIthTwoValue

2020-08-13 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider resolved GEODE-8379.
-
Fix Version/s: 1.14.0
   Resolution: Fixed

> CI failure: TimeIntegrationTest.timeCommandRespondsWIthTwoValue
> ---
>
> Key: GEODE-8379
> URL: https://issues.apache.org/jira/browse/GEODE-8379
> Project: Geode
>  Issue Type: Bug
>  Components: redis
>Reporter: Bruce J Schuchardt
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> This relatively new test failed in a Windows CI run: 
> [https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/WindowsIntegrationTestOpenJDK11/builds/339]
>  
> {noformat}
> org.apache.geode.redis.internal.executor.server.TimeIntegrationTest > 
> timeCommandRespondsWIthTwoValues FAILED
> 16:38:28java.lang.AssertionError: 
> 16:38:28Expecting:
> 16:38:28 <0L>
> 16:38:28to be greater than:
> 16:38:28 <0L> 
> 16:38:28at 
> org.apache.geode.redis.internal.executor.server.TimeIntegrationTest.timeCommandRespondsWIthTwoValues(TimeIntegrationTest.java:57)
> 16:48:28 {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-2503) Add first class support for list

2020-08-13 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-2503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider updated GEODE-2503:

Component/s: redis

> Add first class support for list
> 
>
> Key: GEODE-2503
> URL: https://issues.apache.org/jira/browse/GEODE-2503
> Project: Geode
>  Issue Type: New Feature
>  Components: redis
>Reporter: Swapnil Bawaskar
>Priority: Major
>
> In addition to Region, which implements {{java.util.concurrent.ConcurrentMap}}, 
> we should add first class support for other data structures like a list, i.e. 
> implement {{java.util.concurrent.ConcurrentLinkedDeque}}.
> Our Redis implementation currently supports Redis lists that are backed by a 
> Partitioned Region; a scalable list implementation would be useful for 
> non-redis users as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (GEODE-8177) Change Redis Rename Functions to Make use of Striped Executor

2020-08-13 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider resolved GEODE-8177.
-
Fix Version/s: 1.14.0
   Resolution: Fixed

> Change Redis Rename Functions to Make use of Striped Executor 
> --
>
> Key: GEODE-8177
> URL: https://issues.apache.org/jira/browse/GEODE-8177
> Project: Geode
>  Issue Type: Improvement
>Reporter: John Hutchison
>Priority: Major
> Fix For: 1.14.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-2504) Add first class support for SortedSet

2020-08-13 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-2504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider updated GEODE-2504:

Component/s: redis

> Add first class support for SortedSet
> -
>
> Key: GEODE-2504
> URL: https://issues.apache.org/jira/browse/GEODE-2504
> Project: Geode
>  Issue Type: New Feature
>  Components: redis
>Reporter: Swapnil Bawaskar
>Priority: Major
>
> In addition to Region, which implements {{java.util.concurrent.ConcurrentMap}}, 
> we should add first class support for other data structures like SortedMap, 
> i.e. implement {{java.util.concurrent.ConcurrentSkipListMap}}.
> Our Redis implementation currently supports Redis SortedSets that are backed 
> by a Partitioned Region; a scalable SortedMap implementation would be useful 
> for non-redis users as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (GEODE-2468) The Redis adapter (start server --name=server1 --r...

2020-08-13 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-2468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider resolved GEODE-2468.
-
Fix Version/s: 1.14.0
   Resolution: Fixed

> The Redis adapter (start server --name=server1 --r...
> -
>
> Key: GEODE-2468
> URL: https://issues.apache.org/jira/browse/GEODE-2468
> Project: Geode
>  Issue Type: Improvement
>  Components: redis
>Reporter: Gregory Green
>Priority: Major
> Fix For: 1.14.0
>
>
> The Redis adapter (start server --name=server1 --redis-port=11211 
> --redis-bind-address=127.0.0.1  --use-cluster-configuration) does not appear 
> to handle hash keys correctly.
> The following example works from the Redis CLI:
> localhost:11211>  HSET companies name "John Smith"
> Using a  HSET :id  .. produces an error
> Example:
> localhost:11211>  HSET companies:1000 name "John Smith"
> [Server error]
> [fine 2017/02/10 16:04:33.289 EST server1  
> tid=0x6a] Region names may only be alphanumeric and may contain hyphens or 
> underscores: companies: 1000
> java.lang.IllegalArgumentException: Region names may only be alphanumeric and 
> may contain hyphens or underscores: companies: 1000
> at 
> org.apache.geode.internal.cache.LocalRegion.validateRegionName(LocalRegion.java:7618)
> at 
> org.apache.geode.internal.cache.GemFireCacheImpl.createVMRegion(GemFireCacheImpl.java:3201)
> at 
> org.apache.geode.internal.cache.GemFireCacheImpl.basicCreateRegion(GemFireCacheImpl.java:3181)
> at 
> org.apache.geode.internal.cache.GemFireCacheImpl.createRegion(GemFireCacheImpl.java:3169)
> at org.apache.geode.cache.RegionFactory.create(RegionFactory.java:762)
> at 
> org.apache.geode.management.internal.cli.functions.RegionCreateFunction.createRegion(RegionCreateFunction.java:355)
> at 
> org.apache.geode.management.internal.cli.functions.RegionCreateFunction.execute(RegionCreateFunction.java:90)
> at 
> org.apache.geode.internal.cache.execute.AbstractExecution.executeFunctionLocally(AbstractExecution.java:333)
> at 
> org.apache.geode.internal.cache.execute.AbstractExecution$2.run(AbstractExecution.java:303)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at 
> org.apache.geode.distributed.internal.DistributionManager.runUntilShutdown(DistributionManager.java:621)
> at 
> org.apache.geode.distributed.internal.DistributionManager$9$1.run(DistributionManager.java:1067)
> at java.lang.Thread.run(Thread.java:745)
> //Example Spring Data Redis Object sample
> @Data
> @EqualsAndHashCode()
> @RedisHash(value="companies")
> @NoArgsConstructor
> public class Company
> {
>   private @Id String id;
>
> //Repository
> public interface CompanyRepository extends CrudRepository 
> {
>  
> }
> //When saving using a repository
> repository.save(this.myCompany);
> [Same Server error]
> java.lang.IllegalArgumentException: Region names may only be alphanumeric and 
> may contain hyphens or underscores: 
> companies:f05405c2-86f2-4aaf-bd0c-6fecd483bf28
> at 
> org.apache.geode.internal.cache.LocalRegion.validateRegionName(LocalRegion.java:7618)
> at 
> org.apache.geode.internal.cache.GemFireCacheImpl.createVMRegion(GemFireCacheImpl.java:3201)
> at 
> org.apache.geode.internal.cache.GemFireCacheImpl.basicCreateRegion(GemFireCacheImpl.java:3181)
> at 
> org.apache.geode.internal.cache.GemFireCacheImpl.createRegion(GemFireCacheImpl.java:3169)
> at org.apache.geode.cache.RegionFactory.create(RegionFactory.java:762)
> at 
> org.apache.geode.management.internal.cli.functions.RegionCreateFunction.createRegion(RegionCreateFunction.java:355)
> at 
> org.apache.geode.management.internal.cli.functions.RegionCreateFunction.execute(RegionCreateFunction.java:90)
> at 
> org.apache.geode.internal.cache.execute.AbstractExecution.executeFunctionLocally(AbstractExecution.java:333)
> at 
> org.apache.geode.internal.cache.execute.AbstractExecution$2.run(AbstractExecution.java:303)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at 
> org.apache.geode.distributed.internal.DistributionManager.runUntilShutdown(DistributionManager.java:621)
> at 
> org.apache.geode.distributed.internal.DistributionManager$9$1.run(DistributionManager.java:1067)
> at java.lang.Thread.run(Thread.java:745)
> *Reporter*: Gregory Green
> *E-mail*: [mailto:ggr...@pivotoal.io]
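One conceivable direction (purely a sketch; this helper is not part of Geode) 
is to escape the characters that are illegal in region names when mapping 
Redis hash keys such as {{companies:1000}} onto regions:

```java
// Hypothetical helper (not Geode's API): map a Redis key onto a legal Geode
// region name by escaping anything outside [A-Za-z0-9_-] as '_' plus the
// character's hex code. Note this simple scheme is not collision-proof
// (an input '_' is passed through), so it is illustrative only.
public class RegionNameEscaper {
  static String escape(String redisKey) {
    StringBuilder sb = new StringBuilder();
    for (char ch : redisKey.toCharArray()) {
      if (Character.isLetterOrDigit(ch) || ch == '-' || ch == '_') {
        sb.append(ch);
      } else {
        sb.append('_').append(Integer.toHexString(ch)); // e.g. ':' -> "_3a"
      }
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    System.out.println(escape("companies:1000")); // prints "companies_3a1000"
  }
}
```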



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-2468) The Redis adapter (start server --name=server1 --r...

2020-08-13 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-2468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider updated GEODE-2468:

Component/s: redis

> The Redis adapter (start server --name=server1 --r...
> -
>
> Key: GEODE-2468
> URL: https://issues.apache.org/jira/browse/GEODE-2468
> Project: Geode
>  Issue Type: Improvement
>  Components: redis
>Reporter: Gregory Green
>Priority: Major
>
> The Redis adapter (start server --name=server1 --redis-port=11211 
> --redis-bind-address=127.0.0.1  --use-cluster-configuration) does not appear 
> to handle hash keys correctly.
> The following example works from the Redis CLI:
> localhost:11211>  HSET companies name "John Smith"
> Using a  HSET :id  .. produces an error
> Example:
> localhost:11211>  HSET companies:1000 name "John Smith"
> [Server error]
> [fine 2017/02/10 16:04:33.289 EST server1  
> tid=0x6a] Region names may only be alphanumeric and may contain hyphens or 
> underscores: companies: 1000
> java.lang.IllegalArgumentException: Region names may only be alphanumeric and 
> may contain hyphens or underscores: companies: 1000
> at 
> org.apache.geode.internal.cache.LocalRegion.validateRegionName(LocalRegion.java:7618)
> at 
> org.apache.geode.internal.cache.GemFireCacheImpl.createVMRegion(GemFireCacheImpl.java:3201)
> at 
> org.apache.geode.internal.cache.GemFireCacheImpl.basicCreateRegion(GemFireCacheImpl.java:3181)
> at 
> org.apache.geode.internal.cache.GemFireCacheImpl.createRegion(GemFireCacheImpl.java:3169)
> at org.apache.geode.cache.RegionFactory.create(RegionFactory.java:762)
> at 
> org.apache.geode.management.internal.cli.functions.RegionCreateFunction.createRegion(RegionCreateFunction.java:355)
> at 
> org.apache.geode.management.internal.cli.functions.RegionCreateFunction.execute(RegionCreateFunction.java:90)
> at 
> org.apache.geode.internal.cache.execute.AbstractExecution.executeFunctionLocally(AbstractExecution.java:333)
> at 
> org.apache.geode.internal.cache.execute.AbstractExecution$2.run(AbstractExecution.java:303)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at 
> org.apache.geode.distributed.internal.DistributionManager.runUntilShutdown(DistributionManager.java:621)
> at 
> org.apache.geode.distributed.internal.DistributionManager$9$1.run(DistributionManager.java:1067)
> at java.lang.Thread.run(Thread.java:745)
> //Example Spring Data Redis Object sample
> @Data
> @EqualsAndHashCode()
> @RedisHash(value="companies")
> @NoArgsConstructor
> public class Company
> {
>   private @Id String id;
>
> //Repository
> public interface CompanyRepository extends CrudRepository 
> {
>  
> }
> //When saving using a repository
> repository.save(this.myCompany);
> [Same Server error]
> java.lang.IllegalArgumentException: Region names may only be alphanumeric and 
> may contain hyphens or underscores: 
> companies:f05405c2-86f2-4aaf-bd0c-6fecd483bf28
> at 
> org.apache.geode.internal.cache.LocalRegion.validateRegionName(LocalRegion.java:7618)
> at 
> org.apache.geode.internal.cache.GemFireCacheImpl.createVMRegion(GemFireCacheImpl.java:3201)
> at 
> org.apache.geode.internal.cache.GemFireCacheImpl.basicCreateRegion(GemFireCacheImpl.java:3181)
> at 
> org.apache.geode.internal.cache.GemFireCacheImpl.createRegion(GemFireCacheImpl.java:3169)
> at org.apache.geode.cache.RegionFactory.create(RegionFactory.java:762)
> at 
> org.apache.geode.management.internal.cli.functions.RegionCreateFunction.createRegion(RegionCreateFunction.java:355)
> at 
> org.apache.geode.management.internal.cli.functions.RegionCreateFunction.execute(RegionCreateFunction.java:90)
> at 
> org.apache.geode.internal.cache.execute.AbstractExecution.executeFunctionLocally(AbstractExecution.java:333)
> at 
> org.apache.geode.internal.cache.execute.AbstractExecution$2.run(AbstractExecution.java:303)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at 
> org.apache.geode.distributed.internal.DistributionManager.runUntilShutdown(DistributionManager.java:621)
> at 
> org.apache.geode.distributed.internal.DistributionManager$9$1.run(DistributionManager.java:1067)
> at java.lang.Thread.run(Thread.java:745)
> *Reporter*: Gregory Green
> *E-mail*: [mailto:ggr...@pivotoal.io]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (GEODE-7910) Update RENAME command in Geode Redis to match native Redis

2020-08-13 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider resolved GEODE-7910.
-
Fix Version/s: 1.14.0
   Resolution: Fixed

lists and sorted sets removed from 1.14

> Update RENAME command in Geode Redis to match native Redis
> --
>
> Key: GEODE-7910
> URL: https://issues.apache.org/jira/browse/GEODE-7910
> Project: Geode
>  Issue Type: Improvement
>  Components: redis
>Reporter: Raymond Ingles
>Priority: Major
> Fix For: 1.14.0
>
>
> See the tests marked with TODO in RenameDockerAcceptanceTest.java; the RENAME 
> operation currently does not work for sorted sets or lists in Geode Redis.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-7910) Update RENAME command in Geode Redis to match native Redis

2020-08-13 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider updated GEODE-7910:

Component/s: redis

> Update RENAME command in Geode Redis to match native Redis
> --
>
> Key: GEODE-7910
> URL: https://issues.apache.org/jira/browse/GEODE-7910
> Project: Geode
>  Issue Type: Improvement
>  Components: redis
>Reporter: Raymond Ingles
>Priority: Major
>
> See the tests marked with TODO in RenameDockerAcceptanceTest.java; the RENAME 
> operation currently does not work for sorted sets or lists in Geode Redis.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-8101) HsetDUnitTest loses connection to one remote process (broken pipe)

2020-08-13 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider updated GEODE-8101:

Labels: flaky  (was: )

> HsetDUnitTest loses connection to one remote process (broken pipe)
> --
>
> Key: GEODE-8101
> URL: https://issues.apache.org/jira/browse/GEODE-8101
> Project: Geode
>  Issue Type: Bug
>  Components: redis
>Reporter: Bill Burcham
>Assignee: Jens Deppe
>Priority: Major
>  Labels: flaky
>
> https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/DistributedTestOpenJDK11/builds/146
> A test fails trying to communicate with a remote process (through jedis2):
> {code}
> org.apache.geode.redis.executors.hash.HsetDUnitTest > 
> should_distributeDataAmongMultipleServers_givenMultipleClients FAILED
> redis.clients.jedis.exceptions.JedisConnectionException: 
> java.net.SocketException: Broken pipe (Write failed)
> at redis.clients.jedis.Connection.flush(Connection.java:308)
> at 
> redis.clients.jedis.Connection.getBinaryMultiBulkReply(Connection.java:269)
> at redis.clients.jedis.Jedis.hgetAll(Jedis.java:942)
> at 
> org.apache.geode.redis.executors.hash.HsetDUnitTest.should_distributeDataAmongMultipleServers_givenMultipleClients(HsetDUnitTest.java:121)
> Caused by:
> java.net.SocketException: Broken pipe (Write failed)
> at java.net.SocketOutputStream.socketWrite0(Native Method)
> at 
> java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:110)
> at java.net.SocketOutputStream.write(SocketOutputStream.java:150)
> at 
> redis.clients.jedis.util.RedisOutputStream.flushBuffer(RedisOutputStream.java:52)
> at 
> redis.clients.jedis.util.RedisOutputStream.flush(RedisOutputStream.java:133)
> at redis.clients.jedis.Connection.flush(Connection.java:305)
> {code}
> and the @After method fails trying to communicate (disconnect) to the same 
> remote process (again through jedis2):
> {code}
> org.apache.geode.redis.executors.hash.HsetDUnitTest > classMethod FAILED
> redis.clients.jedis.exceptions.JedisConnectionException: 
> java.net.SocketException: Broken pipe (Write failed)
> at redis.clients.jedis.Connection.disconnect(Connection.java:222)
> at redis.clients.jedis.BinaryClient.disconnect(BinaryClient.java:918)
> at redis.clients.jedis.BinaryJedis.disconnect(BinaryJedis.java:1898)
> at 
> org.apache.geode.redis.executors.hash.HsetDUnitTest.tearDown(HsetDUnitTest.java:104)
> Caused by:
> java.net.SocketException: Broken pipe (Write failed)
> at java.net.SocketOutputStream.socketWrite0(Native Method)
> at 
> java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:110)
> at java.net.SocketOutputStream.write(SocketOutputStream.java:150)
> at 
> redis.clients.jedis.util.RedisOutputStream.flushBuffer(RedisOutputStream.java:52)
> at 
> redis.clients.jedis.util.RedisOutputStream.flush(RedisOutputStream.java:133)
> at redis.clients.jedis.Connection.disconnect(Connection.java:218)
> ... 3 more
> {code}
> 5 tests failed in all:
> {code}
> 2020-05-08 20:46:08.420 + Completed test 
> org.apache.geode.redis.executors.hash.HsetDUnitTest 
> should_distributeDataAmongMultipleServers_givenMultipleClientsOnSameServer_addingDifferentDataToSameSetConcurrently
>  with result: FAILURE
> 2020-05-08 20:46:15.914 + Completed test 
> org.apache.geode.redis.executors.hash.HsetDUnitTest 
> should_distributeDataAmongMultipleServers_givenMultipleClients_addingToDifferentHashesConcurrently
>  with result: FAILURE
> 2020-05-08 20:46:18.765 + Completed test 
> org.apache.geode.redis.executors.hash.HsetDUnitTest 
> should_distributeDataAmongMultipleServers_givenMultipleClients with result: 
> FAILURE
> 2020-05-08 20:46:21.417 + Completed test 
> org.apache.geode.redis.executors.hash.HsetDUnitTest 
> should_distributeDataAmongMultipleServers_givenMultipleClients_addingDifferentDataToSameHashConcurrently
>  with result: FAILURE
> 2020-05-08 20:46:31.408 + Completed test 
> org.apache.geode.redis.executors.hash.HsetDUnitTest classMethod with result: 
> FAILURE
> {code}
> This might be due to a CI infrastructure failure, or perhaps a crash in that 
> remote JVM.
> 100 runs of 
> should_distributeDataAmongMultipleServers_givenMultipleClientsOnSameServer_addingDifferentDataToSameSetConcurrently
>  in IntelliJ were all successful.





[jira] [Resolved] (GEODE-8409) SUNIONSTORE, SINTERSTORE, and SDIFFSTORE should delete the target key even if target exists and is not a set

2020-08-13 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider resolved GEODE-8409.
-
Resolution: Fixed

> SUNIONSTORE, SINTERSTORE, and SDIFFSTORE should delete the target key even if 
> target exists and is not a set
> 
>
> Key: GEODE-8409
> URL: https://issues.apache.org/jira/browse/GEODE-8409
> Project: Geode
>  Issue Type: Bug
>  Components: redis
>Reporter: Darrel Schneider
>Assignee: Darrel Schneider
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> This sequence:
>  
> {{SET setres xxx
> SUNIONSTORE setres foo111 bar222}}
> ({{foo111}} and {{bar222}} do not exist)
> {{setres}} should be deleted and we should get a response of
> {{(integer) 0}}
> Instead we get
> {{(error) WRONGTYPE Operation against a key holding the wrong kind of value}}
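
For illustration, the expected semantics can be sketched as a small in-memory model. This is a hypothetical sketch under assumed names (SunionStoreSketch and its methods are illustrative), not Geode's actual implementation:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical in-memory model of the SUNIONSTORE semantics described
// above: when the union of the source keys is empty, the destination key
// must be deleted (even if it currently holds a non-set value) and the
// reply is 0, rather than a WRONGTYPE error.
public class SunionStoreSketch {
    private final Map<String, Object> db = new HashMap<>();

    public void set(String key, String value) {
        db.put(key, value); // plain string value, as in "SET setres xxx"
    }

    @SuppressWarnings("unchecked")
    public int sunionstore(String destination, String... sources) {
        Set<String> union = new HashSet<>();
        for (String source : sources) {
            Object value = db.get(source);
            if (value instanceof Set) {
                union.addAll((Set<String>) value);
            }
            // missing keys are treated as empty sets
        }
        if (union.isEmpty()) {
            db.remove(destination); // delete the target, whatever its type
        } else {
            db.put(destination, union);
        }
        return union.size();
    }

    public boolean exists(String key) {
        return db.containsKey(key);
    }
}
```

With this model, the failing sequence from the description returns 0 and deletes the string-valued target key.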





[jira] [Assigned] (GEODE-8427) change redis to disconnect client instead of failing with memberDeparted

2020-08-13 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider reassigned GEODE-8427:
---

Assignee: Darrel Schneider

> change redis to disconnect client instead of failing with memberDeparted
> 
>
> Key: GEODE-8427
> URL: https://issues.apache.org/jira/browse/GEODE-8427
> Project: Geode
>  Issue Type: Improvement
>  Components: redis
>Reporter: Darrel Schneider
>Assignee: Darrel Schneider
>Priority: Major
>
> Currently, when a redis operation fails because one of the servers departed 
> while it was in progress, an exception is thrown that contains 
> "memberDeparted". Clients are required to handle this non-standard error 
> message and decide whether the operation should be retried.
> A better approach is to have the server close the client connection that the 
> operation came in on. Some clients already handle this and support features 
> like auto-reconnect. 
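
As a hedged sketch of the client-side behavior this proposal relies on: when the server closes the connection instead of returning a special error, a client can treat the disconnect as a generic retriable condition. The exception type, interface, and retry count below are illustrative, not part of any real Redis client API:

```java
// Hypothetical retry-on-disconnect wrapper: if the server drops the
// connection mid-operation, re-run the operation after reconnecting,
// up to a bounded number of attempts.
public class RetryOnDisconnect {
    public static class ConnectionClosedException extends RuntimeException {}

    public interface Operation<T> {
        T run();
    }

    // Re-runs the operation after a server-initiated disconnect,
    // up to maxAttempts times, then rethrows the last failure.
    public static <T> T callWithReconnect(Operation<T> operation, int maxAttempts) {
        ConnectionClosedException last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return operation.run();
            } catch (ConnectionClosedException e) {
                last = e; // a real client would reconnect here before retrying
            }
        }
        throw last;
    }
}
```

The point of the design change is that a plain disconnect fits this generic pattern, whereas a "memberDeparted" error string forces client-specific parsing.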





[jira] [Resolved] (GEODE-8092) redis dbSize on empty redis server returns 2

2020-08-13 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider resolved GEODE-8092.
-
Fix Version/s: 1.14.0
   Resolution: Fixed

This now returns the size of the redis data region

> redis dbSize on empty redis server returns 2
> 
>
> Key: GEODE-8092
> URL: https://issues.apache.org/jira/browse/GEODE-8092
> Project: Geode
>  Issue Type: Bug
>  Components: redis
>Reporter: Darrel Schneider
>Priority: Major
>  Labels: easy-fix
> Fix For: 1.14.0
>
>
> On an empty redis server dbSize should return 0,
> but currently it is always 2 more than it should be.
> The bug is caused by loading 5 region names into the keyRegistrar while 
> NUM_DEFAULT_KEYS in RedisConstants is 3 (it should be 5).
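
The arithmetic behind the off-by-two can be sketched as follows (the class and field names are hypothetical, not Geode's actual code): dbSize subtracts a constant count of internal keys from the number of registered keys, so an undercounted constant inflates the result.

```java
// Illustrative model of the bug described above: five internal region
// names are registered, but the NUM_DEFAULT_KEYS constant says 3, so an
// empty server reports 5 - 3 = 2 instead of 5 - 5 = 0.
public class DbSizeSketch {
    static final int INTERNAL_REGION_NAMES = 5; // loaded into the keyRegistrar

    static long dbSize(long totalRegisteredKeys, int numDefaultKeys) {
        return totalRegisteredKeys - numDefaultKeys;
    }
}
```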





[jira] [Resolved] (GEODE-4479) Remove singleton calls from all tests in org.apache.geode.redis

2020-08-13 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-4479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider resolved GEODE-4479.
-
Fix Version/s: 1.14.0
   Resolution: Fixed

ConcurrentStartTest no longer exists

> Remove singleton calls from all tests in org.apache.geode.redis
> ---
>
> Key: GEODE-4479
> URL: https://issues.apache.org/jira/browse/GEODE-4479
> Project: Geode
>  Issue Type: Sub-task
>  Components: redis, tests
>Reporter: Kirk Lund
>Priority: Major
> Fix For: 1.14.0
>
>
> These tests in org.apache.geode.redis invoke singleton getters.
> GemFireCacheImpl.getInstance():
> * ConcurrentStartTest





[jira] [Resolved] (GEODE-4560) Remove singleton calls from product code in org.apache.geode.redis

2020-08-13 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-4560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider resolved GEODE-4560.
-
Fix Version/s: 1.14.0
   Resolution: Fixed

GeodeRedisServer is now passed a Cache

> Remove singleton calls from product code in org.apache.geode.redis
> --
>
> Key: GEODE-4560
> URL: https://issues.apache.org/jira/browse/GEODE-4560
> Project: Geode
>  Issue Type: Sub-task
>  Components: redis
>Reporter: Kirk Lund
>Priority: Major
> Fix For: 1.14.0
>
>
> These product classes in org.apache.geode.redis invoke singleton getters.
> GemFireCacheImpl.getInstance():
> * GeodeRedisServer





[jira] [Resolved] (GEODE-4569) Remove singleton calls from product code in org.apache.geode.redis.internal

2020-08-13 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-4569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider resolved GEODE-4569.
-
Fix Version/s: 1.14.0
   Resolution: Fixed

RegionProvider is now passed a Cache

> Remove singleton calls from product code in org.apache.geode.redis.internal
> ---
>
> Key: GEODE-4569
> URL: https://issues.apache.org/jira/browse/GEODE-4569
> Project: Geode
>  Issue Type: Sub-task
>  Components: redis
>Reporter: Kirk Lund
>Priority: Major
> Fix For: 1.14.0
>
>
> These product classes in org.apache.geode.redis.internal invoke singleton 
> getters.
> GemFireCacheImpl.getInstance():
> * RegionProvider





[jira] [Resolved] (GEODE-4720) data is not re-loaded after cluster rebooted while geode acts as a redis server

2020-08-13 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-4720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider resolved GEODE-4720.
-
Fix Version/s: 1.14.0
   Resolution: Fixed

Resolved in 1.14 by no longer supporting persistence

> data is not re-loaded after cluster rebooted while geode acts as a redis 
> server
> ---
>
> Key: GEODE-4720
> URL: https://issues.apache.org/jira/browse/GEODE-4720
> Project: Geode
>  Issue Type: Bug
>  Components: redis
>Affects Versions: 1.4.0
>Reporter: pengxu
>Priority: Major
> Fix For: 1.14.0
>
>
> In Apache Geode 1.4.0, the redis keys can't be reloaded after the cluster is 
> rebooted. 
> How to reproduce it:
> 1. Start the server:
> ```bash
> start server --name=server1 --redis-bind-address=localhost --redis-port=11211 
> --J=-Dgemfireredis.regiontype=PARTITION_PERSISTENT
> ```
> 2. Using redis-cli, connect to the server:
> sadd hello 1
> smembers hello
> 3. Stop the server:
> ```bash
> stop server --name=server1
> ```
> 4. Restart the server. After the restart, no data is reloaded automatically.
> 5. Connect to the server again with redis-cli:
> sadd hello 2
> The strange thing is that the old entry is recovered when adding the new entry
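
The symptom in step 5 (the old entry reappearing only after a new write) suggests persisted data is faulted back in lazily rather than recovered eagerly at restart. A hypothetical in-memory model of that behavior, not Geode's recovery code (all names are illustrative):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Models the observed symptom: reads right after restart see nothing,
// but the first write to a key faults its persisted value back into
// memory, so "sadd hello 2" surfaces the old member "1" as well.
public class LazyRecoverySketch {
    private final Map<String, Set<String>> disk;   // persisted copy
    private final Map<String, Set<String>> memory = new HashMap<>();

    public LazyRecoverySketch(Map<String, Set<String>> persisted) {
        this.disk = persisted;
    }

    public Set<String> smembers(String key) {
        // reads see only what is in memory: empty right after restart
        return memory.getOrDefault(key, new HashSet<>());
    }

    public void sadd(String key, String member) {
        // a write faults in the persisted value before applying the add
        Set<String> value = memory.computeIfAbsent(key,
            k -> new HashSet<>(disk.getOrDefault(k, new HashSet<>())));
        value.add(member);
    }
}
```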





[jira] [Resolved] (GEODE-7637) CI: org.apache.geode.redis.RedisDistDUnitTest > testConcCreateDestroy FAILED

2020-08-13 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider resolved GEODE-7637.
-
Fix Version/s: 1.14.0
   Resolution: Fixed

This issue has been fixed in 1.14

> CI: org.apache.geode.redis.RedisDistDUnitTest > testConcCreateDestroy FAILED
> 
>
> Key: GEODE-7637
> URL: https://issues.apache.org/jira/browse/GEODE-7637
> Project: Geode
>  Issue Type: Bug
>  Components: redis
>Reporter: Jinmei Liao
>Priority: Major
> Fix For: 1.14.0
>
>
> this test failed multiple times in one day: 
> https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/DistributedTestOpenJDK11/builds/1431
> https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/DistributedTestOpenJDK11/builds/1429
> org.apache.geode.redis.RedisDistDUnitTest > testConcCreateDestroy FAILED
> org.apache.geode.test.dunit.RMIException: While invoking 
> org.apache.geode.redis.RedisDistDUnitTest$1ConcCreateDestroy.call in VM 3 
> running on Host c8e60fd9bed2 with 4 VMs
> at org.apache.geode.test.dunit.VM.executeMethodOnObject(VM.java:610)
> at org.apache.geode.test.dunit.VM.invoke(VM.java:462)
> at 
> org.apache.geode.redis.RedisDistDUnitTest.testConcCreateDestroy(RedisDistDUnitTest.java:197)
> Caused by:
> redis.clients.jedis.exceptions.JedisConnectionException: 
> java.net.SocketTimeoutException: Read timed out
> Caused by:
> java.net.SocketTimeoutException: Read timed out





[jira] [Resolved] (GEODE-7843) CI test failures in HashesIntegrationTest

2020-08-13 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider resolved GEODE-7843.
-
Fix Version/s: 1.14.0
   Resolution: Fixed

Concurrency issues in redis that caused this failure have been addressed in 1.14

> CI test failures in HashesIntegrationTest
> -
>
> Key: GEODE-7843
> URL: https://issues.apache.org/jira/browse/GEODE-7843
> Project: Geode
>  Issue Type: Bug
>  Components: redis
>Reporter: Jens Deppe
>Priority: Major
> Fix For: 1.14.0
>
>
> Failing intermittently with:
> {noformat}
> org.apache.geode.redis.HashesIntegrationTest > testConcurrentHSetNX FAILED
> java.lang.AssertionError: 
> Expecting:
>  <0>
> to be greater than:
>  <0> 
> at 
> org.apache.geode.redis.HashesIntegrationTest.testConcurrentHSetNX(HashesIntegrationTest.java:549)
> {noformat}





[jira] [Resolved] (GEODE-7909) Update Geo* commands in Geode Redis to match native Redis

2020-08-13 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider resolved GEODE-7909.
-
Fix Version/s: 1.14.0
   Resolution: Fixed

The GEO* commands have been removed from 1.14

> Update Geo* commands in Geode Redis to match native Redis
> -
>
> Key: GEODE-7909
> URL: https://issues.apache.org/jira/browse/GEODE-7909
> Project: Geode
>  Issue Type: Improvement
>  Components: redis
>Reporter: Raymond Ingles
>Priority: Major
> Fix For: 1.14.0
>
>
> The current GEO* commands (e.g. GEOHASH) don't match the behavior of native 
> Redis; for example, the decimal precision does not line up.





[jira] [Resolved] (GEODE-8115) Redis Transactions (multi) failing

2020-08-13 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider resolved GEODE-8115.
-
Fix Version/s: 1.14.0
   Resolution: Fixed

The redis transactional commands have been removed from 1.14

> Redis Transactions (multi) failing
> --
>
> Key: GEODE-8115
> URL: https://issues.apache.org/jira/browse/GEODE-8115
> Project: Geode
>  Issue Type: Bug
>  Components: redis
>Reporter: Sarah Abbey
>Priority: Major
> Fix For: 1.14.0
>
>
> Between these two shas:  
> 7ee1042a8393563b4d7655b8bc2d4a77564b91b5 (test passes)
> and
> 15df6a83e315fc9acb73a117b3c74b08eca7b82d (test fails)
> test:
> {code:java}
>   /*
>    * Supported Transaction commands - DISCARD, EXEC, MULTI
>    */
>   protected void checkTransactionCommands() throws ExecutionException, 
> InterruptedException {
>     RedisAsyncCommands asyncCommands = connection.async();
>     asyncCommands.multi();
>     RedisFuture result1 = asyncCommands.set("key1", "value1");
>     RedisFuture result2 = asyncCommands.set("key2", "value2");
>     RedisFuture result3 = asyncCommands.set("key3", "value3");
>     Log.getLogWriter().info("exec multiple commands as a transaction: " + 
> asyncCommands);
>     RedisFuture execResult = asyncCommands.exec();
>     TransactionResult transactionResult = execResult.get();
>     Log.getLogWriter().info("completed exec multiple commands as a 
> transaction: " + asyncCommands);
>     String firstResult = transactionResult.get(0);
>     Log.getLogWriter().info("firstResult = " + firstResult);
>     String secondResult = transactionResult.get(0);
>     Log.getLogWriter().info("secondResult = " + secondResult);
>     String thirdResult = transactionResult.get(0);
>     Log.getLogWriter().info("thirdResult = " + thirdResult);
>   }
> {code}
>  
> {code:java}
> Unexpected exception java.util.concurrent.ExecutionException: 
> io.lettuce.core.RedisCommandExecutionException: ERR Cannot resume 
> transaction, current thread has an active transaction in threadGroup 
> redisClient {code}





[jira] [Resolved] (GEODE-8424) Updates Redis API for Geode docs 1.14

2020-08-13 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider resolved GEODE-8424.
-
Fix Version/s: 1.14.0
   Resolution: Fixed

> Updates Redis API for Geode docs 1.14
> -
>
> Key: GEODE-8424
> URL: https://issues.apache.org/jira/browse/GEODE-8424
> Project: Geode
>  Issue Type: Improvement
>  Components: docs, redis
>Reporter: Darrel Schneider
>Assignee: Darrel Schneider
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> The docs should be updated to describe the Redis API for the upcoming 1.14 
> release.





[jira] [Created] (GEODE-8427) change redis to disconnect client instead of failing with memberDeparted

2020-08-12 Thread Darrel Schneider (Jira)
Darrel Schneider created GEODE-8427:
---

 Summary: change redis to disconnect client instead of failing with 
memberDeparted
 Key: GEODE-8427
 URL: https://issues.apache.org/jira/browse/GEODE-8427
 Project: Geode
  Issue Type: Improvement
  Components: redis
Reporter: Darrel Schneider


Currently, when a redis operation fails because one of the servers departed 
while it was in progress, an exception is thrown that contains "memberDeparted". 
Clients are required to handle this non-standard error message and decide 
whether the operation should be retried.

A better approach is to have the server close the client connection that the 
operation came in on. Some clients already handle this and support features like 
auto-reconnect. 





[jira] [Created] (GEODE-8424) Updates Redis API for Geode docs 1.14

2020-08-12 Thread Darrel Schneider (Jira)
Darrel Schneider created GEODE-8424:
---

 Summary: Updates Redis API for Geode docs 1.14
 Key: GEODE-8424
 URL: https://issues.apache.org/jira/browse/GEODE-8424
 Project: Geode
  Issue Type: Improvement
  Components: docs, redis
Reporter: Darrel Schneider


The docs should be updated to describe the Redis API for the upcoming 1.14 
release.





[jira] [Assigned] (GEODE-8424) Updates Redis API for Geode docs 1.14

2020-08-12 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider reassigned GEODE-8424:
---

Assignee: Darrel Schneider

> Updates Redis API for Geode docs 1.14
> -
>
> Key: GEODE-8424
> URL: https://issues.apache.org/jira/browse/GEODE-8424
> Project: Geode
>  Issue Type: Improvement
>  Components: docs, redis
>Reporter: Darrel Schneider
>Assignee: Darrel Schneider
>Priority: Major
>
> The docs should be updated to describe the Redis API for the upcoming 1.14 
> release.





[jira] [Issue Comment Deleted] (GEODE-8423) Updates Redis API for Geode docs 1.13

2020-08-12 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider updated GEODE-8423:

Comment: was deleted

(was: This fix was cherry-picked to 1.13 here: 

cd0a8459ac3cde8b714788444f2913a6b0f6bbf0)

> Updates Redis API for Geode docs 1.13
> -
>
> Key: GEODE-8423
> URL: https://issues.apache.org/jira/browse/GEODE-8423
> Project: Geode
>  Issue Type: Improvement
>  Components: docs, redis
>Reporter: Sarah Abbey
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0, 1.14.0
>
>
> Documentation for the Redis API for Geode that will be published for 1.13 is 
> not accurate and needs to be updated.





[jira] [Commented] (GEODE-8423) Updates Redis API for Geode docs 1.13

2020-08-12 Thread Darrel Schneider (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17176518#comment-17176518
 ] 

Darrel Schneider commented on GEODE-8423:
-

This fix was cherry-picked to 1.13 here: 

cd0a8459ac3cde8b714788444f2913a6b0f6bbf0

> Updates Redis API for Geode docs 1.13
> -
>
> Key: GEODE-8423
> URL: https://issues.apache.org/jira/browse/GEODE-8423
> Project: Geode
>  Issue Type: Improvement
>  Components: docs, redis
>Reporter: Sarah Abbey
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0, 1.14.0
>
>
> Documentation for the Redis API for Geode that will be published for 1.13 is 
> not accurate and needs to be updated.





[jira] [Updated] (GEODE-8423) Updates Redis API for Geode docs 1.13

2020-08-12 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider updated GEODE-8423:

Fix Version/s: 1.13.0

> Updates Redis API for Geode docs 1.13
> -
>
> Key: GEODE-8423
> URL: https://issues.apache.org/jira/browse/GEODE-8423
> Project: Geode
>  Issue Type: Improvement
>  Components: docs, redis
>Reporter: Sarah Abbey
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0, 1.14.0
>
>
> Documentation for the Redis API for Geode that will be published for 1.13 is 
> not accurate and needs to be updated.





[jira] [Resolved] (GEODE-8423) Updates Redis API for Geode docs 1.13

2020-08-12 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider resolved GEODE-8423.
-
Fix Version/s: 1.14.0
   Resolution: Fixed

> Updates Redis API for Geode docs 1.13
> -
>
> Key: GEODE-8423
> URL: https://issues.apache.org/jira/browse/GEODE-8423
> Project: Geode
>  Issue Type: Improvement
>  Components: docs, redis
>Reporter: Sarah Abbey
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> Documentation for the Redis API for Geode that will be published for 1.13 is 
> not accurate and needs to be updated.





[jira] [Resolved] (GEODE-8417) CI failure: SessionExpirationDUnitTest.sessionShouldTimeout_whenAppFailsOverToAnotherRedisServer fails with a timeout

2020-08-11 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider resolved GEODE-8417.
-
Fix Version/s: 1.14.0
   Resolution: Fixed

> CI failure: 
> SessionExpirationDUnitTest.sessionShouldTimeout_whenAppFailsOverToAnotherRedisServer
>  fails with a timeout
> -
>
> Key: GEODE-8417
> URL: https://issues.apache.org/jira/browse/GEODE-8417
> Project: Geode
>  Issue Type: Bug
>  Components: redis
>Reporter: Bruce J Schuchardt
>Assignee: Darrel Schneider
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> This test failed in a CI run:
> [https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/DistributedTestOpenJDK11/builds/402#L5f20d92c:648]
>  
> {noformat}
> org.apache.geode.redis.session.SessionExpirationDUnitTest > 
> sessionShouldTimeout_whenAppFailsOverToAnotherRedisServer FAILED
> 13:20:39org.awaitility.core.ConditionTimeoutException: Condition with 
> lambda expression in 
> org.apache.geode.redis.session.SessionExpirationDUnitTest that uses 
> java.lang.String was not fulfilled within 10 seconds.
> 13:20:39at 
> org.awaitility.core.ConditionAwaiter.await(ConditionAwaiter.java:165)
> 13:20:39at 
> org.awaitility.core.CallableCondition.await(CallableCondition.java:78)
> 13:20:39at 
> org.awaitility.core.CallableCondition.await(CallableCondition.java:26)
> 13:20:39at 
> org.awaitility.core.ConditionFactory.until(ConditionFactory.java:895)
> 13:20:39at 
> org.awaitility.core.ConditionFactory.until(ConditionFactory.java:864)
> 13:20:39at 
> org.apache.geode.redis.session.SessionExpirationDUnitTest.waitForTheSessionToExpire(SessionExpirationDUnitTest.java:113)
> 13:20:39at 
> org.apache.geode.redis.session.SessionExpirationDUnitTest.sessionShouldTimeout_whenAppFailsOverToAnotherRedisServer(SessionExpirationDUnitTest.java:93)
> 13:20:39
> 13:20:39Caused by:
> 13:20:39redis.clients.jedis.exceptions.JedisConnectionException: 
> java.net.SocketException: Broken pipe (Write failed)
> 13:20:39at 
> redis.clients.jedis.Protocol.sendCommand(Protocol.java:109)
> 13:20:39at 
> redis.clients.jedis.Protocol.sendCommand(Protocol.java:89)
> 13:20:39at 
> redis.clients.jedis.Connection.sendCommand(Connection.java:126)
> 13:20:39at 
> redis.clients.jedis.BinaryClient.ttl(BinaryClient.java:186)
> 13:20:39at redis.clients.jedis.Client.ttl(Client.java:114)
> 13:20:39at redis.clients.jedis.Jedis.ttl(Jedis.java:399)
> 13:20:39at 
> org.apache.geode.redis.session.SessionExpirationDUnitTest.lambda$waitForTheSessionToExpire$0(SessionExpirationDUnitTest.java:114)
> 13:20:39
> 13:20:39Caused by:
> 13:20:39java.net.SocketException: Broken pipe (Write failed)
> 13:20:39at java.net.SocketOutputStream.socketWrite0(Native 
> Method)
> 13:20:39at 
> java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:110)
> 13:20:39at 
> java.net.SocketOutputStream.write(SocketOutputStream.java:150)
> 13:20:39at 
> redis.clients.jedis.util.RedisOutputStream.flushBuffer(RedisOutputStream.java:52)
> 13:20:39at 
> redis.clients.jedis.util.RedisOutputStream.write(RedisOutputStream.java:59)
> 13:20:39at 
> redis.clients.jedis.Protocol.sendCommand(Protocol.java:95)
> 13:20:39... 6 more
> 13:20:39 {noformat}
>  
> I bisected and found it started failing here:
> {noformat}
> commit 0a91484b05f1caffa8cc3a59cc7fc38abe4376ed
> Author: Darrel Schneider 
> Date:   Mon Aug 10 12:50:31 2020 -0700
> GEODE-8393: change memberDeparted to disconnect the connection (#5431)
> * server now disconnects connection if memberDeparted
> Co-authored-by: john Hutchison 
>  {noformat}
>  
>  





[jira] [Resolved] (GEODE-8418) CI failure: HashesAndCrashesDUnitTest.givenServerCrashesDuringSET_thenDataIsNotLost

2020-08-11 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider resolved GEODE-8418.
-
Fix Version/s: 1.14.0
   Resolution: Fixed

fixed in: 

d19368f7eb676026d1f921eac4106ff465131cc0

> CI failure: 
> HashesAndCrashesDUnitTest.givenServerCrashesDuringSET_thenDataIsNotLost
> ---
>
> Key: GEODE-8418
> URL: https://issues.apache.org/jira/browse/GEODE-8418
> Project: Geode
>  Issue Type: Bug
>  Components: redis
>Reporter: Bruce J Schuchardt
>Assignee: Darrel Schneider
>Priority: Major
> Fix For: 1.14.0
>
>
> This redis test failed in a CI run:
> [https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/DistributedTestOpenJDK8/builds/414]
> {noformat}
> org.apache.geode.redis.internal.executor.hash.HashesAndCrashesDUnitTest > 
> givenServerCrashesDuringSET_thenDataIsNotLost FAILED
> 13:18:47java.util.concurrent.ExecutionException: 
> io.lettuce.core.RedisException: 
> io.netty.channel.unix.Errors$NativeIoException: readAddress(..) failed: 
> Connection reset by peer
> 13:18:47at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> 13:18:47at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> 13:18:47at 
> org.apache.geode.redis.internal.executor.hash.HashesAndCrashesDUnitTest.modifyDataWhileCrashingVMs(HashesAndCrashesDUnitTest.java:239)
> 13:18:47at 
> org.apache.geode.redis.internal.executor.hash.HashesAndCrashesDUnitTest.givenServerCrashesDuringSET_thenDataIsNotLost(HashesAndCrashesDUnitTest.java:174)
> 13:18:47
> 13:18:47Caused by:
> 13:18:47io.lettuce.core.RedisException: 
> io.netty.channel.unix.Errors$NativeIoException: readAddress(..) failed: 
> Connection reset by peer
> 13:18:47at 
> io.lettuce.core.LettuceFutures.awaitOrCancel(LettuceFutures.java:129)
> 13:18:47at 
> io.lettuce.core.FutureSyncInvocationHandler.handleInvocation(FutureSyncInvocationHandler.java:69)
> 13:18:47at 
> io.lettuce.core.internal.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:80)
> 13:18:47at com.sun.proxy.$Proxy70.set(Unknown Source)
> 13:18:47at 
> org.apache.geode.redis.internal.executor.hash.HashesAndCrashesDUnitTest.setPerformAndVerify(HashesAndCrashesDUnitTest.java:296)
> 13:18:47at 
> org.apache.geode.redis.internal.executor.hash.HashesAndCrashesDUnitTest.lambda$modifyDataWhileCrashingVMs$9(HashesAndCrashesDUnitTest.java:206)
> 13:18:47
> 13:18:47Caused by:
> 13:18:47io.netty.channel.unix.Errors$NativeIoException: 
> readAddress(..) failed: Connection reset by peer
> 13:20:00
> 13:20:00org.apache.geode.redis.internal.executor.hash.HashesAndCrashesDUnitTest
>  > givenServerCrashesDuringSADD_thenDataIsNotLost FAILED
> 13:20:00java.util.concurrent.ExecutionException: 
> io.lettuce.core.RedisException: 
> io.netty.channel.unix.Errors$NativeIoException: readAddress(..) failed: 
> Connection reset by peer
> 13:20:00at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> 13:20:00at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> 13:20:00at 
> org.apache.geode.redis.internal.executor.hash.HashesAndCrashesDUnitTest.modifyDataWhileCrashingVMs(HashesAndCrashesDUnitTest.java:239)
> 13:20:00at 
> org.apache.geode.redis.internal.executor.hash.HashesAndCrashesDUnitTest.givenServerCrashesDuringSADD_thenDataIsNotLost(HashesAndCrashesDUnitTest.java:169)
> 13:20:00
> 13:20:00Caused by:
> 13:20:00io.lettuce.core.RedisException: 
> io.netty.channel.unix.Errors$NativeIoException: readAddress(..) failed: 
> Connection reset by peer
> 13:20:00at 
> io.lettuce.core.LettuceFutures.awaitOrCancel(LettuceFutures.java:129)
> 13:20:00at 
> io.lettuce.core.FutureSyncInvocationHandler.handleInvocation(FutureSyncInvocationHandler.java:69)
> 13:20:00at 
> io.lettuce.core.internal.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:80)
> 13:20:00at com.sun.proxy.$Proxy70.sadd(Unknown Source)
> 13:20:00at 
> org.apache.geode.redis.internal.executor.hash.HashesAndCrashesDUnitTest.saddPerformAndVerify(HashesAndCrashesDUnitTest.java:272)
> 13:20:00at 
> org.apache.geode.redis.internal.executor.hash.HashesAndCrashesDUnitTest.lambda$modifyDataWhileCrashingVMs$5(HashesAndCrashesDUnitTest.java:200)
> 13:20:00
> 13:20:00Caused by:
> 13:20:00io.netty.channel.unix.Errors$NativeIoException: 
> readAddress(..) failed: Connection reset by peer {noformat}
>  
> I bisected and think this commit 

[jira] [Assigned] (GEODE-8418) CI failure: HashesAndCrashesDUnitTest.givenServerCrashesDuringSET_thenDataIsNotLost

2020-08-10 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider reassigned GEODE-8418:
---

Assignee: Darrel Schneider

> CI failure: 
> HashesAndCrashesDUnitTest.givenServerCrashesDuringSET_thenDataIsNotLost
> ---
>
> Key: GEODE-8418
> URL: https://issues.apache.org/jira/browse/GEODE-8418
> Project: Geode
>  Issue Type: Bug
>  Components: redis
>Reporter: Bruce J Schuchardt
>Assignee: Darrel Schneider
>Priority: Major
>
> This redis test failed in a CI run:
> [https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/DistributedTestOpenJDK8/builds/414]
> {noformat}
> org.apache.geode.redis.internal.executor.hash.HashesAndCrashesDUnitTest > 
> givenServerCrashesDuringSET_thenDataIsNotLost FAILED
> 13:18:47java.util.concurrent.ExecutionException: 
> io.lettuce.core.RedisException: 
> io.netty.channel.unix.Errors$NativeIoException: readAddress(..) failed: 
> Connection reset by peer
> 13:18:47at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> 13:18:47at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> 13:18:47at 
> org.apache.geode.redis.internal.executor.hash.HashesAndCrashesDUnitTest.modifyDataWhileCrashingVMs(HashesAndCrashesDUnitTest.java:239)
> 13:18:47at 
> org.apache.geode.redis.internal.executor.hash.HashesAndCrashesDUnitTest.givenServerCrashesDuringSET_thenDataIsNotLost(HashesAndCrashesDUnitTest.java:174)
> 13:18:47
> 13:18:47Caused by:
> 13:18:47io.lettuce.core.RedisException: 
> io.netty.channel.unix.Errors$NativeIoException: readAddress(..) failed: 
> Connection reset by peer
> 13:18:47at 
> io.lettuce.core.LettuceFutures.awaitOrCancel(LettuceFutures.java:129)
> 13:18:47at 
> io.lettuce.core.FutureSyncInvocationHandler.handleInvocation(FutureSyncInvocationHandler.java:69)
> 13:18:47at 
> io.lettuce.core.internal.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:80)
> 13:18:47at com.sun.proxy.$Proxy70.set(Unknown Source)
> 13:18:47at 
> org.apache.geode.redis.internal.executor.hash.HashesAndCrashesDUnitTest.setPerformAndVerify(HashesAndCrashesDUnitTest.java:296)
> 13:18:47at 
> org.apache.geode.redis.internal.executor.hash.HashesAndCrashesDUnitTest.lambda$modifyDataWhileCrashingVMs$9(HashesAndCrashesDUnitTest.java:206)
> 13:18:47
> 13:18:47Caused by:
> 13:18:47io.netty.channel.unix.Errors$NativeIoException: 
> readAddress(..) failed: Connection reset by peer
> 13:20:00
> 13:20:00org.apache.geode.redis.internal.executor.hash.HashesAndCrashesDUnitTest
>  > givenServerCrashesDuringSADD_thenDataIsNotLost FAILED
> 13:20:00java.util.concurrent.ExecutionException: 
> io.lettuce.core.RedisException: 
> io.netty.channel.unix.Errors$NativeIoException: readAddress(..) failed: 
> Connection reset by peer
> 13:20:00at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> 13:20:00at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> 13:20:00at 
> org.apache.geode.redis.internal.executor.hash.HashesAndCrashesDUnitTest.modifyDataWhileCrashingVMs(HashesAndCrashesDUnitTest.java:239)
> 13:20:00at 
> org.apache.geode.redis.internal.executor.hash.HashesAndCrashesDUnitTest.givenServerCrashesDuringSADD_thenDataIsNotLost(HashesAndCrashesDUnitTest.java:169)
> 13:20:00
> 13:20:00Caused by:
> 13:20:00io.lettuce.core.RedisException: 
> io.netty.channel.unix.Errors$NativeIoException: readAddress(..) failed: 
> Connection reset by peer
> 13:20:00at 
> io.lettuce.core.LettuceFutures.awaitOrCancel(LettuceFutures.java:129)
> 13:20:00at 
> io.lettuce.core.FutureSyncInvocationHandler.handleInvocation(FutureSyncInvocationHandler.java:69)
> 13:20:00at 
> io.lettuce.core.internal.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:80)
> 13:20:00at com.sun.proxy.$Proxy70.sadd(Unknown Source)
> 13:20:00at 
> org.apache.geode.redis.internal.executor.hash.HashesAndCrashesDUnitTest.saddPerformAndVerify(HashesAndCrashesDUnitTest.java:272)
> 13:20:00at 
> org.apache.geode.redis.internal.executor.hash.HashesAndCrashesDUnitTest.lambda$modifyDataWhileCrashingVMs$5(HashesAndCrashesDUnitTest.java:200)
> 13:20:00
> 13:20:00Caused by:
> 13:20:00io.netty.channel.unix.Errors$NativeIoException: 
> readAddress(..) failed: Connection reset by peer {noformat}
>  
> I bisected and think this commit introduced the failure:
> {noformat}
> commit 0a91484b05f1caffa8cc3a59cc7fc38abe4376ed (HEAD)
> Author: Darrel 

[jira] [Assigned] (GEODE-8417) CI failure: SessionExpirationDUnitTest.sessionShouldTimeout_whenAppFailsOverToAnotherRedisServer fails with a timeout

2020-08-10 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider reassigned GEODE-8417:
---

Assignee: Darrel Schneider

> CI failure: 
> SessionExpirationDUnitTest.sessionShouldTimeout_whenAppFailsOverToAnotherRedisServer
>  fails with a timeout
> -
>
> Key: GEODE-8417
> URL: https://issues.apache.org/jira/browse/GEODE-8417
> Project: Geode
>  Issue Type: Bug
>  Components: redis
>Reporter: Bruce J Schuchardt
>Assignee: Darrel Schneider
>Priority: Major
>  Labels: pull-request-available
>
> This test failed in a CI run:
> [https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/DistributedTestOpenJDK11/builds/402#L5f20d92c:648]
>  
> {noformat}
> org.apache.geode.redis.session.SessionExpirationDUnitTest > 
> sessionShouldTimeout_whenAppFailsOverToAnotherRedisServer FAILED
> 13:20:39org.awaitility.core.ConditionTimeoutException: Condition with 
> lambda expression in 
> org.apache.geode.redis.session.SessionExpirationDUnitTest that uses 
> java.lang.String was not fulfilled within 10 seconds.
> 13:20:39at 
> org.awaitility.core.ConditionAwaiter.await(ConditionAwaiter.java:165)
> 13:20:39at 
> org.awaitility.core.CallableCondition.await(CallableCondition.java:78)
> 13:20:39at 
> org.awaitility.core.CallableCondition.await(CallableCondition.java:26)
> 13:20:39at 
> org.awaitility.core.ConditionFactory.until(ConditionFactory.java:895)
> 13:20:39at 
> org.awaitility.core.ConditionFactory.until(ConditionFactory.java:864)
> 13:20:39at 
> org.apache.geode.redis.session.SessionExpirationDUnitTest.waitForTheSessionToExpire(SessionExpirationDUnitTest.java:113)
> 13:20:39at 
> org.apache.geode.redis.session.SessionExpirationDUnitTest.sessionShouldTimeout_whenAppFailsOverToAnotherRedisServer(SessionExpirationDUnitTest.java:93)
> 13:20:39
> 13:20:39Caused by:
> 13:20:39redis.clients.jedis.exceptions.JedisConnectionException: 
> java.net.SocketException: Broken pipe (Write failed)
> 13:20:39at 
> redis.clients.jedis.Protocol.sendCommand(Protocol.java:109)
> 13:20:39at 
> redis.clients.jedis.Protocol.sendCommand(Protocol.java:89)
> 13:20:39at 
> redis.clients.jedis.Connection.sendCommand(Connection.java:126)
> 13:20:39at 
> redis.clients.jedis.BinaryClient.ttl(BinaryClient.java:186)
> 13:20:39at redis.clients.jedis.Client.ttl(Client.java:114)
> 13:20:39at redis.clients.jedis.Jedis.ttl(Jedis.java:399)
> 13:20:39at 
> org.apache.geode.redis.session.SessionExpirationDUnitTest.lambda$waitForTheSessionToExpire$0(SessionExpirationDUnitTest.java:114)
> 13:20:39
> 13:20:39Caused by:
> 13:20:39java.net.SocketException: Broken pipe (Write failed)
> 13:20:39at java.net.SocketOutputStream.socketWrite0(Native 
> Method)
> 13:20:39at 
> java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:110)
> 13:20:39at 
> java.net.SocketOutputStream.write(SocketOutputStream.java:150)
> 13:20:39at 
> redis.clients.jedis.util.RedisOutputStream.flushBuffer(RedisOutputStream.java:52)
> 13:20:39at 
> redis.clients.jedis.util.RedisOutputStream.write(RedisOutputStream.java:59)
> 13:20:39at 
> redis.clients.jedis.Protocol.sendCommand(Protocol.java:95)
> 13:20:39... 6 more
> 13:20:39 {noformat}
>  
> I bisected and found it started failing here:
> {noformat}
> commit 0a91484b05f1caffa8cc3a59cc7fc38abe4376ed
> Author: Darrel Schneider 
> Date:   Mon Aug 10 12:50:31 2020 -0700
> GEODE-8393: change memberDeparted to disconnect the connection (#5431)
> * server now disconnects connection if memberDeparted
> Co-authored-by: john Hutchison 
>  {noformat}
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (GEODE-8409) SUNIONSTORE, SINTERSTORE, and SDIFFSTORE should delete the target key even if target exists and is not a set

2020-08-10 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider resolved GEODE-8409.
-
Fix Version/s: 1.14.0
   Resolution: Fixed

> SUNIONSTORE, SINTERSTORE, and SDIFFSTORE should delete the target key even if 
> target exists and is not a set
> 
>
> Key: GEODE-8409
> URL: https://issues.apache.org/jira/browse/GEODE-8409
> Project: Geode
>  Issue Type: Bug
>  Components: redis
>Reporter: Darrel Schneider
>Assignee: Darrel Schneider
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> This sequence:
>  
> {{SET setres xxx
> SUNIONSTORE setres foo111 bar222}}
> ({{foo111}} and {{bar222}} do not exist)
> {{setres}} should be deleted and we should get a response of
> {{(integer) 0}}
> Instead we get
> {{(error) WRONGTYPE Operation against a key holding the wrong kind of value}}





[jira] [Commented] (GEODE-8393) CI: CrashAndNoRepeatDUnitTest > givenServerCrashesDuringAPPEND_thenDataIsNotLost FAILED

2020-08-06 Thread Darrel Schneider (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17172677#comment-17172677
 ] 

Darrel Schneider commented on GEODE-8393:
-

I think this is a test issue rather than a product issue. The test is configured 
to use Lettuce and enables auto-reconnect on the connection, which can cause the 
client to transparently retry an in-flight operation. When that happens we can 
end up with an unexpected 0 or 1 in the result. I'm working on fixing this test.
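The failure mode above can be illustrated with a small, self-contained sketch. This is hypothetical model code, not Geode or Lettuce internals: it shows why replaying a non-idempotent command such as APPEND after an auto-reconnect corrupts the value an index-based verification expects.

```java
// Hypothetical sketch (not Geode code): models how an auto-reconnecting
// client that replays an in-flight APPEND can corrupt the expected value.
import java.util.HashMap;
import java.util.Map;

public class RetrySketch {
    static final Map<String, String> store = new HashMap<>();

    // APPEND is not idempotent: applying it twice duplicates the suffix.
    static int append(String key, String value) {
        String updated = store.getOrDefault(key, "") + value;
        store.put(key, updated);
        return updated.length();
    }

    public static void main(String[] args) {
        append("k", "abc"); // original attempt succeeds on the server...
        append("k", "abc"); // ...but the reply is lost and the client retries
        // The value now contains the suffix twice, so a verification that
        // checks characters by index (like the test's) sees unexpected data.
        if (!"abcabc".equals(store.get("k"))) {
            throw new AssertionError("expected duplicated suffix, got " + store.get("k"));
        }
        System.out.println("retried APPEND produced: " + store.get("k"));
    }
}
```

An idempotent command such as SET would mask the retry, which is why only some of the operations in the test trip the assertion.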

> CI: CrashAndNoRepeatDUnitTest > 
> givenServerCrashesDuringAPPEND_thenDataIsNotLost FAILED
> ---
>
> Key: GEODE-8393
> URL: https://issues.apache.org/jira/browse/GEODE-8393
> Project: Geode
>  Issue Type: Bug
>  Components: redis
>Reporter: Jinmei Liao
>Priority: Major
>  Labels: CI
>
> https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/DistributedTestOpenJDK11/builds/384
> org.apache.geode.redis.internal.executor.CrashAndNoRepeatDUnitTest > 
> givenServerCrashesDuringAPPEND_thenDataIsNotLost FAILED
> java.util.concurrent.ExecutionException: java.lang.AssertionError: 
> unexpected 0 at index 3967 in string 
> 

[jira] [Assigned] (GEODE-8393) CI: CrashAndNoRepeatDUnitTest > givenServerCrashesDuringAPPEND_thenDataIsNotLost FAILED

2020-08-06 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider reassigned GEODE-8393:
---

Assignee: Darrel Schneider

> CI: CrashAndNoRepeatDUnitTest > 
> givenServerCrashesDuringAPPEND_thenDataIsNotLost FAILED
> ---
>
> Key: GEODE-8393
> URL: https://issues.apache.org/jira/browse/GEODE-8393
> Project: Geode
>  Issue Type: Bug
>  Components: redis
>Reporter: Jinmei Liao
>Assignee: Darrel Schneider
>Priority: Major
>  Labels: CI
>
> https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/DistributedTestOpenJDK11/builds/384
> org.apache.geode.redis.internal.executor.CrashAndNoRepeatDUnitTest > 
> givenServerCrashesDuringAPPEND_thenDataIsNotLost FAILED
> java.util.concurrent.ExecutionException: java.lang.AssertionError: 
> unexpected 0 at index 3967 in string 
> 

[jira] [Assigned] (GEODE-8409) SUNIONSTORE, SINTERSTORE, and SDIFFSTORE should delete the target key even if target exists and is not a set

2020-08-05 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider reassigned GEODE-8409:
---

Assignee: Darrel Schneider

> SUNIONSTORE, SINTERSTORE, and SDIFFSTORE should delete the target key even if 
> target exists and is not a set
> 
>
> Key: GEODE-8409
> URL: https://issues.apache.org/jira/browse/GEODE-8409
> Project: Geode
>  Issue Type: Bug
>  Components: redis
>Reporter: Darrel Schneider
>Assignee: Darrel Schneider
>Priority: Major
>
> This sequence:
>  
> {{SET setres xxx
> SUNIONSTORE setres foo111 bar222}}
> ({{foo111}} and {{bar222}} do not exist)
> {{setres}} should be deleted and we should get a response of
> {{(integer) 0}}
> Instead we get
> {{(error) WRONGTYPE Operation against a key holding the wrong kind of value}}





[jira] [Created] (GEODE-8409) SUNIONSTORE, SINTERSTORE, and SDIFFSTORE should delete the target key even if target exists and is not a set

2020-08-05 Thread Darrel Schneider (Jira)
Darrel Schneider created GEODE-8409:
---

 Summary: SUNIONSTORE, SINTERSTORE, and SDIFFSTORE should delete 
the target key even if target exists and is not a set
 Key: GEODE-8409
 URL: https://issues.apache.org/jira/browse/GEODE-8409
 Project: Geode
  Issue Type: Bug
  Components: redis
Reporter: Darrel Schneider


This sequence:

 

{{SET setres xxx
SUNIONSTORE setres foo111 bar222}}

({{foo111}} and {{bar222}} do not exist)

{{setres}} should be deleted and we should get a response of

{{(integer) 0}}

Instead we get

{{(error) WRONGTYPE Operation against a key holding the wrong kind of value}}
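The expected semantics can be sketched as follows. This is hypothetical illustration code, not Geode's implementation: a *STORE command should overwrite or delete the destination key based only on the computed result, regardless of the destination's current type (wrong-type handling for the *source* keys is omitted for brevity).

```java
// Hypothetical sketch (not Geode's implementation) of the GEODE-8409
// semantics: an empty union deletes the destination key, even when the
// destination currently holds a non-set value such as a string.
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class StoreSketch {
    static final Map<String, Object> keyspace = new HashMap<>();

    // Models SUNIONSTORE dest src...: returns the cardinality stored at dest.
    @SuppressWarnings("unchecked")
    static int sunionstore(String dest, String... sources) {
        Set<String> union = new HashSet<>();
        for (String src : sources) {
            Object val = keyspace.get(src);
            if (val instanceof Set) {
                union.addAll((Set<String>) val);
            }
            // nonexistent sources are treated as empty sets
        }
        if (union.isEmpty()) {
            keyspace.remove(dest); // empty result deletes dest, whatever its type
        } else {
            keyspace.put(dest, union);
        }
        return union.size();
    }

    public static void main(String[] args) {
        keyspace.put("setres", "xxx");                     // SET setres xxx
        int n = sunionstore("setres", "foo111", "bar222"); // sources do not exist
        if (n != 0 || keyspace.containsKey("setres")) {
            throw new AssertionError("dest should have been deleted");
        }
        System.out.println("(integer) " + n);
    }
}
```

The key point is that the destination is never type-checked before the store: it is simply replaced (or removed) by the result.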





[jira] [Updated] (GEODE-6564) Clearing a replicated region with expiration causes a memory leak

2020-08-04 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-6564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider updated GEODE-6564:

Fix Version/s: 1.13.0

> Clearing a replicated region with expiration causes a memory leak
> -
>
> Key: GEODE-6564
> URL: https://issues.apache.org/jira/browse/GEODE-6564
> Project: Geode
>  Issue Type: Bug
>  Components: regions
>Reporter: Barrett Oglesby
>Assignee: Darrel Schneider
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0, 1.14.0
>
>
> Clearing a replicated region with expiration causes a memory leak
> Both the RegionEntries and EntryExpiryTasks are still live after loading 
> entries into the region and then clearing it.
> Server Startup:
> {noformat}
>  num #instances #bytes class name
> --
>  1: 29856 2797840 [C
>  4: 2038 520600 [B
> Total 187711 10089624
> {noformat}
> Load 100 entries with 600k payload (representing a session):
> {noformat}
>  num #instances #bytes class name
> --
>  1: 2496 60666440 [B
>  2: 30157 2828496 [C
>  73: 100 7200 
> org.apache.geode.internal.cache.entries.VersionedStatsRegionEntryHeapStringKey1
>  93: 100 4800 org.apache.geode.internal.cache.EntryExpiryTask
> Total 190737 70240472
> {noformat}
> Clear region:
> {noformat}
>  num #instances #bytes class name
> --
>  1: 2398 60505944 [B
>  2: 30448 2849456 [C
>  74: 100 7200 
> org.apache.geode.internal.cache.entries.VersionedStatsRegionEntryHeapStringKey1
>  100: 100 4800 org.apache.geode.internal.cache.EntryExpiryTask
> Total 192199 70373048
> {noformat}
> Load and clear another 100 entries:
> {noformat}
>  num #instances #bytes class name
> --
>  1: 2503 120511688 [B
>  2: 30506 2854384 [C
>  46: 200 14400 
> org.apache.geode.internal.cache.entries.VersionedStatsRegionEntryHeapStringKey1
>  61: 200 9600 org.apache.geode.internal.cache.EntryExpiryTask
> Total 193272 130421432
> {noformat}
> Load and clear another 100 entries:
> {noformat}
>  num #instances #bytes class name
> --
>  1: 2600 180517240 [B
>  2: 30562 2859584 [C
>  33: 300 21600 
> org.apache.geode.internal.cache.entries.VersionedStatsRegionEntryHeapStringKey1
>  47: 300 14400 org.apache.geode.internal.cache.EntryExpiryTask
> Total 194310 190468176
> {noformat}
> A heap dump shows the VersionedStatsRegionEntryHeapStringKey1 instances are 
> referenced by the DistributedRegion entryExpiryTasks:
> {noformat}
> --> org.apache.geode.internal.cache.DistributedRegion@0x76adbbb88 (816 bytes) 
> (field entryExpiryTasks:)
> --> java.util.concurrent.ConcurrentHashMap@0x76adbc028 (100 bytes) (field 
> table:)
> --> [Ljava.util.concurrent.ConcurrentHashMap$Node;@0x76ee85358 (4112 bytes) 
> (Element 276 of [Ljava.util.concurrent.ConcurrentHashMap$Node;@0x76ee85358:)
> --> java.util.concurrent.ConcurrentHashMap$Node@0x76edc4e20 (44 bytes) (field 
> next:)
> --> java.util.concurrent.ConcurrentHashMap$Node@0x76edc32f0 (44 bytes) (field 
> key:)
> --> 
> org.apache.geode.internal.cache.entries.VersionedStatsRegionEntryHeapStringKey1@0x76edc3210
>  (86 bytes) 
> {noformat}
> LocalRegion.cancelAllEntryExpiryTasks is called when the region is cleared:
> {noformat}
> java.lang.Exception: Stack trace
>  at java.lang.Thread.dumpStack(Thread.java:1333)
>  at 
> org.apache.geode.internal.cache.LocalRegion.cancelAllEntryExpiryTasks(LocalRegion.java:8202)
>  at 
> org.apache.geode.internal.cache.LocalRegion.clearRegionLocally(LocalRegion.java:9094)
>  at 
> org.apache.geode.internal.cache.DistributedRegion.cmnClearRegion(DistributedRegion.java:1962)
>  at 
> org.apache.geode.internal.cache.LocalRegion.basicClear(LocalRegion.java:8998)
>  at 
> org.apache.geode.internal.cache.DistributedRegion.basicClear(DistributedRegion.java:1939)
>  at 
> org.apache.geode.internal.cache.LocalRegion.basicBridgeClear(LocalRegion.java:8988)
>  at 
> org.apache.geode.internal.cache.tier.sockets.command.ClearRegion.cmdExecute(ClearRegion.java:123)
> {noformat}
> But it doesn't clear the entryExpiryTasks map:
> {noformat}
> LocalRegion.clearRegionLocally before cancelAllEntryExpiryTasks 
> entryExpiryTasks=100
> LocalRegion.clearRegionLocally after cancelAllEntryExpiryTasks 
> entryExpiryTasks=100
> {noformat}
> As a test, I added this call to the bottom of the cancelAllEntryExpiryTasks 
> method:
> {noformat}
> this.entryExpiryTasks.clear();
> {noformat}
> This addressed the leak:
> {noformat}
> Server Startup: Total 182414 9855616
> Load/Clear 1: Total 191049 10315832
> Load/Clear 2: Total 191978 10329664
> Load/Clear 3: Total 192638 10360360
> {noformat}
> As a work-around, 

[jira] [Updated] (GEODE-6564) Clearing a replicated region with expiration causes a memory leak

2020-08-04 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-6564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider updated GEODE-6564:

Fix Version/s: 1.12.1

> Clearing a replicated region with expiration causes a memory leak
> -
>
> Key: GEODE-6564
> URL: https://issues.apache.org/jira/browse/GEODE-6564
> Project: Geode
>  Issue Type: Bug
>  Components: regions
>Reporter: Barrett Oglesby
>Assignee: Darrel Schneider
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.1, 1.13.0, 1.14.0
>
>
> Clearing a replicated region with expiration causes a memory leak
> Both the RegionEntries and EntryExpiryTasks are still live after loading 
> entries into the region and then clearing it.
> Server Startup:
> {noformat}
>  num #instances #bytes class name
> --
>  1: 29856 2797840 [C
>  4: 2038 520600 [B
> Total 187711 10089624
> {noformat}
> Load 100 entries with 600k payload (representing a session):
> {noformat}
>  num #instances #bytes class name
> --
>  1: 2496 60666440 [B
>  2: 30157 2828496 [C
>  73: 100 7200 
> org.apache.geode.internal.cache.entries.VersionedStatsRegionEntryHeapStringKey1
>  93: 100 4800 org.apache.geode.internal.cache.EntryExpiryTask
> Total 190737 70240472
> {noformat}
> Clear region:
> {noformat}
>  num #instances #bytes class name
> --
>  1: 2398 60505944 [B
>  2: 30448 2849456 [C
>  74: 100 7200 
> org.apache.geode.internal.cache.entries.VersionedStatsRegionEntryHeapStringKey1
>  100: 100 4800 org.apache.geode.internal.cache.EntryExpiryTask
> Total 192199 70373048
> {noformat}
> Load and clear another 100 entries:
> {noformat}
>  num #instances #bytes class name
> --
>  1: 2503 120511688 [B
>  2: 30506 2854384 [C
>  46: 200 14400 
> org.apache.geode.internal.cache.entries.VersionedStatsRegionEntryHeapStringKey1
>  61: 200 9600 org.apache.geode.internal.cache.EntryExpiryTask
> Total 193272 130421432
> {noformat}
> Load and clear another 100 entries:
> {noformat}
>  num #instances #bytes class name
> --
>  1: 2600 180517240 [B
>  2: 30562 2859584 [C
>  33: 300 21600 
> org.apache.geode.internal.cache.entries.VersionedStatsRegionEntryHeapStringKey1
>  47: 300 14400 org.apache.geode.internal.cache.EntryExpiryTask
> Total 194310 190468176
> {noformat}
> A heap dump shows the VersionedStatsRegionEntryHeapStringKey1 instances are 
> referenced by the DistributedRegion entryExpiryTasks:
> {noformat}
> --> org.apache.geode.internal.cache.DistributedRegion@0x76adbbb88 (816 bytes) 
> (field entryExpiryTasks:)
> --> java.util.concurrent.ConcurrentHashMap@0x76adbc028 (100 bytes) (field 
> table:)
> --> [Ljava.util.concurrent.ConcurrentHashMap$Node;@0x76ee85358 (4112 bytes) 
> (Element 276 of [Ljava.util.concurrent.ConcurrentHashMap$Node;@0x76ee85358:)
> --> java.util.concurrent.ConcurrentHashMap$Node@0x76edc4e20 (44 bytes) (field 
> next:)
> --> java.util.concurrent.ConcurrentHashMap$Node@0x76edc32f0 (44 bytes) (field 
> key:)
> --> 
> org.apache.geode.internal.cache.entries.VersionedStatsRegionEntryHeapStringKey1@0x76edc3210
>  (86 bytes) 
> {noformat}
> LocalRegion.cancelAllEntryExpiryTasks is called when the region is cleared:
> {noformat}
> java.lang.Exception: Stack trace
>  at java.lang.Thread.dumpStack(Thread.java:1333)
>  at 
> org.apache.geode.internal.cache.LocalRegion.cancelAllEntryExpiryTasks(LocalRegion.java:8202)
>  at 
> org.apache.geode.internal.cache.LocalRegion.clearRegionLocally(LocalRegion.java:9094)
>  at 
> org.apache.geode.internal.cache.DistributedRegion.cmnClearRegion(DistributedRegion.java:1962)
>  at 
> org.apache.geode.internal.cache.LocalRegion.basicClear(LocalRegion.java:8998)
>  at 
> org.apache.geode.internal.cache.DistributedRegion.basicClear(DistributedRegion.java:1939)
>  at 
> org.apache.geode.internal.cache.LocalRegion.basicBridgeClear(LocalRegion.java:8988)
>  at 
> org.apache.geode.internal.cache.tier.sockets.command.ClearRegion.cmdExecute(ClearRegion.java:123)
> {noformat}
> But it doesn't clear the entryExpiryTasks map:
> {noformat}
> LocalRegion.clearRegionLocally before cancelAllEntryExpiryTasks 
> entryExpiryTasks=100
> LocalRegion.clearRegionLocally after cancelAllEntryExpiryTasks 
> entryExpiryTasks=100
> {noformat}
> As a test, I added this call to the bottom of the cancelAllEntryExpiryTasks 
> method:
> {noformat}
> this.entryExpiryTasks.clear();
> {noformat}
> This addressed the leak:
> {noformat}
> Server Startup: Total 182414 9855616
> Load/Clear 1: Total 191049 10315832
> Load/Clear 2: Total 191978 10329664
> Load/Clear 3: Total 192638 10360360
> {noformat}
> As a 

[jira] [Resolved] (GEODE-6564) Clearing a replicated region with expiration causes a memory leak

2020-08-03 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-6564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider resolved GEODE-6564.
-
Fix Version/s: 1.14.0
   Resolution: Fixed

> Clearing a replicated region with expiration causes a memory leak
> -
>
> Key: GEODE-6564
> URL: https://issues.apache.org/jira/browse/GEODE-6564
> Project: Geode
>  Issue Type: Bug
>  Components: regions
>Reporter: Barrett Oglesby
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> Clearing a replicated region with expiration causes a memory leak
> Both the RegionEntries and EntryExpiryTasks are still live after loading 
> entries into the region and then clearing it.
> Server Startup:
> {noformat}
>  num #instances #bytes class name
> --
>  1: 29856 2797840 [C
>  4: 2038 520600 [B
> Total 187711 10089624
> {noformat}
> Load 100 entries with 600k payload (representing a session):
> {noformat}
>  num #instances #bytes class name
> --
>  1: 2496 60666440 [B
>  2: 30157 2828496 [C
>  73: 100 7200 
> org.apache.geode.internal.cache.entries.VersionedStatsRegionEntryHeapStringKey1
>  93: 100 4800 org.apache.geode.internal.cache.EntryExpiryTask
> Total 190737 70240472
> {noformat}
> Clear region:
> {noformat}
>  num #instances #bytes class name
> --
>  1: 2398 60505944 [B
>  2: 30448 2849456 [C
>  74: 100 7200 
> org.apache.geode.internal.cache.entries.VersionedStatsRegionEntryHeapStringKey1
>  100: 100 4800 org.apache.geode.internal.cache.EntryExpiryTask
> Total 192199 70373048
> {noformat}
> Load and clear another 100 entries:
> {noformat}
>  num #instances #bytes class name
> --
>  1: 2503 120511688 [B
>  2: 30506 2854384 [C
>  46: 200 14400 
> org.apache.geode.internal.cache.entries.VersionedStatsRegionEntryHeapStringKey1
>  61: 200 9600 org.apache.geode.internal.cache.EntryExpiryTask
> Total 193272 130421432
> {noformat}
> Load and clear another 100 entries:
> {noformat}
>  num #instances #bytes class name
> --
>  1: 2600 180517240 [B
>  2: 30562 2859584 [C
>  33: 300 21600 
> org.apache.geode.internal.cache.entries.VersionedStatsRegionEntryHeapStringKey1
>  47: 300 14400 org.apache.geode.internal.cache.EntryExpiryTask
> Total 194310 190468176
> {noformat}
> A heap dump shows the VersionedStatsRegionEntryHeapStringKey1 instances are 
> referenced by the DistributedRegion entryExpiryTasks:
> {noformat}
> --> org.apache.geode.internal.cache.DistributedRegion@0x76adbbb88 (816 bytes) 
> (field entryExpiryTasks:)
> --> java.util.concurrent.ConcurrentHashMap@0x76adbc028 (100 bytes) (field 
> table:)
> --> [Ljava.util.concurrent.ConcurrentHashMap$Node;@0x76ee85358 (4112 bytes) 
> (Element 276 of [Ljava.util.concurrent.ConcurrentHashMap$Node;@0x76ee85358:)
> --> java.util.concurrent.ConcurrentHashMap$Node@0x76edc4e20 (44 bytes) (field 
> next:)
> --> java.util.concurrent.ConcurrentHashMap$Node@0x76edc32f0 (44 bytes) (field 
> key:)
> --> 
> org.apache.geode.internal.cache.entries.VersionedStatsRegionEntryHeapStringKey1@0x76edc3210
>  (86 bytes) 
> {noformat}
> LocalRegion.cancelAllEntryExpiryTasks is called when the region is cleared:
> {noformat}
> java.lang.Exception: Stack trace
>  at java.lang.Thread.dumpStack(Thread.java:1333)
>  at 
> org.apache.geode.internal.cache.LocalRegion.cancelAllEntryExpiryTasks(LocalRegion.java:8202)
>  at 
> org.apache.geode.internal.cache.LocalRegion.clearRegionLocally(LocalRegion.java:9094)
>  at 
> org.apache.geode.internal.cache.DistributedRegion.cmnClearRegion(DistributedRegion.java:1962)
>  at 
> org.apache.geode.internal.cache.LocalRegion.basicClear(LocalRegion.java:8998)
>  at 
> org.apache.geode.internal.cache.DistributedRegion.basicClear(DistributedRegion.java:1939)
>  at 
> org.apache.geode.internal.cache.LocalRegion.basicBridgeClear(LocalRegion.java:8988)
>  at 
> org.apache.geode.internal.cache.tier.sockets.command.ClearRegion.cmdExecute(ClearRegion.java:123)
> {noformat}
> But it doesn't clear the entryExpiryTasks map:
> {noformat}
> LocalRegion.clearRegionLocally before cancelAllEntryExpiryTasks 
> entryExpiryTasks=100
> LocalRegion.clearRegionLocally after cancelAllEntryExpiryTasks 
> entryExpiryTasks=100
> {noformat}
> As a test, I added this call to the bottom of the cancelAllEntryExpiryTasks 
> method:
> {noformat}
> this.entryExpiryTasks.clear();
> {noformat}
> This addressed the leak:
> {noformat}
> Server Startup: Total 182414 9855616
> Load/Clear 1: Total 191049 10315832
> Load/Clear 2: Total 191978 10329664
> Load/Clear 3: Total 192638 10360360
> {noformat}
> As a work-around, a Function that 

[jira] [Assigned] (GEODE-6564) Clearing a replicated region with expiration causes a memory leak

2020-08-03 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-6564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider reassigned GEODE-6564:
---

Assignee: Darrel Schneider

> Clearing a replicated region with expiration causes a memory leak
> -
>
> Key: GEODE-6564
> URL: https://issues.apache.org/jira/browse/GEODE-6564
> Project: Geode
>  Issue Type: Bug
>  Components: regions
>Reporter: Barrett Oglesby
>Assignee: Darrel Schneider
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> Clearing a replicated region with expiration causes a memory leak
> Both the RegionEntries and EntryExpiryTasks are still live after loading 
> entries into the region and then clearing it.
> Server Startup:
> {noformat}
>  num #instances #bytes class name
> --
>  1: 29856 2797840 [C
>  4: 2038 520600 [B
> Total 187711 10089624
> {noformat}
> Load 100 entries with 600k payload (representing a session):
> {noformat}
>  num #instances #bytes class name
> --
>  1: 2496 60666440 [B
>  2: 30157 2828496 [C
>  73: 100 7200 
> org.apache.geode.internal.cache.entries.VersionedStatsRegionEntryHeapStringKey1
>  93: 100 4800 org.apache.geode.internal.cache.EntryExpiryTask
> Total 190737 70240472
> {noformat}
> Clear region:
> {noformat}
>  num #instances #bytes class name
> --
>  1: 2398 60505944 [B
>  2: 30448 2849456 [C
>  74: 100 7200 
> org.apache.geode.internal.cache.entries.VersionedStatsRegionEntryHeapStringKey1
>  100: 100 4800 org.apache.geode.internal.cache.EntryExpiryTask
> Total 192199 70373048
> {noformat}
> Load and clear another 100 entries:
> {noformat}
>  num #instances #bytes class name
> --
>  1: 2503 120511688 [B
>  2: 30506 2854384 [C
>  46: 200 14400 
> org.apache.geode.internal.cache.entries.VersionedStatsRegionEntryHeapStringKey1
>  61: 200 9600 org.apache.geode.internal.cache.EntryExpiryTask
> Total 193272 130421432
> {noformat}
> Load and clear another 100 entries:
> {noformat}
>  num #instances #bytes class name
> --
>  1: 2600 180517240 [B
>  2: 30562 2859584 [C
>  33: 300 21600 
> org.apache.geode.internal.cache.entries.VersionedStatsRegionEntryHeapStringKey1
>  47: 300 14400 org.apache.geode.internal.cache.EntryExpiryTask
> Total 194310 190468176
> {noformat}
> A heap dump shows the VersionedStatsRegionEntryHeapStringKey1 instances are 
> referenced by the DistributedRegion entryExpiryTasks:
> {noformat}
> --> org.apache.geode.internal.cache.DistributedRegion@0x76adbbb88 (816 bytes) 
> (field entryExpiryTasks:)
> --> java.util.concurrent.ConcurrentHashMap@0x76adbc028 (100 bytes) (field 
> table:)
> --> [Ljava.util.concurrent.ConcurrentHashMap$Node;@0x76ee85358 (4112 bytes) 
> (Element 276 of [Ljava.util.concurrent.ConcurrentHashMap$Node;@0x76ee85358:)
> --> java.util.concurrent.ConcurrentHashMap$Node@0x76edc4e20 (44 bytes) (field 
> next:)
> --> java.util.concurrent.ConcurrentHashMap$Node@0x76edc32f0 (44 bytes) (field 
> key:)
> --> 
> org.apache.geode.internal.cache.entries.VersionedStatsRegionEntryHeapStringKey1@0x76edc3210
>  (86 bytes) 
> {noformat}
> LocalRegion.cancelAllEntryExpiryTasks is called when the region is cleared:
> {noformat}
> java.lang.Exception: Stack trace
>  at java.lang.Thread.dumpStack(Thread.java:1333)
>  at 
> org.apache.geode.internal.cache.LocalRegion.cancelAllEntryExpiryTasks(LocalRegion.java:8202)
>  at 
> org.apache.geode.internal.cache.LocalRegion.clearRegionLocally(LocalRegion.java:9094)
>  at 
> org.apache.geode.internal.cache.DistributedRegion.cmnClearRegion(DistributedRegion.java:1962)
>  at 
> org.apache.geode.internal.cache.LocalRegion.basicClear(LocalRegion.java:8998)
>  at 
> org.apache.geode.internal.cache.DistributedRegion.basicClear(DistributedRegion.java:1939)
>  at 
> org.apache.geode.internal.cache.LocalRegion.basicBridgeClear(LocalRegion.java:8988)
>  at 
> org.apache.geode.internal.cache.tier.sockets.command.ClearRegion.cmdExecute(ClearRegion.java:123)
> {noformat}
> But it doesn't clear the entryExpiryTasks map:
> {noformat}
> LocalRegion.clearRegionLocally before cancelAllEntryExpiryTasks 
> entryExpiryTasks=100
> LocalRegion.clearRegionLocally after cancelAllEntryExpiryTasks 
> entryExpiryTasks=100
> {noformat}
> As a test, I added this call to the bottom of the cancelAllEntryExpiryTasks 
> method:
> {noformat}
> this.entryExpiryTasks.clear();
> {noformat}
> This addressed the leak:
> {noformat}
> Server Startup: Total 182414 9855616
> Load/Clear 1: Total 191049 10315832
> Load/Clear 2: Total 191978 10329664
> Load/Clear 3: Total 192638 10360360
> {noformat}
> As a 

[jira] [Commented] (GEODE-6564) Clearing a replicated region with expiration causes a memory leak

2020-08-03 Thread Darrel Schneider (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-6564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17170148#comment-17170148
 ] 

Darrel Schneider commented on GEODE-6564:
-

I think the fix for this is pretty simple. In 
LocalRegion.cancelAllEntryExpiryTasks, as the values of the entryExpiryTasks 
map are iterated, the method needs to also remove each value from the map. 
Currently it only cancels each task and purges the scheduler, which removes 
the task from the scheduler but leaves it in the entryExpiryTasks map.
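A minimal sketch of that fix, using plain JDK collections; the Task class and
the map shape here are stand-ins for Geode's EntryExpiryTask and
entryExpiryTasks, not the actual API:

```java
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Stand-in for Geode's EntryExpiryTask; only cancellation matters here.
class Task {
    boolean cancelled;
    void cancel() { cancelled = true; }
}

class CancelAllSketch {
    // Leaky version: cancels every task but leaves it in the map, so the
    // map keeps the tasks (and their region-entry keys) reachable.
    static void cancelLeaky(Map<String, Task> tasks) {
        for (Task t : tasks.values()) {
            t.cancel();
        }
    }

    // Fixed version: remove each value from the map as it is cancelled.
    static void cancelAndRemove(Map<String, Task> tasks) {
        for (Iterator<Task> it = tasks.values().iterator(); it.hasNext(); ) {
            it.next().cancel();
            it.remove(); // the step the leaky version is missing
        }
    }

    public static void main(String[] args) {
        Map<String, Task> tasks = new ConcurrentHashMap<>();
        tasks.put("key1", new Task());
        tasks.put("key2", new Task());
        cancelAndRemove(tasks);
        System.out.println(tasks.size()); // prints 0: nothing left to leak
    }
}
```

ConcurrentHashMap's values() iterator supports remove(), so the fixed loop is
safe even on the concurrent map the region uses.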

> Clearing a replicated region with expiration causes a memory leak
> -
>
> Key: GEODE-6564
> URL: https://issues.apache.org/jira/browse/GEODE-6564
> Project: Geode
>  Issue Type: Bug
>  Components: regions
>Reporter: Barrett Oglesby
>Priority: Major
>
> Clearing a replicated region with expiration causes a memory leak
> Both the RegionEntries and EntryExpiryTasks are still live after loading 
> entries into the region and then clearing it.
> Server Startup:
> {noformat}
>  num #instances #bytes class name
> --
>  1: 29856 2797840 [C
>  4: 2038 520600 [B
> Total 187711 10089624
> {noformat}
> Load 100 entries with 600k payload (representing a session):
> {noformat}
>  num #instances #bytes class name
> --
>  1: 2496 60666440 [B
>  2: 30157 2828496 [C
>  73: 100 7200 
> org.apache.geode.internal.cache.entries.VersionedStatsRegionEntryHeapStringKey1
>  93: 100 4800 org.apache.geode.internal.cache.EntryExpiryTask
> Total 190737 70240472
> {noformat}
> Clear region:
> {noformat}
>  num #instances #bytes class name
> --
>  1: 2398 60505944 [B
>  2: 30448 2849456 [C
>  74: 100 7200 
> org.apache.geode.internal.cache.entries.VersionedStatsRegionEntryHeapStringKey1
>  100: 100 4800 org.apache.geode.internal.cache.EntryExpiryTask
> Total 192199 70373048
> {noformat}
> Load and clear another 100 entries:
> {noformat}
>  num #instances #bytes class name
> --
>  1: 2503 120511688 [B
>  2: 30506 2854384 [C
>  46: 200 14400 
> org.apache.geode.internal.cache.entries.VersionedStatsRegionEntryHeapStringKey1
>  61: 200 9600 org.apache.geode.internal.cache.EntryExpiryTask
> Total 193272 130421432
> {noformat}
> Load and clear another 100 entries:
> {noformat}
>  num #instances #bytes class name
> --
>  1: 2600 180517240 [B
>  2: 30562 2859584 [C
>  33: 300 21600 
> org.apache.geode.internal.cache.entries.VersionedStatsRegionEntryHeapStringKey1
>  47: 300 14400 org.apache.geode.internal.cache.EntryExpiryTask
> Total 194310 190468176
> {noformat}
> A heap dump shows the VersionedStatsRegionEntryHeapStringKey1 instances are 
> referenced by the DistributedRegion entryExpiryTasks:
> {noformat}
> --> org.apache.geode.internal.cache.DistributedRegion@0x76adbbb88 (816 bytes) 
> (field entryExpiryTasks:)
> --> java.util.concurrent.ConcurrentHashMap@0x76adbc028 (100 bytes) (field 
> table:)
> --> [Ljava.util.concurrent.ConcurrentHashMap$Node;@0x76ee85358 (4112 bytes) 
> (Element 276 of [Ljava.util.concurrent.ConcurrentHashMap$Node;@0x76ee85358:)
> --> java.util.concurrent.ConcurrentHashMap$Node@0x76edc4e20 (44 bytes) (field 
> next:)
> --> java.util.concurrent.ConcurrentHashMap$Node@0x76edc32f0 (44 bytes) (field 
> key:)
> --> 
> org.apache.geode.internal.cache.entries.VersionedStatsRegionEntryHeapStringKey1@0x76edc3210
>  (86 bytes) 
> {noformat}
> LocalRegion.cancelAllEntryExpiryTasks is called when the region is cleared:
> {noformat}
> java.lang.Exception: Stack trace
>  at java.lang.Thread.dumpStack(Thread.java:1333)
>  at 
> org.apache.geode.internal.cache.LocalRegion.cancelAllEntryExpiryTasks(LocalRegion.java:8202)
>  at 
> org.apache.geode.internal.cache.LocalRegion.clearRegionLocally(LocalRegion.java:9094)
>  at 
> org.apache.geode.internal.cache.DistributedRegion.cmnClearRegion(DistributedRegion.java:1962)
>  at 
> org.apache.geode.internal.cache.LocalRegion.basicClear(LocalRegion.java:8998)
>  at 
> org.apache.geode.internal.cache.DistributedRegion.basicClear(DistributedRegion.java:1939)
>  at 
> org.apache.geode.internal.cache.LocalRegion.basicBridgeClear(LocalRegion.java:8988)
>  at 
> org.apache.geode.internal.cache.tier.sockets.command.ClearRegion.cmdExecute(ClearRegion.java:123)
> {noformat}
> But it doesn't clear the entryExpiryTasks map:
> {noformat}
> LocalRegion.clearRegionLocally before cancelAllEntryExpiryTasks 
> entryExpiryTasks=100
> LocalRegion.clearRegionLocally after cancelAllEntryExpiryTasks 
> entryExpiryTasks=100
> {noformat}
> As a test, I added this call to the bottom of the cancelAllEntryExpiryTasks 
> method:
> {noformat}
> this.entryExpiryTasks.clear();
> {noformat}
> This addressed the leak:
> {noformat}
> Server Startup: Total 182414 9855616
> Load/Clear 1: Total 191049 10315832
> Load/Clear 2: Total 191978 10329664
> Load/Clear 3: Total 192638 10360360
> {noformat}

[jira] [Resolved] (GEODE-8387) SUNIONSTORE, SINTERSTORE, and SDIFFSTORE should delete the target key if the computed set is empty

2020-07-31 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider resolved GEODE-8387.
-
Fix Version/s: 1.14.0
   Resolution: Fixed

> SUNIONSTORE, SINTERSTORE, and SDIFFSTORE should delete the target key if the 
> computed set is empty
> --
>
> Key: GEODE-8387
> URL: https://issues.apache.org/jira/browse/GEODE-8387
> Project: Geode
>  Issue Type: Bug
>  Components: redis
>Reporter: Darrel Schneider
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> The current implementation of SUNIONSTORE, SINTERSTORE, and SDIFFSTORE sets 
> the destination to an empty set if the computed set is empty. Instead, the 
> commands should delete the destination in that case. This also applies when 
> the read keys do not exist.
> If the read keys exist but are not sets, the commands should fail with a 
> WRONGTYPE error.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (GEODE-8386) SRANDMEMBER with negative count should return that many members

2020-07-31 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider resolved GEODE-8386.
-
Fix Version/s: 1.14.0
   Resolution: Fixed

> SRANDMEMBER with negative count should return that many members
> ---
>
> Key: GEODE-8386
> URL: https://issues.apache.org/jira/browse/GEODE-8386
> Project: Geode
>  Issue Type: Bug
>  Components: redis
>Reporter: Darrel Schneider
>Assignee: Darrel Schneider
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> When SRANDMEMBER is sent a negative count, it should return a number of items 
> equal to the absolute value of the count. Uniqueness is not guaranteed.
>  
> {{SADD abc 123 456
> SRANDMEMBER abc -1 -> 123
> SRANDMEMBER abc -2 -> 123, 123 OR 123, 456, 456, 123 OR 456, 456
> SRANDMEMBER abc -5 -> 123,123,456,123,456 OR any one of the 2^5 
> combinations...}}
> see the redis unit tests and docs: unit/type/set -> SRANDMEMBER with  
> - hashtable
> and documentation: [https://redis.io/commands/srandmember]
> Currently, Geode Redis returns only up to as many members as there are in the 
> set.





[jira] [Resolved] (GEODE-8384) SETEX error message should be consistent with native redis

2020-07-31 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider resolved GEODE-8384.
-
Fix Version/s: 1.14.0
   Resolution: Fixed

> SETEX error message should be consistent with native redis
> --
>
> Key: GEODE-8384
> URL: https://issues.apache.org/jira/browse/GEODE-8384
> Project: Geode
>  Issue Type: Bug
>  Components: redis
>Reporter: Darrel Schneider
>Assignee: Darrel Schneider
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> When we pass a negative expiration to SETEX on native redis we get the 
> following error message:
>  
> {{ERR invalid expire time in setex}}
> When we pass a negative expiration to SETEX on geode redis we get the 
> following error message:
>  
> {{ERR The expiration argument must be greater than 0}}
> These error messages should match.
> Revert the following native Redis test back to its original form:
>  
> {{test \{SETEX - Wrong time parameter}}}
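The alignment amounts to returning the native wording when the expiration is
not positive. A sketch with illustrative names, not Geode's actual code:

```java
class SetexSketch {
    // Validate the SETEX expiration up front and return the native-redis
    // wording, so geode redis and native redis report the same error.
    static String setex(String key, long seconds, String value) {
        if (seconds <= 0) {
            return "ERR invalid expire time in setex"; // native wording
        }
        // ... store `value` at `key` with a TTL of `seconds` ...
        return "OK";
    }
}
```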





[jira] [Updated] (GEODE-8333) Fix PUBSUB hang

2020-07-29 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider updated GEODE-8333:

Description: 
PUBSUB hangs with concurrent publishers and subscribers on multiple servers.

The initial fix is being reverted because it caused a bug in which responses to 
clients could now be out of order.

 

  was:PUBSUB hangs with concurrent publishers and subscribers on multiple 
servers


> Fix PUBSUB hang
> ---
>
> Key: GEODE-8333
> URL: https://issues.apache.org/jira/browse/GEODE-8333
> Project: Geode
>  Issue Type: Bug
>  Components: redis
>Reporter: Sarah Abbey
>Assignee: Darrel Schneider
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> PUBSUB hangs with concurrent publishers and subscribers on multiple servers.
> The initial fix is being reverted because it caused a bug in which responses 
> to clients could now be out of order.
>  





[jira] [Reopened] (GEODE-8333) Fix PUBSUB hang

2020-07-29 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider reopened GEODE-8333:
-

> Fix PUBSUB hang
> ---
>
> Key: GEODE-8333
> URL: https://issues.apache.org/jira/browse/GEODE-8333
> Project: Geode
>  Issue Type: Bug
>  Components: redis
>Reporter: Sarah Abbey
>Assignee: Darrel Schneider
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> PUBSUB hangs with concurrent publishers and subscribers on multiple servers





[jira] [Created] (GEODE-8387) SUNIONSTORE, SINTERSTORE, and SDIFFSTORE should delete the target key if the computed set is empty

2020-07-27 Thread Darrel Schneider (Jira)
Darrel Schneider created GEODE-8387:
---

 Summary: SUNIONSTORE, SINTERSTORE, and SDIFFSTORE should delete 
the target key if the computed set is empty
 Key: GEODE-8387
 URL: https://issues.apache.org/jira/browse/GEODE-8387
 Project: Geode
  Issue Type: Bug
  Components: redis
Reporter: Darrel Schneider


The current implementation of SUNIONSTORE, SINTERSTORE, and SDIFFSTORE sets 
the destination to an empty set if the computed set is empty. Instead, the 
commands should delete the destination in that case. This also applies when 
the read keys do not exist.

If the read keys exist but are not sets, the commands should fail with a 
WRONGTYPE error.
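The intended semantics can be sketched with plain Java collections standing in
for the Redis data store; SetStoreSketch, store, and sdiff are illustrative
names, not Geode's actual classes:

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

class SetStoreSketch {
    // Store the computed set at `dest`. An empty result deletes the
    // destination key, matching native Redis, instead of binding it
    // to an empty set.
    static void store(Map<String, Set<String>> db, String dest, Set<String> computed) {
        if (computed.isEmpty()) {
            db.remove(dest); // native Redis deletes the destination here
        } else {
            db.put(dest, computed);
        }
    }

    // SDIFF of two keys; a missing key reads as the empty set.
    static Set<String> sdiff(Map<String, Set<String>> db, String a, String b) {
        Set<String> result = new HashSet<>(db.getOrDefault(a, Set.of()));
        result.removeAll(db.getOrDefault(b, Set.of()));
        return result;
    }
}
```

With this behavior, SDIFFSTORE dest a a leaves dest absent rather than bound
to an empty set, so a later EXISTS dest returns 0 as it does on native Redis.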





[jira] [Assigned] (GEODE-8386) SRANDMEMBER with negative count should return that many members

2020-07-27 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider reassigned GEODE-8386:
---

Assignee: Darrel Schneider

> SRANDMEMBER with negative count should return that many members
> ---
>
> Key: GEODE-8386
> URL: https://issues.apache.org/jira/browse/GEODE-8386
> Project: Geode
>  Issue Type: Bug
>  Components: redis
>Reporter: Darrel Schneider
>Assignee: Darrel Schneider
>Priority: Major
>
> When SRANDMEMBER is sent a negative count, it should return a number of items 
> equal to the absolute value of the count. Uniqueness is not guaranteed.
>  
> {{SADD abc 123 456
> SRANDMEMBER abc -1 -> 123
> SRANDMEMBER abc -2 -> 123, 123 OR 123, 456, 456, 123 OR 456, 456
> SRANDMEMBER abc -5 -> 123,123,456,123,456 OR any one of the 2^5 
> combinations...}}
> see the redis unit tests and docs: unit/type/set -> SRANDMEMBER with  
> - hashtable
> and documentation: [https://redis.io/commands/srandmember]
> Currently, Geode Redis returns only up to as many members as there are in the 
> set.





[jira] [Created] (GEODE-8386) SRANDMEMBER with negative count should return that many members

2020-07-27 Thread Darrel Schneider (Jira)
Darrel Schneider created GEODE-8386:
---

 Summary: SRANDMEMBER with negative count should return that many 
members
 Key: GEODE-8386
 URL: https://issues.apache.org/jira/browse/GEODE-8386
 Project: Geode
  Issue Type: Bug
  Components: redis
Reporter: Darrel Schneider


When SRANDMEMBER is sent a negative count, it should return a number of items 
equal to the absolute value of the count. Uniqueness is not guaranteed.

 

{{SADD abc 123 456
SRANDMEMBER abc -1 -> 123
SRANDMEMBER abc -2 -> 123, 123 OR 123, 456, 456, 123 OR 456, 456
SRANDMEMBER abc -5 -> 123,123,456,123,456 OR any one of the 2^5 
combinations...}}

see the redis unit tests and docs: unit/type/set -> SRANDMEMBER with  - 
hashtable
and documentation: [https://redis.io/commands/srandmember]

Currently, Geode Redis returns only up to as many members as there are in the 
set.
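A sketch of the negative-count behavior, with illustrative names (this is not
Geode's implementation); a negative count samples with replacement, so
duplicates are expected:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

class SrandmemberSketch {
    // Negative count: return |count| members sampled with replacement,
    // so duplicates are allowed and the result may be larger than the set.
    // Positive count is capped at the set size (distinct members).
    static List<String> srandmember(List<String> members, int count, Random rnd) {
        List<String> result = new ArrayList<>();
        if (count < 0) {
            for (int i = 0; i < -count; i++) {
                result.add(members.get(rnd.nextInt(members.size())));
            }
        } else {
            // Simplified positive-count branch: first min(count, size)
            // members rather than a true random subset.
            for (int i = 0; i < Math.min(count, members.size()); i++) {
                result.add(members.get(i));
            }
        }
        return result;
    }
}
```

For the {{SADD abc 123 456}} example above, a count of -5 always yields five
items drawn from {123, 456}, matching one of the 2^5 combinations.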





[jira] [Assigned] (GEODE-8384) SETEX error message should be consistent with native redis

2020-07-27 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider reassigned GEODE-8384:
---

Assignee: Darrel Schneider

> SETEX error message should be consistent with native redis
> --
>
> Key: GEODE-8384
> URL: https://issues.apache.org/jira/browse/GEODE-8384
> Project: Geode
>  Issue Type: Bug
>  Components: redis
>Reporter: Darrel Schneider
>Assignee: Darrel Schneider
>Priority: Major
>
> When we pass a negative expiration to SETEX on native redis we get the 
> following error message:
>  
> {{ERR invalid expire time in setex}}
> When we pass a negative expiration to SETEX on geode redis we get the 
> following error message:
>  
> {{ERR The expiration argument must be greater than 0}}
> These error messages should match.
> Revert the following native Redis test back to its original form:
>  
> {{test \{SETEX - Wrong time parameter}}}





[jira] [Updated] (GEODE-8384) SETEX error message should be consistent with native redis

2020-07-27 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider updated GEODE-8384:

Issue Type: Bug  (was: Improvement)

> SETEX error message should be consistent with native redis
> --
>
> Key: GEODE-8384
> URL: https://issues.apache.org/jira/browse/GEODE-8384
> Project: Geode
>  Issue Type: Bug
>  Components: redis
>Reporter: Darrel Schneider
>Priority: Major
>
> When we pass a negative expiration to SETEX on native redis we get the 
> following error message:
>  
> {{ERR invalid expire time in setex}}
> When we pass a negative expiration to SETEX on geode redis we get the 
> following error message:
>  
> {{ERR The expiration argument must be greater than 0}}
> These error messages should match.
> Revert the following native Redis test back to its original form:
>  
> {{test \{SETEX - Wrong time parameter}}}





[jira] [Resolved] (GEODE-8333) Fix PUBSUB hang

2020-07-24 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider resolved GEODE-8333.
-
Fix Version/s: 1.14.0
   Resolution: Fixed

> Fix PUBSUB hang
> ---
>
> Key: GEODE-8333
> URL: https://issues.apache.org/jira/browse/GEODE-8333
> Project: Geode
>  Issue Type: Bug
>  Components: redis
>Reporter: Sarah Abbey
>Assignee: Darrel Schneider
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> PUBSUB hangs with concurrent publishers and subscribers on multiple servers





[jira] [Assigned] (GEODE-8333) Fix PUBSUB hang

2020-07-24 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider reassigned GEODE-8333:
---

Assignee: Darrel Schneider

> Fix PUBSUB hang
> ---
>
> Key: GEODE-8333
> URL: https://issues.apache.org/jira/browse/GEODE-8333
> Project: Geode
>  Issue Type: Bug
>  Components: redis
>Reporter: Sarah Abbey
>Assignee: Darrel Schneider
>Priority: Major
>  Labels: pull-request-available
>
> PUBSUB hangs with concurrent publishers and subscribers on multiple servers





[jira] [Resolved] (GEODE-8375) No-op test to run Redis API for Geode for local development

2020-07-23 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider resolved GEODE-8375.
-
Fix Version/s: 1.14.0
   Resolution: Fixed

> No-op test to run Redis API for Geode for local development
> ---
>
> Key: GEODE-8375
> URL: https://issues.apache.org/jira/browse/GEODE-8375
> Project: Geode
>  Issue Type: Test
>  Components: redis
>Reporter: Sarah Abbey
>Priority: Trivial
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> No-op test to run Redis API for Geode for local development





[jira] [Resolved] (GEODE-8362) Add redis tests to ensure that commands can access binary data

2020-07-21 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider resolved GEODE-8362.
-
Fix Version/s: 1.14.0
   Resolution: Fixed

> Add redis tests to ensure that commands can access binary data
> --
>
> Key: GEODE-8362
> URL: https://issues.apache.org/jira/browse/GEODE-8362
> Project: Geode
>  Issue Type: Test
>  Components: redis
>Reporter: Jens Deppe
>Assignee: Jens Deppe
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>






[jira] [Resolved] (GEODE-8338) Redis commands may be repeated when server dies

2020-07-10 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider resolved GEODE-8338.
-
Fix Version/s: 1.14.0
   Resolution: Fixed

> Redis commands may be repeated when server dies
> ---
>
> Key: GEODE-8338
> URL: https://issues.apache.org/jira/browse/GEODE-8338
> Project: Geode
>  Issue Type: Bug
>  Components: redis
>Reporter: Sarah Abbey
>Assignee: Darrel Schneider
>Priority: Major
> Fix For: 1.14.0
>
>
> Since we have one redundant copy of the data, and since we modify the data 
> using a function, I think we may have a data corruption issue with 
> non-idempotent operations. What can happen is that an operation like APPEND 
> can:
>  0) executor called on non-primary redis server, 
>  1) modify the primary (by sending a function exec to it), 
>  2) modify the secondary (by sending a geode delta to it), 
>  3) the primary server fails now (before the function executing on it 
> completes), 
>  4) the non-primary redis server sees the function fail and that it is marked 
> as HA so it retries it. This time it sends it the secondary, which is the new 
> primary, but the operation was actually done on the secondary so this retry 
> will end up doing the operation twice.
> This may be okay for certain ops (like SADD) that are idempotent (but even 
> they could cause extra key events in the future), but for ops like APPEND we 
> end up appending twice.
> This will only happen when a server executing a function dies and our 
> function service retries the function on another server because it is marked 
> HA. The easy way to fix this is to change our function to not be HA. This is 
> just a one-line change.
>  Note that our clients can already see exceptions/errors if the server they 
> are connected to dies. When that happens the operation they requested may 
> have happened, and if they have multiple geode redis servers running it may 
> have been stored and still in memory. So clients will need some logic to 
> decide if they should redo such an operation or not (because it is already 
> done).
> *Note:* By making the function non-HA, it should just give the client another 
> case in which they need to handle a server crash. It can now be for servers 
> they were not connected to but that were involved in performing the operation 
> they requested.
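The one-line change described above amounts to having the command function
report itself as non-HA, so the function service never retries it on another
member. A self-contained sketch with a minimal stand-in for Geode's Function
interface (names illustrative):

```java
// Minimal stand-in for org.apache.geode.cache.execute.Function, just
// enough to show where the one-line change goes.
interface HaFunction {
    void execute();
    default boolean isHA() { return true; } // HA functions are retried
}

class RedisCommandFunction implements HaFunction {
    @Override
    public void execute() {
        // ... perform the redis operation against the region ...
    }

    // The fix: opt out of HA so the function service never retries the
    // operation on another member after a crash; a non-idempotent op
    // like APPEND can then no longer be applied twice.
    @Override
    public boolean isHA() { return false; }
}
```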





[jira] [Assigned] (GEODE-8338) Redis commands may be repeated when server dies

2020-07-10 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider reassigned GEODE-8338:
---

Assignee: Darrel Schneider

> Redis commands may be repeated when server dies
> ---
>
> Key: GEODE-8338
> URL: https://issues.apache.org/jira/browse/GEODE-8338
> Project: Geode
>  Issue Type: Bug
>  Components: redis
>Reporter: Sarah Abbey
>Assignee: Darrel Schneider
>Priority: Major
>
> Since we have one redundant copy of the data, and since we modify the data 
> using a function, I think we may have a data corruption issue with 
> non-idempotent operations. What can happen is that an operation like APPEND 
> can:
>  0) executor called on non-primary redis server, 
>  1) modify the primary (by sending a function exec to it), 
>  2) modify the secondary (by sending a geode delta to it), 
>  3) the primary server fails now (before the function executing on it 
> completes), 
>  4) the non-primary redis server sees the function fail and that it is marked 
> as HA so it retries it. This time it sends it the secondary, which is the new 
> primary, but the operation was actually done on the secondary so this retry 
> will end up doing the operation twice.
> This may be okay for certain ops (like SADD) that are idempotent (but even 
> they could cause extra key events in the future), but for ops like APPEND we 
> end up appending twice.
> This will only happen when a server executing a function dies and our 
> function service retries the function on another server because it is marked 
> HA. The easy way to fix this is to change our function to not be HA. This is 
> just a one-line change.
>  Note that our clients can already see exceptions/errors if the server they 
> are connected to dies. When that happens the operation they requested may 
> have happened, and if they have multiple geode redis servers running it may 
> have been stored and still in memory. So clients will need some logic to 
> decide if they should redo such an operation or not (because it is already 
> done).
> *Note:* By making the function non-HA, it should just give the client another 
> case in which they need to handle a server crash. It can now be for servers 
> they were not connected to but that were involved in performing the operation 
> they requested.





[jira] [Resolved] (GEODE-8332) refactor the redis "inRegion" classes

2020-07-06 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider resolved GEODE-8332.
-
Fix Version/s: 1.14.0
   Resolution: Fixed

> refactor the redis "inRegion" classes
> -
>
> Key: GEODE-8332
> URL: https://issues.apache.org/jira/browse/GEODE-8332
> Project: Geode
>  Issue Type: Improvement
>  Components: redis
>Reporter: Darrel Schneider
>Assignee: Darrel Schneider
>Priority: Major
> Fix For: 1.14.0
>
>
> Currently the CommandFunction creates an instance of an "inRegion" class each 
> time it executes an operation. Instead it could use a single instance of a 
> class that is immutable.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (GEODE-8332) refactor the redis "inRegion" classes

2020-07-06 Thread Darrel Schneider (Jira)
Darrel Schneider created GEODE-8332:
---

 Summary: refactor the redis "inRegion" classes
 Key: GEODE-8332
 URL: https://issues.apache.org/jira/browse/GEODE-8332
 Project: Geode
  Issue Type: Improvement
  Components: redis
Reporter: Darrel Schneider


Currently the CommandFunction creates an instance of an "inRegion" class each 
time it executes an operation. Instead it could use a single instance of a 
class that is immutable.
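The refactor can be sketched as a stateless, shared executor that takes all
per-operation state as parameters; the class and method names here are
illustrative, not Geode's:

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch of the refactor: one shared, immutable executor instead of a
// new "inRegion" instance per operation. All per-call state (region,
// key, arguments) arrives via parameters, so the single instance is
// safe to share across threads.
final class SAddExecutor {
    static final SAddExecutor INSTANCE = new SAddExecutor();
    private SAddExecutor() {}

    // Returns the number of newly added members, as SADD does.
    int execute(Map<String, Set<String>> region, String key, Set<String> toAdd) {
        Set<String> set = region.computeIfAbsent(key, k -> new HashSet<>());
        int before = set.size();
        set.addAll(toAdd);
        return set.size() - before;
    }
}
```

Because the instance holds no mutable fields, CommandFunction can reuse
SAddExecutor.INSTANCE for every operation instead of allocating per call.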





[jira] [Assigned] (GEODE-8332) refactor the redis "inRegion" classes

2020-07-06 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider reassigned GEODE-8332:
---

Assignee: Darrel Schneider

> refactor the redis "inRegion" classes
> -
>
> Key: GEODE-8332
> URL: https://issues.apache.org/jira/browse/GEODE-8332
> Project: Geode
>  Issue Type: Improvement
>  Components: redis
>Reporter: Darrel Schneider
>Assignee: Darrel Schneider
>Priority: Major
>
> Currently the CommandFunction creates an instance of an "inRegion" class each 
> time it executes an operation. Instead it could use a single instance of a 
> class that is immutable.




