[jira] [Resolved] (GEODE-9854) Orphaned .drf files causing memory leak

2022-04-04 Thread Jakov Varenina (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-9854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakov Varenina resolved GEODE-9854.
---
Fix Version/s: 1.15.0
   Resolution: Fixed

> Orphaned .drf files causing memory leak
> ---
>
> Key: GEODE-9854
> URL: https://issues.apache.org/jira/browse/GEODE-9854
> Project: Geode
>  Issue Type: Bug
>Reporter: Jakov Varenina
>Assignee: Jakov Varenina
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
> Attachments: screenshot-1.png, screenshot-2.png, server1.log
>
>
> Issue:
> OpLog files are compacted, but the .drf file is left behind because it 
> contains deletes of entries in previous .crfs. The .crf file is deleted, but 
> the orphaned .drf is not deleted until all
> previous .crf files (.crfs with smaller ids) are deleted.
> The problem is that the compacted Oplog object representing an orphaned .drf 
> file holds a structure in memory (Oplog.regionMap) that contains information 
> that is no longer useful
> after the compaction, and it takes up a certain amount of memory. Besides, 
> there is a race condition in the code that creates .krf files which, 
> depending on the execution order,
> could make the problem more severe (it can leave the pendingKrfTags structure 
> on the regionMap, and this can take up a significant amount of memory). This
> pendingKrfTags HashMap is actually empty, but it still consumes memory 
> because it was used previously and the size of the HashMap was not reduced 
> after it was cleared.
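> As a minimal sketch of that last point (plain JDK code with illustrative 
> names, not Geode internals): java.util.HashMap.clear() empties the map but 
> keeps the internal table at its grown capacity, so an "empty" map can still 
> pin memory until the map object itself is dropped.
> {code:java}
> import java.util.HashMap;
> import java.util.Map;
> 
> public class ClearedMapDemo {
>   public static void main(String[] args) {
>     Map<String, byte[]> pendingKrfTags = new HashMap<>();
>     for (int i = 0; i < 1_000_000; i++) {
>       pendingKrfTags.put("key" + i, new byte[0]); // grows the internal table
>     }
>     pendingKrfTags.clear();           // size() == 0, but capacity unchanged
>     pendingKrfTags = new HashMap<>(); // replacing the map frees the table
>   }
> }
> {code}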
> This race condition usually happens when a new Oplog is rolled out and the 
> previous Oplog is immediately marked as eligible for compaction. Compaction 
> and .krf creation then start at
> around the same time, and the compactor cancels the creation of the .krf if 
> it executes first. The pendingKrfTags structure is normally cleared when the 
> .krf file is created, but since compaction canceled the creation of the .krf, 
> the pendingKrfTags structure remains in memory until the Oplog representing 
> the orphaned .drf file is deleted.
> The logs below show that the .krf is in fact never created for the orphaned 
> .drf Oplog object that has memory allocated in pendingKrfTags:
> {code:java}
> server1.log:1956:[info 2021/11/25 21:52:26.866 CET server1  tid=0x34] Created oplog#129 drf for disk store store1.
> server1.log:1958:[info 2021/11/25 21:52:26.867 CET server1  tid=0x34] Created oplog#129 crf for disk store store1.
> server1.log:1974:[info 2021/11/25 21:52:39.490 CET server1  store1 for oplog oplog#129> tid=0x5c] OplogCompactor for store1 compaction oplog id(s): oplog#129
> server1.log:1980:[info 2021/11/25 21:52:39.532 CET server1  store1 for oplog oplog#129> tid=0x5c] compaction did 3685 creates and updates in 41 ms
> server1.log:1982:[info 2021/11/25 21:52:39.532 CET server1  Task4> tid=0x5d] Deleted oplog#129 crf for disk store store1.
> {code}
> !screenshot-1.png|width=1123,height=268!
> Below you can see the log and heap dump of an orphaned .drf Oplog that does 
> not have pendingKrfTags allocated in memory. This is because pendingKrfTags 
> is cleared when the .krf is created, as the following logs show.
> {code:java}
> server1.log:1976:[info 2021/11/25 21:52:39.491 CET server1  tid=0x34] Created oplog#130 drf for disk store store1.
> server1.log:1978:[info 2021/11/25 21:52:39.493 CET server1  tid=0x34] Created oplog#130 crf for disk store store1.
> server1.log:1998:[info 2021/11/25 21:52:41.131 CET server1  OplogCompactor> tid=0x5c] Created oplog#130 krf for disk store store1.
> server1.log:2000:[info 2021/11/25 21:52:41.893 CET server1  store1 for oplog oplog#130> tid=0x5c] OplogCompactor for store1 compaction oplog id(s): oplog#130
> server1.log:2002:[info 2021/11/25 21:52:41.958 CET server1  store1 for oplog oplog#130> tid=0x5c] compaction did 9918 creates and updates in 64 ms
> server1.log:2004:[info 2021/11/25 21:52:41.958 CET server1  Task4> tid=0x5d] Deleted oplog#130 crf for disk store store1.
> server1.log:2006:[info 2021/11/25 21:52:41.958 CET server1  Task4> tid=0x5d] Deleted oplog#130 krf for disk store store1.
> {code}
> !screenshot-2.png|width=1123,height=268!



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (GEODE-10215) WAN replication not working after re-creating the partitioned region

2022-04-04 Thread Alexander Murmann (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Murmann updated GEODE-10215:
--
Labels: needsTriage  (was: )

> WAN replication not working after re-creating the partitioned region
> 
>
> Key: GEODE-10215
> URL: https://issues.apache.org/jira/browse/GEODE-10215
> Project: Geode
>  Issue Type: Bug
>Reporter: Jakov Varenina
>Priority: Major
>  Labels: needsTriage
>
> Steps to reproduce the issue:
> Start multi-site with at least 3 servers on each site. If there are fewer 
> than three servers, the issue will not reproduce.
> Configuration site 1:
> create disk-store --name=queue_disk_store --dir=ds2
> create gateway-sender --id="remote_site_2" --parallel="true" 
> --remote-distributed-system-id="1"  --enable-persistence=true 
> --disk-store-name=queue_disk_store
> create disk-store --name=data_disk_store --dir=ds1
> create region --name=example-region --type=PARTITION_PERSISTENT 
> --gateway-sender-id="remote_site_2" --disk-store=data_disk_store 
> --total-num-buckets=1103 --redundant-copies=1 --enable-synchronous-disk=false
> # Configure the remote site 2 with the region and the gateway-receiver
> # Run some traffic so that all buckets are created and data is replicated to 
> the other site
> alter region --name=/example-region --gateway-sender-id=""
> destroy region --name=/example-region
> create region --name=example-region --type=PARTITION_PERSISTENT 
> --gateway-sender-id="remote_site_2" --disk-store=data_disk_store 
> --total-num-buckets=1103 --redundant-copies=1 --enable-synchronous-disk=false
> # run traffic to see that some data is not replicated to the remote site 2



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (GEODE-10215) WAN replication not working after re-creating the partitioned region

2022-04-04 Thread Jakov Varenina (Jira)
Jakov Varenina created GEODE-10215:
--

 Summary: WAN replication not working after re-creating the 
partitioned region
 Key: GEODE-10215
 URL: https://issues.apache.org/jira/browse/GEODE-10215
 Project: Geode
  Issue Type: Bug
Reporter: Jakov Varenina


Steps to reproduce the issue:

Start multi-site with at least 3 servers on each site. If there are fewer 
than three servers, the issue will not reproduce.

Configuration site 1:

create disk-store --name=queue_disk_store --dir=ds2

create gateway-sender --id="remote_site_2" --parallel="true" 
--remote-distributed-system-id="1"  --enable-persistence=true 
--disk-store-name=queue_disk_store

create disk-store --name=data_disk_store --dir=ds1

create region --name=example-region --type=PARTITION_PERSISTENT 
--gateway-sender-id="remote_site_2" --disk-store=data_disk_store 
--total-num-buckets=1103 --redundant-copies=1 --enable-synchronous-disk=false

# Configure the remote site 2 with the region and the gateway-receiver

# Run some traffic so that all buckets are created and data is replicated to 
the other site

alter region --name=/example-region --gateway-sender-id=""

destroy region --name=/example-region

create region --name=example-region --type=PARTITION_PERSISTENT 
--gateway-sender-id="remote_site_2" --disk-store=data_disk_store 
--total-num-buckets=1103 --redundant-copies=1 --enable-synchronous-disk=false

# run traffic to see that some data is not replicated to the remote site 2
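
For the "run some traffic" steps, a minimal client-side sketch (locator 
endpoint and key count assumed, not part of the original report) that puts 
enough entries to create all buckets:

{code:java}
import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.client.ClientRegionShortcut;

public class TrafficRunner {
  public static void main(String[] args) {
    // Connect to site 1 through a locator (host/port assumed).
    ClientCache cache = new ClientCacheFactory()
        .addPoolLocator("localhost", 10334)
        .create();
    Region<String, String> region = cache
        .<String, String>createClientRegionFactory(ClientRegionShortcut.PROXY)
        .create("example-region");
    // Spread keys widely so all 1103 buckets get created.
    for (int i = 0; i < 10_000; i++) {
      region.put("key-" + i, "value-" + i);
    }
    cache.close();
  }
}
{code}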



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (GEODE-10215) WAN replication not working after re-creating the partitioned region

2022-04-04 Thread Jakov Varenina (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakov Varenina reassigned GEODE-10215:
--

Assignee: Jakov Varenina

> WAN replication not working after re-creating the partitioned region
> 
>
> Key: GEODE-10215
> URL: https://issues.apache.org/jira/browse/GEODE-10215
> Project: Geode
>  Issue Type: Bug
>Reporter: Jakov Varenina
>Assignee: Jakov Varenina
>Priority: Major
>  Labels: needsTriage
>
> Steps to reproduce the issue:
> Start multi-site with at least 3 servers on each site. If there are fewer 
> than three servers, the issue will not reproduce.
> Configuration site 1:
> create disk-store --name=queue_disk_store --dir=ds2
> create gateway-sender --id="remote_site_2" --parallel="true" 
> --remote-distributed-system-id="1"  --enable-persistence=true 
> --disk-store-name=queue_disk_store
> create disk-store --name=data_disk_store --dir=ds1
> create region --name=example-region --type=PARTITION_PERSISTENT 
> --gateway-sender-id="remote_site_2" --disk-store=data_disk_store 
> --total-num-buckets=1103 --redundant-copies=1 --enable-synchronous-disk=false
> # Configure the remote site 2 with the region and the gateway-receiver
> # Run some traffic so that all buckets are created and data is replicated to 
> the other site
> alter region --name=/example-region --gateway-sender-id=""
> destroy region --name=/example-region
> create region --name=example-region --type=PARTITION_PERSISTENT 
> --gateway-sender-id="remote_site_2" --disk-store=data_disk_store 
> --total-num-buckets=1103 --redundant-copies=1 --enable-synchronous-disk=false
> # run traffic to see that some data is not replicated to the remote site 2



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (GEODE-10215) WAN replication not working after re-creating the partitioned region

2022-04-04 Thread Jakov Varenina (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakov Varenina updated GEODE-10215:
---
Description: 
Steps to reproduce the issue:

Start multi-site with at least 3 servers on each site. If there are fewer 
than three servers, the issue will not reproduce.

Configuration site 1:

 
{code:java}
create disk-store --name=queue_disk_store --dir=ds2
create gateway-sender --id="remote_site_2" --parallel="true" 
--remote-distributed-system-id="1" --enable-persistence=true 
--disk-store-name=queue_disk_store
create disk-store --name=data_disk_store --dir=ds1
create region --name=example-region --type=PARTITION_PERSISTENT 
--gateway-sender-id="remote_site_2" --disk-store=data_disk_store 
--total-num-buckets=1103 --redundant-copies=1 --enable-synchronous-disk=false

#Configure the remote site 2 with the region and the gateway-receiver  
#Run some traffic so that all buckets are created and data is replicated to the 
other site
alter region --name=/example-region --gateway-sender-id=""
destroy region --name=/example-region
create region --name=example-region --type=PARTITION_PERSISTENT 
--gateway-sender-id="remote_site_2" --disk-store=data_disk_store 
--total-num-buckets=1103 --redundant-copies=1 --enable-synchronous-disk=false

#run traffic to see that some data is not replicated to the remote site 2 {code}

  was:
Steps to reproduce the issue:

Start multi-site with at least 3 servers on each site. If there are fewer 
than three servers, the issue will not reproduce.

Configuration site 1:

create disk-store --name=queue_disk_store --dir=ds2

create gateway-sender --id="remote_site_2" --parallel="true" 
--remote-distributed-system-id="1"  --enable-persistence=true 
--disk-store-name=queue_disk_store

create disk-store --name=data_disk_store --dir=ds1

create region --name=example-region --type=PARTITION_PERSISTENT 
--gateway-sender-id="remote_site_2" --disk-store=data_disk_store 
--total-num-buckets=1103 --redundant-copies=1 --enable-synchronous-disk=false

# Configure the remote site 2 with the region and the gateway-receiver

# Run some traffic so that all buckets are created and data is replicated to 
the other site

alter region --name=/example-region --gateway-sender-id=""

destroy region --name=/example-region

create region --name=example-region --type=PARTITION_PERSISTENT 
--gateway-sender-id="remote_site_2" --disk-store=data_disk_store 
--total-num-buckets=1103 --redundant-copies=1 --enable-synchronous-disk=false

# run traffic to see that some data is not replicated to the remote site 2


> WAN replication not working after re-creating the partitioned region
> 
>
> Key: GEODE-10215
> URL: https://issues.apache.org/jira/browse/GEODE-10215
> Project: Geode
>  Issue Type: Bug
>Reporter: Jakov Varenina
>Assignee: Jakov Varenina
>Priority: Major
>  Labels: needsTriage
>
> Steps to reproduce the issue:
> Start multi-site with at least 3 servers on each site. If there are fewer 
> than three servers, the issue will not reproduce.
> Configuration site 1:
>  
> {code:java}
> create disk-store --name=queue_disk_store --dir=ds2
> create gateway-sender --id="remote_site_2" --parallel="true" 
> --remote-distributed-system-id="1" --enable-persistence=true 
> --disk-store-name=queue_disk_store
> create disk-store --name=data_disk_store --dir=ds1
> create region --name=example-region --type=PARTITION_PERSISTENT 
> --gateway-sender-id="remote_site_2" --disk-store=data_disk_store 
> --total-num-buckets=1103 --redundant-copies=1 --enable-synchronous-disk=false
> #Configure the remote site 2 with the region and the gateway-receiver  
> #Run some traffic so that all buckets are created and data is replicated to 
> the other site
> alter region --name=/example-region --gateway-sender-id=""
> destroy region --name=/example-region
> create region --name=example-region --type=PARTITION_PERSISTENT 
> --gateway-sender-id="remote_site_2" --disk-store=data_disk_store 
> --total-num-buckets=1103 --redundant-copies=1 --enable-synchronous-disk=false
> #run traffic to see that some data is not replicated to the remote site 2 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (GEODE-10215) WAN replication not working after re-creating the partitioned region

2022-04-04 Thread Jakov Varenina (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakov Varenina updated GEODE-10215:
---
Description: 
Steps to reproduce the issue:

Start multi-site with at least 3 servers on each site. If there are fewer 
than three servers, the issue will not reproduce.

Configuration site 1:

 
{code:java}
create disk-store --name=queue_disk_store --dir=ds2
create gateway-sender --id="remote_site_2" --parallel="true" 
--remote-distributed-system-id="1" --enable-persistence=true 
--disk-store-name=queue_disk_store
create disk-store --name=data_disk_store --dir=ds1
create region --name=example-region --type=PARTITION_PERSISTENT 
--gateway-sender-id="remote_site_2" --disk-store=data_disk_store 
--total-num-buckets=1103 --redundant-copies=1 --enable-synchronous-disk=false

#Configure the remote site 2 with the region and the gateway-receiver  
#Run some traffic so that all buckets are created and data is replicated to the 
other site
alter region --name=/example-region --gateway-sender-id=""
destroy region --name=/example-region
create region --name=example-region --type=PARTITION_PERSISTENT 
--gateway-sender-id="remote_site_2" --disk-store=data_disk_store 
--total-num-buckets=1103 --redundant-copies=1 --enable-synchronous-disk=false

#run traffic to see that some data is not replicated to the remote site 2 {code}

  was:
Steps to reproduce the issue:

Start multi-site with at least 3 servers on each site. If there are fewer 
than three servers, the issue will not reproduce.

Configuration site 1:

 
{code:java}
create disk-store --name=queue_disk_store --dir=ds2
create gateway-sender --id="remote_site_2" --parallel="true" 
--remote-distributed-system-id="1" --enable-persistence=true 
--disk-store-name=queue_disk_store
create disk-store --name=data_disk_store --dir=ds1
create region --name=example-region --type=PARTITION_PERSISTENT 
--gateway-sender-id="remote_site_2" --disk-store=data_disk_store 
--total-num-buckets=1103 --redundant-copies=1 --enable-synchronous-disk=false

#Configure the remote site 2 with the region and the gateway-receiver  
#Run some traffic so that all buckets are created and data is replicated to the 
other site
alter region --name=/example-region --gateway-sender-id=""
destroy region --name=/example-region
create region --name=example-region --type=PARTITION_PERSISTENT 
--gateway-sender-id="remote_site_2" --disk-store=data_disk_store 
--total-num-buckets=1103 --redundant-copies=1 --enable-synchronous-disk=false

#run traffic to see that some data is not replicated to the remote site 2 {code}


> WAN replication not working after re-creating the partitioned region
> 
>
> Key: GEODE-10215
> URL: https://issues.apache.org/jira/browse/GEODE-10215
> Project: Geode
>  Issue Type: Bug
>Reporter: Jakov Varenina
>Assignee: Jakov Varenina
>Priority: Major
>  Labels: needsTriage
>
> Steps to reproduce the issue:
> Start multi-site with at least 3 servers on each site. If there are fewer 
> than three servers, the issue will not reproduce.
> Configuration site 1:
>  
> {code:java}
> create disk-store --name=queue_disk_store --dir=ds2
> create gateway-sender --id="remote_site_2" --parallel="true" 
> --remote-distributed-system-id="1" --enable-persistence=true 
> --disk-store-name=queue_disk_store
> create disk-store --name=data_disk_store --dir=ds1
> create region --name=example-region --type=PARTITION_PERSISTENT 
> --gateway-sender-id="remote_site_2" --disk-store=data_disk_store 
> --total-num-buckets=1103 --redundant-copies=1 --enable-synchronous-disk=false
> #Configure the remote site 2 with the region and the gateway-receiver  
> #Run some traffic so that all buckets are created and data is replicated to 
> the other site
> alter region --name=/example-region --gateway-sender-id=""
> destroy region --name=/example-region
> create region --name=example-region --type=PARTITION_PERSISTENT 
> --gateway-sender-id="remote_site_2" --disk-store=data_disk_store 
> --total-num-buckets=1103 --redundant-copies=1 --enable-synchronous-disk=false
> #run traffic to see that some data is not replicated to the remote site 2 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (GEODE-10215) WAN replication not working after re-creating the partitioned region

2022-04-04 Thread Jakov Varenina (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakov Varenina updated GEODE-10215:
---
Description: 
Steps to reproduce the issue:

Start multi-site with at least 3 servers on each site. If there are fewer 
than three servers, the issue will not reproduce.

Configuration site 1:
{code:java}
create disk-store --name=queue_disk_store --dir=ds2
create gateway-sender --id="remote_site_2" --parallel="true" 
--remote-distributed-system-id="1" --enable-persistence=true 
--disk-store-name=queue_disk_store
create disk-store --name=data_disk_store --dir=ds1
create region --name=example-region --type=PARTITION_PERSISTENT 
--gateway-sender-id="remote_site_2" --disk-store=data_disk_store 
--total-num-buckets=1103 --redundant-copies=1 --enable-synchronous-disk=false

#Configure the remote site 2 with the region and the gateway-receiver  
#Run some traffic so that all buckets are created and data is replicated to the 
other site

alter region --name=/example-region --gateway-sender-id=""
destroy region --name=/example-region
create region --name=example-region --type=PARTITION_PERSISTENT 
--gateway-sender-id="remote_site_2" --disk-store=data_disk_store 
--total-num-buckets=1103 --redundant-copies=1 --enable-synchronous-disk=false

#run traffic to see that some data is not replicated to the remote site 2 {code}

  was:
Steps to reproduce the issue:

Start multi-site with at least 3 servers on each site. If there are fewer 
than three servers, the issue will not reproduce.

Configuration site 1:

 
{code:java}
create disk-store --name=queue_disk_store --dir=ds2
create gateway-sender --id="remote_site_2" --parallel="true" 
--remote-distributed-system-id="1" --enable-persistence=true 
--disk-store-name=queue_disk_store
create disk-store --name=data_disk_store --dir=ds1
create region --name=example-region --type=PARTITION_PERSISTENT 
--gateway-sender-id="remote_site_2" --disk-store=data_disk_store 
--total-num-buckets=1103 --redundant-copies=1 --enable-synchronous-disk=false

#Configure the remote site 2 with the region and the gateway-receiver  
#Run some traffic so that all buckets are created and data is replicated to the 
other site

alter region --name=/example-region --gateway-sender-id=""
destroy region --name=/example-region
create region --name=example-region --type=PARTITION_PERSISTENT 
--gateway-sender-id="remote_site_2" --disk-store=data_disk_store 
--total-num-buckets=1103 --redundant-copies=1 --enable-synchronous-disk=false

#run traffic to see that some data is not replicated to the remote site 2 {code}


> WAN replication not working after re-creating the partitioned region
> 
>
> Key: GEODE-10215
> URL: https://issues.apache.org/jira/browse/GEODE-10215
> Project: Geode
>  Issue Type: Bug
>Reporter: Jakov Varenina
>Assignee: Jakov Varenina
>Priority: Major
>  Labels: needsTriage
>
> Steps to reproduce the issue:
> Start multi-site with at least 3 servers on each site. If there are fewer 
> than three servers, the issue will not reproduce.
> Configuration site 1:
> {code:java}
> create disk-store --name=queue_disk_store --dir=ds2
> create gateway-sender --id="remote_site_2" --parallel="true" 
> --remote-distributed-system-id="1" --enable-persistence=true 
> --disk-store-name=queue_disk_store
> create disk-store --name=data_disk_store --dir=ds1
> create region --name=example-region --type=PARTITION_PERSISTENT 
> --gateway-sender-id="remote_site_2" --disk-store=data_disk_store 
> --total-num-buckets=1103 --redundant-copies=1 --enable-synchronous-disk=false
> #Configure the remote site 2 with the region and the gateway-receiver  
> #Run some traffic so that all buckets are created and data is replicated to 
> the other site
> alter region --name=/example-region --gateway-sender-id=""
> destroy region --name=/example-region
> create region --name=example-region --type=PARTITION_PERSISTENT 
> --gateway-sender-id="remote_site_2" --disk-store=data_disk_store 
> --total-num-buckets=1103 --redundant-copies=1 --enable-synchronous-disk=false
> #run traffic to see that some data is not replicated to the remote site 2 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (GEODE-10215) WAN replication not working after re-creating the partitioned region

2022-04-04 Thread Jakov Varenina (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakov Varenina updated GEODE-10215:
---
Description: 
Steps to reproduce the issue:

Start multi-site with at least 3 servers on each site. If there are fewer 
than three servers, the issue will not reproduce.

Configuration site 1:

 
{code:java}
create disk-store --name=queue_disk_store --dir=ds2
create gateway-sender --id="remote_site_2" --parallel="true" 
--remote-distributed-system-id="1" --enable-persistence=true 
--disk-store-name=queue_disk_store
create disk-store --name=data_disk_store --dir=ds1
create region --name=example-region --type=PARTITION_PERSISTENT 
--gateway-sender-id="remote_site_2" --disk-store=data_disk_store 
--total-num-buckets=1103 --redundant-copies=1 --enable-synchronous-disk=false

#Configure the remote site 2 with the region and the gateway-receiver  
#Run some traffic so that all buckets are created and data is replicated to the 
other site

alter region --name=/example-region --gateway-sender-id=""
destroy region --name=/example-region
create region --name=example-region --type=PARTITION_PERSISTENT 
--gateway-sender-id="remote_site_2" --disk-store=data_disk_store 
--total-num-buckets=1103 --redundant-copies=1 --enable-synchronous-disk=false

#run traffic to see that some data is not replicated to the remote site 2 {code}

  was:
Steps to reproduce the issue:

Start multi-site with at least 3 servers on each site. If there are fewer 
than three servers, the issue will not reproduce.

Configuration site 1:

 
{code:java}
create disk-store --name=queue_disk_store --dir=ds2
create gateway-sender --id="remote_site_2" --parallel="true" 
--remote-distributed-system-id="1" --enable-persistence=true 
--disk-store-name=queue_disk_store
create disk-store --name=data_disk_store --dir=ds1
create region --name=example-region --type=PARTITION_PERSISTENT 
--gateway-sender-id="remote_site_2" --disk-store=data_disk_store 
--total-num-buckets=1103 --redundant-copies=1 --enable-synchronous-disk=false

#Configure the remote site 2 with the region and the gateway-receiver  
#Run some traffic so that all buckets are created and data is replicated to the 
other site
alter region --name=/example-region --gateway-sender-id=""
destroy region --name=/example-region
create region --name=example-region --type=PARTITION_PERSISTENT 
--gateway-sender-id="remote_site_2" --disk-store=data_disk_store 
--total-num-buckets=1103 --redundant-copies=1 --enable-synchronous-disk=false

#run traffic to see that some data is not replicated to the remote site 2 {code}


> WAN replication not working after re-creating the partitioned region
> 
>
> Key: GEODE-10215
> URL: https://issues.apache.org/jira/browse/GEODE-10215
> Project: Geode
>  Issue Type: Bug
>Reporter: Jakov Varenina
>Assignee: Jakov Varenina
>Priority: Major
>  Labels: needsTriage
>
> Steps to reproduce the issue:
> Start multi-site with at least 3 servers on each site. If there are fewer 
> than three servers, the issue will not reproduce.
> Configuration site 1:
>  
> {code:java}
> create disk-store --name=queue_disk_store --dir=ds2
> create gateway-sender --id="remote_site_2" --parallel="true" 
> --remote-distributed-system-id="1" --enable-persistence=true 
> --disk-store-name=queue_disk_store
> create disk-store --name=data_disk_store --dir=ds1
> create region --name=example-region --type=PARTITION_PERSISTENT 
> --gateway-sender-id="remote_site_2" --disk-store=data_disk_store 
> --total-num-buckets=1103 --redundant-copies=1 --enable-synchronous-disk=false
> #Configure the remote site 2 with the region and the gateway-receiver  
> #Run some traffic so that all buckets are created and data is replicated to 
> the other site
> alter region --name=/example-region --gateway-sender-id=""
> destroy region --name=/example-region
> create region --name=example-region --type=PARTITION_PERSISTENT 
> --gateway-sender-id="remote_site_2" --disk-store=data_disk_store 
> --total-num-buckets=1103 --redundant-copies=1 --enable-synchronous-disk=false
> #run traffic to see that some data is not replicated to the remote site 2 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (GEODE-10213) Remove kirklund from codeowner areas of less knowledge

2022-04-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated GEODE-10213:
---
Labels: pull-request-available  (was: )

> Remove kirklund from codeowner areas of less knowledge
> --
>
> Key: GEODE-10213
> URL: https://issues.apache.org/jira/browse/GEODE-10213
> Project: Geode
>  Issue Type: Wish
>Reporter: Kirk Lund
>Priority: Major
>  Labels: pull-request-available
>
> Remove kirklund from codeowner areas of less knowledge



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (GEODE-10213) Remove kirklund from codeowner areas of less knowledge

2022-04-04 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-10213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17516941#comment-17516941
 ] 

ASF subversion and git services commented on GEODE-10213:
-

Commit ac98e81a81c72a0f34f263e7aa42ff0753887bb2 in geode's branch 
refs/heads/develop from Kirk Lund
[ https://gitbox.apache.org/repos/asf?p=geode.git;h=ac98e81a81 ]

GEODE-10213: Remove kirklund from several codeowner areas (#7547)



> Remove kirklund from codeowner areas of less knowledge
> --
>
> Key: GEODE-10213
> URL: https://issues.apache.org/jira/browse/GEODE-10213
> Project: Geode
>  Issue Type: Wish
>Reporter: Kirk Lund
>Priority: Major
>  Labels: pull-request-available
>
> Remove kirklund from codeowner areas of less knowledge



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (GEODE-10127) Incorrect locator hostname used in remote locator connections

2022-04-04 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-10127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17516976#comment-17516976
 ] 

ASF subversion and git services commented on GEODE-10127:
-

Commit e5dd4cff5486652f90caffeacee5eabe6cfbcfe7 in geode's branch 
refs/heads/develop from Jacob Barrett
[ https://gitbox.apache.org/repos/asf?p=geode.git;h=e5dd4cff54 ]

GEODE-10127: Improve consistency of marshal/unmarshal calls. (#7463)

* Refactor to common constructor.
* Add unmarshal method in place of specialized constructor.
* Replace toString with specialized marshal for using hostname-for-clients
  or bind-address.

> Incorrect locator hostname used in remote locator connections
> -
>
> Key: GEODE-10127
> URL: https://issues.apache.org/jira/browse/GEODE-10127
> Project: Geode
>  Issue Type: Bug
>  Components: wan
>Affects Versions: 1.15.0
>Reporter: Jacob Barrett
>Assignee: Jacob Barrett
>Priority: Major
>  Labels: blocks-1.15.0, pull-request-available
> Fix For: 1.15.0
>
>
> When locators in distributed system (DS) B ask for locators in DS A, they are 
> sent the local host name and IP address of the locators and not the values of 
> the {{hostname-for-clients}} or {{bind-address}} properties.
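> As a hedged illustration (host names assumed, not from the ticket), the 
> advertised name can be set when starting a locator; with this fix, the 
> remote site should receive that value instead of the local host name:
> {code:java}
> start locator --name=locator1 --port=10334 --bind-address=10.0.0.5 --hostname-for-clients=locator1.site-a.example.com
> {code}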



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Resolved] (GEODE-10200) [CI Failure] : SocketCreatorUpgradeTest > upgradingToNewGeodeAndNewJavaWithProtocolsAny[1.14.0] FAILED

2022-04-04 Thread Jacob Barrett (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Barrett resolved GEODE-10200.
---
Resolution: Cannot Reproduce

Ran 100 times without reproduction.

> [CI Failure] :  SocketCreatorUpgradeTest > 
> upgradingToNewGeodeAndNewJavaWithProtocolsAny[1.14.0] FAILED
> ---
>
> Key: GEODE-10200
> URL: https://issues.apache.org/jira/browse/GEODE-10200
> Project: Geode
>  Issue Type: Bug
>  Components: security
>Affects Versions: 1.15.0
>Reporter: Nabarun Nag
>Assignee: Jacob Barrett
>Priority: Major
>  Labels: needsTriage
>
>  
> {code:java}
> SocketCreatorUpgradeTest > 
> upgradingToNewGeodeAndNewJavaWithProtocolsAny[1.14.0] FAILED
> org.opentest4j.AssertionFailedError: [Exit value from process started by 
> [1fa9fcaebd8c018e: gfsh -e start locator --connect=false 
> --http-service-port=0 --name=locator2 
> --bind-address=heavy-lifter-5c2a1d0b-5930-5788-97d6-3ca24d2f026a.c.apachegeode-ci.internal
>  --port=21172 --J=-Dgemfire.jmx-manager-port=21173 
> --security-properties-file=/tmp/junit1876902159761664930/junit7901411307157053608.tmp
>  
> --locators=heavy-lifter-5c2a1d0b-5930-5788-97d6-3ca24d2f026a.c.apachegeode-ci.internal[21170]]]
>  
> expected: 0
>  but was: 1
> at 
> jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at 
> org.apache.geode.test.junit.rules.gfsh.GfshExecution.awaitTermination(GfshExecution.java:103)
> at 
> org.apache.geode.test.junit.rules.gfsh.GfshRule.execute(GfshRule.java:154)
> at 
> org.apache.geode.test.junit.rules.gfsh.GfshRule.execute(GfshRule.java:133)
> at 
> org.apache.geode.internal.net.SocketCreatorUpgradeTest.upgradingToNewGeodeAndNewJavaWithProtocolsAny(SocketCreatorUpgradeTest.java:450)
>  {code}
> In the logs we can see a lot of "SSLv2Hello is not enabled" errors. 
> {code:java}
> [warn 2022/03/30 11:49:45.067 UTC locator2  tid=0x38] SSL handshake exception
> javax.net.ssl.SSLHandshakeException: SSLv2Hello is not enabled
>   at 
> sun.security.ssl.SSLEngineInputRecord.handleUnknownRecord(SSLEngineInputRecord.java:366)
>   at 
> sun.security.ssl.SSLEngineInputRecord.decode(SSLEngineInputRecord.java:193)
>   at 
> sun.security.ssl.SSLEngineInputRecord.decode(SSLEngineInputRecord.java:160)
>   at sun.security.ssl.SSLTransport.decode(SSLTransport.java:108)
>   at sun.security.ssl.SSLEngineImpl.decode(SSLEngineImpl.java:575)
>   at sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:531)
>   at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:398)
>   at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:377)
>   at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:626)
>   at 
> org.apache.geode.internal.net.NioSslEngine.handshake(NioSslEngine.java:147)
>   at 
> org.apache.geode.internal.net.SocketCreator.handshakeSSLSocketChannel(SocketCreator.java:436)
>   at 
> org.apache.geode.internal.tcp.Connection.createIoFilter(Connection.java:1775)
>   at 
> org.apache.geode.internal.tcp.Connection.readMessages(Connection.java:1563)
>   at org.apache.geode.internal.tcp.Connection.run(Connection.java:1500)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748) {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Resolved] (GEODE-10127) Incorrect locator hostname used in remote locator connections

2022-04-04 Thread Jacob Barrett (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Barrett resolved GEODE-10127.
---
Resolution: Fixed

> Incorrect locator hostname used in remote locator connections
> -
>
> Key: GEODE-10127
> URL: https://issues.apache.org/jira/browse/GEODE-10127
> Project: Geode
>  Issue Type: Bug
>  Components: wan
>Affects Versions: 1.15.0
>Reporter: Jacob Barrett
>Assignee: Jacob Barrett
>Priority: Major
>  Labels: blocks-1.15.0, pull-request-available
> Fix For: 1.15.0
>
>
> When locators in distributed system (DS) B ask for locators in DS A, they are 
> sent the local host name and IP address of the locators and not the values of 
> the {{hostname-for-clients}} or {{bind-address}} properties.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (GEODE-9437) Redis session dunit tests are flaky

2022-04-04 Thread Geode Integration (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-9437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17517061#comment-17517061
 ] 

Geode Integration commented on GEODE-9437:
--

Seen in [distributed-test-openjdk8 
#1754|https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-mass-test-run/jobs/distributed-test-openjdk8/builds/1754]
 ... see [test 
results|http://files.apachegeode-ci.info/builds/apache-develop-mass-test-run/1.15.0-build.1060/test-results/distributedTest/1648908376/]
 or download 
[artifacts|http://files.apachegeode-ci.info/builds/apache-develop-mass-test-run/1.15.0-build.1060/test-artifacts/1648908376/distributedtestfiles-openjdk8-1.15.0-build.1060.tgz].

> Redis session dunit tests are flaky
> ---
>
> Key: GEODE-9437
> URL: https://issues.apache.org/jira/browse/GEODE-9437
> Project: Geode
>  Issue Type: Test
>  Components: redis
>Reporter: Jens Deppe
>Priority: Major
>
> The Redis session-related DUnit tests will sometimes fail with errors such as:
> {noformat}
> org.apache.geode.redis.session.RedisSessionDUnitTest > should_storeSession 
> FAILED
> 
> org.springframework.web.client.HttpServerErrorException$InternalServerError: 
> 500 Server Error: [no body]
> at 
> org.springframework.web.client.HttpServerErrorException.create(HttpServerErrorException.java:100)
> at 
> org.springframework.web.client.DefaultResponseErrorHandler.handleError(DefaultResponseErrorHandler.java:188)
> at 
> org.springframework.web.client.DefaultResponseErrorHandler.handleError(DefaultResponseErrorHandler.java:125)
> at 
> org.springframework.web.client.ResponseErrorHandler.handleError(ResponseErrorHandler.java:63)
> at 
> org.springframework.web.client.RestTemplate.handleResponse(RestTemplate.java:819)
> at 
> org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:777)
> at 
> org.springframework.web.client.RestTemplate.execute(RestTemplate.java:711)
> at 
> org.springframework.web.client.RestTemplate.postForEntity(RestTemplate.java:468)
> at 
> org.apache.geode.redis.session.SessionDUnitTest.createNewSessionWithNote0(SessionDUnitTest.java:207)
> at 
> org.apache.geode.redis.session.SessionDUnitTest.lambda$createNewSessionWithNote$1(SessionDUnitTest.java:201)
> at 
> io.github.resilience4j.retry.Retry.lambda$decorateCallable$5(Retry.java:306)
> at 
> org.apache.geode.redis.session.SessionDUnitTest.createNewSessionWithNote(SessionDUnitTest.java:201)
> at 
> org.apache.geode.redis.session.RedisSessionDUnitTest.should_storeSession(RedisSessionDUnitTest.java:88)
> org.apache.geode.redis.session.RedisSessionDUnitTest > 
> should_propagateSession_toOtherServers FAILED
> 
> org.springframework.web.client.HttpServerErrorException$InternalServerError: 
> 500 Server Error: 
> [{"timestamp":"2021-07-19T15:38:49.855+00:00","status":500,"error":"Internal 
> Server Error","path":"/addSessionNote"}]
> at 
> org.springframework.web.client.HttpServerErrorException.create(HttpServerErrorException.java:100)
> at 
> org.springframework.web.client.DefaultResponseErrorHandler.handleError(DefaultResponseErrorHandler.java:188)
> at 
> org.springframework.web.client.DefaultResponseErrorHandler.handleError(DefaultResponseErrorHandler.java:125)
> at 
> org.springframework.web.client.ResponseErrorHandler.handleError(ResponseErrorHandler.java:63)
> at 
> org.springframework.web.client.RestTemplate.handleResponse(RestTemplate.java:819)
> at 
> org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:777)
> at 
> org.springframework.web.client.RestTemplate.execute(RestTemplate.java:711)
> at 
> org.springframework.web.client.RestTemplate.postForEntity(RestTemplate.java:468)
> at 
> org.apache.geode.redis.session.SessionDUnitTest.createNewSessionWithNote0(SessionDUnitTest.java:207)
> at 
> org.apache.geode.redis.session.SessionDUnitTest.lambda$createNewSessionWithNote$1(SessionDUnitTest.java:201)
> at 
> io.github.resilience4j.retry.Retry.lambda$decorateCallable$5(Retry.java:306)
> at 
> org.apache.geode.redis.session.SessionDUnitTest.createNewSessionWithNote(SessionDUnitTest.java:201)
> at 
> org.apache.geode.redis.session.RedisSessionDUnitTest.should_propagateSession_toOtherServers(RedisSessionDUnitTest.java:97)
> {noformat}
> It's unclear exactly what is causing the problem; it seems to be related to 
> Lettuce behavior when servers stop/restart and Lettuce tries to resubmit 
> commands.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (GEODE-6489) CI Failures with testDistributedDeadlock

2022-04-04 Thread Geode Integration (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-6489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17517062#comment-17517062
 ] 

Geode Integration commented on GEODE-6489:
--

Seen in [distributed-test-openjdk8 
#1750|https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-mass-test-run/jobs/distributed-test-openjdk8/builds/1750]
 ... see [test 
results|http://files.apachegeode-ci.info/builds/apache-develop-mass-test-run/1.15.0-build.1060/test-results/distributedTest/1648900845/]
 or download 
[artifacts|http://files.apachegeode-ci.info/builds/apache-develop-mass-test-run/1.15.0-build.1060/test-artifacts/1648900845/distributedtestfiles-openjdk8-1.15.0-build.1060.tgz].

> CI Failures with testDistributedDeadlock
> 
>
> Key: GEODE-6489
> URL: https://issues.apache.org/jira/browse/GEODE-6489
> Project: Geode
>  Issue Type: Bug
>  Components: gfsh
>Affects Versions: 1.10.0, 1.14.0, 1.15.0
>Reporter: Lynn Hughes-Godfrey
>Assignee: Jinmei Liao
>Priority: Major
>  Labels: flaky
>
> In a single CI run, we see 3 failures, all related to testDistributedDeadlock:
> {noformat}
> org.apache.geode.management.internal.cli.commands.ShowDeadlockOverHttpDUnitTest
>  > testDistributedDeadlockWithFunction FAILED
> org.apache.geode.management.internal.cli.commands.ShowDeadlockOverHttpDUnitTest
>  > testNoDeadlock FAILED
> org.apache.geode.distributed.internal.deadlock.GemFireDeadlockDetectorDUnitTest
>  > testDistributedDeadlockWithDLock FAILED
> {noformat}
> https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/DistributedTestOpenJDK8/builds/469
> {noformat}
> org.apache.geode.management.internal.cli.commands.ShowDeadlockOverHttpDUnitTest
>  > testDistributedDeadlockWithFunction FAILED
> org.apache.geode.test.dunit.RMIException: While invoking 
> org.apache.geode.management.internal.cli.commands.ShowDeadlockDistributedTestBase$$Lambda$68/829260532.run
>  in VM 1 running on Host ceb4d948b5be with 4 VMs
> Caused by:
> org.awaitility.core.ConditionTimeoutException: Condition with 
> org.apache.geode.management.internal.cli.commands.ShowDeadlockDistributedTestBase
>  was not fulfilled within 300 seconds.
> org.apache.geode.management.internal.cli.commands.ShowDeadlockOverHttpDUnitTest
>  > testNoDeadlock FAILED
> org.apache.geode.test.dunit.RMIException: While invoking 
> org.apache.geode.management.internal.cli.commands.ShowDeadlockDistributedTestBase$$Lambda$68/829260532.run
>  in VM 1 running on Host ceb4d948b5be with 4 VMs
> Caused by:
> org.awaitility.core.ConditionTimeoutException: Condition with 
> org.apache.geode.management.internal.cli.commands.ShowDeadlockDistributedTestBase
>  was not fulfilled within 300 seconds.
> 137 tests completed, 2 failed
> > Task :geode-web:distributedTest FAILED
> > Task :geode-core:distributedTest
> org.apache.geode.distributed.internal.deadlock.GemFireDeadlockDetectorDUnitTest
>  > testDistributedDeadlockWithDLock FAILED
> java.lang.AssertionError
> at org.junit.Assert.fail(Assert.java:86)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at org.junit.Assert.assertTrue(Assert.java:52)
> at 
> org.apache.geode.distributed.internal.deadlock.GemFireDeadlockDetectorDUnitTest.testDistributedDeadlockWithDLock(GemFireDeadlockDetectorDUnitTest.java:201)
> {noformat}
> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=  Test Results URI 
> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
> http://files.apachegeode-ci.info/builds/apache-develop-main/1.10.0-SNAPSHOT.0019/test-results/distributedTest/1551833386/
> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
> Test report artifacts from this job are available at:
> http://files.apachegeode-ci.info/builds/apache-develop-main/1.10.0-SNAPSHOT.0019/test-artifacts/1551833386/distributedtestfiles-OpenJDK8-1.10.0-SNAPSHOT.0019.tgz



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (GEODE-9953) Implement LTRIM

2022-04-04 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-9953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17517064#comment-17517064
 ] 

ASF subversion and git services commented on GEODE-9953:


Commit 3cc3bdb826ee5047b3fec1b4d9de92fb1fdabc83 in geode's branch 
refs/heads/develop from Ray Ingles
[ https://gitbox.apache.org/repos/asf?p=geode.git;h=3cc3bdb826 ]

GEODE-9953: Implement LTRIM Command (#7403)

* GEODE-9953: Implement  Redis LTRIM command

* Docs updated

Co-authored-by: Ray Ingles 
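
For reference, an illustrative redis-cli session (not taken from the ticket) 
showing the semantics being implemented: LTRIM keeps only the elements in the 
inclusive [start, stop] range and drops the rest.

{noformat}
127.0.0.1:6379> RPUSH mylist a b c d e
(integer) 5
127.0.0.1:6379> LTRIM mylist 1 3
OK
127.0.0.1:6379> LRANGE mylist 0 -1
1) "b"
2) "c"
3) "d"
{noformat}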

> Implement LTRIM
> ---
>
> Key: GEODE-9953
> URL: https://issues.apache.org/jira/browse/GEODE-9953
> Project: Geode
>  Issue Type: New Feature
>  Components: redis
>Reporter: Wayne
>Priority: Major
>  Labels: pull-request-available
>
> Implement the [LTRIM|https://redis.io/commands/ltrim] command.
>  
> +Acceptance Criteria+
> The command has been implemented along with appropriate unit and system tests.
>  
> The command has been tested using the redis-cli tool and verified against 
> native redis.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (GEODE-9953) Implement LTRIM

2022-04-04 Thread Ray Ingles (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-9953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Ingles reassigned GEODE-9953:
-

Assignee: Ray Ingles

> Implement LTRIM
> ---
>
> Key: GEODE-9953
> URL: https://issues.apache.org/jira/browse/GEODE-9953
> Project: Geode
>  Issue Type: New Feature
>  Components: redis
>Reporter: Wayne
>Assignee: Ray Ingles
>Priority: Major
>  Labels: pull-request-available
>
> Implement the [LTRIM|https://redis.io/commands/ltrim] command.
>  
> +Acceptance Criteria+
> The command has been implemented along with appropriate unit and system tests.
>  
> The command has been tested using the redis-cli tool and verified against 
> native redis.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Resolved] (GEODE-9953) Implement LTRIM

2022-04-04 Thread Ray Ingles (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-9953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Ingles resolved GEODE-9953.
---
Fix Version/s: 1.15.0
   Resolution: Fixed

Implement LTRIM with associated tests.

> Implement LTRIM
> ---
>
> Key: GEODE-9953
> URL: https://issues.apache.org/jira/browse/GEODE-9953
> Project: Geode
>  Issue Type: New Feature
>  Components: redis
>Reporter: Wayne
>Assignee: Ray Ingles
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> Implement the [LTRIM|https://redis.io/commands/ltrim] command.
>  
> +Acceptance Criteria+
> The command has been implemented along with appropriate unit and system tests.
>  
> The command has been tested using the redis-cli tool and verified against 
> native redis.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (GEODE-10216) Add test for existence of VM stats

2022-04-04 Thread Hale Bales (Jira)
Hale Bales created GEODE-10216:
--

 Summary: Add test for existence of VM stats
 Key: GEODE-10216
 URL: https://issues.apache.org/jira/browse/GEODE-10216
 Project: Geode
  Issue Type: Test
  Components: statistics
Affects Versions: 1.15.0
Reporter: Hale Bales


When we started running with JDK 11, a couple of stats went missing (fdsOpen 
and fdLimit). This was resolved by adding the following line to test.gradle:
{code:java}
"--add-opens=jdk.management/com.sun.management.internal=ALL-UNNAMED",
{code}

This change was not tested. Add a test so that if something similar happens in 
the future, we will know about it before a customer has an issue.
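
A minimal sketch of what such an existence check could assert, using the 
plain JDK MXBean that backs these counters on Unix (an illustration under 
that assumption, not Geode's actual sampler code):

{code:java}
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import com.sun.management.UnixOperatingSystemMXBean;

public class FdStatsExistenceCheck {
  public static void main(String[] args) {
    OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
    if (os instanceof UnixOperatingSystemMXBean) {
      UnixOperatingSystemMXBean unixOs = (UnixOperatingSystemMXBean) os;
      long fdsOpen = unixOs.getOpenFileDescriptorCount();
      long fdLimit = unixOs.getMaxFileDescriptorCount();
      // If these counters ever go missing or bogus again, fail loudly in CI
      // rather than at a customer site.
      if (fdsOpen <= 0 || fdLimit < fdsOpen) {
        throw new AssertionError("VM file-descriptor stats missing: fdsOpen="
            + fdsOpen + ", fdLimit=" + fdLimit);
      }
    }
  }
}
{code}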




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (GEODE-10216) Add test for existence of VM stats

2022-04-04 Thread Hale Bales (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hale Bales reassigned GEODE-10216:
--

Assignee: Hale Bales

> Add test for existence of VM stats
> --
>
> Key: GEODE-10216
> URL: https://issues.apache.org/jira/browse/GEODE-10216
> Project: Geode
>  Issue Type: Test
>  Components: statistics
>Affects Versions: 1.15.0
>Reporter: Hale Bales
>Assignee: Hale Bales
>Priority: Major
>
> When we started running with JDK 11, a couple of stats went missing (fdsOpen 
> and fdLimit). This was resolved by adding the following line to test.gradle:
> {code:java}
> "--add-opens=jdk.management/com.sun.management.internal=ALL-UNNAMED",
> {code}
> This change was not tested. Add a test so that if something similar happens 
> in the future, we will know about it before a customer has an issue.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (GEODE-10216) Add test for existence of VM stats

2022-04-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated GEODE-10216:
---
Labels: pull-request-available  (was: )

> Add test for existence of VM stats
> --
>
> Key: GEODE-10216
> URL: https://issues.apache.org/jira/browse/GEODE-10216
> Project: Geode
>  Issue Type: Test
>  Components: statistics
>Affects Versions: 1.15.0
>Reporter: Hale Bales
>Assignee: Hale Bales
>Priority: Major
>  Labels: pull-request-available
>
> When we started running with JDK 11, a couple of stats went missing (fdsOpen 
> and fdLimit). This was resolved by adding the following line to test.gradle:
> {code:java}
> "--add-opens=jdk.management/com.sun.management.internal=ALL-UNNAMED",
> {code}
> This change was not tested. Add a test so that if something similar happens 
> in the future, we will know about it before a customer has an issue.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (GEODE-10116) Enable native Redis TCL tests for List data type

2022-04-04 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-10116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17517091#comment-17517091
 ] 

ASF subversion and git services commented on GEODE-10116:
-

Commit 83a8230144facc782c4982eca27a2b1fc7cbe083 in geode's branch 
refs/heads/develop from Ray Ingles
[ https://gitbox.apache.org/repos/asf?p=geode.git;h=83a8230144 ]

GEODE-10116: Enable Redis TCL tests for Lists (#7521)

* GEODE-10116: Enable Redis TCL tests for Lists

Co-authored-by: Ray Ingles 

> Enable native Redis TCL tests for List data type
> 
>
> Key: GEODE-10116
> URL: https://issues.apache.org/jira/browse/GEODE-10116
> Project: Geode
>  Issue Type: Improvement
>Reporter: Ray Ingles
>Priority: Major
>  Labels: pull-request-available
>
> The native redis TCL-based test suite has several tests for the List data 
> type, which should be enabled when Geode-for-Redis support is sufficient to 
> support them.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (GEODE-10116) Enable native Redis TCL tests for List data type

2022-04-04 Thread Ray Ingles (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Ingles reassigned GEODE-10116:
--

Assignee: Ray Ingles

> Enable native Redis TCL tests for List data type
> 
>
> Key: GEODE-10116
> URL: https://issues.apache.org/jira/browse/GEODE-10116
> Project: Geode
>  Issue Type: Improvement
>Reporter: Ray Ingles
>Assignee: Ray Ingles
>Priority: Major
>  Labels: pull-request-available
>
> The native redis TCL-based test suite has several tests for the List data 
> type, which should be enabled when Geode-for-Redis support is sufficient to 
> support them.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Resolved] (GEODE-10116) Enable native Redis TCL tests for List data type

2022-04-04 Thread Ray Ingles (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Ingles resolved GEODE-10116.

Fix Version/s: 1.15.0
   Resolution: Fixed

Enabled all currently-available Redis List TCL tests.

> Enable native Redis TCL tests for List data type
> 
>
> Key: GEODE-10116
> URL: https://issues.apache.org/jira/browse/GEODE-10116
> Project: Geode
>  Issue Type: Improvement
>Reporter: Ray Ingles
>Assignee: Ray Ingles
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> The native redis TCL-based test suite has several tests for the List data 
> type, which should be enabled when Geode-for-Redis support is sufficient to 
> support them.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (GEODE-10157) Add unit tests for all Delta classes in package org.apache.geode.redis.internal.data

2022-04-04 Thread Ray Ingles (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Ingles reassigned GEODE-10157:
--

Assignee: Ray Ingles

> Add unit tests for all Delta classes in package 
> org.apache.geode.redis.internal.data
> 
>
> Key: GEODE-10157
> URL: https://issues.apache.org/jira/browse/GEODE-10157
> Project: Geode
>  Issue Type: Bug
>  Components: redis
>Reporter: Jens Deppe
>Assignee: Ray Ingles
>Priority: Major
>
> Expand on tests in 
> {{org.apache.geode.redis.internal.data.DeltaClassesJUnitTest}} for all other 
> delta-related classes in this package.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (GEODE-10211) A retried operation does not set possible duplicate in the event if retried on an accessor

2022-04-04 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-10211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17517114#comment-17517114
 ] 

ASF subversion and git services commented on GEODE-10211:
-

Commit ead41c0221ba029bec3d3ed57c5232000601d7ce in geode's branch 
refs/heads/develop from Eric Shu
[ https://gitbox.apache.org/repos/asf?p=geode.git;h=ead41c0221 ]

GEODE-10211: Correctly check persistence on accessor (#7546)

 * To set possible duplicate for an event, whether the region is persistent
   needs to be determined. Allow an accessor to check the setting through
   profiles.

* Changed the order of the condition check so that there is no need to send
the find version tag message to peers if persistence is true.
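
A hedged sketch of the reordering in the second bullet (identifiers 
illustrative, not the actual Geode code): the cheap, profile-based 
persistence check now short-circuits the expensive peer round-trip.

{code:java}
// Persistence is readable locally (via profiles, even on an accessor with
// localMaxMemory=0), so check it before messaging peers for a version tag.
boolean possibleDuplicate =
    isRegionPersistent()                // cheap local check through profiles
    || findVersionTagFromPeers(event);  // expensive remote message, skipped
                                        // when persistence is true
event.setPossibleDuplicate(possibleDuplicate);
{code}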

> A retried operation does not set possible duplicate in the event if retried 
> on an accessor
> --
>
> Key: GEODE-10211
> URL: https://issues.apache.org/jira/browse/GEODE-10211
> Project: Geode
>  Issue Type: Bug
>  Components: persistence, regions
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>  Labels: GeodeOperationAPI, blocks-1.15.0, pull-request-available
>
> In Geode, for a persistent region, it is possible that all persistent copies 
> went offline. So possible duplicate should be set in the event, even though 
> no member has the event in its event tracker.
> Currently, the code handling this will miss the setting if the retry occurs 
> on an accessor (localMaxMemory set to 0).



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Resolved] (GEODE-10211) A retried operation does not set possible duplicate in the event if retried on an accessor

2022-04-04 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu resolved GEODE-10211.
--
Fix Version/s: 1.15.0
   Resolution: Fixed

> A retried operation does not set possible duplicate in the event if retried 
> on an accessor
> --
>
> Key: GEODE-10211
> URL: https://issues.apache.org/jira/browse/GEODE-10211
> Project: Geode
>  Issue Type: Bug
>  Components: persistence, regions
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>  Labels: GeodeOperationAPI, blocks-1.15.0, pull-request-available
> Fix For: 1.15.0
>
>
> In Geode, for a persistent region, it is possible that all persistent copies 
> went offline. So possible duplicate should be set in the event, even though 
> no member has the event in its event tracker.
> Currently, the code handling this will miss the setting if the retry occurs 
> on an accessor (localMaxMemory set to 0).



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (GEODE-10106) CI Failure: CacheClientNotifierDUnitTest > testNormalClient2MultipleCacheServer

2022-04-04 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-10106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17517126#comment-17517126
 ] 

ASF subversion and git services commented on GEODE-10106:
-

Commit 7870f01449ad8fe61fc2f7daa8a82dad5ecbeebc in geode's branch 
refs/heads/develop from Hale Bales
[ https://gitbox.apache.org/repos/asf?p=geode.git;h=7870f01449 ]

GEODE-10106: Handle NPE in QueueManagerImpl (#7471)

CacheClientNotifierDUnitTest failed with an NPE because queuedConnection is 
occasionally null when trying to recover the primary.

This is solved by checking for null before calling any methods on that object
(see the sketch after this list).

- Fixes spelling mistakes.
- The primary can be null when removing a connection, and when a new
  primary cannot be found while recovering a connection.
- The null check protects against both of those circumstances when
  scheduling the redundancy satisfier.

> CI Failure: CacheClientNotifierDUnitTest > 
> testNormalClient2MultipleCacheServer
> ---
>
> Key: GEODE-10106
> URL: https://issues.apache.org/jira/browse/GEODE-10106
> Project: Geode
>  Issue Type: Bug
>  Components: wan
>Affects Versions: 1.15.0
>Reporter: Jens Deppe
>Assignee: Hale Bales
>Priority: Major
>  Labels: needsTriage, pull-request-available
>
> [https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-mass-test-run/jobs/distributed-test-openjdk8/builds/1382]
> {noformat}
> CacheClientNotifierDUnitTest > testNormalClient2MultipleCacheServer FAILED
> 11:49:39java.lang.AssertionError: Suspicious strings were written to the 
> log during this run.
> 11:49:39Fix the strings or use IgnoredException.addIgnoredException to 
> ignore.
> 11:49:39
> ---
> 11:49:39Found suspect string in 'dunit_suspect-vm4.log' at line 431
> 11:49:39
> 11:49:39[error 2022/03/05 19:49:36.075 UTC 
>  tid=55] Error in 
> redundancy satisfier
> 11:49:39java.lang.NullPointerException
> 11:49:39  at 
> org.apache.geode.cache.client.internal.QueueManagerImpl.recoverPrimary(QueueManagerImpl.java:856)
> 11:49:39  at 
> org.apache.geode.cache.client.internal.QueueManagerImpl$RedundancySatisfierTask.run2(QueueManagerImpl.java:1454)
> 11:49:39  at 
> org.apache.geode.cache.client.internal.PoolImpl$PoolTask.run(PoolImpl.java:1340)
> 11:49:39  at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> 11:49:39  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 11:49:39  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> 11:49:39  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> 11:49:39  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> 11:49:39  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> 11:49:39  at java.lang.Thread.run(Thread.java:750)
> 11:49:39at org.junit.Assert.fail(Assert.java:89)
> 11:49:39at 
> org.apache.geode.test.dunit.internal.DUnitLauncher.closeAndCheckForSuspects(DUnitLauncher.java:422)
> 11:49:39at 
> org.apache.geode.test.dunit.internal.DUnitLauncher.closeAndCheckForSuspects(DUnitLauncher.java:438)
> 11:49:39at 
> org.apache.geode.test.dunit.internal.JUnit4DistributedTestCase.cleanupAllVms(JUnit4DistributedTestCase.java:551)
> 11:49:39at 
> org.apache.geode.test.dunit.internal.JUnit4DistributedTestCase.doTearDownDistributedTestCase(JUnit4DistributedTestCase.java:498)
> 11:49:39at 
> org.apache.geode.test.dunit.internal.JUnit4DistributedTestCase.tearDownDistributedTestCase(JUnit4DistributedTestCase.java:481)
> 11:49:39at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 11:49:39at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 11:49:39at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 11:49:39at java.lang.reflect.Method.invoke(Method.java:498)
> 11:49:39at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> 11:49:39at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 11:49:39at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> 11:49:39at 
> org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46)
> 11:49:39at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
> 11:49:39at org.junit.rule

[jira] [Resolved] (GEODE-10106) CI Failure: CacheClientNotifierDUnitTest > testNormalClient2MultipleCacheServer

2022-04-04 Thread Hale Bales (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hale Bales resolved GEODE-10106.

Fix Version/s: 1.15.0
   Resolution: Fixed

> CI Failure: CacheClientNotifierDUnitTest > 
> testNormalClient2MultipleCacheServer
> ---
>
> Key: GEODE-10106
> URL: https://issues.apache.org/jira/browse/GEODE-10106
> Project: Geode
>  Issue Type: Bug
>  Components: wan
>Affects Versions: 1.15.0
>Reporter: Jens Deppe
>Assignee: Hale Bales
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> [https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-mass-test-run/jobs/distributed-test-openjdk8/builds/1382]
> {noformat}
> CacheClientNotifierDUnitTest > testNormalClient2MultipleCacheServer FAILED
> 11:49:39java.lang.AssertionError: Suspicious strings were written to the 
> log during this run.
> 11:49:39Fix the strings or use IgnoredException.addIgnoredException to 
> ignore.
> 11:49:39
> ---
> 11:49:39Found suspect string in 'dunit_suspect-vm4.log' at line 431
> 11:49:39
> 11:49:39[error 2022/03/05 19:49:36.075 UTC 
>  tid=55] Error in 
> redundancy satisfier
> 11:49:39java.lang.NullPointerException
> 11:49:39  at 
> org.apache.geode.cache.client.internal.QueueManagerImpl.recoverPrimary(QueueManagerImpl.java:856)
> 11:49:39  at 
> org.apache.geode.cache.client.internal.QueueManagerImpl$RedundancySatisfierTask.run2(QueueManagerImpl.java:1454)
> 11:49:39  at 
> org.apache.geode.cache.client.internal.PoolImpl$PoolTask.run(PoolImpl.java:1340)
> 11:49:39  at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> 11:49:39  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 11:49:39  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> 11:49:39  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> 11:49:39  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> 11:49:39  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> 11:49:39  at java.lang.Thread.run(Thread.java:750)
> 11:49:39at org.junit.Assert.fail(Assert.java:89)
> 11:49:39at 
> org.apache.geode.test.dunit.internal.DUnitLauncher.closeAndCheckForSuspects(DUnitLauncher.java:422)
> 11:49:39at 
> org.apache.geode.test.dunit.internal.DUnitLauncher.closeAndCheckForSuspects(DUnitLauncher.java:438)
> 11:49:39at 
> org.apache.geode.test.dunit.internal.JUnit4DistributedTestCase.cleanupAllVms(JUnit4DistributedTestCase.java:551)
> 11:49:39at 
> org.apache.geode.test.dunit.internal.JUnit4DistributedTestCase.doTearDownDistributedTestCase(JUnit4DistributedTestCase.java:498)
> 11:49:39at 
> org.apache.geode.test.dunit.internal.JUnit4DistributedTestCase.tearDownDistributedTestCase(JUnit4DistributedTestCase.java:481)
> 11:49:39at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 11:49:39at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 11:49:39at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 11:49:39at java.lang.reflect.Method.invoke(Method.java:498)
> 11:49:39at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> 11:49:39at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 11:49:39at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> 11:49:39at 
> org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46)
> 11:49:39at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
> 11:49:39at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> 11:49:39at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 11:49:39at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> 11:49:39at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> 11:49:39at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> 11:49:39at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> 11:49:39at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> 11:49:39at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> 11:49

[jira] [Updated] (GEODE-10106) CI Failure: CacheClientNotifierDUnitTest > testNormalClient2MultipleCacheServer

2022-04-04 Thread Hale Bales (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hale Bales updated GEODE-10106:
---
Labels: pull-request-available  (was: needsTriage pull-request-available)

> CI Failure: CacheClientNotifierDUnitTest > 
> testNormalClient2MultipleCacheServer
> ---
>
> Key: GEODE-10106
> URL: https://issues.apache.org/jira/browse/GEODE-10106
> Project: Geode
>  Issue Type: Bug
>  Components: wan
>Affects Versions: 1.15.0
>Reporter: Jens Deppe
>Assignee: Hale Bales
>Priority: Major
>  Labels: pull-request-available
>
> [https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-mass-test-run/jobs/distributed-test-openjdk8/builds/1382]
> {noformat}
> CacheClientNotifierDUnitTest > testNormalClient2MultipleCacheServer FAILED
> 11:49:39java.lang.AssertionError: Suspicious strings were written to the 
> log during this run.
> 11:49:39Fix the strings or use IgnoredException.addIgnoredException to 
> ignore.
> 11:49:39
> ---
> 11:49:39Found suspect string in 'dunit_suspect-vm4.log' at line 431
> 11:49:39
> 11:49:39[error 2022/03/05 19:49:36.075 UTC 
>  tid=55] Error in 
> redundancy satisfier
> 11:49:39java.lang.NullPointerException
> 11:49:39  at 
> org.apache.geode.cache.client.internal.QueueManagerImpl.recoverPrimary(QueueManagerImpl.java:856)
> 11:49:39  at 
> org.apache.geode.cache.client.internal.QueueManagerImpl$RedundancySatisfierTask.run2(QueueManagerImpl.java:1454)
> 11:49:39  at 
> org.apache.geode.cache.client.internal.PoolImpl$PoolTask.run(PoolImpl.java:1340)
> 11:49:39  at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> 11:49:39  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 11:49:39  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> 11:49:39  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> 11:49:39  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> 11:49:39  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> 11:49:39  at java.lang.Thread.run(Thread.java:750)
> 11:49:39at org.junit.Assert.fail(Assert.java:89)
> 11:49:39at 
> org.apache.geode.test.dunit.internal.DUnitLauncher.closeAndCheckForSuspects(DUnitLauncher.java:422)
> 11:49:39at 
> org.apache.geode.test.dunit.internal.DUnitLauncher.closeAndCheckForSuspects(DUnitLauncher.java:438)
> 11:49:39at 
> org.apache.geode.test.dunit.internal.JUnit4DistributedTestCase.cleanupAllVms(JUnit4DistributedTestCase.java:551)
> 11:49:39at 
> org.apache.geode.test.dunit.internal.JUnit4DistributedTestCase.doTearDownDistributedTestCase(JUnit4DistributedTestCase.java:498)
> 11:49:39at 
> org.apache.geode.test.dunit.internal.JUnit4DistributedTestCase.tearDownDistributedTestCase(JUnit4DistributedTestCase.java:481)
> 11:49:39at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 11:49:39at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 11:49:39at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 11:49:39at java.lang.reflect.Method.invoke(Method.java:498)
> 11:49:39at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> 11:49:39at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 11:49:39at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> 11:49:39at 
> org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46)
> 11:49:39at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
> 11:49:39at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> 11:49:39at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 11:49:39at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> 11:49:39at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> 11:49:39at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> 11:49:39at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> 11:49:39at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> 11:49:39at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> 11:49:39

[jira] [Commented] (GEODE-7739) JMX managers may fail to federate mbeans for other members

2022-04-04 Thread Geode Integration (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-7739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17517142#comment-17517142
 ] 

Geode Integration commented on GEODE-7739:
--

Seen in [distributed-test-openjdk8 
#275|https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/distributed-test-openjdk8/builds/275]
 ... see [test 
results|http://files.apachegeode-ci.info/builds/apache-develop-main/1.15.0-build.1065/test-results/distributedTest/1649109870/]
 or download 
[artifacts|http://files.apachegeode-ci.info/builds/apache-develop-main/1.15.0-build.1065/test-artifacts/1649109870/distributedtestfiles-openjdk8-1.15.0-build.1065.tgz].

> JMX managers may fail to federate mbeans for other members
> --
>
> Key: GEODE-7739
> URL: https://issues.apache.org/jira/browse/GEODE-7739
> Project: Geode
>  Issue Type: Bug
>  Components: jmx
>Reporter: Kirk Lund
>Assignee: Kirk Lund
>Priority: Major
>  Labels: GeodeOperationAPI, pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> A JMX manager may fail to federate one or more MXBeans for other members 
> because of a race condition during startup. When ManagementCacheListener is 
> first constructed, it is in a state that ignores all callbacks, because 
> the field readyForEvents is false.
> 
> Debugging with JMXMBeanReconnectDUnitTest revealed this bug.
> The test starts two locators with jmx manager configured and started. 
> Locator1 always has all of locator2's mbeans, but locator2 is intermittently 
> missing the personal mbeans of locator1. 
> I think this is caused by some sort of race condition in the code that 
> creates the monitoring regions for other members in locator2.
> It's possible that the JMX manager that hits this bug might fail to have 
> mbeans for servers as well as other locators, but I haven't seen a test case 
> for this scenario.
> The exposure of this bug means that a user running more than one locator 
> might have a locator that is missing one or more mbeans for the cluster.
> 
> Studying the JMX code also reveals the existence of *GEODE-8012*.
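
For intuition, here is a minimal sketch of how such a startup race loses
callbacks; readyForEvents is the only name taken from the report above, and
everything else is assumed for the example:

{code:java}
// Sketch of the race: callbacks delivered while readyForEvents is still
// false are silently dropped, so the corresponding mbean is never
// federated. Only readyForEvents comes from the report; the rest is
// assumed for illustration.
final class ListenerRaceSketch {
  static final class ManagementCacheListener {
    private volatile boolean readyForEvents = false;

    void markReady() {
      readyForEvents = true;
    }

    void afterCreate(String mbeanName) {
      if (!readyForEvents) {
        return; // callback ignored; the mbean is lost
      }
      System.out.println("federating " + mbeanName);
    }
  }

  public static void main(String[] args) {
    ManagementCacheListener listener = new ManagementCacheListener();
    // An event that arrives before startup completes is lost for good.
    listener.afterCreate("GemFire:type=Member,member=locator1");
    listener.markReady(); // too late for the event above
  }
}
{code}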



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (GEODE-9037) Do not expose unsupported commands in Geode compatibility with Redis

2022-04-04 Thread Julia ida (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-9037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17517207#comment-17517207
 ] 

Julia ida commented on GEODE-9037:
--

The distinction between 'supported' and 'unsupported' commands is no longer 
useful and will be removed. Any commands still listed as unsupported will now 
respond as if they were unimplemented/unknown.

A system property will allow them to be enabled solely for tests.



> Do not expose unsupported commands in Geode compatibility with Redis
> 
>
> Key: GEODE-9037
> URL: https://issues.apache.org/jira/browse/GEODE-9037
> Project: Geode
>  Issue Type: New Feature
>  Components: redis
>Reporter: Ray Ingles
>Assignee: Ray Ingles
>Priority: Major
>  Labels: blocks-1.14.0, pull-request-available
> Fix For: 1.14.0, 1.15.0
>
>
> The distinction between 'supported' and 'unsupported' commands is no longer 
> useful and will be removed. Any commands still listed as unsupported will now 
> respond as if they were unimplemented/unknown.
> A system property will allow them to be enabled solely for tests.
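
For illustration, a minimal sketch of gating commands behind a system
property; the property name and the command chosen below are assumptions made
for the example, not Geode's actual flag or command list:

{code:java}
// Hypothetical sketch: unsupported commands answer exactly like unknown
// commands unless a test-only system property is set. The property name
// and the command used here are assumed, not Geode's actual values.
final class CommandGateSketch {
  private static final boolean ENABLE_UNSUPPORTED =
      Boolean.getBoolean("enable-unsupported-commands"); // assumed name

  static String execute(String command) {
    if (isUnsupported(command) && !ENABLE_UNSUPPORTED) {
      // Indistinguishable from an unimplemented/unknown command.
      return "ERR unknown command '" + command.toLowerCase() + "'";
    }
    return "+OK";
  }

  private static boolean isUnsupported(String command) {
    return "BITCOUNT".equalsIgnoreCase(command); // example only
  }

  public static void main(String[] args) {
    // Without -Denable-unsupported-commands=true this prints the error.
    System.out.println(execute("BITCOUNT"));
  }
}
{code}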



--
This message was sent by Atlassian Jira
(v8.20.1#820001)