[jira] [Resolved] (GEODE-10424) Improve parallel gateway sender logic

2022-11-04 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac resolved GEODE-10424.
--
Resolution: Won't Fix

> Improve parallel gateway sender logic
> -
>
> Key: GEODE-10424
> URL: https://issues.apache.org/jira/browse/GEODE-10424
> Project: Geode
>  Issue Type: Improvement
>  Components: wan
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: pull-request-available
>
> Improve the logic of putting events into the queue for a parallel gateway 
> sender connected to a region with redundancy.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (GEODE-10425) Add option to suppress Krf file creation

2022-11-04 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac resolved GEODE-10425.
--
Resolution: Won't Fix

> Add option to suppress Krf file creation
> 
>
> Key: GEODE-10425
> URL: https://issues.apache.org/jira/browse/GEODE-10425
> Project: Geode
>  Issue Type: Improvement
>  Components: persistence
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: pull-request-available
>
> Add an option to not create the Krf file.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (GEODE-10425) Add option to suppress Krf file creation

2022-09-29 Thread Mario Ivanac (Jira)
Mario Ivanac created GEODE-10425:


 Summary: Add option to suppress Krf file creation
 Key: GEODE-10425
 URL: https://issues.apache.org/jira/browse/GEODE-10425
 Project: Geode
  Issue Type: Improvement
  Components: persistence
Reporter: Mario Ivanac


Add an option to not create the Krf file.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (GEODE-10425) Add option to suppress Krf file creation

2022-09-29 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac reassigned GEODE-10425:


Assignee: Mario Ivanac

> Add option to suppress Krf file creation
> 
>
> Key: GEODE-10425
> URL: https://issues.apache.org/jira/browse/GEODE-10425
> Project: Geode
>  Issue Type: Improvement
>  Components: persistence
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>
> Add an option to not create the Krf file.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (GEODE-10410) Rebalance Guard Prevent Lost Bucket Recovery

2022-09-24 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac updated GEODE-10410:
-
Fix Version/s: 1.16.0

> Rebalance Guard Prevent Lost Bucket Recovery
> 
>
> Key: GEODE-10410
> URL: https://issues.apache.org/jira/browse/GEODE-10410
> Project: Geode
>  Issue Type: Bug
>Reporter: Weijie Xu
>Assignee: Weijie Xu
>Priority: Major
>  Labels: needsTriage, pull-request-available
> Fix For: 1.16.0
>
> Attachments: server2.log, test.tar.gz
>
>
> The following steps reproduce the issue:
> Run the start.gfsh in the attached example, which configures a Geode system 
> with a partitioned region and a gateway sender. So there are two regions: the 
> manually created region and the queue region.
> Then run the example code, which will load ~400M of data and five times that 
> amount of events into the system. All data is loaded into the system, with no 
> buckets lost and no out-of-memory errors.
> Then stop one of the servers and revoke the disk files of that server.
> Then start the server, which will trigger a bucket recovery. After that, some 
> of the secondary buckets will be lost.
> gfsh>show metrics --region=/example-region
>           | numBucketsWithoutRedundancy  | 63
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (GEODE-10424) Improve parallel gateway sender logic

2022-09-24 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac reassigned GEODE-10424:


Assignee: Mario Ivanac

> Improve parallel gateway sender logic
> -
>
> Key: GEODE-10424
> URL: https://issues.apache.org/jira/browse/GEODE-10424
> Project: Geode
>  Issue Type: Improvement
>  Components: wan
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>
> Improve the logic of putting events into the queue for a parallel gateway 
> sender connected to a region with redundancy.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (GEODE-10424) Improve parallel gateway sender logic

2022-09-24 Thread Mario Ivanac (Jira)
Mario Ivanac created GEODE-10424:


 Summary: Improve parallel gateway sender logic
 Key: GEODE-10424
 URL: https://issues.apache.org/jira/browse/GEODE-10424
 Project: Geode
  Issue Type: Improvement
  Components: wan
Reporter: Mario Ivanac


Improve the logic of putting events into the queue for a parallel gateway 
sender connected to a region with redundancy.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (GEODE-10331) DistributionImpl.destroyMember keeps cache alive for some number of seconds

2022-09-19 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac resolved GEODE-10331.
--
Fix Version/s: 1.16.0
   Resolution: Fixed

> DistributionImpl.destroyMember keeps cache alive for some number of seconds
> ---
>
> Key: GEODE-10331
> URL: https://issues.apache.org/jira/browse/GEODE-10331
> Project: Geode
>  Issue Type: Bug
>  Components: membership
>Reporter: Darrel Schneider
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: blocks-1.16.0, pull-request-available
> Fix For: 1.16.0
>
>
> org.apache.geode.distributed.internal.DistributionImpl.destroyMember creates 
> a thread that will hold onto the DistributedSystem/Cache through the 
> DirectChannel it has for 3 seconds by default. It could be even longer if 
> p2p.disconnectDelay is set to a value > 3000.
> This can be a problem if the JVM is trying to reconnect, since this old cache 
> uses memory.
> Instead of creating a new thread for every call of destroyMember, we should 
> just have a single ScheduledExecutor that we schedule the background 
> "closeEndpoint" with.
> Also, since all this code interacts with the DirectChannel, all the logic 
> about the executor and scheduling should belong to DirectChannel, not 
> DistributionImpl.
> When the DirectChannel has disconnect called on it, it should get rid of all 
> the tasks scheduled in the executor since they are no longer needed.
> I think this issue has been around for a long time because the creation of 
> the thread refers to fixing "Bug 37944", which is in an old bug system that 
> is no longer used for Geode.
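
A minimal sketch of the approach suggested above, using hypothetical names 
(CloseEndpointScheduler, scheduleCloseEndpoint, onDisconnect) rather than the 
actual DirectChannel code:

{code:java}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: a single ScheduledExecutor owned by DirectChannel that
// schedules the background "closeEndpoint" work, instead of spawning a new
// thread on every destroyMember call.
class CloseEndpointScheduler {
  private final ScheduledExecutorService executor =
      Executors.newSingleThreadScheduledExecutor();

  void scheduleCloseEndpoint(Runnable closeEndpointTask) {
    // The default delay is 3 seconds; p2p.disconnectDelay may make it longer.
    long delayMillis = Long.getLong("p2p.disconnectDelay", 3000L);
    executor.schedule(closeEndpointTask, delayMillis, TimeUnit.MILLISECONDS);
  }

  void onDisconnect() {
    // When DirectChannel.disconnect is called, pending closeEndpoint tasks are
    // no longer needed and would only keep the old cache reachable.
    executor.shutdownNow();
  }
}
{code}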



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (GEODE-10421) Enhancement of start gw sender with clean-queue

2022-09-16 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac resolved GEODE-10421.
--
Fix Version/s: 1.16.0
   Resolution: Fixed

> Enhancement of start gw sender with clean-queue
> --
>
> Key: GEODE-10421
> URL: https://issues.apache.org/jira/browse/GEODE-10421
> Project: Geode
>  Issue Type: Improvement
>  Components: gfsh, wan
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> Reject the command if the gateway sender is not stopped on all servers.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (GEODE-10419) Enhancement of backup disk-store command

2022-09-14 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac resolved GEODE-10419.
--
Fix Version/s: 1.16.0
   Resolution: Fixed

> Enhancement of backup disk-store command
> ---
>
> Key: GEODE-10419
> URL: https://issues.apache.org/jira/browse/GEODE-10419
> Project: Geode
>  Issue Type: Improvement
>  Components: gfsh, persistence
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> Add an additional option to perform a backup only for the specified disk-store(s).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (GEODE-10421) Enhancement of start gw sender with clean-queue

2022-09-13 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac reassigned GEODE-10421:


Assignee: Mario Ivanac

> Enhancement of start gw sender with clean-queue
> --
>
> Key: GEODE-10421
> URL: https://issues.apache.org/jira/browse/GEODE-10421
> Project: Geode
>  Issue Type: Improvement
>  Components: gfsh, wan
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>
> Reject the command if the gateway sender is not stopped on all servers.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (GEODE-10421) Enhancement of start gw sender with clean-queue

2022-09-13 Thread Mario Ivanac (Jira)
Mario Ivanac created GEODE-10421:


 Summary: Enhancement of start gw sender with clean-queue
 Key: GEODE-10421
 URL: https://issues.apache.org/jira/browse/GEODE-10421
 Project: Geode
  Issue Type: Improvement
  Components: gfsh, wan
Reporter: Mario Ivanac


Reject the command if the gateway sender is not stopped on all servers.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (GEODE-10419) Enhancement of backup disk-store command

2022-09-11 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac reassigned GEODE-10419:


Assignee: Mario Ivanac

> Enhancement of backup disk-store command
> ---
>
> Key: GEODE-10419
> URL: https://issues.apache.org/jira/browse/GEODE-10419
> Project: Geode
>  Issue Type: Improvement
>  Components: gfsh, persistence
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>
> Add an additional option to perform a backup only for the specified disk-store(s).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (GEODE-10419) Enhancement of backup disk-store command

2022-09-11 Thread Mario Ivanac (Jira)
Mario Ivanac created GEODE-10419:


 Summary: Enhancement of backup disk-store command
 Key: GEODE-10419
 URL: https://issues.apache.org/jira/browse/GEODE-10419
 Project: Geode
  Issue Type: Improvement
  Components: gfsh, persistence
Reporter: Mario Ivanac


Add an additional option to perform a backup only for the specified disk-store(s).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (GEODE-10336) ConnectionTable.close does not null out its static lastInstance field

2022-09-09 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac resolved GEODE-10336.
--
Fix Version/s: 1.16.0
   Resolution: Fixed

> ConnectionTable.close does not null out its static lastInstance field
> -
>
> Key: GEODE-10336
> URL: https://issues.apache.org/jira/browse/GEODE-10336
> Project: Geode
>  Issue Type: Bug
>  Components: membership
>Reporter: Darrel Schneider
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: blocks-1.16.0, pull-request-available
> Fix For: 1.16.0
>
>
> The ConnectionTable.close method does a bunch of work but it does not null 
> out the static "lastInstance" atomic. This causes it to keep the 
> ConnectionTable alive, which ends up keeping the InternalDistributedSystem 
> alive.
> The easiest fix is to do this at the end of close: "emergencyClose();". The 
> emergencyClose correctly sets the lastInstance atomic to null.
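
A minimal sketch of the suggested fix, with a simplified stand-in for 
ConnectionTable (the real class holds much more state); the only point shown is 
that close() ends by calling emergencyClose(), which clears the static 
lastInstance reference:

{code:java}
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical, simplified sketch: nulling out the static "lastInstance" atomic
// lets the ConnectionTable (and the InternalDistributedSystem it references)
// become garbage collectable after close.
class ConnectionTableSketch {
  private static final AtomicReference<ConnectionTableSketch> lastInstance =
      new AtomicReference<>();

  static ConnectionTableSketch create() {
    ConnectionTableSketch table = new ConnectionTableSketch();
    lastInstance.set(table);
    return table;
  }

  void close() {
    // ... existing cleanup work ...
    emergencyClose(); // last step of close(): clears the static reference
  }

  static void emergencyClose() {
    lastInstance.set(null);
  }
}
{code}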



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (GEODE-10335) TXManagerImpl.close does not remove itself from the static singleton

2022-09-08 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac resolved GEODE-10335.
--
Fix Version/s: 1.16.0
   Resolution: Fixed

> TXManagerImpl.close does not remove itself from the static singleton
> ---
>
> Key: GEODE-10335
> URL: https://issues.apache.org/jira/browse/GEODE-10335
> Project: Geode
>  Issue Type: Bug
>  Components: transactions
>Reporter: Darrel Schneider
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: blocks-1.16.0, pull-request-available
> Fix For: 1.16.0
>
>
> TXManagerImpl.close does not remove itself from the static singleton. This 
> causes it to keep the GemFireCacheImpl alive after it has been closed.
> The simple fix is to add "currentInstance = null" at the end of the close 
> method.
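
A minimal sketch of the suggested one-line fix, using a simplified stand-in for 
TXManagerImpl (class and field names here are hypothetical):

{code:java}
// Hypothetical, simplified sketch: clearing the static singleton at the end of
// close() means the closed GemFireCacheImpl is no longer reachable through it.
class TxManagerSketch {
  private static TxManagerSketch currentInstance;
  private final Object cache; // stands in for the GemFireCacheImpl reference

  TxManagerSketch(Object cache) {
    this.cache = cache;
    currentInstance = this;
  }

  static TxManagerSketch getCurrentInstance() {
    return currentInstance;
  }

  void close() {
    // ... existing close work ...
    currentInstance = null; // the suggested one-line fix
  }
}
{code}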



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (GEODE-10337) SocketCreatorFactory does not null out instance static

2022-09-07 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac resolved GEODE-10337.
--
Fix Version/s: 1.16.0
   Resolution: Fixed

> SocketCreatorFactory does not null out instance static
> --
>
> Key: GEODE-10337
> URL: https://issues.apache.org/jira/browse/GEODE-10337
> Project: Geode
>  Issue Type: Bug
>  Components: membership
>Reporter: Darrel Schneider
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: blocks-1.16.0, pull-request-available
> Fix For: 1.16.0
>
>
> The SocketCreatorFactory has a static "instance" field that keeps the 
> singleton factory. The factory has a reference in "distributionConfig" that 
> ends up keeping the InternalDistributedSystem alive after disconnect.
> It also has a static close method but the product never calls it.
> To fix this leak do the following:
> On InternalDistributedSystem.disconnect add to the end of it:
> {code:java}
>   if (!attemptingToReconnect) {
> SocketCreatorFactory.close();
>   }
> {code}
> Also I think it would be good to change SocketCreatorFactory.getInstance to 
> null out the static when close is called, like so:
> {code:java}
>   private static synchronized SocketCreatorFactory getInstance(boolean 
> closing) {
> SocketCreatorFactory result = instance;
> if (result == null && !closing) {
>   result = new SocketCreatorFactory();
>   instance = result;
> } else if (result != null && closing) {
>   instance = null;
> }
> return result;
>   }
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (GEODE-10405) Offline server won't start if gateway senders were restarted with clean queue option

2022-09-06 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac resolved GEODE-10405.
--
Fix Version/s: 1.16.0
   Resolution: Fixed

> Offline server won't start if gateway senders were restarted with clean queue 
> option
> 
>
> Key: GEODE-10405
> URL: https://issues.apache.org/jira/browse/GEODE-10405
> Project: Geode
>  Issue Type: Improvement
>  Components: persistence, regions, wan
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> In case gateway senders were restarted with the "clean queue" option while a 
> server was offline, starting that offline server will fail.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (GEODE-10331) DistributionImpl.destroyMember keeps cache alive for some number of seconds

2022-09-05 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac reassigned GEODE-10331:


Assignee: Mario Ivanac

> DistributionImpl.destroyMember keeps cache alive for some number of seconds
> ---
>
> Key: GEODE-10331
> URL: https://issues.apache.org/jira/browse/GEODE-10331
> Project: Geode
>  Issue Type: Bug
>  Components: membership
>Reporter: Darrel Schneider
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: blocks-1.16.0
>
> org.apache.geode.distributed.internal.DistributionImpl.destroyMember creates 
> a thread that will hold onto the DistributedSystem/Cache through the 
> DirectChannel it has for 3 seconds by default. It could be even longer if 
> p2p.disconnectDelay is set to a value > 3000.
> This can be a problem if the JVM is trying to reconnect, since this old cache 
> uses memory.
> Instead of creating a new thread for every call of destroyMember, we should 
> just have a single ScheduledExecutor that we schedule the background 
> "closeEndpoint" with.
> Also, since all this code interacts with the DirectChannel, all the logic 
> about the executor and scheduling should belong to DirectChannel, not 
> DistributionImpl.
> When the DirectChannel has disconnect called on it, it should get rid of all 
> the tasks scheduled in the executor since they are no longer needed.
> I think this issue has been around for a long time because the creation of 
> the thread refers to fixing "Bug 37944", which is in an old bug system that 
> is no longer used for Geode.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (GEODE-10335) TXManagerImpl.close does not remove itself from the static singleton

2022-09-05 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac reassigned GEODE-10335:


Assignee: Mario Ivanac

> TXManagerImpl.close does not remove itself from the static singleton
> ---
>
> Key: GEODE-10335
> URL: https://issues.apache.org/jira/browse/GEODE-10335
> Project: Geode
>  Issue Type: Bug
>  Components: transactions
>Reporter: Darrel Schneider
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: blocks-1.16.0
>
> TXManagerImpl.close does not remove itself from the static singleton. This 
> causes it to keep the GemFireCacheImpl alive after it has been closed.
> The simple fix is to add "currentInstance = null" at the end of the close 
> method.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (GEODE-10336) ConnectionTable.close does not null out its static lastInstance field

2022-09-05 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac reassigned GEODE-10336:


Assignee: Mario Ivanac

> ConnectionTable.close does not null out its static lastInstance field
> -
>
> Key: GEODE-10336
> URL: https://issues.apache.org/jira/browse/GEODE-10336
> Project: Geode
>  Issue Type: Bug
>  Components: membership
>Reporter: Darrel Schneider
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: blocks-1.16.0
>
> The ConnectionTable.close method does a bunch of work but it does not null 
> out the static "lastInstance" atomic. This causes it to keep the 
> ConnectionTable alive, which ends up keeping the InternalDistributedSystem 
> alive.
> The easiest fix is to do this at the end of close: "emergencyClose();". The 
> emergencyClose correctly sets the lastInstance atomic to null.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (GEODE-10337) SocketCreatorFactory does not null out instance static

2022-09-05 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac reassigned GEODE-10337:


Assignee: Mario Ivanac

> SocketCreatorFactory does not null out instance static
> --
>
> Key: GEODE-10337
> URL: https://issues.apache.org/jira/browse/GEODE-10337
> Project: Geode
>  Issue Type: Bug
>  Components: membership
>Reporter: Darrel Schneider
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: blocks-1.16.0
>
> The SocketCreatorFactory has a static "instance" field that keeps the 
> singleton factory. The factory has a reference in "distributionConfig" that 
> ends up keeping the InternalDistributedSystem alive after disconnect.
> It also has a static close method but the product never calls it.
> To fix this leak do the following:
> On InternalDistributedSystem.disconnect add to the end of it:
> {code:java}
>   if (!attemptingToReconnect) {
> SocketCreatorFactory.close();
>   }
> {code}
> Also I think it would be good to change SocketCreatorFactory.getInstance to 
> null out the static when close is called, like so:
> {code:java}
>   private static synchronized SocketCreatorFactory getInstance(boolean 
> closing) {
> SocketCreatorFactory result = instance;
> if (result == null && !closing) {
>   result = new SocketCreatorFactory();
>   instance = result;
> } else if (result != null && closing) {
>   instance = null;
> }
> return result;
>   }
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (GEODE-10408) Improve data inconsistency after failed offline compaction

2022-08-27 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac resolved GEODE-10408.
--
Fix Version/s: 1.16.0
   Resolution: Fixed

> Improve data inconsistency after failed offline compaction
> --
>
> Key: GEODE-10408
> URL: https://issues.apache.org/jira/browse/GEODE-10408
> Project: Geode
>  Issue Type: Improvement
>  Components: persistence, regions
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> We have a cluster with 3 servers and a configured partitioned persistent 
> redundant region. The region is filled with data. We stop one server and 
> perform offline compaction. While the compaction is ongoing, the process is 
> interrupted. When that server is started, data inconsistency is observed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (GEODE-10405) Offline server won't start if gateway senders were restarted with clean queue option

2022-08-24 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac updated GEODE-10405:
-
Summary: Offline server won't start if gateway senders were restarted with 
clean queue option  (was: Offline server want start if gateway senders were 
restarted with clean queue option)

> Offline server won't start if gateway senders were restarted with clean queue 
> option
> 
>
> Key: GEODE-10405
> URL: https://issues.apache.org/jira/browse/GEODE-10405
> Project: Geode
>  Issue Type: Improvement
>  Components: persistence, regions, wan
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: pull-request-available
>
> In case gateway senders were restarted with the "clean queue" option while a 
> server was offline, starting that offline server will fail.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (GEODE-10407) java.lang.LinkageError: loader org.apache.geode.internal.DeployJarChildFirstClassLoader @2b0a05b0 attempted duplicate class definition for org.apache.kafka.common.Kafka

2022-08-24 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac resolved GEODE-10407.
--
Fix Version/s: 1.16.0
   Resolution: Fixed

> java.lang.LinkageError: loader 
> org.apache.geode.internal.DeployJarChildFirstClassLoader @2b0a05b0 attempted 
> duplicate class definition for org.apache.kafka.common.KafkaException
> -
>
> Key: GEODE-10407
> URL: https://issues.apache.org/jira/browse/GEODE-10407
> Project: Geode
>  Issue Type: Bug
>  Components: configuration, gfsh
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: needsTriage, pull-request-available
> Fix For: 1.16.0
>
>
> In a Geode cluster, 5 jars are deployed:
>  
> 1st jar
> 2nd jar
> 3rd jar
> 4th CacheListener jar
> 5th jar
>  
> The cache listener implementation uses classes from the 5th jar.
> When creating a region with the CacheListener, a LinkageError is detected.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (GEODE-10407) java.lang.LinkageError: loader org.apache.geode.internal.DeployJarChildFirstClassLoader @2b0a05b0 attempted duplicate class definition for org.apache.kafka.common.KafkaE

2022-08-18 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac updated GEODE-10407:
-
Description: 
In a Geode cluster, 5 jars are deployed:

1st jar
2nd jar
3rd jar
4th CacheListener jar
5th jar

The cache listener implementation uses classes from the 5th jar.

When creating a region with the CacheListener, a LinkageError is detected.

  was:When creating region with CacheListener, LinkageError error is detected


> java.lang.LinkageError: loader 
> org.apache.geode.internal.DeployJarChildFirstClassLoader @2b0a05b0 attempted 
> duplicate class definition for org.apache.kafka.common.KafkaException
> -
>
> Key: GEODE-10407
> URL: https://issues.apache.org/jira/browse/GEODE-10407
> Project: Geode
>  Issue Type: Bug
>  Components: configuration, gfsh
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: needsTriage, pull-request-available
>
> In a Geode cluster, 5 jars are deployed:
>  
> 1st jar
> 2nd jar
> 3rd jar
> 4th CacheListener jar
> 5th jar
>  
> The cache listener implementation uses classes from the 5th jar.
> When creating a region with the CacheListener, a LinkageError is detected.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (GEODE-10408) Improve data inconsistency after failed offline compaction

2022-08-16 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac updated GEODE-10408:
-
Description: We have a cluster with 3 servers and a configured partitioned 
persistent redundant region. The region is filled with data. We stop one 
server and perform offline compaction. While the compaction is ongoing, the 
process is interrupted. When that server is started, data inconsistency is 
observed.

> Improve data inconsistency after failed offline compaction
> --
>
> Key: GEODE-10408
> URL: https://issues.apache.org/jira/browse/GEODE-10408
> Project: Geode
>  Issue Type: Improvement
>  Components: persistence, regions
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>
> We have a cluster with 3 servers and a configured partitioned persistent 
> redundant region. The region is filled with data. We stop one server and 
> perform offline compaction. While the compaction is ongoing, the process is 
> interrupted. When that server is started, data inconsistency is observed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (GEODE-10408) Improve data inconsistency after failed offline compaction

2022-08-16 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac reassigned GEODE-10408:


Assignee: Mario Ivanac

> Improve data inconsistency after failed offline compaction
> --
>
> Key: GEODE-10408
> URL: https://issues.apache.org/jira/browse/GEODE-10408
> Project: Geode
>  Issue Type: Improvement
>  Components: persistence, regions
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (GEODE-10407) java.lang.LinkageError: loader org.apache.geode.internal.DeployJarChildFirstClassLoader @2b0a05b0 attempted duplicate class definition for org.apache.kafka.common.KafkaE

2022-08-07 Thread Mario Ivanac (Jira)
Mario Ivanac created GEODE-10407:


 Summary: java.lang.LinkageError: loader 
org.apache.geode.internal.DeployJarChildFirstClassLoader @2b0a05b0 attempted 
duplicate class definition for org.apache.kafka.common.KafkaException
 Key: GEODE-10407
 URL: https://issues.apache.org/jira/browse/GEODE-10407
 Project: Geode
  Issue Type: Bug
  Components: configuration, gfsh
Reporter: Mario Ivanac


When creating a region with the CacheListener, a LinkageError is detected.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (GEODE-10407) java.lang.LinkageError: loader org.apache.geode.internal.DeployJarChildFirstClassLoader @2b0a05b0 attempted duplicate class definition for org.apache.kafka.common.Kafka

2022-08-07 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac reassigned GEODE-10407:


Assignee: Mario Ivanac

> java.lang.LinkageError: loader 
> org.apache.geode.internal.DeployJarChildFirstClassLoader @2b0a05b0 attempted 
> duplicate class definition for org.apache.kafka.common.KafkaException
> -
>
> Key: GEODE-10407
> URL: https://issues.apache.org/jira/browse/GEODE-10407
> Project: Geode
>  Issue Type: Bug
>  Components: configuration, gfsh
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: needsTriage
>
> When creating a region with the CacheListener, a LinkageError is detected.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (GEODE-10405) Offline server want start if gateway senders were restarted with clean queue option

2022-08-04 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac reassigned GEODE-10405:


Assignee: Mario Ivanac

> Offline server want start if gateway senders were restarted with clean queue 
> option
> ---
>
> Key: GEODE-10405
> URL: https://issues.apache.org/jira/browse/GEODE-10405
> Project: Geode
>  Issue Type: Improvement
>  Components: persistence, regions, wan
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>
> In case gateway senders were restarted with the "clean queue" option while a 
> server was offline, starting that offline server will fail.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (GEODE-10405) Offline server want start if gateway senders were restarted with clean queue option

2022-08-04 Thread Mario Ivanac (Jira)
Mario Ivanac created GEODE-10405:


 Summary: Offline server want start if gateway senders were 
restarted with clean queue option
 Key: GEODE-10405
 URL: https://issues.apache.org/jira/browse/GEODE-10405
 Project: Geode
  Issue Type: Improvement
  Components: persistence, regions, wan
Reporter: Mario Ivanac


In case gateway senders were restarted with the "clean queue" option while a 
server was offline, starting that offline server will fail.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (GEODE-6150) Allow use of client/server max-threads with SSL

2022-07-26 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-6150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac resolved GEODE-6150.
-
Resolution: Abandoned

> Allow use of client/server max-threads with SSL
> ---
>
> Key: GEODE-6150
> URL: https://issues.apache.org/jira/browse/GEODE-6150
> Project: Geode
>  Issue Type: New Feature
>  Components: client/server
>Reporter: Bruce J Schuchardt
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: SmallFeature, pull-request-available
>
> Cache servers have a max-threads setting that causes the server to limit the 
> number of threads used by clients. The implementation uses Java NIO, though, 
> and that doesn't currently play well with SSL/TLS secure communications. If 
> you attempt to configure the server to use secure communications _and_ 
> max-threads, it throws an IllegalArgumentException with the message:
> "Selector thread pooling can not be used with client/server SSL. The selector 
> can be disabled by setting max-threads=0."
> The server code should be modified to use the JDK's SSLEngine to implement 
> SSL/TLS over NIO and get rid of this restriction.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (GEODE-10397) Meyhomes Capital Phú Quốc Landup

2022-07-16 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac resolved GEODE-10397.
--
Resolution: Invalid

> Meyhomes Capital Phú Quốc Landup
> 
>
> Key: GEODE-10397
> URL: https://issues.apache.org/jira/browse/GEODE-10397
> Project: Geode
>  Issue Type: Bug
>Reporter: Meyhomes Capital Phú Quốc Landup
>Priority: Major
>  Labels: needsTriage
>
> New information about the Meyhomes Capital Phú Quốc LandUp project 
> [https://landup.net/du-an/meyhomes-capital-phu-quoc/]: location, payment, 
> discounts, prices... the Meyhomes Capital Phú Quốc project by the developer 
> Tân Á Đại Thành. The project has a prime location right by Bãi Trường beach 
> and is sought after by many investors.
> *Website:* ++ [https://landup.net/du-an/meyhomes-capital-phu-quoc/]
> {*}Youtube{*}: [https://www.youtube.com/channel/UCkXw7NGUetiWxAkET5Vc1WA/]
> *Blog:* 
> [https://sites.google.com/view/meyhomescapitalpq-landup/|https://sites.google.com/view/meyhomescapitalpq-landup/trang-ch%E1%BB%A7]
> *Email:* meyhomescapitalphuq...@gmail.com
> *Address:* 20 Nguyễn Văn Đậu, Phường 5, Quận Phú Nhuận, Ho Chi Minh City
> *Phone number:* 0928.0168.69
> *Tags:* Meyhomes Capital Phú Quốc project, Meyhomes Capital Phú Quốc, Tân Á 
> Đại Thành real estate, Phú Quốc real estate, LandUp, real estate
> *Hashtag:* #meyhomescapital #meyhomescapitalphuquoc #batdongsanphuquoc 
> #tanadaithanh #batdongsan #landup
> *My social profiles:*
> [https://www.facebook.com/Meyhomes-Capital-Ph%C3%BA-Qu%E1%BB%91c-112117104876503]
> [https://www.pinterest.com/meyhomescapitalpqlandup|https://www.pinterest.com/meyhomescapitalpqlandup/]
> [https://twitter.com/MeyhomesCapitaI]
> [https://about.me/meyhomescapitalpqlandup]
> [https://www.behance.net/meyhomescapitalpq]
> [https://meyhomescapitalpqlandup.tumblr.com|https://meyhomescapitalpqlandup.tumblr.com/]
> [https://www.tumblr.com/blog/view/meyhomescapitalpqlandup]
> [https://soundcloud.com/meyhomescapitalpq-landup]
> [https://www.instagram.com/meyhomescapitalpglandup|https://www.instagram.com/meyhomescapitalpglandup/]
> [https://myhomescapitalphuquoclandup.wordpress.com/]
> [https://meyhomescapitalpqlandup.weebly.com/]
> [https://github.com/meyhomescapitalpq-landup]
> [https://meyhomescapitalpq-landup.blogspot.com/2022/07/meyhomes-capital-phu-quoc-landup.html]
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (GEODE-10397) Meyhomes Capital Phú Quốc Landup

2022-07-16 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac closed GEODE-10397.


> Meyhomes Capital Phú Quốc Landup
> 
>
> Key: GEODE-10397
> URL: https://issues.apache.org/jira/browse/GEODE-10397
> Project: Geode
>  Issue Type: Bug
>Reporter: Meyhomes Capital Phú Quốc Landup
>Priority: Major
>  Labels: needsTriage
>
> New information about the Meyhomes Capital Phú Quốc LandUp project 
> [https://landup.net/du-an/meyhomes-capital-phu-quoc/]: location, payment, 
> discounts, prices... the Meyhomes Capital Phú Quốc project by the developer 
> Tân Á Đại Thành. The project has a prime location right by Bãi Trường beach 
> and is sought after by many investors.
> *Website:* ++ [https://landup.net/du-an/meyhomes-capital-phu-quoc/]
> {*}Youtube{*}: [https://www.youtube.com/channel/UCkXw7NGUetiWxAkET5Vc1WA/]
> *Blog:* 
> [https://sites.google.com/view/meyhomescapitalpq-landup/|https://sites.google.com/view/meyhomescapitalpq-landup/trang-ch%E1%BB%A7]
> *Email:* meyhomescapitalphuq...@gmail.com
> *Address:* 20 Nguyễn Văn Đậu, Phường 5, Quận Phú Nhuận, Ho Chi Minh City
> *Phone number:* 0928.0168.69
> *Tags:* Meyhomes Capital Phú Quốc project, Meyhomes Capital Phú Quốc, Tân Á 
> Đại Thành real estate, Phú Quốc real estate, LandUp, real estate
> *Hashtag:* #meyhomescapital #meyhomescapitalphuquoc #batdongsanphuquoc 
> #tanadaithanh #batdongsan #landup
> *My social profiles:*
> [https://www.facebook.com/Meyhomes-Capital-Ph%C3%BA-Qu%E1%BB%91c-112117104876503]
> [https://www.pinterest.com/meyhomescapitalpqlandup|https://www.pinterest.com/meyhomescapitalpqlandup/]
> [https://twitter.com/MeyhomesCapitaI]
> [https://about.me/meyhomescapitalpqlandup]
> [https://www.behance.net/meyhomescapitalpq]
> [https://meyhomescapitalpqlandup.tumblr.com|https://meyhomescapitalpqlandup.tumblr.com/]
> [https://www.tumblr.com/blog/view/meyhomescapitalpqlandup]
> [https://soundcloud.com/meyhomescapitalpq-landup]
> [https://www.instagram.com/meyhomescapitalpglandup|https://www.instagram.com/meyhomescapitalpglandup/]
> [https://myhomescapitalphuquoclandup.wordpress.com/]
> [https://meyhomescapitalpqlandup.weebly.com/]
> [https://github.com/meyhomescapitalpq-landup]
> [https://meyhomescapitalpq-landup.blogspot.com/2022/07/meyhomes-capital-phu-quoc-landup.html]
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (GEODE-10392) Faulty statistics when parallel gateway sender is started with clean queue, on restarted member

2022-07-05 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac resolved GEODE-10392.
--
Fix Version/s: 1.16.0
   Resolution: Fixed

> Faulty statistics when parallel gateway sender is started with clean queue, 
> on restarted member
> ---
>
> Key: GEODE-10392
> URL: https://issues.apache.org/jira/browse/GEODE-10392
> Project: Geode
>  Issue Type: Bug
>  Components: statistics, wan
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: needsTriage, pull-request-available
> Fix For: 1.16.0
>
>
> We have the following scenario:
> we fill the parallel gateway-sender queue with some events, restart one 
> server, and after it has recovered, execute stop gateway sender and then 
> start gateway sender with the clean queue option.
> At this moment the queue is cleared and all stats are zero.
> After this, if we put any data into the queue, you can see that the stat 
> getTotalQueueSizeBytesInUse remains 0.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (GEODE-9484) Data inconsistency in replicated region with 3 or more servers, and one server is down

2022-07-01 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-9484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac resolved GEODE-9484.
-
Fix Version/s: 1.16.0
   Resolution: Fixed

> Data inconsistency in replicated region with 3 or more servers, and one 
> server is down 
> ---
>
> Key: GEODE-9484
> URL: https://issues.apache.org/jira/browse/GEODE-9484
> Project: Geode
>  Issue Type: Improvement
>  Components: client/server, regions
>Affects Versions: 1.13.0
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.1, 1.16.0
>
>
> We have configured a replicated region with 3 or more servers, and the client 
> is configured with a read timeout set to a value equal to or smaller than the 
> member timeout.
> If, while the client is putting data into the region, one of the replicated 
> servers is shut down, data inconsistency is observed.
>  
> We see that part of the data is written in the server connected to the 
> client, but it is missing in the remaining replicated servers.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (GEODE-9997) Improve possible duplicate logic in WAN

2022-06-29 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-9997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac resolved GEODE-9997.
-
Fix Version/s: 1.16.0
   Resolution: Fixed

> Improve possible duplicate logic in WAN
> ---
>
> Key: GEODE-9997
> URL: https://issues.apache.org/jira/browse/GEODE-9997
> Project: Geode
>  Issue Type: Improvement
>  Components: wan
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> It has been observed that when a server with a full parallel gateway sender 
> queue is restarted, after it is up it is much slower at dequeueing events 
> than the other members of the cluster.
> After analysis, we found out that the reason for this is the current logic 
> that marks all events in the queue as possible duplicates.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (GEODE-10020) Improving LiveServerPinger logic

2022-06-28 Thread Mario Ivanac (Jira)
Mario Ivanac resolved GEODE-10020.
--
Fix Version/s: 1.16.0
   Resolution: Fixed
       Status: Reopened -> Resolved

> Improving LiveServerPinger logic
> (Geode / GEODE-10020, Change By: Mario Ivanac)


--
This message was sent by Atlassian Jira
(v8.20.10#820010-sha1:ace47f9)


[jira] [Closed] (GEODE-10362) Expose gateway sender recovery status, after restart of server

2022-06-28 Thread Mario Ivanac (Jira)
Mario Ivanac closed GEODE-10362.
--
    Status: Resolved -> Closed

Not needed.

> Expose gateway sender recovery status, after restart of server
> (Geode / GEODE-10362, Change By: Mario Ivanac)


--
This message was sent by Atlassian Jira
(v8.20.10#820010-sha1:ace47f9)


[jira] [Resolved] (GEODE-10362) Expose gateway sender recovery status, after restart of server

2022-06-28 Thread Mario Ivanac (Jira)
Mario Ivanac resolved GEODE-10362.
--
Resolution: Won't Fix
    Status: In Progress -> Resolved

> Expose gateway sender recovery status, after restart of server
> (Geode / GEODE-10362, Change By: Mario Ivanac)


--
This message was sent by Atlassian Jira
(v8.20.10#820010-sha1:ace47f9)


[jira] [Created] (GEODE-10392) Faulty statistics when parallel gateway sender is started with clean queue, on restarted member

2022-06-20 Thread Mario Ivanac (Jira)
Mario Ivanac created GEODE-10392:


 Summary: Faulty statistics when parallel gateway sender is started 
with clean queue, on restarted member
 Key: GEODE-10392
 URL: https://issues.apache.org/jira/browse/GEODE-10392
 Project: Geode
  Issue Type: Bug
  Components: statistics, wan
Reporter: Mario Ivanac


We have the following scenario:

We fill the parallel gateway-sender queue with some events, restart one server, 
and after it has recovered, execute stop gateway sender and then start gateway 
sender with the clean queue option.

At this moment the queue is cleared and all stats are zero.

After this, if we put any data into the queue, you can see that the stat 
getTotalQueueSizeBytesInUse remains 0.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Assigned] (GEODE-10392) Faulty statistics when parallel gateway sender is started with clean queue, on restarted member

2022-06-20 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac reassigned GEODE-10392:


Assignee: Mario Ivanac

> Faulty statistics when parallel gateway sender is started with clean queue, 
> on restarted member
> ---
>
> Key: GEODE-10392
> URL: https://issues.apache.org/jira/browse/GEODE-10392
> Project: Geode
>  Issue Type: Bug
>  Components: statistics, wan
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: needsTriage
>
> We have the following scenario:
> we fill the parallel gateway-sender queue with some events, restart one 
> server, and after it has recovered, execute stop gateway sender and then 
> start gateway sender with the clean queue option.
> At this moment the queue is cleared and all stats are zero.
> After this, if we put any data into the queue, you can see that the stat 
> getTotalQueueSizeBytesInUse remains 0.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Resolved] (GEODE-10280) Additional info in Server Status command

2022-06-06 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac resolved GEODE-10280.
--
Fix Version/s: 1.16.0
   Resolution: Fixed

> Additional info in Server Status command
> 
>
> Key: GEODE-10280
> URL: https://issues.apache.org/jira/browse/GEODE-10280
> Project: Geode
>  Issue Type: Improvement
>  Components: gfsh
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> Update the "status server" command to additionally print the status message 
> if present in the "starting" state.
> This could be useful for cases when a server is stuck in the "starting" state 
> due to waiting on an offline member with the newest information.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Assigned] (GEODE-10362) Expose gateway sender recovery status, after restart of server

2022-06-06 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac reassigned GEODE-10362:


Assignee: Mario Ivanac

> Expose gateway sender recovery status, after restart of server
> --
>
> Key: GEODE-10362
> URL: https://issues.apache.org/jira/browse/GEODE-10362
> Project: Geode
>  Issue Type: Wish
>  Components: wan
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>
> Expose gateway sender recovery status, after restart of server



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (GEODE-10362) Expose gateway sender recovery status, after restart of server

2022-06-06 Thread Mario Ivanac (Jira)
Mario Ivanac created GEODE-10362:


 Summary: Expose gateway sender recovery status, after restart of 
server
 Key: GEODE-10362
 URL: https://issues.apache.org/jira/browse/GEODE-10362
 Project: Geode
  Issue Type: Wish
  Components: wan
Reporter: Mario Ivanac


Expose gateway sender recovery status, after restart of server



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Reopened] (GEODE-10020) Improving LiveServerPinger logic

2022-05-31 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac reopened GEODE-10020:
--

> Improving LiveServerPinger logic
> 
>
> Key: GEODE-10020
> URL: https://issues.apache.org/jira/browse/GEODE-10020
> Project: Geode
>  Issue Type: Improvement
>  Components: wan
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> _When several gateway receivers have the same value for hostname-for-senders 
> (for example when running Geode under Kubernetes and a load balancer balances 
> the load among the remote servers), it has been observed that the number of 
> connections in the GW sender's pool used for sending ping messages is much 
> greater than the number of dispatcher threads, although in this case only one 
> connection could be used_ _(since the destinations have the same address and 
> port)._
>  
> _For details see the RFC._



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Resolved] (GEODE-10277) Exception thrown when checking gatewaySender EventQueueSize, while restarting gateway sender with clean queue option

2022-05-25 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac resolved GEODE-10277.
--
Fix Version/s: 1.15.0
   Resolution: Fixed

> Exception thrown when checking gatewaySender EventQueueSize, while restarting 
> gateway sender with clean queue option
> 
>
> Key: GEODE-10277
> URL: https://issues.apache.org/jira/browse/GEODE-10277
> Project: Geode
>  Issue Type: Bug
>  Components: statistics
>Affects Versions: 1.15.0
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: blocks-1.15.0, pull-request-available
> Fix For: 1.15.0
>
>
> In case we are checking EventQueueSize on a server with a full parallel 
> gateway sender queue, and the gateway sender is restarted with the 
> --cleanqueue option, a NullPointerException occurs in the JMX client.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Resolved] (GEODE-10020) Improving LiveServerPinger logic

2022-05-24 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac resolved GEODE-10020.
--
Fix Version/s: 1.16.0
   Resolution: Fixed

> Improving LiveServerPinger logic
> 
>
> Key: GEODE-10020
> URL: https://issues.apache.org/jira/browse/GEODE-10020
> Project: Geode
>  Issue Type: Improvement
>  Components: wan
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> _When several gateway receivers have the same value for hostname-for-senders 
> (for example when running Geode under Kubernetes and a load balancer balances 
> the load among the remote servers), it has been observed that the number of 
> connections in the GW sender's pool used for sending ping messages is much 
> greater than the number of dispatcher threads, although in this case only one 
> connection could be used_ _(since the destinations have the same address and 
> port)._
>  
> _For details see the RFC._



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Resolved] (GEODE-10226) Introduce monitoring of async writer thread

2022-05-24 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac resolved GEODE-10226.
--
Fix Version/s: 1.16.0
   Resolution: Fixed

> Introduce monitoring of async writer thread
> ---
>
> Key: GEODE-10226
> URL: https://issues.apache.org/jira/browse/GEODE-10226
> Project: Geode
>  Issue Type: Improvement
>  Components: persistence
>Affects Versions: 1.15.0
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> _Introduce new (or extend existing) thread monitoring to monitor the async 
> writer thread, and report a warning (or fatal) level alert in case the thread 
> is stuck for more than 15 seconds._



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Resolved] (GEODE-10310) Deactivate possibility to retry operation on partitioned region, when member containing primary bucket is restarted

2022-05-19 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac resolved GEODE-10310.
--
Fix Version/s: 1.15.0
   Resolution: Fixed

> Deactivate possibility to retry operation on partitioned region, when member 
> containing primary bucket is restarted
> ---
>
> Key: GEODE-10310
> URL: https://issues.apache.org/jira/browse/GEODE-10310
> Project: Geode
>  Issue Type: Improvement
>  Components: regions
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> In case a client is putting data into a partitioned region and a server is 
> restarted, add an option not to retry the operation when the member holding 
> the primary bucket is restarted. In that case, the operation will be aborted 
> and the client notified, instead of waiting for the primary bucket to be 
> reallocated.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (GEODE-10310) Deactivate possibility to retry operation on partitioned region, when member containing primary bucket is restarted

2022-05-18 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac updated GEODE-10310:
-
Affects Version/s: (was: 1.14.4)

> Deactivate possibility to retry operation on partitioned region, when member 
> containing primary bucket is restarted
> ---
>
> Key: GEODE-10310
> URL: https://issues.apache.org/jira/browse/GEODE-10310
> Project: Geode
>  Issue Type: Improvement
>  Components: regions
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: pull-request-available
>
> In case a client is putting data into a partitioned region and a server is 
> restarted, add an option not to retry the operation when the member holding 
> the primary bucket is restarted. In that case, the operation will be aborted 
> and the client notified, instead of waiting for the primary bucket to be 
> reallocated.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (GEODE-10310) Deactivate possibility to retry operation on partitioned region, when member containing primary bucket is restarted

2022-05-13 Thread Mario Ivanac (Jira)
Mario Ivanac created GEODE-10310:


 Summary: Deactivate possibility to retry operation on partitioned 
region, when member containing primary bucket is restarted
 Key: GEODE-10310
 URL: https://issues.apache.org/jira/browse/GEODE-10310
 Project: Geode
  Issue Type: Improvement
  Components: regions
Affects Versions: 1.14.4
Reporter: Mario Ivanac


In case a client is putting data into a partitioned region and a server is 
restarted, add an option not to retry the operation when the member holding the 
primary bucket is restarted. In that case, the operation will be aborted and the 
client notified, instead of waiting for the primary bucket to be reallocated.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Assigned] (GEODE-10310) Deactivate possibility to retry operation on partitioned region, when member containing primary bucket is restarted

2022-05-13 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac reassigned GEODE-10310:


Assignee: Mario Ivanac

> Deactivate possibility to retry operation on partitioned region, when member 
> containing primary bucket is restarted
> ---
>
> Key: GEODE-10310
> URL: https://issues.apache.org/jira/browse/GEODE-10310
> Project: Geode
>  Issue Type: Improvement
>  Components: regions
>Affects Versions: 1.14.4
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>
> In case a client is putting data into a partitioned region and a server is 
> restarted, add an option not to retry the operation when the member holding 
> the primary bucket is restarted. In that case, the operation will be aborted 
> and the client notified, instead of waiting for the primary bucket to be 
> reallocated.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Comment Edited] (GEODE-9484) Data inconsistency in replicated region with 3 or more servers, and one server is down

2022-05-05 Thread Mario Ivanac (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-9484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17532128#comment-17532128
 ] 

Mario Ivanac edited comment on GEODE-9484 at 5/5/22 7:44 PM:
-

It seems that in some scenarios an exception without a message is thrown, and 
due to that, when checking the content of the message, we have an NPE.


was (Author: mivanac):
It seems that some scenario, exception without message is thrown, an due to 
that, when checking content of message, we have NPE.

> Data inconsistency in replicated region with 3 or more servers, and one 
> server is down 
> ---
>
> Key: GEODE-9484
> URL: https://issues.apache.org/jira/browse/GEODE-9484
> Project: Geode
>  Issue Type: Improvement
>  Components: client/server, regions
>Affects Versions: 1.13.0
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> We have configured a replicated region with 3 or more servers, and the client 
> is configured with a read timeout set to a value the same as or smaller than 
> the member timeout.
> If, while the client is putting data into the region, one of the replicated 
> servers is shut down, data inconsistency is observed.
>  
> We see that part of the data is written on the server connected to the 
> client, but it is missing on the remaining replicated servers.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (GEODE-10280) Additional info in Server Status command

2022-05-05 Thread Mario Ivanac (Jira)
Mario Ivanac created GEODE-10280:


 Summary: Additional info in Server Status command
 Key: GEODE-10280
 URL: https://issues.apache.org/jira/browse/GEODE-10280
 Project: Geode
  Issue Type: Improvement
  Components: gfsh
Reporter: Mario Ivanac


Update "status server" command to additionally print status Message if present 
in state "starting".

This could be useful for cases when server is stuck in state "starting" due to 
waiting on offline member with newest information.
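
A rough sketch of where the extra output would be seen, assuming a locator on localhost[10334] and a server named server1 (placeholder names, not from this ticket):

{code}
# Query a server that is still starting; the proposal is that the status
# message (e.g. waiting on an offline member) is printed alongside the state.
gfsh -e "connect --locator=localhost[10334]" -e "status server --name=server1"

# The offline variant inspects the server's working directory directly.
gfsh -e "status server --dir=server1"
{code}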



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Assigned] (GEODE-10280) Additional info in Server Status command

2022-05-05 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac reassigned GEODE-10280:


Assignee: Mario Ivanac

> Additional info in Server Status command
> 
>
> Key: GEODE-10280
> URL: https://issues.apache.org/jira/browse/GEODE-10280
> Project: Geode
>  Issue Type: Improvement
>  Components: gfsh
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>
> Update "status server" command to additionally print status Message if 
> present in state "starting".
> This could be useful for cases when server is stuck in state "starting" due 
> to waiting on offline member with newest information.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (GEODE-9484) Data inconsistency in replicated region with 3 or more servers, and one server is down

2022-05-05 Thread Mario Ivanac (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-9484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17532128#comment-17532128
 ] 

Mario Ivanac commented on GEODE-9484:
-

It seems that in some scenarios an exception without a message is thrown, and 
due to that, when checking the content of the message, we get an NPE.

> Data inconsistency in replicated region with 3 or more servers, and one 
> server is down 
> ---
>
> Key: GEODE-9484
> URL: https://issues.apache.org/jira/browse/GEODE-9484
> Project: Geode
>  Issue Type: Improvement
>  Components: client/server, regions
>Affects Versions: 1.13.0
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> We have configured a replicated region with 3 or more servers, and the client 
> is configured with a read timeout set to a value the same as or smaller than 
> the member timeout.
> If, while the client is putting data into the region, one of the replicated 
> servers is shut down, data inconsistency is observed.
>  
> We see that part of the data is written on the server connected to the 
> client, but it is missing on the remaining replicated servers.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Reopened] (GEODE-9484) Data inconsistency in replicated region with 3 or more servers, and one server is down

2022-05-05 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-9484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac reopened GEODE-9484:
-

> Data inconsistency in replicated region with 3 or more servers, and one 
> server is down 
> ---
>
> Key: GEODE-9484
> URL: https://issues.apache.org/jira/browse/GEODE-9484
> Project: Geode
>  Issue Type: Improvement
>  Components: client/server, regions
>Affects Versions: 1.13.0
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> We have configured a replicated region with 3 or more servers, and the client 
> is configured with a read timeout set to a value the same as or smaller than 
> the member timeout.
> If, while the client is putting data into the region, one of the replicated 
> servers is shut down, data inconsistency is observed.
>  
> We see that part of the data is written on the server connected to the 
> client, but it is missing on the remaining replicated servers.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (GEODE-10277) Exception thrown when checking gatewaySender EventQueueSize, while restarting gateway sender with clean queue option

2022-05-04 Thread Mario Ivanac (Jira)
Mario Ivanac created GEODE-10277:


 Summary: Exception thrown when checking gatewaySender 
EventQueueSize, while restarting gateway sender with clean queue option
 Key: GEODE-10277
 URL: https://issues.apache.org/jira/browse/GEODE-10277
 Project: Geode
  Issue Type: Bug
  Components: statistics
Reporter: Mario Ivanac


In case we are checking EventQueueSize on a server with a full parallel gateway 
sender queue, and the gateway sender is restarted with the --cleanqueue option, 
a NullPointerException occurs in the JMX client.
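
A minimal sketch of the triggering sequence, assuming a sender named sender1 and assuming gfsh's clean-queue option is spelled --clean-queues (the ticket only says "--cleanqueue"); a JMX client keeps polling GatewaySenderMXBean.EventQueueSize while this runs:

{code}
# Restart the parallel sender and drop its queued events while the queue-size
# attribute is being read.
gfsh -e "connect --locator=localhost[10334]" \
     -e "stop gateway-sender --id=sender1" \
     -e "start gateway-sender --id=sender1 --clean-queues"
{code}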



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Assigned] (GEODE-10277) Exception thrown when checking gatewaySender EventQueueSize, while restarting gateway sender with clean queue option

2022-05-04 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac reassigned GEODE-10277:


Assignee: Mario Ivanac

> Exception thrown when checking gatewaySender EventQueueSize, while restarting 
> gateway sender with clean queue option
> 
>
> Key: GEODE-10277
> URL: https://issues.apache.org/jira/browse/GEODE-10277
> Project: Geode
>  Issue Type: Bug
>  Components: statistics
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: needsTriage
>
> In case we are checking EventQueueSize on a server with a full parallel 
> gateway sender queue, and the gateway sender is restarted with the 
> --cleanqueue option, a NullPointerException occurs in the JMX client.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Resolved] (GEODE-9484) Data inconsistency in replicated region with 3 or more servers, and one server is down

2022-05-02 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-9484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac resolved GEODE-9484.
-
Fix Version/s: 1.15.0
   Resolution: Fixed

> Data inconsistency in replicated region with 3 or more servers, and one 
> server is down 
> ---
>
> Key: GEODE-9484
> URL: https://issues.apache.org/jira/browse/GEODE-9484
> Project: Geode
>  Issue Type: Improvement
>  Components: client/server, regions
>Affects Versions: 1.13.0
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> We have configured a replicated region with 3 or more servers, and the client 
> is configured with a read timeout set to a value the same as or smaller than 
> the member timeout.
> If, while the client is putting data into the region, one of the replicated 
> servers is shut down, data inconsistency is observed.
>  
> We see that part of the data is written on the server connected to the 
> client, but it is missing on the remaining replicated servers.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Resolved] (GEODE-10266) CI Failure: SeveralGatewayReceiversWithSamePortAndHostnameForSendersTest > testPingsToReceiversWithSamePortAndHostnameForSendersUseOnlyOneMoreConnection Failed

2022-05-02 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac resolved GEODE-10266.
--
Resolution: Fixed

> CI Failure: SeveralGatewayReceiversWithSamePortAndHostnameForSendersTest > 
> testPingsToReceiversWithSamePortAndHostnameForSendersUseOnlyOneMoreConnection 
> Failed
> ---
>
> Key: GEODE-10266
> URL: https://issues.apache.org/jira/browse/GEODE-10266
> Project: Geode
>  Issue Type: Bug
>Affects Versions: 1.15.0
>Reporter: Vince Ford
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: pull-request-available
>
> F : acceptance-test-openjdk8
> > Task :geode-assembly:acceptanceTest
> SeveralGatewayReceiversWithSamePortAndHostnameForSendersTest > 
> testPingsToReceiversWithSamePortAndHostnameForSendersUseOnlyOneMoreConnection 
> FAILED
> org.awaitility.core.ConditionTimeoutException: Assertion condition 
> defined as a lambda expression in 
> org.apache.geode.cache.wan.SeveralGatewayReceiversWithSamePortAndHostnameForSendersTest
>  that uses org.apache.geode.test.dunit.VM, 
> org.apache.geode.test.dunit.VMjava.lang.String 
> expected: 4
>  but was: 3 within 5 minutes.
> at 
> org.awaitility.core.ConditionAwaiter.await(ConditionAwaiter.java:167)
> at 
> org.awaitility.core.AssertionCondition.await(AssertionCondition.java:119)
> at 
> org.awaitility.core.AssertionCondition.await(AssertionCondition.java:31)
> at 
> org.awaitility.core.ConditionFactory.until(ConditionFactory.java:985)
> at 
> org.awaitility.core.ConditionFactory.untilAsserted(ConditionFactory.java:769)
> at 
> org.apache.geode.cache.wan.SeveralGatewayReceiversWithSamePortAndHostnameForSendersTest.testPingsToReceiversWithSamePortAndHostnameForSendersUseOnlyOneMoreConnection(SeveralGatewayReceiversWithSamePortAndHostnameForSendersTest.java:261)
> Caused by:
> java.util.concurrent.TimeoutException
> at java.util.concurrent.FutureTask.get(FutureTask.java:205)
> at 
> org.awaitility.core.Uninterruptibles.getUninterruptibly(Uninterruptibles.java:101)
> at 
> org.awaitility.core.Uninterruptibles.getUninterruptibly(Uninterruptibles.java:81)
> at 
> org.awaitility.core.ConditionAwaiter.await(ConditionAwaiter.java:103)
> ... 5 more
> 185 tests completed, 1 failed, 4 skipped
> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=  Test Results URI 
> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
> [*http://files.apachegeode-ci.info/builds/apache-develop-main/1.15.0-build.1127/test-results/acceptanceTest/1651109383/*]
> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
> Test report artifacts from this job are available at:
> [*http://files.apachegeode-ci.info/builds/apache-develop-main/1.15.0-build.1127/test-artifacts/1651109383/acceptancetestfiles-openjdk8-1.15.0-build.1127.tgz*]



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Assigned] (GEODE-10266) CI Failure: SeveralGatewayReceiversWithSamePortAndHostnameForSendersTest > testPingsToReceiversWithSamePortAndHostnameForSendersUseOnlyOneMoreConnection Failed

2022-04-28 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac reassigned GEODE-10266:


Assignee: Mario Ivanac

> CI Failure: SeveralGatewayReceiversWithSamePortAndHostnameForSendersTest > 
> testPingsToReceiversWithSamePortAndHostnameForSendersUseOnlyOneMoreConnection 
> Failed
> ---
>
> Key: GEODE-10266
> URL: https://issues.apache.org/jira/browse/GEODE-10266
> Project: Geode
>  Issue Type: Bug
>Affects Versions: 1.15.0
>Reporter: Vince Ford
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: needsTriage
>
> F : acceptance-test-openjdk8
> > Task :geode-assembly:acceptanceTest
> SeveralGatewayReceiversWithSamePortAndHostnameForSendersTest > 
> testPingsToReceiversWithSamePortAndHostnameForSendersUseOnlyOneMoreConnection 
> FAILED
> org.awaitility.core.ConditionTimeoutException: Assertion condition 
> defined as a lambda expression in 
> org.apache.geode.cache.wan.SeveralGatewayReceiversWithSamePortAndHostnameForSendersTest
>  that uses org.apache.geode.test.dunit.VM, 
> org.apache.geode.test.dunit.VMjava.lang.String 
> expected: 4
>  but was: 3 within 5 minutes.
> at 
> org.awaitility.core.ConditionAwaiter.await(ConditionAwaiter.java:167)
> at 
> org.awaitility.core.AssertionCondition.await(AssertionCondition.java:119)
> at 
> org.awaitility.core.AssertionCondition.await(AssertionCondition.java:31)
> at 
> org.awaitility.core.ConditionFactory.until(ConditionFactory.java:985)
> at 
> org.awaitility.core.ConditionFactory.untilAsserted(ConditionFactory.java:769)
> at 
> org.apache.geode.cache.wan.SeveralGatewayReceiversWithSamePortAndHostnameForSendersTest.testPingsToReceiversWithSamePortAndHostnameForSendersUseOnlyOneMoreConnection(SeveralGatewayReceiversWithSamePortAndHostnameForSendersTest.java:261)
> Caused by:
> java.util.concurrent.TimeoutException
> at java.util.concurrent.FutureTask.get(FutureTask.java:205)
> at 
> org.awaitility.core.Uninterruptibles.getUninterruptibly(Uninterruptibles.java:101)
> at 
> org.awaitility.core.Uninterruptibles.getUninterruptibly(Uninterruptibles.java:81)
> at 
> org.awaitility.core.ConditionAwaiter.await(ConditionAwaiter.java:103)
> ... 5 more
> 185 tests completed, 1 failed, 4 skipped
> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=  Test Results URI 
> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
> [*http://files.apachegeode-ci.info/builds/apache-develop-main/1.15.0-build.1127/test-results/acceptanceTest/1651109383/*]
> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
> Test report artifacts from this job are available at:
> [*http://files.apachegeode-ci.info/builds/apache-develop-main/1.15.0-build.1127/test-artifacts/1651109383/acceptancetestfiles-openjdk8-1.15.0-build.1127.tgz*]



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Assigned] (GEODE-10226) Introduce monitoring of async writer thread

2022-04-08 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac reassigned GEODE-10226:


Assignee: Mario Ivanac

> Introduce monitoring of async writer thread
> ---
>
> Key: GEODE-10226
> URL: https://issues.apache.org/jira/browse/GEODE-10226
> Project: Geode
>  Issue Type: Improvement
>  Components: persistence
>Affects Versions: 1.15.0
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>
> _Introduce new (or extend existing) thread monitoring to monitor the async 
> writer thread, and report a warning (or fatal) level alert in case the thread 
> is stuck for more than 15 seconds._



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (GEODE-10226) Introduce monitoring of async writer thread

2022-04-08 Thread Mario Ivanac (Jira)
Mario Ivanac created GEODE-10226:


 Summary: Introduce monitoring of async writer thread
 Key: GEODE-10226
 URL: https://issues.apache.org/jira/browse/GEODE-10226
 Project: Geode
  Issue Type: Improvement
  Components: persistence
Affects Versions: 1.15.0
Reporter: Mario Ivanac


_Introduce new (or extend existing) thread monitoring to monitor the async 
writer thread, and report a warning (or fatal) level alert in case the thread is 
stuck for more than 15 seconds._
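
For context, a sketch of tightening Geode's existing thread-monitor settings to the 15-second limit mentioned above; the property names assume the current thread monitoring feature, which does not yet cover the async writer thread:

{code}
# Check monitored threads every 5 s and report one as stuck after ~15 s.
gfsh -e "start server --name=server1 --J=-Dgemfire.thread-monitor-interval-ms=5000 --J=-Dgemfire.thread-monitor-time-limit-ms=15000"
{code}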



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (GEODE-10020) Improving LiveServerPinger logic

2022-03-30 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac updated GEODE-10020:
-
Description: _When several gateway receivers have the same value for 
hostname-for-senders (for example when running Geode under kubernetes and a 
load balancer balances the load among the remote servers), it has been observed 
that the number of connections in the GW sender's pool used for sending ping 
messages is much greater than the number of dispatcher threads, although in 
this case only one connection could be used_ _(since the destinations have the 
same address and port)._  (was: Add configurable option to gradually activate 
pinging toward destination. This can be accomplish by increasing initial delay 
of each ping task.)

> Improving LiveServerPinger logic
> 
>
> Key: GEODE-10020
> URL: https://issues.apache.org/jira/browse/GEODE-10020
> Project: Geode
>  Issue Type: Improvement
>  Components: wan
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: pull-request-available
>
> _When several gateway receivers have the same value for hostname-for-senders 
> (for example when running Geode under kubernetes and a load balancer balances 
> the load among the remote servers), it has been observed that the number of 
> connections in the GW sender's pool used for sending ping messages is much 
> greater than the number of dispatcher threads, although in this case only one 
> connection could be used_ _(since the destinations have the same address and 
> port)._



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (GEODE-10020) Improving LiveServerPinger logic

2022-03-30 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac updated GEODE-10020:
-
Description: 
_When several gateway receivers have the same value for hostname-for-senders 
(for example when running Geode under kubernetes and a load balancer balances 
the load among the remote servers), it has been observed that the number of 
connections in the GW sender's pool used for sending ping messages is much 
greater than the number of dispatcher threads, although in this case only one 
connection could be used_ _(since the destinations have the same address and 
port)._

 

_For details see RFC._

  was:_When several gateway receivers have the same value for 
hostname-for-senders (for example when running Geode under kubernetes and a 
load balancer balances the load among the remote servers), it has been observed 
that number of connections in GW senders pool used for sending ping message is 
much greater then number of dispatcher threads, although in this case only one 
connection could be used_ _(since destinations have same address and port )._


> Improving LiveServerPinger logic
> 
>
> Key: GEODE-10020
> URL: https://issues.apache.org/jira/browse/GEODE-10020
> Project: Geode
>  Issue Type: Improvement
>  Components: wan
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: pull-request-available
>
> _When several gateway receivers have the same value for hostname-for-senders 
> (for example when running Geode under kubernetes and a load balancer balances 
> the load among the remote servers), it has been observed that the number of 
> connections in the GW sender's pool used for sending ping messages is much 
> greater than the number of dispatcher threads, although in this case only one 
> connection could be used_ _(since the destinations have the same address and 
> port)._
>  
> _For details see RFC._
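
A minimal sketch of the setup described above, with placeholder names: every receiver on the remote site advertises the same load-balancer address through hostname-for-senders, so all ping connections from the sender pool resolve to one address and port.

{code}
# Remote site: receivers behind a load balancer, all advertising one address.
gfsh -e "connect --locator=remote-locator[10334]" \
     -e "create gateway-receiver --hostname-for-senders=geode-wan.example.com --start-port=5000 --end-port=5000"

# Local site: a parallel sender whose pool pings that single address.
gfsh -e "connect --locator=localhost[10334]" \
     -e "create gateway-sender --id=sender1 --parallel=true --remote-distributed-system-id=2"
{code}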



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (GEODE-10020) Improving LiveServerPinger logic

2022-03-30 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac updated GEODE-10020:
-
Summary: Improving LiveServerPinger logic  (was: Improving LiveServerPinger 
logic in )

> Improving LiveServerPinger logic
> 
>
> Key: GEODE-10020
> URL: https://issues.apache.org/jira/browse/GEODE-10020
> Project: Geode
>  Issue Type: Improvement
>  Components: wan
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: pull-request-available
>
> Add configurable option to gradually activate pinging toward destination. 
> This can be accomplished by increasing the initial delay of each ping task.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (GEODE-10020) Improving LiveServerPinger logic in

2022-03-30 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac updated GEODE-10020:
-
Summary: Improving LiveServerPinger logic in   (was: Gradual activation of 
LiveServerPinger for each destination)

> Improving LiveServerPinger logic in 
> 
>
> Key: GEODE-10020
> URL: https://issues.apache.org/jira/browse/GEODE-10020
> Project: Geode
>  Issue Type: Improvement
>  Components: wan
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: pull-request-available
>
> Add configurable option to gradually activate pinging toward destination. 
> This can be accomplished by increasing the initial delay of each ping task.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Resolved] (GEODE-9969) The region name starting with underscore lead to missing disk store after restart

2022-03-25 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac resolved GEODE-9969.
-
Fix Version/s: 1.15.0
   Resolution: Fixed

> The region name starting with underscore lead to missing disk store after 
> restart
> -
>
> Key: GEODE-9969
> URL: https://issues.apache.org/jira/browse/GEODE-9969
> Project: Geode
>  Issue Type: Bug
>  Components: regions
>Affects Versions: 1.12.8, 1.13.6, 1.14.2, 1.15.0
>Reporter: Mario Kevo
>Assignee: Mario Kevo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> The problem occurs when using a region with a name starting with an 
> underscore (allowed by the documentation: 
> [region_naming|https://geode.apache.org/docs/guide/114/basic_config/data_regions/region_naming.html]).
> If we stop one of the members, rename the working dir (including the disk 
> store dir) to some new name, and start a server with a name matching the 
> renamed working dir, we end up with the same disk-store-id both in the listed 
> disk-stores and in the missing disk stores.
> This happens only if we are using a region with an underscore at the 
> beginning.
> Steps to reproduce:
> Run a locator and 4 servers, and create a region with a name starting with an 
> underscore
>  # start locator --name=locator
>  # start server --name=server1 --server-port=40401
>  # start server --name=server2 --server-port=40402
>  # start server --name=server3 --server-port=40403
>  # start server --name=server4 --server-port=40404
>  # create region --name=_test-region --type=PARTITION_REDUNDANT_PERSISTENT 
> --redundant-copies=1 --total-num-buckets=10 --enable-synchronous-disk=false
>  # query --query="select * from /_test-region"
> From another terminal (Kill server and rename working dir)
>  # kill -9 $(cat server4/vf.gf.server.pid)
>  # mv server4/ server5
> {code:java}
> gfsh>list disk-stores
> Member Name |               Member Id                | Disk Store Name | Disk 
> Store ID
> --- | -- | --- | 
> 
> server1     | 192.168.0.145(server1:16916):41001 | DEFAULT         | 
> d5d17b43-4a06-408b-917f-08e5b2533ebe
> server2     | 192.168.0.145(server2:17004):41002 | DEFAULT         | 
> 31d47cb4-718e-4b58-bde3-ae15b4657910
> server3     | 192.168.0.145(server3:17094):41003 | DEFAULT         | 
> f12850c6-a73b-443e-9ee0-87f0819ae6bc
> server5     | 192.168.0.145(server5:17428):41004 | DEFAULT         | 
> 7a552fb3-e43d-4fa8-baa8-f6dc794cbf74
> gfsh>show missing-disk-stores
> Missing Disk Stores
>            Disk Store ID             |     Host      | Directory
>  | - | 
> 
> 7a552fb3-e43d-4fa8-baa8-f6dc794cbf74 | 192.168.0.145 | 
> /home/mkevo/apache-geode-1.15.0-build.0/bin/server4/.
> No missing colocated region found
> {code}
> Start a new server with a name like you rename your working dir from the 
> restarted server.
>  # start server --name=server5 --server-port=40405
> Now we have the following output:
> {code:java}
> gfsh>list disk-stores
> Member Name |   Member Id| Disk Store Name | Disk 
> Store ID
> --- | -- | --- | 
> 
> server1 | 192.168.0.145(server1:16916):41001 | DEFAULT | 
> d5d17b43-4a06-408b-917f-08e5b2533ebe
> server2 | 192.168.0.145(server2:17004):41002 | DEFAULT | 
> 31d47cb4-718e-4b58-bde3-ae15b4657910
> server3 | 192.168.0.145(server3:17094):41003 | DEFAULT | 
> f12850c6-a73b-443e-9ee0-87f0819ae6bc
> server5 | 192.168.0.145(server5:17428):41004 | DEFAULT | 
> 7a552fb3-e43d-4fa8-baa8-f6dc794cbf74
> gfsh>show missing-disk-stores
> Missing Disk Stores
>Disk Store ID | Host  | Directory
>  | - | 
> 
> 7a552fb3-e43d-4fa8-baa8-f6dc794cbf74 | 192.168.0.145 | 
> /home/mkevo/apache-geode-1.15.0-build.0/bin/server4/.
> No missing colocated region found
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Resolved] (GEODE-10105) restarting of gateway sender during dispatching of events, causes duplication of events without indication of possible duplicate

2022-03-18 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac resolved GEODE-10105.
--
Fix Version/s: 1.15.0
   Resolution: Fixed

> restarting of gateway sender during dispatching of events, causes duplication 
> of events without indication of possible duplicate
> ---
>
> Key: GEODE-10105
> URL: https://issues.apache.org/jira/browse/GEODE-10105
> Project: Geode
>  Issue Type: Bug
>  Components: wan
>Affects Versions: 1.14.0
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: needsTriage, pull-request-available
> Fix For: 1.15.0
>
>
> During dispatching of events in a parallel gateway sender, if we stop 
> dispatching and after some time restart dispatching, multiple duplicate 
> events are observed on the receiving side (without the possible-duplicate 
> indication).



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Reopened] (GEODE-8191) MemberMXBeanDistributedTest.testBucketCount fails intermittently

2022-03-17 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac reopened GEODE-8191:
-

> MemberMXBeanDistributedTest.testBucketCount fails intermittently
> 
>
> Key: GEODE-8191
> URL: https://issues.apache.org/jira/browse/GEODE-8191
> Project: Geode
>  Issue Type: Bug
>  Components: jmx, tests
>Affects Versions: 1.14.0, 1.15.0
>Reporter: Kirk Lund
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: flaky, pull-request-available
>
> This appears to be a flaky test related to GEODE-7963 which was resolved by 
> Mario Ivanac so I've assigned the ticket to him.
> {noformat}
> org.apache.geode.management.MemberMXBeanDistributedTest > testBucketCount 
> FAILED
> org.awaitility.core.ConditionTimeoutException: Assertion condition 
> defined as a lambda expression in 
> org.apache.geode.management.MemberMXBeanDistributedTest Expected bucket count 
> is 4000, and actual count is 3750 expected:<3750> but was:<4000> within 5 
> minutes.
> at 
> org.awaitility.core.ConditionAwaiter.await(ConditionAwaiter.java:165)
> at 
> org.awaitility.core.AssertionCondition.await(AssertionCondition.java:119)
> at 
> org.awaitility.core.AssertionCondition.await(AssertionCondition.java:31)
> at 
> org.awaitility.core.ConditionFactory.until(ConditionFactory.java:895)
> at 
> org.awaitility.core.ConditionFactory.untilAsserted(ConditionFactory.java:679)
> at 
> org.apache.geode.management.MemberMXBeanDistributedTest.testBucketCount(MemberMXBeanDistributedTest.java:102)
> Caused by:
> java.lang.AssertionError: Expected bucket count is 4000, and actual 
> count is 3750 expected:<3750> but was:<4000>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:834)
> at org.junit.Assert.assertEquals(Assert.java:645)
> at 
> org.apache.geode.management.MemberMXBeanDistributedTest.lambda$testBucketCount$1(MemberMXBeanDistributedTest.java:107)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Resolved] (GEODE-8191) MemberMXBeanDistributedTest.testBucketCount fails intermittently

2022-03-17 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac resolved GEODE-8191.
-
Resolution: Cannot Reproduce

> MemberMXBeanDistributedTest.testBucketCount fails intermittently
> 
>
> Key: GEODE-8191
> URL: https://issues.apache.org/jira/browse/GEODE-8191
> Project: Geode
>  Issue Type: Bug
>  Components: jmx, tests
>Affects Versions: 1.14.0, 1.15.0
>Reporter: Kirk Lund
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: flaky, pull-request-available
>
> This appears to be a flaky test related to GEODE-7963 which was resolved by 
> Mario Ivanac so I've assigned the ticket to him.
> {noformat}
> org.apache.geode.management.MemberMXBeanDistributedTest > testBucketCount 
> FAILED
> org.awaitility.core.ConditionTimeoutException: Assertion condition 
> defined as a lambda expression in 
> org.apache.geode.management.MemberMXBeanDistributedTest Expected bucket count 
> is 4000, and actual count is 3750 expected:<3750> but was:<4000> within 5 
> minutes.
> at 
> org.awaitility.core.ConditionAwaiter.await(ConditionAwaiter.java:165)
> at 
> org.awaitility.core.AssertionCondition.await(AssertionCondition.java:119)
> at 
> org.awaitility.core.AssertionCondition.await(AssertionCondition.java:31)
> at 
> org.awaitility.core.ConditionFactory.until(ConditionFactory.java:895)
> at 
> org.awaitility.core.ConditionFactory.untilAsserted(ConditionFactory.java:679)
> at 
> org.apache.geode.management.MemberMXBeanDistributedTest.testBucketCount(MemberMXBeanDistributedTest.java:102)
> Caused by:
> java.lang.AssertionError: Expected bucket count is 4000, and actual 
> count is 3750 expected:<3750> but was:<4000>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:834)
> at org.junit.Assert.assertEquals(Assert.java:645)
> at 
> org.apache.geode.management.MemberMXBeanDistributedTest.lambda$testBucketCount$1(MemberMXBeanDistributedTest.java:107)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Resolved] (GEODE-8872) Add client option, to request locators internal host addresses

2022-03-17 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac resolved GEODE-8872.
-
Resolution: Won't Fix

> Add client option, to request locators internal host addresses 
> ---
>
> Key: GEODE-8872
> URL: https://issues.apache.org/jira/browse/GEODE-8872
> Project: Geode
>  Issue Type: New Feature
>  Components: client/server
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: pull-request-available
>
> Additional option for clients, when a pool is created, to request the 
> locators' internal host addresses.
> When sending a LocatorListRequest, in some cases we need to get the internal 
> host addresses.
> To describe the use case:
> When deploying geode, we define hostname-for-clients for the locator. This is 
> an external address used for WAN and external clients.
> If we also deploy some clients in the internal network (for example in the 
> same kubernetes cluster as geode), then for those clients to connect with the 
> locator, communication will go from the internal network to the external one, 
> and then from the external back to the internal. This increases latency, and 
> we should communicate over the shortest path.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Resolved] (GEODE-8191) MemberMXBeanDistributedTest.testBucketCount fails intermittently

2022-03-17 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac resolved GEODE-8191.
-
Resolution: Fixed

Fault not reproduced in the last 2 months.

> MemberMXBeanDistributedTest.testBucketCount fails intermittently
> 
>
> Key: GEODE-8191
> URL: https://issues.apache.org/jira/browse/GEODE-8191
> Project: Geode
>  Issue Type: Bug
>  Components: jmx, tests
>Affects Versions: 1.14.0, 1.15.0
>Reporter: Kirk Lund
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: flaky, pull-request-available
>
> This appears to be a flaky test related to GEODE-7963 which was resolved by 
> Mario Ivanac so I've assigned the ticket to him.
> {noformat}
> org.apache.geode.management.MemberMXBeanDistributedTest > testBucketCount 
> FAILED
> org.awaitility.core.ConditionTimeoutException: Assertion condition 
> defined as a lambda expression in 
> org.apache.geode.management.MemberMXBeanDistributedTest Expected bucket count 
> is 4000, and actual count is 3750 expected:<3750> but was:<4000> within 5 
> minutes.
> at 
> org.awaitility.core.ConditionAwaiter.await(ConditionAwaiter.java:165)
> at 
> org.awaitility.core.AssertionCondition.await(AssertionCondition.java:119)
> at 
> org.awaitility.core.AssertionCondition.await(AssertionCondition.java:31)
> at 
> org.awaitility.core.ConditionFactory.until(ConditionFactory.java:895)
> at 
> org.awaitility.core.ConditionFactory.untilAsserted(ConditionFactory.java:679)
> at 
> org.apache.geode.management.MemberMXBeanDistributedTest.testBucketCount(MemberMXBeanDistributedTest.java:102)
> Caused by:
> java.lang.AssertionError: Expected bucket count is 4000, and actual 
> count is 3750 expected:<3750> but was:<4000>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:834)
> at org.junit.Assert.assertEquals(Assert.java:645)
> at 
> org.apache.geode.management.MemberMXBeanDistributedTest.lambda$testBucketCount$1(MemberMXBeanDistributedTest.java:107)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Resolved] (GEODE-9642) Adding GW sender to already initialized partitioned region is hanging in large cluster

2022-03-17 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-9642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac resolved GEODE-9642.
-
Fix Version/s: 1.15.0
   Resolution: Fixed

> Adding GW sender to already initialized partitioned region is hanging in 
> large cluster
> ---
>
> Key: GEODE-9642
> URL: https://issues.apache.org/jira/browse/GEODE-9642
> Project: Geode
>  Issue Type: Bug
>  Components: regions, wan
>Affects Versions: 1.13.0, 1.14.0
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> We have observed that adding a parallel GW sender to existing (already 
> initialized) partitioned regions hangs.
> When the alter-region command is executed (attaching a GW sender to an 
> initialized region), it hangs in a cluster with more than 20 servers.
> Execution of the command in a cluster with 16 or fewer servers was 
> successful, but if the cluster is expanded to 20 or more, the command hangs.
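
A minimal sketch of the hanging step, with placeholder region and sender names (the partitioned region is created and fully initialized before the sender is attached):

{code}
# Attach an existing parallel sender to an already-initialized partitioned
# region; with 20 or more servers this alter step was observed to hang.
gfsh -e "connect --locator=localhost[10334]" \
     -e "create gateway-sender --id=sender1 --parallel=true --remote-distributed-system-id=2" \
     -e "alter region --name=/partitionedRegion --gateway-sender-id=sender1"
{code}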



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Resolved] (GEODE-9809) Memory leak in PersistentBucketRecoverer

2022-03-16 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-9809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac resolved GEODE-9809.
-
Fix Version/s: 1.16.0
   Resolution: Fixed

> Memory leak in PersistentBucketRecoverer
> 
>
> Key: GEODE-9809
> URL: https://issues.apache.org/jira/browse/GEODE-9809
> Project: Geode
>  Issue Type: Bug
>  Components: persistence
>Affects Versions: 1.14.0
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> When consecutively creating and destroying colocated persistent partitioned 
> regions, a memory leak is observed.
> In our test, in a cluster with 50 servers, we have a leader persistent 
> partitioned region with more than 1000 buckets and 2 colocated persistent 
> regions. If we consecutively create and destroy the child regions, a memory 
> leak is observed.
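
One create/destroy cycle of the colocated children, sketched with placeholder names; repeating this cycle is what exposed the leak:

{code}
# Leader region plus two colocated persistent children, then drop the children.
gfsh -e "connect --locator=localhost[10334]" \
     -e "create region --name=parent --type=PARTITION_REDUNDANT_PERSISTENT --total-num-buckets=1113" \
     -e "create region --name=child1 --type=PARTITION_REDUNDANT_PERSISTENT --colocated-with=parent" \
     -e "create region --name=child2 --type=PARTITION_REDUNDANT_PERSISTENT --colocated-with=parent" \
     -e "destroy region --name=/child1" \
     -e "destroy region --name=/child2"
{code}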



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Resolved] (GEODE-10104) Create parallel gateway sender with dispatcher-threads value greater than 1 is failing

2022-03-15 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac resolved GEODE-10104.
--
Fix Version/s: 1.15.0
   Resolution: Fixed

> Create parallel gateway sender with dispatcher-threads value greater than 1 
> is failing
> --
>
> Key: GEODE-10104
> URL: https://issues.apache.org/jira/browse/GEODE-10104
> Project: Geode
>  Issue Type: Bug
>  Components: gfsh, wan
>Affects Versions: 1.14.0
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: needsTriage, pull-request-available
> Fix For: 1.15.0
>
>
> When creating a parallel gateway sender with the attribute dispatcher-threads 
> greater than 1, the command is rejected with the error "Must specify 
> --order-policy when --dispatcher-threads is larger than 1.".
>  
> But according to documentation:
> You cannot configure the {{order-policy}} for a parallel event queue, because 
> parallel queues cannot preserve event ordering for regions. Only the ordering 
> of events for a given partition (or in a given queue of a distributed region) 
> can be preserved.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (GEODE-10105) restarting of gateway sender during dispatching of events, causes duplication of events without indication of possible duplicate

2022-03-09 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac reassigned GEODE-10105:


Assignee: Mario Ivanac

> restarting of gateway sender during dispatching of events, causes duplication 
> of events without indication of possible duplicate
> ---
>
> Key: GEODE-10105
> URL: https://issues.apache.org/jira/browse/GEODE-10105
> Project: Geode
>  Issue Type: Bug
>  Components: wan
>Affects Versions: 1.14.0
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: needsTriage, pull-request-available
>
> During dispatching of events in a parallel gateway sender, if we stop 
> dispatching and after some time restart dispatching, multiple duplicate 
> events are observed on the receiving side (without the possible-duplicate 
> indication).



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (GEODE-10105) restarting of gateway sender during dispatching of events, causes duplication of events without indication of possible duplicate

2022-03-06 Thread Mario Ivanac (Jira)
Mario Ivanac created GEODE-10105:


 Summary: restarting of gateway sender during dispatching of 
events, causes duplication of events without indication of possible duplicate
 Key: GEODE-10105
 URL: https://issues.apache.org/jira/browse/GEODE-10105
 Project: Geode
  Issue Type: Bug
  Components: wan
Reporter: Mario Ivanac


During dispatching of events in a parallel gateway sender, if we stop 
dispatching and after some time restart dispatching, multiple duplicate events 
are observed on the receiving side (without the possible-duplicate indication).
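
A minimal sketch of the stop/restart sequence around ongoing traffic, with a placeholder sender id:

{code}
# Stop dispatching while puts keep arriving, then resume some time later;
# after the restart the remote site received events again without the
# possible-duplicate flag set.
gfsh -e "connect --locator=localhost[10334]" -e "stop gateway-sender --id=sender1"
# ... region puts continue for a while ...
gfsh -e "connect --locator=localhost[10334]" -e "start gateway-sender --id=sender1"
{code}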



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (GEODE-10105) restarting of gateway sender during dispatching of events, causes duplication of events without indication of possible duplicate

2022-03-06 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac updated GEODE-10105:
-
Affects Version/s: 1.14.0

> restarting of gateway sender during dispatching of events, causes duplication 
> of events without indication of possible duplicate
> ---
>
> Key: GEODE-10105
> URL: https://issues.apache.org/jira/browse/GEODE-10105
> Project: Geode
>  Issue Type: Bug
>  Components: wan
>Affects Versions: 1.14.0
>Reporter: Mario Ivanac
>Priority: Major
>  Labels: needsTriage
>
> During dispatching of events in a parallel gateway sender, if we stop 
> dispatching and after some time restart dispatching, multiple duplicate 
> events are observed on the receiving side (without the possible-duplicate 
> indication).



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (GEODE-10104) Create parallel gateway sender with dispatcher-threads value greater than 1 is failing

2022-03-06 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac reassigned GEODE-10104:


Assignee: Mario Ivanac

> Create parallel gateway sender with dispatcher-threads value greater than 1 
> is failing
> --
>
> Key: GEODE-10104
> URL: https://issues.apache.org/jira/browse/GEODE-10104
> Project: Geode
>  Issue Type: Bug
>  Components: gfsh, wan
>Affects Versions: 1.14.0
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: needsTriage
>
> When creating a parallel gateway sender with the attribute dispatcher-threads 
> greater than 1, the command is rejected with the error "Must specify 
> --order-policy when --dispatcher-threads is larger than 1.".
>  
> But according to documentation:
> You cannot configure the {{order-policy}} for a parallel event queue, because 
> parallel queues cannot preserve event ordering for regions. Only the ordering 
> of events for a given partition (or in a given queue of a distributed region) 
> can be preserved.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (GEODE-10104) Create parallel gateway sender with dispatcher-threads value greater than 1 is failing

2022-03-06 Thread Mario Ivanac (Jira)
Mario Ivanac created GEODE-10104:


 Summary: Create parallel gateway sender with dispatcher-threads 
value greater than 1 is failing
 Key: GEODE-10104
 URL: https://issues.apache.org/jira/browse/GEODE-10104
 Project: Geode
  Issue Type: Bug
  Components: gfsh, wan
Affects Versions: 1.14.0
Reporter: Mario Ivanac


When creating a parallel gateway sender with the attribute dispatcher-threads 
greater than 1, the command is rejected with the error "Must specify 
--order-policy when --dispatcher-threads is larger than 1.".

 

But according to documentation:

You cannot configure the {{order-policy}} for a parallel event queue, because 
parallel queues cannot preserve event ordering for regions. Only the ordering 
of events for a given partition (or in a given queue of a distributed region) 
can be preserved.
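
The command below reproduces the rejection; ids are placeholders. Per the documentation quoted above, a parallel sender should accept more than one dispatcher thread without an order policy, so this command is expected to succeed rather than fail:

{code}
# Rejected with "Must specify --order-policy when --dispatcher-threads is
# larger than 1." even though the sender is parallel.
gfsh -e "connect --locator=localhost[10334]" \
     -e "create gateway-sender --id=sender1 --parallel=true --remote-distributed-system-id=2 --dispatcher-threads=4"
{code}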



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (GEODE-10020) Gradual activation of LiveServerPinger for each destination

2022-02-07 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-10020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac reassigned GEODE-10020:


Assignee: Mario Ivanac

> Gradual activation of LiveServerPinger for each destination
> ---
>
> Key: GEODE-10020
> URL: https://issues.apache.org/jira/browse/GEODE-10020
> Project: Geode
>  Issue Type: Improvement
>  Components: wan
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>
> Add configurable option to gradually activate pinging toward destination. 
> This can be accomplished by increasing the initial delay of each ping task.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (GEODE-10020) Gradual activation of LiveServerPinger for each destination

2022-02-07 Thread Mario Ivanac (Jira)
Mario Ivanac created GEODE-10020:


 Summary: Gradual activation of LiveServerPinger for each 
destination
 Key: GEODE-10020
 URL: https://issues.apache.org/jira/browse/GEODE-10020
 Project: Geode
  Issue Type: Improvement
  Components: wan
Reporter: Mario Ivanac


Add configurable option to gradually activate pinging toward destination. This 
can be accomplished by increasing the initial delay of each ping task.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (GEODE-9997) Improve possible duplicate logic in WAN

2022-01-28 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-9997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac updated GEODE-9997:

Summary: Improve possible duplicate logic in WAN  (was: Improve possible 
duplicate loggic in WAN)

> Improve possible duplicate logic in WAN
> ---
>
> Key: GEODE-9997
> URL: https://issues.apache.org/jira/browse/GEODE-9997
> Project: Geode
>  Issue Type: Improvement
>  Components: wan
>Reporter: Mario Ivanac
>Priority: Major
>
> It has been observed that when a server with a full parallel gw sender queue 
> is restarted, after it is up it dequeues events much more slowly than the 
> other members of the cluster.
> After analysis, we found out that the reason for this is the current logic 
> that marks all events in the queue as possible duplicates.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (GEODE-9997) Improve possible duplicate logic in WAN

2022-01-28 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-9997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac reassigned GEODE-9997:
---

Assignee: Mario Ivanac

> Improve possible duplicate logic in WAN
> ---
>
> Key: GEODE-9997
> URL: https://issues.apache.org/jira/browse/GEODE-9997
> Project: Geode
>  Issue Type: Improvement
>  Components: wan
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>
> It has been observed that when a server with a full parallel gw sender queue 
> is restarted, after it is up it dequeues events much more slowly than the 
> other members of the cluster.
> After analysis, we found out that the reason for this is the current logic 
> that marks all events in the queue as possible duplicates.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (GEODE-9997) Improve possible duplicate loggic in WAN

2022-01-28 Thread Mario Ivanac (Jira)
Mario Ivanac created GEODE-9997:
---

 Summary: Improve possible duplicate loggic in WAN
 Key: GEODE-9997
 URL: https://issues.apache.org/jira/browse/GEODE-9997
 Project: Geode
  Issue Type: Improvement
  Components: wan
Reporter: Mario Ivanac


It has been observed that when a server with a full parallel gw sender queue is 
restarted, after it is up it dequeues events much more slowly than the other 
members of the cluster.

After analysis, we found out that the reason for this is the current logic that 
marks all events in the queue as possible duplicates.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Resolved] (GEODE-9853) Optimize number of ParallelQueueRemovalMessage sent for dropped events in large clusters

2022-01-27 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-9853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac resolved GEODE-9853.
-
Resolution: Fixed

> Optimize number of ParallelQueueRemovalMessage sent for dropped events in 
> large clusters
> 
>
> Key: GEODE-9853
> URL: https://issues.apache.org/jira/browse/GEODE-9853
> Project: Geode
>  Issue Type: Improvement
>  Components: wan
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> In a multi-site cluster (16 or more servers), if we stop the gw sender and 
> continue to put new entries into the region, a large number of 
> ParallelQueueRemovalMessages are observed being sent.
> It seems that this can be optimized, since most of the sent messages are 
> ignored by most of the other servers.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (GEODE-9853) Optimize number of ParallelQueueRemovalMessage sent for dropped events in large clusters

2022-01-27 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-9853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac updated GEODE-9853:

Fix Version/s: 1.16.0

> Optimize number of ParallelQueueRemovalMessage sent for dropped events in 
> large clusters
> 
>
> Key: GEODE-9853
> URL: https://issues.apache.org/jira/browse/GEODE-9853
> Project: Geode
>  Issue Type: Improvement
>  Components: wan
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> In a multi-site cluster (16 or more servers), if we stop the gw sender and 
> continue to put new entries into the region, a large number of 
> ParallelQueueRemovalMessages are observed being sent.
> It seems that this can be optimized, since most of the sent messages are 
> ignored by most of the other servers.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (GEODE-9809) Memory leak in PersistentBucketRecoverer

2022-01-24 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-9809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac updated GEODE-9809:

Component/s: persistence
 (was: memcached)

> Memory leak in PersistentBucketRecoverer
> 
>
> Key: GEODE-9809
> URL: https://issues.apache.org/jira/browse/GEODE-9809
> Project: Geode
>  Issue Type: Bug
>  Components: persistence
>Affects Versions: 1.14.0
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: pull-request-available
>
> When consecutively creating and destroying colocated persistent partitioned 
> regions, a memory leak is observed.
> In our test, in a cluster with 50 servers, we have a leader persistent 
> partitioned region with more than 1000 buckets and 2 colocated persistent 
> regions. If we consecutively create and destroy the child regions, a memory 
> leak is observed.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Resolved] (GEODE-9961) Server restart during processing of incoming batches, causes infinite loop till cache is closed.

2022-01-19 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-9961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac resolved GEODE-9961.
-
Fix Version/s: 1.15.0
   Resolution: Fixed

> Server restart during processing of incoming batches, causes infinite loop 
> till cache is closed.
> 
>
> Key: GEODE-9961
> URL: https://issues.apache.org/jira/browse/GEODE-9961
> Project: Geode
>  Issue Type: Bug
>  Components: wan
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: needsTriage, pull-request-available
> Fix For: 1.15.0
>
>
> Server restart during processing of incoming batches causes an infinite loop 
> until the cache is closed. This happens if, on the originating site, the 
> system property REMOVE_FROM_QUEUE_ON_EXCEPTION is set to false.
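
A sketch of how the originating site ends up in this configuration; the exact property name (gemfire.GatewaySender.REMOVE_FROM_QUEUE_ON_EXCEPTION) is an assumption based on Geode's usual gemfire.* prefixing, not something spelled out in the ticket:

{code}
# Keep failed batches in the sender queue instead of removing them on exception.
gfsh -e "start server --name=server1 --J=-Dgemfire.GatewaySender.REMOVE_FROM_QUEUE_ON_EXCEPTION=false"
{code}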



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (GEODE-9961) Server restart during processing of incoming batches, causes infinite loop till cache is closed.

2022-01-15 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-9961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac updated GEODE-9961:

Description: Server restart during processing of incoming batches causes an 
infinite loop until the cache is closed. This happens if, on the originating 
site, the system property REMOVE_FROM_QUEUE_ON_EXCEPTION is set to false.  
(was: Server restart during processing of incoming batches, causes infinite 
loop till cache is closed.)

> Server restart during processing of incoming batches, causes infinite loop 
> till cache is closed.
> 
>
> Key: GEODE-9961
> URL: https://issues.apache.org/jira/browse/GEODE-9961
> Project: Geode
>  Issue Type: Bug
>  Components: wan
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: needsTriage, pull-request-available
>
> Server restart during processing of incoming batches causes an infinite loop 
> until the cache is closed. This happens if, on the originating site, the 
> system property REMOVE_FROM_QUEUE_ON_EXCEPTION is set to false.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (GEODE-9961) Server restart during processing of incoming batches, causes infinite loop till cache is closed.

2022-01-13 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-9961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac reassigned GEODE-9961:
---

Assignee: Mario Ivanac

> Server restart during processing of incoming batches, causes infinite loop 
> till cache is closed.
> 
>
> Key: GEODE-9961
> URL: https://issues.apache.org/jira/browse/GEODE-9961
> Project: Geode
>  Issue Type: Bug
>  Components: wan
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: needsTriage
>
> Server restart during processing of incoming batches causes an infinite loop 
> until the cache is closed.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (GEODE-9961) Server restart during processing of incoming batches, causes infinite loop till cache is closed.

2022-01-13 Thread Mario Ivanac (Jira)
Mario Ivanac created GEODE-9961:
---

 Summary: Server restart during processing of incoming batches, 
causes infinite loop till cache is closed.
 Key: GEODE-9961
 URL: https://issues.apache.org/jira/browse/GEODE-9961
 Project: Geode
  Issue Type: Bug
  Components: wan
Reporter: Mario Ivanac


Server restart during processing of incoming batches causes an infinite loop 
until the cache is closed.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (GEODE-9768) Consecutive start gateway sender with clean queues hangs

2021-11-29 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-9768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac updated GEODE-9768:

Fix Version/s: 1.15.0

> Consecutive start gateway sender with clean queues hangs
> 
>
> Key: GEODE-9768
> URL: https://issues.apache.org/jira/browse/GEODE-9768
> Project: Geode
>  Issue Type: Bug
>  Components: wan
>Affects Versions: 1.15.0
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> In case we are starting consecutive gw senders with the clean queue option, 
> the 2nd gw sender hangs.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Resolved] (GEODE-9768) Consecutive start gateway sender with clean queues hangs

2021-11-29 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-9768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac resolved GEODE-9768.
-
Resolution: Fixed

> Consecutive start gateway sender with clean queues hangs
> 
>
> Key: GEODE-9768
> URL: https://issues.apache.org/jira/browse/GEODE-9768
> Project: Geode
>  Issue Type: Bug
>  Components: wan
>Affects Versions: 1.15.0
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: pull-request-available
>
> In case we are starting consecutive gw senders with the clean queue option, 
> the 2nd gw sender hangs.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (GEODE-9853) Optimize number of ParallelQueueRemovalMessage sent for dropped events in large clusters

2021-11-26 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-9853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac reassigned GEODE-9853:
---

Assignee: Mario Ivanac

> Optimize number of ParallelQueueRemovalMessage sent for dropped events in 
> large clusters
> 
>
> Key: GEODE-9853
> URL: https://issues.apache.org/jira/browse/GEODE-9853
> Project: Geode
>  Issue Type: Improvement
>  Components: wan
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>
> In a multi-site cluster (16 or more servers), if we stop the gw sender and 
> continue to put new entries into the region, a large number of 
> ParallelQueueRemovalMessages are observed being sent.
> It seems that this can be optimized, since most of the sent messages are 
> ignored by most of the other servers.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (GEODE-9853) Optimize number of ParallelQueueRemovalMessage sent for dropped events in large clusters

2021-11-26 Thread Mario Ivanac (Jira)
Mario Ivanac created GEODE-9853:
---

 Summary: Optimize number of ParallelQueueRemovalMessage sent for 
dropped events in large clusters
 Key: GEODE-9853
 URL: https://issues.apache.org/jira/browse/GEODE-9853
 Project: Geode
  Issue Type: Improvement
  Components: wan
Reporter: Mario Ivanac


In a multi-site cluster (16 or more servers), if we stop the gw sender and 
continue to put new entries into the region, a large number of 
ParallelQueueRemovalMessages are observed being sent.

It seems that this can be optimized, since most of the sent messages are ignored 
by most of the other servers.
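
A minimal sketch of the triggering sequence with placeholder names; the reading of the ticket is that, while the sender is stopped, events from new puts are dropped and removal messages are broadcast so other members can clean their shadow queues:

{code}
# Stop the parallel sender on this site, then keep writing to the region;
# in clusters of 16+ servers this generated heavy ParallelQueueRemovalMessage
# traffic.
gfsh -e "connect --locator=localhost[10334]" \
     -e "stop gateway-sender --id=sender1" \
     -e "put --region=/exampleRegion --key=k1 --value=v1"
{code}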



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

