Hi Ashish,

What version are you running with? There is a JIRA related to multiple pools 
from a single client causing an unknown PDX type error:

https://issues.apache.org/jira/browse/GEODE-6271

I think this JIRA is fixed in 9.8.0 even though the JIRA says it's in progress.

https://gemfire.docs.pivotal.io/98/gemfire/release_notes.html

The relevant code is in ClientTypeRegistration here:

https://github.com/apache/geode/blob/develop/geode-core/src/main/java/org/apache/geode/pdx/internal/ClientTypeRegistration.java#L85

Barry

________________________________
From: aashish choudhary <[email protected]>
Sent: Tuesday, July 21, 2020 12:58 PM
To: [email protected] <[email protected]>
Subject: Re: issue while region close using two pools

I checked this further, and it seems that if we remove the pool details for the secondary locator, everything works fine: no unknown PDX type error on the client. Just wondering if it has something to do with creating two pools. Then I looked at the Geode source code and found this. Here it is trying to get the PDX registry metadata while creating a pool.


@Override
public Pool create(String name) throws CacheException {
  InternalDistributedSystem distributedSystem =
      InternalDistributedSystem.getAnyInstance();
  InternalCache cache = getInternalCache();
  ThreadsMonitoring threadMonitoring = null;
  if (cache != null) {
    threadMonitoring = cache.getDistributionManager().getThreadMonitoring();
    TypeRegistry registry = cache.getPdxRegistry();
    if (registry != null && !attributes.isGateway()) {
      registry.creatingPool();
    }
  }
  return PoolImpl.create(pm, name, attributes, locatorAddresses, distributedSystem,
      cache, threadMonitoring);
}

Barry, by any chance did you face an issue related to Unknown PDX type during your test, or is it possible it could arise because of creating two pools?

With best regards,
Ashish

On Mon, Jul 20, 2020, 4:12 PM aashish choudhary 
<[email protected]<mailto:[email protected]>> wrote:
Thanks Barry. These are really helpful tips. I will probably try the REST option you have suggested.

By calling this, will it clear the PDX registry metadata also?

((InternalClientCache) this.cache).getClientMetadataService().close();


The reason I am asking is that we got this unknown PDX type error on the client side, and no matter what we do it just won't go away, even after restarting the client application many times. And we have PDX data persistence enabled. Even a restart of the Geode cluster did not work.

https://stackoverflow.com/questions/51150105/apache-geode-debug-unknown-pdx-type-2140705

And this actually happened without even switching pools, in the normal case. Would it be possible that, since we create two pools initially, one of the pools is still holding old PDX registry metadata? I understand it remains in memory and a restart should solve the issue, but it didn't.

We were able to solve it somehow by deploying the old version of the client application, hitting Geode, and then redeploying the new version of the client application. That worked, but it is very weird.

With best regards,
Ashish

On Tue, May 12, 2020, 4:32 AM Barry Oglesby 
<[email protected]<mailto:[email protected]>> wrote:
Ashish,

Sorry I haven't responded to this sooner, but I wanted to write a small example 
to prove it works.

I have this scenario:

- 1 client with 1 region
- 2 servers in each of 2 distributed systems
- the client has 2 pools - 1 pointing to each set of servers
- the client also has a thread that switches the region between the pools every 
N seconds using your idea of closing the region and recreating it on the other 
pool
- the client is constantly doing puts

I had some issues with single hop (pr-single-hop-enabled). If I disabled single 
hop, the test worked fine. The client did puts into 1 set of servers, then the 
other.

If I enabled single hop, the client stuck to only 1 set of servers. I found I had to clear the ClientMetadataService after closing the region, like:

((InternalClientCache) this.cache).getClientMetadataService().close();

I had this trouble because all the servers were up the whole time.
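
Putting it together, the whole switch in my test looks roughly like this (just a sketch: switchPool is an illustrative helper name, I'm using raw Region types like above, and the InternalClientCache cast is internal API that could change):

// Sketch: close the region, clear the single-hop metadata, then
// recreate the region on the other pool. switchPool is not a Geode API.
private Region switchPool(ClientCache cache, Region region, String newPoolName) {
  String regionName = region.getName();
  region.close();
  // Required with pr-single-hop-enabled; otherwise the client keeps
  // routing operations to the old set of servers.
  ((InternalClientCache) cache).getClientMetadataService().close();
  return cache.createClientRegionFactory(ClientRegionShortcut.PROXY)
      .setPoolName(newPoolName)
      .create(regionName);
}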

Here are a few other comments:

- Automating this is tricky. If there are network issues between one client and the servers, but not the other clients, then that client may switch over when you don't want it to.
- Assuming there is a WAN connection between the sites, you'll have to handle events in the queue that don't get sent before the clients switch. Depending on the ops they are doing, there might be stale data.
- Automated failback is also tricky. It's better to control it.
- If the client threads are active when the switch occurs, they'll have to handle the RegionDestroyedExceptions that occur when the region is closed out from under them.
- I don't know if your clients are REST-enabled, but a REST API to switch pools would be pretty cool (see the sketch below).
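
Just to sketch what I mean (JDK-only; switchToPool(String) is a hypothetical method that does the close-and-recreate described above, not a Geode API):

// Minimal endpoint: GET /switchPool?pool=B triggers a switch.
// Uses com.sun.net.httpserver from the JDK.
void startSwitchEndpoint() throws IOException {
  HttpServer server = HttpServer.create(new InetSocketAddress(7070), 0);
  server.createContext("/switchPool", exchange -> {
    String query = exchange.getRequestURI().getQuery(); // e.g. "pool=B"
    String poolName = query.substring(query.indexOf('=') + 1);
    switchToPool(poolName);
    byte[] body = ("switched to " + poolName + "\n").getBytes();
    exchange.sendResponseHeaders(200, body.length);
    try (OutputStream os = exchange.getResponseBody()) {
      os.write(body);
    }
  });
  server.start();
}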

Thanks,
Barry Oglesby



On Wed, May 6, 2020 at 9:40 AM aashish choudhary 
<[email protected]<mailto:[email protected]>> wrote:
Thanks Barry, you were right: I was not properly resetting the pool. It worked after resetting it properly. Some more questions.

We have the same certs on the prod and prod-parallel clusters, so we won't have to switch certs in the future. We don't have auth/authz enabled as of now; maybe we will need to change a few things in that case. Do you see any challenges with this use case?

With best regards,
Ashish

On Wed, May 6, 2020, 1:14 AM Barry Oglesby 
<[email protected]<mailto:[email protected]>> wrote:
Ashish,

The keySetOnServer call is just an example of doing something on the server.

In your example, the switchToSecondary and switchToPrimary methods are 
recreating the regions but not resetting the pointers to region1 and region2. 
They are still pointing to the previous (closed) instances of the Region.

Something like this should work:

this.region1 = clientCache.createClientRegionFactory(ClientRegionShortcut.PROXY)
  .setPoolName("B")
  .create("region1");

Thanks,
Barry Oglesby



On Tue, May 5, 2020 at 11:14 AM aashish choudhary 
<[email protected]<mailto:[email protected]>> wrote:
ClientCache clientCache = new ClientCacheFactory().create();

PoolFactory poolFactory = PoolManager.createFactory();
poolFactory.addLocator("Locator11", 10334);
poolFactory.addLocator("Locator22", 10334);
poolFactory.create("A");

poolFactory = PoolManager.createFactory();
poolFactory.addLocator("Locator33", 10334);
poolFactory.addLocator("Locator44", 10334);
poolFactory.create("B");

Region region11 = clientCache.createClientRegionFactory(ClientRegionShortcut.PROXY)
    .setPoolName("A")
    .create("region1");
Region region22 = clientCache.createClientRegionFactory(ClientRegionShortcut.PROXY)
    .setPoolName("A")
    .create("region2");

Region region33 = clientCache.createClientRegionFactory(ClientRegionShortcut.PROXY)
    .setPoolName("B")
    .create("region1");
Region region44 = clientCache.createClientRegionFactory(ClientRegionShortcut.PROXY)
    .setPoolName("B")
    .create("region2");
In the normal scenario I don't switch pools and get data from pool A only. To forcefully switch pools at runtime, I have created and registered an MBean with a flag for forcing the switch. For example, if the MBean attribute secondaryPool is set to true, then before fetching data the client should call switchToSecondary, and likewise for switching back to the primary pool.
Sample code:

if (isPrimarypool) {
  switchToPrimary();
}
if (isSecondarypool) {
  switchToSecondary();
}
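
The MBean itself is just a flag holder, something like this (the names here are illustrative; the interface and class would be separate files):

// Illustrative MBean: a boolean flag flipped via JMX to force the
// client onto the secondary pool.
public interface PoolSwitchMBean {
  boolean isSecondaryPool();
  void setSecondaryPool(boolean secondaryPool);
}

public class PoolSwitch implements PoolSwitchMBean {
  private volatile boolean secondaryPool;

  @Override
  public boolean isSecondaryPool() {
    return secondaryPool;
  }

  @Override
  public void setSecondaryPool(boolean secondaryPool) {
    this.secondaryPool = secondaryPool;
  }
}

// Registered at client startup:
// ManagementFactory.getPlatformMBeanServer().registerMBean(
//     new PoolSwitch(), new ObjectName("geode.client:type=PoolSwitch"));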

In the switchToPrimary() method, what I am trying is something like this:

switchToPrimary() {
  closeRegion();
  // then re-create the regions, something like this
  clientCache.createClientRegionFactory(ClientRegionShortcut.PROXY)
      .setPoolName("A")
      .create("region1");
  clientCache.createClientRegionFactory(ClientRegionShortcut.PROXY)
      .setPoolName("A")
      .create("region2");
}
In the switchToSecondary() method, what I am trying is something like this:

switchToSecondary() {
  closeRegion();
  // then create the regions again
  clientCache.createClientRegionFactory(ClientRegionShortcut.PROXY)
      .setPoolName("B")
      .create("region1");
  clientCache.createClientRegionFactory(ClientRegionShortcut.PROXY)
      .setPoolName("B")
      .create("region2");
}

closeRegion() {
  region1.close();
  region2.close();
}
Now the problem is: if I switch from the primary pool to the secondary pool (by changing the MBean attribute value), it works. But when I try to switch back to the primary, it says the region is destroyed.

Let me know if you have any doubts about my approach. Basically, I am trying to follow what is explained in the link I shared earlier. As per that, I have to close and re-create a region whenever I need to switch. But I can't get a handle to the regions per pool so that I can close them pool-wise. It would have been easier that way, but as you said, a region can't belong to two pools.

Also, I have looked at your code, and I am not sure why you have to call the keySetOnServer() method.



With Best Regards,
Ashish


On Tue, May 5, 2020 at 11:03 PM Barry Oglesby 
<[email protected]<mailto:[email protected]>> wrote:
Yes, you can't have a single region pointing at 2 pools. What I posted is the 
way I've done it in the past. I just tried it, and it still works. Maybe if you 
post your full code, I can take a look.

Thanks,
Barry Oglesby



On Tue, May 5, 2020 at 10:19 AM aashish choudhary 
<[email protected]<mailto:[email protected]>> wrote:
Hi Barry,

Thanks for the response.

I am actually calling region.close(), which I believe just calls localDestroyRegion. The problem is that the region names are the same in both clusters, so how do I close a region in one cluster before switching to the other? I just want to switch pools at runtime based on my use case.

With best regards,
Ashish

On Tue, May 5, 2020, 10:25 PM Barry Oglesby 
<[email protected]<mailto:[email protected]>> wrote:
Ashish,

You're probably using region.destroyRegion to destroy the client region. If so, 
that also destroys the region on the server.

You should be able to use localDestroyRegion to just destroy the client region 
like:

// Get the region's key set using pool 1
Region pool1Region = createRegion(this.pool1.getName());
Set pool1Keys = pool1Region.keySetOnServer();
pool1Region.localDestroyRegion();

// Get the region's key set using pool 2
Region pool2Region = createRegion(this.pool2.getName());
Set pool2Keys = pool2Region.keySetOnServer();
pool2Region.localDestroyRegion();

The createRegion method is:

private Region createRegion(String poolName) {
  return this.cache
    .createClientRegionFactory(ClientRegionShortcut.PROXY)
    .setPoolName(poolName)
    .create(this.regionName);
}

Thanks,
Barry Oglesby



On Tue, May 5, 2020 at 9:37 AM aashish choudhary 
<[email protected]<mailto:[email protected]>> wrote:
Hi,

I am sort of trying to implement failover/failback for a Geode client. I have followed the approach mentioned here: https://community.pivotal.io/s/article/Configure-client-to-use-several-clusters. I understand it's for GemFire, but I am sharing it just for reference, as the use case is the same and applies to Geode as well. I hope that's ok.

This is what I am trying to do: creating two pools, but in my case I have the same region names in both cluster A and cluster B.
ClientCache clientCache = new ClientCacheFactory().create();

PoolFactory poolFactory = PoolManager.createFactory();
poolFactory.addLocator("Locator1", 10334);
poolFactory.addLocator("Locator2", 10334);
poolFactory.create("A");

poolFactory = PoolManager.createFactory();
poolFactory.addLocator("Locator1", 10334);
poolFactory.addLocator("Locator2", 10334);
poolFactory.create("B");

Region region11 = clientCache.createClientRegionFactory(ClientRegionShortcut.PROXY)
    .setPoolName("A")
    .create("region1");

Region region22 = clientCache.createClientRegionFactory(ClientRegionShortcut.PROXY)
    .setPoolName("A")
    .create("region2");

Region region33 = clientCache.createClientRegionFactory(ClientRegionShortcut.PROXY)
    .setPoolName("B")
    .create("region1");

Region region44 = clientCache.createClientRegionFactory(ClientRegionShortcut.PROXY)
    .setPoolName("B")
    .create("region2");

So if a client needs to connect to regions with the same name in two different distributed systems, it must close or destroy the region and recreate it with the changed pool configuration each time it wants to change between the two distributed systems.
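
In code, the switch for one region would be something like this (sketch):

// To move region1 from cluster A to cluster B: close the client-side
// region first, then recreate it with pool B.
region11.close();
region11 = clientCache.createClientRegionFactory(ClientRegionShortcut.PROXY)
    .setPoolName("B")
    .create("region1");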

I am doing the same but getting a region destroyed exception instead. I am using the Geode Java client, not Spring Data Geode.

How to do this? Please help.


With Best Regards,
Ashish
