Hi,

I've been looking into the following issue:
"Whenever performing a stress test on a Geode cluster and forcefully killing 
one of the members, all the threads in the application get stuck".

To give more context, these are the conditions under which the test is performed:

  *   A cluster is deployed with:
     *   2 locators.
     *   3 servers.
  *   2 partitioned regions are created and collocated with a third one (from 
now on called the "anchor").
     *   Also, regions have a single redundant copy configured.
     *   Enabling persistence on these regions does not affect the test 
outcome.
     *   Note that we've configured a PartitionResolver for both of these 
regions (a sketch is included right after the code example below).
  *   A geode-native test application is spun up with 20 threads, each sending 
one put request to each of the partitioned regions (except for the "anchor"), 
all within a single transaction. See the example below illustrating the kind 
of traffic sent:
#include <cstdlib>
#include <ctime>
#include <string>
#include <geode/Cache.hpp>
#include <geode/CacheTransactionManager.hpp>
#include <geode/Region.hpp>

using namespace apache::geode::client;

// Each worker thread: one put per partitioned region, within a transaction.
void thread(Cache& cache) {
  auto txManager = cache.getCacheTransactionManager();
  while (true) {
    auto commonPrefix = std::to_string(std::time(nullptr));
    txManager->begin();
    for (const auto& regionName : {"region_a", "region_b"}) {
      auto key = "key-" + commonPrefix + "|" + std::to_string(std::rand());
      auto value = std::to_string(std::rand());
      cache.getRegion(regionName)->put(key, value);
    }
    txManager->commit();
  }
}
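
As for the PartitionResolver mentioned above, the snippet does not show it, so 
here is a minimal sketch of one consistent with the key format 
"key-<timestamp>|<random>": it routes on the shared timestamp, so the puts 
issued within one transaction resolve to the same bucket across the collocated 
regions (the class name and the prefix parsing are illustrative assumptions):

#include <geode/CacheableString.hpp>
#include <geode/EntryEvent.hpp>
#include <geode/PartitionResolver.hpp>

using namespace apache::geode::client;

class CommonPrefixResolver : public PartitionResolver {
 public:
  const std::string& getName() override {
    static const std::string name = "CommonPrefixResolver";
    return name;
  }

  // Route on the "<timestamp>" part of "key-<timestamp>|<random>".
  std::shared_ptr<CacheableKey> getRoutingObject(
      const EntryEvent& opDetails) override {
    auto key = opDetails.getKey()->toString();
    auto start = key.find('-') + 1;
    auto end = key.find('|');
    return CacheableString::create(key.substr(start, end - start));
  }
};

Note that the server-side regions also need a matching resolver so that 
client-side and server-side routing agree.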

The test consists of:

  *   Spinning up the cluster.
  *   Running the application.
  *   Forcefully restarting one of the servers (from now on called "server-0"): 
it is killed with kill -KILL <PID> and then started up again with gfsh.

The expectation for this test is that, given that the data has a redundant 
copy and 2 servers are up and running at all times, writes should be handled 
smoothly.
However, what actually happens is that all application threads end up stuck.

So, in the process of troubleshooting, we noticed that there were several 
deadlocks in the geode-native client, which resulted in the following PRs:

  *   https://github.com/apache/geode-native/pull/660
  *   https://github.com/apache/geode-native/pull/676
  *   https://github.com/apache/geode-native/pull/699

After solving all the deadlocks on the client side, we were still seeing the 
same outcome in the test.
So, after more digging, this is what we noticed:

  *   Once the server is killed, geode-native removes the server endpoint from 
the ClientMetadataService.
  *   But given that put requests can only be executed on the server holding 
the primary copy, these requests end up being proxied towards the server that 
was just killed.
  *   As it takes some time for the cluster members to notice that other 
members are down, requests proxied through "healthy" servers take longer than 
expected: somewhere between 5 and 30 seconds.
  *   So, in the end, all the threads are stuck for this interval of time 
because the servers they are contacting are, in turn, contacting "server-0" 
(see the sketch right after this list).
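
This does not address the proxying itself, but for completeness, here is a 
minimal sketch of how the client-side wait could at least be bounded via pool 
settings (the locator address, timeout, and retry values are placeholders, and 
whether bounded retries actually reach the redundant copy before the 
membership view updates is exactly the open question here):

#include <chrono>
#include <geode/CacheFactory.hpp>
#include <geode/PoolManager.hpp>

using namespace apache::geode::client;

int main() {
  auto cache = CacheFactory().create();

  // Cap how long a single operation may block on an unresponsive server,
  // instead of hanging for the whole membership-detection window.
  cache.getPoolManager()
      .createFactory()
      .addLocator("localhost", 10334)           // placeholder locator
      .setReadTimeout(std::chrono::seconds(5))  // placeholder timeout
      .setRetryAttempts(2)                      // placeholder retries
      .create("default");

  cache.close();
}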

For the sake of clarity, I've attached a diagram illustrating the test 
scenario. Let me know if you need any additional clarification to understand 
the test itself.

And now, my questions here are:

  *   Have you encountered this behavior before? And if so, how did you solve 
it?
  *   Is this expected behavior? And if so, what's the point of having a 
cluster of several members with partitioned redundant data?

Sorry for the long read, and thanks for any help you can offer.

BR,
Mario.
