Thanks Barry.  That makes a lot of sense.  With great power comes great
responsibility... It sounds like we would want to make the CacheListener
asynchronous, adding events to a queue that the application code pulls
from.
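
Something like this rough sketch is what I'm picturing (class and method
names are just placeholders; I'm copying the key/new value out of the
EntryEvent rather than holding onto the event object itself, and leaving
out afterDestroy/afterInvalidate and error handling):

import java.util.AbstractMap;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

import com.gemstone.gemfire.cache.EntryEvent;
import com.gemstone.gemfire.cache.util.CacheListenerAdapter;

// The callback only enqueues a snapshot of the event, so the
// originating operation isn't blocked by our application processing.
public class QueueingCacheListener extends CacheListenerAdapter<String, Object> {

    private final BlockingQueue<Map.Entry<String, Object>> queue =
            new LinkedBlockingQueue<Map.Entry<String, Object>>();

    @Override
    public void afterCreate(EntryEvent<String, Object> event) {
        queue.offer(new AbstractMap.SimpleImmutableEntry<String, Object>(
                event.getKey(), event.getNewValue()));
    }

    @Override
    public void afterUpdate(EntryEvent<String, Object> event) {
        queue.offer(new AbstractMap.SimpleImmutableEntry<String, Object>(
                event.getKey(), event.getNewValue()));
    }

    // Application code drains the queue on its own thread.
    public Map.Entry<String, Object> nextEvent() throws InterruptedException {
        return queue.take();
    }
}

The one thing we'd have to watch is unbounded queue growth if the
consumer falls behind.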


-- Eric

On Mon, Sep 28, 2015 at 10:06 PM, Barry Oglesby <[email protected]> wrote:

> The big difference between a peer and a client is that the peer is a
> member of the distributed system whereas the client is not. This means,
> among other things, that CacheListener callbacks are synchronous with the
> original operation whereas CqListener callbacks are not. When the Trade
> Server peer is started, your application's put performance may degrade
> depending on what is done in the CacheListener callback.
>
> You'll have synchronous replication of data between the server and peer as
> well, but if the client's queue is on a node remote to where the operation
> occurs, then that is also a synchronous replication of data. So, that
> more-or-less balances out.
>
> Also, the health of a Trade Server peer can affect the other distributed
> system members to a greater degree than a client. For example, operations
> being replicated to the Trade Server peer will be impacted if a long GC is
> occurring in it.
>
>
> Barry Oglesby
> GemFire Advanced Customer Engineering (ACE)
> For immediate support please contact Pivotal Support at
> http://support.pivotal.io/
>
>
> On Mon, Sep 28, 2015 at 3:33 PM, Eric Pederson <[email protected]> wrote:
>
>> Thanks for the answers to my previous question about getting a callback
>> if the cluster goes down.  We decided to go with EndpointListener in the
>> short term as we’re still on GemFire 7.0.2 (I forgot to mention that).
>> We’re going to upgrade soon though and then we’ll move to
>> ClientMembershipListener as it’s a public API.
>>
>>
>>
>> I have some related questions – here’s some background:  We have a
>> cluster of GemFire servers and a number of replicated regions.  We have a
>> microservice architecture where all of our applications are publishers for
>> some regions and clients for other regions.  We use CQs for most, if not
>> all, of the client scenarios.  Because of the CQ requirement, all of our
>> applications are clients.
>>
>>
>>
>> In one of these applications (called Trade Server) we would like to avoid
>> having it reload its region in the cluster if the cluster goes down
>> completely and comes back up.  I discussed with my colleagues the
>> possibility of making the Trade Server a peer instead of a client.  It
>> could be a replica for its region, and then it would not be impacted if the
>> main cluster went down.  Then, when the cluster came back up, Trade Server
>> would replicate its data back to it.  The only glitch is that it is a
>> client for other regions.  I told them that instead of using CQs in Trade
>> Server we could use CacheListeners (we're still determining whether any
>> query is more complicated than select * from /otherRegion).  They are
>> hesitant because they are attached to CQs.
>>
>>
>>
>> Does this sound reasonable to you?
>>
>>
>>
>> Something that has caused us a bit of pain in the past is the fact that
>> one JVM can be either a client or a peer, but not both.  And you can’t have
>> multiple instances of ClientCache since it uses statics.  The latter was
>> a problem in our microservices architecture, as each service has its own
>> client API, but each client API can’t have its own ClientCache.  We
>> worked around it by wrapping ClientCache and making the wrapper API a
>> singleton.  But there are still some gotchas, like two services needing
>> different PDX serialization configs.
>>
>>
>>
>> Is that something you have been thinking about fixing for the future?
>> That is, making it so that one JVM can host multiple clients/peers?  With
>> microservices becoming a bigger trend, I think more people will want that.
>>
>>
>>
>> Thanks,
>>
>> -- Eric
>>
>
>
