Hi Anil - thanks, I will try that and get back to you.

-- Eric

On Mon, Oct 12, 2015 at 6:21 PM, Anilkumar Gingade <[email protected]>
wrote:

> Are you looking at connecting a client to multiple environments (servers in
> dev, UAT, prod...) and getting the events?  If that is the case, one option
> to try is to create client connection pools to the different environments
> and register CQs using those pools.  (I haven't tried this, but I think
> it's doable.)
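>
> Roughly, a sketch of that setup (untested; the locator endpoints and
> pool/CQ names below are made up):
>
>     import com.gemstone.gemfire.cache.client.*;
>     import com.gemstone.gemfire.cache.query.*;
>
>     public class MultiEnvCqs {
>         public static void main(String[] args) throws Exception {
>             ClientCache cache = new ClientCacheFactory().create();
>
>             // One pool per environment; subscriptions are required for CQs.
>             Pool devPool = PoolManager.createFactory()
>                 .addLocator("dev-locator", 10334)
>                 .setSubscriptionEnabled(true)
>                 .create("devPool");
>             Pool uatPool = PoolManager.createFactory()
>                 .addLocator("uat-locator", 10334)
>                 .setSubscriptionEnabled(true)
>                 .create("uatPool");
>
>             CqAttributesFactory caf = new CqAttributesFactory();
>             caf.addCqListener(new CqListener() {
>                 public void onEvent(CqEvent e) {
>                     System.out.println(e.getQueryOperation() + " " + e.getKey());
>                 }
>                 public void onError(CqEvent e) {
>                     System.err.println("CQ error: " + e);
>                 }
>                 public void close() {}
>             });
>
>             // Each pool has its own QueryService, so the same query can be
>             // registered once per environment.
>             devPool.getQueryService()
>                 .newCq("tradesDev", "select * from /Trade", caf.create())
>                 .execute();
>             uatPool.getQueryService()
>                 .newCq("tradesUat", "select * from /Trade", caf.create())
>                 .execute();
>         }
>     }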
>
> -Anil..
>
> On Mon, Oct 12, 2015 at 1:44 PM, Eric Pederson <[email protected]> wrote:
>
>> Hi all -
>>
>> I logged https://issues.apache.org/jira/browse/GEODE-395 as a feature
>> request to support multiple Caches per JVM.  One thing I forgot in my
>> earlier email, and which is probably the biggest pain point with the
>> current limitation, is the inability to connect to multiple environments
>> at the same time.  For example, we will want to connect to UAT for most
>> services, but we'll want to point one service in particular to Dev for
>> debugging, or maybe point it to Prod to get some live data.
>>
>> Thanks,
>>
>>
>> -- Eric
>>
>> On Wed, Sep 30, 2015 at 11:37 AM, Eric Pederson <[email protected]>
>> wrote:
>>
>>> Hi Barry -
>>>
>>> The CQs are on other regions and they are doing puts on the main Trade
>>> region.  The Trade region is Replicated in the cluster and the Trade Server
>>> has a CACHING_PROXY client region.
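>>>
>>> For reference, the client side is created along these lines (cache below
>>> is the already-created ClientCache; Trade is our domain class):
>>>
>>>     import com.gemstone.gemfire.cache.Region;
>>>     import com.gemstone.gemfire.cache.client.ClientCache;
>>>     import com.gemstone.gemfire.cache.client.ClientRegionShortcut;
>>>
>>>     // CACHING_PROXY keeps a local copy of entries fetched/received
>>>     // from the servers, backed by the server-side Trade region.
>>>     Region<String, Trade> trades = cache
>>>         .<String, Trade>createClientRegionFactory(ClientRegionShortcut.CACHING_PROXY)
>>>         .create("Trade");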
>>>
>>> Thanks for the tip on the CacheListener queue monitoring.
>>>
>>>
>>> -- Eric
>>>
>>> On Tue, Sep 29, 2015 at 7:32 PM, Barry Oglesby <[email protected]>
>>> wrote:
>>>
>>>> One thing I wanted to clarify is how you're loading the data in the
>>>> Trade Server client now. Are you doing puts from the CqListener into a
>>>> local region?
>>>>
>>>> Also, one thing to be careful about with asynchronous CacheListeners is
>>>> that they tend to hide memory usage if the thread pool can't keep up
>>>> with the tasks being executed. At the very least, make sure to monitor
>>>> the size of the thread pool's backing queue.
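>>>>
>>>> For example, something along these lines (pool sizes and the bound are
>>>> arbitrary):
>>>>
>>>>     import java.util.concurrent.*;
>>>>
>>>>     public class ListenerExecutor {
>>>>         // Bounded queue: a full queue rejects work instead of silently
>>>>         // accumulating memory.
>>>>         private final BlockingQueue<Runnable> backlog =
>>>>             new LinkedBlockingQueue<Runnable>(10000);
>>>>         private final ExecutorService pool =
>>>>             new ThreadPoolExecutor(1, 4, 60, TimeUnit.SECONDS, backlog);
>>>>
>>>>         public void submit(Runnable task) {
>>>>             pool.execute(task);
>>>>         }
>>>>
>>>>         // Sample this from a periodic monitoring/metrics task.
>>>>         public int backlogDepth() {
>>>>             return backlog.size();
>>>>         }
>>>>     }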
>>>>
>>>> Barry Oglesby
>>>> GemFire Advanced Customer Engineering (ACE)
>>>> For immediate support please contact Pivotal Support at
>>>> http://support.pivotal.io/
>>>>
>>>>
>>>> On Tue, Sep 29, 2015 at 6:06 AM, Eric Pederson <[email protected]>
>>>> wrote:
>>>>
>>>>> Thanks Barry.  That makes a lot of sense.  With power comes great
>>>>> responsibility... It sounds like we would want to have the CacheListener
>>>>> be asynchronous, adding events to a queue that the application code
>>>>> pulls from.
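>>>>>
>>>>> A first cut might look something like this (Trade is our domain class;
>>>>> the queue wiring is ours and untested):
>>>>>
>>>>>     import java.util.concurrent.BlockingQueue;
>>>>>
>>>>>     import com.gemstone.gemfire.cache.EntryEvent;
>>>>>     import com.gemstone.gemfire.cache.util.CacheListenerAdapter;
>>>>>
>>>>>     public class QueueingListener extends CacheListenerAdapter<String, Trade> {
>>>>>         private final BlockingQueue<Trade> queue;
>>>>>
>>>>>         public QueueingListener(BlockingQueue<Trade> queue) {
>>>>>             this.queue = queue;
>>>>>         }
>>>>>
>>>>>         @Override
>>>>>         public void afterCreate(EntryEvent<String, Trade> event) {
>>>>>             // offer() rather than put(): never block the cache thread.
>>>>>             // Copy the value out instead of holding onto the event.
>>>>>             if (!queue.offer(event.getNewValue())) {
>>>>>                 // Queue full -- count/log the drop rather than block.
>>>>>             }
>>>>>         }
>>>>>
>>>>>         @Override
>>>>>         public void afterUpdate(EntryEvent<String, Trade> event) {
>>>>>             afterCreate(event);
>>>>>         }
>>>>>     }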
>>>>>
>>>>>
>>>>> -- Eric
>>>>>
>>>>> On Mon, Sep 28, 2015 at 10:06 PM, Barry Oglesby <[email protected]>
>>>>> wrote:
>>>>>
>>>>>> The big difference between a peer and a client is that the peer is a
>>>>>> member of the distributed system whereas the client is not. This means,
>>>>>> among other things, that CacheListener callbacks are synchronous with the
>>>>>> original operation whereas CqListener callbacks are not. When the Trade
>>>>>> Server peer is started, your application put performance may degrade
>>>>>> depending on what is done in the CacheListener callback.
>>>>>>
>>>>>> You'll have synchronous replication of data between the server and
>>>>>> peer as well, but if the client's queue is on a node remote to where
>>>>>> the operation occurs, then that is also a synchronous replication of
>>>>>> data. So, that more-or-less balances out.
>>>>>>
>>>>>> Also, the health of a Trade Server peer can affect the other
>>>>>> distributed system members to a greater degree than a client. For
>>>>>> example, operations being replicated to the Trade Server peer will be
>>>>>> impacted if a long GC is occurring in it.
>>>>>>
>>>>>>
>>>>>> Barry Oglesby
>>>>>> GemFire Advanced Customer Engineering (ACE)
>>>>>> For immediate support please contact Pivotal Support at
>>>>>> http://support.pivotal.io/
>>>>>>
>>>>>>
>>>>>> On Mon, Sep 28, 2015 at 3:33 PM, Eric Pederson <[email protected]>
>>>>>> wrote:
>>>>>>
>>>>>>> Thanks for the answers to my previous question about getting a
>>>>>>> callback if the cluster goes down.  We decided to go with
>>>>>>> EndpointListener in the short term as we’re still on GemFire 7.0.2
>>>>>>> (I forgot to mention that).  We’re going to upgrade soon though and then
>>>>>>> we’ll move to ClientMembershipListener as it’s a public API.
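>>>>>>>
>>>>>>> For reference, post-upgrade the registration should look roughly like
>>>>>>> this (untested on our side; the reconnect handling is ours):
>>>>>>>
>>>>>>>     import com.gemstone.gemfire.management.membership.*;
>>>>>>>
>>>>>>>     public class ClusterWatcher {
>>>>>>>         public static void install() {
>>>>>>>             ClientMembership.registerClientMembershipListener(
>>>>>>>                 new ClientMembershipListenerAdapter() {
>>>>>>>                     @Override
>>>>>>>                     public void memberCrashed(ClientMembershipEvent event) {
>>>>>>>                         // e.g. flag the cluster as down and kick off
>>>>>>>                         // our reconnect/reload logic
>>>>>>>                         System.err.println("lost server: " + event.getMemberId());
>>>>>>>                     }
>>>>>>>                 });
>>>>>>>         }
>>>>>>>     }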
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> I have some related questions – here’s some background:  We have a
>>>>>>> cluster of GemFire servers and a number of Replicated regions.  We
>>>>>>> have a microservice architecture where all of our applications are
>>>>>>> publishers for some regions and clients for other regions.  We use
>>>>>>> CQs for most if not all of the client scenarios.  Because of the CQ
>>>>>>> requirement, all of our applications are clients.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> In one of these applications (called Trade Server) we would like to
>>>>>>> avoid needing to have it reload its region in the cluster if the
>>>>>>> cluster goes down completely and comes back up.  I discussed with my
>>>>>>> colleagues the possibility of making the Trade Server a peer instead
>>>>>>> of a client.  It could be a replica for its region, and then it would
>>>>>>> not be impacted if the main cluster went down.  And then when the
>>>>>>> cluster came back up, Trade Server would replicate its data back to
>>>>>>> it.  The only glitch is that it is a client for other regions.  I told
>>>>>>> them that instead of using CQs in Trade Server we could use
>>>>>>> CacheListeners (we are still determining whether any query is more
>>>>>>> complicated than select * from /otherRegion).  They are hesitant
>>>>>>> because they are attached to CQs.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Does this sound reasonable to you?
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Something that has caused us a bit of pain in the past is the fact
>>>>>>> that one JVM can either be a Client or a Peer, but not both.  And you
>>>>>>> can’t have multiple instances of ClientCache since it uses statics.
>>>>>>> The latter was a problem in our microservices architecture as each
>>>>>>> service has its own client API, but each client API can’t have its
>>>>>>> own ClientCache.  We worked around it by wrapping ClientCache and
>>>>>>> making the wrapper API a singleton.  But there are still some gotchas,
>>>>>>> like if two services use different PDX serialization configs, etc.
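>>>>>>>
>>>>>>> Stripped down, the wrapper is basically this (names are illustrative):
>>>>>>>
>>>>>>>     import com.gemstone.gemfire.cache.client.ClientCache;
>>>>>>>     import com.gemstone.gemfire.cache.client.ClientCacheFactory;
>>>>>>>
>>>>>>>     public final class SharedClientCache {
>>>>>>>         private static volatile ClientCache instance;
>>>>>>>
>>>>>>>         private SharedClientCache() {}
>>>>>>>
>>>>>>>         public static ClientCache get() {
>>>>>>>             if (instance == null) {
>>>>>>>                 synchronized (SharedClientCache.class) {
>>>>>>>                     if (instance == null) {
>>>>>>>                         // First caller wins: its PDX settings apply to
>>>>>>>                         // every service sharing the JVM.
>>>>>>>                         instance = new ClientCacheFactory()
>>>>>>>                             .setPdxReadSerialized(true)
>>>>>>>                             .create();
>>>>>>>                     }
>>>>>>>                 }
>>>>>>>             }
>>>>>>>             return instance;
>>>>>>>         }
>>>>>>>     }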
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Is that something you have been thinking about fixing for the
>>>>>>> future?  That is, making it so, in one JVM, you can have multiple
>>>>>>> clients/peers?   With microservices becoming a bigger trend I think more
>>>>>>> people will want that.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Thanks,
>>>>>>>
>>>>>>> -- Eric
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
