Questions on the mechanics of the activated on-heap cache in Ignite 2.x.x

2020-06-06 Thread VincentCE
In our project we are currently using Ignite 2.7.6 with native persistence
disabled and Java 11. At the moment we are not using the on-heap feature,
i.e. all our data lives off-heap. However, in order to gain performance we
are thinking about activating on-heap caching. While there are already quite
a few questions/answers on that topic in this forum, we are still missing some
points. I would like to use the following scenario for our questions: Say we
have one Ignite server instance living on a *Kubernetes pod with a 32 GiB
memory request/limit* and the following hypothetical configuration:

- JVM options exactly as described here
https://apacheignite.readme.io/docs/jvm-and-system-tuning, i.e. in 
  particular a *fixed 10 GB heap*.
- Off-heap is limited to 15 GiB by adjusting the default region with
*initSize = maxSize = 15 GiB*. 
  No other data regions are defined (see the configuration sketch below).
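
A minimal sketch, assuming the Ignite 2.7 Java configuration API, of how this
hypothetical data region setup could look in code (the class wrapper is
illustrative):

    import org.apache.ignite.configuration.DataRegionConfiguration;
    import org.apache.ignite.configuration.DataStorageConfiguration;
    import org.apache.ignite.configuration.IgniteConfiguration;

    public class ServerConfig {
        static IgniteConfiguration serverConfig() {
            long size = 15L * 1024 * 1024 * 1024; // 15 GiB

            // initSize == maxSize, so the default off-heap region
            // neither grows nor shrinks after startup.
            DataRegionConfiguration region = new DataRegionConfiguration()
                    .setInitialSize(size)
                    .setMaxSize(size)
                    .setPersistenceEnabled(false); // native persistence disabled

            return new IgniteConfiguration().setDataStorageConfiguration(
                    new DataStorageConfiguration()
                            .setDefaultDataRegionConfiguration(region));
        }
    }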

Before doing anything else with our Ignite server instance, we *initially fill
its off-heap with 10 GiB* of data, and this will be the only data that it will
receive. 

What happens when we set
*org.apache.ignite.configuration.CacheConfiguration.setOnheapCacheEnabled(true)
in each cache configuration and, for now, use no eviction policies*, in
particular while loading these 10 GiB of data? 
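
For concreteness, a minimal sketch of such a cache configuration (the cache
name and key/value types are illustrative):

    import org.apache.ignite.configuration.CacheConfiguration;

    CacheConfiguration<Long, String> cacheCfg =
            new CacheConfiguration<Long, String>("myCache") // name illustrative
                    .setOnheapCacheEnabled(true);
    // Note: no eviction policy is configured, so the on-heap cache
    // is not bounded in size.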

More precisely: 
1. As has been emphasised several times in this forum, the data will still be
loaded into off-heap. But will it immediately also be loaded into heap, i.e.
during the loading procedure is each data point replicated simultaneously
to heap, resulting in two copies of the same data, one off-heap and one on
heap, after the procedure is finished? 
2. ... Or will a given data point only be replicated to heap whenever it is
being used, i.e. during computations?
3. Let's furthermore assume that our overall configuration was stable before
switching to on-heap. In order to guarantee that it stays stable afterwards,
would we need to increase the heap size by roughly 10 GB, to 20 GB, and
therefore also our pod size to roughly 42 GiB? That would imply that using
on-heap always goes hand in hand with increasing memory resources.
4. Obviously in this example we did not define any eviction policy to
control the on-heap cache size. However, this is intentional here, because
we would like each data point to be quickly available by also living in heap.
Is this a useful approach (i.e. replicating the whole off-heap content in heap
as well) in order to reach the overall goal, namely better performance? It
feels like this approach would counteract the change to the off-heap model from
Ignite 2.x onwards in terms of GC impact and so on. Is this correct? (For
contrast, a sketch with an eviction policy follows below.)
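
For contrast, a minimal sketch, assuming Ignite 2.7 APIs, of how the on-heap
cache could be bounded with an LRU eviction policy (cache name, types, and the
1,000,000-entry limit are illustrative):

    import org.apache.ignite.cache.eviction.lru.LruEvictionPolicyFactory;
    import org.apache.ignite.configuration.CacheConfiguration;

    CacheConfiguration<Long, String> boundedCfg =
            new CacheConfiguration<Long, String>("myCache")
                    .setOnheapCacheEnabled(true)
                    // Evict the least recently used on-heap entries once the
                    // on-heap copy exceeds 1,000,000 entries; the off-heap
                    // copy is not affected by this policy.
                    .setEvictionPolicyFactory(
                            new LruEvictionPolicyFactory<Long, String>(1_000_000));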

Please let me know if you need more detailed information about the
configuration/settings we use.

Thanks in advance!

Vincent



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Countdown latch issue with 2.6.0

2020-06-06 Thread Akash Shinde
Hi,
Issue: The countdown latch gets reinitialized to its original value (4) when
one or more (but not all) nodes go down. (Partition loss happened.)

We are using Ignite's distributed CountDownLatch to make sure that cache
loading is completed on all server nodes. We do this to make sure that our
Kafka consumers start only after cache loading is complete on all server
nodes. This is the basic criterion which needs to be fulfilled before actual
processing starts.


 We have 4 server nodes and the CountDownLatch is initialized to 4. We use
the "cache.loadCache" method to start the cache loading. When each server
completes cache loading, it reduces the count by 1 using the countDown method.
So when all the nodes complete cache loading, the count reaches zero. When
this count reaches zero, we start the Kafka consumers on all server nodes.
(A sketch of this pattern follows below.)
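
A minimal sketch of this pattern, assuming the Ignite 2.6 Java API; the cache
and latch names are illustrative, and "ignite" is the local Ignite instance:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.IgniteCountDownLatch;

    public class CacheLoadGate {
        // Executed on each of the 4 server nodes.
        static void loadAndSignal(Ignite ignite) {
            IgniteCountDownLatch latch = ignite.countDownLatch(
                    "cacheLoadLatch", // latch name (illustrative)
                    4,                // initial count == number of server nodes
                    false,            // autoDelete: keep the latch after it hits 0
                    true);            // create the latch if it does not exist

            IgniteCache<Object, Object> cache = ignite.cache("myCache");
            cache.loadCache(null); // run the configured CacheStore loader
            latch.countDown();     // this node has finished loading

            latch.await();         // returns once all 4 nodes have counted down;
                                   // only then are the Kafka consumers started
        }
    }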

 But we saw weird behavior in the prod environment. 3 server nodes were shut
down at the same time, but 1 node was still alive. When this happened, the
count was reinitialized to the original value, i.e. 4. But I am not able to
reproduce this in the dev environment.

 Is this a bug that, when one or more (but not all) nodes go down, the count
reinitializes back to its original value?

Thanks,
Akash


Re: BinaryObjectException: Conflicting enum values

2020-06-06 Thread Denis Magda
You might have hit the following known issue ("Cluster Doesn’t Start After
Field Type Changes") that occurs if you change the type of a field:
https://www.gridgain.com/docs/latest/perf-troubleshooting-guide/troubleshooting#cluster-doesnt-start-after-field-type-changes

-
Denis


On Fri, Jun 5, 2020 at 10:53 AM Andrew Munn  wrote:

> I'm seeing the same issue as this one.
>
> I had values in the cache. I cleared the cache, but did not shut down the
> cluster node. I modified the enums in the class. Then I repopulated the
> cache with instances of the updated class. There must be some way to purge
> leftover metadata without bringing the cluster down, right? Can it be
> purged when the cache is cleared?
>
> Thanks
>
>


Re: embedded jetty & ignite

2020-06-06 Thread Clay Teahouse
Hi Denis -- My main reason for embedding Jetty as an Ignite service was
to have Ignite manage the Jetty instance, the same way it does for any other
kind of service.

On Thu, Jun 4, 2020 at 3:30 PM Denis Magda  wrote:

> Clay,
>
> Do you have any specific requirements in mind for the ignite service +
> jetty deployment? If possible, please tell us a bit more about your
> application.
>
> Generally, I would deploy Jetty separately and use load balancers when
> several instances of an application are needed.
>
> -
> Denis
>
>
> On Wed, Jun 3, 2020 at 3:20 PM Clay Teahouse 
> wrote:
>
>> Thank you, Denis. I'll research this topic further.
>>
>> Any recommendation for/against using jetty as an embedded servlet
>> container, in this case, say, deployed as an ignite service?
>>
>> On Fri, May 29, 2020 at 11:22 PM Denis Magda  wrote:
>>
>>> Clay,
>>>
>>> Just start your Jetty server and deploy as many instances of your web
>>> app as needed. Inside the logic of those apps, start Ignite server node
>>> instances. Then, refer to this documentation page for session clustering
>>> configuration:
>>> https://apacheignite-mix.readme.io/docs/web-session-clustering
>>>
>>> Also, there have been many questions related to this topic. Try to
>>> search for specifics by googling for "session clustering with ignite and
>>> jetty".
>>>
>>> Let us know if further help is needed.
>>>
>>>
>>> -
>>> Denis
>>>
>>>
>>> On Fri, May 29, 2020 at 6:57 PM Clay Teahouse 
>>> wrote:
>>>
 Thank you, Denis.
 If I want to go with the first option, how would I deploy Jetty as an
 embedded server? Do I deploy it as an Ignite service?
 How would I do session clustering in this case?

 On Fri, May 29, 2020 at 3:18 PM Denis Magda  wrote:

> Hi Clay,
>
> I wouldn't suggest using Ignite's Jetty instance for the deployment of
> your services. The primary function of Ignite's Jetty is to handle REST
> requests specific to Ignite: https://apacheignite.readme.io/docs/rest-api
>
> Instead, deploy and manage your restful services separately. Then, if
> the goal is to do web session clustering, deploy Ignite server nodes in
> the embedded mode, making the sessions' caches replicated. Otherwise,
> deploy the server nodes independently and reach out to the cluster from
> the restful services using existing Ignite APIs. This tutorial shows how
> to do the latter with Spring Boot:
> https://www.gridgain.com/docs/tutorials/spring/spring_ignite_tutorial
>
> -
> Denis
>
>
> On Fri, May 29, 2020 at 8:25 AM Clay Teahouse 
> wrote:
>
>> Hello,
>> I understand that Ignite comes with an embedded Jetty server.
>> 1) Can I utilize this Jetty server to deploy my own restful services
>> (using the Jersey implementation)? If yes, can you please direct me to some
>> examples?
>> Further questions:
>> 2) How does the Ignite embedded Jetty work with regard to load
>> balancing? Are there multiple instances of the embedded Jetty server
>> running behind a load balancer? In other words, can I invoke multiple
>> instances?
>> 3) How does this scheme work with web session clustering?
>> 4) Would the Ignite node run in server mode?
>> 5) I want the Jetty sessions to access Ignite caches (on the server
>> side) as the data source for the data returned from the restful services.
>>
>> Any help and advice would be much appreciated. Thank you
>>
>


Re: BinaryObjectException: Conflicting enum values

2020-06-06 Thread Andrew Munn
So once I insert an instance of a class into a map, I can't change the type of
any existing member variable in the class, even if I clear the map?
It seems like there should be some programmatic way to clear any left-over
class metadata/schema if simply clearing the cache won't do it. Right?

On Sat, Jun 6, 2020 at 10:43 AM Denis Magda  wrote:

> You might have hit the following known issue ("Cluster Doesn’t Start After
> Field Type Changes") that occurs if you change the type of a field:
> https://www.gridgain.com/docs/latest/perf-troubleshooting-guide/troubleshooting#cluster-doesnt-start-after-field-type-changes
>
> -
> Denis
>
>
> On Fri, Jun 5, 2020 at 10:53 AM Andrew Munn  wrote:
>
>> I'm seeing the same issue as this one.
>>
>> I had values in the cache. I cleared the cache, but did not shut down the
>> cluster node. I modified the enums in the class. Then I repopulated the
>> cache with instances of the updated class. There must be some way to purge
>> leftover metadata without bringing the cluster down, right? Can it be
>> purged when the cache is cleared?
>>
>> Thanks
>>
>>


Re: Countdown latch issue with 2.6.0

2020-06-06 Thread Akash Shinde
Can someone please help me with this issue?

On Sat, Jun 6, 2020 at 6:45 PM Akash Shinde  wrote:

> Hi,
> Issue: The countdown latch gets reinitialized to its original value (4) when
> one or more (but not all) nodes go down. (Partition loss happened.)
>
> We are using Ignite's distributed CountDownLatch to make sure that cache
> loading is completed on all server nodes. We do this to make sure that our
> Kafka consumers start only after cache loading is complete on all server
> nodes. This is the basic criterion which needs to be fulfilled before actual
> processing starts.
>
>
>  We have 4 server nodes and the CountDownLatch is initialized to 4. We use
> the "cache.loadCache" method to start the cache loading. When each server
> completes cache loading, it reduces the count by 1 using the countDown
> method. So when all the nodes complete cache loading, the count reaches
> zero. When this count reaches zero, we start the Kafka consumers on all
> server nodes.
>
>  But we saw weird behavior in the prod environment. 3 server nodes were shut
> down at the same time, but 1 node was still alive. When this happened, the
> count was reinitialized to the original value, i.e. 4. But I am not able to
> reproduce this in the dev environment.
>
>  Is this a bug that, when one or more (but not all) nodes go down, the count
> reinitializes back to its original value?
>
> Thanks,
> Akash
>


Re: embedded jetty & ignite

2020-06-06 Thread Denis Magda
Clay,

Frameworks such as Quarkus, Spring Boot, or Micronaut would probably work as
a better foundation for your microservices. As you know, those already ship
with embedded REST servers, and you can always use Ignite clients to
reach out to the cluster (a thin-client sketch follows below).
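
A minimal sketch, assuming the Java thin client available since Ignite 2.5
(the address and cache name are illustrative):

    import org.apache.ignite.Ignition;
    import org.apache.ignite.client.ClientCache;
    import org.apache.ignite.client.IgniteClient;
    import org.apache.ignite.configuration.ClientConfiguration;

    public class ThinClientExample {
        public static void main(String[] args) throws Exception {
            // Connect to the cluster without joining its topology.
            ClientConfiguration cfg = new ClientConfiguration()
                    .setAddresses("127.0.0.1:10800"); // illustrative address

            try (IgniteClient client = Ignition.startClient(cfg)) {
                ClientCache<Integer, String> cache =
                        client.getOrCreateCache("myCache");
                cache.put(1, "hello");
            }
        }
    }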

Usually, Ignite servers are deployed in the embedded mode when you're
dealing with an ultra-low-latency use case or doing web-session clustering:
https://www.gridgain.com/docs/latest/installation-guide/deployment-modes#embedded-deployment
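
A minimal sketch of the embedded deployment mode, assuming it runs inside the
web application's startup logic (the instance name is illustrative):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.IgniteConfiguration;

    public class EmbeddedNode {
        public static void main(String[] args) {
            // The application process itself joins the cluster as a
            // server node, so session/cache data lives in-process.
            IgniteConfiguration cfg = new IgniteConfiguration()
                    .setIgniteInstanceName("embedded-server"); // illustrative

            Ignite ignite = Ignition.start(cfg);
            // ... deploy the web app logic; caches are now local to this JVM.
        }
    }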


-
Denis


On Sat, Jun 6, 2020 at 9:03 AM Clay Teahouse  wrote:

> Hi Denis -- My main reason for embedding Jetty as an Ignite service was
> to have Ignite manage the Jetty instance, the same way it does for any
> other kind of service.
>
> On Thu, Jun 4, 2020 at 3:30 PM Denis Magda  wrote:
>
>> Clay,
>>
>> Do you have any specific requirements in mind for the ignite service +
>> jetty deployment? If possible, please tell us a bit more about your
>> application.
>>
>> Generally, I would deploy Jetty separately and use load balancers when
>> several instances of an application are needed.
>>
>> -
>> Denis
>>
>>
>> On Wed, Jun 3, 2020 at 3:20 PM Clay Teahouse 
>> wrote:
>>
>>> Thank you, Denis. I'll research this topic further.
>>>
>>> Any recommendation for/against using jetty as an embedded servlet
>>> container, in this case, say, deployed as an ignite service?
>>>
>>> On Fri, May 29, 2020 at 11:22 PM Denis Magda  wrote:
>>>
 Clay,

 Just start your Jetty server and deploy as many instances of your web
 app as needed. Inside the logic of those apps, start Ignite server node
 instances. Then, refer to this documentation page for session clustering
 configuration:
 https://apacheignite-mix.readme.io/docs/web-session-clustering

 Also, there have been many questions related to this topic. Try to
 search for specifics by googling for "session clustering with ignite and
 jetty".

 Let us know if further help is needed.


 -
 Denis


 On Fri, May 29, 2020 at 6:57 PM Clay Teahouse 
 wrote:

> Thank you, Denis.
> If I want to go with the first option, how would I deploy Jetty as an
> embedded server? Do I deploy it as an Ignite service?
> How would I do session clustering in this case?
>
> On Fri, May 29, 2020 at 3:18 PM Denis Magda  wrote:
>
>> Hi Clay,
>>
>> I wouldn't suggest using Ignite's Jetty instance for the deployment
>> of your services. The primary function of Ignite's Jetty is to handle
>> REST requests specific to Ignite:
>> https://apacheignite.readme.io/docs/rest-api
>>
>> Instead, deploy and manage your restful services separately. Then, if
>> the goal is to do web session clustering, deploy Ignite server nodes in
>> the embedded mode, making the sessions' caches replicated. Otherwise,
>> deploy the server nodes independently and reach out to the cluster from
>> the restful services using existing Ignite APIs. This tutorial shows how
>> to do the latter with Spring Boot:
>> https://www.gridgain.com/docs/tutorials/spring/spring_ignite_tutorial
>>
>> -
>> Denis
>>
>>
>> On Fri, May 29, 2020 at 8:25 AM Clay Teahouse 
>> wrote:
>>
>>> Hello,
>>> I understand that Ignite comes with an embedded Jetty server.
>>> 1) Can I utilize this Jetty server to deploy my own restful services
>>> (using the Jersey implementation)? If yes, can you please direct me to some
>>> examples?
>>> Further questions:
>>> 2) How does the Ignite embedded Jetty work with regard to load
>>> balancing? Are there multiple instances of the embedded Jetty server
>>> running behind a load balancer? In other words, can I invoke multiple
>>> instances?
>>> 3) How does this scheme work with web session clustering?
>>> 4) Would the Ignite node run in server mode?
>>> 5) I want the Jetty sessions to access Ignite caches (on the server
>>> side) as the data source for the data returned from the restful
>>> services.
>>>
>>> Any help and advice would be much appreciated. Thank you
>>>
>>