I am personally content if you provide the total amount of memory for the Pod 
and you, as the OSB designer, decide on the -Xms/-Xmx for the services. Unlike 
what Sanne said, I think Amazon and the like don't give you x GB of cache. They 
give you an instance of Redis or Memcached within a VM that has x GB 
allocated. How much you can stuff into it is left as an exercise for the reader.

Not ideal but I think they went for the practical in this case.

For plain JDG, then, more options are fine.

> On 28 Sep 2017, at 12:00, Sebastian Laskawiec <slask...@redhat.com> wrote:
> 
> So how about exposing two parameters - Xms/Xmx and the total amount of memory 
> for the Pod (Request = Limit in that case)? Would that work for you?
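> 
> For illustration only - the parameter names below are made up, not something 
> the template exposes today - instantiating the template could then look 
> roughly like this:
> 
>     # Hypothetical parameters: TOTAL_MEMORY would set Request = Limit on the
>     # Pod, JVM_XMS/JVM_XMX would be passed straight to the server JVM.
>     oc process infinispan-ephemeral \
>         -p TOTAL_MEMORY=2Gi -p JVM_XMS=1024m -p JVM_XMX=1024m \
>       | oc create -f -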
> 
> On Thu, Sep 28, 2017 at 8:38 AM Emmanuel Bernard <emman...@hibernate.org> wrote:
> Sebastian,
> 
> What Galder, Sanne and others are saying is that in OpenShift on premise there 
> is no limit, or at least a much higher limit, on the minimum container memory 
> you can ask for. And in these deployments, Infinispan should target multiple 
> GB, not 512 MB.
> 
> Of course, *if* you ask for a guaranteed 512MB, then it would be silly to try 
> and consume more.
> 
> 
>> On 25 Sep 2017, at 12:30, Sebastian Laskawiec <slask...@redhat.com> wrote:
>> 
> 
>> 
>> 
>> On Mon, Sep 25, 2017 at 11:54 AM Galder Zamarreño <gal...@redhat.com> wrote:
>> I don't understand your reply here... are you talking about Infinispan 
>> instances deployed on OpenShift Online? Or on premise?
>> 
>> TBH - I think there is no difference, so I'm thinking about both.
>>  
>> I can understand having some limits for OpenShift Online, but these 
>> templates should also be applicable on premise, in which case I should be 
>> able to easily define how much memory I want for the data grid, and the rest 
>> of the parameters would be worked out by OpenShift/Kubernetes?
>> 
>> I have written a couple of emails about this on the internal mailing list. 
>> Let me just point out some of the bits here:
>> We need to set either Xmx or MaxRAM to tell the JVM how much memory it can 
>> allocate. As you probably know, JDK8 is not CGroups-aware by default (there 
>> are some experimental options, but they set the MaxRAM parameter equal to 
>> the CGroups limit, which translates to Xmx = MaxRAM (i.e. the CGroups 
>> limit) / 4). I guess the waste from allocating only (CGroups limit)/4 as Xmx 
>> is too high for us, so we need to set it explicitly.
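>>
>> For reference, the experimental options I mean are roughly these (available 
>> since JDK 8u131 or thereabouts; the fraction value is just an example):
>>
>>     # Let the JVM derive MaxRAM from the CGroups memory limit. With the
>>     # default -XX:MaxRAMFraction=4 the heap ends up at limit/4; lowering
>>     # the fraction to 2 gives roughly limit/2.
>>     java -XX:+UnlockExperimentalVMOptions \
>>          -XX:+UseCGroupMemoryLimitForHeap \
>>          -XX:MaxRAMFraction=2 ...   # rest of the server command line
>>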
>> In our Docker image we set Xmx = 50% of the CGroups limit. This is better 
>> than the setting above, but there is still some risk in certain scenarios.
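>>
>> As a rough sketch (not the exact entrypoint code from [2], and assuming the 
>> usual cgroup v1 path inside the container), the 50% calculation is 
>> essentially:
>>
>>     # Read the CGroups memory limit and size the heap at half of it.
>>     LIMIT_BYTES=$(cat /sys/fs/cgroup/memory/memory.limit_in_bytes)
>>     XMX_MB=$(( LIMIT_BYTES / 1024 / 1024 / 2 ))
>>     JAVA_OPTS="$JAVA_OPTS -Xms${XMX_MB}m -Xmx${XMX_MB}m"
>>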
>> As I mentioned in my previous email, in the templates we are setting 
>> Requests (not Limits!!!). So you will probably get more memory than 
>> specified in the template, though it depends on the node you're running on. 
>> The key point is that you won't get less than those 512 MB.
>>
>> You can always edit your DeploymentConfig (after creating your application 
>> from the template) and adjust the Limits (or even the Requests).
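>>
>> For example (the DeploymentConfig name "datagrid" is just a placeholder for 
>> whatever your application is called):
>>
>>     # Bump the Requests and add a memory Limit on an existing
>>     # DeploymentConfig.
>>     oc set resources dc/datagrid \
>>         --requests=memory=1Gi,cpu=500m --limits=memory=2Gi
>>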
>> For simple scenarios and bigger containers (4 GB, say) we can go above 50% 
>> (see the internal mailing list for details).
>>
>> And as I said before - if you guys think we should do this differently, I'm 
>> open to suggestions. I think this is quite a standard way of configuring 
>> this sort of thing.
>> 
>> Demanding that on-premise users go and change their template just to adjust 
>> the memory settings seems to me to go against all the usability improvements 
>> we're trying to achieve.
>> 
>> At some point you need to define how much memory you will need. Whether it's 
>> in the template, in your DeploymentConfiguration (created from the template 
>> using oc process), or in a Quota - it doesn't matter. You have to write it 
>> down somewhere, don't you? With the current approach, the best place to do 
>> it is in the DeploymentConfiguration Requests. That sets the CGroups limit, 
>> and based on that, the Infinispan bootstrap scripts calculate Xmx. 
>>  
>> 
>> Cheers,
>> 
>> > On 22 Sep 2017, at 14:49, Sebastian Laskawiec <slask...@redhat.com> wrote:
>> >
>> > It's very tricky...
>> >
>> > Memory is adjusted automatically to the container size [1] (of course you 
>> > may override it by supplying Xmx or "-n" as parameters [2]). The safe 
>> > limit is roughly Xmx = Xms = 50% of the container capacity (unless you use 
>> > off-heap, in which case you can squeeze much, much more out of Infinispan).
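>> >
>> > As a quick example (assuming the jboss/infinispan-server image from [1]):
>> >
>> >     # A 2 GiB container ends up with roughly -Xms1g -Xmx1g by default;
>> >     # pass your own Xmx (or "-n") to override the automatic sizing.
>> >     docker run -m 2g jboss/infinispan-server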
>> >
>> > Then there are Limits, Requests and QoS in Kubernetes [3][4]. We are in 
>> > the Burstable memory category, so if there is additional memory on the 
>> > node, we'll get it. But even if there isn't, we won't go below those 
>> > 512 MB (and 500 mCPU).
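>> >
>> > On recent versions you can check which QoS class a Pod landed in directly 
>> > (the pod name below is a placeholder):
>> >
>> >     # Requests set without Limits puts the Pod in the Burstable class.
>> >     oc get pod datagrid-1-abcde -o jsonpath='{.status.qosClass}'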
>> >
>> > Thanks,
>> > Sebastian
>> >
>> > [1] https://github.com/jboss-dockerfiles/infinispan/tree/master/server#adjusting-memory
>> > [2] https://github.com/jboss-dockerfiles/infinispan/blob/master/server/docker-entrypoint.sh#L303-L308
>> > [3] https://www.youtube.com/watch?v=nWGkvrIPqJ4
>> > [4] https://docs.openshift.com/enterprise/3.2/dev_guide/compute_resources.html
>> >
>> > On Fri, Sep 22, 2017 at 2:33 PM Galder Zamarreño <gal...@redhat.com> wrote:
>> > Hi Sebastian,
>> >
>> > How do you change memory settings for Infinispan started via service 
>> > catalog?
>> >
>> > The memory settings seem to be defined in [1], but this is not one of the 
>> > supported parameters.
>> >
>> > I guess we want this as a parameter?
>> >
>> > Cheers,
>> >
>> > [1] https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L308
>> > --
>> > Galder Zamarreño
>> > Infinispan, Red Hat
>> >
>> 
>> --
>> Galder Zamarreño
>> Infinispan, Red Hat
>> 
> 

_______________________________________________
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev
