OK, if we have RAM let's add it; I thought we had some constraint there.

> On Nov 29, 2017, at 9:43 AM, Jamo Luhrsen <[email protected]> wrote:
> 
> Well, I think Thanh's point is that if the output.xml is too big, we just add
> more RAM to fix it, although instead we could add some swap, which is
> "slower RAM".
> 
> JamO
> 
> On 11/29/2017 09:13 AM, Luis Gomez wrote:
>> I do not remember all the issues related to memory, but at least swap is
>> required in the robot VM to generate the log.html when the output.xml file
>> is big.
>> 
>>> On Nov 29, 2017, at 2:56 AM, Thanh Ha <[email protected]> wrote:
>>> 
>>> We can certainly add swap, but IMO swap is just slower RAM. At least in my
>>> experience, when a system starts needing to swap, things have already gone
>>> beyond recovery and swap just delays the inevitable program crash. In any
>>> case, there are a few ways we can add swap if we really want to:
>>> 
>>> 1) At the job level, create a swap file sized for the job and activate it.
>>> 2) At the jenkins-init level (via jenkins-scripts in releng/builder), add
>>> swap for all systems there.
>>> 3) Bake it into the VM image via the Packer file.
>>> 
>>> If we want to try this with a specific job type, I would try option 1 first
>>> and see how things go. If we find it really useful, we can move it into 2 or 3.
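A minimal sketch of the job-level approach (option 1): allocate a swap file, format it, and enable it for the duration of the job. This assumes a Linux minion with root (or passwordless sudo); the /swapfile path and 2 GiB size are illustrative placeholders, not project conventions.

```shell
#!/bin/sh
# Job-level swap (option 1): create and activate a swap file for this run.
# Requires root; /swapfile and the 2 GiB size are illustrative choices.
set -e
sudo fallocate -l 2G /swapfile   # or: dd if=/dev/zero of=/swapfile bs=1M count=2048
sudo chmod 600 /swapfile         # swap files must not be world-readable
sudo mkswap /swapfile
sudo swapon /swapfile
swapon --show                    # confirm the new swap device is active
```

Because the file is never added to /etc/fstab, it disappears when the VM is recycled, which fits throwaway job minions.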
>>> 
>>> Regards,
>>> Thanh
>>> 
>>> On Mon, Nov 13, 2017 at 6:19 PM, Luis Gomez <[email protected]> wrote:
>>> 
>>>    +1, adding Andy: is it possible to get swap space in the VM from the
>>>    cloud provider? Otherwise, is it possible to modify the releng/builder
>>>    Packer scripts to add the swap?
>>> 
>>>>    On Nov 13, 2017, at 3:05 PM, Sam Hague <[email protected]> wrote:
>>>> 
>>>>    On Thu, Nov 9, 2017 at 5:45 AM, Stephen Kitt <[email protected]> wrote:
>>>> 
>>>>        Exactly, that’s what I was alluding to with “the lack of swap
>>>>        probably doesn’t help” ;-). We should really configure VMs with a
>>>>        small amount of swap; it can save the kernel in tricky situations...
>>>> 
>>>>    I was looking at this also and wondering about swap. We had a different
>>>>    issue where the robot VMs blow up while producing the log.html file
>>>>    because it ends up being a 1 GB file. With swap I think it would have
>>>>    passed fine. As is, we bumped the 2 GB VM to 4 GB to make it work. So we
>>>>    will likely hit it again.
>>>> 
>>>> 
>>>>        On Thu, 9 Nov 2017 01:47:08 -0800
>>>>        Anil Vishnoi <[email protected]> wrote:
>>>> 
>>>>> I suspect you might hit it again if you run it a bit longer because
>>>>> of this
>>>>> 
>>>>> [Thu Nov  2 05:58:08 2017] Free swap  = 0kB
>>>>> [Thu Nov  2 05:58:08 2017] Total swap = 0kB
>>>>> 
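The zeroed counters above come from the kernel's OOM report; the same figures can be read from /proc/meminfo on any Linux host, which is a quick way to check whether a minion was built without swap. A small sketch, nothing ODL-specific:

```shell
#!/bin/sh
# Print the current swap totals in the same kB units the OOM killer logs.
total_kb=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)
free_kb=$(awk '/^SwapFree:/ {print $2}' /proc/meminfo)
echo "Total swap = ${total_kb}kB"
echo "Free swap  = ${free_kb}kB"
```

On the VMs discussed in this thread, both values would print as 0.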
>>>>> 
>>>>> On Thu, Nov 9, 2017 at 1:39 AM, Stephen Kitt <[email protected]> wrote:
>>>>> 
>>>>>> On Thu, 9 Nov 2017 10:28:14 +0100
>>>>>> Robert Varga <[email protected]> wrote:
>>>>>>> On 02/11/17 23:02, Luis Gomez wrote:
>>>>>>> 1) The JVM does not kill itself; the OS does instead, after the java
>>>>>>> process grows to 3.7G in a VM with 4G of RAM (note Xmx is set to 2G
>>>>>>> but the JVM still goes far beyond that).
>>>>>>> 
>>>>>>> Indicates this lies outside the heap -- check thread count.
>>>>>> 
>>>>>> We verified separately that this is an OOM issue, but one detected
>>>>>> by the kernel rather than by the JVM (the OOM killer kills the JVM,
>>>>>> see
>>>>>> https://jira.opendaylight.org/secure/attachment/14207/dmesg.log.txt
>>>>>> for details; the number of threads wasn’t an issue here, but the
>>>>>> lack of swap probably didn’t help).
>>>>>> 
>>>>>> Upgrading to OpenJDK 8 patch 151 fixed the problem, it might have
>>>>>> been related to one of the several memory usage bugs in 144 that
>>>>>> were fixed in 151. It’s probably just moving the goalposts though
>>>>>> since the problem was new — basically, I suspect we recently
>>>>>> started using a little too much off-heap memory for some reason,
>>>>>> and the upgrade to 151 reduces the JVM’s memory usage enough to
>>>>>> make us fit in our VMs again.
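For context on why the process could reach 3.7G with Xmx at 2G: -Xmx caps only the Java heap, while metaspace, thread stacks, the code cache, and direct (NIO) buffers are allocated outside it. A hypothetical way to put explicit ceilings on some of those off-heap consumers, using standard HotSpot flags (the sizes and app.jar here are illustrative placeholders, not tuned for ODL):

```shell
# -Xmx bounds only the heap. These standard HotSpot flags also cap common
# off-heap areas: class metadata, NIO direct buffers, and per-thread stacks.
# Sizes and app.jar are illustrative placeholders.
java -Xmx2g \
     -XX:MaxMetaspaceSize=256m \
     -XX:MaxDirectMemorySize=512m \
     -Xss512k \
     -jar app.jar
```

Even with all of these set, native allocations by the JVM itself or by JNI libraries remain unbounded, so the resident size can still exceed the sum of the caps.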
>>>>>> 
>>>>>> Regards,
>>>>>> 
>>>>>> Stephen
>>> 
>>> 
>> 
>> 
>> 
>> _______________________________________________
>> integration-dev mailing list
>> [email protected] 
>> <mailto:[email protected]>
>> https://lists.opendaylight.org/mailman/listinfo/integration-dev 
>> <https://lists.opendaylight.org/mailman/listinfo/integration-dev>
_______________________________________________
controller-dev mailing list
[email protected]
https://lists.opendaylight.org/mailman/listinfo/controller-dev
