Can we retrieve a high watermark of the container memory usage during
a job run?
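
For what it's worth, with the cgroup v1 memory controller the kernel
already tracks this. A minimal sketch (the mount path assumes the usual
v1 layout; on cgroup v2 the rough equivalent is memory.peak on newer
kernels):

    #!/usr/bin/env python3
    # Sketch: read the container's memory high watermark, assuming the
    # cgroup v1 memory controller is mounted at the usual path.
    WATERMARK = "/sys/fs/cgroup/memory/memory.max_usage_in_bytes"

    def high_watermark_bytes():
        with open(WATERMARK) as f:
            return int(f.read().strip())

    # Writing "0" to the same file resets the watermark, so a job
    # wrapper could reset it at job start and read it back at job end.
    print("peak: %.2f GiB" % (high_watermark_bytes() / 2**30))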

Then we could take that value, multiply it by 2, sanity-check that the
result is not larger than three times the previous 3x limit (i.e. 9x),
and check whether it exceeds the previously configured 3x limit. If it
does, we install the new number as the limit, decrease the number of
concurrently running jobs accordingly if needed, and send a
notification about the change.
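
As a rough sketch of that bookkeeping (the function name, the notify
hook, and the total-memory parameter are all hypothetical):

    # Sketch of the proposed adjustment; hypothetical names throughout.
    def adjust_limit(watermark, current_limit, total_memory, notify):
        candidate = 2 * watermark
        # Sanity check: refuse anything beyond 3x the current limit
        # (i.e. 9x the original baseline); that smells like a bad sample.
        if candidate > 3 * current_limit:
            raise ValueError("implausible watermark, not adjusting")
        if candidate <= current_limit:
            return current_limit, None          # nothing to do
        # Install the new limit and shrink concurrency so the jobs
        # still fit in the executor's total memory.
        max_jobs = total_memory // candidate
        notify("per-job limit %d -> %d, concurrency capped at %d"
               % (current_limit, candidate, max_jobs))
        return candidate, max_jobs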

This would be a manual process at first, so we can test the approach in
a simple and relatively safe fashion. What do you think?

--a

On 2/1/19, Ed Kern via Lists.Fd.Io <ejk=cisco....@lists.fd.io> wrote:
> Request with numbers has been made.  It's not a ci-man change so it requires
> Vanessa, but she is typically super fast turning these changes around, so
> hopefully within a couple of hours.
>
> Apologies for the trouble.  We have seen a 4-6x increase (depending on OS)
> in the last 5 months, and so it finally started pinching my memory
> reservations of "everything it needs x3".
>
> Ed
>
>
>
>
>> On Jan 31, 2019, at 6:26 PM, Florin Coras <fcoras.li...@gmail.com> wrote:
>>
>> It seems CentOS verify jobs are failing with errors of the type:
>>
>> 00:27:16
>> FAILED: vnet/CMakeFiles/vnet.dir/span/node.c.o
>>
>> 00:27:16 ccache /opt/rh/devtoolset-7/root/bin/cc -DWITH_LIBSSL=1
>> -Dvnet_EXPORTS -I/w/workspace/vpp-verify-master-centos7/src -I. -Iinclude
>> -Wno-address-of-packed-member -march=corei7 -mtune=corei7-avx -g -O2
>> -DFORTIFY_SOURCE=2 -fstack-protector -fPIC -Werror -fPIC   -Wall -MD -MT
>> vnet/CMakeFiles/vnet.dir/span/node.c.o -MF
>> vnet/CMakeFiles/vnet.dir/span/node.c.o.d -o
>> vnet/CMakeFiles/vnet.dir/span/node.c.o   -c
>> /w/workspace/vpp-verify-master-centos7/src/vnet/span/node.c
>>
>> I suspect this may be a memory issue. Could someone with CI superpowers
>> try increasing the memory limit for the CentOS containers?
>>
>> Thanks,
>> Florin
>
>