The problem is likely caused by the old multiarch scheme + gcc 7 (6 or 8 work
fine) + -march set to a CPU type which doesn't do AVX2.

https://gerrit.fd.io/r/#/c/17252

Should fix the issue....
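
For reference, a quick way to check whether a given -march implies AVX2 (the
centos job quoted below uses -march=corei7, which doesn't):

  gcc -march=corei7  -dM -E - </dev/null | grep AVX2   # no output: no AVX2
  gcc -march=haswell -dM -E - </dev/null | grep AVX2   # prints #define __AVX2__ 1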

> On 1 Feb 2019, at 18:52, Florin Coras <fcoras.li...@gmail.com> wrote:
> 
> Just a heads up to everyone. 
> 
> I’ve tried rebuilding vpp with a cleared ccache (rm -rf build-root/.ccache/*) 
> on my ubuntu 18.04 box with gcc 7.3.0, and the build turned out to be really 
> slow. For some reason gcc seems to be stalling towards the end of the build. 
> To see if this is specific to gcc 7, I installed gcc 8 and it turns out to be 
> much faster.
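> 
> If anyone wants to reproduce the comparison, something along these lines 
> should do it (assuming the build picks up CC from the environment, gcc-8 is 
> installed, and any previous build output is wiped so both runs compile 
> everything):
> 
>   rm -rf build-root/.ccache/*
>   time make pkg-deb                 # stock gcc 7.3.0
>   rm -rf build-root/.ccache/*
>   time CC=gcc-8 make pkg-deb        # same build with gcc 8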
> 
> I don’t know if this is just an isolated issue for my box or a genuine bug. 
> So, before starting to think about a gcc upgrade, could others confirm the 
> issue?
> 
> Florin
> 
>> On Feb 1, 2019, at 6:46 AM, Ed Kern (ejk) <e...@cisco.com> wrote:
>> 
>> 
>> 
>>> On Feb 1, 2019, at 7:06 AM, Andrew 👽 Yourtchenko <ayour...@gmail.com> wrote:
>>> 
>>> Can we retrieve a high watermark of the container memory usage during
>>> a job run?
>>> 
>> 
>> So my answer to that is ‘I have no idea’
>> 
>> From my ‘automated’ point of view, the memory allocation happens during make 
>> pkg-deb or make test (for example). Looking at the memory before or after 
>> those commands run is pointless because usage is low/nil at those points.
>> 
>> The way I have seen allocations in the past is just by running builds by 
>> hand, with a separate terminal attached to monitor memory.
>> 
>> This ‘works’ with the exception of the oom killer, which will sometimes shoot 
>> things down if there is a huge memory spike in the ‘middle’. I’ve seen this 
>> with some java bits.
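>> 
>> For what it's worth, when a build dies like that the kernel log usually 
>> confirms whether the oom killer was involved:
>> 
>>   dmesg | grep -i -e oom -e 'killed process'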
>> 
>> 
>>> Then we could take that, multiply it by 2, and as a sanity check verify that
>>> it is not larger than 3 times the previous 3x limit (i.e. 9x). Then check
>>> whether it exceeds the previously configured 3x limit, and if it does,
>>> install a new 3x number, decrease the number of concurrently running jobs
>>> accordingly if needed, and send a notification about that.
>>> 
>>> This would be a manual process to reset the limit in a simple and relatively
>>> safe fashion. What do you think?
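>>> 
>>> Roughly, as a shell sketch (variable names made up, just to pin down the
>>> rule, not an actual script we have anywhere):
>>> 
>>>   PEAK_MB=$1                        # measured high watermark of the job
>>>   OLD_LIMIT_MB=$2                   # currently configured "3x" limit
>>>   NEW_LIMIT_MB=$(( PEAK_MB * 2 ))
>>>   CAP_MB=$(( OLD_LIMIT_MB * 3 ))    # sanity cap: 3x the old 3x, i.e. 9x
>>>   [ "$NEW_LIMIT_MB" -gt "$CAP_MB" ] && NEW_LIMIT_MB=$CAP_MB
>>>   if [ "$NEW_LIMIT_MB" -gt "$OLD_LIMIT_MB" ]; then
>>>       echo "install ${NEW_LIMIT_MB} MB limit, lower job concurrency if needed, notify"
>>>   fi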
>>> 
>> 
>> It would still be a manual process to change the number, but sure.   
>> 
>> If someone had a slick way to see max memory usage during any 
>> section of a ‘make <option>’, that would be awesome.
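>> 
>> Not especially slick, but two things that may get close (assuming a Linux 
>> container with the cgroup v1 memory controller mounted at /sys/fs/cgroup; 
>> make pkg-deb below is just an example target):
>> 
>>   # peak RSS of the single largest process spawned by the build
>>   /usr/bin/time -v make pkg-deb 2>&1 | grep 'Maximum resident set size'
>> 
>>   # container-wide high watermark from the memory cgroup
>>   echo 0 > /sys/fs/cgroup/memory/memory.max_usage_in_bytes   # reset (needs rw cgroupfs)
>>   make pkg-deb
>>   cat /sys/fs/cgroup/memory/memory.max_usage_in_bytes        # peak bytes since reset
>> 
>> The first number is per-process, so it understates a parallel build; the 
>> cgroup counter is what the container limit and the oom killer actually act on.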
>> 
>> Ed
>> 
>> 
>> 
>>> --a
>>> 
>>> On 2/1/19, Ed Kern via Lists.Fd.Io <ejk=cisco....@lists.fd.io> wrote:
>>>> A request with the new numbers has been made. It’s not a ci-man change, so it
>>>> requires Vanessa, but she is typically super fast turning around these
>>>> changes, so hopefully it’ll land in a couple of hours.
>>>> 
>>>> Apologies for the trouble.   We have seen a 4-6x increase in memory usage 
>>>> (depending on OS) in the last 5 months, and it finally started pinching my 
>>>> memory reservations of ‘everything it needs x3’.
>>>> 
>>>> Ed
>>>> 
>>>> 
>>>> 
>>>> 
>>>>> On Jan 31, 2019, at 6:26 PM, Florin Coras <fcoras.li...@gmail.com> wrote:
>>>>> 
>>>>> It seems centos verify jobs are failing with errors of the type:
>>>>> 
>>>>> 00:27:16 FAILED: vnet/CMakeFiles/vnet.dir/span/node.c.o
>>>>> 
>>>>> 00:27:16 ccache /opt/rh/devtoolset-7/root/bin/cc -DWITH_LIBSSL=1
>>>>> -Dvnet_EXPORTS -I/w/workspace/vpp-verify-master-centos7/src -I. -Iinclude
>>>>> -Wno-address-of-packed-member -march=corei7 -mtune=corei7-avx -g -O2
>>>>> -DFORTIFY_SOURCE=2 -fstack-protector -fPIC -Werror -fPIC   -Wall -MD -MT
>>>>> vnet/CMakeFiles/vnet.dir/span/node.c.o -MF
>>>>> vnet/CMakeFiles/vnet.dir/span/node.c.o.d -o
>>>>> vnet/CMakeFiles/vnet.dir/span/node.c.o   -c
>>>>> /w/workspace/vpp-verify-master-centos7/src/vnet/span/node.c
>>>>> 
>>>>> I suspect this may be a memory issue. Could someone with ci superpowers
>>>>> try increasing it for the centos containers?
>>>>> 
>>>>> Thanks,
>>>>> Florin
>>>> 
>>>> 
>> 
> 

-- 
Damjan

