Re: [vpp-dev] Gerrit 25463 fails API CRC check for no obvious reason...

2020-02-26 Thread Ed Kern via Lists.Fd.Io
adding vpp-dev to the to: line

A bad retry vote on a job (Jan will be sending it along) caused a merge to 
happen with this bug.

vpp folks….

Since we have gone a few months now with no jenkins connection issues (one of 
the main drivers for the retry introduction), I'd like
to propose removing the retry from ALL voting jobs.   

This MAY lead to an increase in manual rechecks if we see a spike of intermittent 
job failures.   But IMO that should not be a large
percentage increase now that those connection reset issues are gone.

Please let me know if anyone feels that we should keep the retry mechanic in 
place; otherwise I'll be putting up a ci-management patch
later today for its removal.

thanks,

Ed






> On Feb 26, 2020, at 8:08 AM, otr...@employees.org wrote:
> 
> I merged a bunch of .api changes from Jakub today. Might be related...
> 
> Ole
> 
> 
>> On 26 Feb 2020, at 15:44, Dave Barach via Lists.Fd.Io 
>>  wrote:
>> 
>> Other patches failing as well. Please investigate AYEC.
>> 
>> Thanks... Dave
>> 
>> P.S. See https://gerrit.fd.io/r/c/vpp/+/25463. No .api files involved.
>> 
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15549): https://lists.fd.io/g/vpp-dev/message/15549
Mute This Topic: https://lists.fd.io/mt/71566402/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Docs and test-docs jobs are unstable

2020-01-22 Thread Ed Kern via Lists.Fd.Io
The moral of the story here is ‘only madmen run pip install --upgrade pip in a 
script and expect the wheels not to fly off’.

Obviously the LF has not learned this lesson enough. (The release of 
pip 20.0.0 yesterday was another reminder.)

On getting to root cause, I've opened a ticket against this script so if we ever 
find ourselves using it again
we won't get bitten.
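
For the archives, the safer pattern is roughly the sketch below (illustrative only, not the actual LF script; the lftools install line is just an example consumer):

# Sketch: pin pip to a known-good major version instead of blindly upgrading,
# so a surprise release (pip 20.0.0 in this case) can't take the job down overnight.
python3 -m pip install --user 'pip<20'
python3 -m pip --version                 # log the exact version for later triage
python3 -m pip install --user lftools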


I'll reserve my thoughts on how, IMO, all unstable jobs could be made to override 
to +1 at this point, since you would only get an
unstable (versus a flat fail) because of some non-critical issue in the LF 
infra going wrong.  But only because having
to triage their code after a day-long breakage has me in a bad mood.

Ed




> On Jan 22, 2020, at 10:34 AM, Florin Coras  wrote:
> 
> Great! Thanks, Ed!
> 
> Florin
> 
>> On Jan 22, 2020, at 9:22 AM, Ed Kern (ejk)  wrote:
>> 
>> 
>> florin this should be ok now on all branches….
>> 
>> Vanessa changed the publisher which worked around the problem.
>> 
>> 
>> (The underlying problem with lftools and their use of pip flags which dont 
>> jibe with the latest pip release version
>> is still open but shouldn’t be hitting us.)
>> 
>> 
>> Ed
>> 
>> 
>> 
>>> On Jan 22, 2020, at 8:36 AM, Ed Kern via Lists.Fd.Io 
>>>  wrote:
>>> 
>>> I am looking into this for the last 30 minutes or so and have raised it 
>>> with LF folks as well.
>>> 
>>> More hopefully soon.
>>> 
>>> Ed
>>> 
>>> 
>>> 
>>> 
>>> 
>>>> On Jan 22, 2020, at 8:28 AM, Florin Coras  wrote:
>>>> 
>>>> Hi, 
>>>> 
>>>> All verify jobs are now failing because of these. “lftools command not 
>>>> found” seems to be the issue but not the only error. 
>>>> 
>>>> Could somebody take a look at them? 
>>>> 
>>>> Regards,
>>>> Florin
>>> 
>>> 
>> 
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15230): https://lists.fd.io/g/vpp-dev/message/15230
Mute This Topic: https://lists.fd.io/mt/69981727/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Docs and test-docs jobs are unstable

2020-01-22 Thread Ed Kern via Lists.Fd.Io

florin this should be ok now on all branches….

Vanessa changed the publisher which worked around the problem.


(The underlying problem with lftools and their use of pip flags which don't jibe 
with the latest pip release version
is still open, but it shouldn't be hitting us.)


Ed



> On Jan 22, 2020, at 8:36 AM, Ed Kern via Lists.Fd.Io 
>  wrote:
> 
> I am looking into this for the last 30 minutes or so and have raised it with 
> LF folks as well.
> 
> More hopefully soon.
> 
> Ed
> 
> 
> 
> 
> 
>> On Jan 22, 2020, at 8:28 AM, Florin Coras  wrote:
>> 
>> Hi, 
>> 
>> All verify jobs are now failing because of these. “lftools command not 
>> found” seems to be the issue but not the only error. 
>> 
>> Could somebody take a look at them? 
>> 
>> Regards,
>> Florin
> 
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15225): https://lists.fd.io/g/vpp-dev/message/15225
Mute This Topic: https://lists.fd.io/mt/69981727/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Docs and test-docs jobs are unstable

2020-01-22 Thread Ed Kern via Lists.Fd.Io
I have been looking into this for the last 30 minutes or so and have raised it with LF 
folks as well.

More hopefully soon.

Ed





> On Jan 22, 2020, at 8:28 AM, Florin Coras  wrote:
> 
> Hi, 
> 
> All verify jobs are now failing because of these. “lftools command not found” 
> seems to be the issue but not the only error. 
> 
> Could somebody take a look at them? 
> 
> Regards,
> Florin

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15222): https://lists.fd.io/g/vpp-dev/message/15222
Mute This Topic: https://lists.fd.io/mt/69981727/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Arm verify job broken

2020-01-14 Thread Ed Kern via Lists.Fd.Io
short answer: fixed..


longer answer:  a ci-management patch which did a whole bunch of cleanup and 
readability improvements accidentally
separated two scripts that had to stay together.  Having the scripts apart meant 
arm was using only 1 core instead
of the correct number of 16 cores.

This led to all arm jobs answering the question ‘what does a yellow light 
mean’  (apologies to those that miss the taxi reference)

The ci-man patch is being merged now…all subsequent jobs should be back to normal 
timing.
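
For reference, the shape of the fix is basically 'ask the machine instead of trusting a hard-coded value'; a rough sketch only (the TEST_JOBS usage here is illustrative, not the literal ci-man change):

# Illustrative only: size test parallelism from the visible CPUs so a split-up
# wrapper script can't silently fall back to a single core.
JOBS="$(nproc)"
echo "running make test with ${JOBS} jobs"
make test TEST_JOBS="${JOBS}"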

Ed



On Jan 14, 2020, at 1:54 PM, Andrew 👽 Yourtchenko <ayour...@gmail.com> wrote:

better to debug some before changing anything.

With my RM hat on I would rather understand what is going on, if it is 24 hours 
before the RC1 milestone ;-)

--a

On 14 Jan 2020, at 21:28, Ed Kern via Lists.Fd.Io <ejk=cisco@lists.fd.io> wrote:

 ok FOR THE MOST PART…  build times haven’t changed  (@20 min)

make test portion has gone off the chain and is often hitting the 120 min build 
timeout threshold.

Ive currently got several jobs running on sandbox testing

a. without any timeouts (just to see if we are still passing on master)
b. three different memory and disk profiles (just in case someone/thing got 
messy and we have had another memory spike)

the bad news is that it will be at least a couple of hours more than likely 
before I get any results that I feel are actionable.


Having said that im happy to shoot the build timeout for the time being if 
committers feel that is warranted to ‘keep the train rolling’

Ed




On Jan 14, 2020, at 1:07 PM, Florin Coras <fcoras.li...@gmail.com> wrote:

Thanks, Ed!

Florin

On Jan 14, 2020, at 11:53 AM, Ed Kern (ejk) <e...@cisco.com> wrote:

looking into it now…..

its a strange one ill tell you that up front…failures are all over the place 
inside the build and even some hitting the 120 min timeout….

more as i dig hopefully

thanks for the ping

Ed



On Jan 14, 2020, at 11:40 AM, Florin Coras <fcoras.li...@gmail.com> wrote:

Hi,

Jobs have been failing since yesterday. Did anybody try to look into it?

Regards,
Florin






-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15173): https://lists.fd.io/g/vpp-dev/message/15173
Mute This Topic: https://lists.fd.io/mt/69698855/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Arm verify job broken

2020-01-14 Thread Ed Kern via Lists.Fd.Io
ok FOR THE MOST PART…  build times haven’t changed  (@20 min)

make test portion has gone off the chain and is often hitting the 120 min build 
timeout threshold.

I've currently got several jobs running on sandbox testing:

a. without any timeouts (just to see if we are still passing on master)
b. three different memory and disk profiles (just in case someone/thing got 
messy and we have had another memory spike)

The bad news is that it will more than likely be at least a couple of hours 
before I get any results that I feel are actionable.


Having said that, I'm happy to shoot the build timeout for the time being if 
committers feel that is warranted to ‘keep the train rolling’.

Ed




On Jan 14, 2020, at 1:07 PM, Florin Coras <fcoras.li...@gmail.com> wrote:

Thanks, Ed!

Florin

On Jan 14, 2020, at 11:53 AM, Ed Kern (ejk) <e...@cisco.com> wrote:

looking into it now…..

its a strange one ill tell you that up front…failures are all over the place 
inside the build and even some hitting the 120 min timeout….

more as i dig hopefully

thanks for the ping

Ed



On Jan 14, 2020, at 11:40 AM, Florin Coras <fcoras.li...@gmail.com> wrote:

Hi,

Jobs have been failing since yesterday. Did anybody try to look into it?

Regards,
Florin




-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15169): https://lists.fd.io/g/vpp-dev/message/15169
Mute This Topic: https://lists.fd.io/mt/69698855/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Arm verify job broken

2020-01-14 Thread Ed Kern via Lists.Fd.Io
looking into it now…..

it's a strange one, I'll tell you that up front…failures are all over the place 
inside the build and even some are hitting the 120 min timeout….

more as I dig, hopefully

thanks for the ping

Ed



On Jan 14, 2020, at 11:40 AM, Florin Coras <fcoras.li...@gmail.com> wrote:

Hi,

Jobs have been failing since yesterday. Did anybody try to look into it?

Regards,
Florin


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15167): https://lists.fd.io/g/vpp-dev/message/15167
Mute This Topic: https://lists.fd.io/mt/69698855/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] merge jobs failing..

2019-12-17 Thread Ed Kern via Lists.Fd.Io

Hey benoit,

Happened to notice (because I was watching the queue anyway since santa damjan 
did a bunch of merges) that ubuntu test/merges started failing
after
https://gerrit.fd.io/r/c/vpp/+/23913

No, it's not clear to me why it verified…
it looks like a simple typo where you meant to do

    from scapy.contrib.geneve import GENEVE

but did

    from scapy.layers.geneve import GENEVE

in test/test_trace_filter.py
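
(if anyone wants to confirm locally before a follow-up lands, a hypothetical one-liner:)

# hypothetical local check: apply the one-line import fix and re-run the failing test
sed -i 's/from scapy.layers.geneve import GENEVE/from scapy.contrib.geneve import GENEVE/' test/test_trace_filter.py
make test TEST=test_trace_filter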

anyway.. all ubuntu merge jobs are failing after that point, so if this ISN'T the 
error feel
free to flame me and/or the real root cause.

Ed
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14915): https://lists.fd.io/g/vpp-dev/message/14915
Mute This Topic: https://lists.fd.io/mt/68776336/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] May see some job restarts/retries for the next couple of hours

2019-12-16 Thread Ed Kern via Lists.Fd.Io
Ok, piggybacked on the jenkins restart (they had to deinstall a totally unrelated 
plugin) and got the cloud collapse done.

I rechecked a couple of jobs but things seem to be running smoothly at this 
point.

Feel free to reach out in email or slack if folks see issues.

thanks,

Ed




> On Dec 16, 2019, at 11:13 AM, Ed Kern via Lists.Fd.Io 
>  wrote:
> 
> 
> TLDR: some infra and arm job (especially) may see some restarts or rechecks 
> needed for the next couple of hours
> 
> 
> Ill hopefully be collapsing the arm and and csit clusters into the primary 
> cluster so some random behavior may be seen.
> This is especially true when nodes are removed from one cluster and added to 
> another.
> 
> Hopefully it will be uneventful.  I have tested these client node moves as 
> much as possible without actually doing the migration.
> 
> Ed-=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
> 
> View/Reply Online (#14902): https://lists.fd.io/g/vpp-dev/message/14902
> Mute This Topic: https://lists.fd.io/mt/68733483/675649
> Group Owner: vpp-dev+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [e...@cisco.com]
> -=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14904): https://lists.fd.io/g/vpp-dev/message/14904
Mute This Topic: https://lists.fd.io/mt/68733483/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] May see some job restarts/retries for the next couple of hours

2019-12-16 Thread Ed Kern via Lists.Fd.Io

TLDR: some infra and arm jobs (especially) may see some restarts or rechecks 
needed for the next couple of hours


I'll hopefully be collapsing the arm and csit clusters into the primary 
cluster, so some random behavior may be seen.
This is especially true when nodes are removed from one cluster and added to 
another.

Hopefully it will be uneventful.  I have tested these client node moves as much 
as possible without actually doing the migration.

Ed
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14902): https://lists.fd.io/g/vpp-dev/message/14902
Mute This Topic: https://lists.fd.io/mt/68733483/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Coverity run FAILED as of 2019-12-07 14:01:55 UTC

2019-12-07 Thread Ed Kern via Lists.Fd.Io
Hmm…website must have been briefly unavailable during the run this 
morning…seems ok now though after manual sandbox run.

Ed




Coverity run failed today.

Current number of outstanding issues are 2
Newly detected: 0
Eliminated: 0
More details can be found at  
https://scan.coverity.com/projects/fd-io-vpp/view_defects




On Dec 7, 2019, at 7:02 AM, nore...@jenkins.fd.io 
wrote:

Coverity run failed today.

[Error replacing 'FILE' - Workspace is not accessible]

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14835): https://lists.fd.io/g/vpp-dev/message/14835
Mute This Topic: https://lists.fd.io/mt/67626387/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] xenial (ubuntu 16.04) goes 'historic' ubuntu 18.04 loses beta tag

2019-11-05 Thread Ed Kern via Lists.Fd.Io

xenial ubuntu 16.04:
As discussed in a couple vpp meetings I have pulled the job that builds ubuntu 
16.04 from master verify/merges.
It will continue to verify/merge on older releases until we decide otherwise or 
the support of the whole release ends.

bionic ubuntu 18.04:
When we first started building I had these jobs with a beta tag  ( 
vpp-beta-verify-master-ubuntu1804 )  to keep them from
building on older (non-bionic-supported) releases.  This stopped being necessary 
earlier in the year; I just never removed the ‘beta’
tag until today.

In spite of the various jenkins jnlp issues and subsequent jenkins reload that 
followed there are very few (two) patches that may
need an additional recheck to get a valid vote.


This will hopefully be a behind-the-scenes change that won't impact any dev work, 
but I wanted folks to be aware just in case.

thanks,

Ed


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14518): https://lists.fd.io/g/vpp-dev/message/14518
Mute This Topic: https://lists.fd.io/mt/42783749/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] current verify issues and jenkins build queue

2019-10-08 Thread Ed Kern via Lists.Fd.Io
Update:

Still ongoing:   1device cluster still dead….the per-patch job has been removed so 
verifies can happen

Jenkins is back up with an empty queue as of ~30 minutes ago.   I have three 
rechecks running, and when those pass
I'll be going through all the open gerrits I see (without any verification vote 
at all) for the past day and recheck them.


Won't send another update unless something else goes sideways or the 1device 
cluster is back in service and is part
of the verify process again.

I'll be looking into the health checker style so ’the right thing’ will happen 
when the port is open and receiving but there is no
fully functional brain behind it.
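
Rough idea of what a smarter check looks like, as a sketch only (the registry URL is a placeholder):

# A plain TCP connect to the registry port isn't enough; probe a real API endpoint
# instead (a docker registry serves GET /v2/ when the service behind the port is alive).
curl -sf --max-time 5 http://registry.example.local:5000/v2/ > /dev/null \
  || echo "port is open but the registry is not healthy"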

Ed



> On Oct 8, 2019, at 9:59 AM, Ed Kern via Lists.Fd.Io 
>  wrote:
> 
> 
> Problems currently still ongoing:
>   1device cluster worker nodes are currently down.. I’ve notified csit in 
> slack and am cc’ing them here..  In the meantime I have a gerrit to remove 
> 1device per patch 
>   so it doesn’t delay voting on verify jobs.
>   Jenkins just crashed so that will take awhile to sort.
>   vanessa and I are trying to just empty the build queue at this 
> point to get back to zero so jenkins won’t just crash again when it gets 
> opened.
> 
> 
> History:
> 
> root cause: 
>   a. will have to wait on csit folks for answers on the two 1device node 
> failure
>   b. during the night the internal docker registry stopped responding 
> (but still passed socket health check so didnt fail over)
> 
> Workflow:
>   1. I saw there was an issue reading email around 6am pacific this 
> morning. 
>   2. saw that the registry wasn’t responding and attempted restart.
>   3. due to the jenkins server queue hammering on the nomad cluster it 
> took a long while to get that restart to go through (roughly 40 min)
>   4. once the bottle was uncorked the sixty jobs pending (including a 
> large number of checkstyle jobs) turned into 160.
>   5. jenkins ‘chokes’ and crashes
>   6. ‘we’ start scrubbing the queue which will cause a huge number of 
> rechecks but at least jenkins wont crash again..
> 
> **current time
> 
>  future:
>   7.  will force the ci-man patch removing per patch verify 
>   8. jenkins queue will re-open and ill send another email.
>   9. Im adding myself to the queue high threshold alarm LF system so I 
> get paged/called when the queue gets above 90 (their current severe water 
> mark)
>   10. Ill see if i can find a way to troll gerrit to manually recheck 
> what i can find
> 
> 
> more as it rolls along-=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
> 
> View/Reply Online (#14147): https://lists.fd.io/g/vpp-dev/message/14147
> Mute This Topic: https://lists.fd.io/mt/34443895/675649
> Group Owner: vpp-dev+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [e...@cisco.com]
> -=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14149): https://lists.fd.io/g/vpp-dev/message/14149
Mute This Topic: https://lists.fd.io/mt/34443895/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] current verify issues and jenkins build queue

2019-10-08 Thread Ed Kern via Lists.Fd.Io

Problems currently still ongoing:
    1device cluster worker nodes are currently down.. I’ve notified csit in slack and am cc’ing them here..  In the meantime I have a gerrit to remove 1device per patch so it doesn’t delay voting on verify jobs.
    Jenkins just crashed so that will take awhile to sort.
    vanessa and I are trying to just empty the build queue at this point to get back to zero so jenkins won’t just crash again when it gets opened.


History:

root cause: 
    a. will have to wait on csit folks for answers on the two 1device node failures
    b. during the night the internal docker registry stopped responding (but still passed the socket health check so didn’t fail over)

Workflow:
    1. I saw there was an issue reading email around 6am pacific this morning.
    2. saw that the registry wasn’t responding and attempted a restart.
    3. due to the jenkins server queue hammering on the nomad cluster it took a long while to get that restart to go through (roughly 40 min)
    4. once the bottle was uncorked the sixty jobs pending (including a large number of checkstyle jobs) turned into 160.
    5. jenkins ‘chokes’ and crashes
    6. ‘we’ start scrubbing the queue, which will cause a huge number of rechecks but at least jenkins won’t crash again..

**current time

  future:
    7. will force the ci-man patch removing the per-patch verify
    8. jenkins queue will re-open and I’ll send another email.
    9. I’m adding myself to the LF queue high-threshold alarm system so I get paged/called when the queue gets above 90 (their current severe water mark)
    10. I’ll see if I can find a way to troll gerrit to manually recheck what I can find


more as it rolls along
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14147): https://lists.fd.io/g/vpp-dev/message/14147
Mute This Topic: https://lists.fd.io/mt/34443895/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] jenkins queue...

2019-10-08 Thread Ed Kern via Lists.Fd.Io


super short version right now…

yes the queue was stuck…..it is now unstuck BUT….do expect the queue to go UP a 
good deal before it goes down..

(due to the nature of the pipeline, one stuck checkstyle job launches 6+ jobs when it gets 
unstuck and finishes)


a lot more details coming but didn’t want further panic

Ed


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14144): https://lists.fd.io/g/vpp-dev/message/14144
Mute This Topic: https://lists.fd.io/mt/34442828/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] FD.io Jenkins - long queue of jobs

2019-09-26 Thread Ed Kern via Lists.Fd.Io
This has nothing to do with the LF (outside of jobs getting stuck (which I 
haven’t seen since the upgrade yesterday)).

The queue has been ballooning and shrinking the last couple of days for a few 
reasons:
a.  a very large number of 1908 verify and then merge jobs.
b.  a recent increase in the number, and in some cases the duration, of csit jobs.
c.  merge jobs that have to wait for the previous one, so they just (correctly) sit 
in the build queue.
d.  the multi-hour shutdown yesterday, which always causes a bubble…

There is no current way to prioritize master runs over 1908 cherrypicks, so for 
that wait I apologize.  The longest
wait time I’ve seen on a verify-based job was around 80 minutes to start.

Andrew has said that he had a backlog of cherrypicks and he expects the number 
at any one time to reduce.

We have more than doubled the number of jobs running at any one time over the 
last three weeks.  The system is fine,
but certainly the amount of ’slack’ is considerably less than there used to be…

The current thought is still to repurpose the ‘retiring’ virl hardware that’s in place 
into generic x86 infra builders to get us a lot more
’slack’.

Ed



On Sep 26, 2019, at 6:56 AM, Jan Gelety via Lists.Fd.Io <jgelety=cisco@lists.fd.io> wrote:

Hello,

Could you please check FD.io Jenkins? There’s quite a huge 
queue of waiting jobs (87 at the moment) there.

Thanks,
Jan
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14059): https://lists.fd.io/g/vpp-dev/message/14059
Mute This Topic: https://lists.fd.io/mt/34298430/675649
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  
[e...@cisco.com]
-=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14064): https://lists.fd.io/g/vpp-dev/message/14064
Mute This Topic: https://lists.fd.io/mt/34298430/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Lots of patches failing like this... (CentOS)

2019-09-17 Thread Ed Kern via Lists.Fd.Io
Ok so looks like centos obsoleted the packages

python36-devel in favor of the name python3-devel
python36-pip in favor of the name python3-pip

So all the centos builds barfed because we had the ‘old’ names in the required list

https://gerrit.fd.io/r/c/vpp/+/22109

is verifying and the mighty wallace promises to merge once it verifies..


Note to the future:

centos repo/maintainers did NOT rename either
python36-ply
or
python36-jsonschema

which are also req’s  in the Makefile just waiting for some random tuesday in 
the future to fail.

But for now the above gerrit should get centos moving again.
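
For anyone hand-fixing a local centos 7 box in the meantime, roughly (package names straight from this thread; the command itself is illustrative):

# renamed upstream:        python36-devel -> python3-devel, python36-pip -> python3-pip
# NOT renamed (still ok):  python36-ply, python36-jsonschema
sudo yum install -y python3-devel python3-pip python36-ply python36-jsonschema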

Ed



On Sep 17, 2019, at 11:32 AM, Dave Barach (dbarach) <dbar...@cisco.com> wrote:


13:00:41 [PostBuildScript] - Executing post build scripts.

13:00:41 [vpp-verify-master-centos7] $ /bin/bash 
/tmp/jenkins653227054931649374.sh

13:00:41 Build logs: https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-centos7/21710

13:00:41 /w/workspace/vpp-verify-master-centos7 
/w/workspace/vpp-verify-master-centos7/.archives

13:00:41 Archiving **/build-root/*.rpm

13:00:41 mv: cannot stat '**/build-root/*.rpm': No such file or directory

13:00:41 Archiving **/build-root/*.deb

13:00:41 mv: cannot stat '**/build-root/*.deb': No such file or directory

13:00:41 Archiving **/dpdk/*.rpm

13:00:41 mv: cannot stat '**/dpdk/*.rpm': No such file or directory

13:00:41 Archiving **/dpdk/*.deb

13:00:41 mv: cannot stat '**/dpdk/*.deb': No such file or directory

13:00:41 /w/workspace/vpp-verify-master-centos7/.archives

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14009): https://lists.fd.io/g/vpp-dev/message/14009
Mute This Topic: https://lists.fd.io/mt/34180141/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] jenkins jobs failing timing out trying to reach gerrit

2019-07-30 Thread Ed Kern via Lists.Fd.Io

Just a heads up that this outage looks like network issues between vexxhost and 
gerrit servers
and lasted <30 minutes.

Looks like many of the failed jobs passed in the retry.

We are now full steam ahead into the maint window.

Ed



On Jul 30, 2019, at 9:12 AM, Ed Kern via Lists.Fd.Io <ejk=cisco@lists.fd.io> wrote:


https://jira.linuxfoundation.org/servicedesk/customer/portal/4/SUPPORT-321


vanessa is actively looking into this presently.

Ed


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13616): https://lists.fd.io/g/vpp-dev/message/13616
Mute This Topic: https://lists.fd.io/mt/32655012/675649
Group Owner: vpp-dev+ow...@lists.fd.io<mailto:vpp-dev+ow...@lists.fd.io>
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  
[e...@cisco.com<mailto:e...@cisco.com>]
-=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13618): https://lists.fd.io/g/vpp-dev/message/13618
Mute This Topic: https://lists.fd.io/mt/32655012/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] jenkins jobs failing timing out trying to reach gerrit

2019-07-30 Thread Ed Kern via Lists.Fd.Io

https://jira.linuxfoundation.org/servicedesk/customer/portal/4/SUPPORT-321


vanessa is actively looking into this presently.

Ed


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13616): https://lists.fd.io/g/vpp-dev/message/13616
Mute This Topic: https://lists.fd.io/mt/32655012/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] centos jobs failing

2019-07-01 Thread Ed Kern via Lists.Fd.Io

https://gerrit.fd.io/r/#/c/20442/6/build/external/Makefile


I was too chicken to suggest it as a clean fix :)
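
(the reason it works, in sketch form: on a fresh box vpp-ext-deps isn't installed yet, so the erase step exits non-zero and kills the recipe unless the failure is swallowed)

# minimal illustration of the pattern in the quoted patch below (paths illustrative):
sudo rpm -e vpp-ext-deps || true            # ignore "package ... is not installed"
sudo rpm -Uih --force ./vpp-ext-deps-*.rpm  # then (re)install the freshly built one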

Ed



On Jul 1, 2019, at 5:17 PM, Dave Barach (dbarach) <dbar...@cisco.com> wrote:

See https://gerrit.fd.io/r/#/c/20446/ - .../build/external/Makefile needed a 
bit of “|| true” action to avoid falling all over itself if no vpp-ext-deps 
.rpm was installed:

ifneq ($(INSTALLED_RPM_VER),$(RPM_VER)-$(PKG_SUFFIX))
  @$(MAKE) $(DEV_RPM)
  sudo rpm -e vpp-ext-deps || true
  sudo rpm -Uih --force $(DEV_RPM)
Definitely fixes the issue in a freshly-built Centos7 VM.

HTH... Dave

From: Ed Kern (ejk) <e...@cisco.com>
Sent: Monday, July 1, 2019 3:20 PM
To: Florin Coras <fcoras.li...@gmail.com>
Cc: vpp-dev <vpp-dev@lists.fd.io>; Dave Barach (dbarach) <dbar...@cisco.com>; Thomas F Herbert <therb...@redhat.com>
Subject: Re: centos jobs failing

adding thomas directly just to cover my centos base…


TLDR  vpp-ext-deps makefile issues lead to failures resulting in immediate job 
failure…..working on workaround while
someone comes up with a real fix.




Packagecloud.io  retention on master APPEARS to be 
about a month (unless someone hand removed things and didn’t mention it)

vpp-ext-deps-19.08-6.x86_64.rpm/deb was no longer in the repo as of last 12 
hours or so.

On the next verify run (not finding the prebuilt in package cloud) simply built
vpp-ext-deps*.deb
and at the end of that job uploaded it back to package cloud and ‘deb-life’ 
builds continued as normal

The scripts that build/install  (that are only run when vpp-ext-deps needs to 
be hand built (aka not in package cloud))
vpp-ext-deps.rpm
fail…..and have continued to fail ever since..


Ed

PS.. I’ve done a hand upload of the vpp-ext-deps package for rpm so centos 
builds SHOULD be working again…at least until
vpp-ext-deps rev’s up (unless we get a fix in place first)



On Jul 1, 2019, at 11:52 AM, Florin Coras 
mailto:fcoras.li...@gmail.com>> wrote:

It seems the centos package expired in packagecloud. Ed is working on the 
problem.

Florin

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13421): https://lists.fd.io/g/vpp-dev/message/13421
Mute This Topic: https://lists.fd.io/mt/32277567/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] centos jobs failing

2019-07-01 Thread Ed Kern via Lists.Fd.Io
adding thomas directly just to cover my centos base…


TLDR:  vpp-ext-deps makefile issues lead to failures resulting in immediate job 
failure….. working on a workaround while
someone comes up with a real fix.




Packagecloud.io  retention on master APPEARS to be 
about a month (unless someone hand removed things and didn’t mention it)

vpp-ext-deps-19.08-6.x86_64.rpm/deb was no longer in the repo as of last 12 
hours or so.

On the next verify run, the deb side (not finding the prebuilt in packagecloud) simply built
vpp-ext-deps*.deb,
uploaded it back to packagecloud at the end of that job, and ‘deb-life’ 
builds continued as normal.

The scripts that build/install vpp-ext-deps.rpm (which are only run when 
vpp-ext-deps needs to be hand built, aka not in packagecloud)
fail….. and have continued to fail ever since..
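
To make the two paths concrete, the flow is roughly the sketch below (illustrative only; the version is the one from this thread and the make target name is a placeholder, not the literal ci scripts):

# fast path vs hand-build path for vpp-ext-deps (sketch, not the real scripts)
if yum install -y vpp-ext-deps-19.08-6; then
    echo "prebuilt package still in packagecloud: fast path"
else
    make -C build/external build-rpm   # hand-build path: this is the part that keeps failing
    # ...then upload the result back to packagecloud so later jobs take the fast path again
fi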


Ed

PS.. I’ve done a hand upload of the vpp-ext-deps package for rpm so centos 
builds SHOULD be working again…at least until
vpp-ext-deps rev’s up (unless we get a fix in place first)


On Jul 1, 2019, at 11:52 AM, Florin Coras <fcoras.li...@gmail.com> wrote:

It seems the centos package expired in packagecloud. Ed is working on the 
problem.

Florin

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13416): https://lists.fd.io/g/vpp-dev/message/13416
Mute This Topic: https://lists.fd.io/mt/32277567/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] vpp queue backup

2019-06-26 Thread Ed Kern via Lists.Fd.Io

A transient latency spike caused the build queue to spike, with some jobs left 
hanging.  
I cleaned up the turds it left behind and the queue appears to be on its way 
back down
(in the 20s, down from the 70s).


just an fyi..

Ed
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13381): https://lists.fd.io/g/vpp-dev/message/13381
Mute This Topic: https://lists.fd.io/mt/32215894/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] gcc8 test runner

2019-06-10 Thread Ed Kern via Lists.Fd.Io

Apologies on not sending this update earlier…

As an AI from the last vpp call I added the job

vpp-verify-gcc8-master-ubuntu1804  to the verify set on 5/29 and it
has been running ever since.

All the failures I have tracked also failed on the gcc7 path.

At this point I am in full agreement with dwallace that the problems we saw
before were NOT specific/isolated to gcc8 as they first appeared to be.   

So I'm declaring 'post hoc ergo propter hoc' at this point.

dave:   I'd like to quickly discuss this in the vpp call tomorrow: either 
continue gcc8 testing or
just roll back to it as primary for ubuntu bionic builds.

thanks,

Ed


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13249): https://lists.fd.io/g/vpp-dev/message/13249
Mute This Topic: https://lists.fd.io/mt/32006113/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] x86 ubuntu bionic gcc repointed from gcc-8 to gcc7

2019-05-22 Thread Ed Kern via Lists.Fd.Io

While debugging both a csit compilation problem and also some recent failures 
in bionic verify and merge jobs
I have re-pointed gcc from gcc8 back down to gcc7 to see if that mitigates the 
problem.

If it does not I will be rebuilding the X86 ubuntu image without gcc-8 (and its 
required packages) installed to
see if that helps the issue.

This is done after consulting with pmikus and dave wallace.
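
For the curious, the repoint is conceptually just the usual alternatives dance, something like this (illustrative; the actual image build may do it differently):

# make gcc-7 the preferred default while keeping gcc-8 installed
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-7 100
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-8 50
gcc --version    # confirm the builds now pick up gcc 7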

Ed


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13116): https://lists.fd.io/g/vpp-dev/message/13116
Mute This Topic: https://lists.fd.io/mt/31722332/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] gcc8 now default on all bionic builds x86 and arm

2019-05-14 Thread Ed Kern via Lists.Fd.Io

Just to close the AI from the vpp call this morning I’ve upgraded (and made 
default) the gcc used
for vpp builds to gcc8.

I don’t expect any issues, and the last few builds with the new image have been fine.

So hopefully this is just an FYI that folks can happily ignore..

Ed


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13027): https://lists.fd.io/g/vpp-dev/message/13027
Mute This Topic: https://lists.fd.io/mt/31624052/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] arm builds failing..and not running

2019-05-13 Thread Ed Kern via Lists.Fd.Io
it is still a problem.. (or I have no reason to think it isn’t)…
that job went through because I re-rebuilt the image to take in the packagecloud fix
but retain the older versions of software.. (including, but again not pointing 
the finger at, gcc specifically)

So I’ll need someone to work with on why the arm builds fail with the latest 
ubuntu 18.04 list of packages.

I have a ticket open….pending vanessa making the change you can use the job

https://jenkins.fd.io/sandbox/job/vpp-arm-verify-master-ubuntu1804/

which I am repointing to use that ’newer’ build image so you can easily see the 
failures and check possible fixes.

Ed

On May 13, 2019, at 9:41 AM, Sirshak Das <sirshak@arm.com> wrote:

Hi Ed,
Is this still a problem ?
I am looking at one of the recent patches with merge job, it seems to be 
working fine.
https://gerrit.fd.io/r/#/c/19532/

Thank you
Sirshak Das
From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Ed Kern via Lists.Fd.Io
Sent: Friday, May 10, 2019 1:48 PM
To: vpp-dev <vpp-dev@lists.fd.io>
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] arm builds failing..and not running

TLDR:  Hopefully arm issues are resolved so jobs should correctly verify (they 
work in sandbox, production test still running)




Two different issues:

JNLP connect failures:   cause unconfirmed but certainly I was seeing latency 
issues between jenkins slave and master varying between
1ms and 2000ms.   These latency issues in the past have caused issues similar 
issues to what we have seen this morning.   I have opened
a ticket with the LF about the latency spikes and also inquired about what if 
any latency monitoring exists to track these problems in the future.



ARM-compile failure:   In resolving arm-merge issues earlier this week the arm 
build image was rebuilt.   As part of rebuilding (which is normal)
ubuntu packages (including gcc) were updated to the latest released versions.   
 There seem to be an issue with one of those updates (I’m not
going to say it is gcc, only that gcc was one of the packages that were 
updated)  that cause that panic during the build.
Hopefully one of the arm knowledgable folks can work with me to try and debug 
and work out this problem.  It is 100% reproducible after a simple
apt-get upgrade so I’m optimistic that this can be resolved without too much 
trouble.


Ill continue to watch throughout the weekend to make sure the latency problem 
doesn’t pop back up.


thanks,

Ed

PS thanks for dave wallace for the extra set of eyes and thoughts during the 
debug


On May 10, 2019, at 9:07 AM, Ed Kern via Lists.Fd.Io 
mailto:ejk=cisco@lists.fd.io>> wrote:



Two different issues currently with arm builds….
there is a failure to launch due to JNLP connect that has cropped up in the 
last day.

But when the run does actually start we are getting these failures on master…


09:02:13 [144/1770] Building C object 
vlib/CMakeFiles/vlib_thunderx2t99.dir/punt_node.c.o

09:02:13 FAILED: vlib/CMakeFiles/vlib_thunderx2t99.dir/punt_node.c.o

09:02:13 ccache /usr/lib/ccache/cc -DHAVE_MEMFD_CREATE 
-I/w/workspace/vpp-arm-verify-master-ubuntu1804/src -I. -Iinclude 
-Wno-address-of-packed-member -march=armv8-a+crc -g -O2 -DFORTIFY_SOURCE=2 
-fstack-protector -fPIC -Werror -fPIC   -DCLIB_MARCH_VARIANT=thunderx2t99 -Wall 
-fno-common -march=armv8.1-a+crc+crypto -mtune=thunderx2t99 
-DCLIB_N_PREFETCHES=8 -MD -MT 
vlib/CMakeFiles/vlib_thunderx2t99.dir/punt_node.c.o -MF 
vlib/CMakeFiles/vlib_thunderx2t99.dir/punt_node.c.o.d -o 
vlib/CMakeFiles/vlib_thunderx2t99.dir/punt_node.c.o   -c 
/w/workspace/vpp-arm-verify-master-ubuntu1804/src/vlib/punt_node.c

09:02:13 /w/workspace/vpp-arm-verify-master-ubuntu1804/src/vlib/punt_node.c: In 
function ‘punt_dispatch_node_fn_thunderx2t99’:

09:02:13 
/w/workspace/vpp-arm-verify-master-ubuntu1804/src/vlib/punt_node.c:280:1: 
internal compiler error: Segmentation fault

09:02:13  }

09:02:13  ^

09:02:13 Please submit a full bug report,

09:02:13 with preprocessed source if appropriate.




So Ill keep working on the connect issue…but even if those calm down or are 
worked out Im not sure things are going to pass regardless.

Ed


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12984): https://lists.fd.io/g/vpp-dev/message/12984
Mute This Topic: https://lists.fd.io/mt/31577929/675649
Group Owner: vpp-dev+ow...@lists.fd.io<mailto:vpp-dev+ow...@lists.fd.io>
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  
[e...@cisco.com<mailto:e...@cisco.com>]
-=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13009): https://lists.fd.io/g/vpp-dev/message/13009
Mute This Topic: https://lists.fd.io/mt/31577929/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] arm builds failing..and not running

2019-05-10 Thread Ed Kern via Lists.Fd.Io
TLDR:  Hopefully arm issues are resolved so jobs should correctly verify (they 
work in sandbox, production test still running)




Two different issues:

JNLP connect failures:   cause unconfirmed, but I was certainly seeing latency 
between the jenkins slave and master varying between
1ms and 2000ms.   These latency spikes have in the past caused issues similar 
to what we have seen this morning.   I have opened
a ticket with the LF about the latency spikes and also inquired about what, if 
any, latency monitoring exists to track these problems in the future.



ARM-compile failure:   In resolving arm-merge issues earlier this week the arm 
build image was rebuilt.   As part of rebuilding (which is normal)
ubuntu packages (including gcc) were updated to the latest released versions.   
 There seems to be an issue with one of those updates (I’m not
going to say it is gcc, only that gcc was one of the packages that were 
updated)  that causes that panic during the build.
Hopefully one of the arm-knowledgeable folks can work with me to try to debug 
and work out this problem.  It is 100% reproducible after a simple
apt-get upgrade, so I’m optimistic that this can be resolved without too much 
trouble.
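
In the meantime, one way to keep a routine image rebuild from pulling the suspect toolchain forward again is to hold it (sketch only; the exact package names depend on what the image actually installs):

# pin the currently-known-good compiler packages so 'apt-get upgrade' leaves them alone
sudo apt-mark hold gcc g++ cpp
apt-mark showhold    # verify what is pinned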


I'll continue to watch throughout the weekend to make sure the latency problem 
doesn’t pop back up.


thanks,

Ed

PS thanks to dave wallace for the extra set of eyes and thoughts during the 
debug

On May 10, 2019, at 9:07 AM, Ed Kern via Lists.Fd.Io <ejk=cisco@lists.fd.io> wrote:



Two different issues currently with arm builds….
there is a failure to launch due to JNLP connect that has cropped up in the 
last day.

But when the run does actually start we are getting these failures on master…


09:02:13 [144/1770] Building C object 
vlib/CMakeFiles/vlib_thunderx2t99.dir/punt_node.c.o
09:02:13 FAILED: vlib/CMakeFiles/vlib_thunderx2t99.dir/punt_node.c.o
09:02:13 ccache /usr/lib/ccache/cc -DHAVE_MEMFD_CREATE 
-I/w/workspace/vpp-arm-verify-master-ubuntu1804/src -I. -Iinclude 
-Wno-address-of-packed-member -march=armv8-a+crc -g -O2 -DFORTIFY_SOURCE=2 
-fstack-protector -fPIC -Werror -fPIC   -DCLIB_MARCH_VARIANT=thunderx2t99 -Wall 
-fno-common -march=armv8.1-a+crc+crypto -mtune=thunderx2t99 
-DCLIB_N_PREFETCHES=8 -MD -MT 
vlib/CMakeFiles/vlib_thunderx2t99.dir/punt_node.c.o -MF 
vlib/CMakeFiles/vlib_thunderx2t99.dir/punt_node.c.o.d -o 
vlib/CMakeFiles/vlib_thunderx2t99.dir/punt_node.c.o   -c 
/w/workspace/vpp-arm-verify-master-ubuntu1804/src/vlib/punt_node.c
09:02:13 /w/workspace/vpp-arm-verify-master-ubuntu1804/src/vlib/punt_node.c: In 
function ‘punt_dispatch_node_fn_thunderx2t99’:
09:02:13 
/w/workspace/vpp-arm-verify-master-ubuntu1804/src/vlib/punt_node.c:280:1: 
internal compiler error: Segmentation fault
09:02:13  }
09:02:13  ^
09:02:13 Please submit a full bug report,
09:02:13 with preprocessed source if appropriate.



So Ill keep working on the connect issue…but even if those calm down or are 
worked out Im not sure things are going to pass regardless.

Ed


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12984): https://lists.fd.io/g/vpp-dev/message/12984
Mute This Topic: https://lists.fd.io/mt/31577929/675649
Group Owner: vpp-dev+ow...@lists.fd.io<mailto:vpp-dev+ow...@lists.fd.io>
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  
[e...@cisco.com<mailto:e...@cisco.com>]
-=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12993): https://lists.fd.io/g/vpp-dev/message/12993
Mute This Topic: https://lists.fd.io/mt/31577929/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] arm builds failing..and not running

2019-05-10 Thread Ed Kern via Lists.Fd.Io


Two different issues currently with arm builds….
there is a failure to launch due to JNLP connect that has cropped up in the 
last day.

But when the run does actually start we are getting these failures on master…


09:02:13 [144/1770] Building C object 
vlib/CMakeFiles/vlib_thunderx2t99.dir/punt_node.c.o
09:02:13 FAILED: vlib/CMakeFiles/vlib_thunderx2t99.dir/punt_node.c.o
09:02:13 ccache /usr/lib/ccache/cc -DHAVE_MEMFD_CREATE 
-I/w/workspace/vpp-arm-verify-master-ubuntu1804/src -I. -Iinclude 
-Wno-address-of-packed-member -march=armv8-a+crc -g -O2 -DFORTIFY_SOURCE=2 
-fstack-protector -fPIC -Werror -fPIC   -DCLIB_MARCH_VARIANT=thunderx2t99 -Wall 
-fno-common -march=armv8.1-a+crc+crypto -mtune=thunderx2t99 
-DCLIB_N_PREFETCHES=8 -MD -MT 
vlib/CMakeFiles/vlib_thunderx2t99.dir/punt_node.c.o -MF 
vlib/CMakeFiles/vlib_thunderx2t99.dir/punt_node.c.o.d -o 
vlib/CMakeFiles/vlib_thunderx2t99.dir/punt_node.c.o   -c 
/w/workspace/vpp-arm-verify-master-ubuntu1804/src/vlib/punt_node.c
09:02:13 /w/workspace/vpp-arm-verify-master-ubuntu1804/src/vlib/punt_node.c: In 
function ‘punt_dispatch_node_fn_thunderx2t99’:
09:02:13 
/w/workspace/vpp-arm-verify-master-ubuntu1804/src/vlib/punt_node.c:280:1: 
internal compiler error: Segmentation fault
09:02:13  }
09:02:13  ^
09:02:13 Please submit a full bug report,
09:02:13 with preprocessed source if appropriate.



So I'll keep working on the connect issue…but even if those calm down or are 
worked out, I'm not sure things are going to pass regardless.

Ed


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12984): https://lists.fd.io/g/vpp-dev/message/12984
Mute This Topic: https://lists.fd.io/mt/31577929/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] centos ninja failures

2019-05-08 Thread Ed Kern via Lists.Fd.Io


On May 8, 2019, at 8:36 AM, Thomas F Herbert <therb...@redhat.com> wrote:


OK,  ninja build failures are intermittent?


ok.. having looked way back in the build history, intermittent is not the word I 
would choose.


Other changes are not hitting this issue at all..

This specific change
https://gerrit.fd.io/r/#/c/19350/

seems to have a 100% hit rate.



I am trying to understand how to isolate the problem.

Can it be triggered by doing retest while a current build is in progress?

not clear to me what you mean here since there is no make test running for 
centos here…


Ed



--Tom

On 05/08/2019 10:25 AM, Ed Kern via Lists.Fd.Io wrote:

starting new thread because it really has nothing to do with previous thread…

So I am not going to write:
a.  its related to your code
b. its not the build infra


But I will say there have been good centos verify runs…

BEFORE your patches
BETWEEN various revisions of your changes
BETWEEN retries of your changes
AFTER you stopped trying to retry your changes

With multiple different patches…

So not really having anything to track on the build infra side (nor is this a 
voting issue)  Im starting
a new thread and including tom to see if he has any centos guesses on why we 
are only seeing
problems on centos.

Ed





On May 8, 2019, at 2:46 AM, Yu, Ping <ping...@intel.com> wrote:

Recently centOS build always fail and patch review is blocked even retrying 
several times.


16:17:11 -- Configuring done
16:17:12 -- Generating done
16:17:12 -- Build files have been written to: 
/w/workspace/vpp-verify-master-centos7/build-root/rpmbuild/vpp-19.08/build-root/build-vpp-native/vpp
16:17:12 ninja: error: manifest 'build.ninja' still dirty after 100 tries
16:17:12
16:17:12 make[4]: *** [Makefile:695: vpp-build] Error 1
16:17:12 make[4]: Leaving directory 
'/w/workspace/vpp-verify-master-centos7/build-root/rpmbuild/vpp-19.08/build-root'
16:17:12 make[3]: *** [Makefile:931: install-packages] Error 1
16:17:12 make[3]: Leaving directory 
'/w/workspace/vpp-verify-master-centos7/build-root/rpmbuild/vpp-19.08/build-root'
16:17:12 error: Bad exit status from /var/tmp/rpm-tmp.8TtZeQ (%build)
16:17:12
16:17:12
16:17:12 RPM build errors:
16:17:12 Bad exit status from /var/tmp/rpm-tmp.8TtZeQ (%build)
16:17:12 make[2]: *** [RPM] Error 1
16:17:12 make[2]: Leaving directory 
`/w/workspace/vpp-verify-master-centos7/extras/rpm'
16:17:12 make[1]: *** [pkg-rpm] Error 2
16:17:12 make[1]: Leaving directory `/w/workspace/vpp-verify-master-centos7'
16:17:12 make: *** [verify] Error 2
16:17:12 Build step 'Execute shell' marked build as failure


-Original Message-
From: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io> 
[mailto:vpp-dev@lists.fd.io] On Behalf Of Vratko Polak -X (vrpolak - PANTHEON 
TECHNOLOGIES at Cisco) via Lists.Fd.Io
Sent: Tuesday, May 7, 2019 4:21 PM
To: Ed Kern (ejk) <mailto:e...@cisco.com>; Klement Sekera -X 
(ksekera - PANTHEON TECHNOLOGIES at Cisco) 
<mailto:ksek...@cisco.com>
Cc: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] false verified + 1 for gerrit jobs



reduce the number of retries (or possibly completely)


+1 for completely.

Vratko.

-Original Message-
From: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io> 
<mailto:vpp-dev@lists.fd.io> On Behalf Of Ed Kern via 
Lists.Fd.Io
Sent: Tuesday, 2019-May-07 00:34
To: Klement Sekera -X (ksekera - PANTHEON TECHNOLOGIES at Cisco) 
<mailto:ksek...@cisco.com>
Cc: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] false verified + 1 for gerrit jobs

This is a known issue with how the retry mechanic interacts (badly) with gerrit 
occasionally.  This odds of this happening were a bit higher over the last
couple of days specific to centos retries.   This is tied to the JNLP changes
made to csit and how the initial connection is made.   While the change
has improved JNLP issues across the board there are a couple new nits
that ill have to debug.

At this point though I’m not going to touch/debug further until the csit 
jobs/reports are completed.

Im actually hopeful that ill be able to reduce the number of retries (or 
possibly completely) in the long run with the HAProxy bypass in place.

Ed






On May 6, 2019, at 11:18 AM, Klement Sekera -X (ksekera - PANTHEON TECHNOLOGIES 
at Cisco) <mailto:ksek...@cisco.com> wrote:

Hi all,

I noticed a job getting a +1 even though some of the builds failed ...

https://gerrit.fd.io/r/#/c/18444/

please note patch set 9 

fd.io JJB
Patch Set 9: Verified-1 Build Failed
https://jenkins.fd.io/job/vpp-arm-verify-master-ubuntu1804/2459/ :
FAILURE No problems were identified. If you know why this problem
occurred, please add a suitable Cause for it. (
https://jenkins.fd.io/job/vpp-arm-verify-master-ubuntu1804/2459/ )
Logs:
https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-

[vpp-dev] centos ninja failures

2019-05-08 Thread Ed Kern via Lists.Fd.Io
starting new thread because it really has nothing to do with previous thread…

So I am not going to write:
a.  it's related to your code
b.  it's not the build infra


But I will say there have been good centos verify runs…

BEFORE your patches
BETWEEN various revisions of your changes
BETWEEN retries of your changes
AFTER you stopped trying to retry your changes

With multiple different patches…

So, not really having anything to track on the build infra side (nor is this a 
voting issue), I'm starting
a new thread and including tom to see if he has any centos guesses on why we 
are only seeing
problems on centos.

Ed



> On May 8, 2019, at 2:46 AM, Yu, Ping  wrote:
> 
> Recently centOS build always fail and patch review is blocked even retrying 
> several times. 
> 
> 
> 16:17:11 -- Configuring done
> 16:17:12 -- Generating done
> 16:17:12 -- Build files have been written to: 
> /w/workspace/vpp-verify-master-centos7/build-root/rpmbuild/vpp-19.08/build-root/build-vpp-native/vpp
> 16:17:12 ninja: error: manifest 'build.ninja' still dirty after 100 tries
> 16:17:12 
> 16:17:12 make[4]: *** [Makefile:695: vpp-build] Error 1
> 16:17:12 make[4]: Leaving directory 
> '/w/workspace/vpp-verify-master-centos7/build-root/rpmbuild/vpp-19.08/build-root'
> 16:17:12 make[3]: *** [Makefile:931: install-packages] Error 1
> 16:17:12 make[3]: Leaving directory 
> '/w/workspace/vpp-verify-master-centos7/build-root/rpmbuild/vpp-19.08/build-root'
> 16:17:12 error: Bad exit status from /var/tmp/rpm-tmp.8TtZeQ (%build)
> 16:17:12 
> 16:17:12 
> 16:17:12 RPM build errors:
> 16:17:12 Bad exit status from /var/tmp/rpm-tmp.8TtZeQ (%build)
> 16:17:12 make[2]: *** [RPM] Error 1
> 16:17:12 make[2]: Leaving directory 
> `/w/workspace/vpp-verify-master-centos7/extras/rpm'
> 16:17:12 make[1]: *** [pkg-rpm] Error 2
> 16:17:12 make[1]: Leaving directory `/w/workspace/vpp-verify-master-centos7'
> 16:17:12 make: *** [verify] Error 2
> 16:17:12 Build step 'Execute shell' marked build as failure
> 
> 
> -Original Message-
> From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of Vratko 
> Polak -X (vrpolak - PANTHEON TECHNOLOGIES at Cisco) via Lists.Fd.Io
> Sent: Tuesday, May 7, 2019 4:21 PM
> To: Ed Kern (ejk) ; Klement Sekera -X (ksekera - PANTHEON 
> TECHNOLOGIES at Cisco) 
> Cc: vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] false verified + 1 for gerrit jobs
> 
>> reduce the number of retries (or possibly completely)
> 
> +1 for completely.
> 
> Vratko.
> 
> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Ed Kern via 
> Lists.Fd.Io
> Sent: Tuesday, 2019-May-07 00:34
> To: Klement Sekera -X (ksekera - PANTHEON TECHNOLOGIES at Cisco) 
> 
> Cc: vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] false verified + 1 for gerrit jobs
> 
> This is a known issue with how the retry mechanic interacts (badly) with 
> gerrit occasionally.  This odds of this happening were a bit higher over the 
> last
> couple of days specific to centos retries.   This is tied to the JNLP changes
> made to csit and how the initial connection is made.   While the change
> has improved JNLP issues across the board there are a couple new nits
> that ill have to debug.
> 
> At this point though I’m not going to touch/debug further until the csit 
> jobs/reports are completed.
> 
> Im actually hopeful that ill be able to reduce the number of retries (or 
> possibly completely) in the long run with the HAProxy bypass in place.
> 
> Ed
> 
> 
> 
> 
>> On May 6, 2019, at 11:18 AM, Klement Sekera -X (ksekera - PANTHEON 
>> TECHNOLOGIES at Cisco)  wrote:
>> 
>> Hi all,
>> 
>> I noticed a job getting a +1 even though some of the builds failed ...
>> 
>> https://gerrit.fd.io/r/#/c/18444/
>> 
>> please note patch set 9 
>> 
>> fd.io JJB
>> Patch Set 9: Verified-1 Build Failed
>> https://jenkins.fd.io/job/vpp-arm-verify-master-ubuntu1804/2459/ :
>> FAILURE No problems were identified. If you know why this problem 
>> occurred, please add a suitable Cause for it. ( 
>> https://jenkins.fd.io/job/vpp-arm-verify-master-ubuntu1804/2459/ )
>> Logs:
>> https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-arm-verify-mas
>> ter-ubuntu1804/2459
>> https://jenkins.fd.io/job/vpp-beta-verify-master-ubuntu1804/6891/ :
>> FAILURE No problems were identified. If you know why this problem 
>> occurred, please add a suitable Cause for it. ( 
>> https://jenkins.fd.io/job/vpp-beta-verify-master-ubuntu1804/6891/ )
>> Logs:
>> https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-beta-verify-ma
>> ster-ubuntu1804/6891
>

Re: [vpp-dev] false verified + 1 for gerrit jobs

2019-05-06 Thread Ed Kern via Lists.Fd.Io
This is a known issue with how the retry mechanic occasionally interacts (badly) with gerrit.
The odds of this happening were a bit higher over the last
couple of days, specifically on centos retries.   This is tied to the JNLP changes
made to csit and how the initial connection is made.   While the change
has improved JNLP issues across the board, there are a couple of new nits
that I'll have to debug.

At this point though I’m not going to touch/debug further until the csit 
jobs/reports
are completed.

I'm actually hopeful that I'll be able to reduce the number of retries (or 
possibly remove them completely)
in the long run with the HAProxy bypass in place.

Ed




> On May 6, 2019, at 11:18 AM, Klement Sekera -X (ksekera - PANTHEON 
> TECHNOLOGIES at Cisco)  wrote:
> 
> Hi all,
> 
> I noticed a job getting a +1 even though some of the builds failed ...
> 
> https://gerrit.fd.io/r/#/c/18444/
> 
> please note patch set 9 
> 
> fd.io JJB
> Patch Set 9: Verified-1 Build Failed
> https://jenkins.fd.io/job/vpp-arm-verify-master-ubuntu1804/2459/ :
> FAILURE No problems were identified. If you know why this problem
> occurred, please add a suitable Cause for it. (
> https://jenkins.fd.io/job/vpp-arm-verify-master-ubuntu1804/2459/ )
> Logs:
> https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-arm-verify-master-ubuntu1804/2459
> https://jenkins.fd.io/job/vpp-beta-verify-master-ubuntu1804/6891/ :
> FAILURE No problems were identified. If you know why this problem
> occurred, please add a suitable Cause for it. (
> https://jenkins.fd.io/job/vpp-beta-verify-master-ubuntu1804/6891/ )
> Logs:
> https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-beta-verify-master-ubuntu1804/6891
> https://jenkins.fd.io/job/vpp-csit-verify-device-master-1n-skx/788/ :
> SUCCESS (skipped) Logs:
> https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-csit-verify-device-master-1n-skx/788
> https://jenkins.fd.io/job/vpp-make-test-docs-verify-master/13041/ :
> SUCCESS Logs:
> https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-make-test-docs-verify-master/13041
> https://jenkins.fd.io/job/vpp-verify-master-ubuntu1604/19027/ :
> SUCCESS Logs:
> https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-ubuntu1604/19027
> https://jenkins.fd.io/job/vpp-verify-master-clang/6731/ : SUCCESS
> Logs:
> https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-clang/6731
> https://jenkins.fd.io/job/vpp-docs-verify-master/15343/ : SUCCESS
> Logs:
> https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-docs-verify-master/15343
> https://jenkins.fd.io/job/vpp-verify-master-centos7/18764/ : NOT_BUILT
> Logs:
> https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-centos7/18764
> 7:11 PM
> fd.io JJB
> Patch Set 9: Verified+1 Build Successful
> https://jenkins.fd.io/job/vpp-verify-master-centos7/18764/ : SUCCESS
> Logs:
> https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-centos7/18764
> 7:15 PM
> 
> Thanks,
> Klement

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12942): https://lists.fd.io/g/vpp-dev/message/12942
Mute This Topic: https://lists.fd.io/mt/31522291/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] jenkins restart for jnlp csit errors..

2019-05-03 Thread Ed Kern via Lists.Fd.Io
Jenkins restart complete… I'll be watching jobs for the next few hours to make
sure the ha-proxy bypass doesn't cause other issues.

have a nice day…

Ed



> On May 3, 2019, at 10:54 AM, Ed Kern via Lists.Fd.Io 
>  wrote:
> 
> Jenkins has been in shutdown mode for over an hour… I'm going to hand-kill the 
> remaining jobs so we can get this restart done.
> 
> FYI
> 
> :)
> 
> 
> Ed-=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
> 
> View/Reply Online (#12922): https://lists.fd.io/g/vpp-dev/message/12922
> Mute This Topic: https://lists.fd.io/mt/31485753/675649
> Group Owner: vpp-dev+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [e...@cisco.com]
> -=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12923): https://lists.fd.io/g/vpp-dev/message/12923
Mute This Topic: https://lists.fd.io/mt/31485753/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] jenkins restart for jnlp csit errors..

2019-05-03 Thread Ed Kern via Lists.Fd.Io
Jenkins has been in shutdown mode for over an hour… I'm going to hand-kill the 
remaining jobs so we can get this restart done.

FYI

:)


Ed-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12922): https://lists.fd.io/g/vpp-dev/message/12922
Mute This Topic: https://lists.fd.io/mt/31485753/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] TESTING in progress: May see job failures over the next 2 hours

2019-05-02 Thread Ed Kern via Lists.Fd.Io
Done testing… depending on how the csit jobs run (or stop) this evening, we may 
have to restart
Jenkins Friday morning to have a Jenkins configuration change become active.

I'll follow up tomorrow morning.

Ed




> On May 2, 2019, at 12:06 PM, Ed Kern via Lists.Fd.Io 
>  wrote:
> 
> testing is continuing on and off but to this point I don’t think we have seen 
> job failures only
> jobs occasionally queuing ….
> 
> debug continues
> 
> Ed
> 
> 
> 
>> On May 2, 2019, at 10:19 AM, Ed Kern via Lists.Fd.Io 
>>  wrote:
>> 
>> 
>> In an attempt to work around the LF   HA-proxy to debug csit job 
>> connectivity issues I have to
>> ‘hardwire’ connections (or attempt to do so at any rate) directly back to 
>> the jenkins IP.
>> 
>> No real idea if it will work or if there will be any side impact…ill be hand 
>> cranking a job or two
>> in an attempt to test….
>> 
>> 
>> Ill send another note if I finish early or if I need to extend past two 
>> hours.
>> 
>> thanks,
>> 
>> Ed
>> 
>> 
>> -=-=-=-=-=-=-=-=-=-=-=-
>> Links: You receive all messages sent to this group.
>> 
>> View/Reply Online (#12911): https://lists.fd.io/g/vpp-dev/message/12911
>> Mute This Topic: https://lists.fd.io/mt/31456357/675649
>> Group Owner: vpp-dev+ow...@lists.fd.io
>> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [e...@cisco.com]
>> -=-=-=-=-=-=-=-=-=-=-=-
> 
> -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
> 
> View/Reply Online (#12913): https://lists.fd.io/g/vpp-dev/message/12913
> Mute This Topic: https://lists.fd.io/mt/31456357/675649
> Group Owner: vpp-dev+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [e...@cisco.com]
> -=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12915): https://lists.fd.io/g/vpp-dev/message/12915
Mute This Topic: https://lists.fd.io/mt/31456357/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] TESTING in progress: May see job failures over the next 2 hours

2019-05-02 Thread Ed Kern via Lists.Fd.Io
Testing is continuing on and off, but to this point I don't think we have seen 
job failures, only jobs occasionally queuing…

debug continues

Ed



> On May 2, 2019, at 10:19 AM, Ed Kern via Lists.Fd.Io 
>  wrote:
> 
> 
> In an attempt to work around the LF   HA-proxy to debug csit job connectivity 
> issues I have to
> ‘hardwire’ connections (or attempt to do so at any rate) directly back to the 
> jenkins IP.
> 
> No real idea if it will work or if there will be any side impact…ill be hand 
> cranking a job or two
> in an attempt to test….
> 
> 
> Ill send another note if I finish early or if I need to extend past two hours.
> 
> thanks,
> 
> Ed
> 
> 
> -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
> 
> View/Reply Online (#12911): https://lists.fd.io/g/vpp-dev/message/12911
> Mute This Topic: https://lists.fd.io/mt/31456357/675649
> Group Owner: vpp-dev+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [e...@cisco.com]
> -=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12913): https://lists.fd.io/g/vpp-dev/message/12913
Mute This Topic: https://lists.fd.io/mt/31456357/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] TESTING in progress: May see job failures over the next 2 hours

2019-05-02 Thread Ed Kern via Lists.Fd.Io

In an attempt to work around the LF HA-proxy to debug csit job connectivity 
issues, I have to 'hardwire' connections (or attempt to do so, at any rate) 
directly back to the Jenkins IP.
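
For anyone curious, 'hardwire' in practice means something roughly like the
following on the agent side (a sketch only; the actual Jenkins master address is
left as a placeholder here):

  # <jenkins-master-ip> is a placeholder, not the real address
  echo '<jenkins-master-ip>  jenkins.fd.io' | sudo tee -a /etc/hosts
  getent hosts jenkins.fd.io   # confirm which address the agent will now use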

No real idea if it will work or if there will be any side impact… I'll be 
hand-cranking a job or two in an attempt to test.


I'll send another note if I finish early or if I need to extend past two hours.

thanks,

Ed


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12911): https://lists.fd.io/g/vpp-dev/message/12911
Mute This Topic: https://lists.fd.io/mt/31456357/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VPP 19.04 centos7 RPM package dependency errors

2019-04-11 Thread Ed Kern via Lists.Fd.Io


On Apr 11, 2019, at 2:22 PM, Florin Coras 
mailto:fcoras.li...@gmail.com>> wrote:

Are we building rpms on a host that has mbedtls installed?

Yes, the centos container has mbedtls installed… it has for quite some time, since 
it is a listed prerequisite straight out of the Makefile.
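
If we do want to try what Florin suggests, a rough sketch on the centos build
container would be (the exact package name is an assumption based on the
dependency errors below; check what rpm actually reports):

  rpm -qa | grep -i mbed              # see which mbedtls packages are present
  sudo yum remove -y mbedtls-devel    # name assumed; use whatever rpm -qa shows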

Ed




Just uninstalling it should disable the tlsmbedtls plugin, which is what we 
pretty much want.

Florin

On Apr 11, 2019, at 1:00 PM, Dave Wallace 
mailto:dwallac...@gmail.com>> wrote:

Tom/Billy,

Do you know what the current status is with the VPP rpm's on master/19.04?

I'm trying to install the VPP 19.04 packages from 
packagecloud.io on a centos7 Vagrant/virtualbox VM 
(using .../vpp/extras/vagrant/Vagrantfile) to verify that the packages are 
correct.  The 19.01 packages install fine, but the 19.04 and master packages 
fail with the following dependency errors:

--> Finished Dependency Resolution
Error: Package: vpp-plugins-19.04-rc1~b4.x86_64 (fdio_1904)
   Requires: libmbedtls.so.10()(64bit)
Error: Package: vpp-devel-19.04-rc1~b4.x86_64 (fdio_1904)
   Requires: /usr/bin/python3
Error: Package: vpp-19.04-rc1~b4.x86_64 (fdio_1904)
   Requires: /usr/bin/python3
Error: Package: vpp-plugins-19.04-rc1~b4.x86_64 (fdio_1904)
   Requires: libmbedx509.so.0()(64bit)
Error: Package: vpp-plugins-19.04-rc1~b4.x86_64 (fdio_1904)
   Requires: libmbedcrypto.so.2()(64bit)
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest

Thanks,
-daw-
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12768): https://lists.fd.io/g/vpp-dev/message/12768
Mute This Topic: https://lists.fd.io/mt/31034762/675152
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  
[fcoras.li...@gmail.com]
-=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12769): https://lists.fd.io/g/vpp-dev/message/12769
Mute This Topic: https://lists.fd.io/mt/31034762/675649
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  
[e...@cisco.com]
-=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12771): https://lists.fd.io/g/vpp-dev/message/12771
Mute This Topic: https://lists.fd.io/mt/31034762/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] jenkins maven failure

2019-04-10 Thread Ed Kern via Lists.Fd.Io
I looked into this and see nothing that leads me to believe this is a build infra 
issue.

Vanessa is looking into this now.

On Apr 10, 2019, at 7:50 AM, Dave Wallace 
mailto:dwallac...@gmail.com>> wrote:

Ben,

Have you opened a helpd...@fd.io ticket for this issue 
yet?  If not, please do.

This looks like an LF infra issue uploading the build logs.  Same error 
occurred for https://gerrit.fd.io/r/18748

Thanks,
-daw-

On 4/10/2019 9:11 AM, Benoit Ganne (bganne) via Lists.Fd.Io wrote:

Hi all,

It looks like we have some Jenkins issue. Any idea on how to fix that?

See for example 
https://jenkins.fd.io/job/vpp-checkstyle-verify-master/6719/console

14:56:21 Apr 10, 2019 12:56:21 PM 
org.apache.commons.httpclient.auth.AuthChallengeProcessor selectAuthScheme
14:56:21 INFO: basic authentication scheme selected
14:56:21 Apr 10, 2019 12:56:21 PM 
org.apache.commons.httpclient.HttpMethodDirector processWWWAuthChallenge
14:56:21 INFO: No credentials available for BASIC 'Sonatype Nexus Repository 
Manager'@nexus.fd.io:443
14:56:21 [ERROR] Could not upload file: HTTP/1.1 401 Unauthorized
14:56:21 [ERROR] Failed to execute goal 
org.sonatype.plugins:maven-upload-plugin:0.0.1:upload-file (publish-site) on 
project logs: Could not upload file: HTTP/1.1 401 Unauthorized -> [Help 1]
14:56:21 [ERROR]
14:56:21 [ERROR] To see the full stack trace of the errors, re-run Maven with 
the -e switch.
14:56:21 [ERROR] Re-run Maven using the -X switch to enable full debug logging.
14:56:21 [ERROR]
14:56:21 [ERROR] For more information about the errors and possible solutions, 
please read the following articles:
14:56:21 [ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException

Best
ben




-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12745): https://lists.fd.io/g/vpp-dev/message/12745
Mute This Topic: https://lists.fd.io/mt/31018599/675079
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  
[dwallac...@gmail.com]
-=-=-=-=-=-=-=-=-=-=-=-


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12747): https://lists.fd.io/g/vpp-dev/message/12747
Mute This Topic: https://lists.fd.io/mt/31018599/675649
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  
[e...@cisco.com]
-=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12748): https://lists.fd.io/g/vpp-dev/message/12748
Mute This Topic: https://lists.fd.io/mt/31018599/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] FYI arm cluster testing

2019-04-05 Thread Ed Kern via Lists.Fd.Io

As of a couple minutes ago I enabled eligibility on the new arm cluster build 
nodes and
disabled eligibility on the old nodes (to force the new ones to be used).

This is currently just a test that I'll be watching, but if you see abnormal arm 
failures please let me know as soon as you can.

If things look good today I’ll leave it in this state through tuesday/wed 
before ‘returning’
the old arm build nodes for other purposes.

thanks,

Ed


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12717): https://lists.fd.io/g/vpp-dev/message/12717
Mute This Topic: https://lists.fd.io/mt/30924053/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] openjdk 11 now installed in xenial/bionic/centos build containers

2019-04-01 Thread Ed Kern via Lists.Fd.Io


While it's set to continue using openjdk 8 by default, I wanted folks to be 
aware in case any side effects or potential issues crop up.

I would prefer to have only one installed (for container size, if nothing else).
Does anyone know of any requirement to keep openjdk 8 around as the default java
version?
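
For reference, a rough sketch of how the default could be flipped on the ubuntu
containers if we decide to keep only openjdk 11 (the jvm path below is the stock
Ubuntu one and is an assumption about our images):

  update-alternatives --list java      # see which JDKs are registered
  sudo update-alternatives --set java /usr/lib/jvm/java-11-openjdk-amd64/bin/java
  java -version                        # confirm the new default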


thanks,

Ed-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12686): https://lists.fd.io/g/vpp-dev/message/12686
Mute This Topic: https://lists.fd.io/mt/30861618/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] follow-up: Jenkins executors offline

2019-03-12 Thread Ed Kern via Lists.Fd.Io
Looks like another transient latency spike plugged up the drain…

While I was trying to diagnose further, magic happened (as in: it was nothing I
changed or did) and it went away.

While it may not look like it, the queue seems to be returning to normal.

Ed



On Mar 12, 2019, at 7:02 AM, Dave Barach via Lists.Fd.Io 
mailto:dbarach=cisco@lists.fd.io>> wrote:

I’ve tried pinging the LF “emergency” alias, but I haven’t received 
confirmation that my message went anywhere. Looking for back-channel ways to 
get someone working on the problem. Stay tuned...

Dave
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12495): https://lists.fd.io/g/vpp-dev/message/12495
Mute This Topic: https://lists.fd.io/mt/30401083/675649
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  
[e...@cisco.com]
-=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12497): https://lists.fd.io/g/vpp-dev/message/12497
Mute This Topic: https://lists.fd.io/mt/30401083/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] OpenSuse builds failing for stable/1901

2019-02-27 Thread Ed Kern via Lists.Fd.Io

When I saw that, Paul, it led to a few searches that (for me) ended with:

This is is not a real bug. The problem is that cmake check that it knows the 
version of boost that you are using and so every time that boost creates a new 
release, the cmake guys need to update the list of supported version. Since 
boost 1.66 was release after cmake 3.10, it is not in the list of supported 
boost versions. Everything should still work though, you just get these 
annoying warnings.

But again, if you find something else, or it turns out to be actually serious, I'm 
happy to listen, since these warnings/errors/messages are on all branches.


Ed




On Feb 27, 2019, at 2:00 PM, Paul Vinciguerra 
mailto:pvi...@vinciconsulting.com>> wrote:

Great!

I'm not convinced that it's fully fixed.  See your log: 
https://jenkins.fd.io/job/vpp-verify-1901-osleap15/81/consoleFull
I interpret these logs as we do not have feature parity across the various 
distros at the moment.

Paul

15:21:06 -- Found Threads: TRUE
15:21:06 CMake Warning at /usr/share/cmake/Modules/FindBoost.cmake:801 
(message):
15:21:06   New Boost version may have incorrect or missing dependencies and 
imported
15:21:06   targets
15:21:06 Call Stack (most recent call first):
15:21:06   /usr/share/cmake/Modules/FindBoost.cmake:906 
(_Boost_COMPONENT_DEPENDENCIES)
15:21:06   /usr/share/cmake/Modules/FindBoost.cmake:1544 
(_Boost_MISSING_DEPENDENCIES)
15:21:06   CMakeLists.txt:27 (find_package)
15:21:06
15:21:06
15:21:06 CMake Warning at /usr/share/cmake/Modules/FindBoost.cmake:801 
(message):
15:21:06   New Boost version may have incorrect or missing dependencies and 
imported
15:21:06   targets
15:21:06 Call Stack (most recent call first):
15:21:06   /usr/share/cmake/Modules/FindBoost.cmake:906 
(_Boost_COMPONENT_DEPENDENCIES)
15:21:06   /usr/share/cmake/Modules/FindBoost.cmake:1544 
(_Boost_MISSING_DEPENDENCIES)
15:21:06   CMakeLists.txt:27 (find_package)
15:21:06
15:21:06
15:21:06 -- Could NOT find Boost
15:21:06 -- Configuring done
15:21:06 -- Generating done

On Wed, Feb 27, 2019 at 3:38 PM Ed Warnicke 
mailto:hagb...@gmail.com>> wrote:
Paul,

Looks like it was a ships in the night effect on a fix going into 
stable/1901... rebase *appears* to have fixed it:

https://gerrit.fd.io/r/#/c/17884/

Ed

On Wed, Feb 27, 2019 at 2:14 PM Paul Vinciguerra 
mailto:pvi...@vinciconsulting.com>> wrote:
Ok.

Let me know if I can do anything to help!

On Wed, Feb 27, 2019 at 3:09 PM Ed Kern (ejk) 
mailto:e...@cisco.com>> wrote:
thats a perfectly good and reasonable change and I support it..

But that’s not the problem that the other ed is having right now.

Ed



On Feb 27, 2019, at 12:47 PM, Paul Vinciguerra 
mailto:pvi...@vinciconsulting.com>> wrote:

Ed,

There is an issue.  I am testing a fix.
See: https://gerrit.fd.io/r/#/c/17917/

Paul

On Wed, Feb 27, 2019 at 2:23 PM Ed Warnicke 
mailto:hagb...@gmail.com>> wrote:
Paul,

Good first thing to check... I've pushed
https://gerrit.fd.io/r/#/c/17916/

to cause the opensuse jobs to cat the /etc/os-release as you suggested, and 
rechecked: https://gerrit.fd.io/r/#/c/17884/
to see what happens :)

Ed

On Wed, Feb 27, 2019 at 12:48 PM Paul Vinciguerra 
mailto:pvi...@vinciconsulting.com>> wrote:
Can we confirm that they are truly osleap15?

An osleap box reports correctly for me:
docker run --shm-size=1024m -it opensuse/leap:leap-15 /bin/bash
ee4938c281e4:/ # cat /etc/os-release
NAME="openSUSE Leap"
VERSION="15.0 Beta"
ID="opensuse"
ID_LIKE="suse"
VERSION_ID="15.0"
PRETTY_NAME="openSUSE Leap 15.0 Beta"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:opensuse:leap:15.0"
BUG_REPORT_URL="https://bugs.opensuse.org"
HOME_URL="https://www.opensuse.org/";

On Wed, Feb 27, 2019 at 1:12 PM Edward Warnicke 
mailto:hagb...@gmail.com>> wrote:
In cherry picking some fixes back to stable/1901, I've found that the builds 
for 1901 for OpenSuse Leap seem to be failing during install-deps:

https://jenkins.fd.io/job/vpp-verify-1901-osleap15/77/console


10:50:21 No provider of 'libboost_thread1_68_0-devel-1.68.0' found.


Which looks like there is some upstream issue with a dependency no longer being 
available for libboost.

Digging deeper, it would appear that we are tripping on SUSE_NAME=Tumbleweed:

https://github.com/FDio/vpp/blob/stable/1901/Makefile#L131

ifeq ($(OS_ID),opensuse)
ifeq ($(SUSE_NAME),Tumbleweed)
RPM_SUSE_DEVEL_DEPS = libboost_headers1_68_0-devel-1.68.0  
libboost_thread1_68_0-devel-1.68.0 gcc
RPM_SUSE_PYTHON_DEPS += python2-ply python2-virtualenv
endif
ifeq ($(SUSE_ID),15.0)
RPM_SUSE_DEVEL_DEPS = libboost_headers-devel libboost_thread-devel gcc6
else
RPM_SUSE_DEVEL_DEPS += libboost_headers1_68_0-devel-1.68.0 gcc6
RPM_SUSE_PYTHON_DEPS += python-virtualenv
endif
endif

So for some reason, the servers are reporting SUSE_NAME=Tumbleweed, and are being 
asked to install the wrong packages even though they are OSLEAP 15 boxes.  
Note: this same Makefile fragment is identical on master, where the 
corresponding jobs are succeeding.
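
A quick way to sanity-check what a given build container will actually report, as
a sketch that assumes the Makefile's SUSE_NAME/SUSE_ID detection ultimately reads
these fields from /etc/os-release, the same way the OS_ID checks do:

  docker run --rm opensuse/leap:leap-15 \
    grep -E '^(ID|VERSION_ID|PRETTY_NAME)=' /etc/os-release

If the container prints Leap values here but the Makefile still takes the
Tumbleweed branch, the bug is in how the Makefile parses these fields rather than
in the image itself.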

Re: [vpp-dev] OpenSuse builds failing for stable/1901

2019-02-27 Thread Ed Kern via Lists.Fd.Io
That's a perfectly good and reasonable change and I support it.

But that's not the problem that the other Ed is having right now.

Ed



On Feb 27, 2019, at 12:47 PM, Paul Vinciguerra 
mailto:pvi...@vinciconsulting.com>> wrote:

Ed,

There is an issue.  I am testing a fix.
See: https://gerrit.fd.io/r/#/c/17917/

Paul

On Wed, Feb 27, 2019 at 2:23 PM Ed Warnicke 
mailto:hagb...@gmail.com>> wrote:
Paul,

Good first thing to check... I've pushed
https://gerrit.fd.io/r/#/c/17916/

to cause the opensuse jobs to cat the /etc/os-release as you suggested, and 
rechecked: https://gerrit.fd.io/r/#/c/17884/
to see what happens :)

Ed

On Wed, Feb 27, 2019 at 12:48 PM Paul Vinciguerra 
mailto:pvi...@vinciconsulting.com>> wrote:
Can we confirm that they are truly osleap15?

An osleap box reports correctly for me:
docker run --shm-size=1024m -it opensuse/leap:leap-15 /bin/bash
ee4938c281e4:/ # cat /etc/os-release
NAME="openSUSE Leap"
VERSION="15.0 Beta"
ID="opensuse"
ID_LIKE="suse"
VERSION_ID="15.0"
PRETTY_NAME="openSUSE Leap 15.0 Beta"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:opensuse:leap:15.0"
BUG_REPORT_URL="https://bugs.opensuse.org"
HOME_URL="https://www.opensuse.org/";

On Wed, Feb 27, 2019 at 1:12 PM Edward Warnicke 
mailto:hagb...@gmail.com>> wrote:
In cherry picking some fixes back to stable/1901, I've found that the builds 
for 1901 for OpenSuse Leap seem to be failing during install-deps:

https://jenkins.fd.io/job/vpp-verify-1901-osleap15/77/console


10:50:21 No provider of 'libboost_thread1_68_0-devel-1.68.0' found.


Which looks like there is some upstream issue with a dependency no longer being 
available for libboost.

Digging deeper, it would appear that we are tripping on SUSE_NAME=Tumbleweed:

https://github.com/FDio/vpp/blob/stable/1901/Makefile#L131

ifeq ($(OS_ID),opensuse)
ifeq ($(SUSE_NAME),Tumbleweed)
RPM_SUSE_DEVEL_DEPS = libboost_headers1_68_0-devel-1.68.0  
libboost_thread1_68_0-devel-1.68.0 gcc
RPM_SUSE_PYTHON_DEPS += python2-ply python2-virtualenv
endif
ifeq ($(SUSE_ID),15.0)
RPM_SUSE_DEVEL_DEPS = libboost_headers-devel libboost_thread-devel gcc6
else
RPM_SUSE_DEVEL_DEPS += libboost_headers1_68_0-devel-1.68.0 gcc6
RPM_SUSE_PYTHON_DEPS += python-virtualenv
endif
endif

So for some reason, the servers are reporting SUSE_NAME=Tumbelweed, and being 
asked to install the wrong packages even though they are OSLEAP 15 boxes.  
Note: this same Makefile fragment is identical on master where the 
corresponding jobs are succeeding.

Do any of the OpenSuse folks have ideas as to what could be happening here?

Ed

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12366): https://lists.fd.io/g/vpp-dev/message/12366
Mute This Topic: https://lists.fd.io/mt/30154873/1594641
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  
[pvi...@vinciconsulting.com]
-=-=-=-=-=-=-=-=-=-=-=-
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12371): https://lists.fd.io/g/vpp-dev/message/12371
Mute This Topic: https://lists.fd.io/mt/30154873/675649
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  
[e...@cisco.com]
-=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12372): https://lists.fd.io/g/vpp-dev/message/12372
Mute This Topic: https://lists.fd.io/mt/30154873/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] OpenSuse builds failing for stable/1901

2019-02-27 Thread Ed Kern via Lists.Fd.Io
Just so you know… it does already echo the version ID and OS ID in the console
log:


16:47:00 + OS_VERSION_ID=15.0
16:47:00 + echo OS_ID: opensuse
16:47:00 OS_ID: opensuse
16:47:00 + echo OS_VERSION_ID: 15.0
16:47:00 OS_VERSION_ID: 15.0
16:47:00 + hostname
16:47:00 1f2fe1a3f253
16:47:00 + export CCACHE_DIR=/tm


Ed



On Feb 27, 2019, at 11:48 AM, Paul Vinciguerra 
mailto:pvi...@vinciconsulting.com>> wrote:

Can we confirm that they are truly osleap15?

An osleap box reports correctly for me:
docker run --shm-size=1024m -it opensuse/leap:leap-15 /bin/bash
ee4938c281e4:/ # cat /etc/os-release
NAME="openSUSE Leap"
VERSION="15.0 Beta"
ID="opensuse"
ID_LIKE="suse"
VERSION_ID="15.0"
PRETTY_NAME="openSUSE Leap 15.0 Beta"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:opensuse:leap:15.0"
BUG_REPORT_URL="https://bugs.opensuse.org"
HOME_URL="https://www.opensuse.org/";

On Wed, Feb 27, 2019 at 1:12 PM Edward Warnicke 
mailto:hagb...@gmail.com>> wrote:
In cherry picking some fixes back to stable/1901, I've found that the builds 
for 1901 for OpenSuse Leap seem to be failing during install-deps:

https://jenkins.fd.io/job/vpp-verify-1901-osleap15/77/console


10:50:21 No provider of 'libboost_thread1_68_0-devel-1.68.0' found.


Which looks like there is some upstream issue with a dependency no longer being 
available for libboost.

Digging deeper, it would appear that we are tripping on SUSE_NAME=Tumbleweed:

https://github.com/FDio/vpp/blob/stable/1901/Makefile#L131

ifeq ($(OS_ID),opensuse)
ifeq ($(SUSE_NAME),Tumbleweed)
RPM_SUSE_DEVEL_DEPS = libboost_headers1_68_0-devel-1.68.0  
libboost_thread1_68_0-devel-1.68.0 gcc
RPM_SUSE_PYTHON_DEPS += python2-ply python2-virtualenv
endif
ifeq ($(SUSE_ID),15.0)
RPM_SUSE_DEVEL_DEPS = libboost_headers-devel libboost_thread-devel gcc6
else
RPM_SUSE_DEVEL_DEPS += libboost_headers1_68_0-devel-1.68.0 gcc6
RPM_SUSE_PYTHON_DEPS += python-virtualenv
endif
endif

So for some reason, the servers are reporting SUSE_NAME=Tumbelweed, and being 
asked to install the wrong packages even though they are OSLEAP 15 boxes.  
Note: this same Makefile fragment is identical on master where the 
corresponding jobs are succeeding.

Do any of the OpenSuse folks have ideas as to what could be happening here?

Ed

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12366): https://lists.fd.io/g/vpp-dev/message/12366
Mute This Topic: https://lists.fd.io/mt/30154873/1594641
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  
[pvi...@vinciconsulting.com]
-=-=-=-=-=-=-=-=-=-=-=-
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12367): https://lists.fd.io/g/vpp-dev/message/12367
Mute This Topic: https://lists.fd.io/mt/30154873/675649
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  
[e...@cisco.com]
-=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12370): https://lists.fd.io/g/vpp-dev/message/12370
Mute This Topic: https://lists.fd.io/mt/30154873/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] OpenSuse builds failing for stable/1901

2019-02-27 Thread Ed Kern via Lists.Fd.Io
When I pull your change,
I see a different Makefile than what you pasted below…

ifeq ($(SUSE_ID),15.0)
RPM_SUSE_DEVEL_DEPS = libboost_headers1_68_0-devel-1.68.0  
libboost_thread1_68_0-devel-1.68.0 gcc6
RPM_SUSE_PYTHON_DEPS += python2-ply python2-virtualenv
else
RPM_SUSE_DEVEL_DEPS +


so that will fail…

Ed




On Feb 27, 2019, at 12:23 PM, Edward Warnicke 
mailto:hagb...@gmail.com>> wrote:

Paul,

Good first thing to check... I've pushed
https://gerrit.fd.io/r/#/c/17916/

to cause the opensuse jobs to cat the /etc/os-release as you suggested, and 
rechecked: https://gerrit.fd.io/r/#/c/17884/
to see what happens :)

Ed

On Wed, Feb 27, 2019 at 12:48 PM Paul Vinciguerra 
mailto:pvi...@vinciconsulting.com>> wrote:
Can we confirm that they are truly osleap15?

An osleap box reports correctly for me:
docker run --shm-size=1024m -it opensuse/leap:leap-15 /bin/bash
ee4938c281e4:/ # cat /etc/os-release
NAME="openSUSE Leap"
VERSION="15.0 Beta"
ID="opensuse"
ID_LIKE="suse"
VERSION_ID="15.0"
PRETTY_NAME="openSUSE Leap 15.0 Beta"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:opensuse:leap:15.0"
BUG_REPORT_URL="https://bugs.opensuse.org"
HOME_URL="https://www.opensuse.org/";

On Wed, Feb 27, 2019 at 1:12 PM Edward Warnicke 
mailto:hagb...@gmail.com>> wrote:
In cherry picking some fixes back to stable/1901, I've found that the builds 
for 1901 for OpenSuse Leap seem to be failing during install-deps:

https://jenkins.fd.io/job/vpp-verify-1901-osleap15/77/console


10:50:21 No provider of 'libboost_thread1_68_0-devel-1.68.0' found.


Which looks like there is some upstream issue with a dependency no longer being 
available for libboost.

Digging deeper, it would appear that we are tripping on SUSE_NAME=Tumbleweed:

https://github.com/FDio/vpp/blob/stable/1901/Makefile#L131

ifeq ($(OS_ID),opensuse)
ifeq ($(SUSE_NAME),Tumbleweed)
RPM_SUSE_DEVEL_DEPS = libboost_headers1_68_0-devel-1.68.0  
libboost_thread1_68_0-devel-1.68.0 gcc
RPM_SUSE_PYTHON_DEPS += python2-ply python2-virtualenv
endif
ifeq ($(SUSE_ID),15.0)
RPM_SUSE_DEVEL_DEPS = libboost_headers-devel libboost_thread-devel gcc6
else
RPM_SUSE_DEVEL_DEPS += libboost_headers1_68_0-devel-1.68.0 gcc6
RPM_SUSE_PYTHON_DEPS += python-virtualenv
endif
endif

So for some reason, the servers are reporting SUSE_NAME=Tumbelweed, and being 
asked to install the wrong packages even though they are OSLEAP 15 boxes.  
Note: this same Makefile fragment is identical on master where the 
corresponding jobs are succeeding.

Do any of the OpenSuse folks have ideas as to what could be happening here?

Ed

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12366): https://lists.fd.io/g/vpp-dev/message/12366
Mute This Topic: https://lists.fd.io/mt/30154873/1594641
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  
[pvi...@vinciconsulting.com]
-=-=-=-=-=-=-=-=-=-=-=-
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12368): https://lists.fd.io/g/vpp-dev/message/12368
Mute This Topic: https://lists.fd.io/mt/30154873/675649
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  
[e...@cisco.com]
-=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12369): https://lists.fd.io/g/vpp-dev/message/12369
Mute This Topic: https://lists.fd.io/mt/30154873/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] [csit-dev] Jenkins infra outage - 100% jobs queued, VPN access very slow

2019-02-27 Thread Ed Kern via Lists.Fd.Io
update:

mnaser (vexxhost) did a 'live migration' of the jenkins.fd.io 'server', taking RTT
from >6 seconds back down to the ~100ms range.   As a result, jobs are starting up
again and the queue is grinding through the backlog and new jobs.

I'm not currently seeing the same delay on/towards other hosted services
(gerrit/nexus etc.).

I'll be watching it closely over the next few hours, but please let me know if you
see any unusual errors.

thanks,

Ed



On Feb 27, 2019, at 8:34 AM, Ed Kern via Lists.Fd.Io 
mailto:ejk=cisco@lists.fd.io>> wrote:

fyi to vpp-dev as well..


debugging with vexxhost

Begin forwarded message:

From: "Peter Mikus via Lists.Fd.Io" 
mailto:pmikus=cisco@lists.fd.io>>
Subject: [csit-dev] Jenkins infra outage - 100% jobs queued, VPN access very 
slow
Date: February 27, 2019 at 8:32:50 AM MST
To: "helpd...@fd.io<mailto:helpd...@fd.io>" 
mailto:helpd...@fd.io>>
Cc: csit-...@lists.fd.io<mailto:csit-...@lists.fd.io>
Reply-To: pmi...@cisco.com<mailto:pmi...@cisco.com>

Hello team,

We are observing jobs queueing in Jenkins.fd.io<http://jenkins.fd.io/> and also 
the VPN access to infra is very slow.

Can you please investigate?

ping 10.30.51.50
PING 10.30.51.50 (10.30.51.50) 56(84) bytes of data.
64 bytes from 10.30.51.50: icmp_seq=1 ttl=62 time=9961 ms
64 bytes from 10.30.51.50: icmp_seq=2 ttl=62 time=8928 ms
64 bytes from 10.30.51.50: icmp_seq=3 ttl=62 time=7907 ms
64 bytes from 10.30.51.50: icmp_seq=4 ttl=62 time=6887 ms
64 bytes from 10.30.51.50: icmp_seq=5 ttl=62 time=5863 ms
64 bytes from 10.30.51.50: icmp_seq=6 ttl=62 time=4834 ms
64 bytes from 10.30.51.50: icmp_seq=7 ttl=62 time=3809 ms
64 bytes from 10.30.51.50: icmp_seq=8 ttl=62 time=2790 ms
64 bytes from 10.30.51.50: icmp_seq=9 ttl=62 time=1762 ms

Thank you.

Peter Mikus
Engineer – Software
Cisco Systems Limited


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#3348): https://lists.fd.io/g/csit-dev/message/3348
Mute This Topic: https://lists.fd.io/mt/30153144/675649
Group Owner: csit-dev+ow...@lists.fd.io<mailto:csit-dev+ow...@lists.fd.io>
Unsubscribe: https://lists.fd.io/g/csit-dev/unsub  
[e...@cisco.com<mailto:e...@cisco.com>]
-=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12363): https://lists.fd.io/g/vpp-dev/message/12363
Mute This Topic: https://lists.fd.io/mt/30153159/675649
Group Owner: vpp-dev+ow...@lists.fd.io<mailto:vpp-dev+ow...@lists.fd.io>
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  
[e...@cisco.com<mailto:e...@cisco.com>]
-=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12364): https://lists.fd.io/g/vpp-dev/message/12364
Mute This Topic: https://lists.fd.io/mt/30153159/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] [csit-dev] Jenkins infra outage - 100% jobs queued, VPN access very slow

2019-02-27 Thread Ed Kern via Lists.Fd.Io
fyi to vpp-dev as well..


debugging with vexxhost

Begin forwarded message:

From: "Peter Mikus via Lists.Fd.Io" 
mailto:pmikus=cisco@lists.fd.io>>
Subject: [csit-dev] Jenkins infra outage - 100% jobs queued, VPN access very 
slow
Date: February 27, 2019 at 8:32:50 AM MST
To: "helpd...@fd.io" 
mailto:helpd...@fd.io>>
Cc: csit-...@lists.fd.io
Reply-To: pmi...@cisco.com

Hello team,

We are observing jobs queueing in Jenkins.fd.io and also 
the VPN access to infra is very slow.

Can you please investigate?

ping 10.30.51.50
PING 10.30.51.50 (10.30.51.50) 56(84) bytes of data.
64 bytes from 10.30.51.50: icmp_seq=1 ttl=62 time=9961 ms
64 bytes from 10.30.51.50: icmp_seq=2 ttl=62 time=8928 ms
64 bytes from 10.30.51.50: icmp_seq=3 ttl=62 time=7907 ms
64 bytes from 10.30.51.50: icmp_seq=4 ttl=62 time=6887 ms
64 bytes from 10.30.51.50: icmp_seq=5 ttl=62 time=5863 ms
64 bytes from 10.30.51.50: icmp_seq=6 ttl=62 time=4834 ms
64 bytes from 10.30.51.50: icmp_seq=7 ttl=62 time=3809 ms
64 bytes from 10.30.51.50: icmp_seq=8 ttl=62 time=2790 ms
64 bytes from 10.30.51.50: icmp_seq=9 ttl=62 time=1762 ms

Thank you.

Peter Mikus
Engineer – Software
Cisco Systems Limited


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#3348): https://lists.fd.io/g/csit-dev/message/3348
Mute This Topic: https://lists.fd.io/mt/30153144/675649
Group Owner: csit-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/csit-dev/unsub  
[e...@cisco.com]
-=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12363): https://lists.fd.io/g/vpp-dev/message/12363
Mute This Topic: https://lists.fd.io/mt/30153159/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] osleap job failing

2019-02-26 Thread Ed Kern via Lists.Fd.Io


Yup… opensuse removed that specific package from the repo.

You need a fix along the lines of

https://gerrit.fd.io/r/#/c/17698/
or
https://gerrit.fd.io/r/#/c/17800/

These are NOT infra problems.

Ed




> On Feb 26, 2019, at 6:30 AM, Klement Sekera via Lists.Fd.Io 
>  wrote:
> 
> Hello,
> I'm facing an issue with osleap job, it very frequently fails with
> 
> 12:56:59 'indent' is already installed.
> 12:56:59 No update candidate for 'indent-2.2.11-lp150.1.5.x86_64'. The
> highest available version is already installed.
> 12:56:59 'python3-rpm-macros' not found in package names. Trying
> capabilities.
> 12:56:59 'python-rpm-macros' providing 'python3-rpm-macros' is already
> installed.
> 12:56:59 'libboost_headers1_68_0-devel-1.68.0' not found in package
> names. Trying capabilities.
> 12:56:59 No provider of 'libboost_headers1_68_0-devel-1.68.0' found.
> 12:56:59 'libboost_thread1_68_0-devel-1.68.0' not found in package
> names. Trying capabilities.
> 12:56:59 No provider of 'libboost_thread1_68_0-devel-1.68.0' found.
> 12:56:59 make: *** [Makefile:315: install-dep] Error 104
> 12:56:59 Build step 'Execute shell' marked build as failure
> 
> any idea what's wrong?
> 
> Thanks,
> Klement
> -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
> 
> View/Reply Online (#12343): https://lists.fd.io/g/vpp-dev/message/12343
> Mute This Topic: https://lists.fd.io/mt/30140641/675649
> Group Owner: vpp-dev+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [e...@cisco.com]
> -=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12344): https://lists.fd.io/g/vpp-dev/message/12344
Mute This Topic: https://lists.fd.io/mt/30140641/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] opensuse15 aka opensuse leap15 build failures

2019-02-14 Thread Ed Kern via Lists.Fd.Io


Due to an upstream version reporting change, we saw some build failures on 
osleap15 verify jobs.

I've hopefully worked around this (again), so those jobs should be back on track.

Ed


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12263): https://lists.fd.io/g/vpp-dev/message/12263
Mute This Topic: https://lists.fd.io/mt/29844347/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] vpp-verify-master-centos failures

2019-02-01 Thread Ed Kern via Lists.Fd.Io


> On Feb 1, 2019, at 7:06 AM, Andrew 👽 Yourtchenko  wrote:
> 
> Can we retrieve a high watermark of the container memory usage during
> a job run ?
> 

So my answer to that is ‘I have no idea’

The memory allocation, from my 'automated' point of view, happens during make 
pkg-deb or make test (for example).  Looking at the memory before or after those 
commands are run is pointless because it is low/nil.

The way I have seen allocations in the past is just by running builds by hand, so 
I have a separate terminal attached to monitor.

This 'works' with the exception of the OOM killer, which will sometimes shoot 
things down if there is a huge memory spike in the 'middle'.  I've seen this with 
some java bits.


> Then we could take that, multiply by 2, for sanity verify that it is
> not larger than the previous 3x times 3 (i.e. 9x)and verify if it hits
> the previously configured limit of 3x, and if it does, then install a
> new 3x number, and if needs to, decrease the number of concurrently
> running jobs accordingly, and send the notification about that.
> 
> This would a manual process to rest in a simple and relatively safe
> fashion, what do you think ?
> 

It would still be a manual process to change the number, but sure.

If someone had a slick way to see max memory usage during any
section of a 'make' run, that would be awesome.
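
The closest thing to 'slick' I can think of, as a sketch: wrap the make invocation
in GNU time, or read the container's cgroup high watermark, which is what actually
matters for the reservation (the cgroup path assumes cgroup v1, which is an
assumption about how the minions are set up):

  /usr/bin/time -v make pkg-deb 2>&1 | grep 'Maximum resident set size'
  # note: this is the peak RSS of the largest process, not the sum of the tree
  cat /sys/fs/cgroup/memory/memory.max_usage_in_bytes
  # whole-container high watermark in bytes (cgroup v1 path, assumption)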

Ed



> --a
> 
> On 2/1/19, Ed Kern via Lists.Fd.Io  wrote:
>> Request with numbers has been made.  Not a ci-man change so it requires
>> vanessa, but she
>> is typically super fast turning around these changes, so hopefully in a
>> couple hours.
>> 
>> Apologies for the trouble.   We have seen a 4-6x increase (depending on OS)
>> in the last 5 months
>> and so it finally started pinching my memory reservations of 'everything it
>> needs x3’.
>> 
>> Ed
>> 
>> 
>> 
>> 
>>> On Jan 31, 2019, at 6:26 PM, Florin Coras  wrote:
>>> 
>>> It seems centos verify jobs are failing with errors of the type:
>>> 
>>> 00:27:16
>>> FAILED: vnet/CMakeFiles/vnet.dir/span/node.c.o
>>> 
>>> 00:27:16 ccache /opt/rh/devtoolset-7/root/bin/cc -DWITH_LIBSSL=1
>>> -Dvnet_EXPORTS -I/w/workspace/vpp-verify-master-centos7/src -I. -Iinclude
>>> -Wno-address-of-packed-member -march=corei7 -mtune=corei7-avx -g -O2
>>> -DFORTIFY_SOURCE=2 -fstack-protector -fPIC -Werror -fPIC   -Wall -MD -MT
>>> vnet/CMakeFiles/vnet.dir/span/node.c.o -MF
>>> vnet/CMakeFiles/vnet.dir/span/node.c.o.d -o
>>> vnet/CMakeFiles/vnet.dir/span/node.c.o   -c
>>> /w/workspace/vpp-verify-master-centos7/src/vnet/span/node.c
>>> 
>>> I suspect this may be a memory issue. Could someone with ci superpowers
>>> try increasing it for the centos containers?
>>> 
>>> Thanks,
>>> Florin
>> 
>> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12125): https://lists.fd.io/g/vpp-dev/message/12125
Mute This Topic: https://lists.fd.io/mt/29613366/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] vpp-verify-master-centos failures

2019-02-01 Thread Ed Kern via Lists.Fd.Io
Request with numbers has been made.  It's not a ci-man change, so it requires 
Vanessa, but she is typically super fast turning around these changes, so hopefully 
in a couple of hours.

Apologies for the trouble.   We have seen a 4-6x increase (depending on OS) in 
the last 5 months, and it finally started pinching my memory reservations of 
'everything it needs x3'.

Ed




> On Jan 31, 2019, at 6:26 PM, Florin Coras  wrote:
> 
> It seems centos verify jobs are failing with errors of the type: 
> 
> 00:27:16 
> FAILED: vnet/CMakeFiles/vnet.dir/span/node.c.o 
> 
> 00:27:16 ccache /opt/rh/devtoolset-7/root/bin/cc -DWITH_LIBSSL=1 
> -Dvnet_EXPORTS -I/w/workspace/vpp-verify-master-centos7/src -I. -Iinclude 
> -Wno-address-of-packed-member -march=corei7 -mtune=corei7-avx -g -O2 
> -DFORTIFY_SOURCE=2 -fstack-protector -fPIC -Werror -fPIC   -Wall -MD -MT 
> vnet/CMakeFiles/vnet.dir/span/node.c.o -MF 
> vnet/CMakeFiles/vnet.dir/span/node.c.o.d -o 
> vnet/CMakeFiles/vnet.dir/span/node.c.o   -c 
> /w/workspace/vpp-verify-master-centos7/src/vnet/span/node.c
> 
> I suspect this may be a memory issue. Could someone with ci superpowers try 
> increasing it for the centos containers?
> 
> Thanks,
> Florin

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12123): https://lists.fd.io/g/vpp-dev/message/12123
Mute This Topic: https://lists.fd.io/mt/29613366/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] PEP8 expert needed

2019-01-29 Thread Ed Kern via Lists.Fd.Io
I’m sure someone beat me to it but if not..

https://gerrit.fd.io/r/17146
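
If anyone wants to reproduce the failure locally before that merges, my guess is
the checkstyle job picked up a newer pycodestyle release that added the E117 check
(an assumption; I have not confirmed the exact version bump). Something along
these lines should show it:

  pip install --user --upgrade pycodestyle
  pycodestyle --select=E117 test/test_syslog.py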

Ed



On Jan 29, 2019, at 12:42 PM, Damjan Marion via Lists.Fd.Io 
mailto:dmarion=me@lists.fd.io>> wrote:



Can somebody with python skills take care for this checkstyle errors, not sure 
why they started popping out now...

Thanks!

19:13:35 /w/workspace/vpp-checkstyle-verify-master/test/test_syslog.py:132:17: 
E117 over-indented
19:13:35 self.logger.error(ppp("invalid packet:", capture[0]))
19:13:35 ^
19:13:35 /w/workspace/vpp-checkstyle-verify-master/test/test_syslog.py:187:17: 
E117 over-indented
19:13:35 self.logger.error(ppp("invalid packet:", capture[0]))
19:13:35 ^

--
Damjan

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12050): https://lists.fd.io/g/vpp-dev/message/12050
Mute This Topic: https://lists.fd.io/mt/29585853/675649
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  
[e...@cisco.com]
-=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12051): https://lists.fd.io/g/vpp-dev/message/12051
Mute This Topic: https://lists.fd.io/mt/29585853/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] vpp-verify-master-clang now bionic based

2019-01-22 Thread Ed Kern via Lists.Fd.Io

As discussed in the vpp call this morning, I have moved this job from running 
on xenial to bionic.

It ran cleanly on the sandbox… but please let me know if anyone sees issues in 
production.

Ed


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#11973): https://lists.fd.io/g/vpp-dev/message/11973
Mute This Topic: https://lists.fd.io/mt/29422416/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] we get something bad in https://gerrit.fd.io/r/#/c/16195/ that's toxic to bionic

2018-11-27 Thread Ed Kern via Lists.Fd.Io

While I hate… no, LOATHE trying to debug build failures on days like today, with 
gerrit misfiring and the LF upgrading everything in sight, I noticed a trend that's 
bothering me.

Since the change mentioned in the subject was merged (and even that merge failed to 
build on bionic),

we have had no good bionic merge builds, and we are 2 passes out of 18 on bionic 
verify runs, where the majority of the failures (throwing out gerrit or other issues 
where all builds are breaking) simply time out after 'MPLS disabled' in the test 
area:


17:51:07 
==
17:51:07 MPLS disabled
17:51:07 
==
17:51:07 MPLS Disabled  
  OK




19:23:06 MPLS disabled
19:23:06 
==
19:23:06 MPLS Disabled  
  OK




https://jenkins.fd.io/view/vpp/job/vpp-beta-verify-master-ubuntu1804/3628/console

this one throwing a couple tracebacks before hanging on the same test case

https://jenkins.fd.io/view/vpp/job/vpp-beta-verify-master-ubuntu1804/3629/console


So while 16195 and its test case changes might be totally innocent, I would 
still appreciate it if you could take a look,
either before or after the Jenkins upgrade.

thanks,

Ed




-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#11437): https://lists.fd.io/g/vpp-dev/message/11437
Mute This Topic: https://lists.fd.io/mt/28374843/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Build failing on Fedora

2018-11-26 Thread Ed Kern via Lists.Fd.Io
Burt,

Just an FYI and a comment…

FYI, on the build containers we install cmake (this predated cmake3 being in 
the Makefile):
cmake version 2.8.12.2

cmake3 is installed as part of install dep
cmake3 version 3.12.2

My build will fail (centos)  without cmake3
CMake Error at CMakeLists.txt:14 (cmake_minimum_required):
  CMake 3.5 or higher is required.  You are running version 2.8.12.2

On my centos container I can get a good build without cmake being installed at 
all (only cmake3).
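
For anyone reproducing locally on a stock centos 7 box: cmake3 comes from EPEL,
so roughly:

  sudo yum install -y epel-release
  sudo yum install -y cmake3 ninja-build
  cmake3 --version   # should report 3.x, not the 2.8 that plain 'cmake' gives you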

Also (because I've fallen into this hole before): your dnf change won't work, 
because it requires dnf.
If you really want to use dnf, you'll need to install it with yum (or some other 
way) first, then switch over to all dnf-based installs.
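
In other words, the Makefile would have to bootstrap dnf before using it, rather
than listing dnf in RPM_DEPENDS and then installing RPM_DEPENDS with dnf. A rough
sketch for a centos/rhel box where dnf is not already present (the group and
package list below are placeholders standing in for RPM_DEPENDS_GROUPS and
RPM_DEPENDS):

  sudo yum install -y epel-release dnf      # bootstrap dnf itself with yum
  sudo dnf group install -y 'Development Tools'
  sudo dnf install -y <the RPM_DEPENDS package list>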

Ed



On Nov 24, 2018, at 2:42 PM, Burt Silverman 
mailto:bur...@gmail.com>> wrote:

I agree re cmake3, it looks like the correct package is cmake, not cmake3. 
Maybe something like this is needed (for the Fedora case) since dnf has been 
around a long time now.

diff --git a/Makefile b/Makefile
index e0c710fd..3c8d7c31 100644
--- a/Makefile
+++ b/Makefile
@@ -82,13 +82,13 @@ else
DEB_DEPENDS += libssl-dev
 endif

-RPM_DEPENDS  = redhat-lsb glibc-static java-1.8.0-openjdk-devel yum-utils
+RPM_DEPENDS  = redhat-lsb glibc-static java-1.8.0-openjdk-devel dnf
 RPM_DEPENDS += apr-devel
 RPM_DEPENDS += numactl-devel
 RPM_DEPENDS += check check-devel
 RPM_DEPENDS += boost boost-devel
 RPM_DEPENDS += selinux-policy selinux-policy-devel
-RPM_DEPENDS += cmake3 ninja-build
+RPM_DEPENDS += cmake ninja-build

 ifeq ($(OS_ID)-$(OS_VERSION_ID),fedora-25)
RPM_DEPENDS += subunit subunit-devel
@@ -300,9 +300,9 @@ ifeq ($(OS_ID),rhel)
 else ifeq ($(OS_ID),centos)
@sudo -E yum install $(CONFIRM) centos-release-scl-rh
 endif
-   @sudo -E yum groupinstall $(CONFIRM) $(RPM_DEPENDS_GROUPS)
-   @sudo -E yum install $(CONFIRM) $(RPM_DEPENDS)
-   @sudo -E debuginfo-install $(CONFIRM) glibc openssl-libs mbedtls-devel 
zlib
+   @sudo -E dnf group install $(CONFIRM) $(RPM_DEPENDS_GROUPS)
+   @sudo -E dnf install $(CONFIRM) $(RPM_DEPENDS)
+   @sudo -E dnf debuginfo-install $(CONFIRM) glibc openssl-libs 
mbedtls-devel zlib openssl-libs
 else ifeq ($(filter opensuse-tumbleweed,$(OS_ID)),$(OS_ID))
@sudo -E zypper refresh
@sudo -E zypper install -y $(RPM_SUSE_DEPENDS)
(END)


Burt
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#11392): https://lists.fd.io/g/vpp-dev/message/11392
Mute This Topic: https://lists.fd.io/mt/28281426/675649
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  
[e...@cisco.com]
-=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#11414): https://lists.fd.io/g/vpp-dev/message/11414
Mute This Topic: https://lists.fd.io/mt/28281426/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] API changes to trigger email

2018-11-13 Thread Ed Kern via Lists.Fd.Io
Should be able to…

Not that I have the permissions to do it…

following
https://gerrit-review.googlesource.com/Documentation/user-notify.html#project

My first thought is you would want to end up with something along the lines of

[notify "API"]
email = f...@project1.io
email = f...@project2.io
filter = path:^.*\.api
header = to:
type = submitted_changes


submitted_changes for merges, or new_changes if you want to see potential
changes.

My GUESS is that you would need at least a vpp committer to satisfy "To edit 
the project level notify settings, ensure the project owner has Push permission 
already granted for the refs/meta/config branch.”
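
For whoever ends up with those permissions, the usual gerrit workflow (a sketch;
see the gerrit docs linked above for the authoritative steps) is to edit
project.config on the refs/meta/config branch of the vpp repo:

  git fetch origin refs/meta/config:refs/remotes/origin/meta-config
  git checkout -b meta-config refs/remotes/origin/meta-config
  # add the [notify "API"] section above to project.config, then:
  git commit -am 'notify adjacency projects on .api changes'
  git push origin HEAD:refs/meta/config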

If it turns out you need something from the LF feel free to open a ticket with 
them or just let me know and ill open it..

Hope this helps,

Ed

P.S.  This should SEND the email… of course additional hoops may need to be 
navigated depending on the receiver (e.g. 'non-member trying to post' type bounces).


On Nov 13, 2018, at 5:41 AM, Ole Troan 
mailto:otr...@employees.org>> wrote:

Eds, et al,

At some point in the past we discussed adding a trigger in gerrit to generate 
an email alerting adjacency projects about VPP API changes.
Basically any change in a .api file.

Is that something possible to do?

Best regards,
Ole

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#11230): https://lists.fd.io/g/vpp-dev/message/11230
Mute This Topic: https://lists.fd.io/mt/28122200/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] [tsc] Unstable VPP Builds

2018-10-26 Thread Ed Kern via Lists.Fd.Io

TL;DR: Any action that would have triggered verification in the last 15 hours needs 
to be hit with 'recheck'.



The ci-man bug mentioned below impacted the checkstyle job, causing it to finish 
with result UNSTABLE.
It takes a SUCCESS result from the checkstyle job to trigger/initiate the normal 
set of verify jobs, so those jobs were never even attempted on patches/changes in 
the last 15 hours or so.   This issue, while it did also impact merge jobs, did NOT 
stop packages from being built or pushed, as far as I can see.  There will simply 
not be build logs for merge jobs in that time span.

Ed




> On Oct 26, 2018, at 11:35 AM, Vanessa Valderrama 
>  wrote:
> 
> We received a ticket that VPP builds were unstable.  This issue has been
> resolved.
> 
> The root cause was a change to the global-macros.yaml file
> postbuildscript.  This doesn't work on the VPP containers because it
> uses the lf-infra-ship-logs builder which can't run on the VPP
> containers at this time.
> 
> We didn't anticipate this being an issue prior to merging the change if
> we did I would have made sure the community was notified in advance.  I
> apologize for the inconvenience. I'll be working closely with Ed Kern
> before any other changes in preparation for global-jjb are merged.
> 
> Thank you,
> Vanessa
> 
> 
> -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
> 
> View/Reply Online (#873): https://lists.fd.io/g/tsc/message/873
> Mute This Topic: https://lists.fd.io/mt/27741962/675649
> Group Owner: tsc+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/tsc/unsub  [e...@cisco.com]
> -=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#11003): https://lists.fd.io/g/vpp-dev/message/11003
Mute This Topic: https://lists.fd.io/mt/27742249/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] build queue backed up a bit foo

2018-09-10 Thread Ed Kern via Lists.Fd.Io
After a couple hours of testing on the sandbox and getting good builds, I went with
option 1 for now (should be getting merged as I type).

If we run into any more issues I'll remove the job from the main gerrit trigger so
builds will only be started via comment.

Ed



On Sep 10, 2018, at 12:48 PM, Florin Coras <fcoras.li...@gmail.com> wrote:

If 1 makes the arm jobs pass and supposedly run much faster, I’d be fine with 
that as well.

Florin

On Sep 10, 2018, at 7:48 AM, Marco Varlese <mvarl...@suse.de> wrote:

On Mon, 2018-09-10 at 14:29 +, Ed Kern (ejk) wrote:
At least three different possible actions to take at this point:  (outside of 
fixing the issue)

1. remove make test attempt from arm build (return it to the way it was before 
a week ago).
2. lower the timeout further; my first thought would be in the 75-minute range (from 120)
I don't like this option, mainly because it would still imply that developers
have to wait 75 minutes to see a patch verified...
The worst part is that the job is non-voting, hence having it running does not add
any meaningful insight into the build result.
I'm in favour of the issue being resolved in the sandbox and eventually moved
to production when stable.

3. remove job altogether.
I believe this should be the option to pursue unless it can be fixed quickly.
As it is, it only causes delays to the overall build / review / merging
process...


I'm happy to push a patch that accomplishes the above, or other options I haven't
thought of for this mail.

Just let me know..

Ed



On Sep 10, 2018, at 4:31 AM, Marco Varlese <mvarl...@suse.de> wrote:
Last example (today) can be found here

10:47:06 Not running extended tests (some tests will be skipped)
10:47:06 Perform 4 attempts to pass the suite...
10:47:10 *** Error in `python': double free or corruption (out):
0x75d483f0 ***
12:27:54 Build timed out (after 120 minutes). Marking the build as failed.

Patch: https://gerrit.fd.io/r/#/c/14744/

Other jobs finished more than an hour and a half ago, and the patch cannot be
marked Verified+1 because Jenkins is still waiting for the ARM job to complete
(timeout = 120 minutes). IMHO this makes the overall patch submission and review
process very painful for authors and committers.

I would recommend (it has been done in the past for other jobs) disabling this
job in production, moving it back to the sandbox, getting it fixed and eventually
moving it to production again...

- Marco

On Fri, 2018-09-07 at 12:04 -0700, Florin Coras wrote:
ARM jobs have not been working for some days now. That’s why their result is
skipped. Timeout is 2h but probably we should drop that even further …

Florin

On Sep 7, 2018, at 11:59 AM, Ole Troan <otr...@employees.org> wrote:
Trying out a change in:
https://gerrit.fd.io/r/#/c/14732/

All others succeed, but ARM doesn't look too good.
Stuck, apparently.

https://jenkins.fd.io/job/vpp-arm-verify-master-ubuntu1604/2157/console
20:16:04

==

20:16:04
ARP Test Case

20:16:04

==

20:16:07
*** Error in `python': free(): invalid pointer: 0x553263a8 ***

20:16:09
ARP  OK

20:16:09
ARP Duplicates   OK

20:16:09
test_arp_incomplete (test_neighbor.ARPTestCase)  OK

20:16:09
ARP Static   OK

20:16:15
ARP reply with VRRP virtual src hw addr  OK

20:16:15
GARP OK

20:16:15
MPLS OK


Cheers,
Ole


On 7 Sep 2018, at 18:50, Ole Troan <otr...@employees.org> wrote:

Ed,

Let me take a closer look at these.
It appears if VPP is slow to start it might not have created the socket
yet. Let me try to put in a retry loop and see if that fixes verify.

Cheers
Ole

On 7 Sep 2018, at 18:06, Ed Kern via Lists.Fd.Io <ejk=cisco@lists.fd.io> wrote:

make test failures due to the below are causing pretty consistent failures.
Note: for whatever reason the failures are not 100%.
The failures, combined with the retries on non-concurrent merge jobs, may lead
to long build queues.

These are not infra issues but ill be keeping an eye on it.


Ed



13:08:25
Using /var/cache/vpp/python/virtualenv/lib/python2.7/site-packages

13:08:25
Finished processing dependencies for vpp-papi==1.6.1

13:08:27
Traceback (most recent call last):

13:08:27
File "sanity_run_vpp.py", line 21, in 

13:08:27
  tc.setUpClass()

13:08:27
File "/w/workspace/vpp-merge-master-ubuntu1604/test/framework.py", line
394, in setUpClass

13

Re: [vpp-dev] build queue backed up a bit foo

2018-09-10 Thread Ed Kern via Lists.Fd.Io
At least three different possible actions to take at this point:  (outside of 
fixing the issue)

1. remove make test attempt from arm build (return it to the way it was before 
a week ago).
2. lower the timeout further; my first thought would be in the 75-minute range (from 120)
3. remove job altogether.

I'm happy to push a patch that accomplishes the above, or other options I haven't
thought of for this mail.

Just let me know..

Ed



On Sep 10, 2018, at 4:31 AM, Marco Varlese <mvarl...@suse.de> wrote:

Last example (today) can be found here

10:47:06 Not running extended tests (some tests will be skipped)
10:47:06 Perform 4 attempts to pass the suite...
10:47:10 *** Error in `python': double free or corruption (out):
0x75d483f0 ***
12:27:54 Build timed out (after 120 minutes). Marking the build as failed.

Patch: https://gerrit.fd.io/r/#/c/14744/

Other jobs finished more than an hour and a half ago, and the patch cannot be
marked Verified+1 because Jenkins is still waiting for the ARM job to complete
(timeout = 120 minutes). IMHO this makes the overall patch submission and review
process very painful for authors and committers.

I would recommend (it has been done in the past for other jobs) disabling this
job in production, moving it back to the sandbox, getting it fixed and eventually
moving it to production again...

- Marco

On Fri, 2018-09-07 at 12:04 -0700, Florin Coras wrote:
ARM jobs have not been working for some days now. That’s why their result is
skipped. Timeout is 2h but probably we should drop that even further …

Florin

On Sep 7, 2018, at 11:59 AM, Ole Troan <otr...@employees.org> wrote:
Trying out a change in:
https://gerrit.fd.io/r/#/c/14732/

All others succeed, but ARM doesn't look too good.
Stuck, apparently.

https://jenkins.fd.io/job/vpp-arm-verify-master-ubuntu1604/2157/console
20:16:04

==

20:16:04
ARP Test Case

20:16:04

==

20:16:07
*** Error in `python': free(): invalid pointer: 0x553263a8 ***

20:16:09
ARP  OK

20:16:09
ARP Duplicates   OK

20:16:09
test_arp_incomplete (test_neighbor.ARPTestCase)  OK

20:16:09
ARP Static   OK

20:16:15
ARP reply with VRRP virtual src hw addr  OK

20:16:15
GARP OK

20:16:15
MPLS OK


Cheers,
Ole


On 7 Sep 2018, at 18:50, Ole Troan  wrote:

Ed,

Let me take a closer look at these.
It appears if VPP is slow to start it might not have created the socket
yet. Let me try to put in a retry loop and see if that fixes verify.

Cheers
Ole

On 7 Sep 2018, at 18:06, Ed Kern via Lists.Fd.Io <ejk=cisco@lists.fd.io> wrote:

make test failures due to the below are causing pretty consistent failures.
Note: for whatever reason the failures are not 100%.
The failures, combined with the retries on non-concurrent merge jobs, may lead
to long build queues.

These are not infra issues but ill be keeping an eye on it.


Ed



13:08:25
Using /var/cache/vpp/python/virtualenv/lib/python2.7/site-packages

13:08:25
Finished processing dependencies for vpp-papi==1.6.1

13:08:27
Traceback (most recent call last):

13:08:27
File "sanity_run_vpp.py", line 21, in 

13:08:27
  tc.setUpClass()

13:08:27
File "/w/workspace/vpp-merge-master-ubuntu1604/test/framework.py", line
394, in setUpClass

13:08:27
  cls.statistics = VPPStats(socketname=cls.tempdir+'/stats.sock')

13:08:27
File "build/bdist.linux-x86_64/egg/vpp_papi/vpp_stats.py", line 117, in
__init__

13:08:27
IOError

13:08:27
***

13:08:27
* Sanity check failed, cannot run vpp

13:08:27
***

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10432): https://lists.fd.io/g/vpp-dev/message/10432
Mute This Topic: https://lists.fd.io/mt/25308161/675193
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [otr...@employees.org]
-=-=-=-=-=-=-=-=-=-=-=-
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10435): https://lists.fd.io/g/vpp-dev/message/10435
Mute This Topic: https://lists.fd.io/mt/25308161/675193
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [otr...@employees.org]
-=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=

[vpp-dev] build queue backed up a bit foo

2018-09-07 Thread Ed Kern via Lists.Fd.Io
make test failures due to the below causing pretty consistent failures.

Note:  For whatever reason the failures are not 100%.

The failures and with the retries on nonconcurrent merge jobs may lead

to long build queues.


These are not infra issues but ill be keeping an eye on it.



Ed




13:08:25 Using /var/cache/vpp/python/virtualenv/lib/python2.7/site-packages
13:08:25 Finished processing dependencies for vpp-papi==1.6.1
13:08:27 Traceback (most recent call last):
13:08:27   File "sanity_run_vpp.py", line 21, in 
13:08:27 tc.setUpClass()
13:08:27   File "/w/workspace/vpp-merge-master-ubuntu1604/test/framework.py", 
line 394, in setUpClass
13:08:27 cls.statistics = VPPStats(socketname=cls.tempdir+'/stats.sock')
13:08:27   File "build/bdist.linux-x86_64/egg/vpp_papi/vpp_stats.py", line 117, 
in __init__
13:08:27 IOError
13:08:27 ***
13:08:27 * Sanity check failed, cannot run vpp
13:08:27 ***

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10432): https://lists.fd.io/g/vpp-dev/message/10432
Mute This Topic: https://lists.fd.io/mt/25308161/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] vpp-arm-verify-master-ubuntu1604 failing

2018-09-06 Thread Ed Kern via Lists.Fd.Io


On Sep 6, 2018, at 3:42 PM, Matthew Smith <mgsm...@netgate.com> wrote:


Hi,

The jenkins job vpp-arm-verify-master-ubuntu1604 seems to have failed every 
time it has run over the last 36 hours or so. Is that a known issue?

Yes.

It's actually been failing for about a week now, since the change was merged to
have the ARM jobs run make test.

Having said that… voting for the job has been off for a few days, so it
shouldn't be impacting anything from a gerrit voting perspective.


https://jenkins.fd.io/job/vpp-arm-verify-master-ubuntu1604/buildTimeTrend

Is vpp-dev the appropriate place to report issues like this?

Well, it works… I'm honestly not sure if there is a "better" one. I'll let others
answer that.


Or is there some other email alias that will go directly to whomever might need 
to kick jenkins?


While I have a deep-seated loathing for jenkins (on par with folks who think vi
is a real editor), this is not a jenkins issue.
Jenkins has 99 problems but this…..patch….ain't one.

I'll cc Juraj directly so he can relay status on either arm make test working or
a revert.

Ed



Thanks,
-Matt

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10416): https://lists.fd.io/g/vpp-dev/message/10416
Mute This Topic: https://lists.fd.io/mt/25265335/675649
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  
[e...@cisco.com]
-=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10417): https://lists.fd.io/g/vpp-dev/message/10417
Mute This Topic: https://lists.fd.io/mt/25265335/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Ubuntu 18.04 LTS (Bionic Beaver) packages

2018-07-19 Thread Ed Kern via Lists.Fd.Io
This has been a work in progress for a couple months now, with no real end in
sight at this time.

You can/should be able to pick them up from packagecloud in the meantime.


curl -L https://packagecloud.io/fdio/master/gpgkey |sudo apt-key add -

and into a file, say /etc/apt/sources.list.d/99fd.io.list, drop the line:
deb [trusted=yes] https://packagecloud.io/fdio/master/ubuntu/ bionic main
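
Putting it together, the whole sequence is roughly this (sketch only; vpp and
vpp-plugins are the usual package names here, adjust to whatever subset you
actually need):

  curl -L https://packagecloud.io/fdio/master/gpgkey | sudo apt-key add -
  echo 'deb [trusted=yes] https://packagecloud.io/fdio/master/ubuntu/ bionic main' \
    | sudo tee /etc/apt/sources.list.d/99fd.io.list
  sudo apt-get update
  sudo apt-get install vpp vpp-plugins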

that should hold you over.
Ed


On Jul 19, 2018, at 2:32 AM, Michal Cmarada <michal.cmar...@pantheon.tech> wrote:

Hi Ed,

Now I can see the packages for Bionic on Nexus; however, there still seems to be
a problem with the Packages file [0] in the repo. It seems that the file is always empty.
Can you check that with the LF IT staff?

Thanks

Michal

[0] 
https://nexus.fd.io/content/repositories/fd.io.master.ubuntu.trusty.main/Packages

From: Ed Warnicke <hagb...@gmail.com>
Sent: Tuesday, July 17, 2018 3:19 PM
To: Michal Čmarada <michal.cmar...@pantheon.tech>; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Ubuntu 18.04 LTS (Bionic Beaver) packages

Michal,
I am delighted to know you are excited about vpp on Bionic :)  I am 
currently working with the LF IT staff to fix the issues with the Bionic repo, 
they are on the nexus side, not the VPP side.   Thank you for raising the 
issue.  It helps :)

Ed



On July 17, 2018 at 6:47:46 AM, Michal Cmarada 
(michal.cmar...@pantheon.tech) wrote:

Hi vpp-devs,

I was wondering if 18.07 release of VPP will also bring the support for Ubuntu 
18.04 LTS (Bionic Beaver). Right now I can see some folders in nexus repository 
marked as bionic, but no packages are present.

Thanks

Michal
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#9857): https://lists.fd.io/g/vpp-dev/message/9857
Mute This Topic: https://lists.fd.io/mt/23540780/464962
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub 
[hagb...@gmail.com]
-=-=-=-=-=-=-=-=-=-=-=-
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#9883): https://lists.fd.io/g/vpp-dev/message/9883
Mute This Topic: https://lists.fd.io/mt/23540780/675649
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  
[e...@cisco.com]
-=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#9885): https://lists.fd.io/g/vpp-dev/message/9885
Mute This Topic: https://lists.fd.io/mt/23540780/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Revert of gerrit 13408 / "vppinfra: AVX2 blend" in progress..

2018-07-13 Thread Ed Kern via Lists.Fd.Io
that would make a lot more sense to me +1


Ed



On Jul 13, 2018, at 8:44 AM, Dave Barach (dbarach) <dbar...@cisco.com> wrote:

Makes sense, but it’s a pain when the wheels fall off.

Perhaps we should make the virl job non-voting as soon as Maciek thinks we have 
all P0 test gaps covered?

D.

From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Ed Kern via Lists.Fd.Io
Sent: Friday, July 13, 2018 10:11 AM
To: Vratko Polak -X (vrpolak - PANTHEON TECHNOLOGIES at Cisco) <vrpo...@cisco.com>
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Revert of gerrit 13408 / "vppinfra: AVX2 blend" in 
progress..




On Jul 13, 2018, at 5:37 AM, Vratko Polak -X (vrpolak - PANTHEON TECHNOLOGIES
at Cisco) via Lists.Fd.Io <vrpolak=cisco@lists.fd.io> wrote:

> Not sure what exactly happened

It went like this:

0. Patch set 2 is uploaded.
1. Both clang and virl jobs (among others) are triggered.
2. Both jobs fail (virl on the usual NFS mount symptom).
3. Virl job has naginator, so its result is reported as NOT_BUILT.
4. Clang is reported as FAILURE, so vote is -1.
5. Naginator triggers another run of virl job (only).
6. This time virl job is SUCCESS.
7. Jenkins sees no failure in the single job from the latest trigger, so it votes +1.


This is more or less accurate.


I am not sure we can explain to Jenkins that
results from previous trigger rounds still apply (if not superseded).

A new incoming vote is an overwrite; that's just what it does.



I recommend to disable naginator on the virl job
and rely on methods which trigger all verify jobs.


There are no automatic functions to re-trigger all jobs. So what you're suggesting
here is a return to a ton of manual rechecks for multiple different possible triggers.

The above behavior is not new.  It is just rare.

If job A fails twice faster than job B fails and then passes, you will end up in
this state.

In this case the clang job is especially prone to this because it is a short-duration
job that, when it fails, does so quickly (2, 5, 6 minutes). So it can actually fail
twice (properly) within 10 minutes.
The virl job can take 30 minutes (because it does a build first) before it gets
to the point where we see intermittent failures. So you're looking at ~80 minutes
to get a proper vote from the virl job (intermittent failure to success).

I can remove the retry on virl jobs if that is what the committers would like me
to do. But with the virl jobs on the road to deprecation, and hopefully some
traction on fixing the LF/vexxhost network issues causing so many hudson/jnlp
errors, I wouldn't be voting for this path.

Ed



Vratko.

From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Damjan Marion via Lists.Fd.Io
Sent: Friday, 2018-July-13 10:59
To: Marco Varlese <mvarl...@suse.de>
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Revert of gerrit 13408 / "vppinfra: AVX2 blend" in 
progress..


Version 1 of the same patch failed properly, with the same error message; then I
removed the permute inline functions and left blend in, as I missed that the same
error also happens for blend.

Not sure what exactly happened, but it doesn't look like a voting-configuration
issue, as it worked properly for PatchSet 1 ...

The reason this is failing on clang is that the clang folks use
__builtin_shufflevector for multiple intel intrinsics, and that builtin insists
on an immediate value for some parameters; it refuses to accept an inline
function argument, even if that argument is constant.

I will need to redo this with __asm__ ()

--
Damjan



On 13 Jul 2018, at 09:49, Marco Varlese <mvarl...@suse.de> wrote:

Hi Dave & all,

Sorry about that, I actually merged the patch. However, it was indeed Verified+1 :(

I have now gone to see the actual CLANG build failure and am wondering: is it
possible that the job is not configured as a voting/gating job in Jenkins, so the
verification process succeeds anyway?


- Marco

On Thu, 2018-07-12 at 22:49 +, Dave Barach via Lists.Fd.Io wrote:
Revert complete... HTH... Dave

From: Dave Barach (dbarach)
Sent: Thursday, July 12, 2018 5:01 PM
To: Damjan Marion (damarion) <damar...@cisco.com>; Florin Coras (fcoras) <fco...@cisco.com>; 'Marco Varlese' <mvarl...@suse.de>; Ed Kern (ejk) <e...@cisco.com>
Cc: vpp-dev@lists.fd.io
Subject: Revert of gerrit 13408 / "vppinfra: AVX2 blend" in progress..
Importance: High

Folks,

Unfortunately, the AVX2 blend patch causes 100% clang validation failures on 
unrelated patches. The clang validation job actually failed on the original 
patch, but somehow fd.io JJB voted +1 anyhow.

See https://gerrit.fd.io/r/#/c/13457 – revert, https://gerrit.fd.io

Re: [vpp-dev] Revert of gerrit 13408 / "vppinfra: AVX2 blend" in progress..

2018-07-13 Thread Ed Kern via Lists.Fd.Io


On Jul 13, 2018, at 5:37 AM, Vratko Polak -X (vrpolak - PANTHEON TECHNOLOGIES
at Cisco) via Lists.Fd.Io <vrpolak=cisco@lists.fd.io> wrote:

> Not sure what exactly happened

It went like this:

0. Patch set 2 is uploaded.
1. Both clang and virl jobs (among others) are triggered.
2. Both jobs fail (virl on the usual NFS mount symptom).
3. Virl job has naginator, so its result is reported as NOT_BUILT.
4. Clang is reported as FAILURE, so vote is -1.
5. Naginator triggers another run of virl job (only).
6. This time virl job is SUCCESS.
7. Jenkins sees no failure in the single job from the latest trigger, so it votes +1.


This is more or less accurate.

I am not sure we can explain to Jenkins that
results from previous trigger rounds still apply (if not superseded).

A new incoming vote is an overwrite; that's just what it does.


I recommend to disable naginator on the virl job
and rely on methods which trigger all verify jobs.


There are no automatic functions to re-trigger all jobs. So what you're suggesting
here is a return to a ton of manual rechecks for multiple different possible triggers.

The above behavior is not new.  It is just rare.

If job A fails twice faster than job B fails and then passes, you will end up in
this state.

In this case the clang job is especially prone to this because it is a short-duration
job that, when it fails, does so quickly (2, 5, 6 minutes). So it can actually fail
twice (properly) within 10 minutes.
The virl job can take 30 minutes (because it does a build first) before it gets
to the point where we see intermittent failures. So you're looking at ~80 minutes
to get a proper vote from the virl job (intermittent failure to success).

I can remove the retry on virl jobs if that is what the committers would like me
to do. But with the virl jobs on the road to deprecation, and hopefully some
traction on fixing the LF/vexxhost network issues causing so many hudson/jnlp
errors, I wouldn't be voting for this path.

Ed


Vratko.

From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Damjan Marion via Lists.Fd.Io
Sent: Friday, 2018-July-13 10:59
To: Marco Varlese <mvarl...@suse.de>
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Revert of gerrit 13408 / "vppinfra: AVX2 blend" in 
progress..


Version 1 of the same patch failed properly, with the same error message; then I
removed the permute inline functions and left blend in, as I missed that the same
error also happens for blend.

Not sure what exactly happened, but it doesn't look like a voting-configuration
issue, as it worked properly for PatchSet 1 ...

The reason this is failing on clang is that the clang folks use
__builtin_shufflevector for multiple intel intrinsics, and that builtin insists
on an immediate value for some parameters; it refuses to accept an inline
function argument, even if that argument is constant.

I will need to redo this with __asm__ ()

--
Damjan


On 13 Jul 2018, at 09:49, Marco Varlese <mvarl...@suse.de> wrote:

Hi Dave & all,

Sorry about that, I actually merged the patch. However, it was indeed Verified+1 :(

I have now gone to see the actual CLANG build failure and am wondering: is it
possible that the job is not configured as a voting/gating job in Jenkins, so the
verification process succeeds anyway?


- Marco

On Thu, 2018-07-12 at 22:49 +, Dave Barach via Lists.Fd.Io wrote:
Revert complete... HTH... Dave

From: Dave Barach (dbarach)
Sent: Thursday, July 12, 2018 5:01 PM
To: Damjan Marion (damarion) <damar...@cisco.com>; Florin Coras (fcoras) <fco...@cisco.com>; 'Marco Varlese' <mvarl...@suse.de>; Ed Kern (ejk) <e...@cisco.com>
Cc: vpp-dev@lists.fd.io
Subject: Revert of gerrit 13408 / "vppinfra: AVX2 blend" in progress..
Importance: High

Folks,

Unfortunately, the AVX2 blend patch causes 100% clang validation failures on 
unrelated patches. The clang validation job actually failed on the original 
patch, but somehow fd.io JJB voted +1 anyhow.

See https://gerrit.fd.io/r/#/c/13457 – revert, https://gerrit.fd.io/r/#/c/13408 
- original patch.

Thanks... Dave


-=-=-=-=-=-=-=-=-=-=-=-

Links: You receive all messages sent to this group.



View/Reply Online (#9830): https://lists.fd.io/g/vpp-dev/message/9830

Mute This Topic: https://lists.fd.io/mt/23297899/675056

Group Owner: vpp-dev+ow...@lists.fd.io

Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  
[mvarl...@suse.de]

-=-=-=-=-=-=-=-=-=-=-=-


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#9837): https://lists.fd.io/g/vpp-dev/message/9837
Mute This Topic: https://lists.fd.io/mt/23297899/675649
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  
[e...@cisco.com]
-=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=

Re: [csit-dev] [vpp-dev] VPP make test gives false positive

2018-06-22 Thread Ed Kern via Lists.Fd.Io
Hey,

Responding because I don't see one from Klement.

Looks like he has two gerrits pending:

https://gerrit.fd.io/r/#/c/13188/

to fix broken interfaces (this passed verification, so I'm trying to get this one
merged)

and

https://gerrit.fd.io/r/#/c/13186/

for the retries in error..this is not passing verification but i have my 
fingers crossed that it will once
13188 is merged.

Hoping to get this behind us before the weekend..

Ed



On Jun 22, 2018, at 3:03 AM, Jan Gelety via Lists.Fd.Io <jgelety=cisco@lists.fd.io> wrote:

Hello,

VPP make test gives false positive results at the moment - there are about 110
failed tests, but the build is marked successful.

The last correct run on ubuntu is [1].

The first false positive run on ubuntu is [2] – triggered by Florin's patch [3].

It seems that the first failing test is test_ip4_irb.TestIpIrb:

00:22:48.177 
==
00:22:53.685 VAPI test
00:22:53.685 
==
00:22:53.685 run C VAPI tests   
  SKIP
00:22:53.685 run C++ VAPI tests 
  SKIP
00:22:53.685
00:22:53.685 
==
00:22:53.685 ERROR: setUpClass (test_ip4_irb.TestIpIrb)
00:22:53.685 
--
00:22:53.685 Traceback (most recent call last):
00:22:53.685   File 
"/w/workspace/vpp-verify-master-ubuntu1604/test/test_ip4_irb.py", line 57, in 
setUpClass
00:22:53.685 cls.create_loopback_interfaces(range(1))
00:22:53.685   File 
"/w/workspace/vpp-verify-master-ubuntu1604/test/framework.py", line 571, in 
create_loopback_interfaces
00:22:53.685 setattr(cls, intf.name, intf)
00:22:53.685   File 
"/w/workspace/vpp-verify-master-ubuntu1604/test/vpp_interface.py", line 91, in 
name
00:22:53.685 return self._name
00:22:53.685 AttributeError: 'VppLoInterface' object has no attribute '_name'
00:22:53.685

And all following tests fail with a connection failure during test case setup:

00:22:53.689 
==
00:22:53.689 ERROR: setUpClass (test_dvr.TestDVR)
00:22:53.689 
--
00:22:53.689 Traceback (most recent call last):
00:22:53.689   File 
"/w/workspace/vpp-verify-master-ubuntu1604/test/framework.py", line 362, in 
setUpClass
00:22:53.689 cls.vapi.connect()
00:22:53.689   File 
"/w/workspace/vpp-verify-master-ubuntu1604/test/vpp_papi_provider.py", line 
141, in connect
00:22:53.689 self.vpp.connect(self.name, self.shm_prefix)
00:22:53.689   File "build/bdist.linux-x86_64/egg/vpp_papi.py", line 699, in 
connect
00:22:53.689 async)
00:22:53.689   File "build/bdist.linux-x86_64/egg/vpp_papi.py", line 670, in 
connect_internal
00:22:53.689 raise IOError(2, 'Connect failed')
00:22:53.689 IOError: [Errno 2] Connect failed
00:22:53.689

But result is OK:
00:22:53.689 Ran 156 tests in 162.407s
00:22:53.689
00:22:53.689 FAILED (errors=114, skipped=118)
00:22:53.691 1 test(s) failed, 3 attempt(s) left
00:22:53.697 Running tests using custom test runner
00:22:53.697 Active filters: file=None, class=None, function=None
00:22:53.697 0 out of 0 tests match specified filters
00:22:53.697 Not running extended tests (some tests will be skipped)
00:22:53.697
00:22:53.697 Ran 0 tests in 0.000s
00:22:53.697
00:22:53.697 OK
00:22:53.697 0 test(s) failed, 2 attempt(s) left
00:22:53.757 Killing possible remaining process IDs:  20845 21195 21197
00:22:53.766 make[2]: Leaving directory 
'/w/workspace/vpp-verify-master-ubuntu1604/test'
00:22:53.767 make[1]: Leaving directory 
'/w/workspace/vpp-verify-master-ubuntu1604'
00:22:53.768 + '[' x == x1 ']'
00:22:53.768 + echo 
'***'
00:22:53.768 ***
00:22:53.768 + echo '* VPP BUILD SUCCESSFULLY COMPLETED'
00:22:53.768 * VPP BUILD SUCCESSFULLY COMPLETED
00:22:53.768 + echo 
'***'
00:22:53.768 ***

Could somebody (Klement, Ed Kern ?) find the root cause and fix it, please?

Thanks,
Jan

[1] https://jenkins.fd.io/job/vpp-verify-master-ubuntu1604/12196/console
[2] https://jenkins.fd.io/job/vpp-verify-master-ubuntu1604/12199/console
[3] https://gerrit.fd.io/r/#/c/13180/





-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#9682): https://lists.fd.io/g/vpp-dev/message/9682
Mute This Topic: https://lists.fd.io/mt/22590606/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=

Re: [vpp-dev] Verify consistently failing

2018-06-19 Thread Ed Kern via Lists.Fd.Io


I think what Ed meant to do is to speed up the verify job without realizing
that the test framework does a local install of the vpp_papi package, which is
part of the source tree.

this is correct.


So having cached virtualenv is a bad idea as
we see already.

Don't agree… caching internal bits is certainly bad… I just didn't pick out, in the
rash of 16 other packages getting installed, that papi was getting compiled from
the tree as opposed to from local src.
The whole reason that patch and my cache exist is NOT speed; it was reliability
(and also not abusing the dep code hosts (GitHub, PyPI, dpdk, etc.)).  I tracked
hundreds of failures where intermittent loss of the ability to pull dep packages
caused entire build/verification failures.
That number has effectively dropped to zero.


I also wonder whether the caching script watches
changes in test/Makefile/PYTHON_DEPENDS and whether adding or changing
a dependency would pass a verify job.

With every verify run, install-dep is run (and on ubuntu16, test-dep is also run);
this would/will pick up any change.


Ed, thoughts? I would suggest not to cache virtualenv at all.


For now I'm going to continue to cache, but after doing so will nuke:
rm -f /var/cache/vpp/python/papi-install.done
rm -f /var/cache/vpp/python/virtualenv/lib/python2.7/site-packages/vpp_papi-*-py2.7.egg

The papi-install.done is nastier than I thought.  Even if Ole had rev'd the papi
package, test-dep would not have run properly due to the presence of that file.

On spot testing, this will byte-compile papi with every test-dep on every verify run.
I've rebuilt the cache image with these changes, so we will see the results (things
should already be passing; the only noticeable change will be the byte-compile of
papi on each and every test-dep).
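
For the curious, the cache-image refresh now amounts to roughly the following
(sketch only; the real script lives in ci-management, and UNATTENDED=yes here is
just to keep the dep installs from prompting):

  # periodic (roughly every 3 days) cache image build
  make UNATTENDED=yes install-dep
  make UNATTENDED=yes test-dep
  # drop the in-tree artifacts so vpp_papi gets rebuilt on every verify run
  rm -f /var/cache/vpp/python/papi-install.done
  rm -f /var/cache/vpp/python/virtualenv/lib/python2.7/site-packages/vpp_papi-*-py2.7.egg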

Ed



Thanks,
Klement


On Tue, 2018-06-19 at 08:55 +0200, Ole Troan wrote:
Seems like my patch https://gerrit.fd.io/r/#/c/13013/
broke the verification job. I provided a fix, but for some strange
reason it seems like the verify build is stuck with the broken
version of the vpp_papi package.
This problem seems to persist even after I reverted 13013 with https:
//gerrit.fd.io/r/#/c/13104/

This is not reproducible locally (since make test uses the correct
python package from the build directory there).
Does anyone know how to reproduce the verify setup (or have an idea of
what's going on)?

Cheers,
Ole








-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#9648): https://lists.fd.io/g/vpp-dev/message/9648
Mute This Topic: https://lists.fd.io/mt/22427622/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] vpp project patch verification broken

2018-06-19 Thread Ed Kern via Lists.Fd.Io


On Jun 19, 2018, at 7:33 AM, Ole Troan <otr...@employees.org> wrote:

Dave, et al,

Yes, all these are indications of the verify job running a different (and 
cached) version of the Python VPP package than the one in build tree.

yup it certainly is...

That stopped working as soon as I changed the in-tree one (and the caching 
system picking up and apparently being stuck with a broken version of it). The 
fix is of course to always use the in-tree one.
Question for you… did you rev the version number for vpp_papi from 1.4 with your
changes? (Because if you rev'd it and it still stuck with the old one, that's a
larger problem.)

I put that patch in, as you have already seen, for the python_deps because on at
least four different occasions we had builds going bad intermittently because
make test-dep pulls over sixteen (yes, 16) different packages.
This was bad and it was breaking things.
So yes, every three days I rebuild the cache (for make install-dep and make
test-dep) and build it into the base build image.
Any new requirements in between cache rebuilds (additions of packages or version
changes) are still picked up with each and every run (make install-dep and make
test-dep are still run with each verify).

 So that local verify behavior matches jenkins one.

Well, this may not be true for dpdk (since those packages are not built every
time), but it should be true for anything else.  I'd need to see your gerrit
to try and parse out why it verified and then failed post-merge.



But we need someone with access to the verify build. While we’re at it, would 
be great to have that documented and accessible...


Separate conversation (but it's one I'm more than happy to have) on what you
would have liked to have access to, or documented, to make this problem faster
or easier for you to track down and correct.


I'm now going to poke into removing papi after test-dep, so that we don't lose the
benefits of having the external deps cached while still forcing the build of what
should be built each and every time; in this case, the vpp-papi.

Ed




Cheers
Ole

On 19 Jun 2018, at 15:21, Dave Barach via Lists.Fd.Io <dbarach=cisco@lists.fd.io> wrote:

See, for example, https://gerrit.fd.io/r/#/c/13061.

This failure is almost certainly unrelated to the patch. We need to fix this 90 
seconds ago.

Thanks... Dave


13:02:17 
==
13:02:17 ERROR: IP ACL test
13:02:17 
--
13:02:17 Traceback (most recent call last):
13:02:17   File 
"/w/workspace/vpp-verify-master-ubuntu1604/test/test_classifier.py", line 310, 
in test_acl_ip
13:02:17 self.create_classify_table('ip', 
self.build_ip_mask(src_ip=''))
13:02:17   File 
"/w/workspace/vpp-verify-master-ubuntu1604/test/test_classifier.py", line 244, 
in create_classify_table
13:02:17 current_data_offset=data_offset)
13:02:17   File 
"/w/workspace/vpp-verify-master-ubuntu1604/test/vpp_papi_provider.py", line 
2206, in classify_add_del_table
13:02:17 'mask': mask})
13:02:17   File 
"/w/workspace/vpp-verify-master-ubuntu1604/test/vpp_papi_provider.py", line 
160, in api
13:02:17 reply = api_fn(**api_args)
13:02:17   File "build/bdist.linux-x86_64/egg/vpp_papi/vpp_papi.py", line 104, 
in __call__
13:02:17 return self._func(**kwargs)
13:02:17   File "build/bdist.linux-x86_64/egg/vpp_papi/vpp_papi.py", line 398, 
in f
13:02:17 return self._call_vpp(i, msg, multipart, **kwargs)
13:02:17   File "build/bdist.linux-x86_64/egg/vpp_papi/vpp_papi.py", line 591, 
in _call_vpp
13:02:17 b = msg.pack(kwargs)
13:02:17   File "build/bdist.linux-x86_64/egg/vpp_papi/vpp_serializer.py", line 
320, in pack
13:02:17 b += self.packers[i].pack(data[a], kwargs)
13:02:17   File "build/bdist.linux-x86_64/egg/vpp_papi/vpp_serializer.py", line 
171, in pack
13:02:17 b += self.packer.pack(e)
13:02:17   File "build/bdist.linux-x86_64/egg/vpp_papi/vpp_serializer.py", line 
51, in pack
13:02:17 return self.packer.pack(data)
13:02:17 error: cannot convert argument to integer
13:02:17
13:02:17 
==
13:02:17 ERROR: Output IP ACL test
13:02:17 
--
13:02:17 Traceback (most recent call last):
13:02:17   File 
"/w/workspace/vpp-verify-master-ubuntu1604/test/test_classifier.py", line 340, 
in test_acl_ip_out
13:02:17 data_offset=0)
13:02:17   File 
"/w/workspace/vpp-verify-master-ubuntu1604/test/test_classifier.py", line 244, 
in create_classify_table
13:02:17 current_data_offset=data_offset)
13:02:17   File 
"/w/workspace/vpp-verify-master-ubuntu1604/test/vpp_papi_provider.py", line 
2206, in classify_add_del_table
13:02:17 'mask': mask})
13:02:17   File 
"/w/workspace/vpp-verify-master-ubuntu1604/test/vpp_papi_provider.py", line 
160, in api
13:02:17 r