Re: [vpp-dev] [csit-dev] [FD.io Helpdesk #36502] Jenkins jobs are not started

2017-02-10 Thread Ed Warnicke
Please let the community know when we find out the root cause :)

Ed

On Fri, Feb 10, 2017 at 7:10 AM, Vanessa Valderrama via RT <
fdio-helpd...@rt.linuxfoundation.org> wrote:

> Jan,
>
> This issue has been resolved.  Jenkins minions are building as
> expected.  The minions in a stuck build status have been removed.  The
> vendor is still performing a root cause analysis.  Again we apologize
> for the inconvenience.
>
> Thank you,
> Vanessa
>
> On Fri Feb 10 04:23:41 2017, valderrv wrote:
> > Jan,
> >
> > We are aware of the issue.  There is an issue with the vendor
> > affecting all tenants.  We've opened a high priority ticket with the
> > vendor.  I will update as soon as we have more details.
> >
> > Thank you,
> > Vanessa
> >
> >
> > On Fri Feb 10 03:56:23 2017, jgel...@cisco.com wrote:
> > > Hello,
> > >
> > > No new Jenkins job has started in the last hour and the build queue
> > > is increasing. Could you please have a look at it?
> > >
> > > Thanks,
> > > Jan
>
>
>
> ___
> csit-dev mailing list
> csit-...@lists.fd.io
> https://lists.fd.io/mailman/listinfo/csit-dev
>
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] VPP API Synchronization Question

2017-02-10 Thread Jon Loeliger
On Fri, Feb 10, 2017 at 11:23 AM, Dave Barach (dbarach) 
wrote:

> Dear Jon,
>
>
>
> If you send “please dump X” API message(s), followed by a control-ping
> message: when the control-ping reply appears, all of the dump reply messages
> (if any) have appeared.
>
>
>
> That absolutely *does* work. See api_format.c:api_ip_add_del_route(...).
>
>
>
> In standard usage, the messages are received on a separate pthread. The
> api test tool uses the world’s crudest synchronization scheme.
>
>
> Contact me off-list if you can’t figure out what’s wrong.
>
>
>
> Thanks… Dave
>

Dave,

Spurred on by your insistence that it does work, I went digging
around in my code a whole bunch more.  Specifically, I was trying
to instrument around my message-wait function some more.
Eventually, I came to realize that it was timing out on almost every
message sent.  Naturally, that didn't seem right to me...

Debugging led me to realize that I had failed to call clib_time_init()
in our early initialization sequence, so all of the timer-based
tests in the message wait were failing instantly, and thus the messages
were not truly waiting for the "result ready" condition as they should.
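
For anyone who hits the same thing, the fix was a one-liner early in
our init path (a sketch; vam here is our application's VAT-style
context, with the field named as in api_format.c):

    #include <vppinfra/time.h>

    /* Prime the clib timebase once, before any message-wait calls;
     * without it, clib_time_now()-based timeouts fire immediately. */
    clib_time_init (&vam->clib_time);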

I still don't like the use of an essentially global volatile here, but
at least it is working, as you indicated it would.

Is there a quick outline of how the API's threading model is set up
and expected to be used?  Where the vlib_global_main and vlib_mains
are established and such?

Thank you!
jdl
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Vpp in a container

2017-02-10 Thread Yichen Wang (yicwang)
Hi, Raghav,

The pros and cons of running VPP in a container are the general pros and
cons of running any application in a container; there shouldn't be anything
special. Folks can comment more on specific pros/cons for VPP.

We have been running VPP inside Docker in our project, and it seems to work
properly. Regarding performance, people measure with different setups, so it
is hard to make an apples-to-apples comparison. What I can tell you is that
we have seen no significant performance loss when running VPP inside a
container, compared to the data we are aware of so far.

Hope that helps.

Regards,
Yichen

From:  on behalf of "Raghav Kaushik (rakaushi)" 

Date: Thursday, February 9, 2017 at 17:40
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] Vpp in a container

Hi Folks,

I’m trying to find some data about pros and cons of running VPP in a container.

Has this been tried before? Are there some performance number comparisons
available?

Any pointers will be much appreciated.

Thanks,
Raghav
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] VPP API Synchronization Question

2017-02-10 Thread Dave Barach (dbarach)
Dear Jon,

If you send “please dump X” API message(s), followed by a control-ping message:
when the control-ping reply appears, all of the dump reply messages (if any)
have appeared.
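
In code, the pattern is roughly this (a sketch only -- the send/wait
helpers are hypothetical stand-ins for your application's plumbing):

    /* Queue the dumps, then a control-ping; the shared-memory queue
     * preserves order, so the ping reply acts as a barrier. */
    send_ip_dump (vam, 0 /* is_ipv6 */);
    send_ip_dump (vam, 1 /* is_ipv6 */);
    send_control_ping (vam);
    wait_for_control_ping_reply (vam);
    /* All ip_details replies for the dumps above have now arrived. */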

That absolutely does work. See api_format.c:api_ip_add_del_route(...).

In standard usage, the messages are received on a separate pthread. The api 
test tool uses the world’s crudest synchronization scheme.

Contact me off-list if you can’t figure out what’s wrong.

Thanks… Dave

From: Jon Loeliger [mailto:j...@netgate.com]
Sent: Friday, February 10, 2017 12:12 PM
To: vpp-dev 
Cc: Dave Barach (dbarach) ; Edward Warnicke 

Subject: VPP API Synchronization Question

Folks,

I have a stale cache of interface data in the layer above
my VPP API calls and I need to refresh it.  So I wrote
a vpp_intf_refresh_all() function.  It looks roughly like this:

vpp_intf_refresh_all() {
    if (intf data is not dirty)
        return;
    for each is_ipv6 in {0,1} {
        vpp_ip_dump(is_ipv6);
    }

    sleep(2)   // See commentary

    for each is_ipv6 in {0,1} {
        vpp_ip_address_dump_all(is_ipv6);   // hits all IFs
    }
    vpp_sw_interface_dump()
    intf data is now clean
}

My "details handlers" develop a few vectors of information
in almost the exact same way as the code in api_format.c does.
That is to say:

ip_dump/ip_details_t_handler -- form a vector of ip_details
with an entry for each if-index that is returned.
Note that there is no way to know how many interfaces
will be handled by the ip_details_t_handler function.
Let me say that differently: We have no way of knowing
when it is finished and will not be called again on
behalf of the original IP_DUMP request.

ip_address_dump/ip_address_details -- Using the vector of
ip_details formed during the ip_dump pass, iterate over
each IF and request its ip_address_dump to form another
vector of addresses on that specific interface.

Here's the thing:
If I remove the sleep(2), this code fails.
If I leave the sleep(2), this code works.

On the one hand, if there is enough time for all of the ip_details
to be handled, and the vector of ip_details to be formed, then
the next set of API calls, ip_address_dump, will work correctly.

On the other hand, if the API driving code is allowed to proceed
before the async replies to all the ip_dump requests are done,
then it will not have a proper ip_details vector and thus fail.

I've just described a classic asynchronous failure mode.
Solutions abound in other worlds.  What is the recommended
approach in this world?

So, why does VAT work?  Because it effectively serializes these
steps with enough time in between for all the async
behavior to go unnoticed and not affect the next step.  But even
beyond that, it tries to detect this situation and tells the user
to do it differently.  From vl_api_address_details_t_handler():

  if (!details || vam->current_sw_if_index >= vec_len (details)
      || !details[vam->current_sw_if_index].present)
    {
      errmsg ("ip address details arrived but not stored");
      errmsg ("ip_dump should be called first");
      return;
    }

Sending a CONTROL_PING to flush the write-side of the API isn't
good enough.  Placing an arbitrary sleep() in the code is an
incredibly fragile approach.  OK, it's the wrong solution.

Is there some form of API synchronization that I missed somewhere?

Can we introduce an actual WAIT_FOR_COMPLETION event into the
API message handling pipeline?  I'm thinking of something
that would be issued as an API call where my sleep(2) is,
and would cause the API handling side to stall until the reply
side is drained.

Reading code, e.g. vl_api_ip_dump_t_handler(), I see that it just
iterates and drops messages into the shmem queue.  So, yeah, knowing
when that reply send queue has drained will be hard.

OK, so, what if we added an "is_last_detail" (bool) flag to all
the *_details_t messages?  That way we could know when we are done
waiting for the results to come back.  I could at least write
a spin-until-last-message-seen-or-timeout sort of watcher.

Thoughts?

jdl

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] VPP API Synchronization Question

2017-02-10 Thread Jon Loeliger
Folks,

I have a stale cache of interface data in the layer above
my VPP API calls and I need to refresh it.  So I wrote
a vpp_intf_refresh_all() function.  It looks roughly like this:

vpp_intf_refresh_all() {
    if (intf data is not dirty)
        return;
    for each is_ipv6 in {0,1} {
        vpp_ip_dump(is_ipv6);
    }

    sleep(2)   // See commentary

    for each is_ipv6 in {0,1} {
        vpp_ip_address_dump_all(is_ipv6);   // hits all IFs
    }
    vpp_sw_interface_dump()
    intf data is now clean
}

My "details handlers" develop a few vectors of information
in almost the exact same way as the code in api_format.c does.
That is to say:

ip_dump/ip_details_t_handler -- form a vector of ip_details
with an entry for each if-index that is returned.
Note that there is no way to know how many interfaces
will be handled by the ip_details_t_handler function.
Let me say that differently: We have no way of knowing
when it is finished and will not be called again on
behalf of the original IP_DUMP request.
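
(For concreteness, our ip_details handler is essentially the following --
a sketch in the VAT style, where vam is the usual global context and the
ip_details vector with its .present flag is our own bookkeeping, not part
of the API:

    static void
    vl_api_ip_details_t_handler (vl_api_ip_details_t * mp)
    {
      u32 sw_if_index = ntohl (mp->sw_if_index);
      vec_validate (vam->ip_details, sw_if_index);   /* grows per reply */
      vam->ip_details[sw_if_index].present = 1;
    }
)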

ip_address_dump/ip_address_details -- Using the vector of
ip_details formed during the ip_dump pass, iterate over
each IF and request its ip_address_dump to form another
vector of addresses on that specific interface.

Here's the thing:
If I remove the sleep(2), this code fails.
If I leave the sleep(2), this code works.

On the one hand, if there is enough time for all of the ip_details
to be handled, and the vector of ip_details to be formed, then
the next set of API calls, ip_address_dump, will work correctly.

On the other hand, if the API driving code is allowed to proceed
before the async replies to all the ip_dump requests are done,
then it will not have a proper ip_details vector and thus fail.

I've just described a classic asynchronous failure mode.
Solutions abound in other worlds.  What is the recommended
approach in this world?

So, why does VAT work?  Because it effectively serializes these
steps with enough time in between for all the async
behavior to go unnoticed and not affect the next step.  But even
beyond that, it tries to detect this situation and tells the user
to do it differently.  From vl_api_address_details_t_handler():

  if (!details || vam->current_sw_if_index >= vec_len (details)
      || !details[vam->current_sw_if_index].present)
    {
      errmsg ("ip address details arrived but not stored");
      errmsg ("ip_dump should be called first");
      return;
    }

Sending a CONTROL_PING to flush the write-side of the API isn't
good enough.  Placing an arbitrary sleep() in the code is an
incredibly fragile approach.  OK, it's the wrong solution.

Is there some form of API synchronization that I missed somewhere?

Can we introduce an actual WAIT_FOR_COMPLETION event into the
API message handling pipeline?  I'm thinking of something
that would be issued as an API call where my sleep(2) is,
and would cause the API handling side to stall until the reply
side is drained.

Reading code, e.g. vl_api_ip_dump_t_handler(), I see that it just
iterates and drops messages into the shmem queue.  So, yeah, knowing
when that reply send queue has drained will be hard.

OK, so, what if we added an "is_last_detail" (bool) flag to all
the *_details_t messages?  That way we could know when we are done
waiting for the results to come back.  I could at least write
a spin-until-last-message-seen-or-timeout sort of watcher.
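
Something like this, maybe (a sketch only -- the is_last_detail flag
and the last_seen latch are hypothetical; the details handler would set
*last_seen when it sees the flagged message):

    #include <vppinfra/time.h>

    static int
    wait_for_last_detail (clib_time_t * ct, volatile int *last_seen,
                          f64 timeout)
    {
      f64 deadline = clib_time_now (ct) + timeout;
      while (!*last_seen)               /* set by the details handler */
        if (clib_time_now (ct) > deadline)
          return -1;                    /* timed out */
      return 0;
    }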

Thoughts?

jdl
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] VPP performance degradation with multiple nic polling

2017-02-10 Thread yusuf khan
Hi,

I am testing VPP performance for L3 routing. I am pumping traffic from
MoonGen, which is sending packets at 10 Gbps line rate with an 84-byte
packet size.
If I start VPP with a single worker thread (in addition to the main
thread), VPP is able to route almost at line rate. Almost, because I see
some drops at the receive side of the NIC.
The average vectors per node is 97 in this case.

Success case stats from moongen below...

Thread 1 vpp_wk_0 (lcore 11)
Time 122.6, average vectors/node 96.78, last 128 main loops 12.00 per node
256.00
  vector rates in 3.2663e6, out 3.2660e6, drop 1.6316e-2, punt 0.e0
Moongen output:
[Device: id=5] TX: 11.57 Mpps, 8148 Mbit/s (1 Mbit/s with framing)
[Device: id=6] RX: 11.41 Mpps, 8034 Mbit/s (9860 Mbit/s with framing)


But when I start VPP with 2 worker threads, each polling a separate NIC,
I see the throughput drop by almost 40%! The other thread is not
receiving any packets; it is just polling an idle NIC, yet it impacts the
other thread. Is polling the PCI bus causing contention? What could be
the reason? In this case the average vectors per node is 256! Some
excerpts below...

Thread 2 vpp_wk_1 (lcore 24)
Time 70.9, average vectors/node 256.00, last 128 main loops 12.00 per node
256.00
  vector rates in 7.2937e6, out 7.2937e6, drop 0.e0, punt 0.e0
Moongen output:
[Device: id=5] TX: 11.49 Mpps, 8088 Mbit/s (9927 Mbit/s with framing)
[Device: id=6] RX: 7.34 Mpps, 5167 Mbit/s (6342 Mbit/s with framing)

One more piece of information: it's a dual-port 82599ES NIC on a PCIe 2.0
x8 bus.
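
For reference, the relevant startup.conf pieces for this setup look
roughly like this (a sketch; the worker cores match the thread names
above, the PCI addresses are illustrative):

    cpu {
      main-core 1
      corelist-workers 11,24      # one worker per NIC port
    }
    dpdk {
      dev 0000:02:00.0            # port polled by vpp_wk_0
      dev 0000:02:00.1            # port polled by vpp_wk_1
    }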

Thanks,
Yusuf
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] VPP cannot find interface QLogic 57810

2017-02-10 Thread Martin Šuňal
I've just found that VPP has a problem with the QLogic interface.

Any idea whether it is a problem in VPP or in DPDK?
Is it something that can be easily fixed?

I am thinking of trying a different version of the NIC firmware...

root@frinxblade16:~# service vpp status
* vpp.service - vector packet processing engine
   Loaded: loaded (/lib/systemd/system/vpp.service; enabled; vendor preset: 
enabled)
   Active: active (running) since Fri 2017-02-10 16:41:32 CET; 1min 22s ago
  Process: 3503 ExecStartPre=/sbin/modprobe igb_uio (code=exited, 
status=0/SUCCESS)
  Process: 3484 ExecStartPre=/bin/rm -f /dev/shm/db /dev/shm/global_vm 
/dev/shm/vpe-api (code=exited, status=0/SUCCESS)
Main PID: 3521 (vpp_main)
Tasks: 3
   Memory: 36.0M
  CPU: 1min 21.730s
   CGroup: /system.slice/vpp.service
   `-3521 /usr/bin/vpp -c /etc/vpp/startup.conf

Feb 10 16:41:32 frinxblade16 systemd[1]: Starting vector packet processing 
engine...
Feb 10 16:41:32 frinxblade16 systemd[1]: Started vector packet processing 
engine.
Feb 10 16:41:32 frinxblade16 vpp[3521]: vlib_plugin_early_init:213: plugin path 
/usr/lib/vpp_plugins
Feb 10 16:41:32 frinxblade16 vpp[3521]: /usr/bin/vpp[3521]: 
dpdk_bind_devices_to_uio:871: Unsupported Ethernet PCI device 0x14e4:0x168e 
found at PCI address :01:00.1
Feb 10 16:41:32 frinxblade16 /usr/bin/vpp[3521]: dpdk_bind_devices_to_uio:871: 
Unsupported Ethernet PCI device 0x14e4:0x168e found at PCI address :01:00.1
Feb 10 16:41:32 frinxblade16 vpp[3521]: EAL: Detected 56 lcore(s)
Feb 10 16:41:32 frinxblade16 vpp[3521]: EAL: No free hugepages reported in 
hugepages-1048576kB
Feb 10 16:41:32 frinxblade16 vpp[3521]: EAL: Probing VFIO support...
Feb 10 16:41:32 frinxblade16 vnet[3521]: EAL: Probing VFIO support...
Feb 10 16:41:32 frinxblade16 vnet[3521]: dpdk_lib_init:304: DPDK drivers found 
no ports...

Thank you,
Martin Šuňal
Technical Leader

Frinx s.r.o.
Mlynské Nivy 48 / 821 09 Bratislava / Slovakia
+421 2 20 91 01 41 / msu...@frinx.io / 
www.frinx.io
[frinx_logo]

From: Martin Šuňal
Sent: Friday, February 10, 2017 12:21 PM
To: 'vpp-dev@lists.fd.io' 
Subject: VPP cannot find interface QLogic 57810

Hello,

I have a problem: VPP cannot find the QLogic 57810 interface.

I use Ubuntu 16.04 LTS and VPP 17.01, which was installed like this:

echo "deb [trusted=yes] 
https://nexus.fd.io/content/repositories/fd.io.ubuntu.xenial.main/ ./" | sudo 
tee -a /etc/apt/sources.list.d/99fd.io.list

sudo apt update

sudo apt install vpp vpp-lib vpp-dpdk-dkms

I have 2 QLogic interfaces on a server and I want to put interface "eno2" into 
VPP.
Here are some outputs:

root@frinxblade10:~# uname -a
Linux frinxblade10 4.4.0-62-generic #83-Ubuntu SMP Wed Jan 18 14:10:15 UTC 2017 
x86_64 x86_64 x86_64 GNU/Linux

root@frinxblade10:~# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
2: eno1:  mtu 1500 qdisc mq portid 
b4e10f8a6e56 state UP group default qlen 1000
link/ether b4:e1:0f:8a:6e:56 brd ff:ff:ff:ff:ff:ff
inet 10.10.193.20/24 brd 10.10.193.255 scope global eno1
   valid_lft forever preferred_lft forever
inet6 fe80::b6e1:fff:fe8a:6e56/64 scope link
   valid_lft forever preferred_lft forever
3: eno2:  mtu 1500 qdisc noop portid b4e10f8a6e59 state 
DOWN group default qlen 1000
link/ether b4:e1:0f:8a:6e:59 brd ff:ff:ff:ff:ff:ff

root@frinxblade10:~# ethtool -i eno2
driver: bnx2x
version: 1.712.30-0
firmware-version: FFV7.12.19 bc 7.12.5
expansion-rom-version:
bus-info: :01:00.1
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes

root@frinxblade10:~# lshw -class network -businfo
Bus info  Device  Class  Description

pci@:01:00.0  eno1networkNetXtreme 
II BCM57810 10 Gigabit Ethernet
pci@:01:00.1  eno2networkNetXtreme 
II BCM57810 10 Gigabit Ethernet

vpp# sh pci
Address    Socket  VID:PID    Link Speed   Driver  Product Name
:01:00.0      0    14e4:168e  5.0 GT/s x8  bnx2x   QLogic 57810 10 Gigabit Ethernet
:01:00.1      0    14e4:168e  5.0 GT/s x8  bnx2x   QLogic 57810 10 Gigabit Ethernet

vpp# sh int
  Name   Idx   State  Counter  Count
  local0                           0     down


I also tried to add this into /etc/vpp/startup.conf
dpdk {
  dev :01:00.1
}
and it did not change anything.

No errors in /tmp/vpp.log

Any idea?

Thank you,
Martin Šuňal
Technical Leader

Frinx s.r.o.
Mlynské Nivy 48 / 821 09 Bratislava / Slovakia
+421 2 20 91 01 41 / msu...@frinx.io

[vpp-dev] VPP cannot find interface QLogic 57810

2017-02-10 Thread Martin Šuňal
Hello,

I have a problem: VPP cannot find the QLogic 57810 interface.

I use Ubuntu 16.04 LTS and VPP 17.01, which was installed like this:

echo "deb [trusted=yes] 
https://nexus.fd.io/content/repositories/fd.io.ubuntu.xenial.main/ ./" | sudo 
tee -a /etc/apt/sources.list.d/99fd.io.list

sudo apt update

sudo apt install vpp vpp-lib vpp-dpdk-dkms

I have 2 QLogic interfaces on a server and I want to put interface "eno2" into 
VPP.
Here are some outputs:

root@frinxblade10:~# uname -a
Linux frinxblade10 4.4.0-62-generic #83-Ubuntu SMP Wed Jan 18 14:10:15 UTC 2017 
x86_64 x86_64 x86_64 GNU/Linux

root@frinxblade10:~# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
2: eno1:  mtu 1500 qdisc mq portid 
b4e10f8a6e56 state UP group default qlen 1000
link/ether b4:e1:0f:8a:6e:56 brd ff:ff:ff:ff:ff:ff
inet 10.10.193.20/24 brd 10.10.193.255 scope global eno1
   valid_lft forever preferred_lft forever
inet6 fe80::b6e1:fff:fe8a:6e56/64 scope link
   valid_lft forever preferred_lft forever
3: eno2:  mtu 1500 qdisc noop portid b4e10f8a6e59 state 
DOWN group default qlen 1000
link/ether b4:e1:0f:8a:6e:59 brd ff:ff:ff:ff:ff:ff

root@frinxblade10:~# ethtool -i eno2
driver: bnx2x
version: 1.712.30-0
firmware-version: FFV7.12.19 bc 7.12.5
expansion-rom-version:
bus-info: :01:00.1
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes

root@frinxblade10:~# lshw -class network -businfo
Bus info  Device  Class  Description

pci@:01:00.0  eno1networkNetXtreme II BCM57810 10 Gigabit 
Ethernet
pci@:01:00.1  eno2networkNetXtreme II BCM57810 10 Gigabit 
Ethernet

vpp# sh pci
Address    Socket  VID:PID    Link Speed   Driver  Product Name
:01:00.0      0    14e4:168e  5.0 GT/s x8  bnx2x   QLogic 57810 10 Gigabit Ethernet
:01:00.1      0    14e4:168e  5.0 GT/s x8  bnx2x   QLogic 57810 10 Gigabit Ethernet

vpp# sh int
  Name   Idx   State  Counter  Count
  local0                           0     down


I also tried to add this into /etc/vpp/startup.conf
dpdk {
  dev :01:00.1
}
and it did not change anything.
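
(For comparison, the fuller form of that stanza, with the uio driver
pinned explicitly -- a sketch, assuming the igb_uio module:

    dpdk {
      uio-driver igb_uio
      dev :01:00.1
    }
)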

No errors in /tmp/vpp.log

Any idea?

Thank you,
Martin Šuňal
Technical Leader

Frinx s.r.o.
Mlynské Nivy 48 / 821 09 Bratislava / Slovakia
+421 2 20 91 01 41 / msu...@frinx.io / 
www.frinx.io
[frinx_logo]

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] Vpp in a container

2017-02-10 Thread Raghav Kaushik (rakaushi)
Hi Folks,

I’m trying to find some data about pros and cons of running VPP in a container.

Has this been tried before? Are there some performance number comparisons
available?

Any pointers will be much appreciated.

Thanks,
Raghav
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Failing Out-of-tree Builds

2017-02-10 Thread Jon Loeliger
On Thu, Feb 9, 2017 at 11:16 PM, Akshaya Nadahalli (anadahal) <
anada...@cisco.com> wrote:

> Hi Jon,
>
> fib_urpf_list.h needs to included inside the source file and need not be
> installed in /usr/include. Thanks for raising this. I will send out a patch
> for this.
>
> Regards,
> Akshaya N
>

 Awesome!  Thank you!

jdl
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] libpneum compilation flags

2017-02-10 Thread Burt Silverman
Hi Gabriel,

I do not fully understand the mechanisms for all processors, and that
being the case, perhaps you can add a lot of comments to configure.ac. I
believe that the getconf command uses information from
glibc/sysdeps/x86/cacheinfo.c, which in turn uses information from the
processor's CPUID instruction, plus tabulated information based on the
same, for the Intel x86 case. I am under the impression that there is no
ARM support, as there is none under glibc/sysdeps, and based upon
https://bugzilla.redhat.com/show_bug.cgi?id=1190638. (Of course, I am
referring to native compilation, and I don't have an ARM here to test.)

The bottom line is that, in my opinion, it would not hurt to add some
hints and clues as comments in configure.ac, so that somebody has a
reasonably easy chance of figuring out where the settings originate, for
all interesting cases. And remember that the reader should not have to
dig through git commit messages for that purpose.

Thanks, Gabriel.

Burt

On Fri, Feb 10, 2017 at 5:52 AM, Gabriel Ganne 
wrote:

> Hi,
>
>
> I am currently working on a patch to auto-detect the cache line size with
> a m4 macro in autoconf (https://gerrit.fd.io/r/#/c/4576/)
>
> Part of it updates cache.h tests so that it raises an error if the cache
> line length is undefined.
>
>
> I am stuck at the libpneum compilation: it's a python module, so it has
> its compilation flags defined in setup.py.
>
> This means that the vpp flags are not propagated to the libpneum, and I
> believe this is a bug.
>
> I think the python api and the libpneum will get big changes soon anyway
> ...
>
>
> How do you think I should handle this ?
>
>
> Best regards,
>
>
> Gabriel Ganne
>
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev
>
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] [FD.io Helpdesk #36502] Jenkins jobs are not started

2017-02-10 Thread Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco)
Hello Vanessa,

Thank you very much.

Regards,
Jan

-Original Message-
From: Vanessa Valderrama via RT [mailto:fdio-helpd...@rt.linuxfoundation.org] 
Sent: Friday, February 10, 2017 15:10
To: Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco) 
Cc: csit-...@lists.fd.io; vpp-dev@lists.fd.io
Subject: [FD.io Helpdesk #36502] Jenkins jobs are not started

Jan,

This issue has been resolved.  Jenkins minions are building as expected.  The 
minions in a stuck build status have been removed.  The vendor is still 
performing a root cause analysis.  Again we apologize for the inconvenience.

Thank you,
Vanessa

On Fri Feb 10 04:23:41 2017, valderrv wrote:
> Jan,
> 
> We are aware of the issue.  There is an issue with the vendor 
> affecting all tenants.  We've opened a high priority ticket with the 
> vendor.  I will update as soon as we have more details.
> 
> Thank you,
> Vanessa
> 
> 
> On Fri Feb 10 03:56:23 2017, jgel...@cisco.com wrote:
> > Hello,
> >
> > No new Jenkins job has started in the last hour and the build queue
> > is increasing. Could you please have a look at it?
> >
> > Thanks,
> > Jan



___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


[vpp-dev] [FD.io Helpdesk #36502] Jenkins jobs are not started

2017-02-10 Thread Vanessa Valderrama via RT
Jan,

This issue has been resolved.  Jenkins minions are building as
expected.  The minions in a stuck build status have been removed.  The
vendor is still performing a root cause analysis.  Again we apologize
for the inconvenience.

Thank you,
Vanessa

On Fri Feb 10 04:23:41 2017, valderrv wrote:
> Jan,
> 
> We are aware of the issue.  There is an issue with the vendor
> affecting all tenants.  We've opened a high priority ticket with the
> vendor.  I will update as soon as we have more details.
> 
> Thank you,
> Vanessa
> 
> 
> On Fri Feb 10 03:56:23 2017, jgel...@cisco.com wrote:
> > Hello,
> >
> > No new Jenkins job has started in the last hour and the build queue
> > is increasing. Could you please have a look at it?
> >
> > Thanks,
> > Jan



___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] libpneum compilation flags

2017-02-10 Thread Gabriel Ganne
Hi Ole,


Sure, I'll wait for your CFFI patch.

It does not seem ready yet, but ping me if you wish me to test it once you 
think it is.


Thanks,


Gabriel Ganne


From: otr...@employees.org 
Sent: Friday, February 10, 2017 12:14:52 PM
To: Gabriel Ganne
Cc: vpp-dev@lists.fd.io; Damjan Marion (damarion)
Subject: Re: [vpp-dev] libpneum compilation flags

Gabriel,

> I am currently working on a patch to auto-detect the cache line size with a 
> m4 macro in autoconf (https://gerrit.fd.io/r/#/c/4576/)
> Part of it updates cache.h tests so that it raises an error if the cache line 
> length is undefined.
>
> I am stuck at the libpneum compilation: it's a python module, so it has its 
> compilation flags defined in setup.py.
> This means that the vpp flags are not propagated to the libpneum, and I 
> believe this is a bug.
> I think the python api and the libpneum will get big changes soon anyway ...
>
> How do you think I should handle this ?

Can you wait a few days? Or try with my CFFI patch?
The idea is to remove the C/Python vpp_api.so completely, and make the Python
module pure Python, with libpneum built entirely from within the VPP make
system.

Cheers,
Ole
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] libpneum compilation flags

2017-02-10 Thread otroan
Gabriel,

> I am currently working on a patch to auto-detect the cache line size with a 
> m4 macro in autoconf (https://gerrit.fd.io/r/#/c/4576/)
> Part of it updates cache.h tests so that it raises an error if the cache line 
> length is undefined.
> 
> I am stuck at the libpneum compilation: it's a python module, so it has its 
> compilation flags defined in setup.py.
> This means that the vpp flags are not propagated to the libpneum, and I 
> believe this is a bug.
> I think the python api and the libpneum will get big changes soon anyway ...
> 
> How do you think I should handle this ?

Can you wait a few days? Or try with my CFFI patch?
The idea is to remove the C/Python vpp_api.so completely, and make the Python
module pure Python, with libpneum built entirely from within the VPP make
system.

Cheers,
Ole


signature.asc
Description: Message signed with OpenPGP
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] libpneum compilation flags

2017-02-10 Thread Gabriel Ganne
Hi,


I am currently working on a patch to auto-detect the cache line size with a m4 
macro in autoconf (https://gerrit.fd.io/r/#/c/4576/)

Part of it updates cache.h tests so that it raises an error if the cache line 
length is undefined.
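
The guard itself is tiny -- something along these lines (a sketch; the
macro name follows vppinfra's cache.h, and the final form in the patch
may differ):

    #ifndef CLIB_LOG2_CACHE_LINE_BYTES
    #error Cache line size undefined (not detected at configure time)
    #endif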


I am stuck at the libpneum compilation: it's a python module, so it has its 
compilation flags defined in setup.py.

This means that the vpp flags are not propagated to the libpneum, and I believe 
this is a bug.

I think the python api and the libpneum will get big changes soon anyway ...


How do you think I should handle this ?


Best regards,


Gabriel Ganne
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] [FD.io Helpdesk #36502] Jenkins jobs are not started

2017-02-10 Thread Vanessa Valderrama via RT
Jan,

We are aware of the issue.  There is an issue with the vendor affecting all 
tenants.  We've opened a high priority ticket with the vendor.  I will update 
as soon as we have more details.

Thank you,
Vanessa


On Fri Feb 10 03:56:23 2017, jgel...@cisco.com wrote:
> Hello,
> 
> No new Jenkins job has started in the last hour and the build queue
> is increasing. Could you please have a look at it?
> 
> Thanks,
> Jan



___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] [discuss] NOTIFICATION: Jenkins queue backed up

2017-02-10 Thread Marek Gradzki -X (mgradzki - PANTHEON TECHNOLOGIES at Cisco)
+ fdio dev lists

-Original Message-
From: discuss-boun...@lists.fd.io [mailto:discuss-boun...@lists.fd.io] On 
Behalf Of Vanessa Valderrama
Sent: 10 lutego 2017 09:40
To: disc...@lists.fd.io; t...@lists.fd.io; infra-steer...@lists.fd.io; 
rgri...@linuxfoundation.org; Andrew Grimberg ; 
Trishan de Lanerolle 
Subject: [discuss] NOTIFICATION: Jenkins queue backed up

The Jenkins queue is backing up because new build instances are not being
instantiated.  I've opened a high priority ticket with the vendor and am
also investigating the issue.  I will provide an update as soon as I have
more information.  I apologize for the inconvenience.

Thank you,
Vanessa

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


[vpp-dev] Jenkins jobs are not started

2017-02-10 Thread Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco)
Hello,

No new Jenkins job has started in the last hour and the build queue is
increasing. Could you please have a look at it?

Thanks,
Jan
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev