[vpp-dev] how to find vlib_buffer_t.data size(capacity) @ runtime?

2017-12-07 Thread Klement Sekera -X (ksekera - PANTHEON TECHNOLOGIES at Cisco)
Hi,

I discovered that the packet generator does not always respect the default 
vlib_buffer_t.data size as defined in buffer.h:

#define VLIB_BUFFER_DATA_SIZE   (2048)

It derives the required buffer size from the individual packet sizes from the 
pcap file - at least that's what happens in 'make test'. In my case it's 256 
bytes.

My question is - what is the easiest way to determine the actual allocated 
vlib_buffer_t.data space at runtime? I want to be able to append some data to a 
buffer but first I would like to make sure that it fits...

Thanks,
Klement


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] Building and running sample plugin

2017-12-07 Thread Kinsella, Ray


You can find previous guidance on how to build it here.

https://docs.fd.io/vpp/17.10/sample_plugin_doc.html

Ray K

On 06/12/2017 19:44, Pradeep Patel (pradpate) wrote:


I am trying to build and run the sample plugin using the make options below. I see 
that the sample plugin .so gets created, but loading fails due to an undefined 
symbol: sample_main. Any pointers will be helpful.


Regards,
Pradeep

>make build SAMPLE_PLUGIN=yes

> make run SAMPLE_PLUGIN=yes

vagrant@localhost:/vpp$ make run SAMPLE_PLUGIN=yes

WARNING: STARTUP_CONF not defined or file doesn't exist.

Running with minimal startup config:  unix { interactive cli-listen 
/run/vpp/cli.sock gid 1000 }


vlib_plugin_early_init:356: plugin path 
/vpp/build-root/install-vpp_debug-native/sample-plugin/lib64/vpp_plugins:/vpp/build-root/install-vpp_debug-native/vpp/lib64/vpp_plugins


load_one_plugin:184: Loaded plugin: acl_plugin.so (Access Control Lists)

load_one_plugin:184: Loaded plugin: dpdk_plugin.so (Data Plane 
Development Kit (DPDK))


load_one_plugin:184: Loaded plugin: flowprobe_plugin.so (Flow per Packet)

load_one_plugin:184: Loaded plugin: gtpu_plugin.so (GTPv1-U)

load_one_plugin:184: Loaded plugin: ila_plugin.so (Identifier-locator 
addressing for IPv6)


load_one_plugin:184: Loaded plugin: ioam_plugin.so (Inbound OAM)

load_one_plugin:114: Plugin disabled (default): ixge_plugin.so

load_one_plugin:184: Loaded plugin: lb_plugin.so (Load Balancer)

load_one_plugin:184: Loaded plugin: libsixrd_plugin.so (IPv6 Rapid 
Deployment on IPv4 Infrastructure (RFC5969))


load_one_plugin:184: Loaded plugin: memif_plugin.so (Packet Memory 
Interface (experimetal))


load_one_plugin:184: Loaded plugin: nat_plugin.so (Network Address 
Translation)


load_one_plugin:184: Loaded plugin: pppoe_plugin.so (PPPoE)

load_one_plugin:142: 
/vpp/build-root/install-vpp_debug-native/sample-plugin/lib64/vpp_plugins/sample_plugin.so: 
undefined symbol: sample_main


load_one_plugin:143: Failed to load plugin 'sample_plugin.so'

Aborted

Makefile:434: recipe for target 'run' failed

make: *** [run] Error 134




___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev



Re: [vpp-dev] i40e in a sorry state?

2017-12-07 Thread Kinsella, Ray

Perfect, happy to have helped.


On 06/12/2017 13:46, Jon Loeliger wrote:

that something else was happening.  I dug a little deeper and found an


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] The feasibility of C++ gRPC with libvcl_ldpreload

2017-12-07 Thread Keith Burns
Peter,

As you might be aware we've been focused on showcasing VPP integrated with
the Ligato project for KubeCon this week.

Once folks are back next week there's quite a bit of technical debt we need
to address.

There's also interest from other parties to use LIBVCL (not LDP per se),
and I suspect after KubeCon sparks some interest there'll be more requests.

I'm thinking the smartest way forward is to manage this via JIRA.

If you could raise a feature request and assign it to me, we can then put
everything in one place and prioritise/have a repository for folks who want
to help to pick up tasks.

On Dec 6, 2017 12:25 PM, "Peter Palmár"  wrote:

> Hi,
>
> we are testing the VPP TCP stack by using the following combination: A C++
> application based on C++ gRPC with libvcl_ldpreload.
>
> We use greeter_server and greeter_client from grpc/examples/cpp/helloworld
> taken from https://github.com/grpc/grpc.
>
> The server and client use the eventfd()/eventfd2() system call which is
> not implemented in libvcl_ldpreload;
> this seems to be a reason why the communication between the server and
> client does not work.
>
> Could you please let me know whether I am right and if so, whether/when an
> implementation of eventfd is planned to be added to libvcl_ldpreload?
>
> The attached file contains the client output.
>
> Regards,
> Peter
>
>
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev
>
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] how to find vlib_buffer_t.data size(capacity) @ runtime?

2017-12-07 Thread Dave Barach (dbarach)
Interpret b->current_data, b->current_length, the buffer freelist index, and 
the related vlib_buffer_free_list_t structure. 

In most cases, b->packet_data is actually VLIB_BUFFER_DATA_SIZE (2048) bytes 
long. Look at the related vlib_buffer_free_list_t to know for sure. 

current_data is a SIGNED offset into b->packet_data. It can be negative by as 
much as VLIB_BUFFER_PRE_DATA_SIZE. Typically, device drivers write the first 
octet of packet data into b->packet_data[0], but devices / device-driver 
writers may place data at arbitrary [positive] offsets into b->packet_data.

HTH... Dave

-Original Message-
From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Klement Sekera -X (ksekera - PANTHEON TECHNOLOGIES at Cisco)
Sent: Thursday, December 7, 2017 8:06 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] how to find vlib_buffer_t.data size(capacity) @ runtime?

Hi,

I discovered that the packet generator does not always respect the default 
vlib_buffer_t.data size as defined in buffer.h:

#define VLIB_BUFFER_DATA_SIZE   (2048)

It derives the required buffer size from the individual packet sizes from the 
pcap file - at least that's what happens in 'make test'. In my case it's 256 
bytes.

My question is - what is the easiest way to determine the actual allocated 
vlib_buffer_t.data space at runtime? I want to be able to append some data to a 
buffer but first I would like to make sure that it fits...

Thanks,
Klement


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


[vpp-dev] Is there a planned VPP 17.10.01?

2017-12-07 Thread Billy McFall
I see a handful of merges on stable/1710 and was just looking to see if
there is a scheduled date or plan for a VPP 17.10.01 release? OR are these
merges just there in-case there is a VPP 17.10.01 release in the future?

Thanks,
Billy McFall

-- 
*Billy McFall*
Networking Group
CTO Office
*Red Hat*
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] ipsec gre tunnel forwarding fail

2017-12-07 Thread 薛欣颖

Hi,

We have a problem when testing an IPsec GRE tunnel. I want to send a stream into 
VPP and forward it via an ipsec-gre interface, but VPP hangs. Maybe our 
configuration is wrong, or something is wrong with ipsec-gre forwarding; can you help?

Our configuration is as follows:

 VPP1
 create host-interface name eth2 mac 00:0c:29:6d:b0:82
  set interface state host-eth2 up
  set interface ip address host-eth2 12.1.1.1/24
  ipsec sa add 10 spi 1001 esp crypto-alg aes-cbc-128 crypto-key 
4a506a794f574265564551694d653768 integ-alg sha1-96 integ-key 
4339314b55523947594d6d3547666b45764e6a58
  ipsec sa add 20 spi 1000 esp crypto-alg aes-cbc-128 crypto-key 
4a506a794f574265564551694d653768 integ-alg sha1-96 integ-key 
4339314b55523947594d6d3547666b45764e6a58 
  create ipsec gre tunnel src 12.1.1.1 dst 12.1.1.2 local-sa 10 remote-sa 20
  set interface state ipsec-gre0 up
  create bridge-domain 1
  create host-interface name eth3 mac 00:0c:29:6d:b0:8c
  set interface state host-eth3 up
  set interface l2 bridge host-eth3 1
  set interface l2 bridge ipsec-gre0 1
  
  
  VPP2
create host-interface name eth2 mac 2c:53:4a:03:93:31
  create host-interface name eth3 mac 08:57:00:e8:b9:b5
  set interface state host-eth2 up
  set interface state host-eth3 up
  set interface ip address host-eth3 12.1.1.2/24
  ipsec sa add 10 spi 1001 esp crypto-alg aes-cbc-128 crypto-key 
4a506a794f574265564551694d653768 integ-alg sha1-96 integ-key 
4339314b55523947594d6d3547666b45764e6a58
  ipsec sa add 20 spi 1000 esp crypto-alg aes-cbc-128 crypto-key 
4a506a794f574265564551694d653768 integ-alg sha1-96 integ-key 
4339314b55523947594d6d3547666b45764e6a58 
  create ipsec gre tunnel src 12.1.1.2 dst 12.1.1.1 local-sa 20 remote-sa 10 
  set interface state ipsec-gre0 up
  create bridge-domain 1 
  set interface l2 bridge  host-eth2 1 
  set interface l2 bridge  ipsec-gre0  1 


When I send a stream into VPP1 host-eth3, VPP1 hangs, and the call stack is as 
follows:

VPP# /home/li/vpp18.01/build-data/../src/vnet/fib/ip4_fib.h:107 (ip4_fib_get) 
assertion `! pool_is_free (ip4_main.v4_fibs, _e)' fails
(gdb) c
Continuing.

(gdb) bt
#0  0x2b7e3c0a0c37 in __GI_raise (sig=sig@entry=6) at 
../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1  0x2b7e3c0a4028 in __GI_abort () at abort.c:89
#2  0x00406e5b in os_panic () at 
/home/li/vpp18.01/build-data/../src/vpp/vnet/main.c:294
#3  0x2b7e3b9beb98 in debugger () at 
/home/li/vpp18.01/build-data/../src/vppinfra/error.c:84
#4  0x2b7e3b9bef9f in _clib_error (how_to_die=2, function_name=0x0, 
line_number=0, 
fmt=0x2b7e3b5baf40 "%s:%d (%s) assertion `%s' fails") at 
/home/li/vpp18.01/build-data/../src/vppinfra/error.c:143
#5  0x2b7e3b1395a6 in ip4_fib_get (index=1) at 
/home/li/vpp18.01/build-data/../src/vnet/fib/ip4_fib.h:107
#6  0x2b7e3b13afea in ip4_lookup_inline (vm=0x2b7e3af457e0 
, node=0x2b7e3d1bb580, frame=0x2b7e3d63dbc0, 
lookup_for_responses_to_locally_received_packets=0) at 
/home/li/vpp18.01/build-data/../src/vnet/ip/ip4_forward.c:353
#7  0x2b7e3b13b483 in ip4_lookup (vm=0x2b7e3af457e0 , 
node=0x2b7e3d1bb580, frame=0x2b7e3d63dbc0)
at /home/li/vpp18.01/build-data/../src/vnet/ip/ip4_forward.c:465
#8  0x2b7e3acc6df0 in dispatch_node (vm=0x2b7e3af457e0 , 
node=0x2b7e3d1bb580, type=VLIB_NODE_TYPE_INTERNAL, 
dispatch_state=VLIB_NODE_STATE_POLLING, frame=0x2b7e3d63dbc0, 
last_time_stamp=10282141928898)
at /home/li/vpp18.01/build-data/../src/vlib/main.c:1010
#9  0x2b7e3acc73d3 in dispatch_pending_node (vm=0x2b7e3af457e0 
, pending_frame_index=9, 
last_time_stamp=10282141928898) at 
/home/li/vpp18.01/build-data/../src/vlib/main.c:1160
#10 0x2b7e3acc9583 in vlib_main_or_worker_loop (vm=0x2b7e3af457e0 
, is_main=1)
at /home/li/vpp18.01/build-data/../src/vlib/main.c:1629
#11 0x2b7e3acc9632 in vlib_main_loop (vm=0x2b7e3af457e0 )
at /home/li/vpp18.01/build-data/../src/vlib/main.c:1648
#12 0x2b7e3acc9d7b in vlib_main (vm=0x2b7e3af457e0 , 
input=0x2b7e3d034fb0)
at /home/li/vpp18.01/build-data/../src/vlib/main.c:1806
#13 0x2b7e3ad0c321 in thread0 (arg=47821154965472) at 
/home/li/vpp18.01/build-data/../src/vlib/unix/main.c:617
#14 0x2b7e3b9d3570 in clib_calljmp () at 
/home/li/vpp18.01/build-data/../src/vppinfra/longjmp.S:128
#15 0x7ffcb902dd70 in ?? ()
#16 0x2b7e3ad0c7cb in vlib_unix_main (argc=4, argv=0x7ffcb902f008) at 
/home/li/vpp18.01/build-data/../src/vlib/unix/main.c:681
#17 0x00406b37 in main (argc=4, argv=0x7ffcb902f008) at 
/home/li/vpp18.01/build-data/../src/vpp/vnet/main.c:233
(gdb) 

Thanks,
Xyxue


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] memory issues

2017-12-07 Thread 薛欣颖

Hi,

Thank you for your reply. Another question: after deleting the static routes, 
the RSS does not fall. Is this normal?

Thanks,
Xyxue


 
From: Dave Barach (dbarach)
Date: 2017-12-06 20:38
To: Luke, Chris; 薛欣颖; vpp-dev
Subject: RE: [vpp-dev] memory issues
Before we crank up the vppinfra memory leakfinder, etc. etc.: cat /proc/`pidof 
vpp`/maps and have a hard stare at the output.
 
Configure one step at a time, looking for significant changes in the address 
space layout. 
 
HTH… Dave
 
From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Luke, Chris
Sent: Tuesday, December 5, 2017 9:58 PM
To: 薛欣颖 ; vpp-dev 
Subject: Re: [vpp-dev] memory issues
 
I agree 5g is large, but I do not think this is the FIB. The default heap maxes 
out much sooner than that. Something else is going on.
 
For DPDK, “show dpdk buffer” and otherwise “show physmem”.
 
Chris.
 
From: 薛欣颖 
Date: Tuesday, December 5, 2017 at 20:06
To: Chris Luke , vpp-dev 
Subject: Re: Re: [vpp-dev] memory issues
 
 
Hi Chris,

I see what you mean. I have two other questions:
1. 200k static routes using 5G of memory still seems large; how can I configure it 
to use less physical memory?
2. How can I check the packet buffer memory?

BTW, do you have a test similar to this that measures how much memory 200k static 
routes use?

Thanks,
Xyxue


 
From: Luke, Chris
Date: 2017-12-05 21:43
To: 薛欣颖; vpp-dev
Subject: Re: [vpp-dev] memory issues
You’re misreading top. “Virt” only means the virtual memory footprint of the 
process. This includes unused heap, shared libraries, anonymous mmap() regions 
etc. “RSS” is the resident-in-memory size. It’s actually using 5G.
 
“show memory” also only shows the heap usage, it does not include packet buffer 
memory.
 
Chris.
 
From:  on behalf of 薛欣颖 
Date: Tuesday, December 5, 2017 at 00:51
To: vpp-dev 
Subject: [vpp-dev] memory issues
 
 
Hi guys,

I am using vpp v18.01-rc0~241-g4c9f2a8.
I configured 200K static routes. When I run 'show memory' in VPP, it reports 
'150+k used', but on my machine the process uses almost 15G. After deleting the 
static routes, it is using almost 16G.
More info is shown below:

VPP# show memory 
Thread 0 vpp_main
heap 0x7fffb58e9000, 1076983 objects, 110755k of 151671k used, 15386k free, 
13352k reclaimed, 16829k overhead, 1048572k capacity
User heap index=0:
heap 0x7fffb58e9000, 1076984 objects, 110755k of 151671k used, 15386k free, 
13352k reclaimed, 16829k overhead, 1048572k capacity
User heap index=1:
heap 0x77ed4000, 2 objects, 128k of 130k used, 92 free, 0 reclaimed, 1k 
overhead, 1020k capacity
User heap index=2:
heap 0x7fffb1e28000, 2 objects, 512k of 514k used, 92 free, 0 reclaimed, 1k 
overhead, 8188k capacity
User heap index=3:
heap 0x7fffb1628000, 2 objects, 512k of 514k used, 92 free, 0 reclaimed, 1k 
overhead, 8188k capacity
User heap index=4:
heap 0x7fffaf628000, 2 objects, 512k of 514k used, 92 free, 0 reclaimed, 1k 
overhead, 32764k capacity
User heap index=5:
heap 0x7fffaf528000, 2 objects, 8k of 10k used, 92 free, 0 reclaimed, 1k 
overhead, 1020k capacity
User heap index=6:
heap 0x7fffaf428000, 2 objects, 8k of 10k used, 92 free, 0 reclaimed, 1k 
overhead, 1020k capacity
User heap index=7:
heap 0x7fffaf328000, 2 objects, 120k of 122k used, 92 free, 0 reclaimed, 1k 
overhead, 1020k capacity
User heap index=8:
heap 0x7fffaf228000, 2 objects, 120k of 122k used, 92 free, 0 reclaimed, 1k 
overhead, 1020k capacity
User heap index=9:
heap 0x7fffa7228000, 2 objects, 8k of 10k used, 92 free, 0 reclaimed, 1k 
overhead, 131068k capacity
User heap index=10:
heap 0x7fff9f228000, 2 objects, 8k of 10k used, 92 free, 0 reclaimed, 1k 
overhead, 131068k capacity
User heap index=11:
heap 0x7fff9b228000, 2 objects, 16k of 18k used, 92 free, 0 reclaimed, 1k 
overhead, 65532k capacity
User heap index=12:
heap 0x7fff9b028000, 2 objects, 256k of 258k used, 92 free, 0 reclaimed, 1k 
overhead, 2044k capacity
User heap index=13:
heap 0x7fff9ae28000, 2 objects, 240k of 242k used, 92 free, 0 reclaimed, 1k 
overhead, 2044k capacity
User heap index=14:
heap 0x7fff9ad28000, 5 objects, 8k of 10k used, 168 free, 0 reclaimed, 1k 
overhead, 1020k capacity
User heap index=15:
heap 0x7fff9ac28000, 5 objects, 8k of 10k used, 168 free, 0 reclaimed, 1k 
overhead, 1020k capacity
User heap index=16:
heap 0x7fff9ab28000, 2 objects, 8k of 10k used, 92 free, 0 reclaimed, 1k 
overhead, 1020k capacity
User heap index=17:
heap 0x7fff9a128000, 2 objects, 1k of 3k used, 88 free, 0 reclaimed, 1k 
overhead, 10236k capacity
User heap index=18:
heap 0x7fff9a028000, 2 objects, 8k of 10k used, 92 free, 0 reclaimed, 1k 
overhead, 1020k capacity
User heap index=19:
heap 0x7fff99f28000, 2 objects, 8k of 10k used, 92 free, 0 reclaimed, 1k 
overhead, 1020k capacity
User heap index=20:
heap 0x7fff99e28000, 2 objects, 2k of 4k used, 92 free, 0 reclaimed, 1k 
overhead, 1020k capacity
  
User heap index=21:   

Re: [vpp-dev] Is there a planned VPP 17.10.01?

2017-12-07 Thread Florin Coras
The second option :-)

Cheers, 
Florin

> On Dec 7, 2017, at 2:27 PM, Billy McFall  wrote:
> 
> I see a handful of merges on stable/1710 and was just looking to see if there 
> is a scheduled date or plan for a VPP 17.10.01 release? OR are these merges 
> just there in-case there is a VPP 17.10.01 release in the future?
> 
> Thanks,
> Billy McFall
> 
> -- 
> Billy McFall 
> Networking Group 
> CTO Office
> Red Hat
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Build error when trying to cross-compile vpp

2017-12-07 Thread nikhil ap
Hi Dave,

It works if I run "make is_build_tool=yes tools-install" in .../build-root,
but if I specify the platform, I still see the same issue when I try to
cross-compile the tools with "make PLATFORM=x86_64 TAG=x86_64_debug
is_build_tool=yes tools-install".

It is hitting this check in ../src/configure.ac:

AM_COND_IF([CROSSCOMPILE],
[
  AC_PATH_PROG([VPPAPIGEN], [vppapigen], [no])
  if test "$VPPAPIGEN" = "no"; then
AC_MSG_ERROR([Externaly built vppapigen is needed when
cross-compiling...])
  fi
],[


On Tue, Dec 5, 2017 at 8:53 PM, Dave Barach (dbarach) 
wrote:

> See also “bootstrap.sh...”
>
>
>
> $ make V=0 is_build_tool=yes tools-install
>
>
>
> Thanks… Dave
>
>
>
> *From:* nikhil ap [mailto:niks3...@gmail.com]
> *Sent:* Tuesday, December 5, 2017 9:11 AM
> *To:* Dave Barach (dbarach) 
> *Cc:* vpp-dev@lists.fd.io
>
> *Subject:* Re: [vpp-dev] Build error when trying to cross-compile vpp
>
>
>
> Hi Dave,
>
>
>
> I added a file x86_64.mk in .../build-data/plaforms/ with the following
> content:
>
>
>
> x86_64_arch = x86_64
>
> x86_64_os = rumprun-netbsd
>
> x86_64_target = x86_64-rumprun-netbsd
>
> x86_64_native_tools = vppapigen
>
> x86_64_uses_dpdk = yes
>
>
>
> and in the TLD I did a "make PLATFORM=x86_64 TAG=x86_64_debug bootstrap"
> but I am still seeing that vppapigen is not getting built. Any clues?
>
>
>
> Thanks,
>
> Nikhil
>
>
>
>
>
> On Tue, Dec 5, 2017 at 7:05 PM, Dave Barach (dbarach) 
> wrote:
>
> Dear Nikhil,
>
>
>
> The first step in adding a new platform: construct .../build-data/plaforms/
> xxx.mk. There are several examples.
>
>
>
> Note the rule:
>
>
>
> xxx_native_tools = vppapigen
>
>
>
> This rule builds the missing build-host tool.
>
>
>
> Then:
>
>
>
> “make PLATFORM=xxx TAG=xxx_debug vpp-install” or similar.
>
>
>
> Caveat: the main Makefile “.../build-root/Makefile” is non-trivial.
>
>
>
> In the past, we’ve used it to self-compile full toolchains, and to use the
> resulting toolchains to cross-compile embedded Linux images with squashfs /
> unionfs disk images.
>
>
>
> All of the mechanisms are there to do interesting things, but since we
> seldom do those things anymore you can expect a certain amount of trouble.
>
>
>
> Thanks… Dave
>
>
>
> *From:* vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] *On
> Behalf Of *nikhil ap
> *Sent:* Tuesday, December 5, 2017 6:05 AM
> *To:* vpp-dev@lists.fd.io
> *Subject:* Re: [vpp-dev] Build error when trying to cross-compile vpp
>
>
>
> After a bit more digging around the make file, I did this:
>
>
>
>  make PLATFORM=x86_64 x86_64_os=rumprun-netbsd bootstrap
>
>
>
> checking build system type... x86_64-pc-linux-gnu
>
> checking host system type... x86_64-rumprun-netbsd
>
> checking whether we are cross compiling... yes
>
>
>
> However, I am still seeing this error:
>
>
>
> checking for vppapigen... no
>
> configure: error: Externaly built vppapigen is needed when
> cross-compiling...
>
> Makefile:635: recipe for target 'tools-configure' failed
>
> make[1]: *** [tools-configure] Error 1
>
>
>
> What is the issue?
>
>
>
> On Tue, Dec 5, 2017 at 3:55 PM, nikhil ap  wrote:
>
> Hi All,
>
>
>
> I am trying to cross-compile vpp. The make doesn't expose a way to pass
> the --host parameter required to configure and build using cross
> compilation.
>
>
>
> Initially, I did the following:
>
>
>
> CC=x86_64-rumprun-netbsd-gcc make bootstrap, but I saw the following error
>
>
>
> *If you meant to cross compile, use `--host'.*
>
> *See `config.log' for more details*
>
>
>
> As a work-around based on the config.log, I did this following
>
>
>
> /src/configure (Stripped other output ) --build=x86_64-linux-gnu
> --host=x86_64-rumprun-netbsd --target=x86_64-linux-gnu
>
>
>
> However,  I saw the following error:
>
> checking for vppapigen... no
>
> configure: error: Externaly built vppapigen is needed when
> cross-compiling...
>
>
>
> Is there a way to cleanly cross-compile?
>
>
>
>
> --
>
> Regards,
>
> Nikhil
>
>
>
>
>
> --
>
> Regards,
>
> Nikhil
>
>
>
>
>
> --
>
> Regards,
>
> Nikhil
>



-- 
Regards,
Nikhil
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev