Re: [vpp-dev] VM unable to boot after restart

2018-01-26 Thread Dipesh Gorashia (dipeshg)
I am able to boot successfully after increasing the VM memory from 2 GB to 4 GB.
AFAIK, I did not change any hugepage-related configuration.
Is there a way to check whether any of the default configuration has changed, or to 
reset to the “default” configuration?
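
(For reference, the hugepage-related settings in question can be inspected like
this; the sysctl drop-in path assumes the stock VPP packaging:)

   # current hugepage state
   grep -i huge /proc/meminfo
   sysctl vm.nr_hugepages vm.max_map_count

   # defaults installed by the VPP packages (path assumed)
   cat /etc/sysctl.d/80-vpp.conf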

Thanks,
Dipesh


From:  on behalf of "Dipesh Gorashia (dipeshg)" 

Date: Friday, January 26, 2018 at 5:11 PM
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] VM unable to boot after restart

All,

I am unable to reboot my Ubuntu (16.04.3) VM that ran VPP.
The system had to be rebooted because it ran out of memory even though VPP was 
stopped.

The VPP code it ran was pulled earlier this week.
A snippet of the log messages from the terminal is attached.

Has anyone seen this issue before? What should I do to avoid this in the future?

Thanks,
Dipesh


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] recovering from a crash with the C shared memory API

2018-01-26 Thread Florin Coras
Hi Matt, 

I tried reproducing this with vpp + vat. Is this a fair equivalent scenario?

1. Start vpp and attach vpp_api_test and send some msg
2. Restart vpp and stop vat
3. Restart vat and send message. 

The thing is, off of master, this works for me. 
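
In shell terms the scenario above looks roughly like this (binary paths and the
exact test message are illustrative):

   # terminal 1: run vpp (path assumed)
   sudo /usr/bin/vpp -c /etc/vpp/startup.conf

   # terminal 2: attach a shared-memory API client and send a message
   sudo vpp_api_test
   vat# show_version
   vat# quit

   # restart vpp in terminal 1, stop/restart vat in terminal 2,
   # then send the same message again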

Thanks, 
Florin

> On Jan 26, 2018, at 2:31 PM, Matthew Smith  wrote:
> 
> 
> Hi all,
> 
> I have a few applications that use the shared memory API. I’m running these 
> on CentOS 7.4, and starting VPP using systemd. If VPP happens to crash or be 
> intentionally restarted, those applications never seem to recover their API 
> connection. They notice that the original VPP process died and try to call 
> vl_client_disconnect_from_vlib(). That call tries to send API messages to 
> cleanly shut down its connection. The application will time out waiting for a 
> response, write a message like:
> 
> vl_client_disconnect:301: peer unresponsive, give up
> 
> and eventually consider itself disconnected. When it tries to reconnect, it 
> hangs for a while (100 seconds on the last occurrence I checked on) and then 
> prints messages like:
> 
> vl_map_shmem:619: region init fail
> connect_to_vlib_internal:394: vl_client_api map rv -2
> 
> The client keeps on trying and continues seeing those same errors. If the 
> client is restarted, it sees the same errors after restart. It doesn’t 
> recover until VPP is restarted with the client stopped. Once that happens, 
> the client can be started again and successfully connect.
> 
> The VPP systemd service file that is installed with RPMs built via ‘make 
> pkg-rpm' has the following:
> 
> [Service]
> ExecStartPre=-/bin/rm -f /dev/shm/db /dev/shm/global_vm /dev/shm/vpe-api
> 
> When systemd starts VPP, it removes these files which the still-running 
> client applications have run shm_open/mmap on. I am guessing that when those 
> clients try to disconnect with vl_client_disconnect_from_vlib(), they are 
> stomping on something in shared memory that subsequently keeps them from 
> being able to connect. If I comment out that command in the systemd service 
> definition, the problem behavior I described above disappears. The 
> applications write one ‘peer unresponsive’ message and then they reconnect to 
> the API successfully and all is (relatively) well. This also is the case if I 
> don’t start VPP with systemd/systemctl and just run /usr/bin/vpp directly.
> 
> Does anyone have any thoughts on whether it would be ok to remove that 
> command from the systemd service file? Or is there some other better way to 
> deal with VPP crashing from the perspective of a client to the shared memory 
> API?
> 
> Thanks!
> -Matt
> 
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] recovering from a crash with the C shared memory API

2018-01-26 Thread Matthew Smith

Hi all,

I have a few applications that use the shared memory API. I’m running these on 
CentOS 7.4, and starting VPP using systemd. If VPP happens to crash or be 
intentionally restarted, those applications never seem to recover their API 
connection. They notice that the original VPP process died and try to call 
vl_client_disconnect_from_vlib(). That call tries to send API messages to 
cleanly shut down its connection. The application will time out waiting for a 
response, write a message like:

vl_client_disconnect:301: peer unresponsive, give up

and eventually consider itself disconnected. When it tries to reconnect, it 
hangs for a while (100 seconds on the last occurrence I checked on) and then 
prints messages like:

vl_map_shmem:619: region init fail
connect_to_vlib_internal:394: vl_client_api map rv -2

The client keeps on trying and continues seeing those same errors. If the 
client is restarted, it sees the same errors after restart. It doesn’t recover 
until VPP is restarted with the client stopped. Once that happens, the client 
can be started again and successfully connect.

The VPP systemd service file that is installed with RPMs built via ‘make 
pkg-rpm' has the following:

[Service]
ExecStartPre=-/bin/rm -f /dev/shm/db /dev/shm/global_vm /dev/shm/vpe-api

When systemd starts VPP, it removes these files which the still-running client 
applications have run shm_open/mmap on. I am guessing that when those clients 
try to disconnect with vl_client_disconnect_from_vlib(), they are stomping on 
something in shared memory that subsequently keeps them from being able to 
connect. If I comment out that command in the systemd service definition, the 
problem behavior I described above disappears. The applications write one ‘peer 
unresponsive’ message and then they reconnect to the API successfully and all 
is (relatively) well. This also is the case if I don’t start VPP with 
systemd/systemctl and just run /usr/bin/vpp directly.
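
For anyone experimenting with this, the same effect can be had without editing the
packaged unit file by using a systemd drop-in override (a sketch, assuming the unit
is named vpp.service; the empty ExecStartPre= clears the inherited entry rather
than adding a new one):

   # /etc/systemd/system/vpp.service.d/no-shm-cleanup.conf
   [Service]
   ExecStartPre=

   # then reload and restart:
   sudo systemctl daemon-reload
   sudo systemctl restart vpp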

Does anyone have any thoughts on whether it would be ok to remove that command 
from the systemd service file? Or is there some other better way to deal with 
VPP crashing from the perspective of a client to the shared memory API?

Thanks!
-Matt

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] [csit-dev] VPP 18.01 Release artifacts are now available on nexus.fd.io

2018-01-26 Thread Sarkar, Kawshik
I tried to install honeycomb from the debian package and I got this message

honeycomb : Depends: vpp (= 17.04-release) but 18.01-rc0~208-ge695cb4 is to be 
installed
            Depends: vpp-plugins (= 17.04-release) but 18.01-rc0~208-ge69

I am running 18.01.

Can anyone point me to the plugins or files I need to install?
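
(For reference, version skew like this can be inspected with apt; the package names
are taken from the error above and the versions to pin are left as placeholders:)

   apt-cache policy vpp vpp-plugins honeycomb
   # then either install a honeycomb build whose vpp dependency matches the
   # installed vpp, or pin vpp/vpp-plugins to the version honeycomb expects:
   sudo apt-get install vpp=<matching-version> vpp-plugins=<matching-version>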

Br
Kawshik

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of George Zhao
Sent: Thursday, January 25, 2018 1:54 PM
To: Dave Wallace ; vpp-dev@lists.fd.io; 
csit-...@lists.fd.io
Subject: Re: [vpp-dev] [csit-dev] VPP 18.01 Release artifacts are now available 
on nexus.fd.io

Congratulations, way to go.


From: csit-dev-boun...@lists.fd.io 
[mailto:csit-dev-boun...@lists.fd.io] On Behalf Of Dave Wallace
Sent: Wednesday, January 24, 2018 9:23 PM
To: vpp-dev@lists.fd.io; 
csit-...@lists.fd.io
Subject: [csit-dev] VPP 18.01 Release artifacts are now available on nexus.fd.io

Folks,

The VPP 18.01 Release artifacts are now available on nexus.fd.io

The ubuntu.xenial and centos packages can be installed following the recipe on 
the wiki: 
https://wiki.fd.io/view/VPP/Installing_VPP_binaries_from_packages

Thank you to all of the VPP community who have contributed to the 18.01 VPP 
Release.


Elvis has left the building!
-daw-


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] VPP performance tuning guide

2018-01-26 Thread Li, Charlie
Thanks Ray.

Regards,
Charlie Li

-Original Message-
From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Kinsella, Ray
Sent: Thursday, January 25, 2018 10:14 AM
To: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] VPP performance tuning guide

Hi Charlie,


Please see this preso and associated paper from Kubecon - it is a pretty 
comprehensive guide on where to start.

https://wiki.fd.io/images/3/31/Benchmarking-sw-data-planes-Dec5_2017.pdf

Also consider replicating the CSIT test environment locally.
It is very well documented and you can then inspect the best way to benchmark 
VPP etc.
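
As a concrete starting point, the knobs that usually matter for small-packet IP
forwarding live in /etc/vpp/startup.conf; a rough illustration only (the core
numbers, PCI addresses and queue counts below are placeholders, not
recommendations):

   cpu {
     main-core 1
     corelist-workers 2-5
   }
   dpdk {
     dev 0000:05:00.0 { num-rx-queues 4 }
     dev 0000:05:00.1 { num-rx-queues 4 }
     no-multi-seg
   }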

Ray K


On 24/01/2018 21:23, Li, Charlie wrote:
> Hi All,
> 
> I am looking for some guidance on how to tune the VPP performance, 
> specifically how to tune the IP Forwarding performance for small packets.
> 
> I'd appreciate it if someone could point me to some documents or online 
> resources on this topic.
> 
> My setup is simple, just an XL710-QDA2 NIC card with two 40G ports and VPP is 
> configured to forward IP traffic between the two ports.
> 
> Basically I am using the default /etc/vpp/startup.conf (with PCI device 
> address and core numbers modified). The throughput for bi-directional traffic 
> with small packets is far below line rate. Throwing more cores at it does not 
> seem to improve things.
> 
> Regards,
> Charlie Li
> 
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev
> 
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] Supporting vpp releases besides N and N-1

2018-01-26 Thread Ed Warnicke
I think you've hit on the right metric: retire those streams which are no
longer supported.

What do others think?

Ed

On Fri, Jan 26, 2018 at 3:53 PM Ed Kern (ejk)  wrote:

>
> I'd like to clean up the Jenkins ‘stream’ list in the vpp section by removing
> unsupported releases.
> Note: it's more than just a cosmetic problem.
>
> I could go into more detail but it comes down to “If they are not
> supported, why have them included in the build infra?”
>
> This would mean the removal from the stream of:
> 1707
> 1704
> 1701
> 1609
> 1606
>
> Anyone want to stand up and represent why these should remain?
>
> Ed
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] VPP Graph Optimization

2018-01-26 Thread David Bainbridge
Very helpful and interesting. Thank you.

On Fri, Jan 26, 2018 at 11:16 AM Dave Barach (dbarach) 
wrote:

> Dear David,
>
>
>
> A bit of history. We worked on vpp for a decade before making any serious
> effort to multi-thread it. The first scheme that I tried was to break up
> the graph into reconfigurable pipeline stages. Effective partitioning of
> the graph is highly workload-dependent, and it can change in a heartbeat.
> the resulting system runs at the speed of the slowest pipeline stage.
>
>
>
> In terms of easily measured inter-thread handoff cost, it’s not awful. 2-3
> clocks/pkt. Handing vectors of packets between threads can cause a festival
> of cache coherence traffic, and it can easily undo the positive effects of
> ddio (packet data DMA into the cache hierarchy).
>
>
>
> We actually use the scheme you describe in a very fine-grained way: dual
> and quad loop graph dispatch functions process 2 or 4 packets at the same
> time. Until we run out of registers, a superscalar CPU can “do the same
> thing to 2 or 4 packets at the same time” pretty effectively. Including
> memory hierarchy stalls, vpp averages more than two instructions retired
> per clock cycle.
>
>
>
> At the graph node level, I can’t see how to leverage this technique.
> Presenting [identical] vectors to 2 (or more) nodes running on multiple
> threads would mean (1) the parallelized subgraph would run at the speed of
> the slowest node. (2) you’d pay the handoff costs already discussed above,
> and (3) you’d need an expensive algorithm to make sure that all vector
> replicas were finished before reentering sequential processing. (4) None of
> the graph nodes we’ve ever constructed are free of ordering constraints.
> Every node alters packet state in a meaningful way, or they wouldn’t be
> worth having. ()…
>
>
>
> We’ve had considerable success with flow-hashing across a set of identical
> graph replicas [worker threads], even when available hardware RSS hashing
> is not useful [think about NATted UDP traffic].
>
>
>
> Hope this is of some interest.
>
>
>
> Thanks… Dave
>
>
>
> *From:* vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] *On
> Behalf Of *David Bainbridge
> *Sent:* Friday, January 26, 2018 12:39 PM
> *To:* vpp-dev@lists.fd.io
> *Subject:* [vpp-dev] VPP Graph Optimization
>
>
>
> I have just started to read up on VPP/FD.io, and I have a question about
> graph optimization and was wondering if (as I suspect) this has already
> been thought about and either planned or decided against.
>
>
>
> The documentation I found on VPP essentially says that VPP uses batch
> processing and processes all packets in a vector on one step before
> proceeding to the next step. The claim is this provides overall better
> throughput because of instruction caching.
>
>
>
> I was wondering if optimization of the graph to understand where
> concurrency can be leveraged has been considered, as well as where you
> could process the vector by two steps with an offset. If this is possible,
> then steps could be pinned to cores and perhaps both concurrency and
> instruction caching could be leveraged.
>
>
>
> For example assume the following graph:
>
>
>
> [image: image.png]
>
>
>
> In this graph, steps B,C can be done concurrently as they don't "modify"
> the vector. Steps D, E can't be done concurrently, but as they don't
> require look back/forward they can be done in offset.
>
>
>
> What I am suggesting is, if there are enough cores, then steps could be
> pinned to cores to achieve the benefits of instruction caching, and after
> step A is complete, steps B,C could be done concurrently. After B,C are
> complete then D can be started, and as D completes processing on a packet it
> can then be processed by E (i.e., the entire vector does not need to be
> processed by D before processing by E is started).
>
>
>
> I make no argument that this doesn't increase complexity and also
> introduces coordination costs that don't exist today. To be fair, offset
> processing could be viewed as splitting the original large vector into
> smaller vectors and processing the smaller vectors from start to finish
> (almost dynamic optimization based on dynamic vector resizing).
>
> Just curious to hear others' thoughts and if some of this has been thought
> through or experimented with. As I said, just thinking off the cuff and
> wondering; not fully thought through.
>
>
>
> avèk respè,
>
> /david
>
>
>
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] VPP Graph Optimization

2018-01-26 Thread Dave Barach (dbarach)
Dear David,

 

A bit of history. We worked on vpp for a decade before making any serious 
effort to multi-thread it. The first scheme that I tried was to break up the 
graph into reconfigurable pipeline stages. Effective partitioning of the graph 
is highly workload-dependent, and it can change in a heartbeat. The resulting 
system runs at the speed of the slowest pipeline stage.

 

In terms of easily measured inter-thread handoff cost, it’s not awful. 2-3 
clocks/pkt. Handing vectors of packets between threads can cause a festival of 
cache coherence traffic, and it can easily undo the positive effects of ddio 
(packet data DMA into the cache hierarchy).

 

We actually use the scheme you describe in a very fine-grained way: dual and 
quad loop graph dispatch functions process 2 or 4 packets at the same time. 
Until we run out of registers, a superscalar CPU can “do the same thing to 2 or 
4 packets at the same time” pretty effectively. Including memory hierarchy 
stalls, vpp averages more than two instructions retired per clock cycle.
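
Schematically (plain C, not actual VPP node code -- the real nodes use the vlib
buffer and prefetch macros; pkt_t, prefetch() and do_the_same_thing() below are
placeholder names), the dual-loop idea looks like this:

   /* Process the vector two packets at a time; the independent work on
      p0 and p1 overlaps in the superscalar pipeline. */
   while (n_left >= 2)
     {
       pkt_t *p0 = pkts[0], *p1 = pkts[1];

       if (n_left >= 4)
         {
           prefetch (pkts[2]);      /* hide memory latency for the next pair */
           prefetch (pkts[3]);
         }

       do_the_same_thing (p0);      /* identical per-packet work */
       do_the_same_thing (p1);

       pkts += 2;
       n_left -= 2;
     }
   while (n_left > 0)               /* finish any odd packet */
     {
       do_the_same_thing (pkts[0]);
       pkts += 1;
       n_left -= 1;
     }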

 

At the graph node level, I can’t see how to leverage this technique. Presenting 
[identical] vectors to 2 (or more) nodes running on multiple threads would mean 
(1) the parallelized subgraph would run at the speed of the slowest node. (2) 
you’d pay the handoff costs already discussed above, and (3) you’d need an 
expensive algorithm to make sure that all vector replicas were finished before 
reentering sequential processing. (4) None of the graph nodes we’ve ever 
constructed are free of ordering constraints. Every node alters packet state in 
a meaningful way, or they wouldn’t be worth having. ()… 

 

We’ve had considerable success with flow-hashing across a set of identical 
graph replicas [worker threads], even when available hardware RSS hashing is 
not useful [think about NATted UDP traffic]. 

 

Hope this is of some interest.

 

Thanks… Dave

 

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of David Bainbridge
Sent: Friday, January 26, 2018 12:39 PM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] VPP Graph Optimization

 

I have just started to read up on VPP/FD.io, and I have a question about graph 
optimization and was wondering if (as I suspect) this has already been thought 
about and either planned or decided against.

 

The documentation I found on VPP essentially says that VPP uses batch 
processing and processes all packets in a vector on one step before proceeding 
to the next step. The claim is this provides overall better throughput because 
of instruction caching.

 

I was wondering if optimization of the graph to understand where concurrency 
can be leveraged has been considered, as well as where you could process the 
vector by two steps with an offset. If this is possible, then steps could be 
pinned to cores and perhaps both concurrency and instruction caching could be 
leveraged.

 

For example assume the following graph:

 



 

In this graph, steps B,C can be done concurrently as they don't "modify" the 
vector. Steps D, E can't be done concurrently, but as they don't require look 
back/forward they can be done in offset.

 

What I am suggesting is, if there are enough cores, then steps could be pinned 
to cores to achieve the benefits of instruction caching, and after step A is 
complete, steps B,C could be done concurrently. After B,C are complete then D 
can be started, and as D completes processing on a packet it can then be 
processed by E (i.e., the entire vector does not need to be processed by D 
before processing by E is started).

 

I make no argument that this doesn't increase complexity and also introduces 
coordination costs that don't exist today. To be fair, offset processing could 
be viewed as splitting the original large vector into smaller vectors and 
processing the smaller vectors from start to finish (almost dynamic 
optimization based on dynamic vector resizing).

Just curious to hear others' thoughts and if some of this has been thought 
through or experimented with. As I said, just thinking off the cuff and 
wondering; not fully thought through.

 

avèk respè,

/david

 



___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] VPP/AArch64 - fd.io

2018-01-26 Thread Maciek Konstantynowicz (mkonstan)
Thanks Tina!

Gabriel, Hongjun,

Tina reported on the last VPP call that the kubeproxy tests are failing in vpp 
make test on AArch64.
I just checked on x86, and kubeproxy NAT44 and NAT66 tests pass, but NAT46 and 
NAT64 fail.
Is this the issue that you’re hitting on Aarch64, or something else?
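
(For anyone reproducing, the failing subset can be run in isolation with the make
test filter; the exact test-module name below is an assumption:)

   make test TEST=kubeproxy
   make test-debug TEST=kubeproxy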

-Maciek

On 23 Jan 2018, at 16:23, Tina Tsou 
> wrote:

Dear Maciek,

As discussed at today’s VPP call, you may want to look into here.

https://wiki.fd.io/view/VPP/AArch64

Build, unit test, packaging

The following is tracked manually until hardware is integrated into upstream 
FD.io CI


Cmd                  Status          Timing
make bootstrap       OK              0m45
make build           OK              11m45
make build-release   OK              14m56
make test            OK              26m7
make test-all        KO (kubeproxy)  36m15
make test-debug      OK              22m32
make test-all-debug  KO (kubeproxy)  33m29
Status on commit: 9cfb11787f24e90ad14697afefbb2dd5969b2951 (Mon Jan 8 01:29:34 
2018)
kubeproxy tests are broken on purpose: corresponding features are not fully 
implemented
Timing consideration on platform: Hierofalcon with Cortex-A57 & Fedora 26

  *   Might want to have a look at this patch which adds make config: 
https://gerrit.fd.io/r/#/c/9200/

Distro                      Cmd           Status
Fedora 26 (Server Edition)  make pkg-rpm  OK
Ubuntu 17.10                make pkg-deb  OK
Ubuntu 16.04.3 LTS          make pkg-deb  OK




Thank you,
Tina

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] VPP Graph Optimization

2018-01-26 Thread David Bainbridge
I have just started to read up on VPP/FD.io, and I have a question about
graph optimization and was wondering if (as I suspect) this has already
been thought about and either planned or decided against.


The documentation I found on VPP essentially says that VPP uses batch
processing and processes all packets in a vector on one step before
proceeding to the next step. The claim is this provides overall better
throughput because of instruction caching.


I was wondering if optimization of the graph to understand where
concurrency can be leveraged has been considered, as well as where you
could process the vector by two steps with an offset. If this is possible,
then steps could be pinned to cores and perhaps both concurrency and
instruction caching could be leveraged.


For example assume the following graph:


[image: image.png]


In this graph, steps B,C can be done concurrently as they don't "modify"
the vector. Steps D, E can't be done concurrently, but as they don't
require look back/forward they can be done in offset.


What I am suggesting is, if there are enough cores, then steps could be
pinned to cores to achieve the benefits of instruction caching, and after
step A is complete, steps B,C could be done concurrently. After B,C are
complete then D can be started, and as D completes processing on a packet it
can then be processed by E (i.e., the entire vector does not need to be
processed by D before processing by E is started).


I make no argument that this doesn't increase complexity and also
introduces coordination costs that don't exist today. To be fair, offset
processing could be viewed as splitting the original large vector into
smaller vectors and processing the smaller vectors from start to finish
(almost dynamic optimization based on dynamic vector resizing).

Just curious to hear others' thoughts and if some of this has been thought
through or experimented with. As I said, just thinking off the cuff and
wondering; not fully thought through.


avèk respè,

/david
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] VCL-LDPRELOAD with C++ gRPC

2018-01-26 Thread Dave Wallace

I apologize, this was meant to be a private communication.

Sorry for spamming the list.
-daw-

On 01/26/2018 11:01 AM, Dave Wallace wrote:

Jan,

It appears that we may have the opportunity for some cross-project 
collaboration here (or collateral damage). Peter has been kicking the tires 
on VCL/LD_PRELOAD + host stack for a while now, but has yet to 
identify his use case.


Do you know what Peter is working on and where it fits into the 
overall priority list?


Thanks,
-daw-

On 01/26/2018 04:54 AM, Peter Palmár wrote:

Hi,

we have recently tested the VPP TCP stack/VCL-LDPRELOAD library with 
C++ gRPC (https://grpc.io/) and reported two bugs:

https://jira.fd.io/browse/VPP-1089
https://jira.fd.io/browse/VPP-1101

I would like to ask whether the VCL-LDPRELOAD library is still being 
developed the same way,
i.e., use-case driven, with a full POSIX replacement via LD_PRELOAD 
remaining out of scope.


If so, does that mean that in order for the VPP TCP stack to become 
functional with gRPC, we can report bugs only, or, if appropriate, 
help to fix them?


Are you not going to focus on C++ gRPC yourselves in the 
near future?


Regards,
Peter



___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev




___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] builtin UDP server broken in 18.01 ?

2018-01-26 Thread Florin Coras (fcoras)
Also, it should be noted that the patch changes the CLI used to run any of the 
builtin servers/clients. To run a server or client one should now do:

test echo server|client uri transport_proto://ip/port 

We now have support for tcp, udp and, thanks to Marco, sctp.
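
For instance, the udp case from this thread and a tcp client would look like
(addresses illustrative):

   test echo server uri udp://0.0.0.0/1234
   test echo client uri tcp://10.10.1.1/1234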

Florin

From: Andreas Schultz 
Date: Friday, January 26, 2018 at 8:09 AM
To: "vpp-dev@lists.fd.io" 
Cc: "Florin Coras (fcoras)" 
Subject: Re: builtin UDP server broken in 18.01 ?

Andreas Schultz > wrote on Fri, 26 Jan 2018 at 16:10:
Hi,

I used to be able to do a

   builtin uri bind uri udp://0.0.0.0/1234

After upgrading to 18.01 this now fails with:

Correction: it still works in 18.01; commit 
"b384b543313b6b47a277c903e9d4fcd4343054fa: session: add support for memfd 
segments" seems to be what broke it.

Andreas


  builtin uri bind: bind_uri_server returned -1

Any hints on how to fix that?

Regards
Andreas
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] builtin UDP server broken in 18.01 ?

2018-01-26 Thread Florin Coras (fcoras)
Sorry about that, it is fixed in this patch: https://gerrit.fd.io/r/#/c/10253/

Florin

From: Andreas Schultz 
Date: Friday, January 26, 2018 at 8:09 AM
To: "vpp-dev@lists.fd.io" 
Cc: "Florin Coras (fcoras)" 
Subject: Re: builtin UDP server broken in 18.01 ?

Andreas Schultz > wrote on Fri, 26 Jan 2018 at 16:10:
Hi,

I used to be able to do a

   builtin uri bind uri udp://0.0.0.0/1234

After upgrading to 18.01 this now fails with:

Correction: it still works in 18.01; commit 
"b384b543313b6b47a277c903e9d4fcd4343054fa: session: add support for memfd 
segments" seems to be what broke it.

Andreas


  builtin uri bind: bind_uri_server returned -1

Any hints on how to fix that?

Regards
Andreas
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] builtin UDP server broken in 18.01 ?

2018-01-26 Thread Andreas Schultz
Andreas Schultz  wrote on Fri, 26 Jan
2018 at 16:10:

> Hi,
>
> I used to be able to do a
>
>builtin uri bind uri udp://0.0.0.0/1234
>
> After upgrading to 18.01 this now fails with:
>

Correction: it still works in 18.01; commit
"b384b543313b6b47a277c903e9d4fcd4343054fa: session: add support for memfd
segments" seems to be what broke it.

Andreas


>   builtin uri bind: bind_uri_server returned -1
>
> Any hints on how to fix that?
>
> Regards
> Andreas
>
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] VCL-LDPRELOAD with C++ gRPC

2018-01-26 Thread Dave Wallace

Jan,

It appears that we may have the opportunity for some cross-project 
collaboration here (or collateral damage). Peter has been kicking the tires 
on VCL/LD_PRELOAD + host stack for a while now, but has yet to identify 
his use case.


Do you know what Peter is working on and where it fits into the overall 
priority list?


Thanks,
-daw-

On 01/26/2018 04:54 AM, Peter Palmár wrote:

Hi,

we have recently tested the VPP TCP stack/VCL-LDPRELOAD library with 
C++ gRPC (https://grpc.io/) and reported two bugs:

https://jira.fd.io/browse/VPP-1089
https://jira.fd.io/browse/VPP-1101

I would like to ask whether the VCL-LDPRELOAD library is still being 
developed the same way,
i.e., use-case driven, with a full POSIX replacement via LD_PRELOAD 
remaining out of scope.


If so, does that mean that in order for the VPP TCP stack to become 
functional with gRPC, we can report bugs only, or, if appropriate, 
help to fix them?


Are you not going to focus on C++ gRPC yourselves in the 
near future?


Regards,
Peter



___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] Supporting vpp releases besides N and N-1

2018-01-26 Thread Ed Kern (ejk)

I'd like to clean up the Jenkins ‘stream’ list in the vpp section by removing 
unsupported releases.
Note: it's more than just a cosmetic problem.

I could go into more detail but it comes down to “If they are not supported, 
why have them included in the build infra?”

This would mean the removal from the stream of:
1707
1704
1701
1609
1606

Anyone want to stand up and represent why these should remain?

Ed
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] openSUSE build fails

2018-01-26 Thread Dave Wallace

Actually you do ;)

The VPP vagrant environment supports OpenSUSE in addition to ubuntu16.04 
and centos7:


cd .../vpp/extras/vagrant
export VPP_VAGRANT_DISTRO=opensuse
vagrant up

Thanks,
-daw-

On 1/26/18 1:52 AM, Ole Troan wrote:

Hi Hongjun,


I have no OpenSUSE at hand, and could not give it a try.



Neither do I.

Ole



*From:* Ole Troan [mailto:otr...@employees.org]
*Sent:* Friday, January 26, 2018 2:08 PM
*To:* Ni, Hongjun >
*Cc:* Dave Barach (dbarach) >; Marco Varlese >; Gabriel Ganne >; Billy McFall >; Damjan Marion (damarion) 
>; vpp-dev 
>

*Subject:* Re: [vpp-dev] openSUSE build fails

Hongjun,

This looks suspect:

03:32:31 APIGEN vlibmemory/memclnt.api.h
03:32:31 JSON API vlibmemory/memclnt.api.json
03:32:31 SyntaxError: invalid syntax (vppapigentab.py, line 11)
03:32:31 WARNING:vppapigen:/w/workspace/vpp-verify-master-opensuse/build-root/rpmbuild/BUILD/vpp-18.04/build-data/../src/vlibmemory/memclnt.api:0:1: Old Style VLA: u8 data[0];
03:32:31 Makefile:8794: recipe for target 'vlibmemory/memclnt.api.h' failed
03:32:31 make[5]: *** [vlibmemory/memclnt.api.h] Error 1
03:32:31 make[5]: *** Waiting for unfinished jobs

Can you try running vppapigen manually on that platform?

vppapigen --debug --input memclnt.api ...

Cheers

Ole


On 26 Jan 2018, at 06:38, Ni, Hongjun > wrote:


Hi all,

It seems that OpenSUSE build failed for this patch:

https://jenkins.fd.io/job/vpp-verify-master-opensuse/1285/console

Please help to take a look.

*From:* vpp-dev-boun...@lists.fd.io

[mailto:vpp-dev-boun...@lists.fd.io] *On Behalf Of *Dave Barach
(dbarach)
*Sent:* Friday, December 15, 2017 11:19 PM
*To:* Marco Varlese >;
Gabriel Ganne >; Billy McFall
>
*Cc:* Damjan Marion (damarion) >; vpp-dev >
*Subject:* Re: [vpp-dev] openSUSE build fails

Dear Marco,

Thanks very much...

Dave

*From:* Marco Varlese [mailto:mvarl...@suse.de]
*Sent:* Friday, December 15, 2017 9:06 AM
*To:* Dave Barach (dbarach) >; Gabriel Ganne
>; Billy
McFall >
*Cc:* Damjan Marion (damarion) >; vpp-dev >
*Subject:* Re: [vpp-dev] openSUSE build fails

We (at SUSE) are currently pushing an update to 2.2.11 for
openSUSE in our repositories.

Once that's confirmed to be upstream, I will push a new patch to
the ci-management repo to have the indent package upgraded to the
latest version and re-enable the "checkstyle".

Cheers,

Marco

On Fri, 2017-12-15 at 13:51 +, Dave Barach (dbarach) wrote:

With a bit of fiddling, I was able to fix gerrit 9440 so that
indent 2.2.10 and 2.2.11 appear to produce identical results...

HTH... Dave

*From:* vpp-dev-boun...@lists.fd.io

[mailto:vpp-dev-boun...@lists.fd.io] *On Behalf Of *Gabriel Ganne
*Sent:* Friday, December 15, 2017 8:42 AM
*To:* Billy McFall >; Marco Varlese >
*Cc:* Damjan Marion (damarion) >; vpp-dev >
*Subject:* Re: [vpp-dev] openSUSE build fails

Hi,

If you browse the source
http://hg.savannah.gnu.org/hgweb/indent/


The tag 2.2.11 is there, and the source seems to be updated regularly.

Best regards,

--

Gabriel Ganne



*From:*vpp-dev-boun...@lists.fd.io

> on behalf of Billy
McFall >
*Sent:* Friday, December 15, 2017 2:26:42 PM
*To:* Marco Varlese
*Cc:* Damjan Marion (damarion); vpp-dev
  

[vpp-dev] builtin UDP server broken in 18.01 ?

2018-01-26 Thread Andreas Schultz
Hi,

I used to be able to do a

   builtin uri bind uri udp://0.0.0.0/1234

After upgrading to 18.01 this now failes with:

  builtin uri bind: bind_uri_server returned -1

Any hints on how to fix that?

Regards
Andreas
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] openSUSE build fails

2018-01-26 Thread Dave Barach (dbarach)
As Marco wrote: we’ve experienced sporadic, inexplicable LF infra-related build 
failures since the project started more than two years ago. It’s unusual for an 
otherwise correct patch to require more than one “recheck” for validation, but 
it’s absolutely not unknown.

To mitigate these problems, Ed Kern has built a containerized Jenkins minion 
system which runs on physical hardware, instead of the current setup which 
relies on cloud-hosted Openstack VMs. As soon as practicable – post 18.01 CSIT 
report – we’ll switch to it.

Given a failure which isn’t obviously related to a specific patch, please press 
the “recheck” button. No need to ask, just do it. In case of persistent 
failure, please email vpp-dev.

Thanks… Dave

From: Ni, Hongjun [mailto:hongjun...@intel.com]
Sent: Friday, January 26, 2018 3:25 AM
To: Marco Varlese ; Ole Troan 
Cc: Dave Barach (dbarach) ; Gabriel Ganne 
; Billy McFall ; Damjan Marion 
(damarion) ; vpp-dev 
Subject: RE: [vpp-dev] openSUSE build fails

Hi Marco,

Thank you for your explanation. I will contact you if I run into a similar issue 
again.

Thanks,
Hongjun

From: Marco Varlese [mailto:mvarl...@suse.de]
Sent: Friday, January 26, 2018 4:21 PM
To: Ni, Hongjun >; Ole Troan 
>
Cc: Dave Barach (dbarach) >; 
Gabriel Ganne >; Billy 
McFall >; Damjan Marion 
(damarion) >; vpp-dev 
>
Subject: Re: [vpp-dev] openSUSE build fails

On Fri, 2018-01-26 at 06:58 +, Ni, Hongjun wrote:
I rechecked this patch twice, and it built successfully now.

But why need to recheck twice?
If a "recheck" fixed that then it must be an infrastructure glitch; that's the 
only thing I can think of...

That would not be a surprise either, since we do see random build failures from 
time to time which get fixed by a "recheck".

Having said that, if you run into this sort of problem again (and it does not go 
away with a recheck), feel free to drop me an email and I will look into it. Just 
take into account that I'm based at UTC+1.



-Hongjun
- Marco


From: Ole Troan [mailto:otr...@employees.org]
Sent: Friday, January 26, 2018 2:53 PM
To: Ni, Hongjun >
Cc: Dave Barach (dbarach) >; Marco 
Varlese >; Gabriel Ganne 
>; Billy McFall 
>; Damjan Marion (damarion) 
>; vpp-dev 
>
Subject: Re: [vpp-dev] openSUSE build fails

Hi Hongjun,

I have no OpenSUSE at hand, and could not give it a try.

Neither do I.

Ole



From: Ole Troan [mailto:otr...@employees.org]
Sent: Friday, January 26, 2018 2:08 PM
To: Ni, Hongjun >
Cc: Dave Barach (dbarach) >; Marco 
Varlese >; Gabriel Ganne 
>; Billy McFall 
>; Damjan Marion (damarion) 
>; vpp-dev 
>
Subject: Re: [vpp-dev] openSUSE build fails

Hongjun,

This looks suspect:

03:32:31 APIGEN vlibmemory/memclnt.api.h
03:32:31 JSON API vlibmemory/memclnt.api.json
03:32:31 SyntaxError: invalid syntax (vppapigentab.py, line 11)
03:32:31 WARNING:vppapigen:/w/workspace/vpp-verify-master-opensuse/build-root/rpmbuild/BUILD/vpp-18.04/build-data/../src/vlibmemory/memclnt.api:0:1: Old Style VLA: u8 data[0];
03:32:31 Makefile:8794: recipe for target 'vlibmemory/memclnt.api.h' failed
03:32:31 make[5]: *** [vlibmemory/memclnt.api.h] Error 1
03:32:31 make[5]: *** Waiting for unfinished jobs




Can you try running vppapigen manually on that platform?
vppapigen --debug --input memclnt.api ...

Cheers
Ole


On 26 Jan 2018, at 06:38, Ni, Hongjun 
> wrote:
Hi all,

It seems that OpenSUSE build failed for this patch:
https://jenkins.fd.io/job/vpp-verify-master-opensuse/1285/console

Please help to take a look.

From: vpp-dev-boun...@lists.fd.io 
[mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of Dave Barach (dbarach)
Sent: Friday, December 15, 2017 11:19 PM
To: Marco Varlese >; Gabriel Ganne 

Re: [vpp-dev] Howto implement L3 p2p tunnel interface without assigning IP to the interface?

2018-01-26 Thread Andreas Schultz
Hi Neale,

Neale Ranns (nranns)  wrote on Fri, 26 Jan 2018 at
11:27:

> Hi Andreas,
>
>
>
> Ip[46]_sw_interface_enable_disable() are the internal APIs that
> enable/disable IP forwarding on an interface. There is an equivalent MPLS
> one too. The commands I listed previously are external means by which these
> internal APIs are invoked. It would not be acceptable to use these APIs to
> automatically IP enable GTP interfaces on interface creation.
>

The tunnels are not static; they are created through a management protocol
over the 3GPP Sx reference point. Having to add manual configuration steps
to make the tunnels work is not acceptable, so I have to use APIs to set
things up the way I need them.
I'm not really sure the interface model is even correct for my use case. I
don't need to support L2 forwarding, so the L2 bridging argument from the Wiki
article does not apply.

Regards,
Andreas


>
> Regards,
>
> neale
>
>
>
>
>
> *From: *Andreas Schultz 
> *Date: *Thursday, 25 January 2018 at 23:47
> *To: *"vpp-dev@lists.fd.io" 
> *Cc: *"Neale Ranns (nranns)" , Ole Troan <
> otr...@employees.org>
> *Subject: *Re: [vpp-dev] Howto implement L3 p2p tunnel interface without
> assigning IP to the interface?
>
>
>
> Ole Troan  wrote on Thu, 25 Jan 2018 at
> 22:07:
>
> > Not accepting IP[46] packets on any interface type that is not IP[46]
> enabled is a basic security feature. To IP4 enable an interface you have
> two option;
> > 1)   Assign it an IP address
> > 2)   Make it IP unnumbered to another interface that does have an
> address, e.g.
> > set int ip addr loop0 some-private-addr/32
> > set int unnumbered gtpu-tunnel-0 use loop0
> > set int unnumbered gtpu-tunnel-1 use loop0
> > set int unnumbered gtpu-tunnel-2 use loop0
> > etc…
> > It doesn’t have to be a loopback, I use that only as an example.
> >
> > To IP6 enable an interface instead of the unnumbered trick one can just
> do;
> > 1)   enable ip6 interface gtpu-tunnel0
>
> Although all IPv6 interfaces by definition have an IPv6 address (the IPv6
> link-local) I do wonder if we shouldn't allow for IP processing to be
> enabled for both IP4 and IP6 independently of having an address configured.
> (Of course that would imply that some protocols wouldn't work.)
>
>
>
> ip4_sw_interface_enable_disable() and/or ip6_sw_interface_enable_disable()
> did the trick. It works now without having to use the unnumbered option or
> having to assign an IPv4 address. I didn't check IPv6, though.
>
>
>
> Thanks for the help,
>
> Andreas
>
>
>
> Cheers,
> Ole
>
>
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Howto implement L3 p2p tunnel interface without assigning IP to the interface?

2018-01-26 Thread Neale Ranns (nranns)
Hi Andreas,

Ip[46]_sw_interface_enable_disable() are the internal APIs that enable/disable 
IP forwarding on an interface. There is an equivalent MPLS one too. The 
commands I listed previously are external means by which these internal APIs 
are invoked. It would not be acceptable to use these APIs to automatically IP 
enable GTP interfaces on interface creation.

Regards,
neale


From: Andreas Schultz 
Date: Thursday, 25 January 2018 at 23:47
To: "vpp-dev@lists.fd.io" 
Cc: "Neale Ranns (nranns)" , Ole Troan 
Subject: Re: [vpp-dev] Howto implement L3 p2p tunnel interface without 
assigning IP to the interface?

Ole Troan > wrote on Thu, 
25 Jan 2018 at 22:07:
> Not accepting IP[46] packets on any interface type that is not IP[46] enabled 
> is a basic security feature. To IP4 enable an interface you have two option;
> 1)   Assign it an IP address
> 2)   Make it IP unnumbered to another interface that does have an 
> address, e.g.
> set int ip addr loop0 some-private-addr/32
> set int unnumbered gtpu-tunnel-0 use loop0
> set int unnumbered gtpu-tunnel-1 use loop0
> set int unnumbered gtpu-tunnel-2 use loop0
> etc…
> It doesn’t have to be a loopback, I use that only as an example.
>
> To IP6 enable an interface instead of the unnumbered trick one can just do;
> 1)   enable ip6 interface gtpu-tunnel0

Although all IPv6 interfaces by definition have an IPv6 address (the IPv6 
link-local) I do wonder if we shouldn't allow for IP processing to be enabled 
for both IP4 and IP6 independently of having an address configured. (Of course 
that would imply that some protocols wouldn't work.)

ip4_sw_interface_enable_disable() and/or ip6_sw_interface_enable_disable() did 
the trick. It works now without having to use the unnumbered option or having 
to assign an IPv4 address. I didn't check IPv6, though.
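
For the archives, a minimal sketch of what such a call site might look like in a
tunnel-creation path (the (sw_if_index, is_enable) signatures and the headers are
assumptions about the current internal API and may change):

   #include <vnet/ip/ip4.h>   /* headers assumed */
   #include <vnet/ip/ip6.h>

   static void
   tunnel_enable_ip (u32 sw_if_index)
   {
     /* enable IP4/IP6 input on the new p2p tunnel interface without
        assigning an address */
     ip4_sw_interface_enable_disable (sw_if_index, 1 /* enable */);
     ip6_sw_interface_enable_disable (sw_if_index, 1 /* enable */);
   }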

Thanks for the help,
Andreas

Cheers,
Ole
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] VCL-LDPRELOAD with C++ gRPC

2018-01-26 Thread Peter Palmár
Hi,

we have recently tested the VPP TCP stack/VCL-LDPRELOAD library with C++ gRPC 
(https://grpc.io/) and reported two bugs:
https://jira.fd.io/browse/VPP-1089
https://jira.fd.io/browse/VPP-1101

I would like to ask whether the VCL-LDPRELOAD library is still being 
developed the same way,
i.e., use-case driven, with a full POSIX replacement via LD_PRELOAD remaining 
out of scope.

If so, does that mean that in order for the VPP TCP stack to become functional 
with gRPC, we can report bugs only, or, if appropriate, help to fix them?

Are you not going to focus on C++ gRPC yourselves in the near 
future?

Regards,
Peter

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] openSUSE build fails

2018-01-26 Thread Ni, Hongjun
Hi Marco,

Thank you for your explanation. I will contact you if I run into a similar issue 
again.

Thanks,
Hongjun

From: Marco Varlese [mailto:mvarl...@suse.de]
Sent: Friday, January 26, 2018 4:21 PM
To: Ni, Hongjun ; Ole Troan 
Cc: Dave Barach (dbarach) ; Gabriel Ganne 
; Billy McFall ; Damjan Marion 
(damarion) ; vpp-dev 
Subject: Re: [vpp-dev] openSUSE build fails

On Fri, 2018-01-26 at 06:58 +, Ni, Hongjun wrote:
I rechecked this patch twice, and it built successfully now.

But why need to recheck twice?
If a "recheck" fixed that then it must be an infrastructure glitch; that's the 
only thing I can think of...

That would not be a surprise either, since we do see random build failures from 
time to time which get fixed by a "recheck".

Having said that, if you run into this sort of problem again (and it does not go 
away with a recheck), feel free to drop me an email and I will look into it. Just 
take into account that I'm based at UTC+1.



-Hongjun
- Marco


From: Ole Troan [mailto:otr...@employees.org]
Sent: Friday, January 26, 2018 2:53 PM
To: Ni, Hongjun >
Cc: Dave Barach (dbarach) >; Marco 
Varlese >; Gabriel Ganne 
>; Billy McFall 
>; Damjan Marion (damarion) 
>; vpp-dev 
>
Subject: Re: [vpp-dev] openSUSE build fails

Hi Hongjun,

I have no OpenSUSE at hand, and could not give it a try.

Neither do I.

Ole



From: Ole Troan [mailto:otr...@employees.org]
Sent: Friday, January 26, 2018 2:08 PM
To: Ni, Hongjun >
Cc: Dave Barach (dbarach) >; Marco 
Varlese >; Gabriel Ganne 
>; Billy McFall 
>; Damjan Marion (damarion) 
>; vpp-dev 
>
Subject: Re: [vpp-dev] openSUSE build fails

Hongjun,

This looks suspect:

03:32:31 APIGEN vlibmemory/memclnt.api.h
03:32:31 JSON API vlibmemory/memclnt.api.json
03:32:31 SyntaxError: invalid syntax (vppapigentab.py, line 11)
03:32:31 WARNING:vppapigen:/w/workspace/vpp-verify-master-opensuse/build-root/rpmbuild/BUILD/vpp-18.04/build-data/../src/vlibmemory/memclnt.api:0:1: Old Style VLA: u8 data[0];
03:32:31 Makefile:8794: recipe for target 'vlibmemory/memclnt.api.h' failed
03:32:31 make[5]: *** [vlibmemory/memclnt.api.h] Error 1
03:32:31 make[5]: *** Waiting for unfinished jobs




Can you try running vppapigen manually on that platform?
vppapigen --debug --input memclnt.api ...

Cheers
Ole


On 26 Jan 2018, at 06:38, Ni, Hongjun 
> wrote:
Hi all,

It seems that OpenSUSE build failed for this patch:
https://jenkins.fd.io/job/vpp-verify-master-opensuse/1285/console

Please help to take a look.

From: vpp-dev-boun...@lists.fd.io 
[mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of Dave Barach (dbarach)
Sent: Friday, December 15, 2017 11:19 PM
To: Marco Varlese >; Gabriel Ganne 
>; Billy McFall 
>
Cc: Damjan Marion (damarion) >; 
vpp-dev >
Subject: Re: [vpp-dev] openSUSE build fails

Dear Marco,

Thanks very much...

Dave

From: Marco Varlese [mailto:mvarl...@suse.de]
Sent: Friday, December 15, 2017 9:06 AM
To: Dave Barach (dbarach) >; 
Gabriel Ganne >; Billy 
McFall >
Cc: Damjan Marion (damarion) >; 
vpp-dev >
Subject: Re: [vpp-dev] openSUSE build fails

We (at SUSE) are currently pushing an update to 2.2.11 for openSUSE in our 
repositories.
Once that's confirmed to be upstream, I will push a new patch to the 
ci-management repo to have the indent package upgraded to the latest version 
and re-enable the "checkstyle".


Cheers,
Marco

On Fri, 2017-12-15 at 13:51 +, Dave Barach (dbarach) wrote:
With a bit of fiddling, I was able to fix gerrit 9440 so that indent 2.2.10 and 
2.2.11 appear to produce identical results...

HTH... Dave

From: