Re: [opnfv-tech-discuss] [VSPERF][NFVBENCH] streamlining the jump host to data plane wiring

2017-09-29 Thread Fatih Degirmenci
Hi,

I think there is a misunderstanding, especially with this: "The first hurdle is 
that all test containers run inside a VM on the jump host today."

I would like to clarify this and add some more details.

Test containers do not run inside a VM on the jump host today; they run directly on the jumphost.

What I was referring to when we chatted is the initiative the Infra WG has been driving to move installers into VMs on jumphosts, so the installation process can be isolated from the jumphost and driven from a VM, ensuring installers can use any POD we have.

The reason for this is that we currently use dedicated PODs per installer, and these installers run directly on the jumphosts, depending on the OS and other things present there (I suspect they even expect certain usernames for the login user, which is really strange).
This makes things difficult for us in Infra, for developers, and for end users, since no one knows what is on those jumphosts, and the state of the CI POD jumphosts is a big question. (Some jumphosts haven't been cleaned/reinstalled for about 2 years.)
This is not good practice since we basically have no change control on these machines, and whatever is done there can cause headaches (for example, strange failures you can't reproduce elsewhere).
And finally, our releases come out of these PODs, so reproducibility of the entire release is another big question...

We haven't been able to get the above idea implemented by the installers, and this results in other issues such as resource shortages and so on, which are nothing compared to the issues listed above.

As part of the XCI initiative, we aim to apply all the ideas brought up by the Infra WG over time and demonstrate the benefits of what we have been proposing (dogfooding).
This is not limited to having a VM on the jumphost from which the installation process is driven.
All the PDF, SDF, VM, dynamic CI, CI evolution etc. activities aim to ensure full traceability and reproducibility (CM101), and that we use our resources wisely, shortening the time it takes for us to do things rather than having things wait in the queue for days. (The long list shows that none of the ideas we brought up was actually implemented, which should be another concern for the community and a topic to bring up to the TSC.)

One of the first things the XCI team started working on is putting the components used by XCI into a VM on the jumphost, so we always get the same behavior through the use of a clean machine and can use any and every POD OPNFV has.
We also support 3 distros: Ubuntu, CentOS, and openSUSE. So we are basically independent from whatever POD we might be run on. PDF is the other important part of what we are doing, so we can have total independence from PODs.
SDF will provide the missing piece: the BOM for any and every deployment and test run done in XCI.
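
(As a rough illustration of that independence, a test tool could read its data plane NIC list from a PDF-style YAML file instead of hard-coding it per pod. The Python sketch below is illustrative only; the file name and field names are hypothetical, not the actual PDF schema.)

# Illustration only: field names are hypothetical, not the real Pharos PDF schema.
import yaml  # PyYAML

def dataplane_nics(pdf_path):
    """Return the jump host data plane NICs declared in a PDF-style YAML file."""
    with open(pdf_path) as f:
        pdf = yaml.safe_load(f) or {}
    jumphost = pdf.get("jumphost", {})
    # e.g. [{"name": "ens1f0", "pci": "0000:5e:00.0", "speed": "10G"}, ...]
    return jumphost.get("dataplane_interfaces", [])

if __name__ == "__main__":
    for nic in dataplane_nics("pod1-pdf.yaml"):
        print(nic.get("pci"), nic.get("speed", "?"))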

Putting test containers into VMs and running the testing from there has been part of our overall strategy.

However, after talking with Alec on IRC, it was clear that putting test containers into VMs is not applicable to all the testing projects.
As summarized above, the reason for us to use VMs is to isolate things from the physical jumphost and ensure everything starts clean, for reproducibility and so on.
Using containers for test projects serves the same purpose (isolation and reproducibility), so we do not see any issue here and will do our best to support you, ensuring you get what you need as long as it fits into the overall strategy set by the Infra WG.
If we find something that doesn't fit, we can bring that topic to the Infra WG and make the necessary adjustments if possible.

/Fatih

From: "Alec Hothan (ahothan)" <ahot...@cisco.com>
Date: Friday, 29 September 2017 at 18:22
To: Wenjing Chu <wenjing@huawei.com>, Trevor Cooper 
<trevor.coo...@intel.com>, "opnfv-tech-discuss@lists.opnfv.org" 
<opnfv-tech-discuss@lists.opnfv.org>
Cc: Fatih Degirmenci <fatih.degirme...@ericsson.com>, Jack Morgan 
<jack.mor...@intel.com>
Subject: Re: [opnfv-tech-discuss] [VSPERF][NFVBENCH] streamlining the jump host 
to data plane wiring


I had an IRC chat with Fatih and went through the various hurdles. The first hurdle is that all test containers run inside a VM on the jump host today. That is not going to work well with software traffic generators, as they require direct access to the NIC using DPDK (technically you can run a traffic generator in a VM, but I think we need to run them native, or container native, to get the best performance and clock accuracy).

So we will need to allow some test containers to run native on the jump host, which is not a problem per se; we just need to coordinate with the XCI team to support that mode.
The second hurdle is to standardize how every jump host should be tied to the pod data plane: specify the NIC card, wiring requirements, and data plane encaps.

Re: [opnfv-tech-discuss] [VSPERF][NFVBENCH] streamlining the jump host to data plane wiring

2017-09-29 Thread Alec Hothan (ahothan)

I had an IRC chat with Fatih and went through the various hurdles. The first hurdle is that all test containers run inside a VM on the jump host today. That is not going to work well with software traffic generators, as they require direct access to the NIC using DPDK (technically you can run a traffic generator in a VM, but I think we need to run them native, or container native, to get the best performance and clock accuracy).
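
(As a concrete sketch of the host-level access involved, the Python snippet below lists the PCI devices currently bound to the vfio-pci driver, which is what a DPDK-based generator such as TRex would typically use; it assumes a Linux jump host and is illustrative only.)

# List PCI devices currently bound to the vfio-pci driver on the jump host.
# A DPDK-based traffic generator needs this kind of direct NIC access, which
# a generic VM or container does not get without device passthrough.
import os

VFIO_DRIVER_DIR = "/sys/bus/pci/drivers/vfio-pci"

def dpdk_bound_devices():
    if not os.path.isdir(VFIO_DRIVER_DIR):
        return []
    # Bound devices appear as symlinks named by PCI address, e.g. 0000:5e:00.0
    return sorted(d for d in os.listdir(VFIO_DRIVER_DIR) if ":" in d)

if __name__ == "__main__":
    devices = dpdk_bound_devices()
    print("vfio-pci bound devices:", devices if devices else "none")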

So we will need to allow some test containers to run native on the jump host, which is not a problem per se; we just need to coordinate with the XCI team to support that mode.
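
(A hedged sketch of what that native mode could look like in practice, run from the jump host itself; the container image name is a placeholder and the exact mounts and privileges would need to be agreed with the XCI team.)

# Sketch: run a test container directly on the jump host (no VM in between),
# with host networking and the resources a DPDK-based tool typically needs.
# The image name is a placeholder; mounts and privileges are illustrative.
import subprocess

cmd = [
    "docker", "run", "--rm",
    "--privileged",                      # hugepages / VFIO device access
    "--network", "host",
    "-v", "/dev/hugepages:/dev/hugepages",
    "-v", "/lib/modules:/lib/modules:ro",
    "example/trafficgen:latest",         # placeholder image
    "--help",
]
subprocess.run(cmd, check=True)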
The second hurdle is to standardize how every jump host should be tied to the pod data plane: specify the NIC card, wiring requirements, and data plane encaps.
Once these 2 hurdles are solved, we should be able to automate data plane testing for any pod using software traffic generator based tools.
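
(To make the second hurdle concrete, here is a minimal sketch of the kind of per-pod wiring description that could be standardized; all field names are hypothetical, not an agreed Pharos/PDF schema.)

# Hypothetical per-pod wiring description -- the field names are illustrative
# only, not an agreed schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DataPlanePort:
    nic_model: str          # e.g. "Intel X710"
    pci_address: str        # e.g. "0000:5e:00.0"
    wired_to: str           # e.g. "tor-switch-1 port 12" or "compute-1 enp5s0f1"
    encap: str = "vlan"     # data plane encapsulation: vlan, vxlan, flat, ...
    vlan_range: str = ""    # e.g. "1000-1010" when encap is "vlan"

@dataclass
class JumpHostWiring:
    ports: List[DataPlanePort] = field(default_factory=list)

example = JumpHostWiring(ports=[
    DataPlanePort("Intel X710", "0000:5e:00.0", "tor-switch-1 port 12",
                  vlan_range="1000-1010"),
    DataPlanePort("Intel X710", "0000:5e:00.1", "tor-switch-1 port 13",
                  vlan_range="1000-1010"),
])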

To start this, I have added some notes to the Pharos 2.0 spec etherpad
https://etherpad.opnfv.org/p/Pharos_Specification2.0

Please add/edit as needed.

Trevor: I have not looked at Dovetail usage yet, but we can certainly discuss with the Dovetail team if that will be helpful. I’ll need to check, for example, the Dovetail requirements for the data plane (e.g. what encaps, what type of Neutron network… as there can be quite a lot of variation depending on the Neutron plugins and Neutron implementation).

Thanks

  Alec








From: Wenjing Chu <wenjing@huawei.com>
Date: Thursday, September 28, 2017 at 2:16 PM
To: "Cooper, Trevor" <trevor.coo...@intel.com>, "Alec Hothan (ahothan)" 
<ahot...@cisco.com>, "opnfv-tech-discuss@lists.opnfv.org" 
<opnfv-tech-discuss@lists.opnfv.org>
Subject: RE: [opnfv-tech-discuss] [VSPERF][NFVBENCH] streamlining the jump host 
to data plane wiring

I would second the point raised here.
We went through a similar thought process in Dovetail, and simplified the data 
connection model between the jump server (the test node) and the pod (the
system under test). I agree that clarity here will reduce complexity for many 
projects.

Regards
Wenjing

From: opnfv-tech-discuss-boun...@lists.opnfv.org 
[mailto:opnfv-tech-discuss-boun...@lists.opnfv.org] On Behalf Of Cooper, Trevor
Sent: Thursday, September 28, 2017 1:37 PM
To: Alec Hothan (ahothan) <ahot...@cisco.com>; 
opnfv-tech-discuss@lists.opnfv.org
Subject: Re: [opnfv-tech-discuss] [VSPERF][NFVBENCH] streamlining the jump host 
to data plane wiring

Hi Alec

I think that is a good goal. Since VSPERF is stand-alone (not dependent on OpenStack), there is no need for a Jump Server in the Pharos sense. In our CI and sandbox environments, both hardware and software traffic generators are directly connected to the DUT. Especially for post-deployment tools like NFVBench, I think your idea makes sense, as it will help with usability, which is key. I also think this is something we can take to Dovetail, where we will definitely need a well-defined wiring model for any kind of performance test. Do you have a proposal in mind?

Thanks

/Trevor




From: Alec Hothan (ahothan) [mailto:ahot...@cisco.com]
Sent: Tuesday, September 26, 2017 11:38 AM
To: 
opnfv-tech-discuss@lists.opnfv.org<mailto:opnfv-tech-discuss@lists.opnfv.org>; 
Cooper, Trevor <trevor.coo...@intel.com<mailto:trevor.coo...@intel.com>>
Cc: MORTON, ALFRED C (AL) <acmor...@att.com<mailto:acmor...@att.com>>
Subject: [VSPERF][NFVBENCH] streamlining the jump host to data plane wiring

Hi Trevor and team,

I’d like to get some feedback regarding the way jump hosts are wired to the data plane, as that will have a direct impact on how software-based traffic generators like TRex are configured on the jump host.
My impression is that the wiring of traffic gen devices to the data plane has been ad hoc per pod until now (meaning it might differ from one testbed to another).
I have seen OPNFV diagrams where traffic gen devices are wired directly to a compute node and others where they are wired to a switch.
If we have a more streamlined way of wiring the jump host to the data plane, it will make automated runs of software-based traffic gen tools a lot easier.

What I’d like to suggest is to have a common jump host data plane wiring model for all pods, so we can run the same automated scripts on any pod without having to deal with wild variations in wiring and data plane configuration.
This does not necessarily mean setting up a complex model; on the contrary, I’d like to propose a simple wiring model that can accommodate most use cases (this will of course not preclude the use of special wiring for other use cases).
I would be interested to know the VSPERF team’s experience in that regard, and whether we can come up with a joint proposal.

Thanks

  Alec






Re: [opnfv-tech-discuss] [VSPERF][NFVBENCH] streamlining the jump host to data plane wiring

2017-09-28 Thread Wenjing Chu
I would second the point raised here.
We went through a similar thought process in Dovetail, and simplified the data 
connection model between the jump server (the test node) and the pod (the
system under test). I agree that clarity here will reduce complexity for many 
projects.

Regards
Wenjing

From: opnfv-tech-discuss-boun...@lists.opnfv.org 
[mailto:opnfv-tech-discuss-boun...@lists.opnfv.org] On Behalf Of Cooper, Trevor
Sent: Thursday, September 28, 2017 1:37 PM
To: Alec Hothan (ahothan) <ahot...@cisco.com>; 
opnfv-tech-discuss@lists.opnfv.org
Subject: Re: [opnfv-tech-discuss] [VSPERF][NFVBENCH] streamlining the jump host 
to data plane wiring

Hi Alec

I think that is a good goal. Since VSPERF is stand-alone (not dependent on OpenStack), there is no need for a Jump Server in the Pharos sense. In our CI and sandbox environments, both hardware and software traffic generators are directly connected to the DUT. Especially for post-deployment tools like NFVBench, I think your idea makes sense, as it will help with usability, which is key. I also think this is something we can take to Dovetail, where we will definitely need a well-defined wiring model for any kind of performance test. Do you have a proposal in mind?

Thanks

/Trevor




From: Alec Hothan (ahothan) [mailto:ahot...@cisco.com]
Sent: Tuesday, September 26, 2017 11:38 AM
To: 
opnfv-tech-discuss@lists.opnfv.org<mailto:opnfv-tech-discuss@lists.opnfv.org>; 
Cooper, Trevor <trevor.coo...@intel.com<mailto:trevor.coo...@intel.com>>
Cc: MORTON, ALFRED C (AL) <acmor...@att.com<mailto:acmor...@att.com>>
Subject: [VSPERF][NFVBENCH] streamlining the jump host to data plane wiring

Hi Trevor and team,

I’d like to get some feedback regarding the way jump hosts are wired to the data plane, as that will have a direct impact on how software-based traffic generators like TRex are configured on the jump host.
My impression is that the wiring of traffic gen devices to the data plane has been ad hoc per pod until now (meaning it might differ from one testbed to another).
I have seen OPNFV diagrams where traffic gen devices are wired directly to a compute node and others where they are wired to a switch.
If we have a more streamlined way of wiring the jump host to the data plane, it will make automated runs of software-based traffic gen tools a lot easier.

What I’d like to suggest is to have a common jump host data plane wiring model for all pods, so we can run the same automated scripts on any pod without having to deal with wild variations in wiring and data plane configuration.
This does not necessarily mean setting up a complex model; on the contrary, I’d like to propose a simple wiring model that can accommodate most use cases (this will of course not preclude the use of special wiring for other use cases).
I would be interested to know the VSPERF team’s experience in that regard, and whether we can come up with a joint proposal.

Thanks

  Alec






[opnfv-tech-discuss] [VSPERF][NFVBENCH] streamlining the jump host to data plane wiring

2017-09-26 Thread Alec Hothan (ahothan)
Hi Trevor and team,

I’d like to get some feedback regarding the way jump hosts are wired to the data plane, as that will have a direct impact on how software-based traffic generators like TRex are configured on the jump host.
My impression is that the wiring of traffic gen devices to the data plane has been ad hoc per pod until now (meaning it might differ from one testbed to another).
I have seen OPNFV diagrams where traffic gen devices are wired directly to a compute node and others where they are wired to a switch.
If we have a more streamlined way of wiring the jump host to the data plane, it will make automated runs of software-based traffic gen tools a lot easier.

What I’d like to suggest is to have a common jump host data plane wiring model for all pods, so we can run the same automated scripts on any pod without having to deal with wild variations in wiring and data plane configuration.
This does not necessarily mean setting up a complex model; on the contrary, I’d like to propose a simple wiring model that can accommodate most use cases (this will of course not preclude the use of special wiring for other use cases).
I would be interested to know the VSPERF team’s experience in that regard, and whether we can come up with a joint proposal.

Thanks

  Alec



