Hi Alex,
From: Alexandru Avadanii <alexandru.avada...@enea.com>
Date: Sunday, September 10, 2017 at 3:40 PM
To: "Alec Hothan (ahothan)" <ahot...@cisco.com>, Julien <julien...@gmail.com>, "Cooper, Trevor" <trevor.coo...@intel.com>, "Beierl, Mark" <mark.bei...@dell.com>, Bob Monkman <bob.monk...@arm.com>, Shai Tsur <shai.t...@arm.com>
Cc: "opnfv-tech-discuss@lists.opnfv.org" <opnfv-tech-discuss@lists.opnfv.org>
Subject: RE: [opnfv-tech-discuss] Topics for Weekly Technical Discussion

Hi, Alec,

Multiarch support is still optional for all test projects, as well as for installer projects. We (Armband) provided and continue to provide software support, as well as hardware resources for AArch64-specific tasks. This includes community-available build servers, PODs etc. For functest and yardstick, most of the initial porting was handled by us (with the help of the respective project teams), while storperf was mostly covered by Mark; we only helped with the infra.

[Alec] That is great, but if I am not mistaken, the validation of the ARM version of any project is still to be done by the project team: this means testing the ARM version on ARM HW and making sure it behaves identically on an ongoing basis.

When it comes to CI/CD, we can help all willing projects with the ARM port – e.g. we are currently working on extending Doctor testing to run on AArch64 PODs too, after we recently added arch-specific test phases for storperf (again, mostly Mark’s work), functest etc. The builds usually run in parallel, so it doesn’t translate to doubling the verify job duration.

[Alec] You may think it is parallel, but if I’m not mistaken they’re just queued in a single queue (there is a loop checking whether there is an ongoing docker build), so there is effectively only 1 docker build happening at any time. Meaning this effectively and exactly doubles the time to build containers and impacts the wait time for everyone.
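The serialization Alec describes could be sketched roughly like this (hypothetical, not the actual releng code): when every build must first take a shared lock, two "parallel" per-arch verify jobs really run back to back, so total build time is approximately the sum of both.

```shell
#!/bin/sh
# Hypothetical sketch of a single-queue docker build: a shared lock file
# means only one build runs at a time, even when jobs are launched in
# parallel. Function and file names are illustrative only.
LOCK=$(mktemp)

build_image() {
  (
    flock 9                       # wait for any ongoing build to finish
    echo "docker build for $1"    # stand-in for the real 'docker build'
  ) 9>"$LOCK"
}

run_builds() {
  build_image x86_64 &
  build_image aarch64 &
  wait
}

run_builds
rm -f "$LOCK"
```

Both builds complete, but never concurrently: the second blocks on `flock` until the first releases the lock.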
I agree that SW traffic generators are more complex to port to a new architecture, so I totally understand the decision regarding TRex.

As for the VM images question, I am probably missing some context. I don’t see any problem in providing VM images for different architectures. Fuel@OPNFV uses some Ubuntu cloud images to provision its nodes, and apart from using a different URL depending on the hardware architecture, the process is completely transparent.

[Alec] Providing 2 versions of VM images is not an issue, but you have to build them and test them as well. VM image builds take time, and those can also be built on the releng build servers. My project builds a VM image for x86; every time I create a new image I would have to build it for 2 archs instead of 1 if I had to support arm64.

I’ll have to think a bit more about installer vs test project requirements. I understand the motivation behind it, but implementing it in CI/CD raises some concerns, since so far we assumed the jumphost handles both install and test phases.

[Alec] Installing the OPNFV test environment is likely just a matter of deploying a relatively small number of docker containers; it could be done with docker compose, I guess. And it has no need to support multiple Linux distros/HW archs, for the reasons cited below. That is totally different from installing any full-blown OpenStack installer, so it has no reason to be subject to the same constraints. Separating them can only simplify CI/CD (by having smaller independent units).
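The "different URL per architecture" approach Alex describes could look something like the sketch below. This is illustrative only: the URL pattern follows Ubuntu's public cloud-image naming, not necessarily the exact URLs Fuel@OPNFV uses, and the function name is made up.

```shell
#!/bin/sh
# Sketch: map the host architecture to an Ubuntu cloud-image arch string
# and build the download URL. Everything downstream of the URL choice is
# identical for both arches.
img_url() {
  case "$1" in
    x86_64)  arch=amd64 ;;
    aarch64) arch=arm64 ;;
    *) echo "unsupported arch: $1" >&2; return 1 ;;
  esac
  echo "https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-${arch}-disk1.img"
}

img_url "$(uname -m)"
```

The point of the example is that the arch dependency is confined to one lookup; the provisioning logic itself stays arch-agnostic.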
Thanks, Alec

BR,
Alex

From: Alec Hothan (ahothan) [mailto:ahot...@cisco.com]
Sent: Sunday, September 10, 2017 10:30 PM
To: Alexandru Avadanii; Julien; Cooper, Trevor; Beierl, Mark; Bob Monkman; Shai Tsur
Cc: opnfv-tech-discuss@lists.opnfv.org
Subject: Re: [opnfv-tech-discuss] Topics for Weekly Technical Discussion

I’d like to point out that there is also a cost involved in supporting dual architectures on the testing tools side, one that I think largely outweighs the material cost of adding an x86 jump host (including the additional wiring/config). Because of that dual-architecture mandate, all test projects have to produce 2 times more artifacts, double the time to build them every time a build is triggered, and carry the extra burden of maintaining CI/CD scripts – not even mentioning the time to test the code on 2 archs.

Some test projects – most specifically those that use SW traffic generators like TRex – will not get onto arm64 anytime soon, because we just do not have the resources and time to invest there; we already have a lot of work to do and barely enough resources with just 1 arch. I honestly prefer to have 1 good-quality test framework and toolset on 1 arch than to get stretched thin on 2 archs and not be able to provide good-quality code.

It is good that we support container builds for both archs today, but what about VM images?

Obviously, it would be “nicer” to have dual-arch support for testing tools, but we have to make trade-offs based on practical considerations. Given that the primary function of these tools is to test a pod, it should not matter on which arch they run as long as they perform their function properly, predictably and consistently. On the contrary, what if test tools perform differently on arm64 than on x86? Won’t that add yet another unnecessary wild card for comparing platforms?
I’d rather not take any chances, and instead use the exact same SW/HW setting on the testing side to test different pods: pick a fixed HW arch (x86_64) and a fixed Linux distro – both independent of the pod under test – and let testing teams focus on honing their framework and tools on 1 test platform. That seems like the most sensible thing to do. We need consistency of test tools across all OPNFV labs, and that includes the type of test server they run on as well.

Regarding the comments below on deployer/installer requirements (single arch or dual arch on any given pod), we should distinguish pod installer requirements from OPNFV test framework installation requirements. They don’t have to be the same.

Regards,
Alec

From: Alexandru Avadanii <alexandru.avada...@enea.com>
Date: Sunday, September 10, 2017 at 9:12 AM
To: Julien <julien...@gmail.com>, "Alec Hothan (ahothan)" <ahot...@cisco.com>, "Cooper, Trevor" <trevor.coo...@intel.com>, "Beierl, Mark" <mark.bei...@dell.com>, Bob Monkman <bob.monk...@arm.com>, Shai Tsur <shai.t...@arm.com>
Cc: "opnfv-tech-discuss@lists.opnfv.org" <opnfv-tech-discuss@lists.opnfv.org>
Subject: RE: [opnfv-tech-discuss] Topics for Weekly Technical Discussion

Hi, Julien,

Afaik, both Armband Fuel@OPNFV and the soon-to-be-supported Apex on AArch64 require an AArch64 jump host, at least with the current code base. There is nothing stopping us from using a dual jumphost (e.g. AArch64 for deploying, x86_64 for running test suites), but it adds complexity to the lab setup, and resource access control is harder (blocking access to the AArch64 deploy while tests are running on the x86_64 jumphost).

Added Bob and Shai to the To: list.
BR,
Alex

From: opnfv-tech-discuss-boun...@lists.opnfv.org [mailto:opnfv-tech-discuss-boun...@lists.opnfv.org] On Behalf Of Julien
Sent: Sunday, September 10, 2017 12:45 PM
To: Alec Hothan (ahothan); Cooper, Trevor; Beierl, Mark
Cc: opnfv-tech-discuss@lists.opnfv.org
Subject: Re: [opnfv-tech-discuss] Topics for Weekly Technical Discussion

Hi Bob,

We are considering setting up a new ARM POD in our lab, and I plan for only 5 blade servers with ARM CPUs. I wonder if I can still use an x86_64 server as the jumphost for this ARM POD. I don’t expect it to be blocked on needing another ARM server.

BR/Julien

Alec Hothan (ahothan) <ahot...@cisco.com> wrote on Friday, September 8, 2017 at 6:31 AM:

Hi Trevor,

Thanks for getting back on this. I agree there is not much incentive to run TRex on ARM at this point. ARM pods that want to do data plane benchmarking can use a HW traffic generator or run TRex on an Intel jump host.

Thanks
Alec

From: "Cooper, Trevor" <trevor.coo...@intel.com>
Date: Thursday, September 7, 2017 at 2:37 PM
To: "Alec Hothan (ahothan)" <ahot...@cisco.com>, "Beierl, Mark" <mark.bei...@dell.com>
Cc: "opnfv-tech-discuss@lists.opnfv.org" <opnfv-tech-discuss@lists.opnfv.org>, "HU, BIN" <bh5...@att.com>, Raymond Paik <rp...@linuxfoundation.org>
Subject: RE: [opnfv-tech-discuss] Topics for Weekly Technical Discussion

Hi Alec … VSPERF does not currently plan to support TRex on ARM … it’s not clear what the benefit of this work would be, given that there are multiple traffic generator options. The Pharos POD specification doesn’t have any bearing on components such as traffic generators.
We have found that software traffic generators have a wide variety of capabilities.

/Trevor

From: Alec Hothan (ahothan) [mailto:ahot...@cisco.com]
Sent: Thursday, August 17, 2017 7:47 AM
To: Beierl, Mark <mark.bei...@dell.com>
Cc: opnfv-tech-discuss@lists.opnfv.org; HU, BIN <bh5...@att.com>; Raymond Paik <rp...@linuxfoundation.org>; Cooper, Trevor <trevor.coo...@intel.com>
Subject: Re: [opnfv-tech-discuss] Topics for Weekly Technical Discussion

[+Trevor to get the vsperf point of view]

Mark,

Adding ARM artifacts is probably not that much work for Python apps; for C/C++ apps that use DPDK it can be a lot more work. I just checked with the TRex team and, as I suspected, TRex is not available on ARM today. Somebody will have to try it out on an ARM server – meaning it will take some work to compile TRex, link it to DPDK and test it thoroughly to be on par with its x86 version – and a whole lot more people will have to maintain one more arch. The port might work right away or it might be pretty messy. I wonder if Trevor has a plan for TRex on ARM…

From what I can see, running data plane performance tests with TRex on an ARM pod will require an x86 server until TRex is validated on ARM.
Thanks
Alec

From: "Beierl, Mark" <mark.bei...@dell.com>
Date: Thursday, August 17, 2017 at 6:21 AM
To: "Alec Hothan (ahothan)" <ahot...@cisco.com>
Cc: "opnfv-tech-discuss@lists.opnfv.org" <opnfv-tech-discuss@lists.opnfv.org>, "HU, BIN" <bh5...@att.com>, Raymond Paik <rp...@linuxfoundation.org>
Subject: Re: [opnfv-tech-discuss] Topics for Weekly Technical Discussion

Alec,

It is completely up to you how you want to structure your project and your deliverables. If you don't want the extra hassle of supporting ARM, then don't. As for my project and the other ones that happen to support ARM, we will continue this discussion to see what makes sense.

Regards,
Mark

Mark Beierl
SW System Sr Principal Engineer
Dell EMC | Office of the CTO
mobile +1 613 314 8106
mark.bei...@dell.com

On Aug 16, 2017, at 21:02, HU, BIN <bh5...@att.com> wrote:

Alec,

Thank you for your input, and for letting us know you won’t be able to make the meeting tomorrow.

Mark,

Do you still want to discuss this in the meeting tomorrow? (My only concern is the attendance, which may not warrant an effective live discussion.) Or do you think the discussion on the mailing list should be good enough? If we all think the discussion on the mailing list is good enough, we don’t need to discuss it in the meeting tomorrow.

Thanks
Bin

From: Alec Hothan (ahothan) [mailto:ahot...@cisco.com]
Sent: Wednesday, August 16, 2017 5:47 PM
To: HU, BIN <bh5...@att.com>; Beierl, Mark <mark.bei...@dell.com>
Cc: opnfv-tech-discuss@lists.opnfv.org
Subject: Re: [opnfv-tech-discuss] Topics for Weekly Technical Discussion

Mark,

Thanks for updating me on the ARM situation.
My only comment is that it could perhaps have been easier to have an x86 server/jump host servicing an ARM pod, given that testing tools do not have to run on the same arch as the pod under test, but I guess the decision has been made – now we need every test tool to also support ARM (and in addition to more work to support 2 archs, more testing to do…). On my side, I’ll need to check with the TRex team whether they support ARM. If it does not work, every data plane test tool that uses TRex will be impacted (at least vsperf + nfvbench). It really seems to me that we could have saved all the extra hassle of ARM support with an x86 jump host (VMs are another story, but we could have limited the overhead to VM artifacts only).

Bin: unfortunately, I won’t be able to make it to the technical discussion meeting as it will be in the middle of my Thursday commute.

Thanks
Alec

From: "HU, BIN" <bh5...@att.com>
Date: Tuesday, August 15, 2017 at 5:00 PM
To: "Beierl, Mark" <mark.bei...@dell.com>, "Alec Hothan (ahothan)" <ahot...@cisco.com>
Cc: "opnfv-tech-discuss@lists.opnfv.org" <opnfv-tech-discuss@lists.opnfv.org>
Subject: RE: [opnfv-tech-discuss] Topics for Weekly Technical Discussion

Good discussion and suggestions, thank you Alec and Mark. We can discuss this on Thursday. I put it on the agenda as “Container Versioning / Naming Schema for x86 and ARM”.

Talk to you all on Thursday
Bin

From: Beierl, Mark [mailto:mark.bei...@dell.com]
Sent: Tuesday, August 15, 2017 10:23 AM
To: Alec Hothan (ahothan) <ahot...@cisco.com>
Cc: HU, BIN <bh5...@att.com>; opnfv-tech-discuss@lists.opnfv.org
Subject: Re: [opnfv-tech-discuss] Topics for Weekly Technical Discussion

Hello, Alec.
Fair questions, but in the ARM pods there are not necessarily x86 servers available to act as the host for the containers. It is also my desire to support ARM for the various pods we have, and not make it difficult for them to run. We already support ARM containers for functest, yardstick, qtip and dovetail, just with a different naming scheme than other projects in Docker Hub.

If you look at the way multiarch/alpine structures its tags, yes, it is arch-version, so x86-euphrates.1.0 would be the correct way of labelling it. I realize we are getting close to the Euphrates release date, so this might be postponed to F, but I would like to have a community discussion about this to see if it makes sense, or if we want to continue with creating repos to match the architecture.

Regards,
Mark

Mark Beierl
SW System Sr Principal Engineer
Dell EMC | Office of the CTO
mobile +1 613 314 8106
mark.bei...@dell.com

On Aug 15, 2017, at 12:03, Alec Hothan (ahothan) <ahot...@cisco.com> wrote:

We need to look at the impact on versioning, since the docker container tag reflects the release (e.g. euphrates-5.0.0) and this proposal prepends an arch field (x86-euphrates-5.0.0?). How many OPNFV containers will have to support more archs than just x86? I was under the impression that most test containers could manage to run on x86 only (since we can pick the server where these test containers will run), but I am missing the ARM context and why (some) test containers need to support ARM… Is that a mandate for all OPNFV test containers?
Thanks
Alec

From: <opnfv-tech-discuss-boun...@lists.opnfv.org> on behalf of "Beierl, Mark" <mark.bei...@dell.com>
Date: Tuesday, August 15, 2017 at 8:18 AM
To: "HU, BIN" <bh5...@att.com>
Cc: "opnfv-tech-discuss@lists.opnfv.org" <opnfv-tech-discuss@lists.opnfv.org>
Subject: [opnfv-tech-discuss] Topics for Weekly Technical Discussion

Hello,

Is this the right place to discuss changing the docker image names from containing the architecture to having the tag contain it instead? For example (from a previous email), alpine tags as follows [1]:

multiarch/alpine:x86-latest-stable
multiarch/alpine:aarch64-latest-stable

vs. in OPNFV we use the image name to specify the architecture [2], [3]:

opnfv/functest:latest
opnfv/functest_aarch64:latest

I think the way multiarch/alpine does it is preferable, so that there is only one repository name, but I think we need to discuss this across the different projects and releng to make these changes.
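The two naming schemes under discussion can be sketched as two small helpers (a sketch only; the project and release strings are illustrative, and the exact OPNFV repo naming may differ per project):

```shell
#!/bin/sh
# Sketch of the two tagging schemes being compared. Helper names are
# made up for illustration.

# Current OPNFV scheme: the arch is encoded in the repository name,
# with plain "x86" images using the unsuffixed repo.
tag_current() {  # tag_current <project> <arch> <release>
  if [ "$2" = "x86" ]; then
    echo "opnfv/$1:$3"
  else
    echo "opnfv/$1_$2:$3"
  fi
}

# Proposed multiarch/alpine style: one repository, arch as a tag prefix.
tag_proposed() {  # tag_proposed <project> <arch> <release>
  echo "opnfv/$1:$2-$3"
}

tag_current  functest aarch64 latest           # opnfv/functest_aarch64:latest
tag_proposed functest aarch64 euphrates-5.0.0  # opnfv/functest:aarch64-euphrates-5.0.0
```

Under the proposed scheme only one Docker Hub repository exists per project, and consumers select the arch via the tag, which is the versioning impact Alec raises above.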
[1] https://hub.docker.com/r/multiarch/alpine/tags/
[2] https://hub.docker.com/r/opnfv/functest/
[3] https://hub.docker.com/r/opnfv/functest_aarch64/tags/

Regards,
Mark

Mark Beierl
SW System Sr Principal Engineer
Dell EMC | Office of the CTO
mobile +1 613 314 8106
mark.bei...@dell.com

On Aug 15, 2017, at 10:52, HU, BIN <bh5...@att.com> wrote:

Hello community,

Just a friendly reminder that if you want to discuss any item/topic/issue at our weekly technical discussion this Thursday 08/17, please feel free to let me know.

Thanks
Bin

_______________________________________________
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss