On the release call today I mentioned several takeaways from the OPNFV 
Euphrates release that might be good to capture as part of the retrospective. 
Thanks to the release call participants for allowing me to take the final 10 
minutes for these thoughts!

This release gave us a chance to pause on project goals that have, at least in 
part, shifted to ONAP, and to establish a new/modified strategy for how OPNFV 
can add value to ONAP and create more synergy in the new networking umbrella 
structure. The projects affected are primarily VES and Models, which have 
rejoined the release program with these goals for Fraser:

  *   VES<https://wiki.opnfv.org/display/ves/Project+Plan>: Test tools and 
tests for certification of OPNFV reference platform distributions and VNFs as 
compliant with the VES schema and interoperable with ONAP for lifecycle 
management based upon VES events (a minimal event sketch is included after 
these bullets).
     *   Supporting that goal is the ongoing work of deploying the Barometer 
VES agents and the new work of integrating them with ONAP components that will 
drive lifecycle management (DCAE and microservices that implement closed-loop 
control functions).
     *   The test suite that verifies VES functionality for open source NFV 
platforms (e.g. the cloud-native platform WIP in Models) and VNFs (WIP in 
Models) will thus be directly usable by vendors/projects that develop NFV 
platforms and VNFs, and potentially form part of the Dovetail suite.
     *   This work will enable the Auto project to fulfill some of its goals 
for platform/VNF assessments under ONAP automation, e.g. per goals for ONAP's 
role in measuring/managing platform/VNF efficiency, performance, and resilience.
     *   ONAP projects related to VES can thus focus more on the data model 
and architecture for closed-loop control, and work with OPNFV on the broader 
goals of promoting VES support in diverse NFV platforms and VNFs.
  *   Models<https://wiki.opnfv.org/display/models/Release+Plan>: Use case 
tests for certification of VIM platforms (primarily cloud-native, but also 
OpenStack-based VIM platforms as time permits) and VNFs as compatible with VNF 
orchestration via ONAP for lifecycle management based upon TOSCA-based VNF 
blueprints (a blueprint-validation sketch is also included after these 
bullets).
     *   ONAP has taken over most of the meta-goals of Models, e.g. driving 
collaboration across SDOs and open source projects for modeled VNF support. 
OPNFV can, however, retain a key role in developing tools that apply those 
modeled-VNF concepts to a diversity of NFV platforms and VNFs.
     *   Since the overall goal for modeling VNFs is that a single blueprint 
can be deployed on (and across) diverse NFV platforms, expanding Models' focus 
to cloud-native and hybrid-cloud deployment further fits with the goal of 
supporting ONAP-managed VNF lifecycle management in the full diversity of NFV 
environments that will be required. This work will also support the goals of 
the Auto project.
     *   As with VES, ONAP projects can thus focus on cloud/application 
orchestration, and benefit from OPNFV's driving the readiness of 
multi-cloud-compatible orchestration tools (e.g. Cloudify/Aria) for use in the 
ONAP Multi-VIM project, as applied to diverse NFV platforms. In some cases, 
e.g. for cloud-native, this work will accelerate ONAP's readiness to expand 
into hybrid-cloud environments in subsequent releases.
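
As a concrete illustration of the VES goal above, here is a rough sketch (in 
Python, using the requests library) of what a minimal event emission from a 
VNF or agent under test could look like. The collector URL, credentials, and 
source name are placeholders, and the header fields follow my reading of the 
VES 5.x common event header; treat it as illustrative, not as a reference 
implementation.

  # Hedged sketch: emit one VES heartbeat event to a collector.
  # The collector URL, credentials, and source name below are placeholders.
  import time
  import requests

  COLLECTOR = "https://ves-collector.example.com:8443/eventListener/v5"  # placeholder
  AUTH = ("sample_user", "sample_pass")  # placeholder credentials

  def ves_heartbeat(source_name, sequence):
      """Build a minimal heartbeat event keyed by the VES common event header."""
      now_usec = int(time.time() * 1e6)
      return {
          "event": {
              "commonEventHeader": {
                  "domain": "heartbeat",
                  "eventId": "heartbeat-%d" % sequence,
                  "eventName": "Heartbeat_vNF",
                  "priority": "Normal",
                  "reportingEntityName": source_name,
                  "sourceName": source_name,
                  "sequence": sequence,
                  "startEpochMicrosec": now_usec,
                  "lastEpochMicrosec": now_usec,
                  "version": 3.0
              }
          }
      }

  if __name__ == "__main__":
      resp = requests.post(COLLECTOR, json=ves_heartbeat("sample-vnf-01", 1),
                           auth=AUTH, verify=False)  # labs often use self-signed certs
      print(resp.status_code, resp.text)

A test built on this pattern could then assert on how the collector (e.g. DCAE 
in ONAP) processes the event, which is the interoperability aspect of the 
certification goal.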
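
Similarly for Models, a natural first step in any use case test is checking 
that a TOSCA VNF blueprint parses and validates before it is handed to an 
orchestrator. Below is a rough sketch assuming the OpenStack tosca-parser 
package (pip install tosca-parser); the blueprint path and node listing are 
illustrative only.

  # Hedged sketch: validate a TOSCA VNF blueprint and list its node templates.
  import sys
  from toscaparser.tosca_template import ToscaTemplate

  def check_blueprint(path):
      try:
          template = ToscaTemplate(path)  # parses and validates the blueprint
      except Exception as exc:  # tosca-parser raises validation errors on bad input
          print("%s: INVALID - %s" % (path, exc))
          return 1
      # List the node templates (VDUs, networks, etc.) the blueprint defines.
      for node in template.nodetemplates:
          print("%s: %s" % (node.name, node.type))
      return 0

  if __name__ == "__main__":
      sys.exit(check_blueprint(sys.argv[1] if len(sys.argv) > 1 else "blueprint.yaml"))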

The other takeaways from Euphrates reflected a shift in strategy for our focus 
in OPNFV:

  *   Away from (or at least expanding focus beyond) OpenStack-based cloud 
platforms as the core of OPNFV-inspired NFV platforms. This was partly due to 
the complexity of the OpenStack platform and the ongoing difficulty in getting 
OPNFV distros based upon it reliably deployed, but also to its insufficiency 
for meeting the goals of evolving beyond first-generation, 
large-datacenter-focused deployments of legacy network functions as VM-based 
VNFs. Thus the refocus on cloud-native and hybrid-cloud in Models and VES.
     *   Models in particular will produce a variety of cloud-native stacks 
(Kubernetes, Docker CE/Moby, Rancher) as test tools, including common 
subsystems such as software-defined storage (e.g. 
ceph-docker<https://github.com/att/netarbiter/tree/master/sds/ceph-docker/examples/helm>),
 monitoring (Prometheus, VES), and orchestration (Cloudify). A basic 
readiness-check sketch for such a stack follows after these bullets.
  *   Related to the above, the insufficiency of OpenStack as currently 
deployed in OPNFV installers to meet the efficiency and resiliency goals of 
edge-focused NFV platforms.
     *   We had proposed the OpenStack-Helm (OSH)-based 
Armada<https://wiki.opnfv.org/display/PROJ/Armada> project earlier to address 
the latter goal (since Kubernetes will enable much more resilient management 
of the OpenStack control plane), but that has not taken off yet. Nonetheless, 
the OSH project is proceeding and is being pushed toward production-readiness 
by service providers (such as 
AT&T<http://about.att.com/innovationblog/enterprise_cloud>), and I fully expect 
there will be renewed opportunity for OPNFV to kick off a project related to 
OSH in early 2018.
     *   Even so, OSH at the edge may not be efficient or resilient enough, or 
address the needs of truly lightweight, disaggregated, cloud-native VNFs. So 
we have refocused our efforts on establishing the same common ONAP support 
goals for modeled VNFs and closed-loop control on cloud-native platforms.
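
To give a concrete flavor of the kind of basic check the cloud-native test 
tools mentioned above could start with, here is a rough sketch assuming the 
official Kubernetes Python client (pip install kubernetes) and a kubeconfig 
pointing at the stack under test. It verifies node readiness and that pods in 
a monitoring namespace are running; the namespace name is a placeholder.

  # Hedged sketch: basic readiness probe of a cloud-native stack under test.
  from kubernetes import client, config

  def cluster_ready():
      config.load_kube_config()  # or load_incluster_config() when run in a pod
      v1 = client.CoreV1Api()
      ready = True
      # Every node should report the Ready condition as True.
      for node in v1.list_node().items:
          conditions = {c.type: c.status for c in node.status.conditions}
          if conditions.get("Ready") != "True":
              print("node %s not ready" % node.metadata.name)
              ready = False
      # Pods in the monitoring namespace (e.g. Prometheus) should be Running.
      for pod in v1.list_namespaced_pod("monitoring").items:  # placeholder namespace
          if pod.status.phase != "Running":
              print("pod %s is %s" % (pod.metadata.name, pod.status.phase))
              ready = False
      return ready

  if __name__ == "__main__":
      print("cluster ready" if cluster_ready() else "cluster not ready")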

Thanks,
Bryan Sullivan | AT&T
