Hi all,

As noted on the Barometer call, here is a quick overview of the status of 
VES-Barometer integration, and some takeaways, in advance of a 30-min demo on 
an upcoming Barometer call. For reference, see the current VES deployment 
approach for OPNFV testing and ONAP integration at the VES home 
page<https://wiki.opnfv.org/display/ves/VES+Home>, and how the scripts below 
are used in a complete stack deployment a la 
demo_deploy.sh<https://github.com/opnfv/models/blob/master/tools/kubernetes/demo_deploy.sh> 
for the models 
kubernetes<https://github.com/opnfv/models/tree/master/tools/kubernetes> stack.

First, as you can see in 
ves-setup.sh<https://github.com/opnfv/ves/blob/master/tools/ves-setup.sh>, I'm 
now using the Barometer container out of the box. Thanks a lot for implementing 
this... it has really simplified the VES deployment.
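
For anyone trying this standalone, a minimal sketch of how that container can 
be started on a node; the image name and options here are my assumptions, so 
check ves-setup.sh or the Barometer docs for the exact invocation:

    # Sketch only: image name/options are assumptions; see the Barometer docs
    # or ves-setup.sh for the actual command used.
    sudo docker pull opnfv/barometer
    sudo docker run -tid --net=host --name barometer opnfv/barometer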

In ves-setup.sh<https://github.com/opnfv/ves/blob/master/tools/ves-setup.sh> 
you can also see that I now deploy zookeeper as its own container, using the 
default zookeeper image. For kafka, I had to create a container image whose 
startup consumes some required environment parameters, per the code snippet 
below:
    log "setup kafka server"
    source ~/k8s_env.sh
    sudo docker run -it -d -p 2181:2181 --name zookeeper zookeeper
    sudo docker run -it -d -p 9092:9092 --name ves-kafka \
      -e zookeeper_host=$k8s_master_host \
      -e zookeeper=$k8s_master \
      -e kafka_hostname=$ves_kafka_hostname \
      blsaws/ves-kafka:latest
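
For illustration, here is the kind of startup script such an image bakes in to 
consume those parameters; this is a hypothetical sketch (paths and sed patterns 
included), not the actual content built by ves-kafka.sh:

    #!/bin/bash
    # Hypothetical sketch of a kafka startup script using the parameters passed
    # above; see ves-kafka.sh for the real build and startup logic.
    # Make the zookeeper host resolvable inside the container...
    echo "$zookeeper $zookeeper_host" >>/etc/hosts
    # ...point kafka at it, and advertise a hostname the VES agent can reach
    sed -i "s~zookeeper.connect=.*~zookeeper.connect=$zookeeper_host:2181~" \
      /opt/kafka/config/server.properties
    sed -i "s~#advertised.listeners=.*~advertised.listeners=PLAINTEXT://$kafka_hostname:9092~" \
      /opt/kafka/config/server.properties
    /opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties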

It would be good if the build process for this container (see 
ves-kafka.sh<https://github.com/opnfv/ves/blob/master/build/ves-kafka.sh>) 
could be picked up by Barometer, so that an "official" OPNFV kafka image can be 
generated for VES to use. This would also help reduce the ants that anteater 
complains about (I assume you know what anteater is...).
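
The build itself is just the usual docker build/tag/push; a rough sketch, with 
the path, tag, and registry as placeholders rather than the actual values in 
ves-kafka.sh:

    # Rough sketch only; see ves-kafka.sh for the actual steps and image tag.
    cd <dir containing the ves-kafka Dockerfile>
    sudo docker build -t ves-kafka .
    sudo docker tag ves-kafka <registry>/ves-kafka:latest
    sudo docker push <registry>/ves-kafka:latest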

The ves-agent.sh<https://github.com/opnfv/ves/blob/master/build/ves-agent.sh> 
and 
ves-collector.sh<https://github.com/opnfv/ves/blob/master/build/ves-collector.sh> 
scripts build the containers that run those components. The ves-agent build 
script could migrate to Barometer, since it's based upon the ves_app.py code, 
but that's just an option. At the least, we can review the dockerfiles/scripts 
to see what any Barometer docs or code can incorporate.

Right now those containers are simply started using docker, via a shell script 
run over SSH on the node that the user (of 
demo_deploy.sh<https://github.com/opnfv/ves/blob/master/tools/demo_deploy.sh>) 
chooses as the "master" node. I plan to evolve that soon into a more robust 
approach, including:

  *   Deploying/managing the containers using kubernetes, with k8s chart labels 
for service placement and restart policies for resiliency (see the sketch after 
this list)
  *   Assessing other ways to enhance the resiliency/HA of the services, e.g. 
running multiple k8s pods per service; this will include assessing how well k8s 
restores a service deployed as a single pod, and how that affects the 
reliability of the VES framework.
  *   Deploying the containers onto k8s clusters (managing the local cloud/NFV 
platform) using Helm and/or Cloudify, with the goal of aligning with the ONAP 
OOM project in the current/next release.
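
As a sketch of the first bullet above, deploying e.g. the collector under 
kubernetes could look roughly like the following, assuming kubectl access on 
the master; the names, label, image, and port are illustrative only, not the 
actual VES artifacts:

    # Sketch only: names/labels/image/port are illustrative, not the actual
    # VES deployment artifacts.
    # Label the chosen node so the collector lands there (hypothetical label)
    kubectl label nodes <master-node-name> role=master
    # Run the collector as a Deployment; the controller restarts/reschedules
    # the pod if it dies, which is the basic resiliency mentioned above
    kubectl create deployment ves-collector --image=ves-collector:latest
    # Constrain placement to the labeled node (normally this would be set in
    # the pod spec or helm chart rather than patched afterwards)
    kubectl patch deployment ves-collector -p \
      '{"spec":{"template":{"spec":{"nodeSelector":{"role":"master"}}}}}'
    # Expose the collector port so the agent can reach it (illustrative port)
    kubectl expose deployment ves-collector --port=30000 --type=NodePort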

I'd also like to have a deep-dive on the VES schema and how the ves_app.py code 
maps the collectd events to it. I'll try to collect some specific issues on 
that in advance of a later session, and get Alok's team to support the 
discussion.

Other goals for this year include aligning the ONAP VES code with OPNFV, 
driving VES/DCAE/CLAMP integration in OPNFV through the Auto project, and 
leveraging the above for development of VES-compliance tests for NFV platforms 
and VNFs, as well as within the broader goals of Auto for closed-loop control 
tests.

Thanks,
Bryan Sullivan | AT&T
