Hi,

As discussed yesterday, Alberto will provide a proposal for dashboarding the testing projects, together with a training for the community on customizing the ELK dashboard. Before that, there are still several issues we need to resolve.
1. Strange values for version, criteria and scenario.

We need to decide how to handle these values. During the discussion we identified 3 options:
  a) Make the dashboard not show the results carrying these values, i.e., screen them out.
  b) Each project corrects the values in TestDB to comply with our Naming Conventions.
  c) We correct as many as we can and leave the rest for project-specific purposes.

Project     | Version                                                  | Criteria                                       | Scenario
Bottlenecks | stable/Euphrates                                         | None                                           | None
QTIP        | None                                                     | 1640, 1777, 1785, 1562, 1734, 1662, 1669, 1636 | generic
VSPERF      | OVScc3a32f3b6891168cee98812e8f5e3d8a5a52c98 DPDK v17.02; | None                                           | vsperf
            | OVScc3a32f3b6891168cee98812e8f5e3d8a5a52c98;             |                                                |
            | OVScc3a32f3b6891168cee98812e8f5e3d8a5a52c98 DPDK not used; |                                              |
            | OVSed26e3ea9995ba632e681d5990af5ee9814f650e DPDK v16.07  |                                                |

2. Needs/requirements/necessity for generating subpages for testing projects.

There has been a long discussion about the subpages for testing projects in the Bitergia dashboard. Since the dashboard can automatically generate a search-results page for a given project/keyword, we need to collect the special needs of the projects that plan to show a subpage under the testing tab. The table below, made by Morgan, summarizes the needs/issues for this purpose. If you are planning to create a subpage, please make sure that the results of your test cases are pushed to TestDB, and send example graphs to the Bitergia team.
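To make option (a) concrete, here is a minimal sketch of what screening could look like on the consumer side. The regular expressions are assumptions for illustration only (they are NOT the official Naming Conventions, and `is_compliant`/`screen` are hypothetical helpers, not existing TestDB code); the real patterns would have to come from the conventions we agree on:

```python
import re

# ASSUMED conventions for illustration only:
#   version  like "master" or "stable/<release>"
#   scenario like "os-<controller>-<feature>-(ha|noha)"
VERSION_RE = re.compile(r"^(master|stable/[a-z]+)$")
SCENARIO_RE = re.compile(r"^os-[a-z0-9]+-[a-z0-9_]+-(ha|noha)$")

def is_compliant(result):
    """Return True if a TestDB result record has convention-compliant fields."""
    return bool(
        VERSION_RE.match(result.get("version", ""))
        and SCENARIO_RE.match(result.get("scenario", ""))
    )

def screen(results):
    """Option (a): drop results carrying strange values before dashboarding."""
    return [r for r in results if is_compliant(r)]
```

For example, a record with version "stable/euphrates" and scenario "os-nosdn-nofeature-ha" would pass, while the OVS-commit-hash versions from the table above would be screened out. The same logic could equally live in the dashboard query instead of a pre-filter.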
Alberto will make a proposal based on the needs of the projects that require a subpage.

Project: Functest
  Subpage status: created <https://opnfv.biterg.io/goto/141bf895a14364f7408d0a77ac16e4bc>
  OPNFV contacts: cedric.olliv...@orange.com, jalaus...@suse.com
  Customized expected graph(s): 1 graph: (Runs, Failures, Cases) = f(time)
  Comments: display error on the test run/fail/skip graphs (only Sum of Failures displayed); graphs can be used for tempest/refstack/snaps_smoke

Project: StorPerf
  Subpage status: not created
  OPNFV contacts: mark.bei...@emc.com
  Customized expected graph(s): 1 graph: Bandwidth(bandwidth, average bandwidth, 110% average, 90% average, slope) = f(time)
  Comments: StorPerf created a Docker container for reporting: http://docs.opnfv.org/en/latest/submodules/storperf/docs/testing/user/storperf-reporting.html

Project: VSPERF
  Subpage status: not created
  OPNFV contacts: sridhar....@spirent.com
  Customized expected graph(s): 2 graphs: Frames(64, 128, 512, 1024, 1518) = f(time); Fps(64, 128, 512, 1024, 1518) = f(time)
  Comments: https://wiki.opnfv.org/display/vsperf/VSPERF+CI+Results

Project: Bottlenecks
  Subpage status: not created
  OPNFV contacts: gabriel.yuy...@huawei.com
  Customized expected graph(s): example done on a local Kibana by the Bottlenecks team
  Comments: problem: 1 graph = 1 result (not time-based); which result shall be represented? Does not match the usual filtering models.

Project: QTIP
  Subpage status: not created
  OPNFV contacts: zhangyujun+...@gmail.com
  Customized expected graph(s): ??

Project: Yardstick
  Subpage status: not created
  OPNFV contacts: ross.b.bratt...@intel.com, jean.gaoli...@huawei.com
  Customized expected graph(s): None?
  Comments: Yardstick already has a complete dashboarding solution based on Grafana; no need for subpages in Bitergia?

Best,
Gabriel
_______________________________________________ opnfv-tech-discuss mailing list opnfv-tech-discuss@lists.opnfv.org https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss