> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Tuesday, April 9, 2019 6:05 PM
> To: Ananyev, Konstantin <konstantin.anan...@intel.com>; Aaron Conole
> <acon...@redhat.com>
> Cc: dev@dpdk.org; Jerin Jacob <jer...@marvell.com>; Gavin Hu
> <gavin...@arm.com>; Richardson, Bruce <bruce.richard...@intel.com>;
> Michael Santana <msant...@redhat.com>
> Subject: RE: [PATCH 3/3] acl: adjust the tests
> 
> 
> 
> >
> > > > Hi Aaron,
> > > >
> > > >>
> > > >> This makes the tests pass, and also ensures that on platforms
> > > >> where the testing is supported, we can properly test the
> > > >> implementation specific code.  One edge case is when we run on
> > > >> x86_64 systems that don't support AVX2, but where the compiler
> > > >> can generate such instructions.  That could be an enhancement in
> > > >> the future, but for now at least the tests will pass.
> > > >>
> > > >> Signed-off-by: Aaron Conole <acon...@redhat.com>
> > > >> ---
> > > >>  app/test/test_acl.c             | 62 +++++++++++++--------------------
> > > >>  lib/librte_acl/Makefile         |  1 +
> > > >>  lib/librte_acl/acl_run_notsup.c | 46 ++++++++++++++++++++++++
> > > >>  lib/librte_acl/meson.build      |  4 +--
> > > >>  4 files changed, 73 insertions(+), 40 deletions(-)
> > > >>  create mode 100644 lib/librte_acl/acl_run_notsup.c
> > > >>
> > > >> diff --git a/app/test/test_acl.c b/app/test/test_acl.c
> > > >> index b1f75d1bc..c44faa251 100644
> > > >> --- a/app/test/test_acl.c
> > > >> +++ b/app/test/test_acl.c
> > > >> @@ -408,6 +408,9 @@ test_classify(void)
> > > >>                return -1;
> > > >>        }
> > > >>
> > > >> +      /* Always use the scalar testing for now. */
> > > >> +      rte_acl_set_ctx_classify(acx, RTE_ACL_CLASSIFY_SCALAR);
> > > >> +
> > > >>        ret = 0;
> > > >>        for (i = 0; i != TEST_CLASSIFY_ITER; i++) {
> > > >>
> > > >> @@ -547,6 +550,7 @@ test_build_ports_range(void)
> > > >>        for (i = 0; i != RTE_DIM(test_data); i++)
> > > >>                data[i] = (uint8_t *)&test_data[i];
> > > >>
> > > >> +      rte_acl_set_ctx_classify(acx, RTE_ACL_CLASSIFY_SCALAR);
> > > >>        for (i = 0; i != RTE_DIM(test_rules); i++) {
> > > >>                rte_acl_reset(acx);
> > > >>                ret = test_classify_buid(acx, test_rules, i + 1);
> > > >> @@ -911,6 +915,8 @@ test_convert_rules(const char *desc,
> > > >>                return -1;
> > > >>        }
> > > >>
> > > >> +      rte_acl_set_ctx_classify(acx, RTE_ACL_CLASSIFY_SCALAR);
> > > >> +
> > > >
> > > > As I understand here and above, on x86 you replaced the default algo
> > > > (SSE, AVX2) with the scalar one, right?
> > > > That looks like a reduction of test coverage for x86.
> > >
> > > In one way, you're right.  However, the tests weren't testing what
> > > they purported to anyway.
> >
> > Could you explain a bit more here?
> > What I am seeing: the tests were running both the sse (or avx2) and the
> > scalar classify() methods.
> > Now they always run scalar only.
> > To me it definitely looks like a reduction in coverage.
> >
> > >  Actually, it's just a shift I think (previously, it would have
> > > tested the AVX2 path, but I don't see the AVX2 code having a fallback
> > > into the SSE code - unlike the SSE code falling back into scalar).
> >
> > Not sure I understand you here.
> > What AVX2 fallback do you think is missing?
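
For reference, the fallback being discussed can be sketched as an explicit
selection chain. This is only an illustration of the idea, not the actual
DPDK dispatch code; pick_acl_alg() is a hypothetical helper:

	#include <rte_acl.h>
	#include <rte_cpuflags.h>

	/* Pick the best classify method the running CPU supports,
	 * falling back AVX2 -> SSE -> scalar. */
	static enum rte_acl_classify_alg
	pick_acl_alg(void)
	{
		if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2))
			return RTE_ACL_CLASSIFY_AVX2;
		if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_SSE4_1))
			return RTE_ACL_CLASSIFY_SSE;
		return RTE_ACL_CLASSIFY_SCALAR;
	}

A test wanting both kinds of coverage could then run classification once
with rte_acl_set_ctx_classify(acx, pick_acl_alg()) and once more with
RTE_ACL_CLASSIFY_SCALAR.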
> >
> > >
> > > The tests were failing for a number of reasons when built with
> > > meson,
> >
> > Ok, but with the legacy build system (make) on x86 all tests pass, right?
> > So the problem is in the new build system, not in the test itself.
> > Why should we compromise our test coverage to make it work with the new
> > tools?
> > That just hides the problem without fixing it.
> > Instead I think the build system needs to be fixed.
> > Looking at it a bit more closely: for the .so, meson+ninja generates the
> > correct version of the function:
> >
> > nm x86_64-native-linuxapp-gcc-meson/lib/librte_acl.so.2 | grep acl_classify_sse
> > 000000000000fa50 t rte_acl_classify_sse
> >
> > So for 'meson -Ddefault_library=shared' acl_autotest passes without problems.
> >
> > Though for the static lib we have both:
> > nm x86_64-native-linuxapp-gcc-meson/lib/librte_acl.a | grep acl_classify_sse
> > 0000000000000000 W rte_acl_classify_sse
> > 0000000000004880 T rte_acl_classify_sse
> >
> > And then the linker picks up the wrong one:
> > nm x86_64-native-linuxapp-gcc-meson/app/test/dpdk-test | grep acl_classify_sse
> > 00000000005f6100 W rte_acl_classify_sse
> >
> > While for make:
> > $ nm x86_64-native-linuxapp-gcc-aesmb/lib/librte_acl.a | grep acl_classify_sse
> > 0000000000000000 W rte_acl_classify_sse
> > 0000000000004880 T rte_acl_classify_sse
> > $ nm x86_64-native-linuxapp-gcc-aesmb/app/test | grep acl_classify_sse
> > 0000000000240440 T rte_acl_classify_sse
> >
> > The linker picks up the right one.
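
For context, the weak/strong pattern behind those W and T symbols looks
roughly like this (an illustrative sketch with simplified bodies, not the
exact librte_acl sources):

	/* acl_run_notsup.c - always compiled; provides a weak stub. */
	#include <errno.h>
	#include <rte_acl.h>

	int __attribute__ ((weak))
	rte_acl_classify_sse(const struct rte_acl_ctx *ctx,
		const uint8_t **data, uint32_t *results,
		uint32_t num, uint32_t categories)
	{
		(void)ctx; (void)data; (void)results;
		(void)num; (void)categories;
		/* reached only if no strong definition was linked in */
		return -ENOTSUP;
	}

	/* acl_run_sse.c - compiled only with SSE support; its strong
	 * rte_acl_classify_sse() is meant to override the weak stub. */

Both objects end up in librte_acl.a; if the linker satisfies the reference
from the archive member holding the weak stub, the strong one is never
extracted, which matches the lone W symbol in dpdk-test above.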
> 
> And the changes below make the linker pick up the proper version of the
> function and make acl_autotest pass for the static build too.
> 
> diff --git a/app/test/meson.build b/app/test/meson.build
> index 867cc5863..4364be932 100644
> --- a/app/test/meson.build
> +++ b/app/test/meson.build
> @@ -328,6 +328,7 @@ test_dep_objs += cc.find_library('execinfo', required: false)
>  link_libs = []
>  if get_option('default_library') == 'static'
>         link_libs = dpdk_drivers
> +       link_libs += dpdk_static_libraries
>  endif
> 
>  if get_option('tests')
> diff --git a/meson.build b/meson.build
> index a96486597..df1e1c41c 100644
> --- a/meson.build
> +++ b/meson.build
> @@ -62,6 +62,7 @@ configure_file(output: build_cfg,
>  # for static builds, include the drivers as libs and we need to "whole-archive"
>  # them.
>  dpdk_drivers = ['-Wl,--whole-archive'] + dpdk_drivers + ['-Wl,--no-whole-archive']
> +dpdk_static_libraries = ['-Wl,--whole-archive'] + dpdk_static_libraries + ['-Wl,--no-whole-archive']
> 
> Not saying that's the proper patch, but just to prove that linking
> librte_acl.a with '--whole-archive' does fix the problem.
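
As a stand-alone illustration of why '--whole-archive' helps, here is a
minimal reproduction (hypothetical files weak.c, strong.c and the main.c
below; build commands in the comment):

	/*
	 * weak.c:    __attribute__ ((weak)) int f(void) { return 0; }
	 * strong.c:  int f(void) { return 1; }
	 *
	 *   cc -c weak.c strong.c main.c
	 *   ar rcs libf.a weak.o strong.o
	 *   cc -o t1 main.o libf.a
	 *      -> the first archive member that satisfies f() is the weak
	 *         stub, so t1 prints 0
	 *   cc -o t2 main.o -Wl,--whole-archive libf.a -Wl,--no-whole-archive
	 *      -> all members are pulled in and the strong definition
	 *         overrides the weak one, so t2 prints 1
	 */
	#include <stdio.h>

	int f(void);

	int main(void)
	{
		printf("f() = %d\n", f());
		return 0;
	}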
> Bruce, could you point out the best way to fix things here (my meson
> knowledge is very limited)?
> Do we need an extra container here, such as 'whole_archive_static_libraries[]'
> or so?
> Thanks
> Konstantin
> 
I'll look into this. I'd really rather avoid having all DPDK libraries
linked with link-whole, but if not, we'll need some sort of similar solution.

Regards,
/Bruce
