Hi Lijian,

+1 on the finding. It would be interesting to know how much the performance
gain is.
Having said that, correct me if I am wrong, but I think the pmalloc module
works only with a single hugepage size (pm->def_log2_page_sz), which means
either 1G or 2M, not both.
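
To illustrate the point, here is a simplified sketch of how I read the arena
behaviour; apart from def_log2_page_sz, the names are paraphrased and may not
match vppinfra/pmalloc.c exactly:

#include <stddef.h>
#include <stdint.h>

/* Simplified sketch, not the actual vppinfra code: the arena records one
 * default page size at init, so every page it maps afterwards is either 2M
 * or 1G, never a mix of both within the same arena. */
typedef struct
{
  uint32_t def_log2_page_sz;    /* 21 -> 2M pages, 30 -> 1G pages */
  /* ... */
} pmalloc_arena_sketch_t;

static size_t
arena_page_bytes (pmalloc_arena_sketch_t *pm)
{
  /* all allocations are served from pages of this single size */
  return (size_t) 1 << pm->def_log2_page_sz;
}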

Thanks,
Nitin

From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Honnappa 
Nagarahalli
Sent: Thursday, July 23, 2020 10:53 PM
To: Damjan Marion <dmar...@me.com>
Cc: Lijian Zhang <lijian.zh...@arm.com>; vpp-dev <vpp-dev@lists.fd.io>; nd 
<n...@arm.com>; Govindarajan Mohandoss <govindarajan.mohand...@arm.com>; 
Jieqiang Wang <jieqiang.w...@arm.com>; Honnappa Nagarahalli 
<honnappa.nagaraha...@arm.com>
Subject: [EXT] Re: [vpp-dev] Create big tables on huge-page

Sure. We will create a couple of patches (in the areas we are analyzing
currently) and we can decide from there.
Thanks,
Honnappa

From: Damjan Marion <dmar...@me.com>
Sent: Thursday, July 23, 2020 12:17 PM
To: Honnappa Nagarahalli <honnappa.nagaraha...@arm.com>
Cc: Lijian Zhang <lijian.zh...@arm.com>; vpp-dev <vpp-dev@lists.fd.io>; nd 
<n...@arm.com>; Govindarajan Mohandoss <govindarajan.mohand...@arm.com>; 
Jieqiang Wang <jieqiang.w...@arm.com>
Subject: Re: [vpp-dev] Create big tables on huge-page



Hard to say without seeing the patch. Can you summarize what the changes will 
be in each particular .c file?


On 23 Jul 2020, at 18:00, Honnappa Nagarahalli 
<honnappa.nagaraha...@arm.com> wrote:

Hi Damjan,
                Thank you. Until your patch is ready, would you accept patches 
that enable creating these tables on 1G huge pages as a temporary solution?

Thanks,
Honnappa

From: Damjan Marion <dmar...@me.com>
Sent: Thursday, July 23, 2020 7:15 AM
To: Lijian Zhang <lijian.zh...@arm.com>
Cc: vpp-dev <vpp-dev@lists.fd.io>; nd <n...@arm.com>; Honnappa Nagarahalli 
<honnappa.nagaraha...@arm.com>; Govindarajan Mohandoss 
<govindarajan.mohand...@arm.com>; Jieqiang Wang <jieqiang.w...@arm.com>
Subject: Re: [vpp-dev] Create big tables on huge-page


I started working on a patch that addresses most of these points a few weeks
ago, but I will likely not have it completed for 20.09.
Even if it is completed, it is probably a bad idea to merge it so late in the
release process….

—
Damjan



On 23 Jul 2020, at 10:45, Lijian Zhang 
<lijian.zh...@arm.com> wrote:

Hi Maintainers,
From the VPP source code, the ip4-mtrie table is created on huge pages only
when the parameters below are set in the configuration file, while the
adjacency table is always created on normal pages.
ip {
  heap-size 256M
  mtrie-hugetlb
}
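For context, the general mechanism behind a hugepage-backed heap is roughly
the sketch below. This is only an illustration of what mtrie-hugetlb asks the
allocator to do, not VPP's actual code:

#include <stddef.h>
#include <sys/mman.h>

#ifndef MAP_HUGE_SHIFT
#define MAP_HUGE_SHIFT 26
#endif
#ifndef MAP_HUGE_1GB
#define MAP_HUGE_1GB (30 << MAP_HUGE_SHIFT)
#endif

/* Illustration only: reserve an anonymous mapping backed by 1G hugepages,
 * which is roughly what a hugepage-backed heap does underneath. The system
 * must have 1G hugepages pre-allocated for this to succeed. */
static void *
alloc_1g_hugepage_region (size_t size)
{
  void *p = mmap (0, size, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB | MAP_HUGE_1GB,
                  -1, 0);
  return p == MAP_FAILED ? 0 : p;
}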
In the 10K-flow testing, I configured 10K routing entries in the ip4-mtrie and
10K entries in the adjacency table.
With the above parameters set, creating the ip4-mtrie table on a 1G huge page,
and similarly creating the adjacency table on a 1G huge page, I don't observe
an obvious throughput improvement, but TLB misses are dramatically reduced.
Do you think a configuration of 10K routing entries + 10K adjacency entries is
a reasonable and realistic config, or would it normally be 10K routing entries
+ only a few adjacency entries?
Does it make sense to create the adjacency table on huge pages?
Another problem is that although the assigned heap-size above is 256M, on a 1G
huge-page system it seems to occupy a huge page completely, and the remaining
space within that huge page does not appear to be usable by other tables.
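To spell out the arithmetic (illustrative only, not VPP code): mappings come
in whole pages, so a 256M heap backed by 1G pages still pins a full 1G page.

#include <stdio.h>

int
main (void)
{
  unsigned long long heap = 256ULL << 20;               /* requested heap-size 256M */
  unsigned long long page = 1ULL << 30;                 /* 1G hugepage */
  unsigned long long pages = (heap + page - 1) / page;  /* rounds up to 1 page */

  printf ("pages pinned: %llu, bytes unused in the page: %llu\n",
          pages, pages * page - heap);                  /* 1 page, 768M unused */
  return 0;
}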

The same applies to the bihash-based tables: only 2M huge-page systems are
supported. To support creating bihash-based tables on a 1G huge-page system,
each table would have to occupy a 1G huge page completely, which wastes a lot
of memory.
Would it be possible, just like the pmalloc module, to reserve a big memory
region on 1G/2M huge pages during initialization and then allocate pieces from
it on demand for the bihash, ip4-mtrie and adjacency tables, so that all
tables could be created on huge pages and fully utilize them?
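What I have in mind is roughly the sketch below: reserve one large
hugepage-backed region at init and hand out aligned pieces of it to each
table. This is only an illustration of the idea, not the actual pmalloc API:

#include <stddef.h>
#include <stdint.h>

/* Illustrative sketch: one hugepage-backed arena reserved at init and carved
 * into aligned chunks for the bihash, ip4-mtrie and adjacency tables, so
 * several tables share the same 1G pages instead of each pinning its own. */
typedef struct
{
  uint8_t *base;   /* start of the hugepage-backed reservation */
  size_t size;     /* total reserved bytes */
  size_t used;     /* bump pointer */
} table_arena_t;

static void *
arena_alloc_aligned (table_arena_t *a, size_t bytes, size_t align)
{
  size_t off = (a->used + align - 1) & ~(align - 1);   /* align: power of two */
  if (off + bytes > a->size)
    return NULL;                                       /* arena exhausted */
  a->used = off + bytes;
  return a->base + off;
}

With something like this, a single reservation could hold, for example, the
1.28G of bihash tables below plus the mtrie and adjacency heaps, instead of
each table rounding up to its own 1G page.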
I tried creating the MAC table on a 1G huge page, and it does improve
throughput performance.
vpp# show bihash
Name                             Actual Configured
GBP Endpoints - MAC/BD               1m 1m
b4s                                 64m 64m
b4s                                 64m 64m
in2out                           10.12m 10.12m
in2out                           10.12m 10.12m
ip4-dr                               2m 2m
ip4-dr                               2m 2m
ip6 FIB fwding table                32m 32m
ip6 FIB non-fwding table            32m 32m
ip6 mFIB table                      32m 32m
l2fib mac table                    512m 512m
mapping_by_as4                      64m 64m
out2in                             128m 128m
out2in                             128m 128m
out2in                           10.12m 10.12m
out2in                           10.12m 10.12m
pppoe link table                     8m 8m
pppoe session table                  8m 8m
static_mapping_by_external          64m 64m
static_mapping_by_local             64m 64m
stn addresses                        1m 1m
users                              648k 648k
users                              648k 648k
vip_index_per_port                  64m 64m
vxlan4                               1m 1m
vxlan4-gbp                           1m 1m
Total                             1.28g 1.28g

Thanks.


