On 09/21/2016 19:29, Matthew Hall wrote:
> On Tue, Sep 20, 2016 at 10:11:04PM +0200, Thomas Monjalon wrote:
>> Please, will you help reviewing this patch?
> Sure.
Sorry for the late reply; I was a bit busy.
>
> 1. It adds a dependency on libbsd on Linux: bsd/sys/tree.h. Is this an 
> expected dependency of DPDK already? I use it in my code but not sure it's 
> expected for everybody else.
In the v1 I was just importing bsd/tree.h into the DPDK sources, but I
was told it was better to depend on libbsd on Linux. It seemed like a
good choice, and the idea was taken from a previous patch submission by
Stephen Hemminger.
>
> 2. Some of the tabbing of the new code seems not right. Probably need to 
> recheck the indentation settings.
Ok, I will fix that.
>
> 3. The comment formatting of the new code is a bit odd. Continued lines in /* 
> ... */ in DPDK should have * on them I think.
Ok, I will fix that also.
>
> 4. It also contains some vendor specific comments like "/* Vyatta change to 
> use red-black tree */". That's probably not needed if it's going into normal 
> DPDK.
It was because I copy/pasted some code from another patch, I guess we
can drop that also.
>
> 5. It uses "malloc" instead of standard DPDK allocators. That's bad for me 
> because I don't want to use libc malloc in my code. Only DPDK allocators and 
> jemalloc.
As Stephen explained, the tree of rules is here for storing the
inserted rules, in addition to the modified DIR-24-8 itself.

So you should not see any impact on the rte_lpm6_lookup() function from
the malloc() calls, since I assume you are not doing any
rte_lpm6_add()/rte_lpm6_delete() in your fast path.
>
> 6. I don't see any updates to the unit test stuff. Don't we need new tests 
> for 
> the changes to deletion and re-insertion of rules and the changes in the tbl8 
> allocation.
I have slightly modified test11 in test_lpm6.c
(http://dpdk.org/dev/patchwork/patch/15294/). I have added some calls to
rte_lpm6_tbl8_count() to verify that the number of tbl8s is consistent
with the previous implementation, which was recreating the full DIR-24-8
on each rte_lpm6_delete() call.

During my development I was also often comparing the memory dumps of
the tbl8 and tbl24 tables. If needed, I can resubmit the patches with a
test that does a memcmp() between an LPM6 table created with only
rte_lpm6_add() calls and an LPM6 table that contains the same rules but
went through some rte_lpm6_delete() calls in addition to the adds.

Do you feel that those two tests are enough, or do you have more tests
to suggest?
>
> 7. Some of us previous submitted code to expand LPM4 and LPM6 to 24 bit next 
> hop but the current code is only doing 16 bit. This is not pleasant for me 
> because I really need the 24-bit support. When can we make this happen?
I chose to increase it to 16 bits only because I wanted to keep the
rte_lpm6_tbl_entry structure at 4 bytes. The next-hop field is actually
21 bits wide in the struct, but the API returns only 16 bits. I could
resubmit a patch that exposes the whole 21 bits, but the 16-bit version
seemed better to me, and it was enough for my needs.

About increasing the size of rte_lpm6_tbl_entry, I'm not sure it would
come without a performance impact, and I don't really have the time
right now to test it carefully enough, so any input on it is welcome.
>
> Matthew.
The part of the patch on which I would ideally have liked a thorough
review is the delete_step() and delete_expand_step() functions.

I have the feeling that the code here could be improved in several ways
with regard to thread safety. You cannot expect atomicity as in LPM4,
since a delete may have to modify several tbl8s, but if you do a lookup
in your fast path while a rte_lpm6_delete() call is running in another
thread, your lookup should fail gracefully. In my tests that is the
case, but I may have missed some corner cases.
