Re: [lng-odp] New API to convert user area ptr to odp_packet_t

2017-12-08 Thread Honnappa Nagarahalli
On 8 December 2017 at 13:40, Bill Fischofer  wrote:
>
>
> On Fri, Dec 8, 2017 at 1:06 PM, Honnappa Nagarahalli
>  wrote:
>>
>> On 7 December 2017 at 22:36, Bill Fischofer 
>> wrote:
>> >
>> >
>> > On Thu, Dec 7, 2017 at 10:12 PM, Honnappa Nagarahalli
>> >  wrote:
>> >>
>> >> On 7 December 2017 at 17:36, Bill Fischofer 
>> >> wrote:
>> >> >
>> >> >
>> >> > On Thu, Dec 7, 2017 at 3:17 PM, Honnappa Nagarahalli
>> >> >  wrote:
>> >> >>
>> >> >> This experiment clearly shows the need for providing an API in ODP.
>> >> >>
>> >> >> On ODP2.0 implementations such an API will be simple enough
>> >> >> (constant
>> >> >> subtraction), requiring no additional storage in VLIB.
>> >> >>
>> >> >> Michal, can you send a PR to ODP for the API so that we can debate
>> >> >> the
>> >> >> feasibility of the API for Cavium/NXP platforms.
>> >> >
>> >> >
>> >> > That's the point. An API that is tailored to a specific
>> >> > implementation
>> >> > or
>> >> > application is not what ODP is about.
>> >> >
>> >> How are the requirements coming to ODP APIs currently? My
>> >> understanding is, it is coming from OFP and Petri's requirements.
>> >> Similarly, VPP is also an application of ODP. Recently, Arm community
>> >> (Arm and partners) prioritized on the open source projects that are of
>> >> importance and came up with top 50 (or 100) projects. If I remember
>> >> correct VPP is among top single digits (I am trying to get the exact
>> >> details). So, it is an application of significant interest.
>> >
>> >
>> > VPP is important, but what's important is for VPP to perform
>> > significantly
>> > better on at least one ODP implementation than it does today using DPDK.
>> > If
>> > we can't demonstrate that then there's no point to the ODP4VPP project.
>> > That's not going to happen on x86 since we can assume that VPP/DPDK is
>> > optimal here since VPP has been tuned to DPDK internals. So we need to
>> > focus
>> > the performance work on Arm SoC platforms that offer significant HW
>> > acceleration capabilities that VPP can exploit via ODP4VPP.
>>
>> VPP can exploit these capabilities through DPDK as well (may be few
>> APIs are missing, but they will be available soon) as Cavium/NXP
>> platforms support DPDK.
>
>
> If the goal is to be "just as good" as DPDK then we fail because a VPP
> application doesn't see or care whether DPDK or ODP is running underneath
> it. The requirement is for VPP applications to run significantly (2x, 4x,
> etc.) better using ODP4VPP than DPDK. That won't come by fine-tuning what's
> fundamentally the same code, but rather by eliminating entire processing
> steps by, for example, exploiting inline IPsec acceleration.
>
The point I am trying to make is that VPP can exploit inline IPsec
acceleration through DPDK as well (if those APIs are not available in
DPDK today, they will be available soon). So, which use cases will we
target at that point? We need to look at all the use cases and be
better at all of them.

>>
>>
>> This API is the basic API that is required for any use case. I do not
>> understand why this API is not required for IPsec acceleration in NXP.
>> If we store odp_packet_t in VLIB buffer, it will affect the
>> performance of IPsec performance on NXP platform as well.
>
>
> That sounds like an assumption rather than a measurement. We should let
> Nikhil weigh in here about the key drivers to achieving best performance on
> NXP platforms.
>
Well, it is not exactly an assumption; it is based on experience with
similar optimizations. We have all done enough cache-line-related
optimizations in Linux-Generic by now. We can do it again; Sachin has
the code as well.

>>
>>
>> This isn't one
>> > of those. The claim is that with or without this change ODP4VPP on x86
>> > performs worse than VPP/DPDK on x86.
>>
>> That does not mean, we do not work on increasing the performance of
>> ODP4VPP on x86. This API will help catch up on the performance.
>
>
> My point is that the best you can hope for on x86 is to be no different than
> DPDK. That's a fail, so such tuning isn't germane to ODP4VPP's success. We
> need to be focusing on exp

Re: [lng-odp] New API to convert user area ptr to odp_packet_t

2017-12-08 Thread Honnappa Nagarahalli
On 7 December 2017 at 22:36, Bill Fischofer  wrote:
>
>
> On Thu, Dec 7, 2017 at 10:12 PM, Honnappa Nagarahalli
>  wrote:
>>
>> On 7 December 2017 at 17:36, Bill Fischofer 
>> wrote:
>> >
>> >
>> > On Thu, Dec 7, 2017 at 3:17 PM, Honnappa Nagarahalli
>> >  wrote:
>> >>
>> >> This experiment clearly shows the need for providing an API in ODP.
>> >>
>> >> On ODP2.0 implementations such an API will be simple enough (constant
>> >> subtraction), requiring no additional storage in VLIB.
>> >>
>> >> Michal, can you send a PR to ODP for the API so that we can debate the
>> >> feasibility of the API for Cavium/NXP platforms.
>> >
>> >
>> > That's the point. An API that is tailored to a specific implementation
>> > or
>> > application is not what ODP is about.
>> >
>> How are the requirements coming to ODP APIs currently? My
>> understanding is, it is coming from OFP and Petri's requirements.
>> Similarly, VPP is also an application of ODP. Recently, Arm community
>> (Arm and partners) prioritized on the open source projects that are of
>> importance and came up with top 50 (or 100) projects. If I remember
>> correct VPP is among top single digits (I am trying to get the exact
>> details). So, it is an application of significant interest.
>
>
> VPP is important, but what's important is for VPP to perform significantly
> better on at least one ODP implementation than it does today using DPDK. If
> we can't demonstrate that then there's no point to the ODP4VPP project.
> That's not going to happen on x86 since we can assume that VPP/DPDK is
> optimal here since VPP has been tuned to DPDK internals. So we need to focus
> the performance work on Arm SoC platforms that offer significant HW
> acceleration capabilities that VPP can exploit via ODP4VPP.

VPP can exploit these capabilities through DPDK as well (maybe a few
APIs are missing, but they will be available soon), as Cavium/NXP
platforms support DPDK.

This API is the basic API that is required for any use case. I do not
understand why this API is not required for IPsec acceleration on NXP.
If we store odp_packet_t in the VLIB buffer, it will affect IPsec
performance on the NXP platform as well.

This isn't one
> of those. The claim is that with or without this change ODP4VPP on x86
> performs worse than VPP/DPDK on x86.

That does not mean we should not work on increasing the performance of
ODP4VPP on x86. This API will help close the performance gap.

>
> Since VPP applications don't change if ODP4VPP is in the picture or not, it
> doesn't matter whether it's used on x86, so tuning ODP4VPP on x86 is at best
> of secondary importance. We just need at least one Arm platform on which VPP
> applications run dramatically better than without it.

This is not tuning for the x86 platform only; it is tuning that would
apply to any platform.

>
>>
>>
>> >>
>> >>
>> >> On 7 December 2017 at 14:08, Bill Fischofer 
>> >> wrote:
>> >> > On Thu, Dec 7, 2017 at 12:22 PM, Michal Mazur
>> >> > 
>> >> > wrote:
>> >> >
>> >> >> Native VPP+DPDK plugin knows the size of rte_mbuf header and
>> >> >> subtracts
>> >> >> it
>> >> >> from the vlib pointer.
>> >> >>
>> >> >> struct rte_mbuf *mb0 = rte_mbuf_from_vlib_buffer (b0);
>> >> >> #define rte_mbuf_from_vlib_buffer(x) (((struct rte_mbuf *)x) - 1)
>> >> >>
>> >> >
>> >> > No surprise that VPP is a DPDK application, but I thought they wanted
>> >> > to
>> >> > be
>> >> > independent of DPDK. The problem is that ODP is never going to match
>> >> > DPDK
>> >> > at an ABI level on x86 so we can't be fixated on x86 performance
>> >> > comparisons between ODP4VPP and VPP/DPDK.
>> >> Any reason why we will not be able to match or exceed the performance?
>> >
>> >
>> > It's not that ODP can't have good performance on x86, it's that DPDK
>> > encourages apps to be very dependent on DPDK implementation details such
>> > as
>> > seen here. ODP is not going to match DPDK internals so applications that
>> > exploit such internals will always see a difference.
>> >
>> >>
>> >>
>> >> What we need to do is compare
>> >> > ODP4VPP on Arm-based SoCs vs. "nati

Re: [lng-odp] New API to convert user area ptr to odp_packet_t

2017-12-07 Thread Honnappa Nagarahalli
On 7 December 2017 at 17:36, Bill Fischofer  wrote:
>
>
> On Thu, Dec 7, 2017 at 3:17 PM, Honnappa Nagarahalli
>  wrote:
>>
>> This experiment clearly shows the need for providing an API in ODP.
>>
>> On ODP2.0 implementations such an API will be simple enough (constant
>> subtraction), requiring no additional storage in VLIB.
>>
>> Michal, can you send a PR to ODP for the API so that we can debate the
>> feasibility of the API for Cavium/NXP platforms.
>
>
> That's the point. An API that is tailored to a specific implementation or
> application is not what ODP is about.
>
How are requirements coming into ODP APIs currently? My understanding
is that they come from OFP and Petri's requirements. Similarly, VPP is
also an application of ODP. Recently, the Arm community (Arm and
partners) prioritized the open source projects that are of importance
and came up with a top 50 (or 100) list. If I remember correctly, VPP
is among the top single digits (I am trying to get the exact details).
So, it is an application of significant interest.

>>
>>
>> On 7 December 2017 at 14:08, Bill Fischofer 
>> wrote:
>> > On Thu, Dec 7, 2017 at 12:22 PM, Michal Mazur 
>> > wrote:
>> >
>> >> Native VPP+DPDK plugin knows the size of rte_mbuf header and subtracts
>> >> it
>> >> from the vlib pointer.
>> >>
>> >> struct rte_mbuf *mb0 = rte_mbuf_from_vlib_buffer (b0);
>> >> #define rte_mbuf_from_vlib_buffer(x) (((struct rte_mbuf *)x) - 1)
>> >>
>> >
>> > No surprise that VPP is a DPDK application, but I thought they wanted to
>> > be
>> > independent of DPDK. The problem is that ODP is never going to match
>> > DPDK
>> > at an ABI level on x86 so we can't be fixated on x86 performance
>> > comparisons between ODP4VPP and VPP/DPDK.
>> Any reason why we will not be able to match or exceed the performance?
>
>
> It's not that ODP can't have good performance on x86, it's that DPDK
> encourages apps to be very dependent on DPDK implementation details such as
> seen here. ODP is not going to match DPDK internals so applications that
> exploit such internals will always see a difference.
>
>>
>>
>> What we need to do is compare
>> > ODP4VPP on Arm-based SoCs vs. "native VPP" that can't take advantage of
>> > the
>> > HW acceleration present on those platforms. That's how we get to show
>> > dramatic differences. If ODP4VPP is only within a few percent (plus or
>> > minus) of VPP/DPDK there's no point of doing the project at all.
>> >
>> > So my advice would be to stash the handle in the VLIB buffer for now and
>> > focus on exploiting the native IPsec acceleration capabilities that ODP
>> > will permit.
>> >
>> >
>> >> On 7 December 2017 at 19:02, Bill Fischofer 
>> >> wrote:
>> >>
>> >>> Ping to others on the mailing list for opinions on this. What does
>> >>> "native" VPP+DPDK get and how is this problem solved there?
>> >>>
>> >>> On Thu, Dec 7, 2017 at 11:55 AM, Michal Mazur
>> >>> 
>> >>> wrote:
>> >>>
>> >>>> The _odp_packet_inline is common for all packets and takes up to two
>> >>>> cachelines (it contains only offsets). Reading pointer for each
>> >>>> packet from
>> >>>> VLIB would require to fetch 10 million cachelines per second.
>> >>>> Using prefetches does not help.
>> >>>>
>> >>>> On 7 December 2017 at 18:37, Bill Fischofer
>> >>>> 
>> >>>> wrote:
>> >>>>
>> >>>>> Yes, but _odp_packet_inline.udata is clearly not in the VLIB cache
>> >>>>> line
>> >>>>> either, so it's a separate cache line access. Are you seeing this
>> >>>>> difference in real runs or microbenchmarks? Why isn't the entire
>> >>>>> VLIB being
>> >>>>> prefetched at dispatch? Sequential prefetching should add negligible
>> >>>>> overhead.
>> >>>>>
>> >>>>> On Thu, Dec 7, 2017 at 11:13 AM, Michal Mazur
>> >>>>> 
>> >>>>> wrote:
>> >>>>>
>> >>>>>> It seems that only first cache line of VLIB buffer is in L1, new
>> >>>>>> pointer can be placed only in second cacheline.

Re: [lng-odp] New API to convert user area ptr to odp_packet_t

2017-12-07 Thread Honnappa Nagarahalli
This experiment clearly shows the need for providing an API in ODP.

On ODP 2.0 implementations, such an API will be simple enough (a
constant subtraction), requiring no additional storage in VLIB.

Michal, can you send a PR to ODP for the API so that we can debate the
feasibility of the API for Cavium/NXP platforms.

On 7 December 2017 at 14:08, Bill Fischofer  wrote:
> On Thu, Dec 7, 2017 at 12:22 PM, Michal Mazur 
> wrote:
>
>> Native VPP+DPDK plugin knows the size of rte_mbuf header and subtracts it
>> from the vlib pointer.
>>
>> struct rte_mbuf *mb0 = rte_mbuf_from_vlib_buffer (b0);
>> #define rte_mbuf_from_vlib_buffer(x) (((struct rte_mbuf *)x) - 1)
>>
>
> No surprise that VPP is a DPDK application, but I thought they wanted to be
> independent of DPDK. The problem is that ODP is never going to match DPDK
> at an ABI level on x86 so we can't be fixated on x86 performance
> comparisons between ODP4VPP and VPP/DPDK.
Any reason why we will not be able to match or exceed the performance?

What we need to do is compare
> ODP4VPP on Arm-based SoCs vs. "native VPP" that can't take advantage of the
> HW acceleration present on those platforms. That's how we get to show
> dramatic differences. If ODP4VPP is only within a few percent (plus or
> minus) of VPP/DPDK there's no point of doing the project at all.
>
> So my advice would be to stash the handle in the VLIB buffer for now and
> focus on exploiting the native IPsec acceleration capabilities that ODP
> will permit.
>
>
>> On 7 December 2017 at 19:02, Bill Fischofer 
>> wrote:
>>
>>> Ping to others on the mailing list for opinions on this. What does
>>> "native" VPP+DPDK get and how is this problem solved there?
>>>
>>> On Thu, Dec 7, 2017 at 11:55 AM, Michal Mazur 
>>> wrote:
>>>
 The _odp_packet_inline is common for all packets and takes up to two
 cachelines (it contains only offsets). Reading the pointer for each packet
 from VLIB would require fetching 10 million cachelines per second.
 Using prefetches does not help.

 On 7 December 2017 at 18:37, Bill Fischofer 
 wrote:

> Yes, but _odp_packet_inline.udata is clearly not in the VLIB cache line
> either, so it's a separate cache line access. Are you seeing this
> difference in real runs or microbenchmarks? Why isn't the entire VLIB 
> being
> prefetched at dispatch? Sequential prefetching should add negligible
> overhead.
>
> On Thu, Dec 7, 2017 at 11:13 AM, Michal Mazur 
> wrote:
>
>> It seems that only first cache line of VLIB buffer is in L1, new
>> pointer can be placed only in second cacheline.
>> Using constant offset between user area and ODP header i get 11 Mpps,
>> with pointer stored in VLIB buffer only 10Mpps and with this new api
>> 10.6Mpps.
>>
>> On 7 December 2017 at 18:04, Bill Fischofer  wrote:
>>
>>> How would calling an API be better than referencing the stored data
>>> yourself? A cache line reference is a cache line reference, and 
>>> presumably
>>> the VLIB buffer is already in L1 since it's your active data.
>>>
>>> On Thu, Dec 7, 2017 at 10:45 AM, Michal Mazur <
>>> michal.ma...@linaro.org> wrote:
>>>
 Hi,

 For the odp4vpp plugin we need a new API function which, given a user
 area pointer, will return a pointer to the ODP packet buffer. It is
 needed when packets processed by VPP are sent back to ODP and only a
 pointer to VLIB buffer data (stored inside the user area) is known.

 I have tried to store the ODP buffer pointer in VLIB data but
 reading it
 for every packet lowers performance by 800kpps.

 For odp-dpdk implementation it can look like:
 /** @internal Inline function @param uarea @return */
 static inline odp_packet_t _odp_packet_from_user_area(void *uarea)
 {
return (odp_packet_t)((uintptr_t)uarea -
 _odp_packet_inline.udata);
 }

 Please let me know what you think.

 Thanks,
 Michal

>>>
>>>
>>
>

>>>
>>


Re: [lng-odp] odp dpdk

2017-12-07 Thread Honnappa Nagarahalli
On 7 December 2017 at 08:01, Bogdan Pricope  wrote:
> TX is at line rate. Probably will get RX at line rate in direct mode, too.
> Problem is how can you see the performance increase/degradation if you
> can process more than line rate with one core?

Any possibility to add one more port?

>
> I guess .. enable csum option... ?
>
> On 7 December 2017 at 15:46, Maxim Uvarov  wrote:
>> nice. TX is on line rate,  right?  Next step probably to add RX path without
>> scheduler. And we will have good testing environment.
>>
>>
>> On 7 December 2017 at 16:12, Bogdan Pricope 
>> wrote:
>>>
>>> More results with odp_generator in lava setup:
>>>
>>>  7.6 mpps  (TX) /  5.9 mpps (RX) - api-next with PR313 (Petri):
>>>  8.3 mpps  (TX) /  6.3 mpps (RX) - api-next with PR313 (Petri) +
>>> remove 1m sleep + replace atomic counters
>>> 14.8 mpps (TX) /  6.5 mpps (RX) - api-next with PR313 (Petri) + remove
>>> 1m sleep + replace atomic counters + remove csum
>>> calculation/validation
>>> 14.8 mpps (TX) /  6.8 mpps (RX) - master with PR327 (remove 1m sleep +
>>> replace atomic counters + remove csum calculation/validation)
>>>
>>> /Bogdan
>>>
>>>
>>> On 6 December 2017 at 13:49, Maxim Uvarov  wrote:
>>> > small update. Double checked that increasing num of desc does not give
>>> > any
>>> > effect in odp_generator.
>>> >
>>> > Disable check sums in odp_generator increases TX from 7M to 13M pps and
>>> > RX
>>> > from 5.9M to 6.1M pps.
>>> > Because of generator uses predefined packets with calculated checksum -
>>> > there is no need to enable checksum inside generator.
>>> >
>>> > It looks like problem inside DPDK driver itself.
>>> >
>>> > For this PR I think we need to merge it together with changes to
>>> > odp_generator (the same as for l2fwd) to enable hw check sum,
>>> > which has to be disabled by default.
>>> >
>>> > Maxim.
>>> >
>>> >
>>> > On 6 December 2017 at 10:46, Maxim Uvarov 
>>> > wrote:
>>> >>
>>> >> skip this message. I will recheck. Pushed to lava wrong branch.
>>> >>
>>> >> On 6 December 2017 at 10:42, Maxim Uvarov 
>>> >> wrote:
>>> >>>
>>> >>> Ilias was right yesterday. If number of descriptors increased to 1024
>>> >>> then TX became again 10M.
>>> >>>
>>> >>> +   ret = rte_eth_tx_queue_setup(port_id, i,
>>> >>> +                dev_info.tx_desc_lim.nb_max > 1024 ?
>>> >>> +                1024 : dev_info.tx_desc_lim.nb_max,
>>> >>>                  rte_eth_dev_socket_id(port_id),
>>> >>>                  txconf);
>>> >>>
>>> >>> +   ret = rte_eth_rx_queue_setup(port_id, i,
>>> >>> +                dev_info.rx_desc_lim.nb_max > 1024 ?
>>> >>> +                1024 : dev_info.rx_desc_lim.nb_max,
>>> >>>                  rte_eth_dev_socket_id(port_id),
>>> >>>                  NULL, pkt_dpdk->pkt_pool);
>>> >>>
>>> >>>
>>> >>>
>>> >>>
>>> >>> Maxim.
>>> >>>
>>> >>> On 5 December 2017 at 11:20, Elo, Matias (Nokia - FI/Espoo)
>>> >>>  wrote:
>>> >>>>
>>> >>>> When I tested enabling HW checksum with Fortville NICs (i40e) the
>>> >>>> slower
>>> >>>> driver path alone caused ~20% throughput drop on l2fwd test. This was
>>> >>>> without actually calculating the checksums, I simply forced the
>>> >>>> slower
>>> >>>> driver path (no vectorization).
>>> >>>>
>>> >>>> -Matias
>>> >>>>
>>> >>>>
>>> >>>> > On 5 Dec 2017, at 8:59, Bogdan Pricope 
>>> >>>> > wrote:
>>> >>>> >
>>> >>>> > On RX side is kind-of expected result since it uses scheduler mode.
>>> >>>> >
>>> >>>> > On TX side there is this drop from 10 mpps to 7.69 mpps that is
>>> >>>> > unexpected.

Re: [lng-odp] Planning for 2.0 work items

2017-12-05 Thread Honnappa Nagarahalli
On 5 December 2017 at 09:03, Yi He  wrote:
> Hi, Honnappa
>
> We've talked about these goals in today's 2.0 call, here has some
> discussions and comments:
>
> 1, for net-mdev, Ilias suggested that upstreaming to Linux kernel will take
> very long time, we'd better define the net-mdev goal around demo
> definitions.
I do not have any issue with this. Upstreaming to the Linux kernel is
not a dependency for ODP right now, but it has to happen eventually.
Francois, how would you like to handle this? Is this our
responsibility? Should we make plans for this within ODP's scope?
Would it be taken up by LEG?

> 2, for performance benchmarking and directory structure goals, we need some
> clarifications and narrow the scope to some parts of ODP.
I will add it to Thursday's call.

>
> Please add this into Thursday's call to continue the discussion.
>
> Thanks and best regards, Yi
>
> On 5 December 2017 at 13:19, Honnappa Nagarahalli
>  wrote:
>>
>> Hi,
>>As discussed in the ODP 2.0 call (11/30/2017), we identified the
>> following areas of work. The idea is to have few demos with net-mdev
>> and user-space drivers for HKG18. The goal is to have all this
>> upstreamed by HKG18. I have mentioned a partial list of owners, please
>> feel free to add yourself to the list. Can the respective owners get
>> with Darcy, identify the scope and work items.
>>
>> 1) Upstreaming net-mdev (Ilias, Mykyta, Francois)
>>
>> a) Upstreaming to ODP 2.0
>>
>> b) Upstreaming to Linux Kernel
>>
>> 2) Performance benchmarking (need owners)
>>
>> a) Need to have a framework which allows for instrumenting
>> the code and logging the time stamp. This will allow for identifying
>> the number of cycles spent in various parts of the code. Need to
>> identify tools that can be used to plot the collected data.
>>
>>b) Need to have a benchmarking application which runs
>> packets through the typical path with dummy packet I/O - could be run
>> on any machine - loopback pkt I/O could be used.
>>
>> 3) Directory structure
>>
>>   a) Changes required to provide clarity of design.
>>
>> 4) Upstream user space drivers (Jianbo, Yi, Josep)
>>
>> Please let me know if you have any questions.
>>
>> Thank you,
>> Honnappa
>
>


[lng-odp] Planning for 2.0 work items

2017-12-04 Thread Honnappa Nagarahalli
Hi,
   As discussed in the ODP 2.0 call (11/30/2017), we identified the
following areas of work. The idea is to have a few demos with net-mdev
and user-space drivers for HKG18. The goal is to have all of this
upstreamed by HKG18. I have mentioned a partial list of owners; please
feel free to add yourself to the list. Can the respective owners get
with Darcy and identify the scope and work items?

1) Upstreaming net-mdev (Ilias, Mykyta, Francois)

a) Upstreaming to ODP 2.0

b) Upstreaming to Linux Kernel

2) Performance benchmarking (need owners)

a) Need to have a framework which allows for instrumenting
the code and logging the time stamp. This will allow for identifying
the number of cycles spent in various parts of the code. Need to
identify tools that can be used to plot the collected data.

   b) Need to have a benchmarking application which runs
packets through the typical path with dummy packet I/O - could be run
on any machine - loopback pkt I/O could be used.

3) Directory structure

  a) Changes required to provide clarity of design.

4) Upstream user space drivers (Jianbo, Yi, Josep)

Please let me know if you have any questions.

Thank you,
Honnappa


Re: [lng-odp] odp dpdk

2017-12-04 Thread Honnappa Nagarahalli
Can you run with Linux-DPDK in ODP 2.0?

On 4 December 2017 at 09:54, Maxim Uvarov  wrote:
> after clean patches apply and fix in run scripts I made it run.
>
> But results is really bad. --enable-dpdk-zero-copy
>
> TX rate is:
> 7673155 pps
>
> RX rate is:
> 5989846 pps
>
>
> Before patch PR 313 TX was 10M pps.
>
> I re run task and TX is 3.3M pps. All tests are single core. So
> something strange happens in lava or this PR.
>
> Maxim.
>
>
> On 12/04/17 17:03, Bogdan Pricope wrote:
>> On TX (https://lng.validation.linaro.org/scheduler/job/23252.0) I see:
>>
>> ODP_REPO='https://github.com/muvarov/odp'
>> ODP_BRANCH='api-next'
>>
>>
>> On RX (https://lng.validation.linaro.org/scheduler/job/23252.1) I see:
>>
>> ODP_REPO='https://github.com/muvarov/odp'
>> ODP_BRANCH='devel/api-next_shsum'
>>
>>
>> or are you referring to other test?
>>
>>
>> On 4 December 2017 at 15:53, Maxim Uvarov  wrote:
>>>
>>>
>>> On 4 December 2017 at 15:11, Bogdan Pricope 
>>> wrote:

 You need to put 313 on TX side (not RX).
>>>
>>>
>>>
>>> both rx and tx have patches from 313. l2fwd works on recv side. Generator
>>> does not work.
>>>
>>> Maxim.
>>>
>>>


 On 4 December 2017 at 13:19, Savolainen, Petri (Nokia - FI/Espoo)
  wrote:
> Is the DPDK version 17.08 ? Other versions might not work properly.
>
>
>
> -Petri
>
>
>
> From: Maxim Uvarov [mailto:maxim.uva...@linaro.org]
> Sent: Monday, December 04, 2017 1:10 PM
> To: Savolainen, Petri (Nokia - FI/Espoo) 
> Cc: Bogdan Pricope ; lng-odp-forward
> 
>
>
> Subject: Re: [lng-odp] odp dpdk
>
>
>
> 313 does not work also:
>
> https://lng.validation.linaro.org/scheduler/job/23242.1
>
> I will replace RX side to l2fwd and see that will be there.
>
> Maxim.
>
>
>
>
>
> On 4 December 2017 at 13:46, Savolainen, Petri (Nokia - FI/Espoo)
>  wrote:
>
> Maxim, try https://github.com/Linaro/odp/pull/313 It has been tested to
> fix
> checksum insert for 10/40GE Intel NICs.
>
> -Petri
>
>
>> -Original Message-
>> From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of
>> Bogdan Pricope
>> Sent: Monday, December 04, 2017 12:21 PM
>> To: Maxim Uvarov 
>> Cc: lng-odp-forward 
>> Subject: Re: [lng-odp] odp dpdk
>>
>> I suspect this is actually caused by csum issue in TX side: on RX,
>> socket pktio does not validate csum (and accept the packets) but on
>> dpdk pktio the csum is validated and packets are dropped.
>>
>> I am not seeing this in my setup because default txq_flags for igb
>> driver (1G interface) is
>>
>> .txq_flags = 0
>>
>> while for ixgbe (10G interface) is:
>>
>> .txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS |
>> ETH_TXQ_FLAGS_NOOFFLOADS,
>>
>>
>> /B
>>
>>
>>
>>
>> On 1 December 2017 at 23:47, Maxim Uvarov 
>> wrote:
>>>
>>> Looking to dpdk pktio support and generator. It looks like receive
>>> part
>>> is broken. If for receive I use sockets it works well but receive
>>> with
>>> dpdk does not get any packets. For both master and api-next. Can
>>> somebody confirm please that it's so. Lava is not supper friendly to
>>> debug issue.
>>>
>>>
>>> 1. Recv
>>> odp_generator -I 0 -m r -c 0x4
>>>
>>> https://lng.validation.linaro.org/scheduler/job/23206.1
>>> Network devices using DPDK-compatible driver
>>> 
>>> :07:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
>>> drv=igb_uio unused=
>>>
>>>
>>>
>>> 2. Send
>>> odp_generator -I 0 --srcmac 38:ea:a7:93:98:94 --dstmac
>>> 38:ea:a7:93:83:a0
>>> --srcip 192.168.100.2 --dstip 192.168.100.1 -m u -i 0 -c 0x8 -p 18 -e
>>> 5000 -f 5001 -n 8
>>>
>>> https://lng.validation.linaro.org/scheduler/job/23206.0
>>>
>>> Thank you,
>>> Maxim.
>
>
>>>
>>>
>


Re: [lng-odp] Preparing for ODP 2.0

2017-11-28 Thread Honnappa Nagarahalli
My understanding is that the Linux-Generic implementation is a
reference implementation meant to showcase clarity. I do not think
optimizations align well with that objective.

On 28 November 2017 at 04:43, Maxim Uvarov  wrote:
> If patch is suitable for master or api-next (like cache line optimizations)
> it has to go to master/api-next first. As well as all bug fixes for master
> branch should go directly to master.
> Only then can a patch be taken to a development branch.
>
> Maxim.
>
>
> On 28 November 2017 at 12:43, Dmitry Eremin-Solenikov <
> dmitry.ereminsoleni...@linaro.org> wrote:
>
>> On 28/11/17 00:57, Bill Fischofer wrote:
>> > As a way of easing the sync burden on the 2.0 development branch, what do
>> > folks think of the idea of asking that new PRs being posted to api-next
>> > also be posted to 2.0? The contributions to api-next should be winding
>> down
>> > as we approach Tiger Moth freeze, so this will help keep things in sync
>> as
>> > we transition back into a single development target post-Tiger Moth.
>> >
>> > Please share your views on this.
>>
>> No, current '2.0' should be refactored as a set of PRs against
>> master/api-next. When its development was started, it was promised that
>> 2.0 will be reviewed before merging to master/api-next.
>>
>>
>> --
>> With best wishes
>> Dmitry
>>


Re: [lng-odp] ODP always falls back to normal pages instead of using hugepages

2017-11-13 Thread Honnappa Nagarahalli
I have seen this issue, but I have not characterized its performance
impact. Most of the memory required is for packets (it depends on the
application as well), and that memory comes from DPDK.

Feel free to submit a PR to fix the issue.

Thanks,
Honnappa

On 13 November 2017 at 03:37, gyanesh patra  wrote:
> Hi,
> I have noticed that ODP application always check for hugepages and fails
> though hugepage is configured in the system. And it runs normally using
> normal pages. I am not sure it affects the performance or not. But if it
> does, how can i make sure that ODP application uses the hugepages???
>
> I am adding the output of hugepage details and odp application below for
> reference:
>
>
> root@ubuntu:/home/ubuntu/P4/mac# cat /proc/sys/vm/nr_hugepages
> 183
> root@ubuntu:/home/ubuntu/P4/mac#
> root@ubuntu:/home/ubuntu/P4/mac# grep -i uge /proc/meminfo
> AnonHugePages: 28672 kB
> HugePages_Total: 183
> HugePages_Free: 0
> HugePages_Rsvd: 0
> HugePages_Surp: 0
> Hugepagesize:1048576 kB
>
>
> ODP application output:
>
> EAL: Detected 56 lcore(s)
> EAL: Probing VFIO support...
> EAL: PCI device :05:00.0 on NUMA socket 0
> EAL:   probe driver: 8086:1528 net_ixgbe
> EAL: PCI device :05:00.1 on NUMA socket 0
> EAL:   probe driver: 8086:1528 net_ixgbe
> EAL: PCI device :81:00.0 on NUMA socket 1
> EAL:   probe driver: 15b3:1013 net_mlx5
> PMD: net_mlx5: PCI information matches, using device "mlx5_0" (SR-IOV:
> false, MPS: false)
> PMD: net_mlx5: 1 port(s) detected
> PMD: net_mlx5: port 1 MAC address is 7c:fe:90:31:0d:3a
> EAL: PCI device :81:00.1 on NUMA socket 1
> EAL:   probe driver: 15b3:1013 net_mlx5
> PMD: net_mlx5: PCI information matches, using device "mlx5_1" (SR-IOV:
> false, MPS: false)
> PMD: net_mlx5: 1 port(s) detected
> PMD: net_mlx5: port 1 MAC address is 7c:fe:90:31:0d:3b
> ../linux-generic/_ishmphy.c:150:_odp_ishmphy_map():mmap failed:Cannot
> allocate memory
> ../linux-generic/_ishm.c:866:_odp_ishm_reserve():No huge pages, fall back
> to normal pages. check: /proc/sys/vm/nr_hugepages.
> ../linux-generic/_ishmphy.c:150:_odp_ishmphy_map():mmap failed:Cannot
> allocate memory
> ../linux-generic/_ishmphy.c:150:_odp_ishmphy_map():mmap failed:Cannot
> allocate memory
> ../linux-generic/_ishmphy.c:150:_odp_ishmphy_map():mmap failed:Cannot
> allocate memory
>  PKTIO: initialized loop interface.
> ../linux-generic/_ishmphy.c:150:_odp_ishmphy_map():mmap failed:Cannot
> allocate memory
> No crypto devices available
> ../linux-generic/_ishmphy.c:150:_odp_ishmphy_map():mmap failed:Cannot
> allocate memory
> ../linux-generic/_ishmphy.c:150:_odp_ishmphy_map():mmap failed:Cannot
> allocate memory
> ../linux-generic/_ishmphy.c:150:_odp_ishmphy_map():mmap failed:Cannot
> allocate memory
>
> ODP system info
> ---
> ODP API version: 1.15.0
> ODP impl name:   odp-dpdk
> CPU model:   Intel(R) Xeon(R) CPU E5-2680 v4
> CPU freq (hz):   24
> Cache line size: 64
> CPU count:   56
> ***
> ***
> ../linux-generic/_ishmphy.c:150:_odp_ishmphy_map():mmap failed:Cannot
> allocate memory
> ../linux-generic/_ishmphy.c:150:_odp_ishmphy_map():mmap failed:Cannot
> allocate memory
> ../linux-generic/_ishmphy.c:150:_odp_ishmphy_map():mmap failed:Cannot
> allocate memory
> ../linux-generic/_ishmphy.c:150:_odp_ishmphy_map():mmap failed:Cannot
> allocate memory
>
>
> Thanks & Regards,
> P Gyanesh Kumar Patra
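Note that the meminfo output above shows HugePages_Free: 0 while HugePages_Total is 183, so all reserved pages were already consumed before ODP's own reservation ran (most likely by the DPDK EAL at startup), and the mmap fallback follows. A possible checklist, with example values rather than a prescription:

```shell
# 1. Reserve pages persistently at boot (example for 1 GiB pages, matching
#    the Hugepagesize shown above); size the count to your pools:
#      GRUB_CMDLINE_LINUX="default_hugepagesz=1G hugepagesz=1G hugepages=8"
# 2. Or reserve 2 MiB pages at runtime:
echo 1024 | sudo tee /proc/sys/vm/nr_hugepages
# 3. Verify the pages are actually *free*, not just reserved:
grep -E 'HugePages_(Total|Free)' /proc/meminfo
# 4. If Free is 0, something (e.g. the DPDK EAL memory reservation) already
#    took them; reserve more pages or reduce the EAL reservation.
```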


Re: [lng-odp] abi version support

2017-11-13 Thread Honnappa Nagarahalli
On 10 November 2017 at 04:38, Dmitry Eremin-Solenikov
 wrote:
> 10 нояб. 2017 г. 13:22 пользователь "Maxim Uvarov" 
> написал:
>
> I see that dpdk started to support abi versions in following ways
>
> I.e. they describe in a .map file which functions to export and what ABI/API
> level they are at.
> We can use something similar. But I'm not a big fan of manually writing such
> files. And maybe maintaining different ABI versions in one source is not a
> good idea. But we can try to generate the .map, filter out all non-"odp_"
> functions, then pass this .map to the linker.
>
>
> This requires maintaining ABI compatibility between releases. The whole
> point of versioning symbols is to be able to declare that this symbol
> conforms to the old ABI and this one is new. And yes, this requires manual
> listings.

It also allows having multiple versions of the same symbol.
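As a sketch, an ODP version script mirroring the DPDK style quoted above could look like the fragment below (odp_packet_alloc/odp_packet_free are real ODP API symbols; odp_new_function and the version node names are purely illustrative, not a committed ODP ABI):

```
ODP_1.15 {
    global:
        odp_packet_alloc;
        odp_packet_free;
        /* ... remaining odp_ symbols ... */
    local: *;
};

ODP_1.16 {
    global:
        odp_new_function;   /* hypothetical symbol added in a later release */
} ODP_1.15;
```

Passed to the linker via --version-script, this lets old binaries keep resolving against the ODP_1.15 node while new builds pick up ODP_1.16.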


Re: [lng-odp] abi version support

2017-11-13 Thread Honnappa Nagarahalli
I would prefer to follow this approach to support deprecated features
(for a few more releases, giving the application an opportunity to
change) between different versions of ODP, rather than having
--enable-deprecated. This feature will also allow structure definitions
to differ.

On 10 November 2017 at 04:22, Maxim Uvarov  wrote:
> I see that dpdk started to support abi versions in following ways:
>
> DPDK_2.0 {
> global:
>
> rte_jobstats_context_finish;
> rte_jobstats_context_init;
> .
> rte_jobstats_set_update_period_function;
> rte_jobstats_start;
>
> local: *;
> };
>
> DPDK_16.04 {
> global:
>
> rte_jobstats_abort;
>
> } DPDK_2.0;
>
> ./dpdk/lib/librte_jobstats/Makefile:EXPORT_MAP := rte_jobstats_version.map
> ./dpdk/mk/rte.lib.mk:CPU_LDFLAGS += --version-script=$(SRCDIR)/$(EXPORT_MAP)
>
> I.e. they describe in a .map file which functions to export and what ABI/API
> level they are at.
> We can use something similar. But I'm not a big fan of manually writing such
> files. And maybe maintaining different ABI versions in one source is not a
> good idea. But we can try to generate the .map, filter out all non-"odp_"
> functions, then pass this .map to the linker.
>
> Maxim.


Re: [lng-odp] ODP vs Protocol headers

2017-11-13 Thread Honnappa Nagarahalli
On 10 November 2017 at 05:35, Dmitry Eremin-Solenikov
 wrote:
> Hello,
>
> Historically ODP helper provided protocol-related headers with
> linux-generic ODP implementation using modified private copy of them.
> The main reason for that was, if I remember correctly, that ODP should
> not provide protocol-related definitions.
>
> I'd like to return to that question:
>  - I'm now adding more definitions to private protocol headers and I
> would not like for them to be too much out of sync.
>  - We started adding more and more protocol-specific handling in form of
> odp_packet_parse, odp packet flags, etc.
>
> I'd propose to put protocol headers (ip.h, tcp.h, udp.h, eth.h) into
> public ODP namespace (to ) with the following
> phrase specifying them:
>
> 
> These headers are not a part of ODP API specification, however they are
> provided to enable applications to use standard definitions for the
> protocol data. While neither of ODP API/ABI headers uses these protocol
> headers, an implementation SHOULD provide them AS IS to ease porting
> applications between ODP implementations.
> 
>

It is clear that these are not part of the ODP API spec. I think we
should create a 'protocol' directory at the same level as the 'helper'
directory and move the files there.

> --
> With best wishes
> Dmitry


[lng-odp] 2.0: pkt io changes

2017-11-09 Thread Honnappa Nagarahalli
Hi Bogdan/Josep,
I have created a document explaining my thoughts, please take a look.

https://docs.google.com/a/linaro.org/document/d/1dUHnt1jOm61Lz3GkwT7RHMCPMCmHRo0pXio3MMdKHsM/edit?usp=sharing

Thanks,
Honnappa


Re: [lng-odp] drv api in api-next

2017-11-01 Thread Honnappa Nagarahalli
On 1 November 2017 at 03:17, Dmitry Eremin-Solenikov
 wrote:
> On 31/10/17 22:21, Honnappa Nagarahalli wrote:
>> But they are APIs, even though they were copied from Linux-generic. I
>> am thinking the discussion has already happened on why they should be
>> in API directory. Is there any reason to revert and restart the
>> discussion?
>
> They are used for 2.0, but are unused in linux-generic (at least in
> TigerMoth). I'd vote to remove incomplete drv_* headers from TM. They
> will come back through 2.0.
>
The changes in api-next are merged to 2.0 branch. If we remove them in
master/api-next, the automatic merge will remove them from 2.0 as
well.

>> On 30 October 2017 at 11:43, Maxim Uvarov  wrote:
>>> In api-next we have some drv apis which is a copy of linux-generic but
>>> with drv prefix. I'm thinking what to do with them for Tiger Moth. Or
>>> merge them or merge and revert. For now we do not use that api.
>>>
>>> Maxim.
>
>
> --
> With best wishes
> Dmitry


Re: [lng-odp] net-mdev and user space drivers

2017-10-31 Thread Honnappa Nagarahalli
Hi Ilias,
Are you using the mempool from the linux-generic implementation (in
the 2.0 branch) for packets?

Thanks,
Honnappa

On 31 October 2017 at 02:03, Ilias Apalodimas
 wrote:
> I wouldn't exactly call it flying. We do have some basic pktio via
> odp_generator. We should have detailed results on speed etc within the week.
>
> The device matching is partially complete, you can have a peek at
> https://github.com/apalos/odp/blob/ddf/mdev/api/sysfs_parse.c#L91
> This matches one (and only one) mediated device per interface for now. It
> does that by matching the device driver name as defined in the Linux module
> (which gets inherited into sysfs attributes).
> We supply the same name in each driver, as seen in
> https://github.com/apalos/odp/blob/ddf/mdev/drivers/r8169.c#L376 and
> https://github.com/apalos/odp/blob/ddf/mdev/drivers/e1000e.c#L469
>
> So the device matching is partially solved for the mediated device approach
> and it's using kernel attributes that are unlikely to change in the future.
>
> Regards
> Ilias


Re: [lng-odp] drv api in api-next

2017-10-31 Thread Honnappa Nagarahalli
But they are APIs, even though they were copied from Linux-generic. I
believe the discussion on why they should be in the API directory has
already happened. Is there any reason to revert and restart it?

On 30 October 2017 at 11:43, Maxim Uvarov  wrote:
> In api-next we have some drv apis which is a copy of linux-generic but
> with drv prefix. I'm thinking what to do with them for Tiger Moth. Or
> merge them or merge and revert. For now we do not use that api.
>
> Maxim.


[lng-odp] net-mdev and user space drivers

2017-10-30 Thread Honnappa Nagarahalli
Bill mentioned that the packets are flying through net-mdev, good news :)

I see a clear distinction in what needs to be done.

1) DDF - scanning/enumerating the devices, matching the drivers to
the devices, etc.
2) Driver/packet I/O - Device initialization/configuration and actual
packet send/recv

As the discussion on DDF continues, I think we can work on 2). We
could enable/disable the packet I/O ops during compilation based on
the NIC in the platform. For example, if the platform has an x710 card, we
can enable the respective packet I/O during compilation. This portion of
the code will remain more or less intact irrespective of the outcome
of 1).

The enable/disable of the packet I/O during compilation can be removed
once we have the automation from 1)

Please let me know what you think.

Thank you,
Honnappa


Re: [lng-odp] DDF discussions taking time

2017-10-27 Thread Honnappa Nagarahalli
On 27 October 2017 at 13:35, Bill Fischofer 
wrote:

>
>
> On Fri, Oct 27, 2017 at 10:45 AM, Francois Ozog 
> wrote:
>
>>
>> Le ven. 27 oct. 2017 à 17:17, Bill Fischofer 
>> a écrit :
>>
>>> The problem with scanning, especially in a VNF environment, is that (a)
>>> the application probably isn't authorized to do that
>>>
>>
>> nothing prevents scanning what is available in the VM. "Scale up"
>> events are triggered when increasing resources (memory, CPUs, Ethernet
>> ports...)
>>
>> and (b) the application certainly doesn't have real visibility into what
>>> the actual device topology is.
>>>
>>
>> QEMU allows selecting whether the VM has visibility of the real NUMA
>> topology or a virtual one
>>
>
> Qemu may itself be running under a hypervisor and have limited visibility
> as to the real HW configuration. The whole point of 
> virtualization/containerization
> is to limit what the VM/application can see and do to provide management
> controls over isolation and portability.
>

In this case, the 'platform' refers to what is available in the VM. There
is no need to know what is available in the underlying hardware.

>
>
>>
>> It only knows what it "needs to know" (as determined by higher-level
>>> functions) so there's no point trying to second-guess that. Applications
>>> will be told "use these devices" and that should be our design assumption
>>> in this support.
>>>
>>
>> The VNF manager will know precisely the host view of things (use this
>> vhostuser interface) but will not know how it may end up being named in the
>> guest. This is a key problem that has been identified by British Telecom.
>>
>
> Sounds like we should follow whatever recommended solution OPNFV comes up
> with in this area. In any event, that solution will be outside of ODP and
> would still result in the application being told what devices to use with
> names that are meaningful in the environment it's running in.
>
>
>>
>>> On Fri, Oct 27, 2017 at 10:10 AM, Honnappa Nagarahalli <
>>> honnappa.nagaraha...@linaro.org> wrote:
>>>
>>>>
>>>>
>>>> On 27 October 2017 at 09:50, Bill Fischofer 
>>>> wrote:
>>>>
>>>>> ODP 2.0 assumes Linux system services are available so the question of
>>>>> how to operate in bare metal environments is a separate one and up to 
>>>>> those
>>>>> ODP implementations. Again the application will provide a
>>>>> sufficiently-qualified device name string to identify which device it 
>>>>> wants
>>>>> to open in an unambiguous manner. How it does that is again outside the
>>>>> scope of ODP so this isn't something we need to worry about. All ODP needs
>>>>> to do is be able to identify which driver needs to be loaded and passed 
>>>>> the
>>>>> rest of the device name and then that driver handles the rest.
>>>>>
>>>>> By baremetal, I meant host with Linux OS.
>>>> I agree, it is applications responsibility to provide the device
>>>> string, how it does that is outside the scope of ODP.
>>>> We will continue the discussion on the slides. But, in the meanwhile, I
>>>> am thinking of possible design if we want to avoid complete scanning of the
>>>> platform during initialization. In the current packet I/O framework, all
>>>> the function pointers of the driver are known before odp_pktio_open API is
>>>> called. If we have to stick to this design, the drivers for the PCI device
>>>> should have registered their functions with the packet IO framework before
>>>> odp_pktio_open is called.
>>>>
>>>>
>>>>> On Fri, Oct 27, 2017 at 2:36 AM, Francois Ozog <
>>>>> francois.o...@linaro.org> wrote:
>>>>>
>>>>>> Well, we do not need to scan all the platform because the pktio_open
>>>>>> contains enough information to target the right device.
>>>>>> This is almost true as we need to have an additional "selector" for
>>>>>> the port on multiport NICs that are controlled by a single pci ID.
>>>>>> :: or something like
>>>>>> that.
>>>>>>
>>>>>> All ports may not be typically handled by ODP. For instance
>>>>>> management network will most certai

Re: [lng-odp] DDF discussions taking time

2017-10-27 Thread Honnappa Nagarahalli
On 27 October 2017 at 02:36, Francois Ozog  wrote:

> Well, we do not need to scan all the platform because the pktio_open
> contains enough information to target the right device.
> This is almost true as we need to have an additional "selector" for the
> port on multiport NICs that are controlled by a single pci ID.
> :: or something like that.
>
I think it is clear that this string format is good. It has all the
information that is needed. I propose we change the  to
:

::

 would be part of such 'device/driver specific string' and will be
interpreted by the driver.

> All ports may not be typically handled by ODP. For instance management
> network will most certainly be a native Linux one.
>
> Dpaa2 bus may impose us to full scan if we have to go the full driver
> route (vfio-pci) but this will not be necessary if we can have the
> net-mdev. In that case, the dpaa2 bus can be handled the same way as pci.
>
> Dynamic bus events (hot plug) are a nice to have and may be dropped from
> coding yet we can talk about it. and when it come to this, this is not
> about dealing with the bus controllers directly but tapping into Linux
> event framework.
>
> I know we can simplify things and I am very flexible in what we can decide
> not to do. That said I have been dealing with operational issues on that
> very topic since 2006 when I designed my first kernel based fast packet
> IO So there will be topics where I'll push hard to explain (not impose)
> why we should go a harder route. This slows things down but I think this is
> worth it.
>

I too think it is worth it. Even though the progress is slow we have not
stalled. We are discussing and understanding. At the end of this
discussion, we would have the following information:

1) What does the final solution look like (addressing both all-user-space
drivers and net-mdev drivers)? It may not be 100% clear, but at least we
will have a few options.
2) The most critical/minimal code required to get the packets into user
space. This is what we will do in the next few weeks/month.
3) 'Somewhat' of a plan for the rest of the code



> FF
>
> Le ven. 27 oct. 2017 à 06:23, Honnappa Nagarahalli <
> honnappa.nagaraha...@linaro.org> a écrit :
>
>> On 26 October 2017 at 16:34, Bill Fischofer 
>> wrote:
>> > I agree with Maxim. Best to get one or two working drivers and see what
>> else
>> > is needed. The intent here is not for ODP to become another OS, so I'm
>> not
>> > sure why we need to concern ourselves with bus walking and similar
>> arcana.
>> > Linux has already long solved this problem. We should leverage what's
>> there,
>> > not try to reinvent it.
>> >
>> > ODP applications are told what I/O interfaces to use, either through an
>> > application configuration file, command line, or other means outside the
>> > scope of ODP itself. ODP's job is simply to connect applications to
>> these
>> > configured I/O interfaces when they call odp_pktio_open(). The name
>> used for
>> > interfaces is simply a string that we've defined to have the format:
>> >
>> > class: device: other-info-needed-by-driver
>> >
>> > We've defined a number of classes already:
>> >
>> > - loop
>> > - pcap
>> > - ipc
>> > - dpdk
>> > - socket
>> > - socket_mmap
>> > - tap
>> > etc.
>> >
>> > We simply need to define new classes (e.g., ddf, mdev) and the names
>> they
>> > need to take to identify a specific device and the associated driver to
>> use
>> > to operate that device. The driver is then loaded if necessary and its
>> > open() interface is called.
>> >
>>
>> Coincidentally, internally we were discussing this exactly.
>>
>> Why do we need to scan and understand the complete platform during
>> initialization? - I would think, mostly, ODP will run in a platform
>> (baremetal, VM, container) where all the devices are supposed to be
>> used by ODP (i.e. ODP will not run in a platform where it will share
>> the platform with other applications). So why not scan and identify
>> the devices/drivers and create the packet IO ops during
>> initialization. The packet I/O framework assumes that various methods
>> (open, close, send, recv etc) of the device driver are known when
>> odp_pktio_open API is called. None of that has to change.
>>
>> Another solution I can think of is (tweak to reduce the time spent in
>> scanning etc), instead of scanning all the devices, DDF initialization
>> function can be provided with all the ports

Re: [lng-odp] DDF discussions taking time

2017-10-27 Thread Honnappa Nagarahalli
On 27 October 2017 at 09:50, Bill Fischofer 
wrote:

> ODP 2.0 assumes Linux system services are available so the question of how
> to operate in bare metal environments is a separate one and up to those ODP
> implementations. Again the application will provide a
> sufficiently-qualified device name string to identify which device it wants
> to open in an unambiguous manner. How it does that is again outside the
> scope of ODP so this isn't something we need to worry about. All ODP needs
> to do is be able to identify which driver needs to be loaded and passed the
> rest of the device name and then that driver handles the rest.
>
By bare metal, I meant a host with a Linux OS.
I agree, it is the application's responsibility to provide the device string;
how it does that is outside the scope of ODP.
We will continue the discussion on the slides. But in the meanwhile, I am
thinking of a possible design if we want to avoid complete scanning of the
platform during initialization. In the current packet I/O framework, all
the function pointers of the driver are known before the odp_pktio_open API
is called. If we stick to this design, the drivers for PCI devices should
have registered their functions with the packet I/O framework before
odp_pktio_open is called.


> On Fri, Oct 27, 2017 at 2:36 AM, Francois Ozog 
> wrote:
>
>> Well, we do not need to scan all the platform because the pktio_open
>> contains enough information to target the right device.
>> This is almost true as we need to have an additional "selector" for the
>> port on multiport NICs that are controlled by a single pci ID.
>> :: or something like that.
>>
>> All ports may not be typically handled by ODP. For instance management
>> network will most certainly be a native Linux one.
>>
>> Dpaa2 bus may impose us to full scan if we have to go the full driver
>> route (vfio-pci) but this will not be necessary if we can have the
>> net-mdev. In that case, the dpaa2 bus can be handled the same way as pci.
>>
>> Dynamic bus events (hot plug) are a nice to have and may be dropped from
>> coding yet we can talk about it. and when it come to this, this is not
>> about dealing with the bus controllers directly but tapping into Linux
>> event framework.
>>
>> I know we can simplify things and I am very flexible in what we can
>> decide not to do. That said I have been dealing with operational issues on
>> that very topic since 2006 when I designed my first kernel based fast
>> packet IO So there will be topics where I'll push hard to explain (not
>> impose) why we should go a harder route. This slows things down but I think
>> this is worth it.
>>
>> FF
>>
>> Le ven. 27 oct. 2017 à 06:23, Honnappa Nagarahalli <
>> honnappa.nagaraha...@linaro.org> a écrit :
>>
>>> On 26 October 2017 at 16:34, Bill Fischofer 
>>> wrote:
>>> > I agree with Maxim. Best to get one or two working drivers and see
>>> what else
>>> > is needed. The intent here is not for ODP to become another OS, so I'm
>>> not
>>> > sure why we need to concern ourselves with bus walking and similar
>>> arcana.
>>> > Linux has already long solved this problem. We should leverage what's
>>> there,
>>> > not try to reinvent it.
>>> >
>>> > ODP applications are told what I/O interfaces to use, either through an
>>> > application configuration file, command line, or other means outside
>>> the
>>> > scope of ODP itself. ODP's job is simply to connect applications to
>>> these
>>> > configured I/O interfaces when they call odp_pktio_open(). The name
>>> used for
>>> > interfaces is simply a string that we've defined to have the format:
>>> >
>>> > class: device: other-info-needed-by-driver
>>> >
>>> > We've defined a number of classes already:
>>> >
>>> > - loop
>>> > - pcap
>>> > - ipc
>>> > - dpdk
>>> > - socket
>>> > - socket_mmap
>>> > - tap
>>> > etc.
>>> >
>>> > We simply need to define new classes (e.g., ddf, mdev) and the names
>>> they
>>> > need to take to identify a specific device and the associated driver
>>> to use
>>> > to operate that device. The driver is then loaded if necessary and its
>>> > open() interface is called.
>>> >
>>>
>>> Coincidentally, internally we were discussing this exactly.
>>>
>>> Why do we need to scan and un

Re: [lng-odp] Relation between Enumerator class and Enumerator

2017-10-27 Thread Honnappa Nagarahalli
We have to support both types of drivers as of now:

1) Legacy all-user-space drivers - in this case, what Jianbo says is correct.
2) net-mdev drivers.

So, DDF has to support both use cases.

Thank you,
Honnappa


On 27 October 2017 at 04:47, Francois Ozog  wrote:
> In the case of netmdev the device is still there. Please check with Ilias
> for detailed behavior of this technology
>
> FF
>
> Le ven. 27 oct. 2017 à 10:29, Jianbo Liu  a écrit :
>
>> The 10/27/2017 10:25, Maxim Uvarov wrote:
>> > yes, discovery is done by other tools like lspci, /proc  and /sys
>> > interfaces. Also udev rules are there to make naming persistent.
>> >
>> > For pci it can be:
>> >
>> > odp_pktio_open("mdev:eth0")
>> >
>> > then you parse /proc/bus/pci/devices to find the actual driver used for
>> > this eth0. And if you have a matching mdev pktio driver, open it.
>>
>> There is no eth0 because the device is bound to vfio or uio driver.
>>
>> > That looks like very simply and clear.
>> >
>> > Maxim.
>> >
>> >
>> > On 27 October 2017 at 01:03, Bill Fischofer 
>> > wrote:
>> >
>> > > I think you've captured the distinction correctly. The larger question
>> is
>> > > what does ODP itself need to do with this? When an interface name is
>> > > presented to odp_pktio_open() it is the application's responsibility to
>> > > provide a name string that can be mapped to the device that should be
>> > > opened. ODP uses that string to locate / load the driver that's needed
>> to
>> > > operate that device. The driver is then responsible to make the actual
>> > > connection to the device.
>> > >
>> > > When a driver gets a "device string" it needs to translate that into
>> either
>> > > an appropriate Linux system call (for mdev) or to a set of PCI Bar or
>> other
>> > > HW register addresses (for dedicated I/O). So if there is a case of a
>> > > southbound ODP API, it would be to assist with this latter translation.
>> > >
>> > > On Thu, Oct 26, 2017 at 12:38 PM, Honnappa Nagarahalli <
>> > > honnappa.nagaraha...@linaro.org> wrote:
>> > >
>> > > > Hi,
>> > > >I created a document to convert our discussion into pictures. I
>> > > > will update this as we discuss other concepts like devio/driver etc.
>> > > >
>> > > > https://docs.google.com/a/linaro.org/document/d/1nzy-Qp6hYZU38DVi7_
>> > > > ZnPiaf3jRqdMN6IQHmfO80Al4/edit?usp=sharing
>> > > >
>> > > > Thanks,
>> > > > Honnappa
>> > > >
>> > >
>>
>>
> --
> [image: Linaro] <http://www.linaro.org/>
> François-Frédéric Ozog | *Director Linaro Networking Group*
> T: +33.67221.6485 
> francois.o...@linaro.org | Skype: ffozog


Re: [lng-odp] DDF discussions taking time

2017-10-26 Thread Honnappa Nagarahalli
On 26 October 2017 at 16:34, Bill Fischofer  wrote:
> I agree with Maxim. Best to get one or two working drivers and see what else
> is needed. The intent here is not for ODP to become another OS, so I'm not
> sure why we need to concern ourselves with bus walking and similar arcana.
> Linux has already long solved this problem. We should leverage what's there,
> not try to reinvent it.
>
> ODP applications are told what I/O interfaces to use, either through an
> application configuration file, command line, or other means outside the
> scope of ODP itself. ODP's job is simply to connect applications to these
> configured I/O interfaces when they call odp_pktio_open(). The name used for
> interfaces is simply a string that we've defined to have the format:
>
> class: device: other-info-needed-by-driver
>
> We've defined a number of classes already:
>
> - loop
> - pcap
> - ipc
> - dpdk
> - socket
> - socket_mmap
> - tap
> etc.
>
> We simply need to define new classes (e.g., ddf, mdev) and the names they
> need to take to identify a specific device and the associated driver to use
> to operate that device. The driver is then loaded if necessary and its
> open() interface is called.
>

Coincidentally, internally we were discussing this exactly.

Why do we need to scan and understand the complete platform during
initialization? I would think that, in most cases, ODP will run on a
platform (bare metal, VM, container) where all the devices are intended
to be used by ODP (i.e. ODP will not run on a platform that it shares
with other applications). So why not scan, identify the devices/drivers,
and create the packet I/O ops during initialization? The packet I/O
framework assumes that the various methods (open, close, send, recv,
etc.) of the device driver are known when the odp_pktio_open API is
called. None of that has to change.

Another solution I can think of (a tweak to reduce the time spent in
scanning): instead of scanning all the devices, the DDF initialization
function can be provided with the list of ports the user has requested.

If the scanning for the device and identifying the driver has to be
triggered through the odp_pktio_open API, the current packet IO
framework needs to change.

> On Thu, Oct 26, 2017 at 3:59 PM, Maxim Uvarov 
> wrote:
>>
>> Hello Honnappa,
>>
>> I think we also need to take a look from the bottom, i.e. from the exact
>> drivers to implement. That way it will be clearer which interface needs
>> to be created.
>> Do you have a list of drivers which need to be implemented? With PCI
>> drivers I think we are in good shape, but non-PCI drivers are an open
>> question.
>> I think we should not over-complicate ODP itself with huge discovery and
>> enumeration of devices. Let's take a look at what the minimal interface
>> to support devices is.
>>
>> Best regards,
>> Maxim.
>>
>> On 26 October 2017 at 23:35, Honnappa Nagarahalli <
>> honnappa.nagaraha...@linaro.org> wrote:
>>
>> > Hi,
>> >Agree, we have taken 2 hours and the progress has been slow. But
>> > the discussions have been good and helpful to us at Arm. The goal is
>> > to identify the gaps and work items. I am not sure if it has been
>> > helpful to others, please let me know.
>> >
>> > To speed this up, I propose few options below, let me know your opinion:
>> >
>> > 1) Have 2 additional (along with regular ODP 2.0) calls next week - We
>> > can do Tuesday 7:00am and then another on Thursday 7:00am (Austin, TX
>> > time, GMT-6, one hour before the regular ODP-2.0)
>> >
>> > 2) Resolve the pending PRs on emails
>> >
>> > 3) Discuss the DDF slides on email - Not sure how effective it will be
>> >
>> > Any other solutions?
>> >
>> > Thank you,
>> > Honnappa
>> >
>
>


Re: [lng-odp] DDF discussions taking time

2017-10-26 Thread Honnappa Nagarahalli
Hi Bill/Maxim,
These are the exact decisions I wanted us to make coming out of
this discussion. I have taken somewhat of a longer route, i.e.
examining the DDF design and understanding the thinking behind why
some things were created.

We are almost there; we do not have many more slides to cover: another
1.5 slides, and then the last slide is about a few changes Jianbo has
identified.

I expect that, after this discussion, we will identify what the gaps
are and which parts we can let go.

Thank you,
Honnappa

On 26 October 2017 at 16:34, Bill Fischofer  wrote:
> I agree with Maxim. Best to get one or two working drivers and see what else
> is needed. The intent here is not for ODP to become another OS, so I'm not
> sure why we need to concern ourselves with bus walking and similar arcana.
> Linux has already long solved this problem. We should leverage what's there,
> not try to reinvent it.
>
> ODP applications are told what I/O interfaces to use, either through an
> application configuration file, command line, or other means outside the
> scope of ODP itself. ODP's job is simply to connect applications to these
> configured I/O interfaces when they call odp_pktio_open(). The name used for
> interfaces is simply a string that we've defined to have the format:
>
> class: device: other-info-needed-by-driver
>
> We've defined a number of classes already:
>
> - loop
> - pcap
> - ipc
> - dpdk
> - socket
> - socket_mmap
> - tap
> etc.
>
> We simply need to define new classes (e.g., ddf, mdev) and the names they
> need to take to identify a specific device and the associated driver to use
> to operate that device. The driver is then loaded if necessary and its
> open() interface is called.
>
> On Thu, Oct 26, 2017 at 3:59 PM, Maxim Uvarov 
> wrote:
>>
>> Hello Honnappa,
>>
>> I think we also need to take a look from the bottom, i.e. from the exact
>> drivers to implement. That way it will be clearer which interface needs
>> to be created.
>> Do you have a list of drivers which need to be implemented? With PCI
>> drivers I think we are in good shape, but non-PCI drivers are an open
>> question.
>> I think we should not over-complicate ODP itself with huge discovery and
>> enumeration of devices. Let's take a look at what the minimal interface
>> to support devices is.
>>
>> Best regards,
>> Maxim.
>>
>> On 26 October 2017 at 23:35, Honnappa Nagarahalli <
>> honnappa.nagaraha...@linaro.org> wrote:
>>
>> > Hi,
>> >Agree, we have taken 2 hours and the progress has been slow. But
>> > the discussions have been good and helpful to us at Arm. The goal is
>> > to identify the gaps and work items. I am not sure if it has been
>> > helpful to others, please let me know.
>> >
>> > To speed this up, I propose few options below, let me know your opinion:
>> >
>> > 1) Have 2 additional (along with regular ODP 2.0) calls next week - We
>> > can do Tuesday 7:00am and then another on Thursday 7:00am (Austin, TX
>> > time, GMT-6, one hour before the regular ODP-2.0)
>> >
>> > 2) Resolve the pending PRs on emails
>> >
>> > 3) Discuss the DDF slides on email - Not sure how effective it will be
>> >
>> > Any other solutions?
>> >
>> > Thank you,
>> > Honnappa
>> >
>
>


[lng-odp] DDF discussions taking time

2017-10-26 Thread Honnappa Nagarahalli
Hi,
   Agree, we have taken 2 hours and the progress has been slow. But
the discussions have been good and helpful to us at Arm. The goal is
to identify the gaps and work items. I am not sure if it has been
helpful to others, please let me know.

To speed this up, I propose a few options below; let me know your opinion:

1) Have 2 additional (along with regular ODP 2.0) calls next week - We
can do Tuesday 7:00am and then another on Thursday 7:00am (Austin, TX
time, GMT-6, one hour before the regular ODP-2.0)

2) Resolve the pending PRs on emails

3) Discuss the DDF slides on email - Not sure how effective it will be

Any other solutions?

Thank you,
Honnappa


[lng-odp] Relation between Enumerator class and Enumerator

2017-10-26 Thread Honnappa Nagarahalli
Hi,
   I created a document to convert our discussion into pictures. I
will update this as we discuss other concepts like devio/driver etc.

https://docs.google.com/a/linaro.org/document/d/1nzy-Qp6hYZU38DVi7_ZnPiaf3jRqdMN6IQHmfO80Al4/edit?usp=sharing

Thanks,
Honnappa


Re: [lng-odp] api-next merge with ODP 2.0

2017-10-16 Thread Honnappa Nagarahalli
I have fixed the issue. I will post the PR once the internal review is
completed.

Thanks,
Honnappa

On 16 October 2017 at 14:40, Maxim Uvarov  wrote:

> (gdb) bt
> #0  0x7fbe66d26c37 in __GI_raise (sig=sig@entry=6) at
> ../nptl/sysdeps/unix/sysv/linux/raise.c:56
> #1  0x7fbe66d2a028 in __GI_abort () at abort.c:89
> #2  0x0042e279 in odp_override_abort () at odp_weak.c:42
> #3  0x00424604 in llq_enqueue (node=,
> llq=) at ./include/odp_llqueue.h:220
> #4  schedq_push (elem=, schedq=) at
> schedule/scalable.c:175
> #5  _schedule (from=from@entry=0x0, ev=ev@entry=0x7fffd51030b8,
> num_evts=num_evts@entry=1) at schedule/scalable.c:910
> #6  0x004249da in schedule (from=0x0, wait=0) at
> schedule/scalable.c:1310
> #7  0x00406485 in flush_input_queue (pktio=,
> imode=imode@entry=ODP_PKTIN_MODE_SCHED) at pktio.c:397
> #8  0x00408f00 in test_txrx 
> (in_mode=in_mode@entry=ODP_PKTIN_MODE_SCHED,
> num_pkts=num_pkts@entry=4,
> mode=mode@entry=TXRX_MODE_MULTI) at pktio.c:731
> #9  0x00409508 in pktio_test_sched_multi () at pktio.c:763
> #10 0x7fbe670c0482 in run_single_test () from
> /usr/local/lib/libcunit.so.1
> #11 0x7fbe670c00b2 in run_single_suite () from
> /usr/local/lib/libcunit.so.1
> #12 0x7fbe670bdd55 in CU_run_all_tests () from
> /usr/local/lib/libcunit.so.1
> #13 0x7fbe670c2245 in basic_run_all_tests () from
> /usr/local/lib/libcunit.so.1
> #14 0x7fbe670c1fe7 in CU_basic_run_tests () from
> /usr/local/lib/libcunit.so.1
> #15 0x0040bc3b in odp_cunit_run () at odp_cunit_common.c:300
> #16 0x7fbe66d11f45 in __libc_start_main (main=0x404480 , argc=1,
> argv=0x7fffd5103548, init=,
> fini=, rtld_fini=,
> stack_end=0x7fffd5103538) at libc-start.c:287
> #17 0x0040483d in _start ()
> (gdb)
>
>
> On 16 October 2017 at 22:34, Maxim Uvarov  wrote:
>
>> ./configure --enable-schedule-scalable
>> sudo make check
>>
>>   Test: pktio_test_sched_multi 
>> ..../include/odp_llqueue.h:269:llq_dequeue_cond():node->next
>> == SENTINEL
>> Aborted
>>
>> full log is attached.
>>
>>
>>
>> On 16 October 2017 at 19:53, Honnappa Nagarahalli <
>> honnappa.nagaraha...@linaro.org> wrote:
>>
>>> Dmitry had taken a look at these. He mentioned they can be ignored for
>>> now.
>>> Thanks,
>>> Honnappa
>>>
>>> On 16 October 2017 at 03:52, Maxim Uvarov 
>>> wrote:
>>>
>>>> bootstrap generates a lot of errors, I can reproduce them locally also
>>>>
>>>> https://travis-ci.org/nagarahalli/odp/jobs/287783022
>>>>
>>>> On 16 October 2017 at 09:21, Maxim Uvarov 
>>>> wrote:
>>>>
>>>>> I will take a look. But please check that you added these 3 commits.
>>>>>
>>>>>
>>>>> c16af648 travis: purge dpdk cache on version change
>>>>> 3cb45201 travis: build dpdk for general cpu
>>>>> 73bc4619 travis: temporary turn off dpdk caching
>>>>>
>>>>> In general, Travis stopped due to lack of output. You can make the tests
>>>>> more verbose with make check V=1.
>>>>>
>>>>> But I would start by adding the dpdk commits because without them the
>>>>> result was also unclear.
>>>>>
>>>>> Maxim.
>>>>>
>>>>>
>>>>> On 16 October 2017 at 06:45, Honnappa Nagarahalli <
>>>>> honnappa.nagaraha...@linaro.org> wrote:
>>>>>
>>>>>> Hi Maxim,
>>>>>>  The test cases validation/api/pktio_run.sh and
>>>>>> validation/api/pktio_run_tap.sh are failing on Travis for the
>>>>>> following configurations:
>>>>>>
>>>>>> [image: Inline images 1]
>>>>>>
>>>>>> I am not able to reproduce these issues on my local machine. How do I
>>>>>> debug these? Is there any more log information that I can get from 
>>>>>> Travis?
>>>>>>
>>>>>> The merge is available at: https://github.com/nagarah
>>>>>> alli/odp/tree/2.0-api-next-merge-11Oct2017.
>>>>>>
>>>>>> Thank you,
>>>>>> Honnappa
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>


Re: [lng-odp] api-next merge with ODP 2.0

2017-10-16 Thread Honnappa Nagarahalli
Dmitry had taken a look at these. He mentioned they can be ignored for now.
Thanks,
Honnappa

On 16 October 2017 at 03:52, Maxim Uvarov  wrote:

> bootstrap generates a lot of errors, I can reproduce them locally also
>
> https://travis-ci.org/nagarahalli/odp/jobs/287783022
>
> On 16 October 2017 at 09:21, Maxim Uvarov  wrote:
>
>> I will take a look. But please check that you added these 3 commits.
>>
>>
>> c16af648 travis: purge dpdk cache on version change
>> 3cb45201 travis: build dpdk for general cpu
>> 73bc4619 travis: temporary turn off dpdk caching
>>
>> In general, Travis stopped due to lack of output. You can make the tests
>> more verbose with make check V=1.
>>
>> But I would start by adding the dpdk commits because without them the
>> result was also unclear.
>>
>> Maxim.
>>
>>
>> On 16 October 2017 at 06:45, Honnappa Nagarahalli <
>> honnappa.nagaraha...@linaro.org> wrote:
>>
>>> Hi Maxim,
>>>  The test cases validation/api/pktio_run.sh and
>>> validation/api/pktio_run_tap.sh are failing on Travis for the following
>>> configurations:
>>>
>>> [image: Inline images 1]
>>>
>>> I am not able to reproduce these issues on my local machine. How do I
>>> debug these? Is there any more log information that I can get from Travis?
>>>
>>> The merge is available at: https://github.com/nagarah
>>> alli/odp/tree/2.0-api-next-merge-11Oct2017.
>>>
>>> Thank you,
>>> Honnappa
>>>
>>
>>
>


Re: [lng-odp] api-next merge with ODP 2.0

2017-10-16 Thread Honnappa Nagarahalli
On 16 October 2017 at 01:21, Maxim Uvarov  wrote:

> I will take a look. But please check that you added these 3 commits.
>
>
> c16af648 travis: purge dpdk cache on version change
>
This is not in the merge. The following 2 are in the merge:


> 3cb45201 travis: build dpdk for general cpu
> 73bc4619 travis: temporary turn off dpdk caching
>
> In general, Travis stopped due to lack of output. You can make the tests
> more verbose with make check V=1.
>
> But I would start by adding the dpdk commits because without them the
> result was also unclear.
>
> Maxim.
>
>
> On 16 October 2017 at 06:45, Honnappa Nagarahalli <
> honnappa.nagaraha...@linaro.org> wrote:
>
>> Hi Maxim,
>>  The test cases validation/api/pktio_run.sh and
>> validation/api/pktio_run_tap.sh are failing on Travis for the following
>> configurations:
>>
>> [image: Inline images 1]
>>
>> I am not able to reproduce these issues on my local machine. How do I
>> debug these? Is there any more log information that I can get from Travis?
>>
>> The merge is available at: https://github.com/nagarah
>> alli/odp/tree/2.0-api-next-merge-11Oct2017.
>>
>> Thank you,
>> Honnappa
>>
>
>


[lng-odp] api-next merge with ODP 2.0

2017-10-15 Thread Honnappa Nagarahalli
Hi Maxim,
 The test cases validation/api/pktio_run.sh and
validation/api/pktio_run_tap.sh are failing on Travis for the following
configurations:

[image: Inline images 1]

I am not able to reproduce these issues on my local machine. How do I debug
these? Is there any more log information that I can get from Travis?

The merge is available at:
https://github.com/nagarahalli/odp/tree/2.0-api-next-merge-11Oct2017.

Thank you,
Honnappa


Re: [lng-odp] Moving scalable scheduler to master

2017-10-12 Thread Honnappa Nagarahalli
We agreed (when the code was accepted into api-next) that we would
take a re-look at it. Ola is doing that currently. The goal is to make
sure we do not lose performance. Once Ola has some results we can
discuss further.

Thank you,
Honnappa

On 12 October 2017 at 08:57, Dmitry Eremin-Solenikov
 wrote:
> On 12 October 2017 at 16:33, Brian Brooks  wrote:
>> This code is primarily contained within its own files, so I don't see
>> how this mitigates any issues (merge conflicts) with merging it to
>> master.
>
> Problem lies in code quality, not in (possible) merge conflicts.
>
> --
> With best wishes
> Dmitry


Re: [lng-odp] classifier_enable flag

2017-10-08 Thread Honnappa Nagarahalli
Also, are classifier and parser mutually exclusive?

Thank you,
Honnappa

On 8 October 2017 at 19:14, Honnappa Nagarahalli
 wrote:
> Hi,
>The 'classifier_enable' flag is a pktin queue parameter. Shouldn't
> this be a parameter of the pkt I/O device (port)?
>
> Does this mean, some queues of the pkt I/O device (port) can have
> classifier enabled and some hash enabled?
>
> Thank you,
> Honnappa


[lng-odp] classifier_enable flag

2017-10-08 Thread Honnappa Nagarahalli
Hi,
   The 'classifier_enable' flag is a pktin queue parameter. Shouldn't
this be a parameter of the pkt I/O device (port)?

Does this mean, some queues of the pkt I/O device (port) can have
classifier enabled and some hash enabled?

Thank you,
Honnappa


Re: [lng-odp] ODP classification framework

2017-10-08 Thread Honnappa Nagarahalli
Hi Liron,
I am trying to understand this more. When you say Linux Generic
pkt I/O support, are you talking about pkt I/Os like pcap, socket,
tap, etc.?

Assuming yes, these use software classification. But that software
classification can be completely replaced with your HW based
classification. It will result in a boost for all the Pkt I/Os.
Similar things have been done in odp-dpdk. For example, it uses
pool/packet/buffer functionality from DPDK and the rest from
Linux-generic. This can be your immediate/temporary solution.

As Bill mentioned, in the cloud-dev branch we have a framework meant to
address these kinds of challenges.

Thank you,
Honnappa

On 8 October 2017 at 15:17, Bill Fischofer  wrote:
> We are currently adding a modularization framework to ODP as part of the
> current cloud-dev development branch. The classifier is not currently part
> of that framework but will be moved to this as part of ongoing work.
>
> So yes, there is an intention to add this. If you'd like to help with that
> please let us know. One of the criteria for prioritizing these is having
> multiple implementations to work with so they can be selected and tested.
>
> On Sun, Oct 8, 2017 at 1:11 PM, Liron Himi  wrote:
>
>> Hi,
>>
>> I started to implement odp-classification based on our HW. As we also
>> provide the linux-generic PKTIOs support it seems we also need to provide
>> the linux-generic classification support (As a pure SW based
>> implementation). But there is no framework, as exists for PKTIO, to support
>> various types of classification implementation.
>>
>> Is there any intention to add such a framework? If so, I think that some
>> API's should be changed, such as 'odp_cls_capability' that should reflect
>> specific PKTIO cls capabilities.
>>
>> Regards,
>> Liron
>>


Re: [lng-odp] generic core + HW specific drivers

2017-10-06 Thread Honnappa Nagarahalli
Any experts on how the packaging is done for DPDK?

On 6 October 2017 at 08:36, Savolainen, Petri (Nokia - FI/Espoo)
 wrote:
>> > No, I'm pointing out that the more common core SW there is, the more
>> > trade-offs there are and the less direct HW access == less performance. For
>> > optimal performance, the amount of common core SW is zero.
>>
>> Yes this is sort of the ideal but I doubt this type of installation
>> will be accepted by e.g. Red Hat for inclusion in server-oriented
>> Linux distributions. Jon Masters seems to be strongly against this
>> (although I have only heard this second hand). So that's why I
>> proposed the common (generic) core + platform specific drivers model
>> that is used by e.g. Xorg and DPDK. Since DPDK is actually a user
>> space framework (unlike Xorg), this should be a good model for ODP and
>> something that Red Hat cannot object against.
>>
>
> If every line of code is maintained properly, why would a distro care about 
> the ratio between common core SW and HW specific driver SW?
>
> If they care, what is an acceptable ratio? Is it 90% common SW : 10% HW 
> specific SW, 80:20, 50:50, 10:90, and why not 0:100? How should this ratio 
> be calculated?
>
> DPDK is in Ubuntu already. Has anyone calculated what this ratio is for it?
>
> I'd be interested to see ODP as part of any distro first, and only after that 
> speculate what other distros may or may not say. E.g. Ubuntu seems to accept  
> packages that are only for single arch, e.g.:
> librte-pmd-fm10k17.05 (= 17.05.2-0ubuntu1) [amd64, i386]  <<< Intel Red Rock 
> Canyon net driver, provided only for x86
>
> -Petri
>
>


Re: [lng-odp] generic core + HW specific drivers

2017-10-05 Thread Honnappa Nagarahalli
On 5 October 2017 at 05:09, Savolainen, Petri (Nokia - FI/Espoo)
 wrote:
> No HTML mails, please.
>
>
> From: Bill Fischofer [mailto:bill.fischo...@linaro.org]
> Sent: Wednesday, October 04, 2017 3:55 PM
> To: Savolainen, Petri (Nokia - FI/Espoo) 
> Cc: Andriy Berestovskyy ; Ola 
> Liljedahl ; lng-odp@lists.linaro.org
> Subject: Re: [lng-odp] generic core + HW specific drivers
>
>
>
> On Wed, Oct 4, 2017 at 7:47 AM, Savolainen, Petri (Nokia - FI/Espoo) 
>  wrote:
>
>
>> -Original Message-
>> From: Andriy Berestovskyy 
>> [mailto:mailto:andriy.berestovs...@caviumnetworks.com]
>> Sent: Tuesday, October 03, 2017 8:22 PM
>> To: Savolainen, Petri (Nokia - FI/Espoo) 
>> ; Ola
>> Liljedahl ; mailto:lng-odp@lists.linaro.org
>> Subject: Re: [lng-odp] generic core + HW specific drivers
>>
>> Hey,
>> Please see below.
>>
>> On 03.10.2017 10:12, Savolainen, Petri (Nokia - FI/Espoo) wrote:
>> > So, we should be able to deliver ODP as a set of HW independent and
>> > HW specific packages (libraries). For example, minimal install would
>> >  include only odp, odp-linux and odp-test-suite, but when on arm64
>> > (and especially when on ThunderX) odp-thunderx would be installed
>>
>> There are architecture dependencies (i.e. i386, amd64, arm64 etc), but
>> there are no specific platform dependencies (i.e. Cavium ThunderX,
>> Cavium Octeon, NXP etc).
>>
>> In other words, there is no such mechanism in packaging to create a
>> package odp, which will automatically install package odp-thunderx only
>> on Cavium ThunderX platforms.
>
> I'd expect that ODP main package (e.g. for arm64) would run a script (written 
> by us) during install which digs out information about the system and sets up 
> ODP paths accordingly. E.g. libs/headers from the odp-thunderx package would 
> be added to search paths when installing into a ThunderX system. If the system is 
> not recognized,  ODP libs/header paths would point into odp-linux.
>
> That's still trying to make this a static configuration that can be done at 
> install time. What about VMs/containers that are instantiated on different 
> hosts as they are deployed? This really needs to be determined at run time, 
> not install time.
>
>
>
> Also with a VM, all arm64 ODP packages would be present, and the problem to 
> solve would be which implementation to use (to link against). If a run time 
> code can probe the host system (e.g. are we on ThunderX), so does a script. 
> An ignorant user might not run additional scripts and thus be left with the 
> default setup (odp-linux). A more aware user would run an additional script 
> before launching/building any ODP apps. This script would notice that we have 
> e.g. ThunderX HW and would change ODP paths to point into odp-thunderx 
> libs/headers. The HW discovery could be as simple as cloud administrator 
> updating VM bootparams with SoC model information.
>
>
>>
>> All other projects you are mentioning (kernel, DPDK, Xorg) use
>> architecture dependency (different packages for different architectures)
>> combined with run time configuration/probing. A kernel driver might be
>> installed, but it will be unused until configured/probed.
>
> Those projects aim to maximize code re-use of the core part and minimize size 
> of the driver part. Optimally, we'd do the opposite - minimize the core part 
> to zero and dynamically link application directly to the right "driver" (== 
> HW specific odp implementation).
>
> If there's no core part, run time probing is not needed - install time 
> probing and lib/header path setup is enough.
>
> You're describing the embedded build case, which is similar to what we have 
> today with --enable-abi-compat=no. That's not changing. We're only talking 
> about what happens for --enable-abi-compat=yes builds.
>
>
>
> No, I'm pointing out that the more common core SW there is, the more 
> trade-offs there are and the less direct HW access == less performance. For optimal 
> performance, the amount of common core SW is zero.
>
> You may very well want to build and run non-ABI compat applications also in a 
> VM. Non-ABI compat just means that you are not going to run *the same 
> application image* with another implementation/system. You may still build 
> locally on the VM, or have bunch of (non-ABI compat) application images 
> pre-built - one per implementation/system.
>
>
>>
>> To support multiple platforms, runtime configuration/probing is
>> inevitable. The real question is who does this probing: ODP itself or
>> each application independently. To avoid code duplication, ODP looks
>> like a better choice...
>
> Install time probe/conf would be the best choice. The second best would be a 
> dummy "core ODP" implementation which would be just a giant function pointer 
> array (redirect every ODP API call to its implementation in a HW specific 
> lib).
>
> That's effectively what this is, except that the populat

Re: [lng-odp] generic core + HW specific drivers

2017-10-05 Thread Honnappa Nagarahalli
Bob Monkman commented on 'platform discovery' during my talk at
Connect. His thought is that with Intel EPA (Enhanced Platform
Awareness) and such, the orchestrator will have in-depth data about
the platform. It will have information about the make and model of a
NIC/hardware accelerator, for example. He promised to provide a few
documents; I am following up.

Thanks,
Honnappa

On 4 October 2017 at 08:46, Francois Ozog  wrote:
> Hi Bogdan,
>
> cloud manager is a loosely defined entity ;-)
>
> In the context of NFV orchestration will not deal with this.
>
> VNF manager may but there is lack of "sensing" information.
>
> If you think of an AWS/Azure image, then this simply does not work.
>
> FF
>
> On 4 October 2017 at 15:27, Bogdan Pricope 
> wrote:
>
>> There are (at least) three cases:
>>
>> 1.   Discovery is done by odp
>>
>> 2.   Discovery is done by application
>>
>> 3.   Discovery is done by a third party entity
>>
>>
>>
>> For cloud, I would expect a cloud administrator entity will know
>> exactly the configuration of each ‘target’ and this can be provided to
>> ‘target’ as an environment variable or a file.
>>
>>
>> This information can be used to:
>>
>> 1.   Generate an odp.conf file (with a predefine structure),
>> identifying the module(s) to load.
>>
>>
>> e.g.
>>
>> module:
>>
>> {
>>
>>modules = ("libodp_thunderx.so.0");
>>
>> };
>>
>>
>>
>> 2.   Download the actual module libs (e.g. from a network drive)
>> if cannot be deployed with the application (at the same time)
>>
>>
>>
>> Ultimately, odp.conf stored in a predetermined location or indicated
>> as an environment variable will be used by the odp-core library to load
>> the module(s).
>>
>>
>>
>> ODP_SYSCONFIG_FILE=/tmp/odp.conf ./example/generator/odp_generator -I
>> 1 -m r -c 0x8
>>
>>
>>
>> /Bogdan
>>
>> On 4 October 2017 at 15:54, Bill Fischofer 
>> wrote:
>> > On Wed, Oct 4, 2017 at 7:47 AM, Savolainen, Petri (Nokia - FI/Espoo) <
>> > petri.savolai...@nokia.com> wrote:
>> >
>> >>
>> >>
>> >> > -Original Message-
>> >> > From: Andriy Berestovskyy [mailto:Andriy.Berestovskyy@
>> caviumnetworks.com
>> >> ]
>> >> > Sent: Tuesday, October 03, 2017 8:22 PM
>> >> > To: Savolainen, Petri (Nokia - FI/Espoo) > >;
>> >> Ola
>> >> > Liljedahl ; lng-odp@lists.linaro.org
>> >> > Subject: Re: [lng-odp] generic core + HW specific drivers
>> >> >
>> >> > Hey,
>> >> > Please see below.
>> >> >
>> >> > On 03.10.2017 10:12, Savolainen, Petri (Nokia - FI/Espoo) wrote:
>> >> > > So, we should be able to deliver ODP as a set of HW independent and
>> >> > > HW specific packages (libraries). For example, minimal install would
>> >> > >  include only odp, odp-linux and odp-test-suite, but when on arm64
>> >> > > (and especially when on ThunderX) odp-thunderx would be installed
>> >> >
>> >> > There are architecture dependencies (i.e. i386, amd64, arm64 etc), but
>> >> > there are no specific platform dependencies (i.e. Cavium ThunderX,
>> >> > Cavium Octeon, NXP etc).
>> >> >
>> >> > In other words, there is no such mechanism in packaging to create a
>> >> > package odp, which will automatically install package odp-thunderx
>> only
>> >> > on Cavium ThunderX platforms.
>> >>
>> >> I'd expect that ODP main package (e.g. for arm64) would run a script
>> >> (written by us) during install which digs out information about the
>> system
>> >> and sets up ODP paths accordingly. E.g. libs/headers from odp-thunderx
>> >> package would be added to search paths when installing into a ThunderX
>> system.
>> >> If system is not recognized,  ODP libs/header paths would point into
>> >> odp-linux.
>> >>
>> >>
>> > That's still trying to make this a static configuration that can be done
>> at
>> > install time. What about VMs/containers that are instantiated on
>> different
>> > hosts as they are deployed? This really needs to be determined at run
>> time,
>> > not install time.
>> >
>> >
>> >>
>> >> >
>> >> >
>> >> > > Package:
>> >> > > * odp * depends on: odp-linux
>> >> > > * odp-linux * depends on: odp
>> >> >  > * odp-thunderx [arm64] * depends on: odp
>> >> >
>> >> > So installing odp-thunderx we will end up with odp, odp-linux and
>> >> > odp-thunderx, so still we have switch runtime between odp-linux and
>> >> > odp-thunderx...
>> >>
>> >> I hope it's a matter of probing and installing paths on install time,
>> >> instead of runtime. It's hard to believe that ODP would be the first
>> >> package ever to choose and install a library from a set of alternative
>> >> libraries.
>> >>
>> >
>> > ODP is a pioneer in the sense that it's offering access to platform
>> > acceleration capabilities, not simple attached device variants. So we may
>> > well be first in this sense.
>> >
>> >
>> >>
>> >> >
>> >> >
>> >> > All other projects you are mentioning (kernel, DPDK, Xorg) use
>> >> > architecture dependency (different packages for different
>> architectures)
>> >> > combined with run time configuration/probing. A kernel driver might be
>> >> > in

Re: [lng-odp] generic core + HW specific drivers

2017-10-03 Thread Honnappa Nagarahalli
On 3 October 2017 at 08:51, Francois Ozog  wrote:
> Hi Maxim,
>
> Modularization is configurable so that the produced output can be either:
> - runtime selection of modules based on whatever discovery mechanism is
> available
> - fixed compilation of a set of modules
>
> As per the discovery, I would favor a similar approach to packetio: give an
> opportunity to each module to say if it is appropriate for the environment.
> This avoids a standardized way to detect platform which may not even exist:
> - A thunderX module may check for PCI bus existence, and if yes certain PCI
> entries, or any other method (devicetree? ACPI?)
> - A DPAA2 may check for DPAA2 bus existence and particular objects for each
> board, or any other method (devicetree? ACPI?)
>
I was thinking that the platform discovery could be at a higher level.
Identify a processor string via /proc or a similar entry. Then use that
string to map to a set of libraries for that platform (this mapping
could be through config files). The libraries in turn should handle
any variations in the platform.

>
> FF
>
>
> On 3 October 2017 at 15:33, Maxim Uvarov  wrote:
>
>>
>>
>> On 3 October 2017 at 15:59, Bill Fischofer 
>> wrote:
>>
>>> Good summary. The key is that RedHat and others want:
>>>
>>> 1. They build the distribution from source we provide, we don't get to
>>> provide any binaries
>>> 2. There is a single distribution they will support per-ISA (e.g.,
>>> AArch64)
>>>
>>> The modular framework is designed to accommodate these requirements by
>>> allowing a "universal core" to discover the microarchitecture at run time
>>> and the most appropriate pluggable modules to use to exploit that
>>> microarchitecture's acceleration capabilities. These modules may be
>>> precompiled along with the core if they are part of the single ODP
>>> distribution, or they may be packaged and distributed separately as the
>>> supplier of these modules wishes.
>>>
>>> At the same time, this universal core can be statically linked for a
>>> specific target platform to accommodate the needs of embedded
>>> applications.
>>> In this case the discovery and call-indirection goes away so there should
>>> be no more overhead than exists in today's ODP when switching between
>>> --enable-abi-compat=[yes|no]
>>>
>>>
>> I have nothing against the modularization components in linux-gen, but can
>> we be more consistent? Either we talk about runtime discovery or we talk
>> about configuration files. One day I hear people talk about discovery, the
>> next day about configuration files.
>> Does somebody understand how that discovery will work? How will ODP in a
>> guest figure out during discovery what to use? Scan PCI devices or some
>> drivers? With drivers that is more clear, as with other ODP components.
>>
>> I think that people are more afraid that changes to the current master will
>> bring side effects such as more complexity and less performance.
>> Maintaining new work in a separate branch is complex, but I think we need to
>> define requirements for when code can be merged to master.
>> These requirements from my side can be:
>> - no performance degradation according to current code;
>> - feature completeness (which means the module framework works, we can change
>> the scheduler, do discovery on at least some platforms, and some pktio from
>> the module DDF works);
>> - code is formed as clean patches on top of current master and passes
>> merging review (I'm not sure how many people review current cloud-dev
>> branch work);
>> - at least several platforms are merged together with that framework to
>> show it's effectiveness;
>>
>> In my understanding if all 4 items are met nobody will have objections.
>>
>> Another option might be to shift priorities a little bit. Instead of changing
>> everything to modules, change one or two things, merge a second platform and
>> make it work. Then send clean patches for review.
>>
>> Any thoughts on that?
>>
>> Thank you,
>> Maxim.
>>
>>
>>
>>> On Tue, Oct 3, 2017 at 5:12 AM, Francois Ozog 
>>> wrote:
>>>
>>> > Thanks Ola and Petri.
>>> >
>>> > Let's talk about use cases first.
>>> >
>>> > Go to market for ODP applications may be:
>>> >
>>> >- A product composed of software and hardware (typically a NEP
>>> approach
>>> >such as Nokia)
>>> >- A software to be installed by a system administrator of an
>>> enterprise
>>> >- A "service" to be part of a cloud offering (say an AWS image)
>>> >- A VNF to be deployed on a wide variety, apriori unknown, of
>>> hardware
>>> >as a VM
>>> >- An image to be deployed on bare metal clouds (packet.net or OVH
>>> for
>>> >instance) with hardware diversity
>>> >
>>> > As a result, an ODP application may be :
>>> >
>>> >1. Deployed as a single installed image and instantiated in different
>>> >virtualized or bare metal clouds
>>> >2. A VM is live migrated between two asymetric compute nodes
>>> >3. Installed on a specific machine
>>> >4. Deployed as an image that is to be instantiated on a s

Re: [lng-odp] generic core + HW specific drivers

2017-10-03 Thread Honnappa Nagarahalli
On 3 October 2017 at 07:59, Bill Fischofer  wrote:
> Good summary. The key is that RedHat and others want:
>
> 1. They build the distribution from source we provide, we don't get to
> provide any binaries
> 2. There is a single distribution they will support per-ISA (e.g., AArch64)
>
> The modular framework is designed to accommodate these requirements by
> allowing a "universal core" to discover the microarchitecture at run time
> and the most appropriate pluggable modules to use to exploit that
> microarchitecture's acceleration capabilities. These modules may be
> precompiled along with the core if they are part of the single ODP
> distribution, or they may be packaged and distributed separately as the
> supplier of these modules wishes.
>
> At the same time, this universal core can be statically linked for a
> specific target platform to accommodate the needs of embedded applications.
> In this case the discovery and call-indirection goes away so there should
> be no more overhead than exists in today's ODP when switching between
> --enable-abi-compat=[yes|no]

Agree. We are currently looking into how the inline functions can be
enabled when building for a specific target platform.

>
> On Tue, Oct 3, 2017 at 5:12 AM, Francois Ozog 
> wrote:
>
>> Thanks Ola and Petri.
>>
>> Let's talk about use cases first.
>>
>> Go to market for ODP applications may be:
>>
>>- A product composed of software and hardware (typically a NEP approach
>>such as Nokia)
>>- A software to be installed by a system administrator of an enterprise
>>- A "service" to be part of a cloud offering (say an AWS image)
>>- A VNF to be deployed on a wide variety, apriori unknown, of hardware
>>as a VM
>>- An image to be deployed on bare metal clouds (packet.net or OVH for
>>instance) with hardware diversity
>>
>> As a result, an ODP application may be :
>>
>>1. Deployed as a single installed image and instantiated in different
>>virtualized or bare metal clouds
>>2. A VM is live migrated between two asymetric compute nodes
>>3. Installed on a specific machine
>>4. Deployed as an image that is to be instantiated on a single hardware
>>platform
>>
>> Irrespective of commercial Linux distribution acceptance, case 3 and 4 can
>> accommodate a static deployment paradigm where the hardware dependent
>> package is selected at installation time. Those cases corresponds to a
>> system integrator, an network equipment provider that builds a product for
>> a specific hardware platform.
>>
>> On the other hand, case 1 and 2 need a run time adaptation of the
>> framework. Case 2 may in fact be more about platforms of the same type but
>> with different plugged NICs and accelerators. While technically feasible
>> (yet very complex), I don't expect to deal with live migration between
>> Cavium and NXP or even Cavium ThunderX and Cavium ThunderX/2.
>> So case 1 is essentially addressing the needs of ISVs that do NOT sell a
>> bundle of software and hardware as a product. You can call it software
>> appliance.
>>
>> Ola, on the Xorg thing: yes it says that xorg.conf is now determined at
>> runtime... But if you have concrete experience changing graphics cards, or
>> supporting CPU-integrated graphics in addition to an external GPU, you
>> will face trouble and find a lot of literature about achieving the results
>> or recovering from depressive situations...
>>
>>
>> The modular framework allows one ODP implementation to be considered as a
>> single module and loaded at runtime to solve case 1, 3 and 4. Those modules
>> may still be deployed as separate packages. The initial idea was to split
>> the implementation in more modules but it may not make that much sense
>> after giving more thoughts. Additional native drivers and the DDF itself
>> may be considered as separate modules and also distributed as separate
>> packages.
>> So we would have:
>> - odp-core
>> - odp-linux required module that provides AF packet and other packetios;
>> depends on odp-core
>> - odp-ddf optional module that provides dynamic pluggable hardware support;
>> depends on odp-core
>> - odp- optional modules for the various native NIC support; depends on
>> odp-ddf
>> - odp- optional modules to deal with SoC specific arch (ThunderX,
>> ThunderX/2, DPAA2...); depends on odp-core
>>
>> The odp- is derived from the current native SoC implementation
>> but needs to leverage odp-mbuf and the new mempool facilities to allow a
>> diversity of packetios to live together in a single platform; the rest is
>> entirely proprietary.
>>
>> The static and dynamic approaches are not mutually exclusive. I highly
>> recommend that the static (case 3 and 4) approach is driven by individual
>> members should they need it while we collectively solve the broader cloud
>> (case 1) deployment.
>>
>> Cheers
>>
>> FF
>>
>> On 3 October 2017 at 10:12, Savolainen, Petri (Nokia - FI/Espoo) <
>> petri.savolai...@nokia.com> wrote:
>>
>> > > -Original Messa

Re: [lng-odp] Future of per-arch-platform ABI spec files

2017-09-19 Thread Honnappa Nagarahalli
I think Brian had similar comments. He will be in the office tomorrow.

Thanks,
Honnappa

On 19 September 2017 at 07:03, Dmitry Eremin-Solenikov
 wrote:
> Hello,
>
> I have been poking around per-arch-platform ABI spec files.
> Currently all architectures just include default specs. Do we have any
> particular use case for these separate files? Otherwise I'd suggest to
> drop them completely and just leave 'default' in place.
>
> --
> With best wishes
> Dmitry


[lng-odp] PR #139 discussion

2017-09-14 Thread Honnappa Nagarahalli
Hi,
I saw that PR #139 has a commit to remove the dpdk pkt I/O from
linux-generic. It is understood very clearly that we will have only
one dpdk pkt I/O, which is from linux-dpdk. My only concern is the
timing (I understand that we had already agreed on timing as well :)).

The concern I have is that we need to benchmark and optimize the
pool/packet/buffer components that we are using from linux-generic.
Right now, these components will not be used in linux-dpdk. If we
remove linux-generic's dpdk pkt I/O, we may not have an efficient
enough pkt I/O to test these components. Another option could be to
use netmap, but I am not sure how it compares against linux-generic's
pkt I/O. Does anyone have more information on this?

My suggestion is to keep linux-generic's dpdk pkt I/O around till we
have at least one ODP native driver using which we can test the above
mentioned components.

Any other suggestions?

Thank you,
Honnappa


Re: [lng-odp] IPsec API finalization

2017-09-12 Thread Honnappa Nagarahalli
On 7 September 2017 at 03:39, Peltonen, Janne (Nokia - FI/Espoo)
 wrote:
> Hi,
>
> Comments below:
>
> Bill Fischofer wrote:
>> As discussed during today's ARCH call, we need to close on the
>> remaining IPsec confusion surrounding the definition of
>> odp_ipsec_sa_disable().
>>
>> The main scenario that this API is designed to address, from an
>> application standpoint is as follows:
>>
>> - Application has one or more active SAs that are receiving inline RX
>> IPsec traffic.
>>
>> - An active SA needs to be deactivated / destroyed for some reason
>>
>> - Application needs to be sure that "in flight" events that it is not
>> yet aware of are in a defined state.
>>
>> odp_ipsec_sa_disable() allows the application to tell the ODP
>> implementation that it wants to deactivate an SA so that it can be
>> destroyed. In response to this the implementation is expected to:
>>
>> - Stop making internal use of this SA
>>
>> - Tell the application when the SA is quiesced as far as the
>> implementation is concerned.
>>
>> As currently defined, the way this second step is communicated to the
>> application is via a status event posted to the event queue associated
>> with the SA.
>>
>> The main point of the discussion is that NXP HW does not have a direct
>> analog to this API. As Nikhil explained it, the "quiescing" function
>> is assumed to be an application responsibility such that by the time
>> an application calls odp_ipsec_sa_destroy() it already knows that the
>> SA is inactive.
>>
>> When an application is using IPsec in lookaside mode this is a not
>> unreasonable requirement, as the application explicitly invokes every
>> IPsec operation, so it knows which operations are in flight.
>
> Yes, except that with fragmentation offload combined with IPsec
> offload the application cannot easily know when it has received all
> the result packets generated by single IPsec operation. If there
> were always one result packet per operation, simple SA reference
> counting in the application would work.
>
> We discussed this with Petri when designing the API and considered
> various options but concluded that the disable completion event
> is a simple solution with low overhead when an SA is not being
> disabled. And since event queues are a central concept of ODP and
> of its asynchronous APIs, informing the disable completion through
> an event seems natural.
>
> Other options that do not work that well:
>
> - Application inspects the result fragments to see when it has got
>   them all. This would require almost the same work as fragment
>   reassembly and it would introduce state that would have to be
>   synchronized between multiple threads (since the result packets
>   may get scheduled to different threads).
>
> - ODP implementation marks the last result fragment with a special
>   bit. This may perhaps be nasty for some HW implementations that
>   parallelize the processing and do not know which of the fragments
>   gets queued the last. If the application reference counts the
>   SAs (as it most likely has to do to safely destroy the SA and
>   safely delete its own state for the SA), it can unref an SA
>   when it sees the marked "last" result fragment but only after
>   it has processed the other fragments. This imposes synchronization
>   overhead (e.g. ordered lock if the SA queue is ordered and
>   scheduled) for the normal processing path when the SA is not
>   being deleted.
>
> - Variations of the above (e.g. returning the total number of result
>   fragments for one IPsec operation in all the results) have
>   similar problems regarding parallelization in the implementation
>   and synchronization need in the application.
>
> The disable completion event also requires synchronization (e.g.
> in the form of an ordered lock, but only when the SA is being deleted).
>
>>
>> For inline processing, however, it's less clear what the application
>> can do since packets may be in process that the application has yet to
>> see. If an application calls odp_ipsec_sa_destroy() the implementation
>> can pick a point after which all packets matching that SA are
>> ignored/dropped, but the issue is what about packets that have been
>> processed and are sitting in event queues that the application is not
>> yet aware of?
>>
>> If an application receives an event that was processed under a
>> destroyed SA, the concern is that it not segfault or experience some
>> other anomaly trying to access any application context that it had
>> associated with that now-defunct SA.
>>
>> The solution proposed in PR #109[1] is that odp_ipsec_sa_disable() be
>> allowed to complete synchronously rather than asynchronously, which
>> means that the NXP implementation can treat it as an effective NOP
>> since it doesn't map to any HW function. While this solves one
>> problem, there is still the problem of how to deal with events that
>> are already enqueued at the time of this call. The intent of
>> odp_ipsec_sa_disable() is that after the
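
The application-side SA reference counting that Janne mentions can be sketched
in plain C with C11 atomics. This is a hypothetical stand-in, not ODP API code:
app_sa_t and all function names here are invented for illustration. The idea is
that the application takes one reference per in-flight result it expects and the
SA is destroyed only when the count drops to zero after disable.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdlib.h>

/* Hypothetical application-side SA state; not part of the ODP API. */
typedef struct {
    atomic_int refcnt;    /* in-flight results + 1 base ref for the app */
    atomic_bool disabled; /* set once no new operations are submitted */
} app_sa_t;

static app_sa_t *sa_create(void) {
    app_sa_t *sa = malloc(sizeof(*sa));
    atomic_init(&sa->refcnt, 1);  /* base reference held by the app */
    atomic_init(&sa->disabled, false);
    return sa;
}

/* Called before submitting an IPsec operation on this SA. */
static void sa_ref(app_sa_t *sa) {
    atomic_fetch_add(&sa->refcnt, 1);
}

/* Called when a result event for this SA has been fully processed.
 * Returns true if this call released the last reference and freed the SA. */
static bool sa_unref(app_sa_t *sa) {
    if (atomic_fetch_sub(&sa->refcnt, 1) == 1) {
        free(sa);
        return true;
    }
    return false;
}

/* Disable drops the base reference; destruction happens when the last
 * in-flight result is consumed. Returns true if the SA was freed here. */
static bool sa_disable(app_sa_t *sa) {
    atomic_store(&sa->disabled, true);
    return sa_unref(sa);
}
```

As Janne points out, this scheme breaks down with fragmentation offload: one
IPsec operation may produce several result packets, so the application does not
know how many sa_unref() calls to expect per operation, which is exactly what
makes the disable completion event attractive.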

Re: [lng-odp] IPsec API finalization

2017-09-12 Thread Honnappa Nagarahalli
On 8 September 2017 at 06:10, Janne Peltonen  wrote:
>
>
> On Fri, 8 Sep 2017, Nikhil Agarwal wrote:
>> On 7 September 2017 at 14:09, Peltonen, Janne (Nokia - FI/Espoo)
>>  wrote:>
>> > Bill Fischofer wrote:
>> > >  if (odp_ipsec_result(&ipsec_res, pkt) < 0) { /* Stale
>> > > event, discard */
>>
>> > There is a race condition here if the idea is that calling
>> > odp_ipsec_sa_disable() (or destroy()) marks the SA stale to prevent
>> > further processing for that SA in the result event handling.
>>
>> > Maybe the odp_ipsec_result() succeeds but the SA becomes stale
>> > right after that, before rest of the processing is done.
>>
>> The same race condition exists in the proposed API when one thread has
>> received the last packet of an SA and is still processing it while another
>> thread, on receiving the disable event, destroys the SA; the application
>> will always need to synchronize its own threads.
>
> There is no race in the API, only in incorrect use of it. As I explained,
> synchronization is needed even with the disable event and it can be
> easily done using an ordered lock that ensures that other event handling
> threads are finished processing the SA (they have released the queue
> context through another call to schedule or explicitly). So the event
> handler for the disable event can destroy the SA safely without any race
> if it uses an ordered lock (or if the SA queue is atomic).
>
This would mean that all IPsec processing would have to happen using
ordered queues even when there is no need. For example, in the encryption
and lookaside case, the packets do not need to be processed in order.

> I wrote a piece of pseudocode about this in my reply, maybe you missed it.
>
>>
>> Let me reiterate the solution we discussed yesterday. After the SA is
>> disabled successfully, implementations will stop enqueuing any more events
>> from that SA, and any call to odp_ipsec_result will indicate that the SA is
>> disabled (the SA entry is still valid, though). The application may choose
>> to synchronize its database and process that packet, or may drop it. Then
>> it will call odp_ipsec_sa_destroy, which will delete the SA, and any
>> further call to odp_ipsec_result pertaining to that SA will result in an
>> invalid event. Hope this resolves the issue.
>
> That clarifies the API you are proposing but not how you intend it to
> be used. It is easy to say that an application just should do whatever
> synchronization is necessary.
>
> That said, I think your proposal could be made to work from application
> perspective, but with some inconvenience:
>
> After the packet event handler checks (for every IPsec packet event)
> that the event is not stale, it has to prevent odp_ipsec_sa_destroy()
> from being called in any other thread until it no longer uses the SA.
> While this could be done e.g. using an RCU based or epoch based
> synchronization scheme to delay the destroy() just long enough, it
> may perhaps not be easy for all application authors to do so since
> ODP does not really provide anything other than the basic atomics for that.
>
RCU is a mechanism which needs to be tweaked to the actual application
that uses it. I do not think ODP needs to provide any support for RCU
based approach. This is secondary, can be discussed further.

But the method you have described above is a good solution. I think
the IPsec case we are trying to solve can be generalized as: how to
synchronize deletion of state between the data plane and control plane
cores. For example, take the case of route deletion: there needs to be a
synchronization mechanism between the control plane core and the data
plane cores to make sure the data plane cores do not seg-fault when the
route is deleted. That is, synchronization issues also exist in parts of
the application that do not need ODP or do not use any accelerators. So
the application already has mechanisms to deal with these issues and can
apply them to IPsec as well.
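
One such mechanism is quiescent-state based deferred deletion. A minimal,
self-contained C11 sketch follows; the names (grace_t, worker_quiescent, etc.)
and the two-worker setup are invented for illustration and are not part of any
ODP API. The control plane unlinks the object (a route, an SA, ...) from the
lookup structures, snapshots per-worker counters, and frees the object only
after every worker has passed a quiescent point.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

#define NUM_WORKERS 2

/* One counter per data plane worker, bumped each time the worker
 * finishes a burst and holds no references to shared lookup state. */
static atomic_ulong qs_cnt[NUM_WORKERS];

static void worker_quiescent(int id) {
    atomic_fetch_add(&qs_cnt[id], 1);
}

/* Snapshot of all worker counters, taken by the control plane right
 * after unlinking the object from the lookup structures. */
typedef struct {
    unsigned long snap[NUM_WORKERS];
} grace_t;

static grace_t grace_begin(void) {
    grace_t g;
    for (int i = 0; i < NUM_WORKERS; i++)
        g.snap[i] = atomic_load(&qs_cnt[i]);
    return g;
}

/* Once every worker has passed a quiescent point after the snapshot,
 * no worker can still hold a reference and the object may be freed. */
static bool grace_elapsed(const grace_t *g) {
    for (int i = 0; i < NUM_WORKERS; i++)
        if (atomic_load(&qs_cnt[i]) <= g->snap[i])
            return false;
    return true;
}
```

In a real deployment the control plane would poll grace_elapsed() (or block with
a sleep/yield loop) before freeing; the split into begin/check keeps the sketch
deterministic and avoids tying it to any particular threading model.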


> The disable completion event is, IMHO, cleaner for ODP API. It is
> not clear to me if it is impossible to provide it or merely difficult.
>
> Janne
>


Re: [lng-odp] Supporting ODP_PKTIO_OP_MT_SAFE

2017-09-11 Thread Honnappa Nagarahalli
On 10 September 2017 at 22:26, Brian Brooks  wrote:
> Honnappa,
>
> Could your proposal be simplified to: MT-safe pktio should be
> deprecated because it is not a common use case. Applications will
> either use MT-unsafe pktio or the MT-safe scheduler.

Not sure about deprecation. It looks like there is some history to this
feature; we need to consider all of that.

>
>> 1) Polling method - in which one pkt I/O will be created for each receive 
>> worker thread. In this case, support for ODP_PKTIO_OP_MT_SAFE is not 
>> required.
>
> Absence of MT-safe does not require 1:1 mapping of thread to pktio. It
> just means that it is the application's responsibility to ensure
> exclusive access to a pktio.
>
>> for high throughput packet I/Os [..] we do not need to support 
>> ODP_PKTIO_OP_MT_SAFE
>> We could keep the support for ODP_PKTIO_OP_MT_SAFE for other pkt I/Os.
>
> This would introduce an undesirable leaky abstraction.

>
> BB
>
> On Sun, Sep 10, 2017 at 12:40 PM, Bill Fischofer
>  wrote:
>> We can discuss this during tomorrow's ARCH call, and probably further
>> at Connect. MT Safe is the default behavior and it's opposite ("MT
>> Unsafe") was added as a potential optimization when applications
>> assure implementations that only a single thread will be polling a
>> PktIn queue or adding to a Pktout queue.
>>
>> Ideally, we'd like to retire all application I/O polling and use the
>> scheduler exclusively, but that's that's a longer-term goal. For now
>> we have both.
>>
>> On Sun, Sep 10, 2017 at 8:11 AM, Honnappa Nagarahalli
>>  wrote:
>>> Hi,
>>> I think there are 2 ways in which pkt I/O will be used:
>>>
>>> 1) Polling method - in which one pkt I/O will be created for each
>>> receive worker thread. In this case, support for ODP_PKTIO_OP_MT_SAFE
>>> is not required.
>>> 2) Event method - the scheduler is used to receive packets. In this
>>> case the scheduler will provide the exclusive access to a pkt I/O.
>>> Again in this case support for ODP_PKTIO_OP_MT_SAFE is not required.
>>>
>>> I am thinking, for high throughput packet I/Os such as dpdk or ODP
>>> native drivers (in the future), we do not need to support
>>> ODP_PKTIO_OP_MT_SAFE. The odp_pktio_open API call can return an error
>>> if ODP_PKTIO_OP_MT_SAFE is asked for.
>>>
>>> We could keep the support for ODP_PKTIO_OP_MT_SAFE for other pkt I/Os.
>>>
>>> This will save space in cache for the locks as well as instruction cycles.
>>>
>>> Thank you,
>>> Honnappa


Re: [lng-odp] Supporting ODP_PKTIO_OP_MT_SAFE

2017-09-11 Thread Honnappa Nagarahalli
We can come up with use cases. My question is what do we expect to get
deployed? What is required for quick development might be different
from what is required for deployment.

With NICs supporting multiple queues, will we have a case of fewer
pktins than the number of cores?

Thanks,
Honnappa


On 11 September 2017 at 04:55, Maxim Uvarov  wrote:
> On 11 September 2017 at 12:11, Bogdan Pricope 
> wrote:
>
>> Hi,
>>
>> There is the case where a pktio has fewer pktins than available
>> cores. Is it a valid case? Do we want to support it?
>> For example: 4 pktins and 8 cores... or default (socket) pktio with
>> one pktin/one pktout?
>>
>> Best regards,
>> Bogdan
>>
>
> I think yes - there should not be a limitation.
>
> Maxim.
>
>
>>
>> On 11 September 2017 at 11:33, Maxim Uvarov 
>> wrote:
>> > On 11 September 2017 at 06:26, Brian Brooks 
>> wrote:
>> >
>> >> Honnappa,
>> >>
>> >> Could your proposal be simplified to: MT-safe pktio should be
>> >> deprecated because it is not a common use case. Applications will
>> >> either use MT-unsafe pktio or the MT-safe scheduler.
>> >>
>> >> > 1) Polling method - in which one pkt I/O will be created for each
>> >> receive worker thread. In this case, support for ODP_PKTIO_OP_MT_SAFE is
>> >> not required.
>> >>
>> >
>> >
>> > That is not always the case. One pktio can be used in several worker
>> > threads. For that case, MT-safe is needed.
>> >
>> > All modes are:
>> > /**
>> >  * Packet input mode
>> >  */
>> > typedef enum odp_pktin_mode_t {
>> > /** Direct packet input from the interface */
>> > ODP_PKTIN_MODE_DIRECT = 0,
>> > /** Packet input through scheduler and scheduled event queues */
>> > ODP_PKTIN_MODE_SCHED,
>> > /** Packet input through plain event queues */
>> > ODP_PKTIN_MODE_QUEUE,
>> > /** Application will never receive from this interface */
>> > ODP_PKTIN_MODE_DISABLED
>> > } odp_pktin_mode_t;
>> >
>> > /**
>> >  * Packet output mode
>> >  */
>> > typedef enum odp_pktout_mode_t {
>> > /** Direct packet output on the interface */
>> > ODP_PKTOUT_MODE_DIRECT = 0,
>> > /** Packet output through event queues */
>> > ODP_PKTOUT_MODE_QUEUE,
>> > /** Packet output through traffic manager API */
>> > ODP_PKTOUT_MODE_TM,
>> > /** Application will never send to this interface */
>> > ODP_PKTOUT_MODE_DISABLED
>> > } odp_pktout_mode_t;
>> >
>> >
>> > For DIRECT, QUEUE and TM, the implementation can have different
>> > optimizations depending on whether the same pktio, or an in/out queue
>> > connected to that pktio, is used in a single thread or shared between
>> > threads. The application cannot provide synchronization in that case
>> > because locking should be done at a low level for a short period of
>> > time. Locking around ODP calls would significantly slow down data path
>> > functions.
>> >
>> > Best regards,
>> > Maxim.
>> >
>> >
>> >
>> >
>> >>
>> >> Absence of MT-safe does not require 1:1 mapping of thread to pktio. It
>> >> just means that it is the application's responsibility to ensure
>> >> exclusive access to a pktio.
>> >>
>> >> > for high throughput packet I/Os [..] we do not need to support
>> >> ODP_PKTIO_OP_MT_SAFE
>> >> > We could keep the support for ODP_PKTIO_OP_MT_SAFE for other pkt I/Os.
>> >>
>> >> This would introduce an undesirable leaky abstraction.
>> >>
>> >> BB
>> >>
>> >> On Sun, Sep 10, 2017 at 12:40 PM, Bill Fischofer
>> >>  wrote:
>> >> > We can discuss this during tomorrow's ARCH call, and probably further
>> >> > at Connect. MT Safe is the default behavior and it's opposite ("MT
>> >> > Unsafe") was added as a potential optimization when applications
>> >> > assure implementations that only a single thread will be polling a
>> >> > PktIn queue or adding to a Pktout queue.
>> >> >
>> >> > Ideally, we'd like to retire all application I/O polling and use the
>> >> > scheduler exclusively, but that's a longer-term

[lng-odp] Supporting ODP_PKTIO_OP_MT_SAFE

2017-09-10 Thread Honnappa Nagarahalli
Hi,
I think there are 2 ways in which pkt I/O will be used:

1) Polling method - in which one pkt I/O will be created for each
receive worker thread. In this case, support for ODP_PKTIO_OP_MT_SAFE
is not required.
2) Event method - the scheduler is used to receive packets. In this
case the scheduler will provide the exclusive access to a pkt I/O.
Again in this case support for ODP_PKTIO_OP_MT_SAFE is not required.

I am thinking, for high throughput packet I/Os such as dpdk or ODP
native drivers (in the future), we do not need to support
ODP_PKTIO_OP_MT_SAFE. The odp_pktio_open API call can return an error
if ODP_PKTIO_OP_MT_SAFE is asked for.

We could keep the support for ODP_PKTIO_OP_MT_SAFE for other pkt I/Os.

This will save space in cache for the locks as well as instruction cycles.

Thank you,
Honnappa
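
The proposal above - returning an error from odp_pktio_open when
ODP_PKTIO_OP_MT_SAFE is requested on a driver that does not provide locks - can
be sketched with simplified stand-in types. The real API takes an
odp_pktio_param_t; the driver table, op_mode_t, and pktio_open() below are
invented for illustration only.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-ins for the real ODP open parameters. */
typedef enum { OP_MT_SAFE, OP_MT_UNSAFE } op_mode_t;

typedef struct {
    const char *name;
    int mt_safe_supported; /* driver capability */
} driver_t;

static const driver_t drivers[] = {
    { "dpdk",   0 }, /* high-throughput: no internal locking offered */
    { "socket", 1 }, /* generic pktio keeps MT-safe support */
};

/* Returns 0 on success, -1 if the interface is unknown or the
 * requested operation mode is unsupported by its driver. */
static int pktio_open(const char *name, op_mode_t mode) {
    for (size_t i = 0; i < sizeof(drivers) / sizeof(drivers[0]); i++) {
        if (strcmp(drivers[i].name, name))
            continue;
        if (mode == OP_MT_SAFE && !drivers[i].mt_safe_supported)
            return -1; /* proposal: fail fast instead of silent locking */
        return 0;
    }
    return -1; /* unknown interface */
}
```

Failing at open time makes the capability mismatch visible to the application
immediately, which is the alternative to the leaky abstraction Brian warns
about.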


[lng-odp] Modular Pkt I/O - Status

2017-08-28 Thread Honnappa Nagarahalli
Hi,
I have captured the status of Modular Pkt I/O after PR 139. I have
also documented an open item for discussion.

https://docs.google.com/a/linaro.org/document/d/1mT5vQ766fn2pCtUUGGJAjnJINNfHO6VcExDOgfVZsFI/edit?usp=sharing

Hi Nikhil/Bala,
Can you please take a look at the document? I need your inputs on
your pkt I/O design/implementation.

I will add it to Tuesday's ODP-Cloud call.

Thank you,
Honnappa


[lng-odp] Tagging cloud-dev branch

2017-08-27 Thread Honnappa Nagarahalli
Hi Maxim/Bill,
 I liked Bill's idea of tagging the cloud-dev branch so that
performance at various stages can be measured. Yi has already merged
linux-dpdk's modularized mempool (PR 138). We need to revert it, tag
cloud-dev, merge PR 138, and tag cloud-dev again; i.e., we need to tag
cloud-dev both before and after PR 138.

Since Yi is on vacation, can one of you do this?

Thank you,
Honnappa


Re: [lng-odp] ODP_EVENT_CRYPTO_COMPL and ODP_EVENT_IPSEC_STATUS

2017-08-09 Thread Honnappa Nagarahalli
On 9 August 2017 at 15:54, Bill Fischofer  wrote:
>
>
> On Wed, Aug 9, 2017 at 10:55 AM, Honnappa Nagarahalli
>  wrote:
>>
>> Hi,
>>   I believe ODP_EVENT_CRYPTO_COMPL is being deprecated. I am trying to
>> understand ODP_EVENT_IPSEC_STATUS. Are we going to have a pool type
>> corresponding to ODP_EVENT_IPSEC_STATUS (like we have for buffers,
>> packets and timeouts)?
>
>
> How implementations manage events is up to them. In Dmitry's IPsec
> implementation he allocates a private buffer pool during initialization to
> hold these events.
>
We have a pool type for each of buffer, packet and timeout event
types. Why is there no pool type for ODP_EVENT_IPSEC_STATUS event
type?

>>
>>
>> Thanks,
>> Honnappa
>
>


[lng-odp] ODP_EVENT_CRYPTO_COMPL and ODP_EVENT_IPSEC_STATUS

2017-08-09 Thread Honnappa Nagarahalli
Hi,
  I believe ODP_EVENT_CRYPTO_COMPL is being deprecated. I am trying to
understand ODP_EVENT_IPSEC_STATUS. Are we going to have a pool type
corresponding to ODP_EVENT_IPSEC_STATUS (like we have for buffers,
packets and timeouts)?

Thanks,
Honnappa


Re: [lng-odp] Issues in adding mempool to modular framework

2017-08-04 Thread Honnappa Nagarahalli
On 4 August 2017 at 08:10, Bill Fischofer  wrote:
>
>
> On Fri, Aug 4, 2017 at 12:35 AM, Honnappa Nagarahalli
>  wrote:
>>
>> Hi,
>>Continuing our discussion from today's cloud call, in
>> Linux-generic, this is what I found:
>>
>> odp_buffer_t is the base class structure for odp_packet_t. So, the
>> odp_packet_alloc/alloc_multi and odp_packet_free/free_multi are
>> calling odp_buffer_alloc/alloc_multi/free/free_multi functions. So, it
>> is not possible to have completely independent .so files for
>> pool/buffer/packet in linux-generic.
>>
>>
>> Then I looked at the APIs to check if it is possible to implement
>> totally independent pool/buffer/packet modules. A given pool can be of
>> type BUFFER or PACKET. For the pool to be an independent component, it
>> should not be aware of what type of elements it holds. But,
>> odp_pool_create API needs to be aware of what a BUFFER / PACKET is.
>> Further, the APIs odp_buffer_alloc/alloc_multi take ‘pool’ as input.
>> That makes the ‘buffer’ component dependent on ‘pool’ component.
>> Similarly, 'packet' component also can not be an independent
>> component.
>>
>> So, it looks to me that it is not possible for any implementation to
>> have completely independent pool, buffer and packet modules. They will
>> be dependent on each other.
>>
>> However it looks like pool/buffer/packet components can be combined
>> into a single .so which is completely independent.
>
>
> That would seem to be a sensible first step, however there's no need to do
> this right now as the immediate need is to add the modules for pktios to
> support DDF and loadable drivers. The time to add modularity is when we have
> a secondary implementation that needs to be integrated. Until we had a
> second scheduler there was no need to consider a modular structure for the
> scheduler, etc. So if we want to extend the modular framework beyond pktios,
> the scheduler is the next place we should be looking since we now already
> have four different schedulers to choose from.
>
The secondary implementation of the packet/buffer/pool is coming from
the dpdk platform; that is the reason we are looking into this.
Work on the queue and scheduler is also going on currently. That's when
Brian realized that we should merge api-next into cloud-dev to get the
scalable scheduler.

>>
>>
>> Thanks,
>> Honnappa
>
>


[lng-odp] Issues in adding mempool to modular framework

2017-08-03 Thread Honnappa Nagarahalli
Hi,
   Continuing our discussion from today's cloud call, in
Linux-generic, this is what I found:

odp_buffer_t is the base class structure for odp_packet_t. So, the
odp_packet_alloc/alloc_multi and odp_packet_free/free_multi are
calling odp_buffer_alloc/alloc_multi/free/free_multi functions. So, it
is not possible to have completely independent .so files for
pool/buffer/packet in linux-generic.


Then I looked at the APIs to check if it is possible to implement
totally independent pool/buffer/packet modules. A given pool can be of
type BUFFER or PACKET. For the pool to be an independent component, it
should not be aware of what type of elements it holds. But,
odp_pool_create API needs to be aware of what a BUFFER / PACKET is.
Further, the APIs odp_buffer_alloc/alloc_multi take ‘pool’ as input.
That makes the ‘buffer’ component dependent on ‘pool’ component.
Similarly, 'packet' component also can not be an independent
component.

So, it looks to me that it is not possible for any implementation to
have completely independent pool, buffer and packet modules. They will
be dependent on each other.

However it looks like pool/buffer/packet components can be combined
into a single .so which is completely independent.

Thanks,
Honnappa
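
The base-class relationship described above can be illustrated with a
simplified model of the odp-linux layout (field names here are invented; the
real headers carry far more metadata). Because the packet header embeds the
buffer header as its first member, a packet header can be viewed as a buffer
header and packet alloc/free can call straight into the buffer pool code -
which is why pool, buffer, and packet cannot ship as fully independent .so
files.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical buffer header: the common base for all pool objects. */
typedef struct {
    uint32_t pool_index; /* the pool handle must live in the base */
    uint32_t size;       /* fixed element size reported by the pool */
} buf_hdr_t;

/* Hypothetical packet header: base header first, then packet-only
 * metadata, mirroring how odp-linux derives packets from buffers. */
typedef struct {
    buf_hdr_t buf;       /* base "class" must come first */
    uint32_t frame_len;  /* packet-only metadata */
    uint16_t l3_offset;
} pkt_hdr_t;

/* Upcast: valid precisely because buf is the first member, so both
 * headers share the same starting address. */
static buf_hdr_t *pkt_to_buf(pkt_hdr_t *pkt) {
    return &pkt->buf;
}
```

Splitting these into independent modules would require the pool code to stop
dereferencing buf_hdr_t fields of its elements, which is the dependency the
message above identifies.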


Re: [lng-odp] odp_buffer_t and odp_packet_t

2017-08-03 Thread Honnappa Nagarahalli
On 2 August 2017 at 20:21, Bill Fischofer  wrote:
>
>
> On Tue, Aug 1, 2017 at 12:04 PM, Honnappa Nagarahalli
>  wrote:
>>
>> On 1 August 2017 at 06:57, Bill Fischofer 
>> wrote:
>> > ODP pools are objects of type odp_pool_t that store objects that can be
>> > allocated and freed via their own alloc/free calls. Pools come in three
>> > types determined at odp_pool_create() time:
>> >
>> > - ODP_POOL_BUFFER for odp_buffer_t objects
>> > - ODP_POOL_PACKET for odp_packet_t objects
>> > - ODP_POOL_TIMEOUT for odp_timeout_t objects
>> >
>> > The only commonality in these types is that they are types of
>> > odp_event_t
>> > and are aggregated through pools. They cannot be converted to each other
>> > via
>> > any ODP API.
>> >
>> I see event types of ODP_EVENT_CRYPTO_COMPL and
>> ODP_EVENT_IPSEC_STATUS. I do not see the corresponding pool types and
>> they are not event subtypes as well.
>
>
> ODP_EVENT_CRYPTO_COMPL is in fact a (hidden) packet event subtype in
> odp-linux. That's one of the reasons we're deprecating it in favor of
> ODP_EVENT_PACKET_CRYPTO.
>
> In Dmitry's IPsec implementation ODP_EVENT_IPSEC_STATUS events are drawn
> from a private buffer pool that is created and managed by the implementation
> for this purpose.
>
Is there a plan to make ODP_EVENT_IPSEC_STATUS a subtype of ODP_EVENT_BUFFER?
Or is there a plan to have another pool type for ODP_EVENT_IPSEC_STATUS?
>>
>>
>> From the APIs I understand that the buffer (odp_buffer_t) is expected
>> to have meta data, it is not just a plain memory pointer. What are the
>> use cases for the buffers?
>
>
> Buffers are the ODP abstraction for any fixed-sized memory chunks that are
> used by the application. The metadata is minimal. odp_buffer_size() reports
> the fixed size of a buffer and odp_buffer_is_valid() says whether the buffer
> is valid.
>
Agreed, the metadata is minimal; we should note that there is a need to
store the pool ID/handle in the metadata as well. The use case I can
think of would be allocating memory to store a single type of object.
In such a case, the user would have created a pool associated with that
object and hence the size is known. There might not be a need for the
odp_buffer_size() API.

>>
>>
>> > Beyond this, implementations are free to implement them however they
>> > wish.
>> > In the case of odp-linux, we've chosen to consider buffers to be a base
>> > type
>> > and derive packets and timeouts from them. So in odp-linux these share
>> > certain internal structures that are common to pools. In odp-linux a
>> > packet
>> > is a buffer with additional metadata and a timeout is a buffer with
>> > additional metadata. But again, there is no architectural requirement
>> > here
>> > since there are no ODP APIs that expose this relationship, so
>> > implementations may organize these differently if they choose.
>> >
>> > Contrast this with the newly defined event subtypes. Here an
>> > ODP_EVENT_PACKET_IPSEC or an ODP_EVENT_PACKET_CRYPTO are both
>> > ODP_EVENT_PACKETs and the object is an odp_packet_t even though you
>> > cannot
>> > convert from one subtype to another. So here the relationship between
>> > subtypes is architecturally defined and subtypes exist to formalize the
>> > notion that these subtypes share common metadata and differ in optional
>> > metadata between themselves.
>> >
>> > On Tue, Aug 1, 2017 at 4:24 AM, Maxim Uvarov 
>> > wrote:
>> >>
>> >> They are not linked. Packet might be different than buffers.
>> >>
>> >> Conversion with event is not really clear in implementation and has to
>> >> check event type match. I.e. only ODP_EVENT_BUFFER type event can be
>> >> created from buffer. If you need create packet, then alloc it and copy
>> >> data
>> >> to it with odp_packet_copy_data() for example.
>> >>
>> >> typedef enum odp_event_type_t {
>> >> ODP_EVENT_BUFFER   = 1,
>> >> ODP_EVENT_PACKET   = 2,
>> >> ODP_EVENT_TIMEOUT  = 3,
>> >>     ODP_EVENT_CRYPTO_COMPL = 4
>> >> } odp_event_type_t;
>> >>
>> >> odp_buffer_t odp_buffer_from_event(odp_event_t ev)
>> >> {
>> >> <--- here we need check that it's ODP_EVENT_BUFFER
>> >>
>> >> return (odp_buffer_t)ev;
>> >> }
>> >>
>> >> odp_packet_t odp_packet_from_event(odp_event_t ev)
>> >> {
>> >> if (odp_unlikely(ev == ODP_EVENT_INVALID))
>> >> return ODP_PACKET_INVALID;
>> >> <--- here we need check that it's ODP_EVENT_PACKET
>> >>
>> >> return (odp_packet_t)buf_to_packet_hdr((odp_buffer_t)ev);
>> >> }
>> >>
>> >> Best regards,
>> >> Maxim.
>> >>
>> >>
>> >>
>> >> On 1 August 2017 at 07:04, Honnappa Nagarahalli <
>> >> honnappa.nagaraha...@linaro.org> wrote:
>> >>
>> >> > Hi,
>> >> >I do not see anything in the API spec that mentions that
>> >> > odp_packet_t has to be derived from odp_buffer_t. I am asking the
>> >> > question because, in Linux-generic, odp_packet_t is derived from
>> >> > odp_buffer_t.
>> >> >
>> >> > Thank you,
>> >> > Honnappa
>> >> >
>> >
>> >
>
>


Re: [lng-odp] odp_buffer_t and odp_packet_t

2017-08-01 Thread Honnappa Nagarahalli
On 1 August 2017 at 06:57, Bill Fischofer  wrote:
> ODP pools are objects of type odp_pool_t that store objects that can be
> allocated and freed via their own alloc/free calls. Pools come in three
> types determined at odp_pool_create() time:
>
> - ODP_POOL_BUFFER for odp_buffer_t objects
> - ODP_POOL_PACKET for odp_packet_t objects
> - ODP_POOL_TIMEOUT for odp_timeout_t objects
>
> The only commonality in these types is that they are types of odp_event_t
> and are aggregated through pools.
Because of this, and because of APIs such as odp_event_type(),
odp_event_subtype(), etc., odp_event_t has to be a base class for
odp_buffer_t, odp_packet_t, and odp_timeout_t.

They cannot be converted to each other via
> any ODP API.
>
> Beyond this, implementations are free to implement them however they wish.
> In the case of odp-linux, we've chosen to consider buffers to be a base type
> and derive packets and timeouts from them. So in odp-linux these share
> certain internal structures that are common to pools. In odp-linux a packet
> is a buffer with additional metadata and a timeout is a buffer with
> additional metadata. But again, there is no architectural requirement here
> since there are no ODP APIs that expose this relationship, so
> implementations may organize these differently if they choose.
>
> Contrast this with the newly defined event subtypes. Here an
> ODP_EVENT_PACKET_IPSEC or an ODP_EVENT_PACKET_CRYPTO are both
> ODP_EVENT_PACKETs and the object is an odp_packet_t even though you cannot
> convert from one subtype to another. So here the relationship between
> subtypes is architecturally defined and subtypes exist to formalize the
> notion that these subtypes share common metadata and differ in optional
> metadata between themselves.
>
> On Tue, Aug 1, 2017 at 4:24 AM, Maxim Uvarov 
> wrote:
>>
>> They are not linked. Packet might be different than buffers.
>>
>> Conversion with event is not really clear in implementation and has to
>> check event type match. I.e. only ODP_EVENT_BUFFER type event can be
>> created from buffer. If you need create packet, then alloc it and copy
>> data
>> to it with odp_packet_copy_data() for example.
>>
>> typedef enum odp_event_type_t {
>> ODP_EVENT_BUFFER   = 1,
>> ODP_EVENT_PACKET   = 2,
>> ODP_EVENT_TIMEOUT  = 3,
>> ODP_EVENT_CRYPTO_COMPL = 4
>> } odp_event_type_t;
>>
>> odp_buffer_t odp_buffer_from_event(odp_event_t ev)
>> {
>> <--- here we need check that it's ODP_EVENT_BUFFER
>>
>> return (odp_buffer_t)ev;
>> }
>>
>> odp_packet_t odp_packet_from_event(odp_event_t ev)
>> {
>> if (odp_unlikely(ev == ODP_EVENT_INVALID))
>> return ODP_PACKET_INVALID;
>> <--- here we need check that it's ODP_EVENT_PACKET
>>
>> return (odp_packet_t)buf_to_packet_hdr((odp_buffer_t)ev);
>> }
>>
>> Best regards,
>> Maxim.
>>
>>
>>
>> On 1 August 2017 at 07:04, Honnappa Nagarahalli <
>> honnappa.nagaraha...@linaro.org> wrote:
>>
>> > Hi,
>> >I do not see anything in the API spec that mentions that
>> > odp_packet_t has to be derived from odp_buffer_t. I am asking the
>> > question because, in Linux-generic, odp_packet_t is derived from
>> > odp_buffer_t.
>> >
>> > Thank you,
>> > Honnappa
>> >
>
>
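
A self-contained sketch filling in the "<--- here we need check" placeholders
from Maxim's code above with actual type checks. The types here are simplified
stand-ins (real ODP handles are opaque, and the spec leaves a mismatched
conversion undefined); returning the INVALID handle on mismatch is one possible
debug-mode behavior, not what the API mandates.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified event model: every pool object starts with a type tag. */
typedef enum {
    EVENT_BUFFER = 1,
    EVENT_PACKET = 2,
} event_type_t;

typedef struct {
    event_type_t type;
} event_t;

/* Handles modeled as tagged pointers for illustration. */
typedef event_t *buffer_t;
typedef event_t *packet_t;

#define BUFFER_INVALID ((buffer_t)NULL)
#define PACKET_INVALID ((packet_t)NULL)

static buffer_t buffer_from_event(event_t *ev) {
    if (ev == NULL || ev->type != EVENT_BUFFER)
        return BUFFER_INVALID; /* reject non-buffer events */
    return (buffer_t)ev;
}

static packet_t packet_from_event(event_t *ev) {
    if (ev == NULL || ev->type != EVENT_PACKET)
        return PACKET_INVALID; /* reject non-packet events */
    return (packet_t)ev;
}
```

The cost of these checks on the fast path is the reason the real conversions
skip them and instead require the caller to have already verified the event
type via odp_event_type().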


Re: [lng-odp] odp_buffer_t and odp_packet_t

2017-08-01 Thread Honnappa Nagarahalli
On 1 August 2017 at 06:57, Bill Fischofer  wrote:
> ODP pools are objects of type odp_pool_t that store objects that can be
> allocated and freed via their own alloc/free calls. Pools come in three
> types determined at odp_pool_create() time:
>
> - ODP_POOL_BUFFER for odp_buffer_t objects
> - ODP_POOL_PACKET for odp_packet_t objects
> - ODP_POOL_TIMEOUT for odp_timeout_t objects
>
> The only commonality in these types is that they are types of odp_event_t
> and are aggregated through pools. They cannot be converted to each other via
> any ODP API.
>
I see event types ODP_EVENT_CRYPTO_COMPL and
ODP_EVENT_IPSEC_STATUS. I do not see the corresponding pool types, and
they are not event subtypes either.

From the APIs I understand that a buffer (odp_buffer_t) is expected
to have metadata; it is not just a plain memory pointer. What are the
use cases for buffers?

> Beyond this, implementations are free to implement them however they wish.
> In the case of odp-linux, we've chosen to consider buffers to be a base type
> and derive packets and timeouts from them. So in odp-linux these share
> certain internal structures that are common to pools. In odp-linux a packet
> is a buffer with additional metadata and a timeout is a buffer with
> additional metadata. But again, there is no architectural requirement here
> since there are no ODP APIs that expose this relationship, so
> implementations may organize these differently if they choose.
>
> Contrast this with the newly defined event subtypes. Here an
> ODP_EVENT_PACKET_IPSEC or an ODP_EVENT_PACKET_CRYPTO are both
> ODP_EVENT_PACKETs and the object is an odp_packet_t even though you cannot
> convert from one subtype to another. So here the relationship between
> subtypes is architecturally defined and subtypes exist to formalize the
> notion that these subtypes share common metadata and differ in optional
> metadata between themselves.
>
> On Tue, Aug 1, 2017 at 4:24 AM, Maxim Uvarov 
> wrote:
>>
>> They are not linked. Packets might be implemented differently than buffers.
>>
>> Conversion via events is not really clear in the implementation and has to
>> check that the event type matches. I.e. only an event of type
>> ODP_EVENT_BUFFER can be created from a buffer. If you need to create a
>> packet, then alloc one and copy data to it with odp_packet_copy_data(),
>> for example.
>>
>> typedef enum odp_event_type_t {
>> ODP_EVENT_BUFFER   = 1,
>> ODP_EVENT_PACKET   = 2,
>> ODP_EVENT_TIMEOUT  = 3,
>> ODP_EVENT_CRYPTO_COMPL = 4
>> } odp_event_type_t;
>>
>> odp_buffer_t odp_buffer_from_event(odp_event_t ev)
>> {
>> if (odp_event_type(ev) != ODP_EVENT_BUFFER) /* check it's a buffer event */
>> return ODP_BUFFER_INVALID;
>>
>> return (odp_buffer_t)ev;
>> }
>>
>> odp_packet_t odp_packet_from_event(odp_event_t ev)
>> {
>> if (odp_unlikely(ev == ODP_EVENT_INVALID))
>> return ODP_PACKET_INVALID;
>> if (odp_event_type(ev) != ODP_EVENT_PACKET) /* check it's a packet event */
>> return ODP_PACKET_INVALID;
>>
>> return (odp_packet_t)buf_to_packet_hdr((odp_buffer_t)ev);
>> }
>>
>> Best regards,
>> Maxim.
>>
>>
>>
>> On 1 August 2017 at 07:04, Honnappa Nagarahalli <
>> honnappa.nagaraha...@linaro.org> wrote:
>>
>> > Hi,
>> >I do not see anything in the API spec that mentions that
>> > odp_packet_t has to be derived from odp_buffer_t. I am asking the
>> > question because, in Linux-generic, odp_packet_t is derived from
>> > odp_buffer_t.
>> >
>> > Thank you,
>> > Honnappa
>> >
>
>


[lng-odp] odp_buffer_t and odp_packet_t

2017-07-31 Thread Honnappa Nagarahalli
Hi,
   I do not see anything in the API spec that mentions that
odp_packet_t has to be derived from odp_buffer_t. I am asking the
question because, in Linux-generic, odp_packet_t is derived from
odp_buffer_t.

Thank you,
Honnappa


Re: [lng-odp] JIRA cards for ODP Cloud development

2017-07-31 Thread Honnappa Nagarahalli
Hi,
   I talked to Anoop after the call today. What we need is the effort
estimate and then a start date. These two should enable us to provide a
plan for when we can complete the work.

We have owners assigned to most of these JIRA cards now. Can you
please update the effort estimate and add a start date?

Thank you,
Honnappa

On 26 July 2017 at 10:39, Honnappa Nagarahalli
 wrote:
> Hello,
>   I have created the JIRA tickets for DPDK integration in the
> modular framework. I have noted them down in the document:
> https://docs.google.com/a/linaro.org/document/d/1TV1QNCgyUu2rL_gMobzPHhqJI8wzUdgqqXnb8NCeiSg/edit?usp=sharing.
>
> I have added the effort estimates as well. These are my estimates, and I am
> sure you will not agree with them :). The estimates are for submitting the
> first patch for review and do not consider subsequent effort to
> incorporate the review comments. I have also reduced the scope of the
> features to focus on immediate requirements (I am hoping this will
> save us some time in the short term).
>
> I think we should capture the features that are not required for now
> in the document here:
> https://docs.google.com/a/linaro.org/document/d/1OQANSj8_ekuJf8GQj5KClANXp3iNOU1bUtHkZGuUUbE/edit?usp=sharing
>
> I have a few owners assigned to these tickets, as they were already working
> on these items (e.g. Yi, Krishna, Kevin). It would be good if
> everyone could take a look at the estimates (adjust where required) and
> have start and end dates assigned in these tickets. This will help us
> understand when all of this will be ready. I think it will be a good
> achievement if we can complete this by SFO17.
>
>
> Hi Krishna,
> I saw that you already have tickets created for your work items,
> so I did not create new ones. We can discuss these tickets in the
> 7/27 cloud call.
>
> Thank you,
> Honnappa


[lng-odp] JIRA cards for ODP Cloud development

2017-07-26 Thread Honnappa Nagarahalli
Hello,
  I have created the JIRA tickets for DPDK integration in the
modular framework. I have noted them down in the document:
https://docs.google.com/a/linaro.org/document/d/1TV1QNCgyUu2rL_gMobzPHhqJI8wzUdgqqXnb8NCeiSg/edit?usp=sharing.

I have added the effort estimates as well. These are my estimates, and I am
sure you will not agree with them :). The estimates are for submitting the
first patch for review and do not consider subsequent effort to
incorporate the review comments. I have also reduced the scope of the
features to focus on immediate requirements (I am hoping this will
save us some time in the short term).

I think we should capture the features that are not required for now
in the document here:
https://docs.google.com/a/linaro.org/document/d/1OQANSj8_ekuJf8GQj5KClANXp3iNOU1bUtHkZGuUUbE/edit?usp=sharing

I have a few owners assigned to these tickets, as they were already working
on these items (e.g. Yi, Krishna, Kevin). It would be good if
everyone could take a look at the estimates (adjust where required) and
have start and end dates assigned in these tickets. This will help us
understand when all of this will be ready. I think it will be a good
achievement if we can complete this by SFO17.


Hi Krishna,
I saw that you already have tickets created for your work items,
so I did not create new ones. We can discuss these tickets in the
7/27 cloud call.

Thank you,
Honnappa


Re: [lng-odp] [PATCHv3] linux-gen: scheduler: clean up odp_scheduler_if.h

2017-07-19 Thread Honnappa Nagarahalli
On 19 July 2017 at 22:15, Honnappa Nagarahalli
 wrote:
> On 19 July 2017 at 05:45, Elo, Matias (Nokia - FI/Espoo)
>  wrote:
>>
>>> On 19 Jul 2017, at 6:25, Honnappa Nagarahalli 
>>>  wrote:
>>>
>>> On 18 July 2017 at 06:37, Elo, Matias (Nokia - FI/Espoo)
>>>  wrote:
>>>>
>>>>> On 18 Jul 2017, at 6:58, Honnappa Nagarahalli 
>>>>>  wrote:
>>>>>
>>>>> On 17 July 2017 at 04:23, Elo, Matias (Nokia - FI/Espoo)
>>>>>  wrote:
>>>>>> Does this patch fix some real problem? At least for me it only makes the 
>>>>>> scheduler interface harder to follow by spreading the functions into 
>>>>>> multiple headers.
>>>>>
>>>>> I have said this again and again. odp_schedule_if.h is a scheduler
>>>>> interface file. i.e this file is supposed to contain
>>>>> services/functions provided by scheduler to other components in ODP
>>>>> (similar to what has been done in odp_queue_if.h - appreciate if a
>>>>> closer attention is paid to this). Now, this file contains functions
>>>>> provided by packet I/O (among other things). Appreciate if you could
>>>>> explain why this file should contain these functions?
>>>>>
>>>>> Also, Petri has understood what this patch does, can you check with him?
>>>>
>>>> These functions are used by the schedulers to interface with other ODP
>>>> components, so the scheduler_if.h is a logical place to define them. When
>>>> implementing a new scheduler it's helpful to see all available  functions 
>>>> from one
>>>> place. I'm not fundamentally against this patch, but it's the task of the 
>>>> patch
>>>> submitter to justify why a change is needed, not the other way around.
>>>>
>>>> Petri was originally opposed to moving these functions into xyz_internal.h 
>>>> headers,
>>>> and only approved moving the functions into xyz_if.h files if it must be 
>>>> done.
>>>>
>>>> I'm just trying to understand why this change is necessary. A patch like 
>>>> this
>>>> would be a lot easier to justify if it was sent as a part of the patch set 
>>>> which requires
>>>> this change. Without that, a more comprehensive commit log would be 
>>>> helpful.
>>>
>>> Any suggestions on the commit message?
>>>
>>> Does adding the following sentence help?
>>>
>>> "odp_schedule_if.h is the scheduler interface file. i.e this file is
>>> supposed to contain services/functions provided by scheduler to other
>>> components in ODP"
>>
>> Based on your older message the motivation for moving these functions is 
>> that they are
>> related to the default scheduler and queue implementations. This would be a 
>> good point to
>> mention.
>>
>>>
>>>>
>>>> The naming of the odp_packet_io_if.h is now a bit confusing as we already 
>>>> have the
>>>> pktio_if_ops_t struct in odp_packet_io_internal.h, which is the actual 
>>>> pktio interface.
>>>
>>> I definitely agree with you on this. That's how v1 of the patch was.
>>> It is changed after Petri's suggestion to move it to
>>> odp_packet_io_if.h.
>>
>> In a previous email Petri suggested filename odp_queue_sched_if.h. 
>> Extrapolating from this, odp_packet_io_if.h should be named 
>> odp_packet_io_sched_if.h.
>>
> odp_queue_sched_if.h is named as such because it relates to the queue and
> sched interface only, and it is a tightly coupled interface. We have
> odp_queue_if.h for loosely coupled interfaces.
>
> The interface between the scheduler and packet I/O is not tightly
> coupled. Hence it was decided to name the file odp_packet_io_if.h,
> and in the future it can implement a modular interface.
>
> Petri is aware of this decision.
Refer to this discussion:
http://patches.opendataplane.org/patch/9322/. You can search for
"odp_packet_io_if.h"
>>


Re: [lng-odp] [PATCHv3] linux-gen: scheduler: clean up odp_scheduler_if.h

2017-07-19 Thread Honnappa Nagarahalli
On 19 July 2017 at 05:45, Elo, Matias (Nokia - FI/Espoo)
 wrote:
>
>> On 19 Jul 2017, at 6:25, Honnappa Nagarahalli 
>>  wrote:
>>
>> On 18 July 2017 at 06:37, Elo, Matias (Nokia - FI/Espoo)
>>  wrote:
>>>
>>>> On 18 Jul 2017, at 6:58, Honnappa Nagarahalli 
>>>>  wrote:
>>>>
>>>> On 17 July 2017 at 04:23, Elo, Matias (Nokia - FI/Espoo)
>>>>  wrote:
>>>>> Does this patch fix some real problem? At least for me it only makes the 
>>>>> scheduler interface harder to follow by spreading the functions into 
>>>>> multiple headers.
>>>>
>>>> I have said this again and again. odp_schedule_if.h is a scheduler
>>>> interface file. i.e this file is supposed to contain
>>>> services/functions provided by scheduler to other components in ODP
>>>> (similar to what has been done in odp_queue_if.h - appreciate if a
>>>> closer attention is paid to this). Now, this file contains functions
>>>> provided by packet I/O (among other things). Appreciate if you could
>>>> explain why this file should contain these functions?
>>>>
>>>> Also, Petri has understood what this patch does, can you check with him?
>>>
>>> These functions are used by the schedulers to interface with other ODP
>>> components, so the scheduler_if.h is a logical place to define them. When
>>> implementing a new scheduler it's helpful to see all available  functions 
>>> from one
>>> place. I'm not fundamentally against this patch, but it's the task of the 
>>> patch
>>> submitter to justify why a change is needed, not the other way around.
>>>
>>> Petri was originally opposed to moving these functions into xyz_internal.h 
>>> headers,
>>> and only approved moving the functions into xyz_if.h files if it must be 
>>> done.
>>>
>>> I'm just trying to understand why this change is necessary. A patch like 
>>> this
>>> would be a lot easier to justify if it was sent as a part of the patch set 
>>> which requires
>>> this change. Without that, a more comprehensive commit log would be helpful.
>>
>> Any suggestions on the commit message?
>>
>> Does adding the following sentence help?
>>
>> "odp_schedule_if.h is the scheduler interface file. i.e this file is
>> supposed to contain services/functions provided by scheduler to other
>> components in ODP"
>
> Based on your older message the motivation for moving these functions is that 
> they are
> related to the default scheduler and queue implementations. This would be a 
> good point to
> mention.
>
>>
>>>
>>> The naming of the odp_packet_io_if.h is now a bit confusing as we already 
>>> have the
>>> pktio_if_ops_t struct in odp_packet_io_internal.h, which is the actual 
>>> pktio interface.
>>
>> I definitely agree with you on this. That's how v1 of the patch was.
>> It is changed after Petri's suggestion to move it to
>> odp_packet_io_if.h.
>
> In a previous email Petri suggested filename odp_queue_sched_if.h. 
> Extrapolating from this, odp_packet_io_if.h should be named 
> odp_packet_io_sched_if.h.
>
odp_queue_sched_if.h is named as such because it relates to the queue and
sched interface only, and it is a tightly coupled interface. We have
odp_queue_if.h for loosely coupled interfaces.

The interface between the scheduler and packet I/O is not tightly
coupled. Hence it was decided to name the file odp_packet_io_if.h,
and in the future it can implement a modular interface.

Petri is aware of this decision.
>


[lng-odp] Cloud call on 20th July

2017-07-19 Thread Honnappa Nagarahalli
I do not have a lot of topics to discuss. If there is anything to
discuss, please add to the meeting page.

Thank you,
Honnappa


Re: [lng-odp] DPDK integration under DDF

2017-07-18 Thread Honnappa Nagarahalli
Hi Bill,
 I cleaned up and created a new document. Please take a look at
the last section listing the work items. Let me know if you need more
details there.

Document is here:
https://docs.google.com/a/linaro.org/document/d/1TV1QNCgyUu2rL_gMobzPHhqJI8wzUdgqqXnb8NCeiSg/edit?usp=sharing

Thank you,
Honnappa

On 17 July 2017 at 23:46, Honnappa Nagarahalli
 wrote:
> I have updated the document to define 2 phases for this integration,
> please take a look.
>
> I have considered L2FWD application as a demo for both phases and have
> defined few work items. We can discuss these work items in ODP-Cloud
> call.
>
>
> https://docs.google.com/document/d/1ocLcJLeHghkaUG5Lq8Cy0B5DgZoNZu1lBMtWCoogPOc/edit?usp=sharing
>
> Thank you,
> Honnappa
>
> On 12 July 2017 at 14:07, Honnappa Nagarahalli
>  wrote:
>> Hi Francois,
>> I will add this topic to tomorrow's ODP-Cloud call.
>>
>> Thank you,
>> Honnappa
>>
>> On 12 July 2017 at 04:51, Francois Ozog  wrote:
>>> I'd like us to sync up on key elements of the document at the arch call today.
>>>
>>> I'd like to restate a few things:
>>>
>>> - ODP Cloud single binary should support multiple hardware environments
>>> through DDF
>>> - ODP and ODP applications should accommodate how HW uses memory on the
>>> receive side, not the opposite
>>> - HW should accommodate ODP and ODP applications when ODP wants to form
>>> packets for the transmit side (typically OFP)
>>> - after transmission, HW1 should be able to "free" a packet that was
>>> originally received by HW2 (typically VPP)
>>> - DPDK shall be considered as a HW that deals with packets its own way.
>>> DPDK can let the HW manage memory through the DPDK external memory manager
>>> drivers. Bottom line, there can be a case where memory is controlled by a
>>> PMD (through external memory) and used directly by an ODP application.
>>> - ODP_CLOUD defines an ODP_MBUF_T structure which is pointed to by an
>>> ODP_PACKET_T; the structure remains opaque to ODP applications.
>>> - ODP_MBUF_T has the same structure as DPDK RTE_MBUF, we may use some
>>> fields differently but all fields are present and at the same offsets as
>>> DPDK RTE_MBUF. The presence of other metadata coming from linux-generic may
>>> be accepted as interim but performance reviews need to be conducted to
>>> ensure we have optimal use of metadata space.
>>>
>>>
>>> And here are a few comments
>>> - I expect the metadata to shrink: it's not just about the size but about
>>> performance. The first cacheline is used for receive while the second
>>> mostly for transmit. Any access to additional meta data on the receive side
>>> results in accessing a second cacheline which is just twice the cost of
>>> DPDK...
>>> - There is a need to have common pool "objects" with pool_ops to allow the
>>> chain of control ODP_APP -> ODP -> DPDK -> PMD, which is equivalent to the
>>> DPDK external memory manager but in ODP.
>>>
>>>
>>> Cordially,
>>>
>>> FF
>>>
>>>
>>>
>>> On 11 July 2017 at 02:11, Honnappa Nagarahalli <
>>> honnappa.nagaraha...@linaro.org> wrote:
>>>
>>>> Hi,
>>>>I have updated the document with further details. I am done with
>>>> this document. Hopefully, we can discuss and take a decision in the
>>>> next cloud call.
>>>>
>>>> https://docs.google.com/document/d/1ocLcJLeHghkaUG5Lq8Cy0B5DgZoNZ
>>>> u1lBMtWCoogPOc/edit?usp=sharing
>>>>
>>>> Thank you,
>>>> Honnappa
>>>>
>>>
>>>
>>>
>>> --
>>> Linaro <http://www.linaro.org/>
>>> François-Frédéric Ozog | *Director Linaro Networking Group*
>>> T: +33.67221.6485
>>> francois.o...@linaro.org | Skype: ffozog


Re: [lng-odp] [PATCHv3] linux-gen: scheduler: clean up odp_scheduler_if.h

2017-07-18 Thread Honnappa Nagarahalli
On 18 July 2017 at 06:37, Elo, Matias (Nokia - FI/Espoo)
 wrote:
>
>> On 18 Jul 2017, at 6:58, Honnappa Nagarahalli 
>>  wrote:
>>
>> On 17 July 2017 at 04:23, Elo, Matias (Nokia - FI/Espoo)
>>  wrote:
>>> Does this patch fix some real problem? At least for me it only makes the 
>>> scheduler interface harder to follow by spreading the functions into 
>>> multiple headers.
>>
>> I have said this again and again. odp_schedule_if.h is a scheduler
>> interface file. i.e this file is supposed to contain
>> services/functions provided by scheduler to other components in ODP
>> (similar to what has been done in odp_queue_if.h - appreciate if a
>> closer attention is paid to this). Now, this file contains functions
>> provided by packet I/O (among other things). Appreciate if you could
>> explain why this file should contain these functions?
>>
>> Also, Petri has understood what this patch does, can you check with him?
>
> These functions are used by the schedulers to interface with other ODP
> components, so the scheduler_if.h is a logical place to define them. When
> implementing a new scheduler it's helpful to see all available  functions 
> from one
> place. I'm not fundamentally against this patch, but it's the task of the 
> patch
> submitter to justify why a change is needed, not the other way around.
>
> Petri was originally opposed to moving these functions into xyz_internal.h 
> headers,
> and only approved moving the functions into xyz_if.h files if it must be done.
>
> I'm just trying to understand why this change is necessary. A patch like this
> would be a lot easier to justify if it was sent as a part of the patch set 
> which requires
> this change. Without that, a more comprehensive commit log would be helpful.

Any suggestions on the commit message?

Does adding the following sentence help?

"odp_schedule_if.h is the scheduler interface file, i.e. this file is
supposed to contain services/functions provided by the scheduler to other
components in ODP"

>
> The naming of the odp_packet_io_if.h is now a bit confusing as we already 
> have the
> pktio_if_ops_t struct in odp_packet_io_internal.h, which is the actual pktio 
> interface.

I definitely agree with you on this. That's how v1 of the patch was.
It is changed after Petri's suggestion to move it to
odp_packet_io_if.h.

>
> -Matias
>


Re: [lng-odp] DPDK integration under DDF

2017-07-17 Thread Honnappa Nagarahalli
I have updated the document to define 2 phases for this integration,
please take a look.

I have considered L2FWD application as a demo for both phases and have
defined few work items. We can discuss these work items in ODP-Cloud
call.


https://docs.google.com/document/d/1ocLcJLeHghkaUG5Lq8Cy0B5DgZoNZu1lBMtWCoogPOc/edit?usp=sharing

Thank you,
Honnappa

On 12 July 2017 at 14:07, Honnappa Nagarahalli
 wrote:
> Hi Francois,
> I will add this topic to tomorrow's ODP-Cloud call.
>
> Thank you,
> Honnappa
>
> On 12 July 2017 at 04:51, Francois Ozog  wrote:
>> I'd like us to sync up on key elements of the document at the arch call today.
>>
>> I'd like to restate a few things:
>>
>> - ODP Cloud single binary should support multiple hardware environments
>> through DDF
>> - ODP and ODP applications should accommodate how HW uses memory on the
>> receive side, not the opposite
>> - HW should accommodate ODP and ODP applications when ODP wants to form
>> packets for the transmit side (typically OFP)
>> - after transmission, HW1 should be able to "free" a packet that was
>> originally received by HW2 (typically VPP)
>> - DPDK shall be considered as a HW that deals with packets its own way.
>> DPDK can let the HW manage memory through the DPDK external memory manager
>> drivers. Bottom line, there can be a case where memory is controlled by a
>> PMD (through external memory) and used directly by an ODP application.
>> - ODP_CLOUD defines an ODP_MBUF_T structure which is pointed to by an
>> ODP_PACKET_T; the structure remains opaque to ODP applications.
>> - ODP_MBUF_T has the same structure as DPDK RTE_MBUF, we may use some
>> fields differently but all fields are present and at the same offsets as
>> DPDK RTE_MBUF. The presence of other metadata coming from linux-generic may
>> be accepted as interim but performance reviews need to be conducted to
>> ensure we have optimal use of metadata space.
>>
>>
>> And here are a few comments
>> - I expect the metadata to shrink: it's not just about the size but about
>> performance. The first cacheline is used for receive while the second
>> mostly for transmit. Any access to additional meta data on the receive side
>> results in accessing a second cacheline which is just twice the cost of
>> DPDK...
>> - There is a need to have common pool "objects" with pool_ops to allow the
>> chain of control ODP_APP -> ODP -> DPDK -> PMD, which is equivalent to the
>> DPDK external memory manager but in ODP.
>>
>>
>> Cordially,
>>
>> FF
>>
>>
>>
>> On 11 July 2017 at 02:11, Honnappa Nagarahalli <
>> honnappa.nagaraha...@linaro.org> wrote:
>>
>>> Hi,
>>>I have updated the document with further details. I am done with
>>> this document. Hopefully, we can discuss and take a decision in the
>>> next cloud call.
>>>
>>> https://docs.google.com/document/d/1ocLcJLeHghkaUG5Lq8Cy0B5DgZoNZ
>>> u1lBMtWCoogPOc/edit?usp=sharing
>>>
>>> Thank you,
>>> Honnappa
>>>
>>
>>
>>
>> --
>> Linaro <http://www.linaro.org/>
>> François-Frédéric Ozog | *Director Linaro Networking Group*
>> T: +33.67221.6485
>> francois.o...@linaro.org | Skype: ffozog


Re: [lng-odp] [PATCHv3] linux-gen: scheduler: clean up odp_scheduler_if.h

2017-07-17 Thread Honnappa Nagarahalli
On 17 July 2017 at 14:46, Bill Fischofer  wrote:
> On Mon, Jul 17, 2017 at 4:23 AM, Elo, Matias (Nokia - FI/Espoo) <
> matias@nokia.com> wrote:
>
>> Does this patch fix some real problem? At least for me it only makes the
>> scheduler interface harder to follow by spreading the functions into
>> multiple headers.
>>
>
> Is this perhaps related to Yi's modular framework? Since that is providing
> a more fundamental structure, I'd like to see how we get to that. In any
> event, I'll add this to the agenda for tomorrow's public call. Joyce, can
> you join us for that at 15:00 UTC?
>
Joyce will not be able to join the call. We can discuss next Monday
(7/24) if required.
>
>>
>> -Matias
>>
>>
>> > On 17 Jul 2017, at 10:26, Joyce Kong  wrote:
>> >
>> > The modular scheduler interface in odp_schedule_if.h includes functions
>> from
>> > pktio and queue. It needs to be cleaned up.
>> >
>> > Signed-off-by: Joyce Kong 
>> > ---
>> > platform/linux-generic/Makefile.am |  2 ++
>> > platform/linux-generic/include/odp_packet_io_if.h  | 23
>> +
>> > .../linux-generic/include/odp_queue_sched_if.h | 24
>> ++
>> > platform/linux-generic/include/odp_schedule_if.h   |  9 
>> > platform/linux-generic/odp_packet_io.c |  1 +
>> > platform/linux-generic/odp_queue.c |  1 +
>> > platform/linux-generic/odp_schedule.c  |  2 ++
>> > platform/linux-generic/odp_schedule_iquery.c   |  2 ++
>> > platform/linux-generic/odp_schedule_sp.c   |  2 ++
>> > 9 files changed, 57 insertions(+), 9 deletions(-)
>> > create mode 100644 platform/linux-generic/include/odp_packet_io_if.h
>> > create mode 100644 platform/linux-generic/include/odp_queue_sched_if.h
>> >
>> > diff --git a/platform/linux-generic/Makefile.am
>> b/platform/linux-generic/Makefile.am
>> > index 26eba28..5295abb 100644
>> > --- a/platform/linux-generic/Makefile.am
>> > +++ b/platform/linux-generic/Makefile.am
>> > @@ -150,6 +150,7 @@ noinst_HEADERS = \
>> > ${srcdir}/include/odp_packet_io_internal.h \
>> > ${srcdir}/include/odp_packet_io_ipc_internal.h \
>> > ${srcdir}/include/odp_packet_io_ring_internal.h \
>> > +   ${srcdir}/include/odp_packet_io_if.h \
>> > ${srcdir}/include/odp_packet_netmap.h \
>> > ${srcdir}/include/odp_packet_dpdk.h \
>> > ${srcdir}/include/odp_packet_socket.h \
>> > @@ -160,6 +161,7 @@ noinst_HEADERS = \
>> > ${srcdir}/include/odp_queue_internal.h \
>> > ${srcdir}/include/odp_ring_internal.h \
>> > ${srcdir}/include/odp_queue_if.h \
>> > +   ${srcdir}/include/odp_queue_sched_if.h \
>> > ${srcdir}/include/odp_schedule_if.h \
>> > ${srcdir}/include/odp_sorted_list_internal.h \
>> > ${srcdir}/include/odp_shm_internal.h \
>> > diff --git a/platform/linux-generic/include/odp_packet_io_if.h
>> b/platform/linux-generic/include/odp_packet_io_if.h
>> > new file mode 100644
>> > index 000..e574f22
>> > --- /dev/null
>> > +++ b/platform/linux-generic/include/odp_packet_io_if.h
>> > @@ -0,0 +1,23 @@
>> > +/* Copyright (c) 2017, ARM Limited
>> > + * All rights reserved.
>> > + *
>> > + *SPDX-License-Identifier:   BSD-3-Clause
>> > + */
>> > +
>> > +#ifndef ODP_PACKET_IO_IF_H_
>> > +#define ODP_PACKET_IO_IF_H_
>> > +
>> > +#ifdef __cplusplus
>> > +extern "C" {
>> > +#endif
>> > +
>> > +/* Interface for the scheduler */
>> > +int sched_cb_pktin_poll(int pktio_index, int num_queue, int index[]);
>> > +void sched_cb_pktio_stop_finalize(int pktio_index);
>> > +int sched_cb_num_pktio(void);
>> > +
>> > +#ifdef __cplusplus
>> > +}
>> > +#endif
>> > +
>> > +#endif
>> > diff --git a/platform/linux-generic/include/odp_queue_sched_if.h
>> b/platform/linux-generic/include/odp_queue_sched_if.h
>> > new file mode 100644
>> > index 000..4a301f4
>> > --- /dev/null
>> > +++ b/platform/linux-generic/include/odp_queue_sched_if.h
>> > @@ -0,0 +1,24 @@
>> > +/* Copyright (c) 2017, ARM Limited
>> > + * All rights reserved.
>> > + *
>> > + *SPDX-License-Identifier:   BSD-3-Clause
>> > + */
>> > +
>> > +#ifndef ODP_QUEUE_SCHED_IF_H_
>> > +#define ODP_QUEUE_SCHED_IF_H_
>> > +
>> > +#ifdef __cplusplus
>> > +extern "C" {
>> > +#endif
>> > +
>> > +/* Interface for the scheduler */
>> > +odp_queue_t sched_cb_queue_handle(uint32_t queue_index);
>> > +void sched_cb_queue_destroy_finalize(uint32_t queue_index);
>> > +int sched_cb_queue_deq_multi(uint32_t queue_index, odp_event_t ev[],
>> int num);
>> > +int sched_cb_queue_empty(uint32_t queue_index);
>> > +
>> > +#ifdef __cplusplus
>> > +}
>> > +#endif
>> > +
>> > +#endif
>> > diff --git a/platform/linux-generic/include/odp_schedule_if.h
>> b/platform/linux-generic/include/odp_schedule_if.h
>> > index 4cd8c3e..9a1f3ff 100644
>> > --- a/platform/linux-generic/include/odp_schedule_if.h
>> > +++ b/platform/lin

Re: [lng-odp] [PATCHv3] linux-gen: scheduler: clean up odp_scheduler_if.h

2017-07-17 Thread Honnappa Nagarahalli
On 17 July 2017 at 04:23, Elo, Matias (Nokia - FI/Espoo)
 wrote:
> Does this patch fix some real problem? At least for me it only makes the 
> scheduler interface harder to follow by spreading the functions into multiple 
> headers.

I have said this again and again: odp_schedule_if.h is a scheduler
interface file, i.e. this file is supposed to contain
services/functions provided by the scheduler to other components in ODP
(similar to what has been done in odp_queue_if.h - I would appreciate it
if closer attention were paid to this). Now, this file contains functions
provided by packet I/O (among other things). I would appreciate it if you
could explain why this file should contain these functions.

Also, Petri has understood what this patch does, can you check with him?

>
> -Matias
>
>
>> On 17 Jul 2017, at 10:26, Joyce Kong  wrote:
>>
>> The modular scheduler interface in odp_schedule_if.h includes functions from
>> pktio and queue. It needs to be cleaned up.
>>
>> Signed-off-by: Joyce Kong 
>> ---
>> platform/linux-generic/Makefile.am |  2 ++
>> platform/linux-generic/include/odp_packet_io_if.h  | 23 +
>> .../linux-generic/include/odp_queue_sched_if.h | 24 
>> ++
>> platform/linux-generic/include/odp_schedule_if.h   |  9 
>> platform/linux-generic/odp_packet_io.c |  1 +
>> platform/linux-generic/odp_queue.c |  1 +
>> platform/linux-generic/odp_schedule.c  |  2 ++
>> platform/linux-generic/odp_schedule_iquery.c   |  2 ++
>> platform/linux-generic/odp_schedule_sp.c   |  2 ++
>> 9 files changed, 57 insertions(+), 9 deletions(-)
>> create mode 100644 platform/linux-generic/include/odp_packet_io_if.h
>> create mode 100644 platform/linux-generic/include/odp_queue_sched_if.h
>>
>> diff --git a/platform/linux-generic/Makefile.am 
>> b/platform/linux-generic/Makefile.am
>> index 26eba28..5295abb 100644
>> --- a/platform/linux-generic/Makefile.am
>> +++ b/platform/linux-generic/Makefile.am
>> @@ -150,6 +150,7 @@ noinst_HEADERS = \
>> ${srcdir}/include/odp_packet_io_internal.h \
>> ${srcdir}/include/odp_packet_io_ipc_internal.h \
>> ${srcdir}/include/odp_packet_io_ring_internal.h \
>> +   ${srcdir}/include/odp_packet_io_if.h \
>> ${srcdir}/include/odp_packet_netmap.h \
>> ${srcdir}/include/odp_packet_dpdk.h \
>> ${srcdir}/include/odp_packet_socket.h \
>> @@ -160,6 +161,7 @@ noinst_HEADERS = \
>> ${srcdir}/include/odp_queue_internal.h \
>> ${srcdir}/include/odp_ring_internal.h \
>> ${srcdir}/include/odp_queue_if.h \
>> +   ${srcdir}/include/odp_queue_sched_if.h \
>> ${srcdir}/include/odp_schedule_if.h \
>> ${srcdir}/include/odp_sorted_list_internal.h \
>> ${srcdir}/include/odp_shm_internal.h \
>> diff --git a/platform/linux-generic/include/odp_packet_io_if.h 
>> b/platform/linux-generic/include/odp_packet_io_if.h
>> new file mode 100644
>> index 000..e574f22
>> --- /dev/null
>> +++ b/platform/linux-generic/include/odp_packet_io_if.h
>> @@ -0,0 +1,23 @@
>> +/* Copyright (c) 2017, ARM Limited
>> + * All rights reserved.
>> + *
>> + *SPDX-License-Identifier:   BSD-3-Clause
>> + */
>> +
>> +#ifndef ODP_PACKET_IO_IF_H_
>> +#define ODP_PACKET_IO_IF_H_
>> +
>> +#ifdef __cplusplus
>> +extern "C" {
>> +#endif
>> +
>> +/* Interface for the scheduler */
>> +int sched_cb_pktin_poll(int pktio_index, int num_queue, int index[]);
>> +void sched_cb_pktio_stop_finalize(int pktio_index);
>> +int sched_cb_num_pktio(void);
>> +
>> +#ifdef __cplusplus
>> +}
>> +#endif
>> +
>> +#endif
>> diff --git a/platform/linux-generic/include/odp_queue_sched_if.h 
>> b/platform/linux-generic/include/odp_queue_sched_if.h
>> new file mode 100644
>> index 000..4a301f4
>> --- /dev/null
>> +++ b/platform/linux-generic/include/odp_queue_sched_if.h
>> @@ -0,0 +1,24 @@
>> +/* Copyright (c) 2017, ARM Limited
>> + * All rights reserved.
>> + *
>> + *SPDX-License-Identifier:   BSD-3-Clause
>> + */
>> +
>> +#ifndef ODP_QUEUE_SCHED_IF_H_
>> +#define ODP_QUEUE_SCHED_IF_H_
>> +
>> +#ifdef __cplusplus
>> +extern "C" {
>> +#endif
>> +
>> +/* Interface for the scheduler */
>> +odp_queue_t sched_cb_queue_handle(uint32_t queue_index);
>> +void sched_cb_queue_destroy_finalize(uint32_t queue_index);
>> +int sched_cb_queue_deq_multi(uint32_t queue_index, odp_event_t ev[], int 
>> num);
>> +int sched_cb_queue_empty(uint32_t queue_index);
>> +
>> +#ifdef __cplusplus
>> +}
>> +#endif
>> +
>> +#endif
>> diff --git a/platform/linux-generic/include/odp_schedule_if.h 
>> b/platform/linux-generic/include/odp_schedule_if.h
>> index 4cd8c3e..9a1f3ff 100644
>> --- a/platform/linux-generic/include/odp_schedule_if.h
>> +++ b/platform/linux-generic/include/odp_schedule_if.h
>> @@ -64,15 +64,6 @@ typedef struct schedule_fn_t {
>> /* Interface towards the scheduler */
>> exte

Re: [lng-odp] Platform configurations for ODP-Cloud

2017-07-12 Thread Honnappa Nagarahalli
Here is the link to Google Doc version:
https://docs.google.com/a/linaro.org/document/d/1CXqN3pZay7Ni1Z7xhxW8_tmcIaKyuU8v-l5mshZ2RGk/edit?usp=sharing

Thank you,
Honnappa

On 12 July 2017 at 15:49, Honnappa Nagarahalli
 wrote:
> I tried briefly. The document converts easily. Main problem is the
> pictures. The pictures need to be redrawn.
>
> May be I will convert the document and will draw the pictures later tonight.
>
> Thanks,
> Honnappa
>
> On 12 July 2017 at 15:14, Bill Fischofer  wrote:
>> Can you post a Google doc version of this to permit shared commenting?
>> I can do that if you'd like, but you should probably be the doc owner.
>>
>> On Wed, Jul 12, 2017 at 3:05 PM, Honnappa Nagarahalli
>>  wrote:
>>> Hi,
>>>The discussion about what are the different configurations of the
>>> platform that ODP-Cloud needs to support has come up now and then. I
>>> thought it is good to have a consensus, on the configurations we have
>>> to support for now, to help us guide in our future discussions. I
>>> created a word document (I realized the mistake now). The 'Should
>>> Support' column is just my thoughts, that column requires more input.
>>> Please take a look.
>>>
>>> https://drive.google.com/a/linaro.org/file/d/0ByE35NyL0RwHNWU3RFNLc284OXc/view?usp=sharing
>>>
>>> Try and open in Word for the pictures to appear.
>>>
>>> Thank you,
>>> Honnappa


Re: [lng-odp] Platform configurations for ODP-Cloud

2017-07-12 Thread Honnappa Nagarahalli
I tried briefly. The document converts easily. The main problem is the
pictures, which need to be redrawn.

Maybe I will convert the document and draw the pictures later tonight.

Thanks,
Honnappa

On 12 July 2017 at 15:14, Bill Fischofer  wrote:
> Can you post a Google doc version of this to permit shared commenting?
> I can do that if you'd like, but you should probably be the doc owner.
>
> On Wed, Jul 12, 2017 at 3:05 PM, Honnappa Nagarahalli
>  wrote:
>> Hi,
>>The discussion about what are the different configurations of the
>> platform that ODP-Cloud needs to support has come up now and then. I
>> thought it is good to have a consensus, on the configurations we have
>> to support for now, to help us guide in our future discussions. I
>> created a word document (I realized the mistake now). The 'Should
>> Support' column is just my thoughts, that column requires more input.
>> Please take a look.
>>
>> https://drive.google.com/a/linaro.org/file/d/0ByE35NyL0RwHNWU3RFNLc284OXc/view?usp=sharing
>>
>> Try and open in Word for the pictures to appear.
>>
>> Thank you,
>> Honnappa


[lng-odp] Platform configurations for ODP-Cloud

2017-07-12 Thread Honnappa Nagarahalli
Hi,
   The discussion about which platform configurations ODP-Cloud needs
to support has come up now and then. I thought it would be good to
reach a consensus on the configurations we have to support for now, to
help guide our future discussions. I created a Word document (I
realize now that was a mistake). The 'Should Support' column is just
my thoughts; that column requires more input. Please take a look.

https://drive.google.com/a/linaro.org/file/d/0ByE35NyL0RwHNWU3RFNLc284OXc/view?usp=sharing

Try and open in Word for the pictures to appear.

Thank you,
Honnappa


Re: [lng-odp] DPDK integration under DDF

2017-07-12 Thread Honnappa Nagarahalli
Hi Francois,
I will add this topic to tomorrow's ODP-Cloud call.

Thank you,
Honnappa

On 12 July 2017 at 04:51, Francois Ozog  wrote:
> I'd like we sync up on key elements of the document at the arch call today.
>
> I'd like to restate a few things:
>
> - ODP Cloud single binary should support multiple hardware environments
> through DDF
> - ODP and ODP applications should accommodate how HW uses memory on the
> receive side, not the opposite
> - HW should accommodate ODP and ODP applications when ODP wants to form
> packets for the transmit side (typically OFP)
> - after transmission, HW1 should be able to "free" a packet that was
> originally received by HW2 (typically VPP)
> - DPDK shall be considered as a HW that deals with packets its own way.
> DPDK can let the HW manage memory through DPDK external memory manager
> drivers. Bottom line, there can be a case where memory is controlled by a
> PMD (through external memory) and used directly by an ODP application.
> - ODP_CLOUD defines an ODP_MBUF_T structure which is pointed to by an
> ODP_PACKET_T; the structure remains opaque to ODP applications.
> - ODP_MBUF_T has the same structure as DPDK RTE_MBUF, we may use some
> fields differently but all fields are present and at the same offsets as
> DPDK RTE_MBUF. The presence of other metadata coming from linux-generic may
> be accepted as interim but performance reviews need to be conducted to
> ensure we have optimal use of metadata space.
>
>
> And here are a few comments
> - I expect the metadata to shrink: it's not just about the size but about
> performance. The first cacheline is used for receive while the second
> mostly for transmit. Any access to additional meta data on the receive side
> results in accessing a second cacheline which is just twice the cost of
> DPDK...
> - There is a need to have pools as common "objects" with pool_ops to allow
> the chain of control ODP_APP -> ODP -> DPDK -> PMD, which is equivalent to
> the DPDK external memory manager but in ODP.
>
>
> Cordially,
>
> FF
>
>
>
> On 11 July 2017 at 02:11, Honnappa Nagarahalli <
> honnappa.nagaraha...@linaro.org> wrote:
>
>> Hi,
>>I have updated the document with further details. I am done with
>> this document. Hopefully, we can discuss and take a decision in the
>> next cloud call.
>>
>> https://docs.google.com/document/d/1ocLcJLeHghkaUG5Lq8Cy0B5DgZoNZ
>> u1lBMtWCoogPOc/edit?usp=sharing
>>
>> Thank you,
>> Honnappa
>>
>
>
>
> --
> [image: Linaro] <http://www.linaro.org/>
> François-Frédéric Ozog | *Director Linaro Networking Group*
> T: +33.67221.6485
> francois.o...@linaro.org | Skype: ffozog
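Francois's requirement that ODP_MBUF_T keep all fields present and at the same offsets as DPDK's RTE_MBUF can be enforced at compile time. A minimal sketch, assuming a 64-bit ABI; the `odp_mbuf_t` type and field names here are hypothetical, not the real DPDK or ODP definitions:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical mirror of the first rte_mbuf fields; a real version
 * would copy the layout from DPDK's rte_mbuf definition. */
typedef struct {
	void     *buf_addr; /* virtual address of the segment buffer */
	uint64_t  buf_iova; /* IO address of the segment buffer */
	uint16_t  data_off; /* start of packet data within the buffer */
} odp_mbuf_t;

/* Catch layout drift at build time instead of debugging corrupted
 * packets at run time. Offsets assume 8-byte pointers (64-bit ABI). */
static_assert(offsetof(odp_mbuf_t, buf_addr) == 0,  "buf_addr moved");
static_assert(offsetof(odp_mbuf_t, buf_iova) == 8,  "buf_iova moved");
static_assert(offsetof(odp_mbuf_t, data_off) == 16, "data_off moved");
```

If the structures ever diverge, the build fails with a named assertion rather than misbehaving in the fast path.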


[lng-odp] DPDK integration under DDF

2017-07-10 Thread Honnappa Nagarahalli
Hi,
   I have updated the document with further details. I am done with
this document. Hopefully, we can discuss and take a decision in the
next cloud call.

https://docs.google.com/document/d/1ocLcJLeHghkaUG5Lq8Cy0B5DgZoNZu1lBMtWCoogPOc/edit?usp=sharing

Thank you,
Honnappa


Re: [lng-odp] [API-NEXT PATCH 1/4] linux-gen: sched: remove schedule interface dependency to qentry

2017-07-10 Thread Honnappa Nagarahalli
On 10 July 2017 at 03:05, Savolainen, Petri (Nokia - FI/Espoo)
 wrote:
>
>
>> -Original Message-----
>> From: Honnappa Nagarahalli [mailto:honnappa.nagaraha...@linaro.org]
>> Sent: Friday, July 07, 2017 9:28 PM
>> To: Savolainen, Petri (Nokia - FI/Espoo) 
>> Cc: lng-odp-forward 
>> Subject: Re: [lng-odp] [API-NEXT PATCH 1/4] linux-gen: sched: remove
>> schedule interface depedency to qentry
>>
>> On 7 July 2017 at 01:46, Savolainen, Petri (Nokia - FI/Espoo)
>>  wrote:
>> >> >>  typedef struct schedule_fn_t {
>> >> >> +   int status_sync;
>> >> >
>> >> > this structure should contain functions that are provided by
>> scheduler
>> >> > to other components of ODP. 'status_sync' seems to be an internal
>> >> > mechanism between the default scheduler and default queue. Hence it
>> >> > should not be here.
>> >> >
>> >> Any update on this comment?
>> >
>> > I did answer it already.
>>
>> Ok, found your answer. Should this variable be moved to queue internal
>> structure which is set only for iQuery scheduler?
>>
>> This structure should contain only the functions exposed by the
>> scheduler to other components of ODP. It should not contain anything
>> related to the interface between queue and scheduler (they are being
>> considered as a single module).
>
> These functions are called from queue, so it's queue -> iquery interface 
> (scheduler interface, not queue interface). These functions can be removed 
> (if iquery is not anymore maintained) or moved as a next step, but that's out 
> of scope of this patch set. This set simply removes unused functions and 
> minimizes number of functions calls, but does not move functions from one 
> (interface) file to another. This set removes dependency to those iquery 
> specific functions already from all other code except queue and iquery, so 
> removing / moving those is easier after this is merged.
>

OK. As long as we understand that odp_schedule_if.h should NOT
contain scheduler-queue interface functions/variables (as we consider
them a single module for now), we can do it as a different patch.

> -Petri
>
>
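The separation being argued for above can be sketched as two tables: one that the scheduler exposes to the rest of ODP, and one that is explicitly the internal queue<->scheduler contract. All names here are illustrative, not the actual linux-generic definitions:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Public side: functions any ODP component may call on the scheduler. */
typedef struct {
	int  (*init_queue)(uint32_t queue_index);
	void (*destroy_queue)(uint32_t queue_index);
	int  (*sched_queue)(uint32_t queue_index);
} sched_public_fn_t;

/* Internal side: queue<->scheduler synchronization only. Keeping these
 * in their own table documents that they are not a general API. */
typedef struct {
	void (*unsched_queue)(uint32_t queue_index);
	void (*save_context)(uint32_t queue_index);
} sched_queue_sync_fn_t;

/* A scheduler that needs no synchronization leaves both pointers NULL. */
static int sched_needs_sync(const sched_queue_sync_fn_t *sync)
{
	return sync->unsched_queue != NULL || sync->save_context != NULL;
}

/* Example no-op callback, as a scheduler like iquery might install. */
static void noop_save(uint32_t queue_index)
{
	(void)queue_index;
}
```

With this split, odp_schedule_if.h would carry only `sched_public_fn_t`, while the sync table stays private to the queue and scheduler modules.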


Re: [lng-odp] [API-NEXT PATCH 1/4] linux-gen: sched: remove schedule interface dependency to qentry

2017-07-07 Thread Honnappa Nagarahalli
On 7 July 2017 at 01:46, Savolainen, Petri (Nokia - FI/Espoo)
 wrote:
>> >>  typedef struct schedule_fn_t {
>> >> +   int status_sync;
>> >
>> > this structure should contain functions that are provided by scheduler
>> > to other components of ODP. 'status_sync' seems to be an internal
>> > mechanism between the default scheduler and default queue. Hence it
>> > should not be here.
>> >
>> Any update on this comment?
>
> I did answer it already.

Ok, found your answer. Should this variable be moved to queue internal
structure which is set only for iQuery scheduler?

This structure should contain only the functions exposed by the
scheduler to other components of ODP. It should not contain anything
related to the interface between queue and scheduler (they are being
considered as a single module).


Re: [lng-odp] [API-NEXT PATCH 1/4] linux-gen: sched: remove schedule interface dependency to qentry

2017-07-06 Thread Honnappa Nagarahalli
On 4 July 2017 at 23:04, Honnappa Nagarahalli
 wrote:
> On 30 June 2017 at 09:10, Petri Savolainen  
> wrote:
>> Do not use queue internal type in schedule interface.
>>
>> Signed-off-by: Petri Savolainen 
>> ---
>>  platform/linux-generic/include/odp_schedule_if.h |  8 +++--
>>  platform/linux-generic/odp_queue.c   |  9 --
>>  platform/linux-generic/odp_schedule.c| 18 ---
>>  platform/linux-generic/odp_schedule_iquery.c | 41 
>> +---
>>  platform/linux-generic/odp_schedule_sp.c | 18 +++
>>  5 files changed, 45 insertions(+), 49 deletions(-)
>>
>> diff --git a/platform/linux-generic/include/odp_schedule_if.h 
>> b/platform/linux-generic/include/odp_schedule_if.h
>> index 5877a1cd..5abbb732 100644
>> --- a/platform/linux-generic/include/odp_schedule_if.h
>> +++ b/platform/linux-generic/include/odp_schedule_if.h
>> @@ -35,9 +35,10 @@ typedef int (*schedule_term_local_fn_t)(void);
>>  typedef void (*schedule_order_lock_fn_t)(void);
>>  typedef void (*schedule_order_unlock_fn_t)(void);
>>  typedef unsigned (*schedule_max_ordered_locks_fn_t)(void);
>> -typedef void (*schedule_save_context_fn_t)(queue_entry_t *queue);
>> +typedef void (*schedule_save_context_fn_t)(uint32_t queue_index, void *ptr);
>>
>>  typedef struct schedule_fn_t {
>> +   int status_sync;
>
> this structure should contain functions that are provided by scheduler
> to other components of ODP. 'status_sync' seems to be an internal
> mechanism between the default scheduler and default queue. Hence it
> should not be here.
>
Any update on this comment?

>> schedule_pktio_start_fn_t   pktio_start;
>> schedule_thr_add_fn_t   thr_add;
>> schedule_thr_rem_fn_t   thr_rem;
>> @@ -45,7 +46,6 @@ typedef struct schedule_fn_t {
>> schedule_init_queue_fn_tinit_queue;
>> schedule_destroy_queue_fn_t destroy_queue;
>> schedule_sched_queue_fn_t   sched_queue;
>> -   schedule_unsched_queue_fn_t unsched_queue;
> these queue related functions are not used by other components within
> ODP. These are specific to default queue and default schedulers. These
> should not be part of this file.
>
>> schedule_ord_enq_multi_fn_t ord_enq_multi;
>> schedule_init_global_fn_t   init_global;
>> schedule_term_global_fn_t   term_global;
>> @@ -54,7 +54,11 @@ typedef struct schedule_fn_t {
>> schedule_order_lock_fn_torder_lock;
>> schedule_order_unlock_fn_t  order_unlock;
>> schedule_max_ordered_locks_fn_t max_ordered_locks;
>> +
>> +   /* Called only when status_sync is set */
>> +   schedule_unsched_queue_fn_t unsched_queue;
>> schedule_save_context_fn_t  save_context;
>> +
>>  } schedule_fn_t;
>>
>>  /* Interface towards the scheduler */
>> diff --git a/platform/linux-generic/odp_queue.c 
>> b/platform/linux-generic/odp_queue.c
>> index 19945584..2db95fc6 100644
>> --- a/platform/linux-generic/odp_queue.c
>> +++ b/platform/linux-generic/odp_queue.c
>> @@ -474,6 +474,7 @@ static inline int deq_multi(queue_t q_int, 
>> odp_buffer_hdr_t *buf_hdr[],
>> int i, j;
>> queue_entry_t *queue;
>> int updated = 0;
>> +   int status_sync = sched_fn->status_sync;
>>
>> queue = qentry_from_int(q_int);
>> LOCK(&queue->s.lock);
>> @@ -490,7 +491,9 @@ static inline int deq_multi(queue_t q_int, 
>> odp_buffer_hdr_t *buf_hdr[],
>> /* Already empty queue */
>> if (queue->s.status == QUEUE_STATUS_SCHED) {
>> queue->s.status = QUEUE_STATUS_NOTSCHED;
>> -   sched_fn->unsched_queue(queue->s.index);
>> +
>> +   if (status_sync)
>> +   sched_fn->unsched_queue(queue->s.index);
>> }
>>
>> UNLOCK(&queue->s.lock);
>> @@ -533,8 +536,8 @@ static inline int deq_multi(queue_t q_int, 
>> odp_buffer_hdr_t *buf_hdr[],
>> if (hdr == NULL)
>> queue->s.tail = NULL;
>>
>> -   if (queue->s.type == ODP_QUEUE_TYPE_SCHED)
>> -   sched_fn->save_context(queue);
>> +   if (status_sync && queue->s.type == ODP_QUEUE_TYPE_SCHED)
>> +   sched_fn->save_context(queue->s.index, queue);
>>
>> UNLOCK(&queue->s.lock

Re: [lng-odp] [API-NEXT PATCH 1/4] linux-gen: sched: remove schedule interface dependency to qentry

2017-07-06 Thread Honnappa Nagarahalli
On 5 July 2017 at 01:31, Savolainen, Petri (Nokia - FI/Espoo)
 wrote:
>
>
>> -Original Message-----
>> From: Honnappa Nagarahalli [mailto:honnappa.nagaraha...@linaro.org]
>> Sent: Wednesday, July 05, 2017 7:04 AM
>> To: Petri Savolainen 
>> Cc: lng-odp-forward 
>> Subject: Re: [lng-odp] [API-NEXT PATCH 1/4] linux-gen: sched: remove
>> schedule interface depedency to qentry
>>
>> On 30 June 2017 at 09:10, Petri Savolainen 
>> wrote:
>> > Do not use queue internal type in schedule interface.
>> >
>> > Signed-off-by: Petri Savolainen 
>> > ---
>> >  platform/linux-generic/include/odp_schedule_if.h |  8 +++--
>> >  platform/linux-generic/odp_queue.c   |  9 --
>> >  platform/linux-generic/odp_schedule.c| 18 ---
>> >  platform/linux-generic/odp_schedule_iquery.c | 41 +
>> ---
>> >  platform/linux-generic/odp_schedule_sp.c | 18 +++
>> >  5 files changed, 45 insertions(+), 49 deletions(-)
>> >
>> > diff --git a/platform/linux-generic/include/odp_schedule_if.h
>> b/platform/linux-generic/include/odp_schedule_if.h
>> > index 5877a1cd..5abbb732 100644
>> > --- a/platform/linux-generic/include/odp_schedule_if.h
>> > +++ b/platform/linux-generic/include/odp_schedule_if.h
>> > @@ -35,9 +35,10 @@ typedef int (*schedule_term_local_fn_t)(void);
>> >  typedef void (*schedule_order_lock_fn_t)(void);
>> >  typedef void (*schedule_order_unlock_fn_t)(void);
>> >  typedef unsigned (*schedule_max_ordered_locks_fn_t)(void);
>> > -typedef void (*schedule_save_context_fn_t)(queue_entry_t *queue);
>> > +typedef void (*schedule_save_context_fn_t)(uint32_t queue_index, void
>> *ptr);
>> >
>> >  typedef struct schedule_fn_t {
>> > +   int status_sync;
>>
>> this structure should contain functions that are provided by scheduler
>> to other components of ODP. 'status_sync' seems to be an internal
>> mechanism between the default scheduler and default queue. Hence it
>> should not be here.
>
> This flags if unsched_queue() and save_context() needs to be called. Those 
> calls are only needed by iquery scheduler. With this flag, queue needs to 
> check only single variable if those calls are needed or not. Today, these 
> calls are made always, which hurt performance.
>
>>
>> > schedule_pktio_start_fn_t   pktio_start;
>> > schedule_thr_add_fn_t   thr_add;
>> > schedule_thr_rem_fn_t   thr_rem;
>> > @@ -45,7 +46,6 @@ typedef struct schedule_fn_t {
>> > schedule_init_queue_fn_tinit_queue;
>> > schedule_destroy_queue_fn_t destroy_queue;
>> > schedule_sched_queue_fn_t   sched_queue;
>> > -   schedule_unsched_queue_fn_t unsched_queue;
>> these queue related functions are not used by other components within
>> ODP. These are specific to default queue and default schedulers. These
>> should not be part of this file.
>
> Didn't add or remove those functions in this patch set. This discussion can 
> be done in context of another patch set.
>
Are you just cleaning up the scheduler and queue interface in this
patch? If not, this change can be taken up in this patch.

> -Petri
>
>>
>> > schedule_ord_enq_multi_fn_t ord_enq_multi;
>> > schedule_init_global_fn_t   init_global;
>> > schedule_term_global_fn_t   term_global;
>> > @@ -54,7 +54,11 @@ typedef struct schedule_fn_t {
>> > schedule_order_lock_fn_torder_lock;
>> > schedule_order_unlock_fn_t  order_unlock;
>> > schedule_max_ordered_locks_fn_t max_ordered_locks;
>> > +
>> > +   /* Called only when status_sync is set */
>> > +   schedule_unsched_queue_fn_t unsched_queue;
>> > schedule_save_context_fn_t  save_context;
>> > +
>> >  } schedule_fn_t;
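The performance argument for the status_sync flag is that the queue path tests one cached integer instead of always calling through two function pointers. A stripped-down sketch of that gating; the names are illustrative, not the real deq_multi:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Minimal stand-in for the scheduler function table. */
typedef struct {
	int  status_sync; /* set only by schedulers needing the call below */
	void (*unsched_queue)(uint32_t queue_index);
} sched_fn_sketch_t;

/* Counter so the gating effect is observable in a test. */
static int unsched_calls;

static void count_unsched(uint32_t queue_index)
{
	(void)queue_index;
	unsched_calls++;
}

/* Queue became empty: notify the scheduler only when it asked for
 * status synchronization (as the iquery scheduler would). */
static void queue_became_empty(const sched_fn_sketch_t *fn, uint32_t idx)
{
	if (fn->status_sync)
		fn->unsched_queue(idx);
}
```

A scheduler that leaves status_sync at 0 pays only a single branch per dequeue, instead of an unconditional indirect call.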


Re: [lng-odp] [API-NEXT PATCH 2/4] linux-gen: sched: use config max ordered locks

2017-07-06 Thread Honnappa Nagarahalli
On 5 July 2017 at 01:35, Savolainen, Petri (Nokia - FI/Espoo)
 wrote:
>
>> > diff --git a/platform/linux-generic/include/odp_config_internal.h
>> b/platform/linux-generic/include/odp_config_internal.h
>> > index 3cff0045..469396df 100644
>> > --- a/platform/linux-generic/include/odp_config_internal.h
>> > +++ b/platform/linux-generic/include/odp_config_internal.h
>> > @@ -27,7 +27,7 @@
>> >  /*
>> >   * Maximum number of ordered locks per queue
>> >   */
>> > -#define CONFIG_QUEUE_MAX_ORD_LOCKS 4
>> > +#define CONFIG_QUEUE_MAX_ORD_LOCKS 2
>>
>> This is unnecessary change for this patch. This patch does not need this
>> change.
>
> With this value (2), the current situation does not change. Internal defines 
> limited the number into 2, so this keeps it 2.

This change affects other implementations of the scheduler. If the
default scheduler is implemented independently of the value of
CONFIG_QUEUE_MAX_ORD_LOCKS, this change is not required and should
not be necessary for this patch.

>
> -Petri


Re: [lng-odp] [PATCH 7/7] linux-gen: dpdk: enable zero-copy operation

2017-07-05 Thread Honnappa Nagarahalli
On 3 July 2017 at 07:01, Matias Elo  wrote:
> Implements experimental zero-copy mode for DPDK pktio. This can be enabled
> with additional '--enable-dpdk-zero-copy' configure flag.
>
> This feature has been put behind an extra configure flag as it doesn't
> entirely adhere to the DPDK API and may behave unexpectedly with untested
> DPDK NIC drivers. Zero-copy operation has been tested with pcap, ixgbe, and
> i40e drivers.
>

Can you elaborate more on this? Which parts do not adhere to the DPDK API?

> Signed-off-by: Matias Elo 
> ---
>  .../linux-generic/include/odp_buffer_internal.h|   2 +-
>  .../linux-generic/include/odp_packet_internal.h|  13 +
>  platform/linux-generic/include/odp_pool_internal.h |   4 +
>  platform/linux-generic/m4/odp_dpdk.m4  |  14 +-
>  platform/linux-generic/odp_pool.c  |   2 +
>  platform/linux-generic/pktio/dpdk.c| 676 
> -
>  6 files changed, 562 insertions(+), 149 deletions(-)
>
> diff --git a/platform/linux-generic/include/odp_buffer_internal.h 
> b/platform/linux-generic/include/odp_buffer_internal.h
> index 076abe9..78ea527 100644
> --- a/platform/linux-generic/include/odp_buffer_internal.h
> +++ b/platform/linux-generic/include/odp_buffer_internal.h
> @@ -109,7 +109,7 @@ struct odp_buffer_hdr_t {
>
> /* Data or next header */
> uint8_t data[0];
> -};
> +} ODP_ALIGNED_CACHE;
>
>  ODP_STATIC_ASSERT(CONFIG_PACKET_MAX_SEGS < 256,
>   "CONFIG_PACKET_MAX_SEGS_TOO_LARGE");
> diff --git a/platform/linux-generic/include/odp_packet_internal.h 
> b/platform/linux-generic/include/odp_packet_internal.h
> index 11f2fdc..78569b6 100644
> --- a/platform/linux-generic/include/odp_packet_internal.h
> +++ b/platform/linux-generic/include/odp_packet_internal.h
> @@ -92,6 +92,12 @@ typedef struct {
> uint32_t l4_offset; /**< offset to L4 hdr (TCP, UDP, SCTP, also ICMP) 
> */
>  } packet_parser_t;
>
> +/* Packet extra data length */
> +#define PKT_EXTRA_LEN 128
> +
> +/* Packet extra data types */
> +#define PKT_EXTRA_TYPE_DPDK 1
> +
>  /**
>   * Internal Packet header
>   *
> @@ -131,6 +137,13 @@ typedef struct {
> /* Result for crypto */
> odp_crypto_generic_op_result_t op_result;
>
> +#ifdef ODP_PKTIO_DPDK
> +   /* Type of extra data */
> +   uint8_t extra_type;
> +   /* Extra space for packet descriptors. E.g. DPDK mbuf  */
> +   uint8_t extra[PKT_EXTRA_LEN] ODP_ALIGNED_CACHE;
> +#endif
> +
> /* Packet data storage */
> uint8_t data[0];
>  } odp_packet_hdr_t;
> diff --git a/platform/linux-generic/include/odp_pool_internal.h 
> b/platform/linux-generic/include/odp_pool_internal.h
> index ebb779d..acea079 100644
> --- a/platform/linux-generic/include/odp_pool_internal.h
> +++ b/platform/linux-generic/include/odp_pool_internal.h
> @@ -68,6 +68,10 @@ typedef struct pool_t {
> uint8_t *base_addr;
> uint8_t *uarea_base_addr;
>
> +   /* Used by DPDK zero-copy pktio */
> +   void*ext_desc;
> +   uint16_t ext_ref_count;
> +
> pool_cache_t local_cache[ODP_THREAD_COUNT_MAX];
>
> odp_shm_tring_shm;
> diff --git a/platform/linux-generic/m4/odp_dpdk.m4 
> b/platform/linux-generic/m4/odp_dpdk.m4
> index 58d1472..edcc4c8 100644
> --- a/platform/linux-generic/m4/odp_dpdk.m4
> +++ b/platform/linux-generic/m4/odp_dpdk.m4
> @@ -9,6 +9,16 @@ AC_HELP_STRING([--with-dpdk-path=DIR   path to dpdk build 
> directory]),
>  pktio_dpdk_support=yes],[])
>
>  ##
> +# Enable zero-copy DPDK pktio
> +##
> +zero_copy=0
> +AC_ARG_ENABLE([dpdk-zero-copy],
> +[  --enable-dpdk-zero-copy  enable experimental zero-copy DPDK pktio 
> mode],
> +[if test x$enableval = xyes; then
> +zero_copy=1
> +fi])
> +
> +##
>  # Save and set temporary compilation flags
>  ##
>  OLD_CPPFLAGS=$CPPFLAGS
> @@ -38,9 +48,9 @@ then
>  done
>  DPDK_PMD+=--no-whole-archive
>
> -ODP_CFLAGS="$ODP_CFLAGS -DODP_PKTIO_DPDK"
> +ODP_CFLAGS="$ODP_CFLAGS -DODP_PKTIO_DPDK -DODP_DPDK_ZERO_COPY=$zero_copy"
>  AM_LDFLAGS="$AM_LDFLAGS -L$DPDK_PATH/lib -Wl,$DPDK_PMD"
> -LIBS="$LIBS -ldpdk -ldl -lpcap"
> +LIBS="$LIBS -ldpdk -ldl -lpcap -lm"
>  else
>  pktio_dpdk_support=no
>  fi
> diff --git a/platform/linux-generic/odp_pool.c 
> b/platform/linux-generic/odp_pool.c
> index 5360b94..8a27c8a 100644
> --- a/platform/linux-generic/odp_pool.c
> +++ b/platform/linux-generic/odp_pool.c
> @@ -395,6 +395,8 @@ static odp_pool_t pool_create(const char *name, 
> odp_pool_param_t *params,
> pool->uarea_size = uarea_size;
> pool->shm_size   = num * block_size;
> pool->ua

Re: [lng-odp] [PATCH CLOUD-DEV v1] [RFC 1/2] odp: add modular framework

2017-07-04 Thread Honnappa Nagarahalli
Hi Yi,
The scalable scheduler added a lock-less linked list (lock-less on
Arm, a spin lock on x86) in odp_llqueue.h. A lock-less implementation
can be more helpful than a reader-writer lock. You might want to look
at that.

Thanks,
Honnappa

On 3 July 2017 at 03:55, Yi He  wrote:
> Yes, Dmitry and community
>
> The modular framework minimizes dependencies to linked-list and rwlock
> facilities only, and by reusing Linux kernel list.h and alias rwlock APIs
> to ODP rwlock implementation, it really makes the framework so concise
> (only one source module.c).
>
> I'd like to ask how do we typically solve this kind of license problem (GPL
> vs 3-clause BSD license)? Or I can find a BSD list.h to use.
>
> Thanks and Best Regards, Yi
>
>
> On 3 July 2017 at 16:24, Dmitry Eremin-Solenikov <
> dmitry.ereminsoleni...@linaro.org> wrote:
>
>> On 03.07.2017 11:00, Github ODP bot wrote:
>> > From: Yi He 
>> >
>> > Add modular programming framework to support runtime selectable
>> > implementations for variant software subsystems.
>> >
>> > Signed-off-by: Yi He 
>> > ---
>> > /** Email created from pull request 65 (heyi-linaro:modular-framework)
>> >  ** https://github.com/Linaro/odp/pull/65
>> >  ** Patch: https://github.com/Linaro/odp/pull/65.patch
>> >  ** Base sha: 1ba26aa5650c05718c177842178de6d0f70b7fc1
>> >  ** Merge commit sha: f0f96a26d22e16b9299777cd413dfd6ae89a024e
>> >  **/
>> >  modular-framework/list.h   | 620
>> +
>> >  modular-framework/module.c | 158 ++
>> >  modular-framework/module.h | 205 +++
>> >  modular-framework/rwlock.h |  88 +++
>> >  platform/linux-generic/Makefile.am |  11 +
>> >  .../include/odp/api/plat/atomic_types.h|   2 +
>> >  .../include/odp/api/plat/rwlock_types.h|   2 +
>> >  7 files changed, 1086 insertions(+)
>> >  create mode 100644 modular-framework/list.h
>> >  create mode 100644 modular-framework/module.c
>> >  create mode 100644 modular-framework/module.h
>> >  create mode 100644 modular-framework/rwlock.h
>> >
>> > diff --git a/modular-framework/list.h b/modular-framework/list.h
>> > new file mode 100644
>> > index ..366d9f38
>> > --- /dev/null
>> > +++ b/modular-framework/list.h
>> > @@ -0,0 +1,620 @@
>> > +#ifndef _LINUX_LIST_H
>> > +#define _LINUX_LIST_H
>>
>> This file is licensed under GPL, while the rest of ODP is licensed under
>> 3-clause BSD license.
>>
>> > +
>> > +#include 
>> > +
>> > +#if defined(__STDC__)
>> > +#define typeof __typeof__
>> > +#endif
>>
>>
>> --
>> With best wishes
>> Dmitry
>>
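The lock-less alternative Honnappa points at boils down to CAS retry loops instead of taking a reader-writer lock. A tiny single-linked stack in C11 atomics, for illustration only; the real odp_llqueue.h is more involved (ABA handling, stronger ordering guarantees):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

typedef struct node {
	struct node *next;
	int          value;
} node_t;

typedef struct {
	_Atomic(node_t *) head;
} ll_stack_t;

/* Lock-free push: retry the CAS until our node becomes the new head. */
static void ll_push(ll_stack_t *s, node_t *n)
{
	node_t *old = atomic_load_explicit(&s->head, memory_order_relaxed);

	do {
		n->next = old;
	} while (!atomic_compare_exchange_weak_explicit(&s->head, &old, n,
							memory_order_release,
							memory_order_relaxed));
}

/* Lock-free pop; a production version must also deal with ABA. */
static node_t *ll_pop(ll_stack_t *s)
{
	node_t *old = atomic_load_explicit(&s->head, memory_order_acquire);

	while (old && !atomic_compare_exchange_weak_explicit(&s->head, &old,
							     old->next,
							     memory_order_acquire,
							     memory_order_acquire))
		;
	return old;
}
```

Readers and writers both make progress without blocking each other, which is the advantage over a reader-writer lock in the module-lookup path.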


Re: [lng-odp] [API-NEXT PATCH 4/4] linux-gen: sched: remove unused sched interface functions

2017-07-04 Thread Honnappa Nagarahalli
On 30 June 2017 at 09:10, Petri Savolainen  wrote:
> Removed functions that are no longer used. Also removed unused
> parameter from save_context function.
>
> Signed-off-by: Petri Savolainen 
> ---
>  platform/linux-generic/include/odp_schedule_if.h |  7 +--
>  platform/linux-generic/odp_queue.c   | 21 +
>  platform/linux-generic/odp_schedule_iquery.c |  4 +---
>  3 files changed, 3 insertions(+), 29 deletions(-)
>
> diff --git a/platform/linux-generic/include/odp_schedule_if.h 
> b/platform/linux-generic/include/odp_schedule_if.h
> index 5abbb732..b514c88a 100644
> --- a/platform/linux-generic/include/odp_schedule_if.h
> +++ b/platform/linux-generic/include/odp_schedule_if.h
> @@ -35,7 +35,7 @@ typedef int (*schedule_term_local_fn_t)(void);
>  typedef void (*schedule_order_lock_fn_t)(void);
>  typedef void (*schedule_order_unlock_fn_t)(void);
>  typedef unsigned (*schedule_max_ordered_locks_fn_t)(void);
> -typedef void (*schedule_save_context_fn_t)(uint32_t queue_index, void *ptr);
> +typedef void (*schedule_save_context_fn_t)(uint32_t queue_index);
>
>  typedef struct schedule_fn_t {
> int status_sync;
> @@ -68,11 +68,6 @@ extern const schedule_fn_t *sched_fn;
>  int sched_cb_pktin_poll(int pktio_index, int num_queue, int index[]);
>  void sched_cb_pktio_stop_finalize(int pktio_index);
>  int sched_cb_num_pktio(void);
> -int sched_cb_num_queues(void);
> -int sched_cb_queue_prio(uint32_t queue_index);
> -int sched_cb_queue_grp(uint32_t queue_index);
> -int sched_cb_queue_is_ordered(uint32_t queue_index);
> -int sched_cb_queue_is_atomic(uint32_t queue_index);
These changes are captured by Joyce's patch already.

>  odp_queue_t sched_cb_queue_handle(uint32_t queue_index);
>  void sched_cb_queue_destroy_finalize(uint32_t queue_index);
>  int sched_cb_queue_deq_multi(uint32_t queue_index, odp_event_t ev[], int 
> num);
> diff --git a/platform/linux-generic/odp_queue.c 
> b/platform/linux-generic/odp_queue.c
> index d907779b..4c85027b 100644
> --- a/platform/linux-generic/odp_queue.c
> +++ b/platform/linux-generic/odp_queue.c
> @@ -520,7 +520,7 @@ static inline int deq_multi(queue_t q_int, 
> odp_buffer_hdr_t *buf_hdr[],
> queue->s.tail = NULL;
>
> if (status_sync && queue->s.type == ODP_QUEUE_TYPE_SCHED)
> -   sched_fn->save_context(queue->s.index, queue);
> +   sched_fn->save_context(queue->s.index);
>
> UNLOCK(&queue->s.lock);
>
> @@ -672,25 +672,6 @@ static int queue_info(odp_queue_t handle, 
> odp_queue_info_t *info)
> return 0;
>  }
>
> -int sched_cb_num_queues(void)
> -{
> -   return ODP_CONFIG_QUEUES;
> -}
> -
> -int sched_cb_queue_prio(uint32_t queue_index)
> -{
> -   queue_entry_t *qe = get_qentry(queue_index);
> -
> -   return qe->s.param.sched.prio;
> -}
> -
> -int sched_cb_queue_grp(uint32_t queue_index)
> -{
> -   queue_entry_t *qe = get_qentry(queue_index);
> -
> -   return qe->s.param.sched.group;
> -}
> -
>  odp_queue_t sched_cb_queue_handle(uint32_t queue_index)
>  {
> return queue_from_id(queue_index);
> diff --git a/platform/linux-generic/odp_schedule_iquery.c 
> b/platform/linux-generic/odp_schedule_iquery.c
> index f315a4f0..9605edc7 100644
> --- a/platform/linux-generic/odp_schedule_iquery.c
> +++ b/platform/linux-generic/odp_schedule_iquery.c
> @@ -1308,10 +1308,8 @@ static inline bool is_ordered_queue(unsigned int 
> queue_index)
> return (sched->queues[queue_index].sync == ODP_SCHED_SYNC_ORDERED);
>  }
>
> -static void schedule_save_context(uint32_t queue_index, void *ptr)
> +static void schedule_save_context(uint32_t queue_index)
>  {
> -   (void)ptr;
> -
> if (is_atomic_queue(queue_index)) {
> thread_local.atomic = &sched->availables[queue_index];
> } else if (is_ordered_queue(queue_index)) {
> --
> 2.13.0
>


Re: [lng-odp] [API-NEXT PATCH 2/4] linux-gen: sched: use config max ordered locks

2017-07-04 Thread Honnappa Nagarahalli
On 30 June 2017 at 09:10, Petri Savolainen  wrote:
> Use config file value for the number of ordered locks
> everywhere.
>
> Signed-off-by: Petri Savolainen 
> ---
>  platform/linux-generic/include/odp_config_internal.h | 2 +-
>  platform/linux-generic/odp_schedule.c| 8 +---
>  platform/linux-generic/odp_schedule_iquery.c | 8 +---
>  3 files changed, 3 insertions(+), 15 deletions(-)
>
> diff --git a/platform/linux-generic/include/odp_config_internal.h 
> b/platform/linux-generic/include/odp_config_internal.h
> index 3cff0045..469396df 100644
> --- a/platform/linux-generic/include/odp_config_internal.h
> +++ b/platform/linux-generic/include/odp_config_internal.h
> @@ -27,7 +27,7 @@
>  /*
>   * Maximum number of ordered locks per queue
>   */
> -#define CONFIG_QUEUE_MAX_ORD_LOCKS 4
> +#define CONFIG_QUEUE_MAX_ORD_LOCKS 2

This is an unnecessary change for this patch; the patch does not need it.

>
>  /*
>   * Maximum number of packet IO resources
> diff --git a/platform/linux-generic/odp_schedule.c 
> b/platform/linux-generic/odp_schedule.c
> index 69de7ac0..53670a71 100644
> --- a/platform/linux-generic/odp_schedule.c
> +++ b/platform/linux-generic/odp_schedule.c
> @@ -121,12 +121,6 @@ ODP_STATIC_ASSERT((8 * sizeof(pri_mask_t)) >= 
> QUEUES_PER_PRIO,
>  /* Maximum number of dequeues */
>  #define MAX_DEQ CONFIG_BURST_SIZE
>
> -/* Maximum number of ordered locks per queue */
> -#define MAX_ORDERED_LOCKS_PER_QUEUE 2
> -
> -ODP_STATIC_ASSERT(MAX_ORDERED_LOCKS_PER_QUEUE <= CONFIG_QUEUE_MAX_ORD_LOCKS,
> - "Too_many_ordered_locks");
> -
>  /* Ordered stash size */
>  #define MAX_ORDERED_STASH 512
>
> @@ -449,7 +443,7 @@ static inline int grp_update_tbl(void)
>
>  static unsigned schedule_max_ordered_locks(void)
>  {
> -   return MAX_ORDERED_LOCKS_PER_QUEUE;
> +   return CONFIG_QUEUE_MAX_ORD_LOCKS;
>  }
>
>  static inline int queue_per_prio(uint32_t queue_index)
> diff --git a/platform/linux-generic/odp_schedule_iquery.c 
> b/platform/linux-generic/odp_schedule_iquery.c
> index 75f56e63..8d8dcc29 100644
> --- a/platform/linux-generic/odp_schedule_iquery.c
> +++ b/platform/linux-generic/odp_schedule_iquery.c
> @@ -148,12 +148,6 @@ typedef struct {
> odp_event_t stash[MAX_DEQ], *top;
>  } event_cache_t;
>
> -/* Maximum number of ordered locks per queue */
> -#define MAX_ORDERED_LOCKS_PER_QUEUE 2
> -
> -ODP_STATIC_ASSERT(MAX_ORDERED_LOCKS_PER_QUEUE <= CONFIG_QUEUE_MAX_ORD_LOCKS,
> - "Too_many_ordered_locks");
> -
>  /* Ordered stash size */
>  #define MAX_ORDERED_STASH 512
>
> @@ -1266,7 +1260,7 @@ static void schedule_order_unlock(unsigned lock_index)
>
>  static unsigned schedule_max_ordered_locks(void)
>  {
> -   return MAX_ORDERED_LOCKS_PER_QUEUE;
> +   return CONFIG_QUEUE_MAX_ORD_LOCKS;
>  }
>
>  static inline bool is_atomic_queue(unsigned int queue_index)
> --
> 2.13.0
>


Re: [lng-odp] [API-NEXT PATCH 1/4] linux-gen: sched: remove schedule interface depedency to qentry

2017-07-04 Thread Honnappa Nagarahalli
On 30 June 2017 at 09:10, Petri Savolainen  wrote:
> Do not use queue internal type in schedule interface.
>
> Signed-off-by: Petri Savolainen 
> ---
>  platform/linux-generic/include/odp_schedule_if.h |  8 +++--
>  platform/linux-generic/odp_queue.c   |  9 --
>  platform/linux-generic/odp_schedule.c| 18 ---
>  platform/linux-generic/odp_schedule_iquery.c | 41 
> +---
>  platform/linux-generic/odp_schedule_sp.c | 18 +++
>  5 files changed, 45 insertions(+), 49 deletions(-)
>
> diff --git a/platform/linux-generic/include/odp_schedule_if.h 
> b/platform/linux-generic/include/odp_schedule_if.h
> index 5877a1cd..5abbb732 100644
> --- a/platform/linux-generic/include/odp_schedule_if.h
> +++ b/platform/linux-generic/include/odp_schedule_if.h
> @@ -35,9 +35,10 @@ typedef int (*schedule_term_local_fn_t)(void);
>  typedef void (*schedule_order_lock_fn_t)(void);
>  typedef void (*schedule_order_unlock_fn_t)(void);
>  typedef unsigned (*schedule_max_ordered_locks_fn_t)(void);
> -typedef void (*schedule_save_context_fn_t)(queue_entry_t *queue);
> +typedef void (*schedule_save_context_fn_t)(uint32_t queue_index, void *ptr);
>
>  typedef struct schedule_fn_t {
> +   int status_sync;

This structure should contain functions that the scheduler provides to
other components of ODP. 'status_sync' appears to be an internal
mechanism between the default scheduler and the default queue; hence it
should not be here.

> schedule_pktio_start_fn_t   pktio_start;
> schedule_thr_add_fn_t   thr_add;
> schedule_thr_rem_fn_t   thr_rem;
> @@ -45,7 +46,6 @@ typedef struct schedule_fn_t {
> schedule_init_queue_fn_tinit_queue;
> schedule_destroy_queue_fn_t destroy_queue;
> schedule_sched_queue_fn_t   sched_queue;
> -   schedule_unsched_queue_fn_t unsched_queue;
These queue-related functions are not used by other components within
ODP; they are specific to the default queue and the default schedulers
and should not be part of this file.

> schedule_ord_enq_multi_fn_t ord_enq_multi;
> schedule_init_global_fn_t   init_global;
> schedule_term_global_fn_t   term_global;
> @@ -54,7 +54,11 @@ typedef struct schedule_fn_t {
> schedule_order_lock_fn_torder_lock;
> schedule_order_unlock_fn_t  order_unlock;
> schedule_max_ordered_locks_fn_t max_ordered_locks;
> +
> +   /* Called only when status_sync is set */
> +   schedule_unsched_queue_fn_t unsched_queue;
> schedule_save_context_fn_t  save_context;
> +
>  } schedule_fn_t;
>
>  /* Interface towards the scheduler */
> diff --git a/platform/linux-generic/odp_queue.c 
> b/platform/linux-generic/odp_queue.c
> index 19945584..2db95fc6 100644
> --- a/platform/linux-generic/odp_queue.c
> +++ b/platform/linux-generic/odp_queue.c
> @@ -474,6 +474,7 @@ static inline int deq_multi(queue_t q_int, 
> odp_buffer_hdr_t *buf_hdr[],
> int i, j;
> queue_entry_t *queue;
> int updated = 0;
> +   int status_sync = sched_fn->status_sync;
>
> queue = qentry_from_int(q_int);
> LOCK(&queue->s.lock);
> @@ -490,7 +491,9 @@ static inline int deq_multi(queue_t q_int, 
> odp_buffer_hdr_t *buf_hdr[],
> /* Already empty queue */
> if (queue->s.status == QUEUE_STATUS_SCHED) {
> queue->s.status = QUEUE_STATUS_NOTSCHED;
> -   sched_fn->unsched_queue(queue->s.index);
> +
> +   if (status_sync)
> +   sched_fn->unsched_queue(queue->s.index);
> }
>
> UNLOCK(&queue->s.lock);
> @@ -533,8 +536,8 @@ static inline int deq_multi(queue_t q_int, 
> odp_buffer_hdr_t *buf_hdr[],
> if (hdr == NULL)
> queue->s.tail = NULL;
>
> -   if (queue->s.type == ODP_QUEUE_TYPE_SCHED)
> -   sched_fn->save_context(queue);
> +   if (status_sync && queue->s.type == ODP_QUEUE_TYPE_SCHED)
> +   sched_fn->save_context(queue->s.index, queue);
>
> UNLOCK(&queue->s.lock);
>
> diff --git a/platform/linux-generic/odp_schedule.c 
> b/platform/linux-generic/odp_schedule.c
> index 814746c7..69de7ac0 100644
> --- a/platform/linux-generic/odp_schedule.c
> +++ b/platform/linux-generic/odp_schedule.c
> @@ -22,9 +22,11 @@
>  #include 
>  #include 
>  #include 
> -#include 
>  #include 
>
> +/* Should remove this dependency */
> +#include 
> +
>  /* Number of priority levels  */
>  #define NUM_PRIO 8
>
> @@ -1340,22 +1342,14 @@ static int schedule_sched_queue(uint32_t queue_index)
> return 0;
>  }
>
> -static int schedule_unsched_queue(uint32_t queue_index ODP_UNUSED)
> -{
> -   return 0;
> -}
> -
>  static int schedule_num_grps(void)
>  {
> return NUM_SCHED_GRPS;
>  }
>
> -static void schedule_save_context(queue_entry_t *queue ODP_UNUSED)
> -{
> -}
> -
>  /* Fill in scheduler 

Re: [lng-odp] [PATCHv2] linux-gen: scheduler: modular scheduler interface

2017-07-04 Thread Honnappa Nagarahalli
The intention of this patch is to clean up the odp_schedule_if.h file
alone; no other cleanup is being done. This file contains function
declarations that should not be in it, for example pktio functions and
queue functions that exist only for default-scheduler/default-queue
interaction.

If this goal is not clear the commit message can be changed.

As agreed in the last call, instead of moving the pkt I/O functions to
pkt I/O internal header file, we will explore creating a new file
odp_packet_io_if.h and place the function declarations in that file.

The queue internal functions will be moved to odp_queue_internal.h as
has been done in this patch.

On 22 June 2017 at 16:34, Brian Brooks  wrote:
> On 06/29 12:08:49, Savolainen, Petri (Nokia - FI/Espoo) wrote:
>>
>>
>> > -Original Message-
>> > From: Brian Brooks [mailto:brian.bro...@arm.com]
>> > Sent: Wednesday, June 28, 2017 5:17 PM
>> > To: Savolainen, Petri (Nokia - FI/Espoo) 
>> > Cc: Joyce Kong ; lng-odp@lists.linaro.org
>> > Subject: Re: [lng-odp] [PATCHv2] linux-gen: scheduler: modular scheduler
>> > interface
>> >
>> > On 06/28 07:24:08, Savolainen, Petri (Nokia - FI/Espoo) wrote:
>> > >
>> > >
>> > > > -Original Message-
>> > > > From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of
>> > Joyce
>> > > > Kong
>> > > > Sent: Wednesday, June 28, 2017 5:14 AM
>> > > > To: lng-odp@lists.linaro.org
>> > > > Cc: Joyce Kong 
>> > > > Subject: [lng-odp] [PATCHv2] linux-gen: scheduler: modular scheduler
>> > > > interface
>> > > >
>> > > > The modular scheduler interface in odp_schedule_if.h includes
>> > functions
>> > > > from pktio and queue. It needs to be cleaned out. The pktio/queue
>> > related
>> > > > functions should be moved to pktio/queue internal header file.
>> > >
>> > > Sched_cb_xxx() functions are the interface from a scheduler towards
>> > other parts of the system. So, those calls are not in the scheduler
>> > interface file by mistake.
>> >
>> > Generally speaking, function declarations should exist in the .h
>> > file of the .c file that contains the definitions. These functions
>> > are defined in queue.c and called from schedule.c. The declarations
>> > should be in queue.h. It can be as simple as that.
>>
>> We try to enforce a tight scheduler interface (== currently 
>> odp_schedule_if.h). I decided to create only one file for simplicity. There 
>> could be additional files to define each output direction *interface* from a 
>> scheduler (queue_if_for_scheduler.h, pktio_if_for_scheduler.h, 
>> timer_if_for_scheduler.h) which includes only those functions that a 
>> scheduler may use. Moving things back to odp_xxx_internal.h would bring back 
>> the problem of clearly defining what functions (or types) a scheduler may 
>> use. Without a clear interface file, each developer gets creative and accesses 
>> different parts of the system from different schedulers with different (e.g. 
>> locking) expectations ... and we are back in the original mess.
>
> It doesn't need to be that tight, controlled, or unfamiliar.
>
>   sched_cb_num_queues()
>   sched_cb_queue_prio()
>   sched_cb_queue_grp()
>   sched_cb_is_ordered()
>   sched_cb_is_atomic()
>   sched_cb_queue_handle()
>
> are safe if called from any other component, and s/sched_cb/queue/.
>
>   sched_cb_queue_destroy_finalize()
>   sched_cb_queue_deq_multi()
>
> can be something like:
>
>   struct sched_queue_ops {
> void (*sched_queue_destroy)(u32 qi);
> int  (*sched_queue_deq_multi)(u32 qi, ..);
>   };
>
>   void register_sched_queue_op(struct sched_queue_ops *ops)
>   {
> ..
>   }
>
> I don't think this needs to be done for pktio because pktio
> itself is already an abstraction layer over multiple pktio
> implementations. I'm not sure the existing queue abstraction
> layer supports the above functions, but they could. So, scratch
> the above example.
>
> The pktio functions just need to be renamed and moved to internal
> pktio.h file since they are defined there and called by the scheduler.
> It won't be the end of the world if some other component calls
> pktin_poll(). That would certainly be interesting and covered in
> code reviews. Perhaps it even makes sense to do that in a different
> software architecture.
>
>> > And, usually 'cb' or 'callback' is used in naming a function that is
>> > called through a function pointer. So, the naming is off here as well
>> > since these functions are always called via a direct function call.
>>
>> We need common prefixes for input and output direction scheduler interface 
>> calls. I did pick up sched_cb_ to make distinction from sched_fn, which is 
>> the input direction. Those are just names.
>
> That is combining the interfaces of two components into one and
> calling it the scheduler interface.
>
>> >
>> > If you have to caution the user as to why something is written the
>> > way it is, and that it is not a mistake, it better be a juicy topic.
>> > Following best practices such as placing functio

[lng-odp] Integrating DPDK with DDF

2017-06-24 Thread Honnappa Nagarahalli
Hi,
   I have updated the document with our discussion from the 6/22 call.
I have provided more details about my thoughts on the memory pool;
hopefully it is clear. If not, please provide your comments and I will
try to resolve them.

https://docs.google.com/a/linaro.org/document/d/1ocLcJLeHghkaUG5Lq8Cy0B5DgZoNZu1lBMtWCoogPOc/edit?usp=sharing

Thank you,
Honnappa


Re: [lng-odp] [API-NEXT PATCH v10 0/6] A scalable software scheduler

2017-06-24 Thread Honnappa Nagarahalli
Hi Maxim,
This patch fixes the distcheck issues you saw in Travis CI. Bill
is on vacation, and I am not sure whether anyone else will download and
test this. The previous patch sets have been tested to a good extent by
Bill. Can you merge it without his testing this time?

Thank you,
Honnappa

On 23 June 2017 at 16:04, Brian Brooks  wrote:
> This work derives from Ola Liljedahl's prototype [1] which introduced a
> scalable scheduler design based on primarily lock-free algorithms and
> data structures designed to decrease contention. A thread searches
> through a data structure containing only queues that are both non-empty
> and allowed to be scheduled to that thread. Strict priority scheduling is
> respected, and (W)RR scheduling may be used within queues of the same 
> priority.
> Lastly, pre-scheduling or stashing is not employed since it is optional
> functionality that can be implemented in the application.
>
> In addition to scalable ring buffers, the algorithm also uses unbounded
> concurrent queues. LL/SC and CAS variants exist in cases where absence of
> ABA problem cannot be proved, and also in cases where the compiler's atomic
> built-ins may not be lowered to the desired instruction(s). Finally, a version
> of the algorithm that uses locks is also provided.
>
> Use --enable-schedule-scalable to conditionally compile this scheduler
> into the library.
>
> [1] https://lists.linaro.org/pipermail/lng-odp/2016-September/025682.html
>
> On checkpatch.pl:
>  - [2/6] and [5/6] have checkpatch.pl issues that are superfluous
>
> v10:
>  - Rebase against fixes for conditional compilation of arch sources
>  - Add Linaro copyright
>  - Support legacy compilers that do not support ARM ACLE
>  - Remove inclusion of odp_schedule_config.h
>  - Revert driver shm block change
>  - Use ordered lock count #define
>
> v9:
>  - Include patch to enable scalable scheduler in Travis CI
>  - Fix 'make distcheck'
>
> v8:
>  - Reword commit messages
>
> v7:
>  - Rebase against new modular queue interface
>  - Duplicate arch files under mips64 and powerpc
>  - Fix sched->order_lock()
>  - Loop until all deferred events have been enqueued
>  - Implement ord_enq_multi()
>  - Fix ordered_lock/unlock
>  - Revert stylistic changes
>  - Add default xfactor
>  - Remove changes to odp_sched_latency
>  - Remove ULL suffix to alleviate Clang build
>
> v6:
>  - Move conversions into scalable scheduler to alleviate #ifdefs
>  - Remove unnecessary prefetch
>  - Fix ARMv8 build
>
> v5:
>  - Allocate cache aligned memory using shm pool APIs
>  - Move more code to scalable scheduler specific files
>  - Remove CONFIG_SPLIT_READWRITE
>  - Fix 'make distcheck' issue
>
> v4:
>  - Fix a couple more checkpatch.pl issues
>
> v3:
>  - Only conditionally compile scalable scheduler and queue
>  - Move some code to arch/ dir
>  - Use a single shm block for queues instead of block-per-queue
>  - De-interleave odp_llqueue.h
>  - Use compiler macros to determine ATOM_BITSET_SIZE
>  - Incorporated queue size changes
>  - Dropped 'ODP_' prefix on config and moved to other files
>  - Dropped a few patches that were send independently to the list
>
> v2:
>  - Move ARMv8 issues and other fixes into separate patches
>  - Abstract away some #ifdefs
>  - Fix some checkpatch.pl warnings
>
> Brian Brooks (5):
>   test: odp_pktio_ordered: add queue size
>   linux-gen: sched scalable: add arch files
>   linux-gen: sched scalable: add a bitset
>   linux-gen: sched scalable: add a concurrent queue
>   linux-gen: sched scalable: add scalable scheduler
>
> Honnappa Nagarahalli (1):
>   travis: add scalable scheduler in CI
>
>  .travis.yml|1 +
>  platform/linux-generic/Makefile.am |   26 +
>  platform/linux-generic/arch/arm/odp_atomic.h   |  212 +++
>  platform/linux-generic/arch/arm/odp_cpu.h  |   71 +
>  platform/linux-generic/arch/arm/odp_cpu_idling.h   |   53 +
>  platform/linux-generic/arch/arm/odp_llsc.h |  253 +++
>  platform/linux-generic/arch/default/odp_cpu.h  |   43 +
>  platform/linux-generic/arch/mips64/odp_cpu.h   |   43 +
>  platform/linux-generic/arch/powerpc/odp_cpu.h  |   43 +
>  platform/linux-generic/arch/x86/odp_cpu.h  |   43 +
>  .../include/odp/api/plat/schedule_types.h  |4 +-
>  platform/linux-generic/include/odp_bitset.h|  212 +++
>  .../linux-generic/include/odp_config_internal.h|   15 +-
>  platform/linux-generic/include/odp_llqueue.h   |  311 +++
>  .../include/odp_queue_scalable_internal.h  |  104 +
>  platform/linux-generic/include/odp_schedule

[lng-odp] Vacation from 6/26 to 6/30

2017-06-23 Thread Honnappa Nagarahalli
Hi,
   I will be on vacation from 6/26 to 6/30. I will not have any email/IM access.

Thank you,
Honnappa


Re: [lng-odp] [PATCH API-NEXT v1 1/1] linux-generic: queue: modular queue interface

2017-06-22 Thread Honnappa Nagarahalli
On 22 June 2017 at 02:48, Savolainen, Petri (Nokia - FI/Espoo)
 wrote:
>
>
>> -Original Message-
>> From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of
>> Honnappa Nagarahalli
>> Sent: Thursday, June 22, 2017 12:20 AM
>> To: Maxim Uvarov 
>> Cc: lng-odp-forward 
>> Subject: Re: [lng-odp] [PATCH API-NEXT v1 1/1] linux-generic: queue:
>> modular queue interface
>>
>> Agree, will rebase and update once we have the scalable queues merged.
>>
>> On 21 June 2017 at 14:53, Maxim Uvarov  wrote:
>> > On 06/19/17 08:00, Github ODP bot wrote:
>> >>  log| 1269
>> 
>> >
>> > log has not be a part of patch.
>> >
>> > Maxim.
>
> To be sure, I repeat my opinion about this RFC again. The internal queue 
> pointer (or table index - the format does not matter) is needed to avoid doing 
> the same (API handle <-> qentry) conversion multiple times. We are inside the 
> implementation, and it improves performance when the number of conversions 
> per packet is minimized. That's why there was direct qentry access outside 
> of queue.c in the first place.

I would think the direct qentry access outside queue.c was a bad
choice from a modularity perspective. It resulted in spaghetti code
for a benefit that is extremely small or nonexistent.

The fear of performance degradation needs to be measured and
proven. The intention here is to show that this change does not
affect performance, despite the addition of a few math operations.

>
> I'll look into the internal queue interface definition to optimize it. This 
> current version is just a dummy, copy-paste replacement of qentry->xxx with 
>> fn_queue->xxx(), which is OK for the first stage but certainly not an 
>> optimal/final solution.

If the opinion is that it is a dummy, copy-paste change, I would say
you have not understood the thought process behind it.

This change removes the need for having an internal abstract type
while keeping the external handle as a 32b value (the fact that the
32b value is being stored in a 64b handle is a different matter). This
is a big deal in keeping the code simple and understandable,
especially when there is no impact on performance. This interface will
be used as an example for other modules, which means there will be no
additional internal opaque data types either.

To repeat, I will rebase this once scalable scheduler is merged.

>
> -Petri
>


Re: [lng-odp] Need to standardize tools version

2017-06-22 Thread Honnappa Nagarahalli
On 22 June 2017 at 02:18, Maxim Uvarov  wrote:
> Yes we can document that.
That would be greatly helpful, knowing exactly what the checkpoints are.

> If Travis passes, the patch can be sent to the mailing
> list. We should continuously improve Travis scripts to add more build and
> targets. We can add hooks to compile it and run on different hardware which
> we care about.
> Problem with Travis is that we do not control it. Images can be updated.
> Virtual machines can be down and etc.
>
> To check different tools version we update DEPENDENCIES and ./configure
> script.
>
> The other problem is when a patch introduces some problem, like not
> compiling on a new version of Ubuntu. Yes, it's hard to predict, but it has
> to be fixed before merging if feedback was given on the mailing list.

This is a change in the compilation environment. We as a community need
to agree to introduce/upgrade to this new environment. Otherwise, if
we all have environments that differ, it becomes difficult for one
contributor to satisfy all of them.

>
> Best regards,
> Maxim.
>
>
>
> On 22 June 2017 at 08:42, shally verma  wrote:
>>
>> On Thu, Jun 22, 2017 at 3:03 AM, Honnappa Nagarahalli
>>  wrote:
>> > Why is it hard? It is a matter of documenting/advertising which
>> > versions we are using in CI and making that the minimum standard. If
>> > someone has a different compiler they can always submit the patches
>> > for their compilers.
>> >
>> I second this opinion, as I also had the same question in the last weekly
>> public call.
>> If using Travis CI is the way to go, then it would be helpful if it could be
>> documented as the minimum acceptance criteria for patches, with some
>> pointers on how to use it to help novice users like me.
>>
>> Thanks
>> Shally
>>
>>
>> > On 21 June 2017 at 14:49, Maxim Uvarov  wrote:
>> >> it's very hard to standardize tools and compiler version. For now to
>> >> validate patches we use Linaro CI (odp-check scripts) and Travis CI
>> >> which is based on some stable ubuntu version. Also we really want
>> >> that all people can download odp, compile it and run. It's a very rare
>> >> case that different tools introduce issues, but sometimes it happens.
>> >> If such an issue is found before patch submission, it has to be fixed
>> >> before.
>> >>
>> >> Maxim.
>> >>
>> >> On 06/21/17 22:19, Bill Fischofer wrote:
>> >>> On Wed, Jun 21, 2017 at 1:30 PM, Honnappa Nagarahalli
>> >>>  wrote:
>> >>>> On 21 June 2017 at 12:23, Bill Fischofer 
>> >>>> wrote:
>> >>>>> ODP is fairly open-ended in this regard because in theory we're only
>> >>>>> dependent on
>> >>>>>
>> >>>>> - A C99-conforming compiler
>> >>>>> - A platform that supports a reasonably recent Linux kernel
>> >>>>>
>> >>>>> Today we do test on 32 and 64 bit systems, and try to support both
>> >>>>> GCC
>> >>>>> and clang, however as newer versions of these tools get released we
>> >>>>> sometimes encounter problems. The same is true with older releases.
>> >>>>> We
>> >>>>> try to accommodate, especially when the fix to support a wider range
>> >>>>> of tools and platforms is relatively straightforward.
>> >>>>>
>> >>>>> It's not possible to test exhaustively on every possible combination
>> >>>>> so when problems occur we open and fix bugs. However, once we fix a
>> >>>>> bug we prefer to fix it only once, which means that in-flight
>> >>>>> patches
>> >>>>> should be checked to see if they have a similar problem and should
>> >>>>> be
>> >>>>> revised to avoid that problem as well. That way we don't fix the
>> >>>>> same
>> >>>>> problem multiple times.
>> >>>>>
>> >>>> Agree. For anyone to submit a patch, they need to have a reference of
>> >>>> what needs to be done. Scalable scheduler is an example, where we
>> >>>> have
>> >>>> been discovering at every patch that there is a new thing that needs
>> >>>> to be done to accept the patch. If it was known upfront, we can work
>> >>>> on them from day 1. This sets up the expectation and saves time fo

Re: [lng-odp] [PATCH] build: fix conditional compilation of sources

2017-06-22 Thread Honnappa Nagarahalli
Reviewed-by: Honnappa Nagarahalli 

On 22 June 2017 at 11:58, Maxim Uvarov  wrote:
> On 06/22/17 19:54, Brian Brooks wrote:
>> On 06/22 19:44:45, Maxim Uvarov wrote:
>>> On 06/22/17 19:19, Brian Brooks wrote:
>>>> On 06/22 19:06:01, Maxim Uvarov wrote:
>>>>> On 06/22/17 17:17, Brian Brooks wrote:
>>>>>> On 06/22 11:13:57, Maxim Uvarov wrote:
>>>>>>> On 22 June 2017 at 06:24, Brian Brooks  wrote:
>>>>>>>
>>>>>>>> Explicitly add all arch//* files to respective _SOURCES
>>>>>>>> variables instead of using @ARCH_DIR@ substitution.
>>>>>>>>
>>>>>>>> This patch fixes the broken build for ARM, PPC, and MIPS
>>>>>>>> introduced by [1] and the similar issue reported while
>>>>>>>> testing [2].
>>>>>>>>
>>>>>>>> From the Autoconf manual [3]:
>>>>>>>>
>>>>>>>>   You can't put a configure substitution (e.g., '@FOO@' or
>>>>>>>>   '$(FOO)' where FOO is defined via AC_SUBST) into a _SOURCES
>>>>>>>>   variable. The reason for this is a bit hard to explain, but
>>>>>>>>   suffice to say that it simply won't work.
>>>>>>>>
>>>>>
>>>>>
>>>>> Not clear why $(srcdir) works and $(ARCH_DIR) will not.
>>>>>
>>>>> I changed this in your patch and it works well:
>>>>>
>>>>> -odpapiinclude_HEADERS += $(srcdir)/arch/x86/odp/api/cpu_arch.h
>>>>> +odpapiinclude_HEADERS += $(srcdir)/arch/$(ARCH_DIR)/odp/api/cpu_arch.h
>>>>
>>>> Tried it on ARM and it breaks. If you read the Autoconf manual (above) it
>>>> explicitly states that you cannot use variable substitution in _SOURCES
>>>> (obviously also _HEADERS). As you point out, this is probably also only
>>>> for user-defined variables (e.g. configure.ac) instead of preset output
>>>> variables (e.g. srcdir).
>>>>
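Following the Autoconf manual's rule quoted above, a minimal Makefile.am shape uses explicit per-arch file lists selected with Automake conditionals rather than @ARCH_DIR@ substitution inside _SOURCES. The fragment below is illustrative only; the conditional and variable names are assumptions, not the actual ODP build files.

```makefile
# configure.ac would define one conditional per arch, e.g.:
#   AM_CONDITIONAL([ARCH_IS_X86], [test "x$ARCH_DIR" = "xx86"])
if ARCH_IS_ARM
__LIB__libodp_linux_la_SOURCES += arch/arm/odp_cpu_arch.c
endif
if ARCH_IS_X86
__LIB__libodp_linux_la_SOURCES += arch/x86/cpu_flags.c \
                                  arch/x86/odp_cpu_arch.c
endif
```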
>>>
>>> OK, thanks. Then only one comment, about the alphabetical ordering.
>>
>> They are in alphabetical order according to arch: A > M > P > X
>> Do you see something else?
>>
>
> cpu flags before odp_
>
> +__LIB__libodp_linux_la_SOURCES += arch/x86/odp_cpu_arch.c \
> + arch/x86/odp_sysinfo_parse.c \
> + arch/x86/cpu_flags.c
>
>>> I think we can try to add arm-qemu to travis to also capture such bugs.
>>
>> I thought there were ARM machines in LNG CI? Is there a way for users
>> to trigger a CI run over there before submitting a patch?
>>
>
> It's possible to integrate it with GitHub, and it would have to do the same
> thing Travis does now. I created a ticket for the automation team to do it,
> but it looks like it's very low priority for them. I cannot do it myself
> because it requires admin rights to Jenkins, I think.
>
> Maxim.
>
>
>>> Maxim.
>>>
>>>
>>>>> Maxim.
>>>>>
>>>>>
>>>>>>>> Here be dragons..
>>>>>>>>
>>>>>>>> [1] https://lists.linaro.org/pipermail/lng-odp/2017-April/030324.html
>>>>>>>> [2] https://lists.linaro.org/pipermail/lng-odp/2017-June/031598.html
>>>>>>>> [3] https://www.gnu.org/software/automake/manual/html_node/
>>>>>>>> Conditional-Sources.html
>>>>>>>>
>>>>>>>> Signed-off-by: Brian Brooks 
>>>>>>>> ---
>>>>>>>>  configure.ac   |  3 +++
>>>>>>>>  platform/linux-generic/Makefile.am | 40 ++
>>>>>>>> 
>>>>>>>>  2 files changed, 35 insertions(+), 8 deletions(-)
>>>>>>>>
>>>>>>>> diff --git a/configure.ac b/configure.ac
>>>>>>>> index 46c7bbd2..45812f66 100644
>>>>>>>> --- a/configure.ac
>>>>>>>> +++ b/configure.ac
>>>>>>>> @@ -225,6 +225,9 @@ AM_CONDITIONAL([HAVE_DOXYGEN], [test "x${DOXYGEN}" 
>>>>>>>> =
>>>>>>>> "xdoxygen"])
>>>>>>>>  AM_CONDITIONAL([user_guide], [test "x${user_guides}" = "xyes" ])
>>>>>>&g

Re: [lng-odp] [API-NEXT PATCH v4] timer: allow timer processing to run on worker cores

2017-06-22 Thread Honnappa Nagarahalli
On 22 June 2017 at 10:30, Maxim Uvarov  wrote:
> On 06/22/17 17:55, Brian Brooks wrote:
>> On 06/22 10:27:01, Savolainen, Petri (Nokia - FI/Espoo) wrote:
>>> I was asking to make sure that performance impact has been checked also 
>>> when timers are not used, e.g. l2fwd performance before and after the 
>>> change. It would be also appropriate to test impact in the worst case: 
>>> l2fwd type application + a periodic 1sec timeout. Timer is on, but timeouts 
>>> come very unfrequently (compared to packets).
>>>
>>> It seems that no performance tests were run, although the change affects 
>>> performance of many applications (e.g. OFP has high packet rate with 
>>> timers). Configuration options should be set with  defaults that are 
>>> acceptable trade-off between packet processing performance and timeout 
>>> accuracy.
>>
>> If timers are not used, the overhead is just checking a RO variable
>> (post global init). If timers are used, CONFIG_ parameters have been
>> provided. The defaults for these parameters came from the work to
>> drastically reduce jitter of timer processing which is documented
>> here [1] and presented at Linaro Connect here [2].
>>
>> If you speculate that these defaults might need to be changed, e.g.
>> l2fwd, we welcome collaboration and data. But, this is not a blocking
>> issue for this patch right now.
>>
>> [1] 
>> https://docs.google.com/document/d/1sY7rOxqCNu-bMqjBiT5_keAIohrX1ZW-eL0oGLAQ4OM/edit?usp=sharing
>> [2] http://connect.linaro.org/resource/bud17/bud17-320/
>>
>
> 1) we have all adjustable configs here
> ./platform/linux-generic/include/odp_config_internal.h
> that might be also needs to be there.
>
That file has all the global config values. These are internal to this
timer implementation, hence they do not need to be moved.
>
> 2) Do we need something special in CI to check different config values?

Nope.
>
> 3) Why it's compile time config values and not run time?

These config values are particular to this timer implementation.
Similar to the config values in
./platform/linux-generic/include/odp_config_internal.h, these will
also be compile-time constants.

>
> Maxim.
>
>
>>> -Petri
>>>
>>>
>>> From: Maxim Uvarov [mailto:maxim.uva...@linaro.org]
>>> Sent: Thursday, June 22, 2017 11:22 AM
>>> To: Honnappa Nagarahalli 
>>> Cc: Savolainen, Petri (Nokia - FI/Espoo) ; 
>>> lng-odp-forward 
>>> Subject: Re: [lng-odp] [API-NEXT PATCH v4] timer: allow timer processing to 
>>> run on worker cores
>>>
>>> Petri, do you want to test performance before patch inclusion?
>>> Maxim.
>>>
>>> On 21 June 2017 at 21:52, Honnappa Nagarahalli 
>>> <mailto:honnappa.nagaraha...@linaro.org> wrote:
>>> We have not run any performance application. In our Linaro connect
>>> meeting, we presented numbers on how it improves the timer resolution.
>>> At this point, there are enough configuration options to control the
>>> effect of calling timer in the scheduler. For applications that do not
>>> want to use the timer, there should not be any change. For
>>> applications that use timers infrequently, the check frequency can
>>> be controlled via the provided configuration options.
>>>
>>> On 20 June 2017 at 02:34, Savolainen, Petri (Nokia - FI/Espoo)
>>> <mailto:petri.savolai...@nokia.com> wrote:
>>>> Do you have some performance numbers? E.g. how much this slows down an 
>>>> application which does not use timers (e.g. l2fwd), or an application that 
>>>> uses only few, non-frequent timeouts?
>>>>
>>>> Additionally, init.h/feature.h is not yet in api-next - so this would not 
>>>> build yet.
>>>>
>>>>
>>>> -Petri
>>>>
>>>>
>>>>> -Original Message-
>>>>> From: lng-odp [mailto:mailto:lng-odp-boun...@lists.linaro.org] On Behalf 
>>>>> Of
>>>>> Honnappa Nagarahalli
>>>>> Sent: Tuesday, June 20, 2017 7:07 AM
>>>>> To: Bill Fischofer <mailto:bill.fischo...@linaro.org>
>>>>> Cc: lng-odp-forward <mailto:lng-odp@lists.linaro.org>
>>>>> Subject: Re: [lng-odp] [API-NEXT PATCH v4] timer: allow timer processing
>>>>> to run on worker cores
>>>>>
>>>>> Are you saying we should be good to merge this now?
>>>>>
>>>>> On 19 June 2017 at 17:42, Bill Fischofer 
>>>>> <mailto:bill.fischo...@linaro.org>
>>>>> wrote:
>>>>>> On Mon, Jun 19, 2017 at 4:19 PM, Honnappa Nagarahalli
>>>>>> <mailto:honnappa.nagaraha...@linaro.org> wrote:
>>>>>>> Hi Bill/Maxim,
>>>>>>>  I do not see any further comments, can we merge this to api-next?
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Honnappa
>>>>
>>>>
>>>
>


Re: [lng-odp] Need to standardize tools version

2017-06-21 Thread Honnappa Nagarahalli
Why is it hard? It is a matter of documenting/advertising which
versions we are using in CI and making that the minimum standard. If
someone has a different compiler they can always submit the patches
for their compilers.

On 21 June 2017 at 14:49, Maxim Uvarov  wrote:
> it's very hard to standardize tools and compiler version. For now to
> validate patches we use Linaro CI (odp-check scripts) and Travis CI
> which is based on some stable ubuntu version. Also we really want
> that all people can download odp, compile it and run. It's a very rare
> case that different tools introduce issues, but sometimes it happens.
> If such issue is found before patch submission it has to be fixed before.
>
> Maxim.
>
> On 06/21/17 22:19, Bill Fischofer wrote:
>> On Wed, Jun 21, 2017 at 1:30 PM, Honnappa Nagarahalli
>>  wrote:
>>> On 21 June 2017 at 12:23, Bill Fischofer  wrote:
>>>> ODP is fairly open-ended in this regard because in theory we're only
>>>> dependent on
>>>>
>>>> - A C99-conforming compiler
>>>> - A platform that supports a reasonably recent Linux kernel
>>>>
>>>> Today we do test on 32 and 64 bit systems, and try to support both GCC
>>>> and clang, however as newer versions of these tools get released we
>>>> sometimes encounter problems. The same is true with older releases. We
>>>> try to accommodate, especially when the fix to support a wider range
>>>> of tools and platforms is relatively straightforward.
>>>>
>>>> It's not possible to test exhaustively on every possible combination
>>>> so when problems occur we open and fix bugs. However, once we fix a
>>>> bug we prefer to fix it only once, which means that in-flight patches
>>>> should be checked to see if they have a similar problem and should be
>>>> revised to avoid that problem as well. That way we don't fix the same
>>>> problem multiple times.
>>>>
>>> Agree. For anyone to submit a patch, they need to have a reference for
>>> what needs to be done. The scalable scheduler is an example, where at
>>> every patch revision we have been discovering a new thing that needs
>>> to be done for the patch to be accepted. If it were known upfront, we
>>> could work on it from day 1. This sets the expectation and saves time for
>>> everyone, knowing that the patch meets this minimum acceptance
>>> criteria. For example, I do not know how many times you have tried to
>>> compile the code only to discover that it does not compile. I would like
>>> to avoid those problems.
>>
>> That's what we're trying to do with CI and Travis. In this specific
>> case Petri discovered an issue that effected an older LTS level of
>> Ubuntu and provided a simple fix to the issue. So I don't see a
>> problem with propagating that fix as Brian seems to have confirmed
>> that the fix is good.
>>
>>>
>>>>
>>>> On Wed, Jun 21, 2017 at 11:58 AM, Honnappa Nagarahalli
>>>>  wrote:
>>>>> Along with this, we need to standardize 32b/64b compilations and the
>>>>> platforms on which we run the test cases.
>>>>> Thanks,
>>>>> Honnappa
>>>>>
>>>>> On 21 June 2017 at 11:38, Honnappa Nagarahalli
>>>>>  wrote:
>>>>>> Hi,
>>>>>>I think there is a need to identify tools and specific versions of
>>>>>> the tools from patch acceptance perspective. Any failures outside of
>>>>>> these versions should be the responsibility of the person facing the
>>>>>> issues, they should submit a patch for those versions and tools.
>>>>>>
>>>>>> Travis CI is a step in that direction. But I think we still allow
>>>>>> submissions of patches via email. So, for this case, should we
>>>>>> standardize the tools and versions being used in Travis CI as the
>>>>>> acceptance criteria?
>>>>>>
>>>>>> Thanks,
>>>>>> Honnappa
>


Re: [lng-odp] Need to standardize tools version

2017-06-21 Thread Honnappa Nagarahalli
I am not talking in the context of Petri's GCC 4.8 issue. In general,
if there are 16 people who each have their own environment, it is
difficult for a patch submitter to reproduce all 16 environments and
make the patch compile and work in all of them. I am not saying that we
cannot have multiple versions, but all of them need to be in CI and
known to everyone upfront.

We need to have that minimum basic set against which the patch
compiles and works. Anything beyond that should be addressed by the
ones who have specific tools.
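To make that "minimum basic set" concrete, it could be advertised directly as a CI build matrix, along the following lines. This is a hypothetical sketch only: the compilers, flags, and script steps shown are illustrative stand-ins, not the project's actual .travis.yml.

```yaml
# Hypothetical sketch of an advertised minimum acceptance matrix.
# Compilers, flags, and steps are illustrative, not ODP's real CI config.
language: c
matrix:
  include:
    - compiler: gcc       # 64-bit build, CI's default gcc
      env: BUILD_FLAGS=""
    - compiler: gcc       # 32-bit build
      env: BUILD_FLAGS="-m32"
    - compiler: clang     # 64-bit build, CI's default clang
      env: BUILD_FLAGS=""
script:
  - ./bootstrap
  - ./configure CFLAGS="$BUILD_FLAGS"
  - make check
```

Anything outside such an advertised set would then be the responsibility of whoever needs the extra tool or version, as argued above.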


On 21 June 2017 at 14:19, Bill Fischofer  wrote:
> On Wed, Jun 21, 2017 at 1:30 PM, Honnappa Nagarahalli
>  wrote:
>> On 21 June 2017 at 12:23, Bill Fischofer  wrote:
>>> ODP is fairly open-ended in this regard because in theory we're only
>>> dependent on
>>>
>>> - A C99-conforming compiler
>>> - A platform that supports a reasonably recent Linux kernel
>>>
>>> Today we do test on 32 and 64 bit systems, and try to support both GCC
>>> and clang, however as newer versions of these tools get released we
>>> sometimes encounter problems. The same is true with older releases. We
>>> try to accommodate, especially when the fix to support a wider range
>>> of tools and platforms is relatively straightforward.
>>>
>>> It's not possible to test exhaustively on every possible combination
>>> so when problems occur we open and fix bugs. However, once we fix a
>>> bug we prefer to fix it only once, which means that in-flight patches
>>> should be checked to see if they have a similar problem and should be
>>> revised to avoid that problem as well. That way we don't fix the same
>>> problem multiple times.
>>>
>> Agree. For anyone to submit a patch, they need to have a reference for
>> what needs to be done. The scalable scheduler is an example, where at
>> every patch revision we have been discovering a new thing that needs
>> to be done for the patch to be accepted. If it were known upfront, we
>> could work on it from day 1. This sets the expectation and saves time for
>> everyone, knowing that the patch meets this minimum acceptance
>> criteria. For example, I do not know how many times you have tried to
>> compile the code only to discover that it does not compile. I would like
>> to avoid those problems.
>
> That's what we're trying to do with CI and Travis. In this specific
> case Petri discovered an issue that affected an older LTS level of
> Ubuntu and provided a simple fix to the issue. So I don't see a
> problem with propagating that fix as Brian seems to have confirmed
> that the fix is good.
>
>>
>>>
>>> On Wed, Jun 21, 2017 at 11:58 AM, Honnappa Nagarahalli
>>>  wrote:
>>>> Along with this, we need to standardize 32b/64b compilations and the
>>>> platforms on which we run the test cases.
>>>> Thanks,
>>>> Honnappa
>>>>
>>>> On 21 June 2017 at 11:38, Honnappa Nagarahalli
>>>>  wrote:
>>>>> Hi,
>>>>>I think there is a need to identify tools and specific versions of
>>>>> the tools from patch acceptance perspective. Any failures outside of
>>>>> these versions should be the responsibility of the person facing the
>>>>> issues, they should submit a patch for those versions and tools.
>>>>>
>>>>> Travis CI is a step in that direction. But I think we still allow
>>>>> submissions of patches via email. So, for this case, should we
>>>>> standardize the tools and versions being used in Travis CI as the
>>>>> acceptance criteria?
>>>>>
>>>>> Thanks,
>>>>> Honnappa


Re: [lng-odp] [API-NEXT PATCH v9 0/6] A scalable software scheduler

2017-06-21 Thread Honnappa Nagarahalli
--enable-schedule-scalable to conditionally compile this scheduler
>>> >>> into the library.
>>> >>>
>>> >>> [1] 
>>> >>> https://lists.linaro.org/pipermail/lng-odp/2016-September/025682.html
>>> >>>
>>> >>> On checkpatch.pl:
>>> >>>  - [2/6] and [5/6] have checkpatch.pl issues that are superfluous
>>> >>>
>>> >>> v9:
>>> >>>  - Include patch to enable scalable scheduler in Travis CI
>>> >>>  - Fix 'make distcheck'
>>> >>>
>>> >>> v8:
>>> >>>  - Reword commit messages
>>> >>>
>>> >>> v7:
>>> >>>  - Rebase against new modular queue interface
>>> >>>  - Duplicate arch files under mips64 and powerpc
>>> >>>  - Fix sched->order_lock()
>>> >>>  - Loop until all deferred events have been enqueued
>>> >>>  - Implement ord_enq_multi()
>>> >>>  - Fix ordered_lock/unlock
>>> >>>  - Revert stylistic changes
>>> >>>  - Add default xfactor
>>> >>>  - Remove changes to odp_sched_latency
>>> >>>  - Remove ULL suffix to alleviate Clang build
>>> >>>
>>> >>> v6:
>>> >>>  - Move conversions into scalable scheduler to alleviate #ifdefs
>>> >>>  - Remove unnecessary prefetch
>>> >>>  - Fix ARMv8 build
>>> >>>
>>> >>> v5:
>>> >>>  - Allocate cache aligned memory using shm pool APIs
>>> >>>  - Move more code to scalable scheduler specific files
>>> >>>  - Remove CONFIG_SPLIT_READWRITE
>>> >>>  - Fix 'make distcheck' issue
>>> >>>
>>> >>> v4:
>>> >>>  - Fix a couple more checkpatch.pl issues
>>> >>>
>>> >>> v3:
>>> >>>  - Only conditionally compile scalable scheduler and queue
>>> >>>  - Move some code to arch/ dir
>>> >>>  - Use a single shm block for queues instead of block-per-queue
>>> >>>  - De-interleave odp_llqueue.h
>>> >>>  - Use compiler macros to determine ATOM_BITSET_SIZE
>>> >>>  - Incorporated queue size changes
>>> >>>  - Dropped 'ODP_' prefix on config and moved to other files
>>> >>>  - Dropped a few patches that were send independently to the list
>>> >>>
>>> >>> v2:
>>> >>>  - Move ARMv8 issues and other fixes into separate patches
>>> >>>  - Abstract away some #ifdefs
>>> >>>  - Fix some checkpatch.pl warnings
>>> >>>
>>> >>> Brian Brooks (5):
>>> >>>   test: odp_pktio_ordered: add queue size
>>> >>>   linux-gen: sched scalable: add arch files
>>> >>>   linux-gen: sched scalable: add a bitset
>>> >>>   linux-gen: sched scalable: add a concurrent queue
>>> >>>   linux-gen: sched scalable: add scalable scheduler
>>> >>>
>>> >>> Honnappa Nagarahalli (1):
>>> >>>   travis: add scalable scheduler in CI
>>> >>>
>>> >>>  .travis.yml|1 +
>>> >>>  configure.ac   |1 +
>>> >>>  platform/linux-generic/Makefile.am |   17 +
>>> >>>  platform/linux-generic/arch/arm/odp_atomic.h   |  210 +++
>>> >>>  platform/linux-generic/arch/arm/odp_cpu.h  |   65 +
>>> >>>  platform/linux-generic/arch/arm/odp_cpu_idling.h   |   51 +
>>> >>>  platform/linux-generic/arch/arm/odp_llsc.h |  249 +++
>>> >>>  platform/linux-generic/arch/default/odp_cpu.h  |   41 +
>>> >>>  platform/linux-generic/arch/mips64/odp_cpu.h   |   41 +
>>> >>>  platform/linux-generic/arch/powerpc/odp_cpu.h  |   41 +
>>> >>>  platform/linux-generic/arch/x86/odp_cpu.h  |   41 +
>>> >>>  .../include/odp/api/plat/schedule_types.h  |4 +-
>>> >>>  platform/linux-generic/include/odp_bitset.h|  210 +++
>>> >>>  .../linux-generic/include/odp_config_internal.h|   17 +-
>>> >>>  platform/linux-generic/include/odp_llqueue.h   |  309 +++
>>> >>>  .../include/odp_queue_scalable_internal.h  |  102 +
>>> >>>  platform/linux-generic/include/odp_schedule_if.h   |2 +-
>>> >>>  .../linux-generic/include/odp_schedule_scalable.h  |  137 ++
>>> >>>  .../include/odp_schedule_scalable_config.h |   55 +
>>> >>>  .../include/odp_schedule_scalable_ordered.h|  132 ++
>>> >>>  platform/linux-generic/m4/odp_schedule.m4  |   55 +-
>>> >>>  platform/linux-generic/odp_queue_if.c  |8 +
>>> >>>  platform/linux-generic/odp_queue_scalable.c| 1020 ++
>>> >>>  platform/linux-generic/odp_schedule_if.c   |6 +
>>> >>>  platform/linux-generic/odp_schedule_scalable.c | 1978 
>>> >>> 
>>> >>>  .../linux-generic/odp_schedule_scalable_ordered.c  |  347 
>>> >>>  test/common_plat/performance/odp_pktio_ordered.c   |4 +
>>> >>>  27 files changed, 5122 insertions(+), 22 deletions(-)
>>> >>>  create mode 100644 platform/linux-generic/arch/arm/odp_atomic.h
>>> >>>  create mode 100644 platform/linux-generic/arch/arm/odp_cpu.h
>>> >>>  create mode 100644 platform/linux-generic/arch/arm/odp_cpu_idling.h
>>> >>>  create mode 100644 platform/linux-generic/arch/arm/odp_llsc.h
>>> >>>  create mode 100644 platform/linux-generic/arch/default/odp_cpu.h
>>> >>>  create mode 100644 platform/linux-generic/arch/mips64/odp_cpu.h
>>> >>>  create mode 100644 platform/linux-generic/arch/powerpc/odp_cpu.h
>>> >>>  create mode 100644 platform/linux-generic/arch/x86/odp_cpu.h
>>> >>>  create mode 100644 platform/linux-generic/include/odp_bitset.h
>>> >>>  create mode 100644 platform/linux-generic/include/odp_llqueue.h
>>> >>>  create mode 100644 
>>> >>> platform/linux-generic/include/odp_queue_scalable_internal.h
>>> >>>  create mode 100644 
>>> >>> platform/linux-generic/include/odp_schedule_scalable.h
>>> >>>  create mode 100644 
>>> >>> platform/linux-generic/include/odp_schedule_scalable_config.h
>>> >>>  create mode 100644 
>>> >>> platform/linux-generic/include/odp_schedule_scalable_ordered.h
>>> >>>  create mode 100644 platform/linux-generic/odp_queue_scalable.c
>>> >>>  create mode 100644 platform/linux-generic/odp_schedule_scalable.c
>>> >>>  create mode 100644 
>>> >>> platform/linux-generic/odp_schedule_scalable_ordered.c
>>> >>>
>>> >>> --
>>> >>> 2.13.1
>>> >>>


Re: [lng-odp] [PATCH API-NEXT v1 1/1] linux-generic: queue: modular queue interface

2017-06-21 Thread Honnappa Nagarahalli
Agree, will rebase and update once we have the scalable queues merged.

On 21 June 2017 at 14:53, Maxim Uvarov  wrote:
> On 06/19/17 08:00, Github ODP bot wrote:
>>  log| 1269 
>> 
>
> log should not be a part of the patch.
>
> Maxim.


Re: [lng-odp] [API-NEXT PATCH v9 4/6] linux-gen: sched scalable: add a concurrent queue

2017-06-21 Thread Honnappa Nagarahalli
I cannot make a decision on this topic. This is what I have been told
and I am also told that Linaro is in sync on this. I will initiate
some discussions on this internally. Bill, do you want to check with
Linaro on this?

I suggest we go ahead with the patch and agree to change it when we
have a decision.

On 21 June 2017 at 12:37, Bill Fischofer  wrote:
> It's perfectly fine to have an ARM copyright as the original
> contributor, but it also needs a Linaro copyright to be incorporated
> into ODP. See traffic_mngr.c for an example of this.
>
> On Wed, Jun 21, 2017 at 9:27 AM, Honnappa Nagarahalli
>  wrote:
>> Hi Maxim,
>> This is a new file added by us. Hence the ARM copyright.
>> Thanks,
>> Honnappa
>>
>> On 21 June 2017 at 09:06, Maxim Uvarov  wrote:
>>> On 06/19/17 22:12, Brian Brooks wrote:
>>>>
>>>> --- /dev/null
>>>> +++ b/platform/linux-generic/include/odp_llqueue.h
>>>> @@ -0,0 +1,309 @@
>>>> +/* Copyright (c) 2017, ARM Limited.
>>>> + * All rights reserved.
>>>> + *
>>>> + * SPDX-License-Identifier:BSD-3-Clause
>>>> + */
>>>> +
>>>
>>>
>>> Has to be Linaro copyright.
>>>
>>> Maxim.
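For reference, the dual-copyright convention Bill points to (as used in traffic_mngr.c) looks roughly like the header below. This is a sketch of the pattern, not the exact text of any ODP file:

```c
/* Copyright (c) 2017, ARM Limited. All rights reserved.
 *
 * Copyright (c) 2017, Linaro Limited. All rights reserved.
 *
 * SPDX-License-Identifier: BSD-3-Clause
 */
```

The original contributor's copyright line is retained, and a Linaro line is added when the file is incorporated into ODP.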


Re: [lng-odp] [API-NEXT PATCH v4] timer: allow timer processing to run on worker cores

2017-06-21 Thread Honnappa Nagarahalli
We have not run any performance application. In our Linaro Connect
meeting, we presented numbers on how it improves the timer resolution.
At this point, there are enough configuration options to control the
effect of calling the timer from the scheduler. For applications that
do not want to use timers, there should not be any change. For
applications that use timers infrequently, the check frequency can
be controlled via the provided configuration options.

On 20 June 2017 at 02:34, Savolainen, Petri (Nokia - FI/Espoo)
 wrote:
> Do you have some performance numbers? E.g. how much this slows down an 
> application which does not use timers (e.g. l2fwd), or an application that 
> uses only a few, infrequent timeouts?
>
> Additionally, init.h/feature.h is not yet in api-next - so this would not 
> build yet.
>
>
> -Petri
>
>
>> -Original Message-
>> From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of
>> Honnappa Nagarahalli
>> Sent: Tuesday, June 20, 2017 7:07 AM
>> To: Bill Fischofer 
>> Cc: lng-odp-forward 
>> Subject: Re: [lng-odp] [API-NEXT PATCH v4] timer: allow timer processing
>> to run on worker cores
>>
>> Are you saying we should be good to merge this now?
>>
>> On 19 June 2017 at 17:42, Bill Fischofer 
>> wrote:
>> > On Mon, Jun 19, 2017 at 4:19 PM, Honnappa Nagarahalli
>> >  wrote:
>> >> Hi Bill/Maxim,
>> >> I do not see any further comments, can we merge this to api-next?
>> >>
>> >> Thanks,
>> >> Honnappa
>
>


Re: [lng-odp] Need to standardize tools version

2017-06-21 Thread Honnappa Nagarahalli
On 21 June 2017 at 12:23, Bill Fischofer  wrote:
> ODP is fairly open-ended in this regard because in theory we're only
> dependent on
>
> - A C99-conforming compiler
> - A platform that supports a reasonably recent Linux kernel
>
> Today we do test on 32 and 64 bit systems, and try to support both GCC
> and clang, however as newer versions of these tools get released we
> sometimes encounter problems. The same is true with older releases. We
> try to accommodate, especially when the fix to support a wider range
> of tools and platforms is relatively straightforward.
>
> It's not possible to test exhaustively on every possible combination
> so when problems occur we open and fix bugs. However, once we fix a
> bug we prefer to fix it only once, which means that in-flight patches
> should be checked to see if they have a similar problem and should be
> revised to avoid that problem as well. That way we don't fix the same
> problem multiple times.
>
Agree. For anyone to submit a patch, they need to have a reference for
what needs to be done. The scalable scheduler is an example, where at
every patch revision we have been discovering a new thing that needs
to be done for the patch to be accepted. If it were known upfront, we
could work on it from day 1. This sets the expectation and saves time for
everyone, knowing that the patch meets this minimum acceptance
criteria. For example, I do not know how many times you have tried to
compile the code only to discover that it does not compile. I would like
to avoid those problems.

>
> On Wed, Jun 21, 2017 at 11:58 AM, Honnappa Nagarahalli
>  wrote:
>> Along with this, we need to standardize 32b/64b compilations and the
>> platforms on which we run the test cases.
>> Thanks,
>> Honnappa
>>
>> On 21 June 2017 at 11:38, Honnappa Nagarahalli
>>  wrote:
>>> Hi,
>>>I think there is a need to identify tools and specific versions of
>>> the tools from patch acceptance perspective. Any failures outside of
>>> these versions should be the responsibility of the person facing the
>>> issues, they should submit a patch for those versions and tools.
>>>
>>> Travis CI is a step in that direction. But I think we still allow
>>> submissions of patches via email. So, for this case, should we
>>> standardize the tools and versions being used in Travis CI as the
>>> acceptance criteria?
>>>
>>> Thanks,
>>> Honnappa


Re: [lng-odp] Need to standardize tools version

2017-06-21 Thread Honnappa Nagarahalli
Along with this, we need to standardize 32b/64b compilations and the
platforms on which we run the test cases.
Thanks,
Honnappa

On 21 June 2017 at 11:38, Honnappa Nagarahalli
 wrote:
> Hi,
>I think there is a need to identify tools and specific versions of
> the tools from patch acceptance perspective. Any failures outside of
> these versions should be the responsibility of the person facing the
> issues, they should submit a patch for those versions and tools.
>
> Travis CI is a step in that direction. But I think we still allow
> submissions of patches via email. So, for this case, should we
> standardize the tools and versions being used in Travis CI as the
> acceptance criteria?
>
> Thanks,
> Honnappa


[lng-odp] Need to standardize tools version

2017-06-21 Thread Honnappa Nagarahalli
Hi,
   I think there is a need to identify tools and specific versions of
the tools from a patch acceptance perspective. Any failures outside of
these versions should be the responsibility of the person facing the
issues; they should submit a patch for those versions and tools.

Travis CI is a step in that direction. But I think we still allow
submission of patches via email. So, for this case, should we
standardize the tools and versions being used in Travis CI as the
acceptance criteria?

Thanks,
Honnappa


Re: [lng-odp] [API-NEXT PATCH v9 4/6] linux-gen: sched scalable: add a concurrent queue

2017-06-21 Thread Honnappa Nagarahalli
I think we have talked about this earlier. We have been directed by
our executives that we should use the ARM copyright if a member has
done the work, and Linaro/LNG/FF/Bill are in sync on this.

Thanks,
Honnappa

On 21 June 2017 at 10:10, Maxim Uvarov  wrote:
> On 06/21/17 17:27, Honnappa Nagarahalli wrote:
>>
>> Hi Maxim,
>>  This is a new file added by us. Hence the ARM copyright.
>> Thanks,
>> Honnappa
>>
>
> Was this work done as a Linaro member/assignee? If yes, it has to have a
> Linaro copyright, as far as I know.
>
> Maxim.
>
>
>> On 21 June 2017 at 09:06, Maxim Uvarov  wrote:
>>>
>>> On 06/19/17 22:12, Brian Brooks wrote:
>>>>
>>>>
>>>> --- /dev/null
>>>> +++ b/platform/linux-generic/include/odp_llqueue.h
>>>> @@ -0,0 +1,309 @@
>>>> +/* Copyright (c) 2017, ARM Limited.
>>>> + * All rights reserved.
>>>> + *
>>>> + * SPDX-License-Identifier:BSD-3-Clause
>>>> + */
>>>> +
>>>
>>>
>>>
>>> Has to be Linaro copyright.
>>>
>>> Maxim.
>
>


Re: [lng-odp] [API-NEXT PATCH v9 4/6] linux-gen: sched scalable: add a concurrent queue

2017-06-21 Thread Honnappa Nagarahalli
Hi Maxim,
This is a new file added by us. Hence the ARM copyright.
Thanks,
Honnappa

On 21 June 2017 at 09:06, Maxim Uvarov  wrote:
> On 06/19/17 22:12, Brian Brooks wrote:
>>
>> --- /dev/null
>> +++ b/platform/linux-generic/include/odp_llqueue.h
>> @@ -0,0 +1,309 @@
>> +/* Copyright (c) 2017, ARM Limited.
>> + * All rights reserved.
>> + *
>> + * SPDX-License-Identifier:BSD-3-Clause
>> + */
>> +
>
>
> Has to be Linaro copyright.
>
> Maxim.

