Re: [lng-odp] IPsec: handling dummy packets (NH=59)
Traffic Flow Confidentiality (TFC) is a feature of SAs that, according to RFC 4303, must be negotiated on a per-SA basis before it is used, so this would need to be hooked into higher-level protocols. From an ODP perspective, it would be an additional set of parameters on the odp_ipsec_sa_create() API. It is not clear this is something we should worry about for Tiger Moth, but it is something to consider as an addition in the future.

On Mon, Nov 20, 2017 at 8:37 AM, Dmitry Eremin-Solenikov <
dmitry.ereminsoleni...@linaro.org> wrote:

> Hello,
>
> I was thinking about another minor part of the IPsec RFCs: dummy packets
> used to mask traffic statistics. An IPsec implementation is required to
> drop ESP packets with NH = 59 (no next header) on the receiver side and is
> expected to be able to generate these packets on the transmitter side.
> Currently we do not provide a way to inject these packets in any way.
>
> Possible solutions:
>
> TX side:
> - Add an API call to transmit a single packet.
>
> - Extend transmit parameters to specify the next header (IPv4, IPv6 or
>   NoNH) for each packet to be transmitted (per-packet or per-ODP call).
>
> - ???
>
> RX side:
> - Silently drop NoNH packets.
>
> - Report NoNH packets to the app via the error or status event mechanism.
>
> - ???
>
> --
> With best wishes
> Dmitry
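The RX-side requirement from Dmitry's mail (silently drop ESP packets whose decrypted Next Header value is 59) can be sketched as below. This is a hypothetical post-decryption check, not the ODP implementation: the enum, function, and names are illustrative only.

```c
#include <assert.h>
#include <stdint.h>

#define IPPROTO_NONE 59 /* "no next header": marks a TFC dummy packet */

/* Verdict after inspecting the Next Header byte from the decrypted
 * ESP trailer (the last byte before the ICV). */
enum esp_rx_verdict {
	ESP_RX_CONSUME,    /* real inner packet, continue up the stack */
	ESP_RX_DROP_DUMMY  /* NH == 59: drop silently per RFC 4303 */
};

/* Hypothetical classifier called once decryption has exposed the
 * trailer's Next Header field. */
static enum esp_rx_verdict esp_rx_classify(uint8_t next_header)
{
	return next_header == IPPROTO_NONE ? ESP_RX_DROP_DUMMY
					   : ESP_RX_CONSUME;
}
```

Whether the drop happens silently or is surfaced as a status event (Dmitry's second RX option) would just change what the caller does with `ESP_RX_DROP_DUMMY`.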
[lng-odp] [PATCH API-NEXT v3 3/3] linux-gen: ipsec: use new odp checksum API
From: Dmitry Eremin-Solenikov

Use odp_chksum_ones_comp16, which may be platform-optimized.

Signed-off-by: Dmitry Eremin-Solenikov
---
/** Email created from pull request 280 (lumag:chksum)
 ** https://github.com/Linaro/odp/pull/280
 ** Patch: https://github.com/Linaro/odp/pull/280.patch
 ** Base sha: d4b364849c4abb4c71e0c5260e1a793ebb8dc97d
 ** Merge commit sha: 9fda6f107cef0c4074880995a90b30cbeac27ce8 **/

 platform/linux-generic/odp_ipsec.c | 31 ++-----------------------------
 1 file changed, 2 insertions(+), 29 deletions(-)

diff --git a/platform/linux-generic/odp_ipsec.c b/platform/linux-generic/odp_ipsec.c
index 0ebc65341..32aae1719 100644
--- a/platform/linux-generic/odp_ipsec.c
+++ b/platform/linux-generic/odp_ipsec.c
@@ -7,6 +7,7 @@
 #include "config.h"
 
 #include
+#include
 
 #include
 
@@ -103,34 +104,6 @@ static odp_ipsec_packet_result_t *ipsec_pkt_result(odp_packet_t packet)
 	return _packet_hdr(packet)->ipsec_ctx;
 }
 
-/**
- * Checksum
- *
- * @param buffer calculate chksum for buffer
- * @param len    buffer length
- *
- * @return checksum value in network order
- */
-static inline
-odp_u16sum_t _odp_chksum(void *buffer, int len)
-{
-	uint16_t *buf = (uint16_t *)buffer;
-	uint32_t sum = 0;
-	uint16_t result;
-
-	for (sum = 0; len > 1; len -= 2)
-		sum += *buf++;
-
-	if (len == 1)
-		sum += *(unsigned char *)buf;
-
-	sum = (sum >> 16) + (sum & 0xffff);
-	sum += (sum >> 16);
-	result = ~sum;
-
-	return (__odp_force odp_u16sum_t) result;
-}
-
 static inline int _odp_ipv4_csum(odp_packet_t pkt,
 				 uint32_t offset,
 				 _odp_ipv4hdr_t *ip,
@@ -150,7 +123,7 @@ static inline int _odp_ipv4_csum(odp_packet_t pkt,
 	if (odp_unlikely(res < 0))
 		return res;
 
-	*chksum = _odp_chksum(buf, nleft);
+	*chksum = ~odp_chksum_ones_comp16(buf, nleft);
 
 	return 0;
 }
[lng-odp] [PATCH API-NEXT v3 2/3] helper: use new odp checksum API
From: Dmitry Eremin-Solenikov

Use odp_chksum_ones_comp16, which may be platform-optimized. This removes
the now-unnecessary odph_chksum() function.

Signed-off-by: Dmitry Eremin-Solenikov
---
/** Email created from pull request 280 (lumag:chksum)
 ** https://github.com/Linaro/odp/pull/280
 ** Patch: https://github.com/Linaro/odp/pull/280.patch
 ** Base sha: d4b364849c4abb4c71e0c5260e1a793ebb8dc97d
 ** Merge commit sha: 9fda6f107cef0c4074880995a90b30cbeac27ce8 **/

 helper/include/odp/helper/chksum.h | 27 ---------------------------
 helper/include/odp/helper/ip.h     |  2 +-
 2 files changed, 1 insertion(+), 28 deletions(-)

diff --git a/helper/include/odp/helper/chksum.h b/helper/include/odp/helper/chksum.h
index 1bf950c8b..ed2de91fb 100644
--- a/helper/include/odp/helper/chksum.h
+++ b/helper/include/odp/helper/chksum.h
@@ -34,33 +34,6 @@ typedef enum {
 	ODPH_CHKSUM_RETURN	/**< Don't generate or verify chksum */
 } odph_chksum_op_t;
 
-/**
- * Checksum
- *
- * @param buffer calculate chksum for buffer
- * @param len    buffer length
- *
- * @return checksum value in network order
- */
-static inline odp_u16sum_t odph_chksum(void *buffer, int len)
-{
-	uint16_t *buf = (uint16_t *)buffer;
-	uint32_t sum = 0;
-	uint16_t result;
-
-	for (sum = 0; len > 1; len -= 2)
-		sum += *buf++;
-
-	if (len == 1)
-		sum += *(unsigned char *)buf;
-
-	sum = (sum >> 16) + (sum & 0xffff);
-	sum += (sum >> 16);
-	result = ~sum;
-
-	return (__odp_force odp_u16sum_t) result;
-}
-
 /**
  * General Purpose TCP/UDP checksum function
  *
diff --git a/helper/include/odp/helper/ip.h b/helper/include/odp/helper/ip.h
index c6eb9d767..b96aab280 100644
--- a/helper/include/odp/helper/ip.h
+++ b/helper/include/odp/helper/ip.h
@@ -125,7 +125,7 @@ static inline int odph_ipv4_csum(odp_packet_t pkt,
 	if (odp_unlikely(res < 0))
 		return res;
 
-	*chksum = odph_chksum(buf, nleft);
+	*chksum = ~odp_chksum_ones_comp16(buf, nleft);
 
 	return 0;
 }
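The series replaces the removed open-coded helper with `~odp_chksum_ones_comp16()`. The relationship can be sketched standalone; the function names below are local to this sketch, not ODP's, and the loop mirrors the deleted helper:

```c
#include <stdint.h>

/* One's-complement 16-bit sum with carry folding, i.e. the NOT-inverted
 * value that odp_chksum_ones_comp16() is specified to return. */
static uint16_t ones_comp16(const void *buffer, int len)
{
	const uint16_t *buf = buffer;
	uint32_t sum = 0;

	for (; len > 1; len -= 2)
		sum += *buf++;
	if (len == 1)
		sum += *(const uint8_t *)buf; /* trailing odd byte */

	/* fold the carries back into the low 16 bits */
	sum = (sum >> 16) + (sum & 0xffff);
	sum += (sum >> 16);
	return (uint16_t)sum;
}

/* The removed odph_chksum()/_odp_chksum() returned the complemented sum,
 * hence every call site becomes ~odp_chksum_ones_comp16(...). */
static uint16_t legacy_chksum(const void *buffer, int len)
{
	return (uint16_t)~ones_comp16(buffer, len);
}
```

This is why each hunk in the series is a one-line mechanical substitution: the new API returns the un-inverted sum, so the caller adds the final `~`.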
[lng-odp] [PATCH API-NEXT v3 1/3] example: switch to using ODP chksum API
From: Dmitry Eremin-Solenikov

ODP now provides odp_chksum_ones_comp16; switch the examples to use it
instead of the odph_chksum helper function.

Signed-off-by: Dmitry Eremin-Solenikov
---
/** Email created from pull request 280 (lumag:chksum)
 ** https://github.com/Linaro/odp/pull/280
 ** Patch: https://github.com/Linaro/odp/pull/280.patch
 ** Base sha: d4b364849c4abb4c71e0c5260e1a793ebb8dc97d
 ** Merge commit sha: 9fda6f107cef0c4074880995a90b30cbeac27ce8 **/

 example/generator/odp_generator.c | 7 ++++---
 example/ipsec/odp_ipsec_stream.c  | 5 +++--
 2 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/example/generator/odp_generator.c b/example/generator/odp_generator.c
index 912fe7e9e..e1d98539f 100644
--- a/example/generator/odp_generator.c
+++ b/example/generator/odp_generator.c
@@ -355,7 +355,7 @@ static int setup_udp_pkt(odp_packet_t pkt, odp_pktout_config_opt_t *pktout_cfg)
 	ip->id = odp_cpu_to_be_16(seq);
 	if (!pktout_cfg->bit.ipv4_chksum) {
 		ip->chksum = 0;
-		ip->chksum = odph_chksum(ip, ODPH_IPV4HDR_LEN);
+		ip->chksum = ~odp_chksum_ones_comp16(ip, ODPH_IPV4HDR_LEN);
 	}
 
 	if (pktout_cfg->bit.ipv4_chksum || pktout_cfg->bit.udp_chksum) {
@@ -453,7 +453,7 @@ static int setup_icmp_pkt(odp_packet_t pkt,
 	ip->id = odp_cpu_to_be_16(seq);
 	if (!pktout_cfg->bit.ipv4_chksum) {
 		ip->chksum = 0;
-		ip->chksum = odph_chksum(ip, ODPH_IPV4HDR_LEN);
+		ip->chksum = ~odp_chksum_ones_comp16(ip, ODPH_IPV4HDR_LEN);
 	}
 
 	/* icmp */
@@ -466,7 +466,8 @@ static int setup_icmp_pkt(odp_packet_t pkt,
 	memcpy(tval_d, &tval, sizeof(uint64_t));
 	icmp->chksum = 0;
-	icmp->chksum = odph_chksum(icmp, args->appl.payload + ODPH_ICMPHDR_LEN);
+	icmp->chksum = ~odp_chksum_ones_comp16(icmp, args->appl.payload +
+					       ODPH_ICMPHDR_LEN);
 
 	if (pktout_cfg->bit.ipv4_chksum) {
 		odp_packet_l2_offset_set(pkt, 0);
diff --git a/example/ipsec/odp_ipsec_stream.c b/example/ipsec/odp_ipsec_stream.c
index e37fbee29..bc5d572f0 100644
--- a/example/ipsec/odp_ipsec_stream.c
+++ b/example/ipsec/odp_ipsec_stream.c
@@ -269,7 +269,8 @@ odp_packet_t create_ipv4_packet(stream_db_entry_t *stream,
 		inner_ip->frag_offset = 0;
 		inner_ip->src_addr = odp_cpu_to_be_32(stream->src_ip);
 		inner_ip->dst_addr = odp_cpu_to_be_32(stream->dst_ip);
-		inner_ip->chksum = odph_chksum(inner_ip, sizeof(*inner_ip));
+		inner_ip->chksum = ~odp_chksum_ones_comp16(inner_ip,
+							   sizeof(*inner_ip));
 
 		data += sizeof(*inner_ip);
 	}
@@ -290,7 +291,7 @@ odp_packet_t create_ipv4_packet(stream_db_entry_t *stream,
 
 	/* Close ICMP */
 	icmp->chksum = 0;
-	icmp->chksum = odph_chksum(icmp, data - (uint8_t *)icmp);
+	icmp->chksum = ~odp_chksum_ones_comp16(icmp, data - (uint8_t *)icmp);
 
 	/* Close ESP if specified */
 	if (esp) {
[lng-odp] [PATCH API-NEXT v3 0/3] Use chksum API
github

/** Email created from pull request 280 (lumag:chksum)
 ** https://github.com/Linaro/odp/pull/280
 ** Patch: https://github.com/Linaro/odp/pull/280.patch
 ** Base sha: d4b364849c4abb4c71e0c5260e1a793ebb8dc97d
 ** Merge commit sha: 9fda6f107cef0c4074880995a90b30cbeac27ce8 **/

/github

checkpatch.pl

total: 0 errors, 0 warnings, 0 checks, 42 lines checked
to_send-p-000.patch has no obvious style problems and is ready for submission.

total: 0 errors, 0 warnings, 0 checks, 41 lines checked
to_send-p-001.patch has no obvious style problems and is ready for submission.

total: 0 errors, 0 warnings, 0 checks, 49 lines checked
to_send-p-002.patch has no obvious style problems and is ready for submission.

/checkpatch.pl
[lng-odp] IPsec: handling dummy packets (NH=59)
Hello,

I was thinking about another minor part of the IPsec RFCs: dummy packets
used to mask traffic statistics. An IPsec implementation is required to
drop ESP packets with NH = 59 (no next header) on the receiver side and is
expected to be able to generate these packets on the transmitter side.
Currently we do not provide a way to inject these packets in any way.

Possible solutions:

TX side:
- Add an API call to transmit a single packet.

- Extend transmit parameters to specify the next header (IPv4, IPv6 or
  NoNH) for each packet to be transmitted (per-packet or per-ODP call).

- ???

RX side:
- Silently drop NoNH packets.

- Report NoNH packets to the app via the error or status event mechanism.

- ???

--
With best wishes
Dmitry
Re: [lng-odp] [PATCH API-NEXT v8] api: random early detection and back pressure
Petri Savolainen(psavol) replied on github web page:

include/odp/api/spec/std_types.h

@@ -32,6 +32,13 @@ extern "C" {
  */
 
 /**
+ * @typedef odp_percent_t
+ * Use odp_percent_t for specifying fields that are percentages. It is a fixed
+ * point integer whose units are expressed as one-hundredth of a percent.
+ * Hence 100% is represented as integer value 10000.
+ */
+

Comment:
Replace the "@typedef odp_percent_t" doxygen tag with the short comment line
("Percent type"). The tag is not needed anymore as the typedef follows the
comment.

Also, remove the extra line feed.

> Petri Savolainen(psavol) wrote:
> bp_ prefixes can be removed from "bp_enable" and "bp_threshold".
>
> "BP is enabled when queue or pool value is equal to or greater than the max
> backpressure threshold."
> This should refer to "resource usage" (which has been defined in the RED
> documentation). "queue and pool value" are undefined terms.

>> Petri Savolainen(psavol) wrote:
>> Remove this extra line feed.

>>> Petri Savolainen(psavol) wrote:
>>> "... RED is enabled when the resource limit is equal to ..."
>>>
>>> This still refers to "resource limit"; it should be "resource usage",
>>> which is a term that has been defined above.

Petri Savolainen(psavol) wrote:
red_ prefix can be dropped from the struct field names. When the red param
is used, the name of the variable is likely red or red_param, etc. So, after
removing the prefix it looks like this

red.enable = 1;
red.threshold.packets.min = 1000;

vs

red.red_enable = 1;
red.red_threshold ...

> Petri Savolainen(psavol) wrote:
> "Reaches between" is not valid since drop probability is defined also when
> over max. "... when the packets in the queue/pool reaches between the
> specified threshold." -> "... when the packets in the queue/pool cross
> specified threshold values."
>
> The "resource limit" needs to be specified: is it free or used space? Or is
> it different for pools and queues: free space in a pool, used space in a
> queue? Maybe the text is easier to understand with a bullet list:
> * drop probability is 100%, when resource usage > threshold.max
> * drop probability is 0%, when resource usage < threshold.min
> * drop probability is between 0 ... 100%, when resource usage is between
>   threshold.min and threshold.max
>
> ... and then define what "resource usage" means. Pools: space used;
> queues: packets/bytes in queue.

>> Petri Savolainen(psavol) wrote:
>> This should match the type enum: byte. Also a better description is
>> needed: e.g. "Sum of all data bytes of all packets". Packet size does not
>> tell if it's the size of one or many packets.

>>> Petri Savolainen(psavol) wrote:
>>> This should match the type enum: packet

Petri Savolainen(psavol) wrote:
When percent is not abbreviated, then this should not be either. So,
_PERCENT and _PACKET, or PCT and PKT.

> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
> As we discussed during today's ARCH call, `max` and `min` can be renamed
> `start` and `stop` to more accurately reflect the actions. RED /
> backpressure begins when utilization hits the `start` threshold and is
> deasserted when utilization drops back to the `stop` threshold. So
> `start` >= `stop`. If the two are equal it means there is no hysteresis.
>
> The question arises whether a third `max` threshold value should exist at
> which drops / backpressure is held at 100%. The idea here is to reserve a
> portion of the pool/queue resource for application use independent of use
> by PktIOs. Since this may not be feasible in all implementations this
> should probably be advertised with additional capability info.

>> Balasubramanian Manoharan(bala-manoharan) wrote:
>> I can move the typedef here.
>> Not sure if I understand the need to move to uint32_t; I have only
>> defined 100% as 10,000 and we should be fine with uint16_t. Any other
>> specific reason for using uint32_t?

>>> Balasubramanian Manoharan(bala-manoharan) wrote:
>>> The drop probability is 100% only when the pool is completely full, i.e.
>>> there is no further buffer to allocate a packet. The intention of this
>>> description is that when resource usage is greater than threshold.max
>>> then drop probability is enabled and packets will get dropped with an
>>> increasing drop probability, and when it is less than the min threshold
>>> then drop probability is disabled.

Petri Savolainen(psavol) wrote:
I think it should be:
* drop probability == 100%, when resource usage > threshold.max
* drop probability == 0%, when
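The bullet-list semantics Petri proposes (0% below min, 100% above max, linear in between) can be sketched as a small helper. This is illustrative only: the function name and the percent-based return value are assumptions, not the odp_red_param_t API.

```c
#include <stdint.h>

/* Hypothetical linear RED curve. 'usage', 'min' and 'max' are in the same
 * unit (packets, bytes, or hundredths of a percent, per the threshold type).
 * Returns the drop probability in whole percent (0..100). */
static unsigned red_drop_prob_pct(uint64_t usage, uint64_t min, uint64_t max)
{
	if (usage <= min)
		return 0;   /* below the min threshold: never drop */
	if (usage >= max)
		return 100; /* above the max threshold: always drop */

	/* linear interpolation between threshold.min and threshold.max */
	return (unsigned)(((usage - min) * 100) / (max - min));
}
```

Note how this encodes the point under debate: whether 100% is reached already at threshold.max (as here) or only when the pool is completely full (Bala's reading) is exactly the boundary behavior the thread is trying to pin down.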
Re: [lng-odp] [PATCH API-NEXT v8] api: random early detection and back pressure
Petri Savolainen(psavol) replied on github web page:

include/odp/api/spec/classification.h

@@ -107,6 +108,61 @@ typedef union odp_cls_pmr_terms_t {
 	uint64_t all_bits;
 } odp_cls_pmr_terms_t;
 
+/** Random Early Detection (RED)
+ * Random Early Detection is enabled to initiate a drop probability for the
+ * incoming packet when the packets in the queue/pool cross the specified
+ * threshold values. RED is enabled when 'red_enable' boolean is true and
+ * the resource usage is equal to or greater than the minimum threshold value.
+ * Resource usage could be defined as the percentage of pool being full or the
+ * number of packets/bytes occupied in the queue depening on the platform
+ * capabilities.
+ * When RED is enabled for a particular flow then further incoming packets are
+ * assigned a drop probability based on the size of the pool/queue.
+ *
+ * Drop probability is configured as follows
+ * * Drop probability is 100%, when resource usage >= threshold.max
+ * * Drop probability is 0%, when resource usage <= threshold.min
+ * * Drop probability is between 0...100 % when resource usage is between
+ *   threshold.min and threshold.max
+ *
+ * RED is logically configured in the CoS and could be implemented in either
+ * pool or queue linked to the CoS depending on platform capabilities.
+ * Application should make sure not to link multiple CoS with different RED or
+ * BP configuration to the same queue or pool.
+ */
+typedef struct odp_red_param_t {
+	/** A boolean to enable RED
+	 * When true, RED is enabled and configured with RED parameters.
+	 * Otherwise, RED parameters are ignored.
+	 */
+	odp_bool_t red_enable;
+
+	/** Threshold parameters for RED
+	 * RED is enabled when the resource limit is equal to or greater than
+	 * the minimum threshold value and is disabled otherwise
+	 */
+	odp_threshold_t red_threshold;
+} odp_red_param_t;
+
+/** Back pressure (BP)
+ * When back pressure is enabled for a particular flow, the HW can send
+ * back pressure information to the remote peer indicating a network congestion.
+ */
+
+typedef struct odp_bp_param_t {
+	/** A boolean to enable Back pressure
+	 * When true, back pressure is enabled and configured with the BP
+	 * parameters. Otherwise BP parameters are ignored.
+	 */
+	odp_bool_t bp_enable;
+
+	/** Threshold value for back pressure.
+	 * BP is enabled when queue or pool value is equal to or greater
+	 * than the max backpressure threshold.
+	 * Min threshold parameters are ignored for BP configuration.
+	 */
+	odp_threshold_t bp_threshold;
+} odp_bp_param_t;

Comment:
bp_ prefixes can be removed from "bp_enable" and "bp_threshold".

"BP is enabled when queue or pool value is equal to or greater than the max
backpressure threshold."
This should refer to "resource usage" (which has been defined in the RED
documentation). "queue and pool value" are undefined terms.

> Petri Savolainen(psavol) wrote:
> Remove this extra line feed.

>> Petri Savolainen(psavol) wrote:
>> "... RED is enabled when the resource limit is equal to ..."
>>
>> This still refers to "resource limit"; it should be "resource usage",
>> which is a term that has been defined above.

>>> Petri Savolainen(psavol) wrote:
>>> red_ prefix can be dropped from the struct field names. When the red
>>> param is used, the name of the variable is likely red or red_param, etc.
>>> So, after removing the prefix it looks like this
>>>
>>> red.enable = 1;
>>> red.threshold.packets.min = 1000;
>>>
>>> vs
>>>
>>> red.red_enable = 1;
>>> red.red_threshold ...

Petri Savolainen(psavol) wrote:
"Reaches between" is not valid since drop probability is defined also when
over max. "... when the packets in the queue/pool reaches between the
specified threshold." -> "... when the packets in the queue/pool cross
specified threshold values."

The "resource limit" needs to be specified: is it free or used space? Or is
it different for pools and queues: free space in a pool, used space in a
queue? Maybe the text is easier to understand with a bullet list:
* drop probability is 100%, when resource usage > threshold.max
* drop probability is 0%, when resource usage < threshold.min
* drop probability is between 0 ... 100%, when resource usage is between
  threshold.min and threshold.max

... and then define what "resource usage" means. Pools: space used; queues:
packets/bytes in queue.

> Petri Savolainen(psavol) wrote:
> This should match the type enum: byte. Also a better description is
> needed: e.g. "Sum of all data bytes of all packets". Packet size does not
> tell if it's the size of one or many packets.

>> Petri Savolainen(psavol) wrote:
>> This should match the type enum: packet

>>> Petri Savolainen(psavol) wrote:
>>> When percent is not abbreviated, then
Re: [lng-odp] [PATCH API-NEXT v8] api: random early detection and back pressure
Petri Savolainen(psavol) replied on github web page:

include/odp/api/spec/threshold.h

@@ -0,0 +1,105 @@
+/* Copyright (c) 2017, Linaro Limited
+ * All rights reserved.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause
+ */
+
+/**
+ * @file
+ *
+ * ODP threshold descriptor
+ */
+
+#ifndef ODP_API_THRESHOLD_H_
+#define ODP_API_THRESHOLD_H_
+
+#include
+#include
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * Supported threshold types
+ *
+ * Supported threshold types in a bit field structure.
+ */
+typedef union odp_threshold_types_t {
+	/** bitfields for different threshold types */
+	struct {
+		/** Percentage of the total size of pool or queue */
+		uint8_t percent:1;
+
+		/** Total number of all transient packets */
+		uint8_t packet:1;
+
+		/** Total size of all transient packets in bytes */
+		uint8_t bytes:1;
+	};
+
+	/** All bits of the bit field structure */
+	uint8_t all_bits;
+} odp_threshold_types_t;
+
+/**
+ * ODP Threshold types
+ *
+ * Different types of threshold measurements
+ */
+typedef	enum {
+	/** Percentage of the total size of pool or queue */

Comment:
Change the tab to a space between typedef and enum. Also, typically the
typedef'ed enum/struct/union also has a name, so that a debugger does not
refer to it as an anonymous enum:

typedef enum odp_threshold_type_t {
	...
} odp_threshold_type_t;

> Petri Savolainen(psavol) wrote:
> Replace the "@typedef odp_percent_t" doxygen tag with the short comment
> line ("Percent type"). The tag is not needed anymore as the typedef
> follows the comment.
>
> Also, remove the extra line feed.

>> Petri Savolainen(psavol) wrote:
>> bp_ prefixes can be removed from "bp_enable" and "bp_threshold".
>>
>> "BP is enabled when queue or pool value is equal to or greater than the
>> max backpressure threshold."
>> This should refer to "resource usage" (which has been defined in the RED
>> documentation). "queue and pool value" are undefined terms.

>>> Petri Savolainen(psavol) wrote:
>>> Remove this extra line feed.

Petri Savolainen(psavol) wrote:
"... RED is enabled when the resource limit is equal to ..."

This still refers to "resource limit"; it should be "resource usage", which
is a term that has been defined above.

> Petri Savolainen(psavol) wrote:
> red_ prefix can be dropped from the struct field names. When the red param
> is used, the name of the variable is likely red or red_param, etc. So,
> after removing the prefix it looks like this
>
> red.enable = 1;
> red.threshold.packets.min = 1000;
>
> vs
>
> red.red_enable = 1;
> red.red_threshold ...

>> Petri Savolainen(psavol) wrote:
>> "Reaches between" is not valid since drop probability is defined also
>> when over max. "... when the packets in the queue/pool reaches between
>> the specified threshold." -> "... when the packets in the queue/pool
>> cross specified threshold values."
>>
>> The "resource limit" needs to be specified: is it free or used space? Or
>> is it different for pools and queues: free space in a pool, used space in
>> a queue? Maybe the text is easier to understand with a bullet list:
>> * drop probability is 100%, when resource usage > threshold.max
>> * drop probability is 0%, when resource usage < threshold.min
>> * drop probability is between 0 ... 100%, when resource usage is between
>>   threshold.min and threshold.max
>>
>> ... and then define what "resource usage" means. Pools: space used;
>> queues: packets/bytes in queue.

>>> Petri Savolainen(psavol) wrote:
>>> This should match the type enum: byte. Also a better description is
>>> needed: e.g. "Sum of all data bytes of all packets". Packet size does
>>> not tell if it's the size of one or many packets.

Petri Savolainen(psavol) wrote:
This should match the type enum: packet

> Petri Savolainen(psavol) wrote:
> When percent is not abbreviated, then this should not be either. So,
> _PERCENT and _PACKET, or PCT and PKT.

>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>> As we discussed during today's ARCH call, `max` and `min` can be renamed
>> `start` and `stop` to more accurately reflect the actions. RED /
>> backpressure begins when utilization hits the `start` threshold and is
>> deasserted when utilization drops back to the `stop` threshold. So
>> `start` >= `stop`. If the two are equal it means there is no hysteresis.
>>
>> The question arises whether a third `max` threshold value should exist at
>> which drops / backpressure is held at 100%. The idea here is to reserve a
>> portion of the pool/queue resource for application use independent of use
>> by PktIOs.
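Bill's start/stop proposal is a classic hysteresis state machine: assert at `start`, deassert only once usage falls back to `stop`. A minimal sketch, with hypothetical names (this is not an ODP API):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical backpressure hysteresis: start >= stop; when they are
 * equal there is no hysteresis, matching Bill's description. */
struct bp_state {
	uint64_t start;  /* assert BP when usage reaches this */
	uint64_t stop;   /* deassert BP when usage falls back to this */
	bool asserted;
};

/* Feed the current resource usage; returns whether BP is asserted. */
static bool bp_update(struct bp_state *bp, uint64_t usage)
{
	if (!bp->asserted && usage >= bp->start)
		bp->asserted = true;
	else if (bp->asserted && usage <= bp->stop)
		bp->asserted = false;

	return bp->asserted;
}
```

The gap between `start` and `stop` is what prevents the on/off flapping that a single threshold would produce when usage hovers around it.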
Re: [lng-odp] [PATCH API-NEXT v8] api: random early detection and back pressure
Petri Savolainen(psavol) replied on github web page:

include/odp/api/spec/classification.h

@@ -107,6 +108,61 @@ typedef union odp_cls_pmr_terms_t {
 	uint64_t all_bits;
 } odp_cls_pmr_terms_t;
 
+/** Random Early Detection (RED)
+ * Random Early Detection is enabled to initiate a drop probability for the
+ * incoming packet when the packets in the queue/pool cross the specified
+ * threshold values. RED is enabled when 'red_enable' boolean is true and
+ * the resource usage is equal to or greater than the minimum threshold value.
+ * Resource usage could be defined as the percentage of pool being full or the
+ * number of packets/bytes occupied in the queue depening on the platform
+ * capabilities.
+ * When RED is enabled for a particular flow then further incoming packets are
+ * assigned a drop probability based on the size of the pool/queue.
+ *
+ * Drop probability is configured as follows
+ * * Drop probability is 100%, when resource usage >= threshold.max
+ * * Drop probability is 0%, when resource usage <= threshold.min
+ * * Drop probability is between 0...100 % when resource usage is between
+ *   threshold.min and threshold.max
+ *
+ * RED is logically configured in the CoS and could be implemented in either
+ * pool or queue linked to the CoS depending on platform capabilities.
+ * Application should make sure not to link multiple CoS with different RED or
+ * BP configuration to the same queue or pool.
+ */
+typedef struct odp_red_param_t {
+	/** A boolean to enable RED
+	 * When true, RED is enabled and configured with RED parameters.
+	 * Otherwise, RED parameters are ignored.
+	 */
+	odp_bool_t red_enable;
+
+	/** Threshold parameters for RED
+	 * RED is enabled when the resource limit is equal to or greater than
+	 * the minimum threshold value and is disabled otherwise
+	 */
+	odp_threshold_t red_threshold;
+} odp_red_param_t;
+
+/** Back pressure (BP)
+ * When back pressure is enabled for a particular flow, the HW can send
+ * back pressure information to the remote peer indicating a network congestion.
+ */
+

Comment:
Remove this extra line feed.

> Petri Savolainen(psavol) wrote:
> "... RED is enabled when the resource limit is equal to ..."
>
> This still refers to "resource limit"; it should be "resource usage",
> which is a term that has been defined above.

>> Petri Savolainen(psavol) wrote:
>> red_ prefix can be dropped from the struct field names. When the red
>> param is used, the name of the variable is likely red or red_param, etc.
>> So, after removing the prefix it looks like this
>>
>> red.enable = 1;
>> red.threshold.packets.min = 1000;
>>
>> vs
>>
>> red.red_enable = 1;
>> red.red_threshold ...

>>> Petri Savolainen(psavol) wrote:
>>> "Reaches between" is not valid since drop probability is defined also
>>> when over max. "... when the packets in the queue/pool reaches between
>>> the specified threshold." -> "... when the packets in the queue/pool
>>> cross specified threshold values."
>>>
>>> The "resource limit" needs to be specified: is it free or used space? Or
>>> is it different for pools and queues: free space in a pool, used space
>>> in a queue? Maybe the text is easier to understand with a bullet list:
>>> * drop probability is 100%, when resource usage > threshold.max
>>> * drop probability is 0%, when resource usage < threshold.min
>>> * drop probability is between 0 ... 100%, when resource usage is between
>>>   threshold.min and threshold.max
>>>
>>> ... and then define what "resource usage" means. Pools: space used;
>>> queues: packets/bytes in queue.

Petri Savolainen(psavol) wrote:
This should match the type enum: byte. Also a better description is needed:
e.g. "Sum of all data bytes of all packets". Packet size does not tell if
it's the size of one or many packets.

> Petri Savolainen(psavol) wrote:
> This should match the type enum: packet

>> Petri Savolainen(psavol) wrote:
>> When percent is not abbreviated, then this should not be either. So,
>> _PERCENT and _PACKET, or PCT and PKT.

>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>> As we discussed during today's ARCH call, `max` and `min` can be renamed
>>> `start` and `stop` to more accurately reflect the actions. RED /
>>> backpressure begins when utilization hits the `start` threshold and is
>>> deasserted when utilization drops back to the `stop` threshold. So
>>> `start` >= `stop`. If the two are equal it means there is no hysteresis.
>>>
>>> The question arises whether a third `max` threshold value should exist
>>> at which drops / backpressure is held at 100%. The idea here is to
>>> reserve a portion of the pool/queue resource for application use
>>> independent of use by PktIOs. Since this may not be feasible in all
>>> implementations this should probably be advertised with additional
Re: [lng-odp] [PATCH API-NEXT v8] api: random early detection and back pressure
Petri Savolainen(psavol) replied on github web page:

include/odp/api/spec/classification.h

@@ -107,6 +108,55 @@ typedef union odp_cls_pmr_terms_t {
 	uint64_t all_bits;
 } odp_cls_pmr_terms_t;
 
+/** Random Early Detection (RED)
+ * Random Early Detection is enabled to initiate a drop probability
+ * for the incoming packet when the packets in the queue/pool reaches
+ * a specified threshold.
+ * When RED is enabled for a particular flow then further incoming
+ * packets are assigned a drop probability based on the size of the
+ * pool/queue and the drop probability becomes 100% when the queue/pool
+ * is full.
+ * RED is logically configured in the CoS and could be implemented
+ * in either pool or queue linked to the CoS depending on
+ * platform capabilities. Application should make sure not to link
+ * multiple CoS with different RED or BP configuration to the same queue
+ * or pool.
+ * RED is enabled when the resource limit is equal to or greater than
+ * the maximum threshold value and is disabled when resource limit
+ * is less than or equal to minimum threshold value. */
+typedef struct odp_red_param_t {
+	/** A boolean to enable RED
+	 * When true, RED is enabled and configured with RED parameters.
+	 * Otherwise, RED parameters are ignored. */
+	odp_bool_t red_enable;
+
+	/** Threshold parameters for RED
+	 * RED is enabled when the resource limit is equal to or greater than
+	 * the maximum threshold value and is disabled when resource limit
+	 * is less than or equal to minimum threshold value. */
+	odp_threshold_t red_threshold;

Comment:
"... RED is enabled when the resource limit is equal to ..."

This still refers to "resource limit"; it should be "resource usage", which
is a term that has been defined above.

> Petri Savolainen(psavol) wrote:
> red_ prefix can be dropped from the struct field names. When the red param
> is used, the name of the variable is likely red or red_param, etc. So,
> after removing the prefix it looks like this
>
> red.enable = 1;
> red.threshold.packets.min = 1000;
>
> vs
>
> red.red_enable = 1;
> red.red_threshold ...

>> Petri Savolainen(psavol) wrote:
>> "Reaches between" is not valid since drop probability is defined also
>> when over max. "... when the packets in the queue/pool reaches between
>> the specified threshold." -> "... when the packets in the queue/pool
>> cross specified threshold values."
>>
>> The "resource limit" needs to be specified: is it free or used space? Or
>> is it different for pools and queues: free space in a pool, used space in
>> a queue? Maybe the text is easier to understand with a bullet list:
>> * drop probability is 100%, when resource usage > threshold.max
>> * drop probability is 0%, when resource usage < threshold.min
>> * drop probability is between 0 ... 100%, when resource usage is between
>>   threshold.min and threshold.max
>>
>> ... and then define what "resource usage" means. Pools: space used;
>> queues: packets/bytes in queue.

>>> Petri Savolainen(psavol) wrote:
>>> This should match the type enum: byte. Also a better description is
>>> needed: e.g. "Sum of all data bytes of all packets". Packet size does
>>> not tell if it's the size of one or many packets.

Petri Savolainen(psavol) wrote:
This should match the type enum: packet

> Petri Savolainen(psavol) wrote:
> When percent is not abbreviated, then this should not be either. So,
> _PERCENT and _PACKET, or PCT and PKT.

>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>> As we discussed during today's ARCH call, `max` and `min` can be renamed
>> `start` and `stop` to more accurately reflect the actions. RED /
>> backpressure begins when utilization hits the `start` threshold and is
>> deasserted when utilization drops back to the `stop` threshold. So
>> `start` >= `stop`. If the two are equal it means there is no hysteresis.
>>
>> The question arises whether a third `max` threshold value should exist at
>> which drops / backpressure is held at 100%. The idea here is to reserve a
>> portion of the pool/queue resource for application use independent of use
>> by PktIOs.

>>> Balasubramanian Manoharan(bala-manoharan) wrote:
>>> I can move the typedef here.
>>> Not sure if I understand the need to move to uint32_t; I have only
>>> defined 100% as 10,000 and we should be fine with uint16_t. Any other
>>> specific reason for using uint32_t?

Balasubramanian Manoharan(bala-manoharan) wrote:
The drop probability is 100% only when the pool is completely full, i.e.
there is no further buffer to allocate a packet. The intention of this
description is that when resource usage is greater than threshold.max then
the drop
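The uint16_t vs uint32_t question above comes down to the fixed-point encoding: with units of one-hundredth of a percent, 100% encodes as 10,000, which fits easily in 16 bits. A sketch of the encoding (the type alias and helper below are hypothetical, not the ODP odp_percent_t definition):

```c
#include <stdint.h>

/* Hypothetical fixed-point percent: units of 0.01%, so 100% == 10000.
 * uint16_t (max 65535) would suffice for values up to 100%; uint32_t
 * only buys headroom for values above 655.35%. */
typedef uint32_t percent_fx_t;

/* Convert a floating-point percentage to the fixed-point encoding,
 * rounding to the nearest unit of 0.01%. */
static percent_fx_t percent_encode(double pct)
{
	return (percent_fx_t)(pct * 100.0 + 0.5);
}
```

So Bala's point stands for any field capped at 100%; a wider type only matters if a percentage field may legitimately exceed 100%.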
Re: [lng-odp] [PATCH API-NEXT v8] api: random early detection and back pressure
Petri Savolainen(psavol) replied on github web page: include/odp/api/spec/classification.h @@ -107,6 +108,55 @@ typedef union odp_cls_pmr_terms_t { uint64_t all_bits; } odp_cls_pmr_terms_t; +/** Random Early Detection (RED) + * Random Early Detection is enabled to initiate a drop probability + * for the incoming packet when the packets in the queue/pool reaches + * a specified threshold. + * When RED is enabled for a particular flow then further incoming + * packets are assigned a drop probability based on the size of the + * pool/queue and the drop probability becomes 100% when the queue/pool + * is full. + * RED is logically configured in the CoS and could be implemented + * in either pool or queue linked to the CoS depending on + * platform capabilities. Application should make sure not to link + * multiple CoS with different RED or BP configuration to the same queue + * or pool. + * RED is enabled when the resource limit is equal to or greater than + * the maximum threshold value and is disabled when resource limit + * is less than or equal to minimum threshold value. */ +typedef struct odp_red_param_t { + /** A boolean to enable RED +* When true, RED is enabled and configured with RED parameters. +* Otherwise, RED parameters are ignored. */ + odp_bool_t red_enable; + + /** Threshold parameters for RED +* RED is enabled when the resource limit is equal to or greater than +* the maximum threshold value and is disabled when resource limit +* is less than or equal to minimum threshold value. */ + odp_threshold_t red_threshold; Comment: red_ prefix can be dropped from the struct field names. When red param is used, the name of the variable is likely red or red_param, etc. So, after removing the prefix it looks like this red.enable = 1; red.threshold.packets.min = 1000; vs red.red_enable = 1; red.red_threshold ... > Petri Savolainen(psavol) wrote: > "Reaches between" is not valid since drop probability is defined also when > max. "... 
when the packets in the queue/pool reaches between the > specified threshold." -> "... when the packets in the queue/pool cross > specified threshold values." > > The "resource limit" needs to be specified: is it free or used space? Or is > it different for pools and queues: free space in a pool, used space in a > queue? May be the text is easier to understand with a bullet list: > * drop probability is 100%, when resource usage > threshold.max > * drop probability is 0%, when resource usage < threshold.min > * drop probability is between 0 ... 100%, when resource usage is between > threshold.min and threshold.max > > ... and then define what "resource usage" means. Pools: space used, queues > packet/bytes in queue >> Petri Savolainen(psavol) wrote: >> This should match the type enum: byte. Also better description is needed: >> e.g. "Sum of all data bytes of all packets". Packet size does not tell if >> it's the size of one or many packets. >>> Petri Savolainen(psavol) wrote: >>> This should match the type enum: packet Petri Savolainen(psavol) wrote: When percent is not abbreviated, then this should not be either. So, _PERCENT and _PACKET, or PCT and PKT. > Bill Fischofer(Bill-Fischofer-Linaro) wrote: > As we discussed during today's ARCH call. `max` and `min` can be renamed > `start` and `stop` to more accurately reflect the actions. RED / > backpressure begins when utilization hits the `start` threshold and is > deasserted when utilization drops back to the `stop` threshold. So > `start` >= `stop`. If the two are equal it means there is no hysteresis. > > The question arises whether a third `max` threshold value should exist at > which drops / backpressure is held at 100%. The idea here is to reserve a > portion of the pool/queue resource for application use independent of use > by PktIOs. Since this may not be feasible in all implementations this > should probably be advertised with additional capability info. 
>> Balasubramanian Manoharan(bala-manoharan) wrote: >> I can move the typedef here. >> Not sure if I understand the need to move to uint32_t, I have only >> defined 100% as 10,000 and we should be fine with uint16_t. Any other >> specific reason for using uint32_t? >>> Balasubramanian Manoharan(bala-manoharan) wrote: >>> The drop probability is 100% only when the pool is completely full, i.e. >>> there is no further buffer to allocate a packet. The intention of this >>> description is that when resource usage is greater than threshold.max >>> then the drop probability is enabled and packets will get dropped with >>> increasing probability, and when it is less than the min threshold then >>> drop probability is disabled. Petri Savolainen(psavol) wrote: I think it should be: * drop probability == 100%,
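The threshold semantics debated above can be sketched directly. A minimal illustration in C, assuming "resource usage" and both thresholds are plain packet counts and using the 100% == 10,000 convention from the thread; `red_drop_permyriad()` is a hypothetical helper, not part of the ODP API:

```c
#include <stdint.h>

/* Drop probability in units of 0.01% (0..10000), i.e. 100% == 10,000. */
static uint16_t red_drop_permyriad(uint32_t usage, uint32_t min, uint32_t max)
{
	if (usage <= min)
		return 0;       /* at or below the min threshold: never drop */
	if (usage >= max)
		return 10000;   /* at or above the max threshold: always drop */
	/* Linear ramp between the two thresholds */
	return (uint16_t)(((uint64_t)(usage - min) * 10000) / (max - min));
}
```

With thresholds of 1000/2000 packets, usage 1500 lands exactly at a 50% drop probability; whether an implementation ramps linearly or otherwise is left open by the API text.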
Re: [lng-odp] [PATCH 2.0 v3] Replace pktio ops_data array implementation with memory pool implementation
bogdanPricope replied on github web page: platform/linux-generic/pktio/tap.c line 10 @@ -92,23 +92,30 @@ static int tap_pktio_open(odp_pktio_t id ODP_UNUSED, int fd, skfd, flags; uint32_t mtu; struct ifreq ifr; - pktio_ops_tap_data_t *tap = odp_ops_data(pktio_entry, tap); + pktio_ops_tap_data_t *tap = NULL; if (strncmp(devname, "tap:", 4) != 0) return -1; + if (pool == ODP_POOL_INVALID) Comment: I agree, this check should be part of odp_pktio_open() or setup_pktio_entry(), but that should be done in another PR. Here, I only move this check higher up in the function (it is not new code). > nagarahalli wrote > This file is for allocating the shmem for storing the driver's global data. > > The name 'pool' seems to be creating confusion with the packet/buffer pool. > Maybe we should change the name of > '_odp_ishm_pool_create/odpdrv_shm_pool_create'. >> Bill Fischofer(Bill-Fischofer-Linaro) wrote: >> This is a parameter passed through from `odp_pktio_open()`. The spec simply >> says that this must be of type `ODP_POOL_PACKET`, so results are undefined >> if the caller doesn't abide by the spec. A courtesy validation check doesn't >> seem unreasonable, but any validation checking should be done in the main >> API, not in each individual driver. As such drivers should assume that >> `pool` is valid at entry since these entry points cannot be invoked directly >> by applications. >>> Bill Fischofer(Bill-Fischofer-Linaro) wrote: >>> `ODP_OPS_DATA_FREE()` Bill Fischofer(Bill-Fischofer-Linaro) wrote: Again since this is a macro, the name should be `ODP_OPS_DATA_ALLOC()`. > Bill Fischofer(Bill-Fischofer-Linaro) wrote: > Macros should be CAPITALIZED to indicate that they are macros rather than > callable APIs. So `ODP_OPS_DATA()` would be used here. >> Bill Fischofer(Bill-Fischofer-Linaro) wrote: >> Where does this number come from? Is it subject to configuration? >>> Bill Fischofer(Bill-Fischofer-Linaro) wrote: >>> 2017, not 2013 here. 
Bill Fischofer(Bill-Fischofer-Linaro) wrote: Are these intended to be new ODP APIs? If so they need to be specified in `odp/api/spec/packet.h`. Or are they supposed to be new Southbound APIs for use by drivers? Need some clarification here. The use of the `odp_` prefix implies that these are Northbound APIs. If these are Southbound APIs, then the prefix should be `odpdrv_`. > Bill Fischofer(Bill-Fischofer-Linaro) wrote: > Both pools and shmem's are ODP objects. The difference is a pool is a > structured collection of objects that can be allocated and freed from > the pool and that contain both data and metadata, while a shmem is a > "slab" of memory that has no structure beyond how the application > chooses to use it. Given this distinction, a pool seems more useful > here. >> Bill Fischofer(Bill-Fischofer-Linaro) wrote: >> This definitely should not be part of the API spec. It's an >> implementation artifact. >>> nagarahalli wrote >>> This should be part of odp_pktio_ops_subsystem.h file. nagarahalli wrote Should we rename it to odp_packet_io_shm.h and odp_packet_io_shm.c? > nagarahalli wrote > '_p' is not required as the macro is returning 'ops_data'. Makes > the macro simple as well. > > Similarly for odp_ops_data_free can just take 'ops_data' as input. > > This will be inline with future plans to not expose > 'pktio_entry_t' to the drivers. >> He Yi(heyi-linaro) wrote: >> In future, since each pktio_ops module will not expose their >> private data type, this macro can be changed to >> >`#define odp_ops_data(_entry, _pdata) \ >> pdata = (typeof(_pdata))(_entry->s.ops_data)` >> >> So the odp_pktio_ops_subsystem.h header won't need to know all >> pktio_ops' private data structure declaration. Can be considered >> next time. >>> He Yi(heyi-linaro) wrote: >>> Like this! 
He Yi(heyi-linaro) wrote: Looks good, no more comments from me on this commit after Honnappa's and Josep's; this is a step forward for the pktio ops data. This commit also reveals how complex it is in ODP to allocate an arbitrary-sized piece of memory: you need to prepare a pool (and guess the largest usage), look up this pool by name on every allocation/free, and only then do the allocation/free. > He Yi(heyi-linaro) wrote: > seems no need to add an extra
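Bill's point above, that the courtesy `ODP_POOL_INVALID` check belongs in the common `odp_pktio_open()` path rather than in each driver, can be sketched as follows. The typedef and the `generic_pktio_open()` function are stand-ins so the fragment is self-contained; they are not the linux-generic implementation:

```c
#include <stdint.h>
#include <stddef.h>

/* Stand-ins so the sketch compiles on its own; not the real ODP types. */
typedef uintptr_t odp_pool_t;
#define ODP_POOL_INVALID ((odp_pool_t)0)

/* Hypothetical common open path: validate the pool once here so every
 * driver's open hook may assume it is valid on entry. */
static int generic_pktio_open(const char *devname, odp_pool_t pool)
{
	if (devname == NULL || pool == ODP_POOL_INVALID)
		return -1;      /* reject before dispatching to any driver */
	/* ...dispatch to the driver whose prefix matches devname... */
	return 0;
}
```

With the check centralized, per-driver open functions such as `tap_pktio_open()` no longer need to repeat it.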
Re: [lng-odp] Github shows commits in wrong order
On 11/20/17 13:16, Andriy Berestovskyy wrote: > Hey guys, > I guess it is a known GitHub "feature": > > https://help.github.com/articles/why-are-my-commits-in-the-wrong-order/ > > Andriy > Yep, the GitHub team also replied with that link, and it's not clear if or when they will fix it. Maxim. > On 20.11.2017 10:15, Maxim Uvarov wrote: >> created a ticket for this on github. >> >> On 20 November 2017 at 12:02, Savolainen, Petri (Nokia - FI/Espoo) < >> petri.savolai...@nokia.com> wrote: >> -Original Message- From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of Savolainen, Petri (Nokia - FI/Espoo) Sent: Friday, November 17, 2017 10:24 AM To: lng-odp@lists.linaro.org Subject: [lng-odp] Github shows commits in wrong order Hi, Here's evidence that Github sometimes shows commits in the wrong order. The first commit of this patch set is "api: std_types: add odp_percent_t data type", but it shows in the Github web page as the last one! It's annoying and counterproductive that reviewers cannot trust the patch order. -Petri >>> >>> Github has done another spin on the commit order lottery. Now, the first >>> patch ("add odp_percent_t") is in the middle. >>> >>> -Petri >>>
Re: [lng-odp] [PATCH 2.0 v3] Replace pktio ops_data array implementation with memory pool implementation
nagarahalli replied on github web page: platform/linux-generic/Makefile.am line 4 @@ -175,6 +175,7 @@ noinst_HEADERS = \ include/odp_name_table_internal.h \ include/odp_packet_internal.h \ include/odp_packet_io_internal.h \ + include/odp_packet_io_pool.h \ Comment: This file is for allocating the shmem for storing the driver's global data. The name 'pool' seems to be creating confusion with the packet/buffer pool. May be we should change the name of '_odp_ishm_pool_create/odpdrv_shm_pool_create'. > Bill Fischofer(Bill-Fischofer-Linaro) wrote: > This is a parameter passed through from `odp_pktio_open()`. The spec simply > says that this must be of type `ODP_POOL_PACKET`, so results are undefined if > the caller doesn't abide by the spec. A courtesy validation check doesn't > seem unreasonable, but any validation checking should be done in the main > API, not in each individual driver. As such drivers should assume that `pool` > is valid at entry since these entry points cannot be invoked directly by > applications. >> Bill Fischofer(Bill-Fischofer-Linaro) wrote: >> `ODP_OPS_DATA_FREE()` >>> Bill Fischofer(Bill-Fischofer-Linaro) wrote: >>> Again since this is a macro, the name should be `ODP_OPS_DATA_ALLOC()`. Bill Fischofer(Bill-Fischofer-Linaro) wrote: Macros should be CAPITALIZED to indicate that they are macros rather than callable APIs. So `ODP_OPS_DATA()` would be used here. > Bill Fischofer(Bill-Fischofer-Linaro) wrote: > Where does this number come from? Is it subject to configuration? >> Bill Fischofer(Bill-Fischofer-Linaro) wrote: >> 2017, not 2013 here. >>> Bill Fischofer(Bill-Fischofer-Linaro) wrote: >>> Are these intended to be new ODP APIs? If so they need to be specified >>> in `odp/api/spec/packet.h`. Or are they supposed to be new Southbound >>> APIs for use by drivers? Need some clarification here. The use of the >>> `odp_` prefix implies that these are Northbound APIs. If these are >>> Southbound APIs, then the prefix should be `odpdrv_`. 
Bill Fischofer(Bill-Fischofer-Linaro) wrote: Both pools and shmem's are ODP objects. The difference is a pool is a structured collection of objects that can be allocated and freed from the pool and that contain both data and metadata, while a shmem is a "slab" of memory that has no structure beyond how the application chooses to use it. Given this distinction, a pool seems more useful here. > Bill Fischofer(Bill-Fischofer-Linaro) wrote: > This definitely should not be part of the API spec. It's an > implementation artifact. >> nagarahalli wrote >> This should be part of odp_pktio_ops_subsystem.h file. >>> nagarahalli wrote >>> Should we rename it to odp_packet_io_shm.h and odp_packet_io_shm.c? nagarahalli wrote '_p' is not required as the macro is returning 'ops_data'. Makes the macro simple as well. Similarly for odp_ops_data_free can just take 'ops_data' as input. This will be inline with future plans to not expose 'pktio_entry_t' to the drivers. > He Yi(heyi-linaro) wrote: > In future, since each pktio_ops module will not expose their > private data type, this macro can be changed to > >`#define odp_ops_data(_entry, _pdata) \ > pdata = (typeof(_pdata))(_entry->s.ops_data)` > > So the odp_pktio_ops_subsystem.h header won't need to know all > pktio_ops' private data structure declaration. Can be considered > next time. >> He Yi(heyi-linaro) wrote: >> Like this! >>> He Yi(heyi-linaro) wrote: >>> Look good no more comments from me to this commit after >>> Honnappa and Josep's, this is a step forward for the pktio ops >>> data >>> >>> This commit also reveals how complex in ODP to allocate an >>> arbitrary sized piece of memory, need to prepare a pool (and >>> guess the largest usage), lookup this pool in every >>> allocation/free by name, and do the allocation/free after then. He Yi(heyi-linaro) wrote: seems no need to add an extra macro here? ODP_SUBSYSTEM_FOREACH_TEMPLATE(...) 
is extern; we can just use the same macro to generate a static function: static ODP_SUBSYSTEM_FOREACH_TEMPLATE(...) > nagarahalli wrote > Temporarily, round up the size to the cache line size. This way > all the memory allocations will be cache line aligned.
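The cache-line rounding nagarahalli mentions is typically a one-line mask macro. A sketch, assuming a 64-byte line size (linux-generic takes the real value from its platform headers, so both names here are illustrative):

```c
#include <stdint.h>

/* Assumed 64-byte cache line; platforms differ. */
#define CACHE_LINE_SIZE 64u

/* Round sz up to the next multiple of the cache line size
 * (works because the line size is a power of two). */
#define ROUNDUP_CACHE_LINE(sz) \
	(((uint64_t)(sz) + CACHE_LINE_SIZE - 1) & ~(uint64_t)(CACHE_LINE_SIZE - 1))
```

Rounding every ops-data allocation this way guarantees cache-line alignment of successive allocations from a line-aligned arena, at the cost of up to 63 wasted bytes per allocation.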
Re: [lng-odp] [PATCH 2.0 v3] Replace pktio ops_data array implementation with memory pool implementation
Bill Fischofer(Bill-Fischofer-Linaro) replied on github web page: platform/linux-generic/include/odp_pktio_ops_subsystem.h @@ -87,12 +87,8 @@ typedef ODP_MODULE_CLASS(pktio_ops) { odp_api_proto(pktio_ops, print) print; } pktio_ops_module_t; -/* Maximum size of pktio specific ops data.*/ -#define ODP_PKTIO_ODPS_DATA_MAX_SIZE 8 - /* Extract pktio ops data from pktio entry structure */ -#define odp_ops_data(_p, _mod) \ - ((pktio_ops_ ## _mod ## _data_t *)(uintptr_t)_p->s.ops_data) +#define odp_ops_data(_p, _mod) _p->s.ops_data #define odp_ops_data_alloc(_p, _size) \ Comment: Again since this is a macro, the name should be `ODP_OPS_DATA_ALLOC()`. > Bill Fischofer(Bill-Fischofer-Linaro) wrote: > Macros should be CAPITALIZED to indicate that they are macros rather than > callable APIs. So `ODP_OPS_DATA()` would be used here. >> Bill Fischofer(Bill-Fischofer-Linaro) wrote: >> Where does this number come from? Is it subject to configuration? >>> Bill Fischofer(Bill-Fischofer-Linaro) wrote: >>> 2017, not 2013 here. Bill Fischofer(Bill-Fischofer-Linaro) wrote: Are these intended to be new ODP APIs? If so they need to be specified in `odp/api/spec/packet.h`. Or are they supposed to be new Southbound APIs for use by drivers? Need some clarification here. The use of the `odp_` prefix implies that these are Northbound APIs. If these are Southbound APIs, then the prefix should be `odpdrv_`. > Bill Fischofer(Bill-Fischofer-Linaro) wrote: > Both pools and shmem's are ODP objects. The difference is a pool is a > structured collection of objects that can be allocated and freed from the > pool and that contain both data and metadata, while a shmem is a "slab" > of memory that has no structure beyond how the application chooses to use > it. Given this distinction, a pool seems more useful here. >> Bill Fischofer(Bill-Fischofer-Linaro) wrote: >> This definitely should not be part of the API spec. It's an >> implementation artifact. 
>>> nagarahalli wrote >>> This should be part of odp_pktio_ops_subsystem.h file. nagarahalli wrote Should we rename it to odp_packet_io_shm.h and odp_packet_io_shm.c? > nagarahalli wrote > '_p' is not required as the macro is returning 'ops_data'. Makes the > macro simple as well. > > Similarly for odp_ops_data_free can just take 'ops_data' as input. > > This will be inline with future plans to not expose 'pktio_entry_t' > to the drivers. >> He Yi(heyi-linaro) wrote: >> In future, since each pktio_ops module will not expose their private >> data type, this macro can be changed to >> >`#define odp_ops_data(_entry, _pdata) \ >> pdata = (typeof(_pdata))(_entry->s.ops_data)` >> >> So the odp_pktio_ops_subsystem.h header won't need to know all >> pktio_ops' private data structure declaration. Can be considered >> next time. >>> He Yi(heyi-linaro) wrote: >>> Like this! He Yi(heyi-linaro) wrote: Look good no more comments from me to this commit after Honnappa and Josep's, this is a step forward for the pktio ops data This commit also reveals how complex in ODP to allocate an arbitrary sized piece of memory, need to prepare a pool (and guess the largest usage), lookup this pool in every allocation/free by name, and do the allocation/free after then. > He Yi(heyi-linaro) wrote: > seems no need to add an extra macro here? > > ODP_SUBSYSTEM_FOREACH_TEMPLATE(...) is extern > we can just use to generate a static function use the same macro: > static ODP_SUBSYSTEM_FOREACH_TEMPLATE(...) >> nagarahalli wrote >> Temporarily, roundup the size to cache line size. This way all >> the memory allocations will be cache line aligned. >>> nagarahalli wrote >>> Should be odp_ops_data_alloc(_p, size). >>> >>> As per the pkt I/O changes document, _p (pktio_entry_t) is not >>> required to be exposed to drivers. Do you plan to do it as part >>> of this PR? 
nagarahalli wrote Same here, this can be part of odp_pktio_term_global > nagarahalli wrote > This functionality can be done in the odp_pktio_global_init > function. This will avoid changes to the modular framework as > well. > > When we have done additional enhancements to shared memory, > this code will be deleted. So, it can be part of > odp_pktio_global_init without
Re: [lng-odp] [PATCH 2.0 v3] Replace pktio ops_data array implementation with memory pool implementation
Bill Fischofer(Bill-Fischofer-Linaro) replied on github web page: platform/linux-generic/pktio/tap.c line 10 @@ -92,23 +92,30 @@ static int tap_pktio_open(odp_pktio_t id ODP_UNUSED, int fd, skfd, flags; uint32_t mtu; struct ifreq ifr; - pktio_ops_tap_data_t *tap = odp_ops_data(pktio_entry, tap); + pktio_ops_tap_data_t *tap = NULL; if (strncmp(devname, "tap:", 4) != 0) return -1; + if (pool == ODP_POOL_INVALID) Comment: This is a parameter passed through from `odp_pktio_open()`. The spec simply says that this must be of type `ODP_POOL_PACKET`, so results are undefined if the caller doesn't abide by the spec. A courtesy validation check doesn't seem unreasonable, but any validation checking should be done in the main API, not in each individual driver. As such drivers should assume that `pool` is valid at entry since these entry points cannot be invoked directly by applications. > Bill Fischofer(Bill-Fischofer-Linaro) wrote: > `ODP_OPS_DATA_FREE()` >> Bill Fischofer(Bill-Fischofer-Linaro) wrote: >> Again since this is a macro, the name should be `ODP_OPS_DATA_ALLOC()`. >>> Bill Fischofer(Bill-Fischofer-Linaro) wrote: >>> Macros should be CAPITALIZED to indicate that they are macros rather than >>> callable APIs. So `ODP_OPS_DATA()` would be used here. Bill Fischofer(Bill-Fischofer-Linaro) wrote: Where does this number come from? Is it subject to configuration? > Bill Fischofer(Bill-Fischofer-Linaro) wrote: > 2017, not 2013 here. >> Bill Fischofer(Bill-Fischofer-Linaro) wrote: >> Are these intended to be new ODP APIs? If so they need to be specified >> in `odp/api/spec/packet.h`. Or are they supposed to be new Southbound >> APIs for use by drivers? Need some clarification here. The use of the >> `odp_` prefix implies that these are Northbound APIs. If these are >> Southbound APIs, then the prefix should be `odpdrv_`. >>> Bill Fischofer(Bill-Fischofer-Linaro) wrote: >>> Both pools and shmem's are ODP objects. 
The difference is a pool is a >>> structured collection of objects that can be allocated and freed from >>> the pool and that contain both data and metadata, while a shmem is a >>> "slab" of memory that has no structure beyond how the application >>> chooses to use it. Given this distinction, a pool seems more useful >>> here. Bill Fischofer(Bill-Fischofer-Linaro) wrote: This definitely should not be part of the API spec. It's an implementation artifact. > nagarahalli wrote > This should be part of odp_pktio_ops_subsystem.h file. >> nagarahalli wrote >> Should we rename it to odp_packet_io_shm.h and odp_packet_io_shm.c? >>> nagarahalli wrote >>> '_p' is not required as the macro is returning 'ops_data'. Makes >>> the macro simple as well. >>> >>> Similarly for odp_ops_data_free can just take 'ops_data' as input. >>> >>> This will be inline with future plans to not expose 'pktio_entry_t' >>> to the drivers. He Yi(heyi-linaro) wrote: In future, since each pktio_ops module will not expose their private data type, this macro can be changed to >`#define odp_ops_data(_entry, _pdata) \ pdata = (typeof(_pdata))(_entry->s.ops_data)` So the odp_pktio_ops_subsystem.h header won't need to know all pktio_ops' private data structure declaration. Can be considered next time. > He Yi(heyi-linaro) wrote: > Like this! >> He Yi(heyi-linaro) wrote: >> Look good no more comments from me to this commit after Honnappa >> and Josep's, this is a step forward for the pktio ops data >> >> This commit also reveals how complex in ODP to allocate an >> arbitrary sized piece of memory, need to prepare a pool (and >> guess the largest usage), lookup this pool in every >> allocation/free by name, and do the allocation/free after then. >>> He Yi(heyi-linaro) wrote: >>> seems no need to add an extra macro here? >>> >>> ODP_SUBSYSTEM_FOREACH_TEMPLATE(...) is extern >>> we can just use to generate a static function use the same >>> macro: >>> static ODP_SUBSYSTEM_FOREACH_TEMPLATE(...) 
nagarahalli wrote Temporarily, round up the size to the cache line size. This way all the memory allocations will be cache line aligned. > nagarahalli wrote > Should be odp_ops_data_alloc(_p, size). > > As per the pkt I/O changes document, _p (pktio_entry_t) is > not required to be
Re: [lng-odp] [PATCH 2.0 v3] Replace pktio ops_data array implementation with memory pool implementation
Bill Fischofer(Bill-Fischofer-Linaro) replied on github web page: platform/linux-generic/include/odp_pktio_ops_subsystem.h @@ -88,10 +89,40 @@ typedef ODP_MODULE_CLASS(pktio_ops) { /* Maximum size of pktio specific ops data.*/ #define ODP_PKTIO_ODPS_DATA_MAX_SIZE 8 +/* Pktio ops data pool */ +#define ODP_PKTIO_OPS_DATA_POOL_SIZE 16 +#define ODP_PKTIO_OPS_DATA_POOL_NAME "ODP_PKTIO_OPS_DATA" + /* Extract pktio ops data from pktio entry structure */ #define odp_ops_data(_p, _mod) \ Comment: Macros should be CAPITALIZED to indicate that they are macros rather than callable APIs. So `ODP_OPS_DATA()` would be used here. > Bill Fischofer(Bill-Fischofer-Linaro) wrote: > Where does this number come from? Is it subject to configuration? >> Bill Fischofer(Bill-Fischofer-Linaro) wrote: >> 2017, not 2013 here. >>> Bill Fischofer(Bill-Fischofer-Linaro) wrote: >>> Are these intended to be new ODP APIs? If so they need to be specified in >>> `odp/api/spec/packet.h`. Or are they supposed to be new Southbound APIs for >>> use by drivers? Need some clarification here. The use of the `odp_` prefix >>> implies that these are Northbound APIs. If these are Southbound APIs, then >>> the prefix should be `odpdrv_`. Bill Fischofer(Bill-Fischofer-Linaro) wrote: Both pools and shmem's are ODP objects. The difference is a pool is a structured collection of objects that can be allocated and freed from the pool and that contain both data and metadata, while a shmem is a "slab" of memory that has no structure beyond how the application chooses to use it. Given this distinction, a pool seems more useful here. > Bill Fischofer(Bill-Fischofer-Linaro) wrote: > This definitely should not be part of the API spec. It's an > implementation artifact. >> nagarahalli wrote >> This should be part of odp_pktio_ops_subsystem.h file. >>> nagarahalli wrote >>> Should we rename it to odp_packet_io_shm.h and odp_packet_io_shm.c? nagarahalli wrote '_p' is not required as the macro is returning 'ops_data'. 
Makes the macro simple as well. Similarly for odp_ops_data_free can just take 'ops_data' as input. This will be inline with future plans to not expose 'pktio_entry_t' to the drivers. > He Yi(heyi-linaro) wrote: > In future, since each pktio_ops module will not expose their private > data type, this macro can be changed to > >`#define odp_ops_data(_entry, _pdata) \ > pdata = (typeof(_pdata))(_entry->s.ops_data)` > > So the odp_pktio_ops_subsystem.h header won't need to know all > pktio_ops' private data structure declaration. Can be considered next > time. >> He Yi(heyi-linaro) wrote: >> Like this! >>> He Yi(heyi-linaro) wrote: >>> Look good no more comments from me to this commit after Honnappa >>> and Josep's, this is a step forward for the pktio ops data >>> >>> This commit also reveals how complex in ODP to allocate an >>> arbitrary sized piece of memory, need to prepare a pool (and guess >>> the largest usage), lookup this pool in every allocation/free by >>> name, and do the allocation/free after then. He Yi(heyi-linaro) wrote: seems no need to add an extra macro here? ODP_SUBSYSTEM_FOREACH_TEMPLATE(...) is extern we can just use to generate a static function use the same macro: static ODP_SUBSYSTEM_FOREACH_TEMPLATE(...) > nagarahalli wrote > Temporarily, roundup the size to cache line size. This way all > the memory allocations will be cache line aligned. >> nagarahalli wrote >> Should be odp_ops_data_alloc(_p, size). >> >> As per the pkt I/O changes document, _p (pktio_entry_t) is not >> required to be exposed to drivers. Do you plan to do it as part >> of this PR? >>> nagarahalli wrote >>> Same here, this can be part of odp_pktio_term_global nagarahalli wrote This functionality can be done in odp_pktio_global_init function. This will avoid the changes to modular framework as well. When we have done additional enhancements to shared memory, this code will be deleted. So, can be part of odp_pktio_global_init without affecting the modular framework. 
> Josep Puigdemont(joseppc) wrote: > ah, yes, true, I didn't think about this detail... >> bogdanPricope wrote >> @joseppc Btw, 'odp_ops_data_alloc(_p, _mod)' vs. >> odp_ops_data_alloc(_p, _size) ?
Re: [lng-odp] [PATCH 2.0 v3] Replace pktio ops_data array implementation with memory pool implementation
Bill Fischofer(Bill-Fischofer-Linaro) replied on github web page: platform/linux-generic/include/odp_pktio_ops_subsystem.h @@ -92,6 +94,32 @@ typedef ODP_MODULE_CLASS(pktio_ops) { #define odp_ops_data(_p, _mod) \ ((pktio_ops_ ## _mod ## _data_t *)(uintptr_t)_p->s.ops_data) +#define odp_ops_data_alloc(_p, _size) \ +({ \ + odpdrv_shm_pool_t _pool;\ + \ + _p->s.ops_data = NULL; \ + _pool = odpdrv_shm_pool_lookup(ODP_PKTIO_OPS_DATA_POOL_NAME); \ + if (_pool != ODPDRV_SHM_POOL_INVALID) \ + _p->s.ops_data = odpdrv_shm_pool_alloc(_pool, \ + ROUNDUP_CACHE_LINE(_size)); \ + \ + _p->s.ops_data; \ +}) + +#define odp_ops_data_free(_p) \ +({ \ Comment: `ODP_OPS_DATA_FREE()` > Bill Fischofer(Bill-Fischofer-Linaro) wrote: > Again since this is a macro, the name should be `ODP_OPS_DATA_ALLOC()`. >> Bill Fischofer(Bill-Fischofer-Linaro) wrote: >> Macros should be CAPITALIZED to indicate that they are macros rather than >> callable APIs. So `ODP_OPS_DATA()` would be used here. >>> Bill Fischofer(Bill-Fischofer-Linaro) wrote: >>> Where does this number come from? Is it subject to configuration? Bill Fischofer(Bill-Fischofer-Linaro) wrote: 2017, not 2013 here. > Bill Fischofer(Bill-Fischofer-Linaro) wrote: > Are these intended to be new ODP APIs? If so they need to be specified in > `odp/api/spec/packet.h`. Or are they supposed to be new Southbound APIs > for use by drivers? Need some clarification here. The use of the `odp_` > prefix implies that these are Northbound APIs. If these are Southbound > APIs, then the prefix should be `odpdrv_`. >> Bill Fischofer(Bill-Fischofer-Linaro) wrote: >> Both pools and shmem's are ODP objects. The difference is a pool is a >> structured collection of objects that can be allocated and freed from >> the pool and that contain both data and metadata, while a shmem is a >> "slab" of memory that has no structure beyond how the application >> chooses to use it. Given this distinction, a pool seems more useful >> here. 
>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote: >>> This definitely should not be part of the API spec. It's an >>> implementation artifact. nagarahalli wrote This should be part of odp_pktio_ops_subsystem.h file. > nagarahalli wrote > Should we rename it to odp_packet_io_shm.h and odp_packet_io_shm.c? >> nagarahalli wrote >> '_p' is not required as the macro is returning 'ops_data'. Makes the >> macro simple as well. >> >> Similarly for odp_ops_data_free can just take 'ops_data' as input. >> >> This will be inline with future plans to not expose 'pktio_entry_t' >> to the drivers. >>> He Yi(heyi-linaro) wrote: >>> In future, since each pktio_ops module will not expose their >>> private data type, this macro can be changed to >>> >`#define odp_ops_data(_entry, _pdata) \ >>> pdata = (typeof(_pdata))(_entry->s.ops_data)` >>> >>> So the odp_pktio_ops_subsystem.h header won't need to know all >>> pktio_ops' private data structure declaration. Can be considered >>> next time. He Yi(heyi-linaro) wrote: Like this! > He Yi(heyi-linaro) wrote: > Look good no more comments from me to this commit after Honnappa > and Josep's, this is a step forward for the pktio ops data > > This commit also reveals how complex in ODP to allocate an > arbitrary sized piece of memory, need to prepare a pool (and > guess the largest usage), lookup this pool in every > allocation/free by name, and do the allocation/free after then. >> He Yi(heyi-linaro) wrote: >> seems no need to add an extra macro here? >> >> ODP_SUBSYSTEM_FOREACH_TEMPLATE(...) is extern >> we can just use to generate a static function use the same macro: >> static ODP_SUBSYSTEM_FOREACH_TEMPLATE(...) >>> nagarahalli wrote >>> Temporarily, roundup the size to cache line size. This way all >>> the memory allocations will be cache line aligned. nagarahalli wrote Should be
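The lookup-then-allocate pattern that the `odp_ops_data_alloc()` macro wraps, and that He Yi notes is cumbersome, looks roughly like the following when written as a plain function. The `shm_pool_t` registry here is a deliberately simplified stand-in (a single static pool with a bump allocator), not the odpdrv shm pool implementation:

```c
#include <stddef.h>
#include <string.h>

#define OPS_POOL_NAME    "ODP_PKTIO_OPS_DATA"
#define OPS_POOL_STORAGE 1024
#define CACHE_LINE       64

/* Stand-in for an odpdrv shm pool: a name plus a bump-pointer arena. */
typedef struct {
	const char *name;
	unsigned char storage[OPS_POOL_STORAGE];
	size_t used;
} shm_pool_t;

static shm_pool_t ops_pool = { OPS_POOL_NAME, {0}, 0 };

static shm_pool_t *shm_pool_lookup(const char *name)
{
	/* A real implementation walks a registry; only one static pool here. */
	return strcmp(name, ops_pool.name) == 0 ? &ops_pool : NULL;
}

/* The pattern behind odp_ops_data_alloc(): look the pool up by name on
 * every call, round the size up to a cache line, then allocate from it. */
static void *ops_data_alloc(size_t size)
{
	shm_pool_t *pool = shm_pool_lookup(OPS_POOL_NAME);
	size_t rounded = (size + CACHE_LINE - 1) & ~(size_t)(CACHE_LINE - 1);

	if (pool == NULL || pool->used + rounded > OPS_POOL_STORAGE)
		return NULL;    /* unknown pool name or pool exhausted */

	void *p = pool->storage + pool->used;
	pool->used += rounded;
	return p;
}
```

The per-call name lookup is exactly the overhead the thread complains about; caching the pool handle once at init would remove it.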
Re: [lng-odp] [PATCH 2.0 v3] Replace pktio ops_data array implementation with memory pool implementation
Bill Fischofer(Bill-Fischofer-Linaro) replied on github web page: platform/linux-generic/odp_packet_io_pool.c @@ -0,0 +1,54 @@ +/* Copyright (c) 2013, Linaro Limited + * All rights reserved. + * + * SPDX-License-Identifier: BSD-3-Clause + */ + +#include "config.h" + +#include +#include +#include +#include +#include "odp_packet_io_pool.h" +#include "odp_debug_internal.h" + +#define ODP_PKTIO_OPS_DATA_POOL_SIZE 16 Comment: Where does this number come from? Is it subject to configuration? > Bill Fischofer(Bill-Fischofer-Linaro) wrote: > 2017, not 2013 here. >> Bill Fischofer(Bill-Fischofer-Linaro) wrote: >> Are these intended to be new ODP APIs? If so they need to be specified in >> `odp/api/spec/packet.h`. Or are they supposed to be new Southbound APIs for >> use by drivers? Need some clarification here. The use of the `odp_` prefix >> implies that these are Northbound APIs. If these are Southbound APIs, then >> the prefix should be `odpdrv_`. >>> Bill Fischofer(Bill-Fischofer-Linaro) wrote: >>> Both pools and shmem's are ODP objects. The difference is a pool is a >>> structured collection of objects that can be allocated and freed from the >>> pool and that contain both data and metadata, while a shmem is a "slab" of >>> memory that has no structure beyond how the application chooses to use it. >>> Given this distinction, a pool seems more useful here. Bill Fischofer(Bill-Fischofer-Linaro) wrote: This definitely should not be part of the API spec. It's an implementation artifact. > nagarahalli wrote > This should be part of odp_pktio_ops_subsystem.h file. >> nagarahalli wrote >> Should we rename it to odp_packet_io_shm.h and odp_packet_io_shm.c? >>> nagarahalli wrote >>> '_p' is not required as the macro is returning 'ops_data'. Makes the >>> macro simple as well. >>> >>> Similarly for odp_ops_data_free can just take 'ops_data' as input. >>> >>> This will be inline with future plans to not expose 'pktio_entry_t' to >>> the drivers. 
He Yi(heyi-linaro) wrote: In future, since each pktio_ops module will not expose their private data type, this macro can be changed to >`#define odp_ops_data(_entry, _pdata) \ pdata = (typeof(_pdata))(_entry->s.ops_data)` So the odp_pktio_ops_subsystem.h header won't need to know all pktio_ops' private data structure declaration. Can be considered next time. > He Yi(heyi-linaro) wrote: > Like this! >> He Yi(heyi-linaro) wrote: >> Look good no more comments from me to this commit after Honnappa and >> Josep's, this is a step forward for the pktio ops data >> >> This commit also reveals how complex in ODP to allocate an arbitrary >> sized piece of memory, need to prepare a pool (and guess the largest >> usage), lookup this pool in every allocation/free by name, and do >> the allocation/free after then. >>> He Yi(heyi-linaro) wrote: >>> seems no need to add an extra macro here? >>> >>> ODP_SUBSYSTEM_FOREACH_TEMPLATE(...) is extern >>> we can just use to generate a static function use the same macro: >>> static ODP_SUBSYSTEM_FOREACH_TEMPLATE(...) nagarahalli wrote Temporarily, roundup the size to cache line size. This way all the memory allocations will be cache line aligned. > nagarahalli wrote > Should be odp_ops_data_alloc(_p, size). > > As per the pkt I/O changes document, _p (pktio_entry_t) is not > required to be exposed to drivers. Do you plan to do it as part > of this PR? >> nagarahalli wrote >> Same here, this can be part of odp_pktio_term_global >>> nagarahalli wrote >>> This functionality can be done in odp_pktio_global_init >>> function. This will avoid the changes to modular framework as >>> well. >>> >>> When we have done additional enhancements to shared memory, >>> this code will be deleted. So, can be part of >>> odp_pktio_global_init without affecting the modular framework. Josep Puigdemont(joseppc) wrote: ah, yes, true, I didn't think about this detail... > bogdanPricope wrote > @joseppc Btw, 'odp_ops_data_alloc(_p, _mod)' vs. 
> odp_ops_data_alloc(_p, _size) ? >> bogdanPricope wrote >> No, this pool is used to allocate packets (for recv side). >>> Josep Puigdemont(joseppc) wrote: >>> Ok, I may be splitting hairs now, but I thought we were >>> just checking whether the pool parameter passed to >>>
Re: [lng-odp] [PATCH 2.0 v3] Replace pktio ops_data array implementation with memory pool implementation
Bill Fischofer(Bill-Fischofer-Linaro) replied on github web page: platform/linux-generic/odp_packet_io_pool.c @@ -0,0 +1,54 @@ +/* Copyright (c) 2013, Linaro Limited + * All rights reserved. Comment: 2017, not 2013 here. > Bill Fischofer(Bill-Fischofer-Linaro) wrote: > Are these intended to be new ODP APIs? If so they need to be specified in > `odp/api/spec/packet.h`. Or are they supposed to be new Southbound APIs for > use by drivers? Need some clarification here. The use of the `odp_` prefix > implies that these are Northbound APIs. If these are Southbound APIs, then > the prefix should be `odpdrv_`. >> Bill Fischofer(Bill-Fischofer-Linaro) wrote: >> Both pools and shmem's are ODP objects. The difference is a pool is a >> structured collection of objects that can be allocated and freed from the >> pool and that contain both data and metadata, while a shmem is a "slab" of >> memory that has no structure beyond how the application chooses to use it. >> Given this distinction, a pool seems more useful here. >>> Bill Fischofer(Bill-Fischofer-Linaro) wrote: >>> This definitely should not be part of the API spec. It's an implementation >>> artifact. nagarahalli wrote This should be part of odp_pktio_ops_subsystem.h file. > nagarahalli wrote > Should we rename it to odp_packet_io_shm.h and odp_packet_io_shm.c? >> nagarahalli wrote >> '_p' is not required as the macro is returning 'ops_data'. Makes the >> macro simple as well. >> >> Similarly for odp_ops_data_free can just take 'ops_data' as input. >> >> This will be inline with future plans to not expose 'pktio_entry_t' to >> the drivers. >>> He Yi(heyi-linaro) wrote: >>> In future, since each pktio_ops module will not expose their private >>> data type, this macro can be changed to >>> >`#define odp_ops_data(_entry, _pdata) \ >>> pdata = (typeof(_pdata))(_entry->s.ops_data)` >>> >>> So the odp_pktio_ops_subsystem.h header won't need to know all >>> pktio_ops' private data structure declaration. 
Can be considered next >>> time. He Yi(heyi-linaro) wrote: Like this! > He Yi(heyi-linaro) wrote: > Look good no more comments from me to this commit after Honnappa and > Josep's, this is a step forward for the pktio ops data > > This commit also reveals how complex in ODP to allocate an arbitrary > sized piece of memory, need to prepare a pool (and guess the largest > usage), lookup this pool in every allocation/free by name, and do the > allocation/free after then. >> He Yi(heyi-linaro) wrote: >> seems no need to add an extra macro here? >> >> ODP_SUBSYSTEM_FOREACH_TEMPLATE(...) is extern >> we can just use to generate a static function use the same macro: >> static ODP_SUBSYSTEM_FOREACH_TEMPLATE(...) >>> nagarahalli wrote >>> Temporarily, roundup the size to cache line size. This way all the >>> memory allocations will be cache line aligned. nagarahalli wrote Should be odp_ops_data_alloc(_p, size). As per the pkt I/O changes document, _p (pktio_entry_t) is not required to be exposed to drivers. Do you plan to do it as part of this PR? > nagarahalli wrote > Same here, this can be part of odp_pktio_term_global >> nagarahalli wrote >> This functionality can be done in odp_pktio_global_init >> function. This will avoid the changes to modular framework as >> well. >> >> When we have done additional enhancements to shared memory, this >> code will be deleted. So, can be part of odp_pktio_global_init >> without affecting the modular framework. >>> Josep Puigdemont(joseppc) wrote: >>> ah, yes, true, I didn't think about this detail... bogdanPricope wrote @joseppc Btw, 'odp_ops_data_alloc(_p, _mod)' vs. odp_ops_data_alloc(_p, _size) ? > bogdanPricope wrote > No, this pool is used to allocate packets (for recv side). >> Josep Puigdemont(joseppc) wrote: >> Ok, I may be splitting hairs now, but I thought we were just >> checking whether the pool parameter passed to pktio_open was >> valid, and bail out if not. 
We are not actually using this >> pool to allocate this pktio's private data, right? >>> bogdanPricope wrote >>> Alloc/free vs. array has this disadvantage: you need to >>> allocate the memory at some point and free it if the >>> operation fails for any reason. It is better to delay the >>>
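He Yi's complaint above — prepare a pool up front (guessing the largest size), look it up by name on every allocation, then alloc/free — can be modeled with a toy registry. This is a mock, not the ODP implementation: the real code uses `odp_pool_create()`/`odp_pool_lookup()`; the names and fixed sizes below are illustrative.

```c
#include <stddef.h>
#include <string.h>

#define POOL_NAME "PKTIO_OPS_DATA"
#define BLK_SIZE  256 /* guessed largest private-data size */
#define BLK_NUM   16

/* Toy fixed-size pool, registered under one well-known name */
static struct {
	const char *name;
	unsigned char mem[BLK_NUM][BLK_SIZE];
	int used[BLK_NUM];
} g_pool = { POOL_NAME, {{0}}, {0} };

/* The four-step dance: lookup by name, size check, scan, carve a block */
static void *ops_data_alloc(const char *pool_name, size_t size)
{
	int i;

	if (strcmp(pool_name, g_pool.name) != 0 || size > BLK_SIZE)
		return NULL; /* lookup failed or request exceeds block size */

	for (i = 0; i < BLK_NUM; i++) {
		if (!g_pool.used[i]) {
			g_pool.used[i] = 1;
			return g_pool.mem[i];
		}
	}
	return NULL; /* pool exhausted */
}
```

The sketch makes the criticism concrete: the pool's block size must be guessed ahead of time, and any request larger than `BLK_SIZE` fails even when memory is plentiful.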
Re: [lng-odp] [PATCH 2.0 v3] Replace pktio ops_data array implementation with memory pool implementation
Bill Fischofer(Bill-Fischofer-Linaro) replied on github web page: platform/linux-generic/include/odp_packet_io_pool.h @@ -0,0 +1,27 @@ +/* Copyright (c) 2013, Linaro Limited + * All rights reserved. + * + * SPDX-License-Identifier: BSD-3-Clause + */ + +/** + * @file + * + * ODP packet IO pool + */ + +#ifndef ODP_PACKET_IO_POOL_H_ +#define ODP_PACKET_IO_POOL_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +int odp_packet_io_pool_create(void); +int odp_packet_io_pool_destroy(void); Comment: Are these intended to be new ODP APIs? If so they need to be specified in `odp/api/spec/packet.h`. Or are they supposed to be new Southbound APIs for use by drivers? Need some clarification here. The use of the `odp_` prefix implies that these are Northbound APIs. If these are Southbound APIs, then the prefix should be `odpdrv_`. > Bill Fischofer(Bill-Fischofer-Linaro) wrote: > Both pools and shmem's are ODP objects. The difference is a pool is a > structured collection of objects that can be allocated and freed from the > pool and that contain both data and metadata, while a shmem is a "slab" of > memory that has no structure beyond how the application chooses to use it. > Given this distinction, a pool seems more useful here. >> Bill Fischofer(Bill-Fischofer-Linaro) wrote: >> This definitely should not be part of the API spec. It's an implementation >> artifact. >>> nagarahalli wrote >>> This should be part of odp_pktio_ops_subsystem.h file. nagarahalli wrote Should we rename it to odp_packet_io_shm.h and odp_packet_io_shm.c? > nagarahalli wrote > '_p' is not required as the macro is returning 'ops_data'. Makes the > macro simple as well. > > Similarly for odp_ops_data_free can just take 'ops_data' as input. > > This will be inline with future plans to not expose 'pktio_entry_t' to > the drivers. 
>> He Yi(heyi-linaro) wrote: >> In future, since each pktio_ops module will not expose their private >> data type, this macro can be changed to >> >`#define odp_ops_data(_entry, _pdata) \ >> pdata = (typeof(_pdata))(_entry->s.ops_data)` >> >> So the odp_pktio_ops_subsystem.h header won't need to know all >> pktio_ops' private data structure declaration. Can be considered next >> time. >>> He Yi(heyi-linaro) wrote: >>> Like this! He Yi(heyi-linaro) wrote: Look good no more comments from me to this commit after Honnappa and Josep's, this is a step forward for the pktio ops data This commit also reveals how complex in ODP to allocate an arbitrary sized piece of memory, need to prepare a pool (and guess the largest usage), lookup this pool in every allocation/free by name, and do the allocation/free after then. > He Yi(heyi-linaro) wrote: > seems no need to add an extra macro here? > > ODP_SUBSYSTEM_FOREACH_TEMPLATE(...) is extern > we can just use to generate a static function use the same macro: > static ODP_SUBSYSTEM_FOREACH_TEMPLATE(...) >> nagarahalli wrote >> Temporarily, roundup the size to cache line size. This way all the >> memory allocations will be cache line aligned. >>> nagarahalli wrote >>> Should be odp_ops_data_alloc(_p, size). >>> >>> As per the pkt I/O changes document, _p (pktio_entry_t) is not >>> required to be exposed to drivers. Do you plan to do it as part of >>> this PR? nagarahalli wrote Same here, this can be part of odp_pktio_term_global > nagarahalli wrote > This functionality can be done in odp_pktio_global_init function. > This will avoid the changes to modular framework as well. > > When we have done additional enhancements to shared memory, this > code will be deleted. So, can be part of odp_pktio_global_init > without affecting the modular framework. >> Josep Puigdemont(joseppc) wrote: >> ah, yes, true, I didn't think about this detail... >>> bogdanPricope wrote >>> @joseppc Btw, 'odp_ops_data_alloc(_p, _mod)' vs. 
>>> odp_ops_data_alloc(_p, _size) ? bogdanPricope wrote No, this pool is used to allocate packets (for recv side). > Josep Puigdemont(joseppc) wrote: > Ok, I may be splitting hairs now, but I thought we were just > checking whether the pool parameter passed to pktio_open was > valid, and bail out if not. We are not actually using this > pool to allocate this pktio's private data, right? >> bogdanPricope wrote >> Alloc/free vs. array has this disadvantage: you need to >> allocate the memory at some
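nagarahalli's suggestion in the thread above — temporarily round every allocation size up to a cache-line multiple so all blocks come out cache-line aligned — is a one-line mask trick. The macro name and the 64-byte value below are assumptions for illustration; the real cache-line size is platform-specific.

```c
#include <stddef.h>

#define CACHE_LINE_SIZE 64 /* typical x86/ARM value; platform-dependent */

/* Round x up to the next multiple of the cache-line size. Works because
 * CACHE_LINE_SIZE is a power of two: add (size - 1), then clear the low
 * bits with the inverted mask. */
#define ROUNDUP_CACHE_LINE(x) \
	(((size_t)(x) + CACHE_LINE_SIZE - 1) & ~(size_t)(CACHE_LINE_SIZE - 1))
```

With this, a pool that hands out blocks at cache-line-aligned offsets keeps every ops_data area aligned regardless of the requested size.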
Re: [lng-odp] [PATCH 2.0 v3] Replace pktio ops_data array implementation with memory pool implementation
Bill Fischofer(Bill-Fischofer-Linaro) replied on github web page: platform/linux-generic/Makefile.am line 4 @@ -175,6 +175,7 @@ noinst_HEADERS = \ include/odp_name_table_internal.h \ include/odp_packet_internal.h \ include/odp_packet_io_internal.h \ + include/odp_packet_io_pool.h \ Comment: Both pools and shmem's are ODP objects. The difference is a pool is a structured collection of objects that can be allocated and freed from the pool and that contain both data and metadata, while a shmem is a "slab" of memory that has no structure beyond how the application chooses to use it. Given this distinction, a pool seems more useful here. > Bill Fischofer(Bill-Fischofer-Linaro) wrote: > This definitely should not be part of the API spec. It's an implementation > artifact. >> nagarahalli wrote >> This should be part of odp_pktio_ops_subsystem.h file. >>> nagarahalli wrote >>> Should we rename it to odp_packet_io_shm.h and odp_packet_io_shm.c? nagarahalli wrote '_p' is not required as the macro is returning 'ops_data'. Makes the macro simple as well. Similarly for odp_ops_data_free can just take 'ops_data' as input. This will be inline with future plans to not expose 'pktio_entry_t' to the drivers. > He Yi(heyi-linaro) wrote: > In future, since each pktio_ops module will not expose their private data > type, this macro can be changed to > >`#define odp_ops_data(_entry, _pdata) \ > pdata = (typeof(_pdata))(_entry->s.ops_data)` > > So the odp_pktio_ops_subsystem.h header won't need to know all pktio_ops' > private data structure declaration. Can be considered next time. >> He Yi(heyi-linaro) wrote: >> Like this! 
>>> He Yi(heyi-linaro) wrote: >>> Look good no more comments from me to this commit after Honnappa and >>> Josep's, this is a step forward for the pktio ops data >>> >>> This commit also reveals how complex in ODP to allocate an arbitrary >>> sized piece of memory, need to prepare a pool (and guess the largest >>> usage), lookup this pool in every allocation/free by name, and do the >>> allocation/free after then. He Yi(heyi-linaro) wrote: seems no need to add an extra macro here? ODP_SUBSYSTEM_FOREACH_TEMPLATE(...) is extern we can just use to generate a static function use the same macro: static ODP_SUBSYSTEM_FOREACH_TEMPLATE(...) > nagarahalli wrote > Temporarily, roundup the size to cache line size. This way all the > memory allocations will be cache line aligned. >> nagarahalli wrote >> Should be odp_ops_data_alloc(_p, size). >> >> As per the pkt I/O changes document, _p (pktio_entry_t) is not >> required to be exposed to drivers. Do you plan to do it as part of >> this PR? >>> nagarahalli wrote >>> Same here, this can be part of odp_pktio_term_global nagarahalli wrote This functionality can be done in odp_pktio_global_init function. This will avoid the changes to modular framework as well. When we have done additional enhancements to shared memory, this code will be deleted. So, can be part of odp_pktio_global_init without affecting the modular framework. > Josep Puigdemont(joseppc) wrote: > ah, yes, true, I didn't think about this detail... >> bogdanPricope wrote >> @joseppc Btw, 'odp_ops_data_alloc(_p, _mod)' vs. >> odp_ops_data_alloc(_p, _size) ? >>> bogdanPricope wrote >>> No, this pool is used to allocate packets (for recv side). Josep Puigdemont(joseppc) wrote: Ok, I may be splitting hairs now, but I thought we were just checking whether the pool parameter passed to pktio_open was valid, and bail out if not. We are not actually using this pool to allocate this pktio's private data, right? > bogdanPricope wrote > Alloc/free vs. 
array has this disadvantage: you need to
> allocate the memory at some point and free it if the
> operation fails for any reason. It is better to delay the
> allocation until after some common checks.
>
> strncmp: open calls are not on the fast path ... no reason to
> optimize the performance ... but repeated memory alloc/free
> may affect some pool implementations
>> bogdanPricope wrote
>> It will require a cast when it is called.
>> A pktio_type may implement another way to allocate memory
>> starting from the
Re: [lng-odp] [PATCH 2.0 v3] Replace pktio ops_data array implementation with memory pool implementation
Bill Fischofer(Bill-Fischofer-Linaro) replied on github web page: include/odp/api/spec/packet_io.h @@ -1188,6 +1188,11 @@ uint64_t odp_pktin_ts_res(odp_pktio_t pktio); */ odp_time_t odp_pktin_ts_from_ns(odp_pktio_t pktio, uint64_t ns); +/** + * Packet IO operations data pool name + */ +#define ODP_PKTIO_OPS_DATA_POOL_NAME "ODP_PKTIO_OPS_DATA" Comment: This definitely should not be part of the API spec. It's an implementation artifact. > nagarahalli wrote > This should be part of odp_pktio_ops_subsystem.h file. >> nagarahalli wrote >> Should we rename it to odp_packet_io_shm.h and odp_packet_io_shm.c? >>> nagarahalli wrote >>> '_p' is not required as the macro is returning 'ops_data'. Makes the macro >>> simple as well. >>> >>> Similarly for odp_ops_data_free can just take 'ops_data' as input. >>> >>> This will be inline with future plans to not expose 'pktio_entry_t' to the >>> drivers. He Yi(heyi-linaro) wrote: In future, since each pktio_ops module will not expose their private data type, this macro can be changed to >`#define odp_ops_data(_entry, _pdata) \ pdata = (typeof(_pdata))(_entry->s.ops_data)` So the odp_pktio_ops_subsystem.h header won't need to know all pktio_ops' private data structure declaration. Can be considered next time. > He Yi(heyi-linaro) wrote: > Like this! >> He Yi(heyi-linaro) wrote: >> Look good no more comments from me to this commit after Honnappa and >> Josep's, this is a step forward for the pktio ops data >> >> This commit also reveals how complex in ODP to allocate an arbitrary >> sized piece of memory, need to prepare a pool (and guess the largest >> usage), lookup this pool in every allocation/free by name, and do the >> allocation/free after then. >>> He Yi(heyi-linaro) wrote: >>> seems no need to add an extra macro here? >>> >>> ODP_SUBSYSTEM_FOREACH_TEMPLATE(...) is extern >>> we can just use to generate a static function use the same macro: >>> static ODP_SUBSYSTEM_FOREACH_TEMPLATE(...) 
nagarahalli wrote Temporarily, roundup the size to cache line size. This way all the memory allocations will be cache line aligned. > nagarahalli wrote > Should be odp_ops_data_alloc(_p, size). > > As per the pkt I/O changes document, _p (pktio_entry_t) is not > required to be exposed to drivers. Do you plan to do it as part of > this PR? >> nagarahalli wrote >> Same here, this can be part of odp_pktio_term_global >>> nagarahalli wrote >>> This functionality can be done in odp_pktio_global_init function. >>> This will avoid the changes to modular framework as well. >>> >>> When we have done additional enhancements to shared memory, this >>> code will be deleted. So, can be part of odp_pktio_global_init >>> without affecting the modular framework. Josep Puigdemont(joseppc) wrote: ah, yes, true, I didn't think about this detail... > bogdanPricope wrote > @joseppc Btw, 'odp_ops_data_alloc(_p, _mod)' vs. > odp_ops_data_alloc(_p, _size) ? >> bogdanPricope wrote >> No, this pool is used to allocate packets (for recv side). >>> Josep Puigdemont(joseppc) wrote: >>> Ok, I may be splitting hairs now, but I thought we were just >>> checking whether the pool parameter passed to pktio_open was >>> valid, and bail out if not. We are not actually using this pool >>> to allocate this pktio's private data, right? bogdanPricope wrote Alloc/free vs. array has this disadvantage: you need to allocate the memory at some point and free it if the operation fails for any reason. It is better to delay the allocation until after some common checks. stncmp: open calls are not on fast path ... no reason to optimize the performance ... but repeated memory alloc/free may affect some pool implementations > bogdanPricope wrote > It will require a cast when is called. > A pktio_type may implement another way to allocate memory > starting form the name of the pool / is not mandatory to use > those macros but are helpful for existing pktio types. 
>> bogdanPricope wrote >> _p comes form (pktio_entry_t *) >>> Josep Puigdemont(joseppc) wrote: >>> It probably belongs to its own patch, but now that you are >>> at it, it could even be moved even further up, as it is >>> probably faster than checking for "tap:" in the
Re: [lng-odp] [PATCH API-NEXT v1] api: pktio: max frame length
Petri Savolainen(psavol) replied on github web page:

test/performance/odp_l2fwd.c line 5
@@ -108,6 +108,7 @@ typedef struct {
 	int error_check; /**< Check packet errors */
 	int sched_mode;  /**< Scheduler mode */
 	int num_groups;  /**< Number of scheduling groups */
+	int verbose;     /**< Verbose output */
 } appl_args_t;

Comment: I used this option to check the frame sizes. E.g. dpdk pktio returned IP MTU size (1500), not max ethernet frame size (1514). This is an easy way to check it on various pktios, as l2fwd takes the interface as a command line parameter.

> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
> It's not obvious what these changes have to do with the stated purpose of
> this PR. This looks like a separate PR altogether.
>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>> This seems a worthwhile cleanup but seems orthogonal to the rest of this PR.
>>> muvarov wrote
>>> @psavol then I'm ok.

Petri Savolainen(psavol) wrote:

int odp_pktin_event_queue(odp_pktio_t pktio, odp_queue_t queues[], int num);
int odp_pktin_queue(odp_pktio_t pktio, odp_pktin_queue_t queues[], int num);
uint64_t odp_pktin_wait_time(uint64_t nsec);
uint64_t odp_pktin_ts_res(odp_pktio_t pktio);
odp_time_t odp_pktin_ts_from_ns(odp_pktio_t pktio, uint64_t ns);

These are not the first pktin/pktout functions that take pktio as a param. Previously, I considered odp_pktio_rx_xxx() and odp_pktio_tx_xxx() prefixes for these, but odp_pktin_xxx() and odp_pktout_xxx() were more compact and as intuitive. Also, these are not capabilities but the currently configured frame lengths. Later on, we'll add configuration options for changing these and capabilities to indicate min/max values. This is now a simple replacement of the MTU function.

> muvarov wrote
> bad naming. odp_pktin_xxx functions take argument odp_pktin_queue_t not
> odp_pktio_t.
If you want to keep odp_pktio_t as an argument for the function,
> then odp_pktio_capability_t is a more logical place for it.

https://github.com/Linaro/odp/pull/298#discussion_r151921265
updated_at 2017-11-20 08:00:57
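Petri's observation — dpdk pktio reported the IP MTU (1500) where the API means the maximum Ethernet frame length (1514) — comes down to the 14-byte Ethernet header (two 6-byte MACs plus the 2-byte EtherType, FCS excluded). A minimal sketch of the relationship; the function name is illustrative, not an ODP API:

```c
#include <stdint.h>

#define ETH_HDR_LEN 14 /* dst MAC (6) + src MAC (6) + EtherType (2), no FCS */

/* Convert an IP MTU to the corresponding maximum Ethernet frame length */
static uint32_t eth_frame_len_from_mtu(uint32_t mtu)
{
	return mtu + ETH_HDR_LEN;
}
```

So a driver that answers 1500 to a max-frame-length query is off by exactly one Ethernet header, which is what the l2fwd `verbose` option made easy to spot.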
[lng-odp] [PATCH API-NEXT v8 3/4] api: classification: add random early detection and back pressure
From: Balasubramanian Manoharan

Adds random early detection and back pressure feature to CoS

Signed-off-by: Balasubramanian Manoharan
---
/** Email created from pull request 277 (bala-manoharan:RED)
 ** https://github.com/Linaro/odp/pull/277
 ** Patch: https://github.com/Linaro/odp/pull/277.patch
 ** Base sha: d4b364849c4abb4c71e0c5260e1a793ebb8dc97d
 ** Merge commit sha: 806896e052ccb172565d49630ce6033f4e6ec9cd
 **/
 include/odp/api/spec/classification.h | 74 +++
 1 file changed, 74 insertions(+)

diff --git a/include/odp/api/spec/classification.h b/include/odp/api/spec/classification.h
index 0c4a95c5f..16bf3e6ea 100644
--- a/include/odp/api/spec/classification.h
+++ b/include/odp/api/spec/classification.h
@@ -20,6 +20,7 @@ extern "C" {
 #include
 #include
+#include
 /** @defgroup odp_classification ODP CLASSIFICATION
  * Classification operations.
  * @{
@@ -107,6 +108,61 @@ typedef union odp_cls_pmr_terms_t {
 	uint64_t all_bits;
 } odp_cls_pmr_terms_t;

+/** Random Early Detection (RED)
+ * Random Early Detection is enabled to initiate a drop probability for the
+ * incoming packet when the packets in the queue/pool cross the specified
+ * threshold values. RED is enabled when 'red_enable' boolean is true and
+ * the resource usage is equal to or greater than the minimum threshold value.
+ * Resource usage could be defined either as the percentage of pool being full
+ * or the number of packets/bytes occupied in the queue depending on the
+ * platform capabilities.
+ * When RED is enabled for a particular flow then further incoming packets are
+ * assigned a drop probability based on the size of the pool/queue.
+ *
+ * Drop probability is configured as follows
+ * * Drop probability is 100%, when resource usage >= threshold.max
+ * * Drop probability is 0%, when resource usage <= threshold.min
+ * * Drop probability is between 0...100 % when resource usage is between
+ *   threshold.min and threshold.max
+ *
+ * RED is logically configured in the CoS and could be implemented in either
+ * pool or queue linked to the CoS depending on platform capabilities.
+ * Application should make sure not to link multiple CoS with different RED or
+ * BP configuration to the same queue or pool.
+ */
+typedef struct odp_red_param_t {
+	/** A boolean to enable RED
+	 * When true, RED is enabled and configured with RED parameters.
+	 * Otherwise, RED parameters are ignored. */
+	odp_bool_t enable;
+
+	/** Threshold parameters for RED
+	 * RED is enabled when the resource usage is equal to or greater than
+	 * the minimum threshold value and is disabled otherwise
+	 */
+	odp_threshold_t threshold;
+} odp_red_param_t;
+
+/** Back pressure (BP)
+ * When back pressure is enabled for a particular flow, the HW can send
+ * back pressure information to the remote peer indicating network congestion.
+ */
+typedef struct odp_bp_param_t {
+	/** A boolean to enable Back pressure
+	 * When true, back pressure is enabled and configured with the BP
+	 * parameters. Otherwise BP parameters are ignored.
+	 */
+	odp_bool_t enable;
+
+	/** Threshold value for back pressure.
+	 * BP is enabled when the resource usage is equal to or greater than the
+	 * max backpressure threshold. Min threshold parameters are ignored for
+	 * BP configuration.
+	 * @see odp_red_param_t for 'resource usage' documentation.
+	 */
+	odp_threshold_t threshold;
+} odp_bp_param_t;
+
 /**
  * Classification capabilities
  * This capability structure defines system level classification capability
@@ -135,6 +191,18 @@ typedef struct odp_cls_capability_t {
 	/** A Boolean to denote support of PMR range */
 	odp_bool_t pmr_range_supported;
+
+	/** Support for Random Early Detection */
+	odp_support_t random_early_detection;
+
+	/** Supported threshold type for RED */
+	odp_threshold_types_t threshold_red;
+
+	/** Support for Back Pressure to the remote peer */
+	odp_support_t back_pressure;
+
+	/** Supported threshold type for BP */
+	odp_threshold_types_t threshold_bp;
 } odp_cls_capability_t;

 /**
@@ -206,6 +274,12 @@ typedef struct odp_cls_cos_param {
 	/** Drop policy associated with CoS */
 	odp_cls_drop_t drop_policy;
+
+	/** Random Early Detection configuration */
+	odp_red_param_t red;
+
+	/** Back Pressure configuration */
+	odp_bp_param_t bp;
 } odp_cls_cos_param_t;

 /**
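The drop-probability mapping documented for odp_red_param_t — 0% at or below threshold.min, 100% at or above threshold.max, and in between for intermediate usage — implies a ramp; a linear one is sketched below. This is an illustration, not the spec: the API leaves the exact curve between the thresholds to the platform, and plain packet counts stand in for the real odp_threshold_t.

```c
#include <stdint.h>

/* Map resource usage to a drop probability in percent, assuming a linear
 * ramp between the min and max thresholds (the API only fixes the
 * endpoints; the in-between curve is platform-defined). */
static unsigned red_drop_percent(uint64_t usage, uint64_t min, uint64_t max)
{
	if (usage <= min)
		return 0;   /* below min threshold: RED inactive */
	if (usage >= max)
		return 100; /* at/above max threshold: drop everything */
	return (unsigned)((usage - min) * 100 / (max - min));
}
```

For example, with thresholds of 100 and 200 packets, a queue holding 150 packets sits halfway up the ramp.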
[lng-odp] [PATCH API-NEXT v8 4/4] linux-generic: classification: implement random early detection and back pressure
From: Balasubramanian Manoharan

linux-generic does not support random early detection and back pressure

Signed-off-by: Balasubramanian Manoharan
---
/** Email created from pull request 277 (bala-manoharan:RED)
 ** https://github.com/Linaro/odp/pull/277
 ** Patch: https://github.com/Linaro/odp/pull/277.patch
 ** Base sha: d4b364849c4abb4c71e0c5260e1a793ebb8dc97d
 ** Merge commit sha: 806896e052ccb172565d49630ce6033f4e6ec9cd
 **/
 platform/linux-generic/odp_classification.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/platform/linux-generic/odp_classification.c b/platform/linux-generic/odp_classification.c
index a5cba56a4..025f12593 100644
--- a/platform/linux-generic/odp_classification.c
+++ b/platform/linux-generic/odp_classification.c
@@ -190,6 +190,10 @@ int odp_cls_capability(odp_cls_capability_t *capability)
 	capability->supported_terms.bit.tcp_sport = 1;
 	capability->supported_terms.bit.sip_addr = 1;
 	capability->supported_terms.bit.dip_addr = 1;
+	capability->random_early_detection = ODP_SUPPORT_NO;
+	capability->back_pressure = ODP_SUPPORT_NO;
+	capability->threshold_red.all_bits = 0;
+	capability->threshold_bp.all_bits = 0;
 	return 0;
 }
[lng-odp] [PATCH API-NEXT v8 0/4] api: random early detection and back pressure
adds RED and BP configuration to both pool and queue parameters github /** Email created from pull request 277 (bala-manoharan:RED) ** https://github.com/Linaro/odp/pull/277 ** Patch: https://github.com/Linaro/odp/pull/277.patch ** Base sha: d4b364849c4abb4c71e0c5260e1a793ebb8dc97d ** Merge commit sha: 806896e052ccb172565d49630ce6033f4e6ec9cd **/ /github checkpatch.pl total: 0 errors, 0 warnings, 0 checks, 22 lines checked to_send-p-000.patch has no obvious style problems and is ready for submission. total: 0 errors, 0 warnings, 0 checks, 153 lines checked to_send-p-001.patch has no obvious style problems and is ready for submission. total: 0 errors, 0 warnings, 0 checks, 98 lines checked to_send-p-002.patch has no obvious style problems and is ready for submission. total: 0 errors, 0 warnings, 0 checks, 10 lines checked to_send-p-003.patch has no obvious style problems and is ready for submission. /checkpatch.pl
[lng-odp] [PATCH 2.0 v3 2/2] linux-gen: pktio: use pktio operations data pool
From: Bogdan Pricope

Pktio instances need a memory area in which to store configuration or state information. This patch replaces the use of a fixed-size memory area with a variable-size memory area allocated from a predefined pool. This reduces the memory needed to store the pktio array.

Signed-off-by: Bogdan Pricope
---
/** Email created from pull request 297 (bogdanPricope:2_0_ops_alloc_pr)
 ** https://github.com/Linaro/odp/pull/297
 ** Patch: https://github.com/Linaro/odp/pull/297.patch
 ** Base sha: 6cd43041e55bd73a02ca202f835e590b3ad5c354
 ** Merge commit sha: 7b05757dc100acc261053cf8e9cb253091ceaf91
 **/

 platform/linux-dpdk/pktio/dpdk.c                   | 57 ++-
 .../linux-generic/include/odp_packet_io_internal.h |  4 +-
 .../include/odp_pktio_ops_subsystem.h              |  8 +--
 platform/linux-generic/pktio/dpdk.c                | 75 ---
 platform/linux-generic/pktio/ipc.c                 | 41 +++
 platform/linux-generic/pktio/loopback.c            | 40 +++
 platform/linux-generic/pktio/netmap.c              | 83 ++
 platform/linux-generic/pktio/pcap.c                | 26 ---
 platform/linux-generic/pktio/socket.c              | 58 +++
 platform/linux-generic/pktio/socket_mmap.c         | 58 +++
 platform/linux-generic/pktio/tap.c                 | 46 +++-
 11 files changed, 256 insertions(+), 240 deletions(-)

diff --git a/platform/linux-dpdk/pktio/dpdk.c b/platform/linux-dpdk/pktio/dpdk.c
index a6e2573e5..832ac7acf 100644
--- a/platform/linux-dpdk/pktio/dpdk.c
+++ b/platform/linux-dpdk/pktio/dpdk.c
@@ -99,8 +99,7 @@ static void _dpdk_print_port_mac(uint8_t portid)
 static int input_queues_config_pkt_dpdk(pktio_entry_t *pktio_entry,
					const odp_pktin_queue_param_t *p)
 {
-	pktio_ops_dpdk_data_t *pkt_dpdk =
-		odp_ops_data(pktio_entry, dpdk);
+	pktio_ops_dpdk_data_t *pkt_dpdk = pktio_entry->s.ops_data;
 	odp_pktin_mode_t mode = pktio_entry->s.param.in_mode;

 	/**
@@ -129,8 +128,7 @@ static int input_queues_config_pkt_dpdk(pktio_entry_t *pktio_entry,
 static int output_queues_config_pkt_dpdk(pktio_entry_t *pktio_entry,
					 const odp_pktout_queue_param_t *p)
 {
-	pktio_ops_dpdk_data_t *pkt_dpdk =
-		odp_ops_data(pktio_entry, dpdk);
+	pktio_ops_dpdk_data_t *pkt_dpdk = pktio_entry->s.ops_data;

 	if (p->op_mode == ODP_PKTIO_OP_MT_UNSAFE)
 		pkt_dpdk->lockless_tx = 1;
@@ -147,8 +145,7 @@ static int setup_pkt_dpdk(odp_pktio_t pktio ODP_UNUSED,
 {
 	uint8_t portid = 0;
 	struct rte_eth_dev_info dev_info;
-	pktio_ops_dpdk_data_t *pkt_dpdk =
-		odp_ops_data(pktio_entry, dpdk);
+	pktio_ops_dpdk_data_t *pkt_dpdk = NULL;
 	int i;

 	if (!_dpdk_netdev_is_valid(netdev)) {
@@ -157,10 +154,14 @@ static int setup_pkt_dpdk(odp_pktio_t pktio ODP_UNUSED,
 		return -1;
 	}

-	if (odp_unlikely(pkt_dpdk == NULL)) {
+	pktio_entry->s.ops_data = ODP_OPS_DATA_ALLOC(sizeof(*pkt_dpdk));
+	if (odp_unlikely(pktio_entry->s.ops_data == NULL)) {
 		ODP_ERR("Failed to allocate pktio_ops_dpdk_data_t struct");
 		return -1;
 	}
+	pkt_dpdk = pktio_entry->s.ops_data;
+
+	memset(pkt_dpdk, 0, sizeof(*pkt_dpdk));

 	portid = atoi(netdev);
 	pkt_dpdk->portid = portid;
@@ -168,6 +169,7 @@
 	rte_eth_dev_info_get(portid, &dev_info);
 	if (dev_info.driver_name == NULL) {
 		ODP_DBG("No driver found for interface: %s\n", netdev);
+		ODP_OPS_DATA_FREE(pktio_entry->s.ops_data);
 		return -1;
 	}
 	if (!strcmp(dev_info.driver_name, "rte_ixgbe_pmd"))
@@ -191,20 +193,18 @@ static int setup_pkt_dpdk(odp_pktio_t pktio ODP_UNUSED,

 static int close_pkt_dpdk(pktio_entry_t *pktio_entry)
 {
-	const pktio_ops_dpdk_data_t *pkt_dpdk =
-		odp_ops_data(pktio_entry, dpdk);
+	const pktio_ops_dpdk_data_t *pkt_dpdk = pktio_entry->s.ops_data;

 	if (pktio_entry->s.state == PKTIO_STATE_STOPPED)
 		rte_eth_dev_close(pkt_dpdk->portid);

-	return 0;
+	return ODP_OPS_DATA_FREE(pktio_entry->s.ops_data);
 }

 static int start_pkt_dpdk(pktio_entry_t *pktio_entry)
 {
 	int ret, i;
-	pktio_ops_dpdk_data_t *pkt_dpdk =
-		odp_ops_data(pktio_entry, dpdk);
+	pktio_ops_dpdk_data_t *pkt_dpdk = pktio_entry->s.ops_data;
 	uint8_t portid = pkt_dpdk->portid;
 	int sid = rte_eth_dev_socket_id(pkt_dpdk->portid);
 	int socket_id = sid < 0 ? 0 : sid;
@@ -312,8 +312,7 @@ static int start_pkt_dpdk(pktio_entry_t *pktio_entry)

 static int stop_pkt_dpdk(pktio_entry_t *pktio_entry)
 {
-
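The conversion pattern used throughout this series — allocate the driver-private area from a pool in setup, zero it, and release it again in close (and on any setup failure) — can be sketched in isolation. Note this is an illustrative sketch only: `ops_data_alloc()`/`ops_data_free()` are hypothetical stand-ins for `ODP_OPS_DATA_ALLOC()`/`ODP_OPS_DATA_FREE()`, and the structs are trimmed-down analogues of `pktio_entry_t` and `pktio_ops_dpdk_data_t`, not the real ODP definitions.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-ins for ODP_OPS_DATA_ALLOC()/ODP_OPS_DATA_FREE();
 * the real macros carve the area out of a shared pktio pool. */
static void *ops_data_alloc(size_t size) { return malloc(size); }
static int ops_data_free(void *p) { free(p); return 0; }

/* Driver-private state, sized per pktio type (cf. pktio_ops_dpdk_data_t). */
typedef struct {
	int portid;
	int lockless_tx;
} ops_data_t;

typedef struct {
	void *ops_data; /* was a fixed-size in-place member before this patch */
} pktio_entry_t;

/* setup: allocate from the pool, zero, then initialize */
static int pktio_setup(pktio_entry_t *e, int portid)
{
	ops_data_t *d;

	e->ops_data = ops_data_alloc(sizeof(*d));
	if (e->ops_data == NULL)
		return -1; /* allocation failed, nothing to roll back */

	d = e->ops_data;
	memset(d, 0, sizeof(*d));
	d->portid = portid;
	return 0;
}

/* close: hand the per-instance area back to the pool */
static int pktio_close(pktio_entry_t *e)
{
	return ops_data_free(e->ops_data);
}
```

The design point of the patch is visible even in this toy: each pktio instance pays only for the ops-data size its own driver needs, instead of every entry embedding the largest possible struct.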
[lng-odp] [PATCH 2.0 v3 0/2] Replace pktio ops_data array implementation with memory pool implementation
Replace the pktio ops_data array implementation with a memory pool implementation.

github
/** Email created from pull request 297 (bogdanPricope:2_0_ops_alloc_pr)
 ** https://github.com/Linaro/odp/pull/297
 ** Patch: https://github.com/Linaro/odp/pull/297.patch
 ** Base sha: 6cd43041e55bd73a02ca202f835e590b3ad5c354
 ** Merge commit sha: 7b05757dc100acc261053cf8e9cb253091ceaf91
 **/
/github

checkpatch.pl
total: 0 errors, 0 warnings, 0 checks, 197 lines checked
to_send-p-000.patch has no obvious style problems and is ready for submission.
total: 0 errors, 0 warnings, 0 checks, 1409 lines checked
to_send-p-001.patch has no obvious style problems and is ready for submission.
/checkpatch.pl
[lng-odp] [PATCH 2.0 v3 1/2] linux-gen: pktio: add pktio operations data pool
From: Bogdan Pricope

Pktio instances need a memory area in which to store configuration or state information. The size and structure of this area depend on the pktio type. This pool is meant to be used for allocating such memory areas.

Signed-off-by: Bogdan Pricope
---
/** Email created from pull request 297 (bogdanPricope:2_0_ops_alloc_pr)
 ** https://github.com/Linaro/odp/pull/297
 ** Patch: https://github.com/Linaro/odp/pull/297.patch
 ** Base sha: 6cd43041e55bd73a02ca202f835e590b3ad5c354
 ** Merge commit sha: 7b05757dc100acc261053cf8e9cb253091ceaf91
 **/

 platform/linux-dpdk/Makefile.am                    |  3 ++
 platform/linux-generic/Makefile.am                 |  3 ++
 .../linux-generic/include/odp_packet_io_pool.h     | 27 ++
 .../include/odp_packet_io_pool_access.h            | 58 ++
 platform/linux-generic/odp_packet_io.c             |  9 
 platform/linux-generic/odp_packet_io_pool.c        | 55 
 6 files changed, 155 insertions(+)
 create mode 100644 platform/linux-generic/include/odp_packet_io_pool.h
 create mode 100644 platform/linux-generic/include/odp_packet_io_pool_access.h
 create mode 100644 platform/linux-generic/odp_packet_io_pool.c

diff --git a/platform/linux-dpdk/Makefile.am b/platform/linux-dpdk/Makefile.am
index f27d2b50c..ee6c36e21 100644
--- a/platform/linux-dpdk/Makefile.am
+++ b/platform/linux-dpdk/Makefile.am
@@ -206,6 +206,8 @@ noinst_HEADERS = \
 	${top_srcdir}/platform/linux-generic/include/odp_packet_io_internal.h \
 	${srcdir}/include/odp_errno_define.h \
 	${top_srcdir}/platform/linux-generic/include/odp_packet_io_ring_internal.h \
+	${top_srcdir}/platform/linux-generic/include/odp_packet_io_pool.h \
+	${top_srcdir}/platform/linux-generic/include/odp_packet_io_pool_access.h \
 	${top_srcdir}/platform/linux-generic/include/odp_pkt_queue_internal.h \
 	${srcdir}/include/odp_pool_internal.h \
 	${top_srcdir}/platform/linux-generic/include/odp_pool_subsystem.h \
@@ -260,6 +262,7 @@ __LIB__libodp_dpdk_la_SOURCES = \
 	../linux-generic/pktio/subsystem.c \
 	odp_packet_flags.c \
 	../linux-generic/odp_packet_io.c \
+	../linux-generic/odp_packet_io_pool.c \
 	../linux-generic/pktio/loopback.c \
 	../linux-generic/odp_pkt_queue.c \
 	pool/dpdk.c \

diff --git a/platform/linux-generic/Makefile.am b/platform/linux-generic/Makefile.am
index 56ef03c6f..a4aaa7693 100644
--- a/platform/linux-generic/Makefile.am
+++ b/platform/linux-generic/Makefile.am
@@ -175,6 +175,8 @@ noinst_HEADERS = \
 	include/odp_name_table_internal.h \
 	include/odp_packet_internal.h \
 	include/odp_packet_io_internal.h \
+	include/odp_packet_io_pool.h \
+	include/odp_packet_io_pool_access.h \
 	include/odp_packet_io_ring_internal.h \
 	pktio/ethtool.h \
 	pktio/common.h \
@@ -244,6 +246,7 @@ __LIB__libodp_linux_la_SOURCES = \
 	odp_packet.c \
 	odp_packet_flags.c \
 	odp_packet_io.c \
+	odp_packet_io_pool.c \
 	pktio/ethtool.c \
 	pktio/subsystem.c \
 	pktio/ipc.c \

diff --git a/platform/linux-generic/include/odp_packet_io_pool.h b/platform/linux-generic/include/odp_packet_io_pool.h
new file mode 100644
index 0..59ad053b1
--- /dev/null
+++ b/platform/linux-generic/include/odp_packet_io_pool.h
@@ -0,0 +1,27 @@
+/* Copyright (c) 2017, Linaro Limited
+ * All rights reserved.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause
+ */
+
+/**
+ * @file
+ *
+ * ODP packet IO pool
+ */
+
+#ifndef ODP_PACKET_IO_POOL_H_
+#define ODP_PACKET_IO_POOL_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+int _odp_packet_io_pool_create(void);
+int _odp_packet_io_pool_destroy(void);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* ODP_PACKET_IO_POOL_H_ */

diff --git a/platform/linux-generic/include/odp_packet_io_pool_access.h b/platform/linux-generic/include/odp_packet_io_pool_access.h
new file mode 100644
index 0..cdd7dd968
--- /dev/null
+++ b/platform/linux-generic/include/odp_packet_io_pool_access.h
@@ -0,0 +1,58 @@
+/* Copyright (c) 2017, Linaro Limited
+ * All rights reserved.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause
+ */
+
+/**
+ * @file
+ *
+ * ODP packet IO pool
+ */
+
+#ifndef ODP_PACKET_IO_POOL_ACCESS_H_
+#define ODP_PACKET_IO_POOL_ACCESS_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include
+#include
+
+/**
+ * Packet IO operations data pool name
+ */
+#define
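The pool this patch introduces is shm-based and sized from the pktio ops data structures; a self-contained toy version of the same idea — fixed-size slots handed out and returned, instead of a max-size array embedded in every entry — might look like the following. Every name and size here (`pool_alloc`, `POOL_SLOTS`, `SLOT_SIZE`) is an assumption for illustration, not the real `odp_packet_io_pool.c` implementation:

```c
#include <assert.h>
#include <stddef.h>

/* Toy fixed-slot pool: POOL_SLOTS slots of SLOT_SIZE bytes plus an
 * in-use flag per slot. The real pool is built on ODP shm and sized
 * from the largest pktio ops data structure. */
#define POOL_SLOTS 4
#define SLOT_SIZE  128

static unsigned char slots[POOL_SLOTS][SLOT_SIZE];
static int in_use[POOL_SLOTS];

static void *pool_alloc(size_t size)
{
	int i;

	if (size > SLOT_SIZE)
		return NULL; /* request larger than a slot */

	for (i = 0; i < POOL_SLOTS; i++) {
		if (!in_use[i]) {
			in_use[i] = 1;
			return slots[i];
		}
	}
	return NULL; /* pool exhausted */
}

static int pool_free(void *p)
{
	int i;

	for (i = 0; i < POOL_SLOTS; i++) {
		if (p == slots[i] && in_use[i]) {
			in_use[i] = 0;
			return 0;
		}
	}
	return -1; /* not a live pointer from this pool */
}
```

A real implementation would add locking (pktio open/close can race) and carve the slots from shared memory so they survive across processes; the toy keeps only the allocate/release bookkeeping.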
[lng-odp] [PATCH API-NEXT v2 0/2] Use chksum API
github
/** Email created from pull request 280 (lumag:chksum)
 ** https://github.com/Linaro/odp/pull/280
 ** Patch: https://github.com/Linaro/odp/pull/280.patch
 ** Base sha: d4b364849c4abb4c71e0c5260e1a793ebb8dc97d
 ** Merge commit sha: ab5a9654088994bd3991d6f6435f8cab6a2cbd65
 **/
/github

checkpatch.pl
total: 0 errors, 0 warnings, 0 checks, 41 lines checked
to_send-p-000.patch has no obvious style problems and is ready for submission.
total: 0 errors, 0 warnings, 0 checks, 49 lines checked
to_send-p-001.patch has no obvious style problems and is ready for submission.
/checkpatch.pl
[lng-odp] [PATCH API-NEXT v2 1/2] helper: use new odp checksum API
From: Dmitry Eremin-Solenikov

Use odp_chksum_ones_comp16(), which may be platform-optimized. This removes the now-unnecessary odph_chksum() function.

Signed-off-by: Dmitry Eremin-Solenikov
---
/** Email created from pull request 280 (lumag:chksum)
 ** https://github.com/Linaro/odp/pull/280
 ** Patch: https://github.com/Linaro/odp/pull/280.patch
 ** Base sha: d4b364849c4abb4c71e0c5260e1a793ebb8dc97d
 ** Merge commit sha: ab5a9654088994bd3991d6f6435f8cab6a2cbd65
 **/

 helper/include/odp/helper/chksum.h | 27 ---
 helper/include/odp/helper/ip.h     |  2 +-
 2 files changed, 1 insertion(+), 28 deletions(-)

diff --git a/helper/include/odp/helper/chksum.h b/helper/include/odp/helper/chksum.h
index 1bf950c8b..ed2de91fb 100644
--- a/helper/include/odp/helper/chksum.h
+++ b/helper/include/odp/helper/chksum.h
@@ -34,33 +34,6 @@ typedef enum {
 	ODPH_CHKSUM_RETURN	/**< Don't generate or verify chksum */
 } odph_chksum_op_t;

-/**
- * Checksum
- *
- * @param buffer  calculate chksum for buffer
- * @param len     buffer length
- *
- * @return checksum value in network order
- */
-static inline odp_u16sum_t odph_chksum(void *buffer, int len)
-{
-	uint16_t *buf = (uint16_t *)buffer;
-	uint32_t sum = 0;
-	uint16_t result;
-
-	for (sum = 0; len > 1; len -= 2)
-		sum += *buf++;
-
-	if (len == 1)
-		sum += *(unsigned char *)buf;
-
-	sum = (sum >> 16) + (sum & 0xFFFF);
-	sum += (sum >> 16);
-	result = ~sum;
-
-	return (__odp_force odp_u16sum_t)result;
-}
-
 /**
  * General Purpose TCP/UDP checksum function
  *
diff --git a/helper/include/odp/helper/ip.h b/helper/include/odp/helper/ip.h
index c6eb9d767..b96aab280 100644
--- a/helper/include/odp/helper/ip.h
+++ b/helper/include/odp/helper/ip.h
@@ -125,7 +125,7 @@ static inline int odph_ipv4_csum(odp_packet_t pkt,
 	if (odp_unlikely(res < 0))
 		return res;

-	*chksum = odph_chksum(buf, nleft);
+	*chksum = ~odp_chksum_ones_comp16(buf, nleft);

 	return 0;
 }
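For reference, the 16-bit one's-complement sum that odp_chksum_ones_comp16() is specified to compute can be sketched as below. The helper name `ones_comp16` is mine, and the sketch assembles words byte-by-byte in network order to stay endian-neutral (the removed odph_chksum() instead read `uint16_t` words in memory order). The key contract difference the patch relies on is that the sum is NOT inverted — the caller applies `~` itself, as in `*chksum = ~odp_chksum_ones_comp16(buf, nleft);`.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* RFC 1071 one's-complement sum over a byte buffer.
 * Result is the non-inverted sum; the caller complements it to get
 * an Internet checksum, or checks for 0xffff to verify one. */
static uint16_t ones_comp16(const void *buffer, size_t len)
{
	const uint8_t *p = buffer;
	uint32_t sum = 0;

	while (len > 1) {
		sum += (uint32_t)((p[0] << 8) | p[1]); /* network-order word */
		p += 2;
		len -= 2;
	}
	if (len == 1)
		sum += (uint32_t)p[0] << 8; /* odd length: pad with zero byte */

	while (sum >> 16) /* fold carries back into the low 16 bits */
		sum = (sum >> 16) + (sum & 0xffff);

	return (uint16_t)sum;
}
```

A handy property of this sum: if the complemented checksum is stored back into the buffer, re-summing the whole buffer yields 0xffff, which is how receivers verify an IPv4 header checksum.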
[lng-odp] [PATCH API-NEXT v2 2/2] linux-gen: ipsec: use new odp checksum API
From: Dmitry Eremin-Solenikov

Use odp_chksum_ones_comp16(), which may be platform-optimized.

Signed-off-by: Dmitry Eremin-Solenikov
---
/** Email created from pull request 280 (lumag:chksum)
 ** https://github.com/Linaro/odp/pull/280
 ** Patch: https://github.com/Linaro/odp/pull/280.patch
 ** Base sha: d4b364849c4abb4c71e0c5260e1a793ebb8dc97d
 ** Merge commit sha: ab5a9654088994bd3991d6f6435f8cab6a2cbd65
 **/

 platform/linux-generic/odp_ipsec.c | 31 ++-
 1 file changed, 2 insertions(+), 29 deletions(-)

diff --git a/platform/linux-generic/odp_ipsec.c b/platform/linux-generic/odp_ipsec.c
index e57736c2a..b67cd3158 100644
--- a/platform/linux-generic/odp_ipsec.c
+++ b/platform/linux-generic/odp_ipsec.c
@@ -7,6 +7,7 @@
 #include "config.h"

 #include
+#include

 #include

@@ -101,34 +102,6 @@ static odp_ipsec_packet_result_t *ipsec_pkt_result(odp_packet_t packet)
 	return _packet_hdr(packet)->ipsec_ctx;
 }

-/**
- * Checksum
- *
- * @param buffer  calculate chksum for buffer
- * @param len     buffer length
- *
- * @return checksum value in network order
- */
-static inline
-odp_u16sum_t _odp_chksum(void *buffer, int len)
-{
-	uint16_t *buf = (uint16_t *)buffer;
-	uint32_t sum = 0;
-	uint16_t result;
-
-	for (sum = 0; len > 1; len -= 2)
-		sum += *buf++;
-
-	if (len == 1)
-		sum += *(unsigned char *)buf;
-
-	sum = (sum >> 16) + (sum & 0xFFFF);
-	sum += (sum >> 16);
-	result = ~sum;
-
-	return (__odp_force odp_u16sum_t)result;
-}
-
 static inline int _odp_ipv4_csum(odp_packet_t pkt,
				 uint32_t offset,
				 _odp_ipv4hdr_t *ip,
@@ -148,7 +121,7 @@ static inline int _odp_ipv4_csum(odp_packet_t pkt,
 	if (odp_unlikely(res < 0))
 		return res;

-	*chksum = _odp_chksum(buf, nleft);
+	*chksum = ~odp_chksum_ones_comp16(buf, nleft);

 	return 0;
 }
Re: [lng-odp] Github shows commits in wrong order
> -----Original Message-----
> From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of
> Savolainen, Petri (Nokia - FI/Espoo)
> Sent: Friday, November 17, 2017 10:24 AM
> To: lng-odp@lists.linaro.org
> Subject: [lng-odp] Github shows commits in wrong order
>
> Hi,
>
> Here's evidence that Github sometimes shows commits in the wrong order.
> The first commit of this patch set is "api: std_types: add odp_percent_t
> data type", but it shows in the Github web page as the last one!
>
> It's annoying and counterproductive that reviewers cannot trust the
> patch order.
>
> -Petri

Github has done another spin of the commit order lottery. Now the first patch ("add odp_percent_t") is in the middle.

-Petri
[lng-odp] [PATCH v1] [RFC 2/2] DDF cleanup and movement
From: Yi He

Clean up DDF's duplicated align, atomic, barrier, byteorder, compiler, hints, shm, spinlock, std_types and sync headers — drivers can use the ODP APIs directly — and move DDF into frameworks/driver.

Signed-off-by: Yi He
---
/** Email created from pull request 300 (heyi-linaro:native-drivers)
 ** https://github.com/Linaro/odp/pull/300
 ** Patch: https://github.com/Linaro/odp/pull/300.patch
 ** Base sha: 4bd608048e6fa77d3154b8d90b85fc2b66c6bf1a
 ** Merge commit sha: 271cc8133b1e637126d59ea9c1b71c02d68ad4a3
 **/

 {include/odp/drv/spec => frameworks/driver}/README |   0
 .../drv_driver.c => frameworks/driver/driver.c     |   5 +-
 .../spec => frameworks/driver/include}/driver.h    |   3 +
 .../driver/include/driver_internal.h               |   0
 frameworks/driver/include/driver_types.h           |  52 ++
 .../driver/include/pci_internal.h                  |   2 +
 .../drv_pci.c => frameworks/driver/pci.c           |   2 +-
 .../drv_pci_uio.c => frameworks/driver/pci_uio.c   |   3 +-
 include/Makefile.am                                |  17 +-
 include/odp/drv/spec/align.h                       |  78 ---
 include/odp/drv/spec/atomic.h                      | 634 -
 include/odp/drv/spec/barrier.h                     |  66 ---
 include/odp/drv/spec/byteorder.h                   | 181 --
 include/odp/drv/spec/compiler.h                    |  51 --
 include/odp/drv/spec/hints.h                       | 119 
 include/odp/drv/spec/shm.h                         | 330 ---
 include/odp/drv/spec/spinlock.h                    |  86 ---
 include/odp/drv/spec/std_types.h                   |  40 --
 include/odp/drv/spec/sync.h                        |  91 ---
 include/odp_drv.h                                  |  36 --
 platform/linux-dpdk/Makefile.am                    |  46 +-
 platform/linux-generic/Makefile.am                 |  52 +-
 platform/linux-generic/_ishm.c                     |   1 -
 platform/linux-generic/_ishmpool.c                 |   1 -
 platform/linux-generic/_modules.c                  |   2 +-
 platform/linux-generic/drv_atomic.c                |  26 -
 platform/linux-generic/drv_barrier.c               |  50 --
 platform/linux-generic/drv_shm.c                   | 146 -
 platform/linux-generic/drv_spinlock.c              |  39 --
 platform/linux-generic/include/odp/drv/README      |   2 -
 platform/linux-generic/include/odp/drv/align.h     |  58 --
 platform/linux-generic/include/odp/drv/atomic.h    | 430 --
 platform/linux-generic/include/odp/drv/barrier.h   |  30 -
 platform/linux-generic/include/odp/drv/byteorder.h | 146 -
 platform/linux-generic/include/odp/drv/compiler.h  |  34 --
 platform/linux-generic/include/odp/drv/driver.h    |  37 --
 platform/linux-generic/include/odp/drv/hints.h     |  34 --
 .../include/odp/drv/plat/atomic_types.h            |  88 ---
 .../include/odp/drv/plat/barrier_types.h           |  38 --
 .../include/odp/drv/plat/byteorder_types.h         |  84 ---
 .../include/odp/drv/plat/driver_types.h            |  52 --
 .../linux-generic/include/odp/drv/plat/shm_types.h |  49 --
 .../include/odp/drv/plat/spinlock_types.h          |  33 --
 .../include/odp/drv/plat/strong_types.h            |  35 --
 platform/linux-generic/include/odp/drv/shm.h       |  36 --
 platform/linux-generic/include/odp/drv/spinlock.h  |  28 -
 platform/linux-generic/include/odp/drv/std_types.h |  42 --
 platform/linux-generic/include/odp/drv/sync.h      |  49 --
 .../linux-generic/include/odp_pktio_ops_virtio.h   |   2 +-
 platform/linux-generic/pktio/{ => virtio}/virtio.c |   2 +-
 .../linux-generic/pktio/{ => virtio}/virtio_pci.c  |  16 +-
 .../linux-generic/pktio/{ => virtio}/virtio_pci.h  |   2 +-
 52 files changed, 100 insertions(+), 3386 deletions(-)
 rename {include/odp/drv/spec => frameworks/driver}/README (100%)
 rename platform/linux-generic/drv_driver.c => frameworks/driver/driver.c (99%)
 rename {include/odp/drv/spec => frameworks/driver/include}/driver.h (99%)
 rename platform/linux-generic/include/drv_driver_internal.h => frameworks/driver/include/driver_internal.h (100%)
 create mode 100644 frameworks/driver/include/driver_types.h
 rename platform/linux-generic/include/drv_pci_internal.h => frameworks/driver/include/pci_internal.h (99%)
 rename platform/linux-generic/drv_pci.c => frameworks/driver/pci.c (99%)
 rename platform/linux-generic/drv_pci_uio.c => frameworks/driver/pci_uio.c (99%)
 delete mode 100644 include/odp/drv/spec/align.h
 delete mode 100644 include/odp/drv/spec/atomic.h
 delete mode 100644 include/odp/drv/spec/barrier.h
 delete mode 100644 include/odp/drv/spec/byteorder.h
 delete mode 100644 include/odp/drv/spec/compiler.h
 delete mode 100644 include/odp/drv/spec/hints.h
 delete mode 100644 include/odp/drv/spec/shm.h
 delete mode 100644 include/odp/drv/spec/spinlock.h
 delete mode 100644 include/odp/drv/spec/std_types.h
 delete mode 100644 include/odp/drv/spec/sync.h
 delete mode
[lng-odp] [PATCH v1] [RFC 1/2] temporarily disable DDF validations and samples
From: Yi He

Signed-off-by: Yi He
---
/** Email created from pull request 300 (heyi-linaro:native-drivers)
 ** https://github.com/Linaro/odp/pull/300
 ** Patch: https://github.com/Linaro/odp/pull/300.patch
 ** Base sha: 4bd608048e6fa77d3154b8d90b85fc2b66c6bf1a
 ** Merge commit sha: 271cc8133b1e637126d59ea9c1b71c02d68ad4a3
 **/

 example/Makefile.am         | 4 +---
 example/m4/configure.m4     | 2 --
 test/m4/configure.m4        | 6 +-
 test/validation/Makefile.am | 3 +--
 4 files changed, 3 insertions(+), 12 deletions(-)

diff --git a/example/Makefile.am b/example/Makefile.am
index b6e8d81fe..695e029c9 100644
--- a/example/Makefile.am
+++ b/example/Makefile.am
@@ -9,8 +9,6 @@ SUBDIRS = classifier \
 	  switch \
 	  time \
 	  timer \
-	  traffic_mgmt \
-	  ddf_ifs \
-	  ddf_app
+	  traffic_mgmt

 noinst_HEADERS = example_debug.h
diff --git a/example/m4/configure.m4 b/example/m4/configure.m4
index 208c5d6d7..cab7f8b8c 100644
--- a/example/m4/configure.m4
+++ b/example/m4/configure.m4
@@ -19,6 +19,4 @@ AC_CONFIG_FILES([example/classifier/Makefile
		 example/time/Makefile
		 example/timer/Makefile
		 example/traffic_mgmt/Makefile
-		 example/ddf_ifs/Makefile
-		 example/ddf_app/Makefile
		 example/Makefile])
diff --git a/test/m4/configure.m4 b/test/m4/configure.m4
index a9e345bf6..ef6096e87 100644
--- a/test/m4/configure.m4
+++ b/test/m4/configure.m4
@@ -32,8 +32,4 @@ AC_CONFIG_FILES([test/Makefile
		 test/validation/api/thread/Makefile
		 test/validation/api/time/Makefile
		 test/validation/api/timer/Makefile
-		 test/validation/api/traffic_mngr/Makefile
-		 test/validation/drv/Makefile
-		 test/validation/drv/drvatomic/Makefile
-		 test/validation/drv/drvdriver/Makefile
-		 test/validation/drv/drvshmem/Makefile])
+		 test/validation/api/traffic_mngr/Makefile])
diff --git a/test/validation/Makefile.am b/test/validation/Makefile.am
index 17e0444be..328543780 100644
--- a/test/validation/Makefile.am
+++ b/test/validation/Makefile.am
@@ -1,4 +1,3 @@
 if test_vald
-SUBDIRS = api \
-	  drv
+SUBDIRS = api
 endif