On 9/12/2024 1:50 PM, Varghese, Vipin wrote:
[Public]
Snipped
Based on the discussions we agreed on sharing a version-2 RFC for
extending the API as `rte_get_next_lcore_extnd` with an extra argument
`flags`.
As per my ideation, for the API `rte_get_next_sibling_core`, the above
On 2024-09-12 13:17, Varghese, Vipin wrote:
[AMD Official Use Only - AMD Internal Distribution Only]
Thank you Mattias for the information; as shared in the reply with
Anatoly, we want to expose a new API `rte_get_next_lcore_ex` which
takes an extra argument `u32 flags`.
[Public]
Snipped
>
> To be clear; it's something like this I think of when I say "DOM-style"
> API.
>
> #ifndef RTE_HWTOPO_H
> #define RTE_HWTOPO_H
>
> struct rte_hwtopo_node;
>
> enum rte_hwtopo_node_type {
>         RTE_HWTOPO_NODE_TYPE_CPU_CORE,
>         RTE_HWTOPO_NODE_TYPE_CACHE,
>         RT
[AMD Official Use Only - AMD Internal Distribution Only]
> Thank you Mattias for the information; as shared in the reply with
> Anatoly, we want to expose a new API `rte_get_next_lcore_ex` which
> takes an extra argument `u32 flags`.
> The flags can be RTE_GET_LCORE_L1 (S
[AMD Official Use Only - AMD Internal Distribution Only]
> >
> > For the naming, would "rte_get_next_sibling_core" (or lcore if you
> > prefer) be a clearer name than just adding "ex" on to the end of the
> > existing function?
> >
> > Looking logically, I'm not sure about the BOOST_ENABLED and
[Public]
> > What use case do you have in mind? What's on top of my list is a scenario
> where a DPDK app gets a bunch of cores (e.g., -l ) and tries to figure
> out how best to make use of them. It's not going to "skip" (ignore, leave
> unused) SMT siblings, or skip non-boosted cores, it would ju
On Wed, 11 Sep 2024 03:13:14 +
"Varghese, Vipin" wrote:
> > Agreed. This is one of those cases where the existing project hwloc,
> > which is part of open-mpi, is more complete and well supported. It
> > supports multiple OS's and can deal with more quirks.
>
> Thank you Stephen for the in
Yes, this does help clarify things a lot as to why current NUMA support
would be insufficient to express what you are describing.
However, in that case I would echo the sentiment others have expressed
already, as this kind of deep sysfs parsing doesn't seem like it would be
in scope for EAL; it sou
On 9/5/2024 3:05 PM, Ferruh Yigit wrote:
On 9/3/2024 9:50 AM, Burakov, Anatoly wrote:
On 9/2/2024 5:33 PM, Varghese, Vipin wrote:
Hi Ferruh,
I feel like there's a disconnect between my understanding of the problem
space, and yours, so I'm going to ask a very basic question:
Assuming th
On 2024-09-02 02:39, Varghese, Vipin wrote:
Thank you Mattias for the comments and question; please let me try to
explain the same below.
We shouldn't have a separate CPU/cache hierarchy API instead?
Based on the intention to bring in CPU lcores which share the same L3 (for
better cache hits and less noisy neighbors), the current API focuses
I recently looked into how Intel's Sub-NUMA Clustering would work within
DPDK, and found that I actually didn't have to do anything, because the
SNC "clusters" present themselves as NUMA nodes, which DPDK already
supports natively.
Yes, this is correct. In Intel Xeon Platinum BIOS one can e
On 9/2/2024 3:08 AM, Varghese, Vipin wrote:
Thank you Anatoly for the response. Let me try to share my understanding.
I recently looked into how Intel's Sub-NUMA Clustering would work within
DPDK, and found that I actually didn't have to do anything, because the
SNC "clusters" present themsel
On 8/27/2024 5:10 PM, Vipin Varghese wrote:
As core density continues to increase, chiplet-based
core packing has become a key trend. In AMD SoC EPYC
architectures, core complexes within the same chiplet
share a Last-Level Cache (LLC). By packing logical cores
within the same LLC, we can enhance pipeline processing
stages due to reduced latency