> Yaakov wrote:
> I've heard of PIFO (Push-In First-Out) but not PIPO. Is this a typo or 
> something new?
> I agree that there are mechanisms that are optimized for hardware, but I have 
> come up with a very nice hardware implementation for PEDF and prefer to find 
> hardware implementations for optimal schedulers, rather than to determine 
> schedulers based on optimal hardware.

PEDF ?

There was a long debate in the congestion control community in TSV about the 
scalability issues of flow-aware AQM mechanisms, and AFAIK the AQM mechanisms 
that ultimately won out were the ones that did not require per-flow state 
(such as PIE). We also had round 1 of deterministic services (if I may call 
them that) using RFC 2212 with RSVP fail because of real or assumed 
infeasibility to scale on multiple dimensions. IMHO it actually failed also 
because of *assumed* infeasibility: there was no good analysis and 
documentation of what actually could be made to work (badmouthing...).

From this experience I can only recommend making sure that we understand what 
is feasible to implement, how, and at what scale and speed.

For example, I am not aware of having seen general-purpose EDF hardware at 
scale, but I am very interested in any pointers. On the research side, this 
is the oldest PIFO work I know of:

http://web.mit.edu/pifo/pifo-sigcomm.pdf

[I am of course mentioning PIFO because, by using the deadline as the PIFO 
rank, one has a simple approach to implementing EDF.]

In the PIFO paper itself, if I remember my analysis correctly, the scale 
still depends on the number of flows and on the ability to map packets to 
flows (I need to read it again, though).

This _may_ be acceptable in specific use cases, but it should IMHO be well 
understood and documented, especially when the view from the outside is that, 
by using e.g. SR packet headers, it "looks" as if there is really no per-flow 
scaling aspect to the hardware requirements. After all, the idea of source 
routing with SR and carrying the state in the packet is to eliminate the need 
to hold (and scale) that state in the router.

Cheers
    Toerless

> >> Sorry, that's a typo. I meant PIFO (although we do have a paper under review 
> >> using the name PIPO). Yes, I agree those are just abstract primitives. The 
> >> actual implementation, if customized to a particular algorithm, would be 
> >> simpler.
> 
> Y(J)S
> 
> From: Haoyu Song <haoyu.s...@futurewei.com>
> Sent: 05/03/2021 22:46
> To: Yaakov Stein <yaako...@rad.com>; det...@ietf.org; spr...@ietf.org; 
> pce@ietf.org
> Subject: RE: new draft on segment routing approach to TSN
> 
> 
> Hi Yaakov,
> 
> Just got a chance to read your draft. I agree with the comments of the others 
> that this is very interesting work. I'll just add a few points.
> 
> 
>   1.  Using clock time as the deadline requires network synchronization and 
> accurate measurement of per-link propagation time, which may limit the 
> application scope of this work. Alternatively, one can simply budget a 
> per-device latency that each router/switch must obey. If the overall budget 
> is evenly divided among the hops, a single parameter is enough. Of course, 
> if one wants to customize the budget at each hop (which might be necessary 
> considering the different capability/capacity of each hop), a stack is 
> still needed.
>   2.  A mechanism should be provided to track where the timing requirement 
> is violated and by how much (e.g., using a timestamp or flag). This would be 
> very useful for troubleshooting.
>   3.  Recent research on programmable schedulers has proposed several 
> primitives such as PIPO and PIEO and has provided feasible hardware 
> implementations. The scheme proposed in this draft can easily fit into 
> these primitives.
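
[Re point 1 above, a tiny worked illustration (hypothetical numbers and 
function name) of why an evenly divided budget needs only a single 
parameter, while per-hop customization needs the full stack:

    def even_hop_deadlines(t0_us, total_budget_us, hops):
        # Single-parameter case: hop i's absolute deadline is
        # t0 + (i+1) * (total_budget / hops).
        per_hop = total_budget_us / hops
        return [t0_us + (i + 1) * per_hop for i in range(hops)]

    print(even_hop_deadlines(0, 300, 3))  # -> [100.0, 200.0, 300.0]

    # Customized per-hop budgets cannot be derived from one parameter
    # and therefore need the explicit stack, e.g. [100, 180, 250].
]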
> 
> Best regards,
> Haoyu
> From: spring <spring-boun...@ietf.org> On Behalf Of Yaakov Stein
> Sent: Tuesday, February 23, 2021 5:14 AM
> To: det...@ietf.org; spr...@ietf.org; pce@ietf.org
> Subject: [spring] new draft on segment routing approach to TSN
> 
> All,
> 
> I would like to call your attention to a new ID 
> https://www.ietf.org/archive/id/draft-stein-srtsn-00.txt
> which describes using a stack-based approach (similar to segment routing) to 
> time-sensitive networking.
> It furthermore proposes combining segment routing with this approach to TSN, 
> resulting in a unified approach to forwarding and scheduling.
> 
> The draft is informational at this point, since it discusses the concepts and 
> does not yet pin down the precise formats.
> 
> Apologies for simultaneously sending to 3 lists,
> but I am not sure which WG is the most appropriate for discussions of this 
> topic.
> 
>   *   DetNet is most relevant, since the whole point is to control the 
> end-to-end latency of a time-sensitive flow.
>   *   Spring is also directly relevant, due to the use of a stack in the 
> header and the combined approach just mentioned.
>   *   PCE is relevant to the case of a central server jointly computing an 
> optimal path and a local deadline stack.
> I'll let the chairs decide where discussions should be held.
> 
> Y(J)S
> 

> _______________________________________________
> detnet mailing list
> det...@ietf.org
> https://www.ietf.org/mailman/listinfo/detnet


-- 
---
t...@cs.fau.de

_______________________________________________
Pce mailing list
Pce@ietf.org
https://www.ietf.org/mailman/listinfo/pce
