Hi Chris,

>> In my opinion it would be contrary to the design philosophy.
>>
>> My reasoning is this. VPP is a forwarder; its place in the network stack is the SW ASIC equivalent.

> Ok. That's interesting. In the end this is about building things, right?
> Sure. My mantra is often that functionality can best be added through layering and composition rather than building one app to rule them all. Here's my $.02:
>
> - VPP provides packet processing; as part of this it:
>   - provides a FIB,
>   - owns and provides interfaces, which have addresses,
>   - moves packets between interfaces based on the FIB,
>   - does a ton of other really non-ASICy stuff too, but let's keep it simple.
> - A good interface for managing the above resources allows multiple non-conflicting clients.
> - Multiple clients should not have to have a full mesh of connectivity and knowledge of each other's state to infer the state that exists inside VPP.

Perhaps I misunderstand, but it seems to me that full-mesh connectivity is what you are after, with VPP providing the mesh, i.e. all clients can send state to all other clients via VPP.

> For example, most entities in a control plane need to know which interfaces exist. A good example of an API for the above is rtnetlink:
> http://man7.org/linux/man-pages/man7/rtnetlink.7.html
> BSD has one as well:
> https://www.freebsd.org/cgi/man.cgi?query=route&sektion=4&manpath=netbsd

Right, netlink is a means for clients to broadcast/multicast state to all other clients that are interested - this is what I mean when I say 'message bus'.

> These APIs (to the same resources VPP provides -- FIB and interfaces) work well and allow multiple applications that do different things to all be written easily and be plugged together. This would not be possible if all those applications had to have intimate knowledge of and communicate with each other.

Right, they communicate using the common data representation and tx/rx semantics of the message bus.

> It's just a good design. I would call this an API to a shared resource. Calling it a message bus feels more like the seed for an April 1st RFC. I don't think it helps get a good API built to focus on how it could be misused.
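As background for the rtnetlink reference above: every rtnetlink notification a subscriber reads from its socket begins with the same fixed 16-byte nlmsghdr, which is a large part of why one API can serve many different applications. A minimal Python sketch of parsing that header (the RTM_NEWROUTE value is from the Linux uapi headers; the sample message here is synthetic, not captured from a live socket):

```python
# Every netlink datagram starts with a 16-byte nlmsghdr:
# u32 length, u16 type, u16 flags, u32 sequence, u32 sender pid.
import struct

NLMSG_HDR = "=IHHII"  # native byte order, no padding
RTM_NEWROUTE = 24     # rtnetlink message type for "route added"

def parse_nlmsghdr(data: bytes):
    """Split one netlink datagram into its header fields and payload."""
    length, msg_type, flags, seq, pid = struct.unpack_from(NLMSG_HDR, data)
    payload = data[struct.calcsize(NLMSG_HDR):length]
    return {"len": length, "type": msg_type, "flags": flags,
            "seq": seq, "pid": pid, "payload": payload}

# A synthetic RTM_NEWROUTE notification carrying a 4-byte payload.
msg = struct.pack(NLMSG_HDR, 20, RTM_NEWROUTE, 0, 1, 0) + b"\x02\x00\x00\x00"
hdr = parse_nlmsghdr(msg)
```

A real client would bind an AF_NETLINK socket to multicast groups such as RTMGRP_IPV4_ROUTE and run this parser over each datagram it receives; the uniform header is what lets unrelated applications share the one notification stream.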
> Just because one *could* use an API to a shared resource to exchange information (odd as that would be to do) doesn't mean that the API is broken; it just means someone would be misusing it.

I think we are disagreeing on two points:

1) The definition of 'the API'. For me, the VPP API is the means by which clients program VPP. If I understand you correctly, the API is a means to disseminate information to various entities in the system. I claim one should layer the latter on the former.

2) VPP as a shared resource. VPP is not the source of truth. Generally, if A programs B with state X, then A is still the source of truth for X; it doesn't transfer to B. So if a client wants to know, e.g., the IP addresses applied to each interface, it is best to query the interface-manager.

Again, my opinion is but one of many, and I feel like I'm starting to sound like a stuck record, so I'll lay low for a while and keep the airwaves open.

/neale

> Thanks,
> Chris.

>> Its primary goal is thus to forward packets as fast as possible; its secondary goal is to be programmable as fast as possible (particularly for routes, of which there can be millions). Any other functions that might be expected of VPP that run against these objectives are not 'allowed'. Being a message bus, netlink broadcaster, etc. is against its second objective - we don't want to spend cycles broadcasting/multicasting notifications from client x to a set of clients y through a set of filters z.
>>
>> Of course that's not to say one can't build a multicasting layer on top of VPP, in a separate process, that performs these functions.
>>
>> Please also note:
>> https://wiki.fd.io/view/VPP/RM
>> when considering using multiple clients with VPP.
>>
>> Naturally, my opinion is but one of many...
>> /neale
>>
>> On 27/02/2020 17:56, "Christian Hopps" <cho...@chopps.org> wrote:
>>
>>> On Feb 27, 2020, at 11:45 AM, Neale Ranns (nranns) <nra...@cisco.com> wrote:
>>>
>>>> Hi Chris,
>>>>
>>>> There is a design philosophy against sending notifications to agents about information that comes from agents. This is in contrast to notifications to agents about events that occur in the data-plane, like DHCP lease, new ARP/ND, learned L2 addresses, etc.
>>>
>>> That doesn't really help me understand and worries me more.
>>>
>>> Would providing the same functionality as exists in the netlink socket (for routes and interfaces) be against VPP design philosophy?
>>>
>>> Thanks,
>>> Chris.
>>>
>>>> /neale
>>>>
>>>> From: <vpp-dev@lists.fd.io> on behalf of Christian Hopps <cho...@chopps.org>
>>>> Date: Thursday 27 February 2020 at 16:32
>>>> To: "Neale Ranns (nranns)" <nra...@cisco.com>
>>>> Cc: Christian Hopps <cho...@chopps.org>, "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
>>>> Subject: Re: [vpp-dev] Can I submit some API changes for 19.08, et al. release branches?
>>>>
>>>>> On Feb 27, 2020, at 9:41 AM, Neale Ranns via Lists.Fd.Io <nranns=cisco....@lists.fd.io> wrote:
>>>>>
>>>>>> From: <vpp-dev@lists.fd.io> on behalf of Christian Hopps <cho...@chopps.org>
>>>>>> Date: Thursday 27 February 2020 at 15:16
>>>>>> To: "Neale Ranns (nranns)" <nra...@cisco.com>
>>>>>> Cc: Christian Hopps <cho...@chopps.org>, Andrew 👽 Yourtchenko <ayour...@gmail.com>, "Dave Barach (dbarach)" <dbar...@cisco.com>, vpp-dev <vpp-dev@lists.fd.io>
>>>>>> Subject: Re: [vpp-dev] Can I submit some API changes for 19.08, et al. release branches?
>>>>>>
>>>>>> [snip]
>>>>>>
>>>>>>> In my case we're not really talking about an API that is "bleeding edge", but rather "waited around for someone to need/implement it". Doing a route/fib entry lookup isn't very bleeding edge given what VPP does. :)
>>>>>>
>>>>>> True 😊 but the general usage model for VPP is that there is one agent/client giving it all the state it needs to function.
>>>>>> So why would that agent need lookup functions? It has all the data already. The dump APIs serve to repopulate the agent with all state should it crash.
>>>>>
>>>>> I suppose that's why I needed to add this API then. :)
>>>>>
>>>>> We're using VPP more like a replacement for the kernel networking stack, with multiple networking clients interfacing to it, rather than just one monolithic application.
>>>>
>>>> Ok. Just don't fall into the pit that is 'I want VPP to tell client X when client Y does something' – VPP is not a message bus 😊
>>>
>>> I'd like to be careful here. If VPP is serving as a replacement for the networking stack with multiple clients interfacing to it, then it does serve as the single source of truth on things like interface state, routes, etc. So I do want to hear inside my IKE daemon from VPP that a route, interface, etc. may have changed; I don't want to have to interface the IKE daemon to N possible pieces of software that might modify routes, interfaces, etc.
>>>
>>> I'm only being careful here b/c if one looks at e.g. netlink functionality and then at the VPP API there are some gaps (e.g. interface address add/del and route add/del events are missing, I believe). I'm assuming that these gaps exist b/c, as you say, people have generally not needed them, but not b/c there's a design philosophy against them.
>>>
>>> Thanks,
>>> Chris.
>>>
>>>> /neale
>>>>
>>>>> Thanks,
>>>>> Chris.
>>>>>
>>>>>> /neale
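The dump-to-repopulate model quoted in the thread above - a restarted agent does not replay its own history, it asks the dataplane for a state dump and rebuilds its cache from that - can be sketched as follows. Dataplane and Agent are hypothetical stand-ins for illustration, not the VPP API:

```python
# Sketch: dump APIs exist so an agent can resync after a crash,
# not so that one client can observe another client's changes.

class Dataplane:
    """Stands in for VPP: holds the programmed state and can dump it."""
    def __init__(self):
        self._routes = {}

    def route_add(self, prefix, next_hop):
        self._routes[prefix] = next_hop

    def route_dump(self):
        return list(self._routes.items())   # snapshot of programmed state

class Agent:
    """Control-plane client: source of truth for the state it programs."""
    def __init__(self, dataplane):
        self.dp = dataplane
        self.cache = {}

    def add(self, prefix, next_hop):
        self.dp.route_add(prefix, next_hop)
        self.cache[prefix] = next_hop

    def resync(self):
        # Crash-recovery path: rebuild local cache from a full dump.
        self.cache = dict(self.dp.route_dump())

dp = Dataplane()
agent = Agent(dp)
agent.add("10.0.0.0/8", "192.0.2.1")

agent = Agent(dp)   # the agent "crashes" and restarts with an empty cache
agent.resync()      # one dump call restores everything it had programmed
```

This is exactly why a single-agent deployment needs dump but not lookup or change notifications: the agent already knows everything it programmed, and after a restart one dump restores it.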
-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#15610): https://lists.fd.io/g/vpp-dev/message/15610
-=-=-=-=-=-=-=-=-=-=-=-