Hi Jensen, Qin and all,
Thanks for the discussion. Please see my comments inline.
Best,
Kai
-----Original Messages-----
From:"Jensen Zhang" <jingxuan.n.zh...@gmail.com>
Sent Time:2022-09-06 21:13:01 (Tuesday)
To: "Qin Wu" <bill...@huawei.com>
Cc: "IETF ALTO" <alto@ietf.org>
Subject: Re: [alto] Open discussions of ALTO O&M data model
Hi Qin,
Sorry for my late reply. See my comments inline.
On Sun, Aug 21, 2022 at 8:44 AM Qin Wu <bill...@huawei.com> wrote:
Hi, Jensen:
Thanks for summarizing the discussion in the last IETF meeting; please see my
comments inline.
From: alto [mailto:alto-boun...@ietf.org] On Behalf Of Jensen Zhang
Sent: August 16, 2022 21:04
To: IETF ALTO <alto@ietf.org>
Subject: [alto] Open discussions of ALTO O&M data model
Hi ALTOers,
From the WG session in IETF 114, we had a lot of discussions about the open
issues for ALTO O&M. The authors appreciate all the comments and are working on
the next revision.
We quickly summarize the major debates here and would like to have more
discussions to move this work forward. To be more efficient, we may separate
the discussions into different threads later.
1. How to handle data types defined by IANA registries
There are actually two arguments:
1.a. Which statement is better for defining the IANA-related data types
(e.g., cost modes, cost metrics)? Two options: enumeration typedef or identity.
The main limitation of enumeration is extensibility. As ALTO may have
multiple ongoing extensions, it will be necessary to add new private values to
existing data types for experimental purposes. Identity is the better choice to
support this.
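As an illustration (the identity names below are hypothetical, not taken from the draft), the identity approach could look roughly like this inside the respective modules:

```yang
// Hypothetical sketch: identities for ALTO cost modes.
identity cost-mode {
  description "Base identity for ALTO cost modes.";
}

identity numerical {
  base cost-mode;
  description "The 'numerical' cost mode (RFC 7285).";
}

identity ordinal {
  base cost-mode;
  description "The 'ordinal' cost mode (RFC 7285).";
}

// An experimental extension defines a new value in its own module,
// without modifying the module that defines 'cost-mode':
identity exp-priv-mode {
  base cost-mode;
  description "A private cost mode used for experimentation.";
}
```

With an enumeration typedef, adding `exp-priv-mode` would instead require revising the module that defines the enum, which is exactly the extensibility limitation above.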
1.b. Whether to put the data type definitions into an IANA-maintained YANG
module
According to the guidelines provided by Med
(https://datatracker.ietf.org/doc/html/draft-boucadair-netmod-iana-registries-03),
an IANA-maintained module is RECOMMENDED.
[Qin Wu] If you choose to use the identity data type, I don't think you need an
IANA-maintained YANG module; an IANA-maintained YANG module lets you update the
data types later if needed.
If you don't expect frequent changes to the data types, identity looks like the
best option, and it is not necessary to create an IANA-maintained module.
Otherwise, it seems like overdesign to me.
From my understanding, identity allows non-standard YANG modules to add new
values to the registry, while an IANA-maintained module allows IANA to keep the
registry updated through standard YANG modules. So I don't think creating an
IANA-maintained module is overdesign.
I think the point of using an IANA-maintained module is that updates to the
data types defined by the IANA registry would be more frequent than updates to
the OAM YANG module itself.
But let's see how other ALTOers react.
I remember that we had some discussion on this issue during IETF 114. The idea
of using identity instead of enum is to enable new values to be used without
breaking other packages that depend on the module that defines the enum.
Moreover, IANA may manage not just one but multiple modules; new extensions can
then be added as individual modules and packages.
An example can be seen below:
Before module2 is standardized:
- module1 is a standard module managed by IANA
- module2 is a private module with some extensions

module1 (iana-alto-base-xxx)     module2 (private-alto-ext-xxx)
     |                                  |
     |---------------------\            |
     v                     v            v
alto-server1            alto-server-2
After module2 is standardized:
- module1 is a standard module managed by IANA
- module2 becomes a standard module managed by IANA

module1 (iana-alto-base-xxx)     module2 (iana-alto-ext-xxx)
     |                                  |
     |---------------------\            |
     v                     v            v
alto-server1            alto-server-2
Then module1 is untouched and alto-server1 can continue to work without the
need to update its own dependency. If alto-server1 wants to support the
extension, it can add module2 to its dependencies.
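To make the example concrete (all module, prefix, and identity names below are illustrative, chosen only to match the figure), module2 could extend a base identity from module1 without touching module1:

```yang
// Hypothetical sketch of module2 before standardization.
module private-alto-ext-xxx {
  namespace "urn:example:private-alto-ext-xxx";
  prefix alto-ext;

  import iana-alto-base-xxx {
    prefix alto-base;
  }

  // A new value derived from a base identity defined in module1.
  // alto-server1 keeps working unchanged; alto-server-2 simply adds
  // this module to its dependencies.
  identity new-cost-metric {
    base alto-base:cost-metric;
    description "An experimental cost metric extension.";
  }
}
```

When module2 later becomes IANA-maintained, only its name and namespace change; module1 and its existing consumers stay untouched.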
2. Whether and how to support server-to-server communication for multi-domain
settings
There is no draft defining any standard for an ALTO east-west bound API
(server-to-server communication), so defining a data model for this may be too
early. But this problem is important in practice. We have several potential
choices:
2.a. Each ALTO server connects to the data sources of its own domain, and the
servers build inter-domain connections with each other (using the east-west
bound API).
2.b. A single ALTO server connects to data sources from multiple domains. The
data sources provide inter-domain information for the ALTO server to build a
global network view.
[Qin Wu] You might refer to the multi-domain case in RFC 7165; it describes a
few requirements and use cases for an ALTO east-west bound API, but I think it
leaves the door open for the solution.
If you use a protocol other than ALTO to define the ALTO east-west bound API,
it is apparently not in the scope of the ALTO WG; if you use the ALTO protocol
to define server-to-server communication, I think it is in the scope of ALTO
OAM YANG.
I agree. From my experience, an ALTO east-west bound API should be based on
protocols other than ALTO. Therefore, its OAM should not be in the scope of
ALTO OAM, but ALTO OAM may still need to configure some meta information for
it.
Also, don't forget the ALTO discovery mechanisms: one is the intra-domain
discovery mechanism, the other is the inter-domain discovery mechanism.
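For instance (purely a sketch of the "meta information" idea; none of these node names are in the draft), the ALTO OAM model might only record who the east-west peers are, leaving the peering protocol itself out of scope:

```yang
// Hypothetical: meta information about east-west peers only.
list east-west-peer {
  key "peer-id";
  leaf peer-id {
    type string;
  }
  leaf domain {
    type string;
  }
  leaf endpoint {
    type inet:uri;  // assumes an import of ietf-inet-types
  }
}
```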
3. How to build the connection between data sources and the algorithm data
model
Consider that each algorithm data model defines an interface of an ALTO service
implementation. It declares types for a list of arguments; those arguments can
be references to data collected from data sources.
In practice, there are two ways to use data to compute ALTO information
resources:
3.a. The ALTO service (algorithm plugin) directly reads data from data sources
to compute ALTO information resources.
https://datatracker.ietf.org/doc/html/draft-hzx-alto-network-topo-00 can be one
such example.
3.b. The ALTO server preprocesses data collected from data sources and writes
it to a data broker. The algorithm plugin reads data from the data broker to
compute ALTO information resources. FlowDirector
(https://dl.acm.org/doi/10.1145/3359989.3365430) can be such an example.
These two cases may coexist in the same ALTO server implementation.
[Qin Wu] We did discuss this in the Philadelphia meeting. ALTO focuses on the
query interface; we didn't specify where the data comes from or how to collect
it. I don't think we should put too many constraints on how the data is
collected, exported, stored, and fed into the ALTO server. Therefore, based on
our discussion in this week's Tuesday ALTO weekly meeting, one suggestion to
address this is to factor out a common data retrieval mechanism and move
implementation-specific or protocol-specific parameters to the Appendix as an
example.
Indeed. The current data model for the data source may be overdesigned. I think
we can focus on defining the basic shape of the data source model:
+--rw data-source
   +--rw source-id
   +--rw source-type
   +--rw (update-policy)
   |  +--:(reactive)
   |  +--:(proactive)
   |  +--:(on-demand)
   +--rw (source-protocol-stack)?
Then, we move any specific source-type and source-protocol-stack choices to the
appendix.
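In YANG terms (a sketch only; the types, the 'source-type' base identity, and the poll-interval leaf are my assumptions, not the draft's), that skeleton might read:

```yang
// Hypothetical YANG for the data source skeleton.
container data-source {
  leaf source-id {
    type string;
  }
  leaf source-type {
    type identityref {
      base source-type;  // assumes a 'source-type' base identity
    }
  }
  choice update-policy {
    case reactive {
      // the source pushes updates to the server
    }
    case proactive {
      leaf poll-interval {
        type uint32;
        units "seconds";
      }
    }
    case on-demand {
      // data is fetched only when a query arrives
    }
  }
  choice source-protocol-stack;  // concrete stacks go to the appendix
}
```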
Cheers,
Jensen
Supporting 3.a in the O&M data model is easy; Section 7 of the draft provides
such an example. However, considering that the O&M data model MUST NOT assume
the schema/interface of the data broker is fixed, it will be hard to support
3.b.
One potential solution is to allow the data model to define references to data
in the data broker, as well as dependencies between data in the data broker and
the data sources.
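A rough sketch of that idea (all node names hypothetical), where algorithm inputs reference broker entries and each broker entry records the data sources it depends on:

```yang
// Hypothetical: references instead of a fixed broker schema.
container data-broker {
  list entry {
    key "entry-id";
    leaf entry-id {
      type string;
    }
    leaf-list depends-on {
      type leafref {
        path "/data-source/source-id";
      }
    }
  }
}
container algorithm {
  leaf-list input {
    type leafref {
      path "/data-broker/entry/entry-id";
    }
  }
}
```

This way the broker's internal schema stays opaque; the O&M model only constrains the identifiers and the dependency edges.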
Looking forward to seeing feedback and further discussions.
Best regards,
Jensen
_______________________________________________
alto mailing list
alto@ietf.org
https://www.ietf.org/mailman/listinfo/alto