Thanks,

Inline

On 08/31/2016 01:09 PM, Sela, Guy wrote:
> Hi,
>
> I've been reading this document:
> https://github.com/opendaylight/mdsal/blob/master/src/site/asciidoc/conceptual-data-tree.adoc
>
> I have a few questions.
>
> 1)
> Very general question:
> Is the Conceptual Data Tree something new that is planned to get into
> ODL, or is it just an explanation of how the Data Stores are currently
> implemented? If it's new, when is it expected to get in?
It is an evolution of how individual data stores are integrated into the
system: rather than having each data store expose a DataBroker, the concept
of sharding is integrated and data store implementations contribute shards.
The timeframe is Boron, with IMDS already integrated and CDS expected to be
integrated in SR1.

<GS> Just to make sure I understand: IMDS means In-Memory Data Store, and it
is relevant only when there is a single ODL instance (i.e., no cluster). CDS
means Clustered Data Store, and when running in a cluster this store must be
used so that state can be shared among the members of the cluster. Is that
right? Are there any sample projects that use the IMDS API?

> At the end of the document there is a description of
> DOMDataTreeProducer/Listener. Are these alternatives to the current
> DataTreeChangeListener? Or do they solve a different problem domain?

They are an evolution of TransactionChain and DataTreeChangeListener.

<GS> Will the old ones be deprecated, or are they an alternative?

> 2)
> What is the difference between YangInstanceIdentifier and
> InstanceIdentifier? Is it related to different abstraction layers?

Different API layers: YangInstanceIdentifier is binding-independent,
InstanceIdentifier is its binding-aware counterpart -- a fact visible from
their home packages.

<GS> Understood.

> 3)
> About this section:
>
> *Federation, Replication and High Availability*
>
> Support for various multi-node scenarios is a concern outside of core
> MD-SAL. If a particular scenario requires the shard layout to be
> replicated (either fully or partially), it is up to Shard providers to
> maintain an omnipresent service on each node, which in turn is
> responsible for dynamically registering DOMDataTreeShard instances
> with the Shard Registry service.
>
> *Since the Shard Layout is strictly local to a particular OpenDaylight
> instance, an OpenDaylight cluster is not strictly consistent in its
> mapping of **YangInstanceIdentifier** to data.
> When a query for the
> entire data tree is executed, the returned result will vary between
> member instances based on the differences of their Shard Layouts. This
> allows each node to project its local operational details, as well as
> the partitioning of the data set being worked on based on workload and
> node availability.*
>
> Partial symmetry of the conceptual data tree can still be maintained
> to the extent that a particular deployment requires. For example, the
> Shard containing the OpenFlow topology can be configured to be
> registered on all cluster members, leading to queries into that
> topology returning consistent results.
>
> * I don't fully understand the part in bold. Can someone explain it
> using examples?

Sorry, can't really draw diagrams right now :-(

<GS> No worries.

> * Regarding the last two sentences: What do you mean by a Shard being
> registered on all cluster members? I thought that there is one Shard
> Leader per Shard, and the leader is the only one that manipulates the
> data and distributes it to all the other members, using the Raft
> algorithm. This approach provides the consistency, so I don't
> understand what you mean here.

'Shard Leader' is a CDS backend concept. The Conceptual Data Tree is a
frontend thing. It is very important not to confuse the backend (how the
data is stored) with the frontend (how the data is made available). An
OpenDaylight cluster does not require all members to 'host' all shards;
hence in a 5-node cluster two different shards can be hosted on members
A,B,C and C,D,E respectively -- and this is a backend
(implementation-specific) detail. The Conceptual Data Tree deals with
access, e.g. which shards are mapped where on a particular node. Routing to
the backend is an implementation detail.

<GS> Okay, now I understand that there is a separation between the backend
and the frontend here. I still don't understand why or when the data won't
be consistent.
Let's say I query YangInstanceIdentifier X from member A in a cluster; under
what circumstances, for example, would the same query return a different
result if I executed it on member E?

> 4)
> What is the maturity/roadmap of the support for different storage
> engines?

As noted above, IMDS is integrated now; CDS is on the map for SR1. Others
have not been scoped.

<GS> Got it.

Bye,
Robert
_______________________________________________
controller-dev mailing list
controller-dev@lists.opendaylight.org
https://lists.opendaylight.org/mailman/listinfo/controller-dev
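[Editorial aside] Robert's point that the shard layout is strictly local to each node, and that a path is resolved against whatever layout that node happens to have, can be sketched with a toy model. This is NOT the real MD-SAL API: ShardLayout, register, and resolve are invented names for illustration, and the real frontend works with DOMDataTreeIdentifier prefixes and DOMDataTreeShard registrations rather than plain strings.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of one node's shard layout: a map from a data-tree prefix
// (a stand-in for a YangInstanceIdentifier) to the shard registered for
// that subtree on this particular node. Invented names; not MD-SAL API.
public class ShardLayout {
    private final Map<String, String> prefixToShard = new HashMap<>();

    public void register(String prefix, String shard) {
        prefixToShard.put(prefix, shard);
    }

    // Longest-prefix match: the most specific registered shard wins.
    public String resolve(String path) {
        String bestPrefix = null;
        for (String p : prefixToShard.keySet()) {
            if (path.startsWith(p)
                    && (bestPrefix == null || p.length() > bestPrefix.length())) {
                bestPrefix = p;
            }
        }
        return bestPrefix == null ? null : prefixToShard.get(bestPrefix);
    }

    public static void main(String[] args) {
        // Member A maps the whole tree to the default shard only.
        ShardLayout memberA = new ShardLayout();
        memberA.register("/", "default-shard");

        // Member E additionally registered a local shard projecting its
        // own operational details under /node-e.
        ShardLayout memberE = new ShardLayout();
        memberE.register("/", "default-shard");
        memberE.register("/node-e", "node-e-local");

        // The same path resolves to different shards on each member, so
        // the same query can return different data depending on where it
        // runs -- the "not strictly consistent" point in the quoted text.
        System.out.println(memberA.resolve("/node-e/stats")); // default-shard
        System.out.println(memberE.resolve("/node-e/stats")); // node-e-local
    }
}
```

Robert's OpenFlow-topology example then corresponds to every member's layout containing the same entry for that prefix, which is why queries into that one subtree stay consistent across the cluster.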