Re: [controller-dev] [infrautils-dev] Sharding evolution

2018-06-11 Thread Faseela K
Anil,
   Agreed on your points: we still have to go to a remote node once we reach
the plugin side.
   But there are other synchronization issues in the code as well, due to
several of the events landing on different nodes; these have been fixed in
many cases using EOS/ClusterSingleton (now the entity owner can end up on
another node as well).
   So, if we can get rid of such complexities for a 3-node cluster, that helps
as well.
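
A minimal sketch of the EOS-backed ClusterSingletonService pattern mentioned
above, assuming the org.opendaylight.mdsal singleton API; the class name and
the "my-flow-programmer" group id are hypothetical, for illustration only:

import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.ListenableFuture;
import org.opendaylight.mdsal.singleton.common.api.ClusterSingletonService;
import org.opendaylight.mdsal.singleton.common.api.ClusterSingletonServiceProvider;
import org.opendaylight.mdsal.singleton.common.api.ClusterSingletonServiceRegistration;
import org.opendaylight.mdsal.singleton.common.api.ServiceGroupIdentifier;

public class MyFlowProgrammer implements ClusterSingletonService {
    // Hypothetical group id: all members registering under it elect one
    // owner via EOS, so the related event handling lands on a single node.
    private static final ServiceGroupIdentifier ID =
            ServiceGroupIdentifier.create("my-flow-programmer");
    private final ClusterSingletonServiceRegistration registration;

    public MyFlowProgrammer(ClusterSingletonServiceProvider provider) {
        registration = provider.registerClusterSingletonService(this);
    }

    @Override
    public ServiceGroupIdentifier getIdentifier() {
        return ID;
    }

    @Override
    public void instantiateServiceInstance() {
        // Called on the node that won ownership: start processing events here.
    }

    @Override
    public ListenableFuture<Void> closeServiceInstance() {
        // Ownership moved (e.g. node down): stop processing on this node.
        return Futures.immediateFuture(null);
    }
}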
   But of course, it would always be better to have the intelligence to make
one particular end-to-end call (e.g. boot VM1) go through one single node ;)
rather than making it a single node for all operations on all shards.
Thanks,
Faseela

From: Anil Vishnoi [mailto:vishnoia...@gmail.com]
Sent: Saturday, June 09, 2018 6:10 AM
To: Faseela K 
Cc: Tom Pantelis; Michael Vorburger; infrautils-...@lists.opendaylight.org;
controller-dev; genius-...@lists.opendaylight.org
Subject: Re: [controller-dev] [infrautils-dev] Sharding evolution

So, reading the wiki page, I was able to understand that there are two main
issues:

(1) Transaction management and rollback -- I was not able to figure out the
relevance to distributed shard location.
(2) Performance in a cluster -- if shard leaders are distributed, any
transaction will involve network latency, because transactions need to be
routed to the leader controller?

Let me know if there is any other reason that I missed from the wiki.

I think (2) is something that you are addressing by localizing the shard on one
controller? But that probably solves just one problem; you still have the
following problems if you really want to solve the overall problem:

(1) Ownership of OVSDB devices is distributed across the 3 nodes (and they all
depend on ClusteredDataTreeChangeListeners, which have a cost as well).
(2) Ownership of OpenFlow devices is distributed across the 3 nodes.
(3) Operational data replication across the three-node cluster also has a
cost, and if your business logic depends on that, it will hit the performance
as well.

So by locating the shards in one place, you might solve one (minor) problem in
the whole end-to-end stack to improve the performance. Probably the quickest
way to significantly improve the end-to-end performance is to force the OVSDB
and OpenFlow devices to be owned by the same controller as well. But if you do
that, the only remaining purpose of the cluster is data replication across two
more nodes, with the 2/3 performance hit in the data store :).



On Fri, Jun 8, 2018 at 5:06 PM, Faseela K <faseel...@ericsson.com> wrote:
[Changed the subject]

Anil, now you can ask ;)

https://wiki.opendaylight.org/view/Genius:Sharding_evolution

Thanks,
Faseela

From: Anil Vishnoi [mailto:vishnoia...@gmail.com]
Sent: Saturday, June 09, 2018 5:30 AM
To: Faseela K <faseel...@ericsson.com>
Cc: Tom Pantelis <tompante...@gmail.com>; Michael Vorburger <vorbur...@redhat.com>;
infrautils-...@lists.opendaylight.org; controller-dev
<controller-dev@lists.opendaylight.org>; genius-...@lists.opendaylight.org
Subject: Re: [controller-dev] [infrautils-dev] OK to resurrect c/64522 to first
move infrautils.DiagStatus integration for datastore from genius to controller,
and then improve it for GENIUS-138 ?



On Fri, Jun 8, 2018 at 4:50 PM, Faseela K <faseel...@ericsson.com> wrote:


From: Tom Pantelis [mailto:tompante...@gmail.com]
Sent: Saturday, June 09, 2018 2:24 AM
To: Anil Vishnoi <vishnoia...@gmail.com>
Cc: Faseela K <faseel...@ericsson.com>; Michael Vorburger <vorbur...@redhat.com>;
infrautils-...@lists.opendaylight.org; controller-dev
<controller-dev@lists.opendaylight.org>; genius-...@lists.opendaylight.org
Subject: Re: [controller-dev] [infrautils-dev] OK to resurrect c/64522 to first
move infrautils.DiagStatus integration for datastore from genius to controller,
and then improve it for GENIUS-138 ?



On Fri, Jun 8, 2018 at 3:11 PM, Anil Vishnoi <vishnoia...@gmail.com> wrote:


On Thu, Jun 7, 2018 at 11:39 AM, Faseela K <faseel...@ericsson.com> wrote:
Not related in this context, but if we can get shard leader change
notifications, can we use those to derive an entity owner instead of using EOS? ;)
Humble suggestion: don't use shard location/ownership status in your business
logic ;-)


+1. And avoid knowledge of, or assumptions about, shard names, member names ... :)

>> Of course we all like to avoid such complex logic in the application code.
>> In a 3 node cluster, for an application like netvirt which has to push a lot
>> of flows, plus a set of OVSDB configuration, based on some events coming
>> from neutron datastores (note that all of these are different config shards),
>> I am just trying to understand what is the best way to place things. It is
>> always good not to make application logic depend on the internals of the
>> infra, but is the only way then to collocate shards?

Re: [controller-dev] [infrautils-dev] Sharding evolution

2018-06-10 Thread Muthukumaran K
Hi Robert, 

>> Invalid data being written
This is certainly a coding error; no patch-up makes sense here.

>> The second one stems from application design: why is the application not
>> designed in a conflict-free manner
Fully agree!! There is no shortcut here. There is no better first step than a
single-writer approach, where the app that owns a specific part of the data
tree adheres to it strictly across the initial feature implementation,
enhancements, and bug fixes. Unless this foundation is in place, no other
patch-up(s) can be of help.

'Compensatory transactions' are again the domain of applications and are
orthogonal to the choice of standalone vs. chained txns, as well as to the
type of failure.

Regards
Muthu




-----Original Message-----
From: Robert Varga [mailto:n...@hq.sk] 
Sent: Saturday, June 09, 2018 5:37 PM
To: Muthukumaran K; Faseela K; Anil Vishnoi
Cc: infrautils-...@lists.opendaylight.org; controller-dev;
genius-...@lists.opendaylight.org
Subject: Re: [controller-dev] [infrautils-dev] Sharding evolution

Hello Muthu,

There are only two ways in which a transaction can fail aside from 'datastore 
is busted':
- invalid data being written
- conflicting activity outside of the causality chain

The first one is an obvious coding error, and I don't quite see how you'd
design a recovery strategy whose complexity does not exceed the complexity of
the normal path.

The second one stems from application design: why is the application not
designed in a conflict-free manner? And when a conflict occurs, how do you
know its nature and how to reconcile it?

You certainly can redo a failed transaction: it is only a matter of holding on
to the inputs, i.e. the DTCL view is immutable.

Nevertheless, if it's performance you are after, conflicts should happen once
in a blue moon...

Sent from my BlackBerry - the most secure mobile device - via the Orange Network

  Original Message
From: muthukumara...@ericsson.com
Sent: June 9, 2018 10:10 AM
To: n...@hq.sk; faseel...@ericsson.com; vishnoia...@gmail.com
Cc: infrautils-...@lists.opendaylight.org; 
controller-dev@lists.opendaylight.org; genius-...@lists.opendaylight.org
Subject: RE: [controller-dev] [infrautils-dev] Sharding evolution

Transaction chains are also useful for ensuring that the last txn is completed
before the next is executed, so that a subsequent txn can see the changes made
by the previous one (of course, within a single subtree) more efficiently. They
also enable single-writer discipline.

@Robert,

In the context of a txn chain, if 10 txns are submitted and a failure occurs
at the 5th txn, the chain provides a failure callback.
The most common pattern for apps is submitting txns to the chain from DTCLs or
CDTCLs. Assuming 10 change notifications resulted in 10 chain txn submits and
the chain fails the 5th txn for valid reasons, the apps have now lost the
context of the 5 txns which failed.

In such scenarios, what would be a better approach for apps to perform
compensatory actions for failed transactions when using a chain?

Regards
Muthu

-----Original Message-----
From: controller-dev-boun...@lists.opendaylight.org 
[mailto:controller-dev-boun...@lists.opendaylight.org] On Behalf Of Robert Varga
Sent: Saturday, June 09, 2018 6:25 AM
To: Faseela K; Anil Vishnoi
Cc: infrautils-...@lists.opendaylight.org; controller-dev;
genius-...@lists.opendaylight.org
Subject: Re: [controller-dev] [infrautils-dev] Sharding evolution

On 09/06/18 02:06, Faseela K wrote:
> [Changed the subject]
> 
>  
> 
> Anil, now you can ask ;)
> 
>  
> 
> https://wiki.opendaylight.org/view/Genius:Sharding_evolution
> 

MD-SAL long-term design:
https://wiki.opendaylight.org/view/MD-SAL:Boron:Conceptual_Data_Tree

Make sure to align your thinking with that... Splitting lists at MD-SAL runs 
into the problem of consistent hashing and scatter/gather operations:
- given a key I must know which shard it belongs to (and that determination has 
to be *quick*)
- anything crossing shards is subject to coordination, which is a *lot* less 
efficient than single-shard commits

If it's performance you are after:
- I cannot stress the importance of TransactionChains enough: if you cannot do 
them, you need to go back to the drawing board, as causality and shared fate 
*must* be properly expressed
- Avoid cross-shard transactions at (pretty much) all cost. I know of
*no* reason to commit to inventory and topology at the same time - if you have 
a use case which cannot be supported without it, please do describe it (and 
explain why it cannot be done)
- No synchronous operations, anywhere
- Producers (tx.submit()) are just one side of the equation; consumers
(DTCLs) are equally important

Regards,
Robert



Re: [controller-dev] [infrautils-dev] Sharding evolution

2018-06-09 Thread Robert Varga
Hello Muthu,

There are only two ways in which a transaction can fail aside from 'datastore 
is busted':
- invalid data being written
- conflicting activity outside of the causality chain

The first one is an obvious coding error, and I don't quite see how you'd
design a recovery strategy whose complexity does not exceed the complexity of
the normal path.

The second one stems from application design: why is the application not
designed in a conflict-free manner? And when a conflict occurs, how do you
know its nature and how to reconcile it?

You certainly can redo a failed transaction: it is only a matter of holding on
to the inputs, i.e. the DTCL view is immutable.
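
A rough sketch of that redo idea, assuming the controller md-sal binding API;
this is an illustration under those assumptions, not anyone's actual
implementation. The listener keeps the immutable change batch it was handed
and simply replays its derived write if the commit fails:

import java.util.Collection;
import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.MoreExecutors;
import org.opendaylight.controller.md.sal.binding.api.DataBroker;
import org.opendaylight.controller.md.sal.binding.api.DataTreeChangeListener;
import org.opendaylight.controller.md.sal.binding.api.DataTreeModification;
import org.opendaylight.controller.md.sal.binding.api.WriteTransaction;
import org.opendaylight.controller.md.sal.common.api.data.LogicalDatastoreType;
import org.opendaylight.yangtools.yang.binding.DataObject;

public class RedoingListener<T extends DataObject> implements DataTreeChangeListener<T> {
    private final DataBroker broker;

    public RedoingListener(DataBroker broker) {
        this.broker = broker;
    }

    @Override
    public void onDataTreeChanged(Collection<DataTreeModification<T>> changes) {
        apply(changes, 0);
    }

    private void apply(Collection<DataTreeModification<T>> changes, int attempt) {
        WriteTransaction tx = broker.newWriteOnlyTransaction();
        for (DataTreeModification<T> change : changes) {
            T after = change.getRootNode().getDataAfter();
            if (after != null) {
                // Toy write derived from the change; a real app computes its
                // own output here instead of echoing the data back.
                tx.put(LogicalDatastoreType.CONFIGURATION,
                        change.getRootPath().getRootIdentifier(), after);
            }
        }
        Futures.addCallback(tx.submit(), new FutureCallback<Void>() {
            @Override
            public void onSuccess(Void result) {
                // committed; the retained inputs can be dropped
            }

            @Override
            public void onFailure(Throwable cause) {
                // 'changes' is still at hand and immutable, so redo (bounded)
                if (attempt < 3) {
                    apply(changes, attempt + 1);
                }
            }
        }, MoreExecutors.directExecutor());
    }
}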

Nevertheless, if it's performance you are after, conflicts should happen once
in a blue moon...

Sent from my BlackBerry - the most secure mobile device - via the Orange Network

  Original Message  
From: muthukumara...@ericsson.com
Sent: June 9, 2018 10:10 AM
To: n...@hq.sk; faseel...@ericsson.com; vishnoia...@gmail.com
Cc: infrautils-...@lists.opendaylight.org; 
controller-dev@lists.opendaylight.org; genius-...@lists.opendaylight.org
Subject: RE: [controller-dev] [infrautils-dev] Sharding evolution

Transaction chains are also useful for ensuring that the last txn is completed
before the next is executed, so that a subsequent txn can see the changes made
by the previous one (of course, within a single subtree) more efficiently. They
also enable single-writer discipline.

@Robert,

In the context of a txn chain, if 10 txns are submitted and a failure occurs
at the 5th txn, the chain provides a failure callback.
The most common pattern for apps is submitting txns to the chain from DTCLs or
CDTCLs. Assuming 10 change notifications resulted in 10 chain txn submits and
the chain fails the 5th txn for valid reasons, the apps have now lost the
context of the 5 txns which failed.

In such scenarios, what would be a better approach for apps to perform
compensatory actions for failed transactions when using a chain?

Regards
Muthu

-----Original Message-----
From: controller-dev-boun...@lists.opendaylight.org 
[mailto:controller-dev-boun...@lists.opendaylight.org] On Behalf Of Robert Varga
Sent: Saturday, June 09, 2018 6:25 AM
To: Faseela K; Anil Vishnoi
Cc: infrautils-...@lists.opendaylight.org; controller-dev;
genius-...@lists.opendaylight.org
Subject: Re: [controller-dev] [infrautils-dev] Sharding evolution

On 09/06/18 02:06, Faseela K wrote:
> [Changed the subject]
> 
>  
> 
> Anil, now you can ask ;)
> 
>  
> 
> https://wiki.opendaylight.org/view/Genius:Sharding_evolution
> 

MD-SAL long-term design:
https://wiki.opendaylight.org/view/MD-SAL:Boron:Conceptual_Data_Tree

Make sure to align your thinking with that... Splitting lists at MD-SAL runs 
into the problem of consistent hashing and scatter/gather operations:
- given a key I must know which shard it belongs to (and that determination has 
to be *quick*)
- anything crossing shards is subject to coordination, which is a *lot* less 
efficient than single-shard commits

If it's performance you are after:
- I cannot stress the importance of TransactionChains enough: if you cannot do 
them, you need to go back to the drawing board, as causality and shared fate 
*must* be properly expressed
- Avoid cross-shard transactions at (pretty much) all cost. I know of
*no* reason to commit to inventory and topology at the same time - if you have 
a use case which cannot be supported without it, please do describe it (and 
explain why it cannot be done)
- No synchronous operations, anywhere
- Producers (tx.submit()) are just one side of the equation; consumers
(DTCLs) are equally important

Regards,
Robert



Re: [controller-dev] [infrautils-dev] Sharding evolution

2018-06-09 Thread Muthukumaran K
Transaction chains are also useful for ensuring that the last txn is completed
before the next is executed, so that a subsequent txn can see the changes made
by the previous one (of course, within a single subtree) more efficiently. They
also enable single-writer discipline.

@Robert,

In the context of a txn chain, if 10 txns are submitted and a failure occurs
at the 5th txn, the chain provides a failure callback.
The most common pattern for apps is submitting txns to the chain from DTCLs or
CDTCLs. Assuming 10 change notifications resulted in 10 chain txn submits and
the chain fails the 5th txn for valid reasons, the apps have now lost the
context of the 5 txns which failed.

In such scenarios, what would be a better approach for apps to perform
compensatory actions for failed transactions when using a chain?
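
For concreteness, a minimal sketch of that failure callback, assuming the
controller md-sal binding API (createTransactionChain and
TransactionChainListener); the pendingInputs queue and compensate() are
hypothetical application-side bookkeeping, sketching one way to keep the
context that would otherwise be lost:

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import org.opendaylight.controller.md.sal.binding.api.BindingTransactionChain;
import org.opendaylight.controller.md.sal.binding.api.DataBroker;
import org.opendaylight.controller.md.sal.common.api.data.AsyncTransaction;
import org.opendaylight.controller.md.sal.common.api.data.TransactionChain;
import org.opendaylight.controller.md.sal.common.api.data.TransactionChainListener;

public class ChainedWriter implements TransactionChainListener {
    // Hypothetical bookkeeping: inputs are retained until their txns commit,
    // so a chain failure does not lose the context of in-flight txns.
    private final Queue<Object> pendingInputs = new ConcurrentLinkedQueue<>();
    private final DataBroker broker;
    private volatile BindingTransactionChain chain;

    public ChainedWriter(DataBroker broker) {
        this.broker = broker;
        this.chain = broker.createTransactionChain(this);
    }

    @Override
    public void onTransactionChainFailed(TransactionChain<?, ?> failed,
            AsyncTransaction<?, ?> transaction, Throwable cause) {
        // In the 10-txn example, this fires for the failed 5th txn and the
        // chain becomes unusable. Compensate or replay from the retained
        // inputs on a fresh chain instead of losing their context.
        failed.close();
        chain = broker.createTransactionChain(this);
        compensate(pendingInputs);
    }

    @Override
    public void onTransactionChainSuccessful(TransactionChain<?, ?> completed) {
        pendingInputs.clear();
    }

    private void compensate(Queue<Object> inputs) {
        // application-specific: replay, or write compensating updates
    }
}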

Regards
Muthu

-----Original Message-----
From: controller-dev-boun...@lists.opendaylight.org 
[mailto:controller-dev-boun...@lists.opendaylight.org] On Behalf Of Robert Varga
Sent: Saturday, June 09, 2018 6:25 AM
To: Faseela K; Anil Vishnoi
Cc: infrautils-...@lists.opendaylight.org; controller-dev;
genius-...@lists.opendaylight.org
Subject: Re: [controller-dev] [infrautils-dev] Sharding evolution

On 09/06/18 02:06, Faseela K wrote:
> [Changed the subject]
> 
>  
> 
> Anil, now you can ask ;)
> 
>  
> 
> https://wiki.opendaylight.org/view/Genius:Sharding_evolution
> 

MD-SAL long-term design:
https://wiki.opendaylight.org/view/MD-SAL:Boron:Conceptual_Data_Tree

Make sure to align your thinking with that... Splitting lists at MD-SAL runs 
into the problem of consistent hashing and scatter/gather operations:
- given a key I must know which shard it belongs to (and that determination has 
to be *quick*)
- anything crossing shards is subject to coordination, which is a *lot* less 
efficient than single-shard commits

If it's performance you are after:
- I cannot stress the importance of TransactionChains enough: if you cannot do 
them, you need to go back to the drawing board, as causality and shared fate 
*must* be properly expressed
- Avoid cross-shard transactions at (pretty much) all cost. I know of
*no* reason to commit to inventory and topology at the same time - if you have 
a use case which cannot be supported without it, please do describe it (and 
explain why it cannot be done)
- No synchronous operations, anywhere
- Producers (tx.submit()) are just one side of the equation; consumers
(DTCLs) are equally important

Regards,
Robert



Re: [controller-dev] [infrautils-dev] Sharding evolution

2018-06-08 Thread Robert Varga
On 09/06/18 02:06, Faseela K wrote:
> [Changed the subject]
> 
>  
> 
> Anil, now you can ask ;)
> 
>  
> 
> https://wiki.opendaylight.org/view/Genius:Sharding_evolution
> 

MD-SAL long-term design:
https://wiki.opendaylight.org/view/MD-SAL:Boron:Conceptual_Data_Tree

Make sure to align your thinking with that... Splitting lists at MD-SAL
runs into the problem of consistent hashing and scatter/gather operations:
- given a key I must know which shard it belongs to (and that
determination has to be *quick* - a toy lookup sketch follows below)
- anything crossing shards is subject to coordination, which is a *lot*
less efficient than single-shard commits
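
As a toy illustration of the first point (explicitly not an MD-SAL API):
shard resolution amounts to a longest-prefix match over the identifier space,
and it sits on the hot path of every single write, which is why it has to be
cheap.

import java.util.Map;
import java.util.TreeMap;

final class ShardResolver {
    // Hypothetical prefix -> shard-name table; not an MD-SAL class.
    private final TreeMap<String, String> prefixToShard = new TreeMap<>();

    void addShard(String prefix, String shard) {
        prefixToShard.put(prefix, shard);
    }

    String resolve(String path) {
        // floorEntry() finds the closest key <= path in O(log n); walk down
        // until the candidate really is a prefix of the path.
        Map.Entry<String, String> e = prefixToShard.floorEntry(path);
        while (e != null && !path.startsWith(e.getKey())) {
            e = prefixToShard.lowerEntry(e.getKey());
        }
        return e != null ? e.getValue() : "default";
    }
}

Anything that cannot be answered this cheaply (say, a lookup needing a remote
call) would put coordination on every write path, which is exactly the second
point above.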

If it's performance you are after:
- I cannot stress the importance of TransactionChains enough: if you
cannot do them, you need to go back to the drawing board, as causality
and shared fate *must* be properly expressed
- Avoid cross-shard transactions at (pretty much) all cost. I know of
*no* reason to commit to inventory and topology at the same time - if
you have a use case which cannot be supported without it, please do
describe it (and explain why it cannot be done)
- No synchronous operations, anywhere
- Producers (tx.submit()) are just one side of the equation; consumers
(DTCLs) are equally important - a producer-side chain sketch follows below
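
A minimal producer-side sketch along those lines, assuming the controller
md-sal binding API; the paths and payloads are caller-supplied placeholders.
Causally-ordered writes go through one chain, submitted asynchronously, with
no blocking anywhere:

import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.MoreExecutors;
import org.opendaylight.controller.md.sal.binding.api.BindingTransactionChain;
import org.opendaylight.controller.md.sal.binding.api.WriteTransaction;
import org.opendaylight.controller.md.sal.common.api.data.LogicalDatastoreType;
import org.opendaylight.yangtools.yang.binding.DataObject;
import org.opendaylight.yangtools.yang.binding.InstanceIdentifier;

public final class ChainProducer {
    private final BindingTransactionChain chain;

    public ChainProducer(BindingTransactionChain chain) {
        this.chain = chain;
    }

    public <T extends DataObject> void writeInOrder(InstanceIdentifier<T> pathA,
            T first, InstanceIdentifier<T> pathB, T second) {
        // Txn 1: the chain guarantees txn 2 will observe its effects.
        WriteTransaction tx1 = chain.newWriteOnlyTransaction();
        tx1.put(LogicalDatastoreType.CONFIGURATION, pathA, first);
        submitAsync(tx1);

        // Txn 2 is allocated immediately - no blocking read or sync submit.
        WriteTransaction tx2 = chain.newWriteOnlyTransaction();
        tx2.put(LogicalDatastoreType.CONFIGURATION, pathB, second);
        submitAsync(tx2);
    }

    private static void submitAsync(WriteTransaction tx) {
        Futures.addCallback(tx.submit(), new FutureCallback<Void>() {
            @Override
            public void onSuccess(Void result) {
                // committed
            }

            @Override
            public void onFailure(Throwable cause) {
                // also surfaces through the chain's TransactionChainListener
            }
        }, MoreExecutors.directExecutor());
    }
}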

Regards,
Robert





Re: [controller-dev] [infrautils-dev] Sharding evolution

2018-06-08 Thread Anil Vishnoi
So, reading the wiki page, I was able to understand that there are two main
issues:

(1) Transaction management and rollback -- I was not able to figure out the
relevance to distributed shard location.
(2) Performance in a cluster -- if shard leaders are distributed, any
transaction will involve network latency, because transactions need to be
routed to the leader controller?

Let me know if there is any other reason that I missed from the wiki.

I think (2) is something that you are addressing by localizing the shard on one
controller? But that probably solves just one problem; you still have the
following problems if you really want to solve the overall problem:

(1) Ownership of OVSDB devices is distributed across the 3 nodes (and they all
depend on ClusteredDataTreeChangeListeners, which have a cost as well).
(2) Ownership of OpenFlow devices is distributed across the 3 nodes.
(3) Operational data replication across the three-node cluster also has a
cost, and if your business logic depends on that, it will hit the performance
as well.

So by locating the shards in one place, you might solve one (minor) problem in
the whole end-to-end stack to improve the performance. Probably the quickest
way to significantly improve the end-to-end performance is to force the OVSDB
and OpenFlow devices to be owned by the same controller as well. But if you do
that, the only remaining purpose of the cluster is data replication across two
more nodes, with the 2/3 performance hit in the data store :).
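
For reference, a minimal sketch of the clustered listener registration that
point (1) alludes to, assuming the controller md-sal binding API; unlike a
plain DTCL, a clustered listener fires on every node, owner or not, which is
part of the cost being described:

import java.util.Collection;
import org.opendaylight.controller.md.sal.binding.api.ClusteredDataTreeChangeListener;
import org.opendaylight.controller.md.sal.binding.api.DataBroker;
import org.opendaylight.controller.md.sal.binding.api.DataTreeIdentifier;
import org.opendaylight.controller.md.sal.binding.api.DataTreeModification;
import org.opendaylight.controller.md.sal.common.api.data.LogicalDatastoreType;
import org.opendaylight.yangtools.concepts.ListenerRegistration;
import org.opendaylight.yangtools.yang.binding.DataObject;
import org.opendaylight.yangtools.yang.binding.InstanceIdentifier;

public class ClusteredListener<T extends DataObject>
        implements ClusteredDataTreeChangeListener<T> {

    public ListenerRegistration<ClusteredListener<T>> register(DataBroker broker,
            InstanceIdentifier<T> path) {
        // Registered on all three nodes; each replica delivers the change
        // locally, so the notification itself is replicated work.
        return broker.registerDataTreeChangeListener(
                new DataTreeIdentifier<>(LogicalDatastoreType.OPERATIONAL, path), this);
    }

    @Override
    public void onDataTreeChanged(Collection<DataTreeModification<T>> changes) {
        // typically gated by an EOS ownership check before acting
    }
}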



On Fri, Jun 8, 2018 at 5:06 PM, Faseela K  wrote:

> [Changed the subject]
>
>
>
> Anil, now you can ask ;)
>
>
>
> https://wiki.opendaylight.org/view/Genius:Sharding_evolution
>
>
>
> Thanks,
>
> Faseela
>
>
>
> From: Anil Vishnoi [mailto:vishnoia...@gmail.com]
> Sent: Saturday, June 09, 2018 5:30 AM
> To: Faseela K
> Cc: Tom Pantelis; Michael Vorburger <vorbur...@redhat.com>;
> infrautils-...@lists.opendaylight.org; controller-dev;
> genius-...@lists.opendaylight.org
> Subject: Re: [controller-dev] [infrautils-dev] OK to resurrect c/64522
> to first move infrautils.DiagStatus integration for datastore from genius
> to controller, and then improve it for GENIUS-138 ?
>
> On Fri, Jun 8, 2018 at 4:50 PM, Faseela K  wrote:
>
>
>
>
>
> From: Tom Pantelis [mailto:tompante...@gmail.com]
> Sent: Saturday, June 09, 2018 2:24 AM
> To: Anil Vishnoi
> Cc: Faseela K; Michael Vorburger <vorbur...@redhat.com>;
> infrautils-...@lists.opendaylight.org; controller-dev;
> genius-...@lists.opendaylight.org
> Subject: Re: [controller-dev] [infrautils-dev] OK to resurrect c/64522
> to first move infrautils.DiagStatus integration for datastore from genius
> to controller, and then improve it for GENIUS-138 ?
>
> On Fri, Jun 8, 2018 at 3:11 PM, Anil Vishnoi wrote:
>
> On Thu, Jun 7, 2018 at 11:39 AM, Faseela K  wrote:
>
> Not related in this context, but if we can get shard leader change
> notifications, can we use those to derive an entity owner instead of using
> EOS? ;)
>
> Humble suggestion: don't use shard location/ownership status in your
> business logic ;-)
>
> +1. And avoid knowledge of, or assumptions about, shard names, member names ... :)
>
> >> Of course we all like to avoid such complex logic in the application
> code. In a 3 node cluster, for an application like netvirt which has to
> push a lot of flows, plus a set of OVSDB configuration, based on some
> events coming from neutron datastores (note that all of these are different
> config shards), I am just trying to understand what is the best way to
> place things. It is always good not to make application logic depend on
> the internals of the infra, but is the only way then to collocate shards?
>
> I have a few questions around what led to the conclusion that putting all
> the shards on one node is the only solution, but I don't want to hijack
> this thread with that topic :).
>
> Thanks,
>
> Faseela
>
> From: infrautils-dev-boun...@lists.opendaylight.org
> [mailto:infrautils-dev-boun...@lists.opendaylight.org] On Behalf Of Tom Pantelis
> Sent: Friday, June 08, 2018 12:07 AM
> To: Michael Vorburger
> Cc: infrautils-...@lists.opendaylight.org; controller-dev
> <controller-dev@lists.opendaylight.org>; genius-...@lists.opendaylight.org;
> Robert Varga
> Subject: Re: [infrautils-dev] [controller-dev] OK to resurrect c/64522
> to first move infrautils.DiagStatus integration for datastore from genius
> to controller, and then improve it for GENIUS-138 ?
>
> --
>
> Thanks
>
> Anil
>



-- 
Thanks
Anil


Re: [controller-dev] [infrautils-dev] Sharding evolution

2018-06-08 Thread Faseela K
[Changed the subject]

Anil, now you can ask ;)

https://wiki.opendaylight.org/view/Genius:Sharding_evolution

Thanks,
Faseela

From: Anil Vishnoi [mailto:vishnoia...@gmail.com]
Sent: Saturday, June 09, 2018 5:30 AM
To: Faseela K 
Cc: Tom Pantelis; Michael Vorburger; infrautils-...@lists.opendaylight.org;
controller-dev; genius-...@lists.opendaylight.org
Subject: Re: [controller-dev] [infrautils-dev] OK to resurrect c/64522 to first 
move infrautils.DiagStatus integration for datastore from genius to controller, 
and then improve it for GENIUS-138 ?



On Fri, Jun 8, 2018 at 4:50 PM, Faseela K <faseel...@ericsson.com> wrote:


From: Tom Pantelis [mailto:tompante...@gmail.com]
Sent: Saturday, June 09, 2018 2:24 AM
To: Anil Vishnoi <vishnoia...@gmail.com>
Cc: Faseela K <faseel...@ericsson.com>; Michael Vorburger <vorbur...@redhat.com>;
infrautils-...@lists.opendaylight.org; controller-dev
<controller-dev@lists.opendaylight.org>; genius-...@lists.opendaylight.org
Subject: Re: [controller-dev] [infrautils-dev] OK to resurrect c/64522 to first
move infrautils.DiagStatus integration for datastore from genius to controller,
and then improve it for GENIUS-138 ?



On Fri, Jun 8, 2018 at 3:11 PM, Anil Vishnoi <vishnoia...@gmail.com> wrote:


On Thu, Jun 7, 2018 at 11:39 AM, Faseela K <faseel...@ericsson.com> wrote:
Not related in this context, but if we can get shard leader change
notifications, can we use those to derive an entity owner instead of using EOS? ;)
Humble suggestion: don't use shard location/ownership status in your business
logic ;-)


+1. And avoid knowledge of, or assumptions about, shard names, member names ... :)

>> Of course we all like to avoid such complex logic in the application code.
>> In a 3 node cluster, for an application like netvirt which has to push a lot
>> of flows, plus a set of OVSDB configuration, based on some events coming
>> from neutron datastores (note that all of these are different config shards),
>> I am just trying to understand what is the best way to place things. It is
>> always good not to make application logic depend on the internals of the
>> infra, but is the only way then to collocate shards?
I have a few questions around what led to the conclusion that putting all the
shards on one node is the only solution, but I don't want to hijack this
thread with that topic :).


Thanks,
Faseela

From: infrautils-dev-boun...@lists.opendaylight.org
[mailto:infrautils-dev-boun...@lists.opendaylight.org] On Behalf Of Tom Pantelis
Sent: Friday, June 08, 2018 12:07 AM
To: Michael Vorburger <vorbur...@redhat.com>
Cc: infrautils-...@lists.opendaylight.org; controller-dev
<controller-dev@lists.opendaylight.org>; genius-...@lists.opendaylight.org;
Robert Varga <n...@hq.sk>
Subject: Re: [infrautils-dev] [controller-dev] OK to resurrect c/64522 to first
move infrautils.DiagStatus integration for datastore from genius to controller,
and then improve it for GENIUS-138 ?




--
Thanks
Anil