Is the router a software component running on every node in the cluster?
Does the router then redirect all requests to the same cache container for all 
tenants? If so, how is isolation achieved?
Or does each tenant effectively get its own cache container and is therefore 
"physically" isolated?
Or is this configuration dependent (the mapping from an endpoint to a cache 
container), so that some tenants could share the same cache container? In that 
case, would they see the same data?

Finally, I think the design should allow for "dynamic" tenant configuration, 
meaning that I don't have to change the configuration manually when I add a new 
customer / tenant.
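To illustrate what I mean, something roughly like the following would be ideal. 
This is only a hypothetical sketch, nothing like it exists in Infinispan today; 
the names and signatures are invented:

// Hypothetical API sketch only -- it just illustrates the kind of "dynamic"
// tenant configuration I have in mind, not a real or proposed Infinispan API.
public interface TenantRouter {

    // Register a new tenant at runtime: the router starts mapping requests for
    // this SNI host name (Hot Rod) or path prefix (REST) to the given cache
    // container, without any manual edit of a static configuration file.
    void registerTenant(String tenantName, String sniHostName, String cacheContainerName);

    // Remove a tenant at runtime; its endpoint mapping disappears from the router.
    void unregisterTenant(String tenantName);
}

// Expected usage when a new customer signs up:
//   router.registerTenant("acme", "acme.example.com", "acme-container");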

That's all, and sorry for the naive questions :)

> On 29 avr. 2016, at 17:29, Sebastian Laskawiec <slask...@redhat.com> wrote:
> 
> Dear Community,
> 
> Please have a look at the design of Multi tenancy support for Infinispan [1]. 
> I would be more than happy to get some feedback from you.
> 
> Highlights:
> - The implementation will be based on a Router (which will be built based on Netty)
> - Multiple Hot Rod and REST servers will be attached to the router which in turn will be attached to the endpoint
> - The router will operate on a binary protocol when using Hot Rod clients and path-based routing when using REST
> - Memcached will be out of scope
> - The router will support SSL+SNI
> Thanks
> Sebastian
> 
> [1] 
> https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server
_______________________________________________
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev
