Rodrigo, you are correct; that results in co-location hell. It also requires
careful calculations ahead of time to ensure that you can co-locate the
corresponding data partitions (in the case of 1:N, where we have a high-demand
database) at all times, even in the case of node failures. Moreover if
On Thu, Nov 9, 2017 at 2:27 AM, wrote:
> We measured that we lose at least one order of magnitude in terms of latency,
> which is our key KPI in this setup.
If you are moving from a model where your apps were guaranteed to be
co-located into Kubernetes, you are going to
We measured that we lose at least one order of magnitude in terms of latency,
which is our key KPI in this setup.
Pods are not always on the same host, but we play with co-location a lot,
mainly 1:1.
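For what it's worth, the 1:1 co-location we do is expressed with pod affinity; a minimal sketch (label values and image name are illustrative, not our actual manifests):

```yaml
# Schedule the "app" pod onto the same node as its database pod.
apiVersion: v1
kind: Pod
metadata:
  name: app
  labels:
    app: frontend
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: database
          # co-locate at node granularity
          topologyKey: kubernetes.io/hostname
  containers:
    - name: app
      image: example/app
```

Note the `required...` variant: if no node with a matching database pod has room, the app pod stays Pending, which is exactly the "careful calculations ahead of time" problem mentioned above.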
Matthias, Tim, I will get back to you with a decent benchmark on our bare-metal machines.
Thanks so far for
Are you concerned about perf because you measured it? Or because you
suspect it might become a thing later?
Are you really sure that your pods will ALWAYS be on the same host?
Are your pods 1:1 or 1:N relationships?
Could these highly-connected pods just be one bigger pod?
To be sure, there's
How big is the overhead from going through the bridge normally?
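One rough way to answer that question: run a tiny round-trip probe (a hypothetical helper, not something from this thread) once with both ends in the same pod or on localhost, and once across two pods, and compare the mean RTT:

```python
# Minimal TCP ping-pong latency probe. Run serve() in one pod and
# probe() in another, then repeat with both on one host; the difference
# approximates the per-hop cost of the pod network (bridge, veth, ...).
import socket
import time

MSG = b"x" * 64  # small payload: we want latency, not bandwidth


def serve(host="0.0.0.0", port=5000):
    with socket.create_server((host, port)) as srv:
        conn, _ = srv.accept()
        with conn:
            conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
            while data := conn.recv(64):
                conn.sendall(data)  # echo back


def probe(host, port=5000, rounds=1000):
    with socket.create_connection((host, port)) as s:
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        start = time.perf_counter()
        for _ in range(rounds):
            s.sendall(MSG)
            buf = b""
            while len(buf) < len(MSG):
                buf += s.recv(64)
        return (time.perf_counter() - start) / rounds  # mean RTT, seconds
```

This only measures the TCP path; it says nothing about shared-memory alternatives, but it makes "one order of magnitude" claims concrete.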
On Wed, Nov 8, 2017, 19:08 wrote:
> Thanks Tim,
>
> Do you know of any technique(s) to speed up the network between pods
> (probably co-located onto the same machine)? Shared memory communication
> seems to be a good candidate within pods.
Thanks Tim,
Do you know of any technique(s) to speed up the network between pods (probably
co-located onto the same machine)? Shared memory communication seems to be a
good candidate within pods.
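To illustrate what "shared memory communication" means in practice, here is a minimal sketch using Python 3.8+'s `multiprocessing.shared_memory` (segment name is made up). Two processes that can see the same `/dev/shm`, e.g. two containers in one pod with a shared memory-backed volume, can hand data over without touching the network stack:

```python
# Producer creates a named shared-memory segment and writes into it;
# a consumer attaches to the same name and reads, with zero copies
# through the kernel network path.
from multiprocessing import shared_memory

# Producer side
shm = shared_memory.SharedMemory(create=True, size=1024, name="demo_seg")
shm.buf[:5] = b"hello"

# Consumer side (in reality, a separate process attached by name)
peer = shared_memory.SharedMemory(name="demo_seg")
data = bytes(peer.buf[:5])

peer.close()
shm.close()
shm.unlink()  # remove the segment when done
```

Across pods this does not work out of the box, since each pod gets its own IPC namespace and `/dev/shm`, which is exactly the limitation raised below.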
On Wednesday, November 8, 2017 at 6:42:39 PM UTC+1, Tim Hockin wrote:
> Pods should make very few
Currently it is possible to share an IPC namespace (and thus shared memory)
between containers within a pod, but not between pods.
Is this something that will be supported in the future? Or does it go against
the very design of Kubernetes?
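For reference, the within-pod sharing I mean looks roughly like this (names and images are illustrative): a memory-backed `emptyDir` mounted at `/dev/shm` in both containers, so they see the same segments.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shm-demo
spec:
  volumes:
    - name: shm
      emptyDir:
        medium: Memory   # tmpfs, i.e. RAM-backed
  containers:
    - name: producer
      image: example/producer
      volumeMounts:
        - name: shm
          mountPath: /dev/shm
    - name: consumer
      image: example/consumer
      volumeMounts:
        - name: shm
          mountPath: /dev/shm
```

There is no equivalent spanning two pods, which is what prompts my question.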
What is the general opinion of the Community on this?
Thanks,
Z