But the OSDs themselves introduce latency too, even if they are NVMe.
We find that it is in the same ballpark as the network latency. Latency
does reduce I/O, but at sub-millisecond levels a single thread still
achieves thousands of IOPS. For a use case with many concurrent
writers/readers (VMs), it is the aggregated throughput that matters.
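As a quick back-of-the-envelope check (a sketch in Python; the latency
figures below are assumed examples, not measurements from this thread):

    # One synchronous writer completes at most one I/O per round trip,
    # so its IOPS ceiling is roughly 1 / latency.
    for latency_ms in (0.2, 0.5, 1.0):
        iops = 1000.0 / latency_ms
        print(f"{latency_ms} ms/op -> ~{iops:.0f} IOPS for a single thread")

    # Many concurrent clients (VMs) overlap their waits, so the
    # aggregate can be far higher than any single thread's throughput.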
Your data centers seem to be pretty close, some 13-14 km? If it is a
more or less straight fiber run, latency should be around 0.1-0.2 ms,
which is clearly not a problem for synchronous replication. It should
work rather well.
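That estimate is easy to sanity-check (assuming light in fiber travels
at roughly 200,000 km/s; real fiber paths are often somewhat longer
than the straight-line distance):

    # Propagation delay over fiber: roughly 200 km per millisecond.
    for km in (13, 14):
        one_way_ms = km / 200.0
        print(f"{km} km: ~{one_way_ms:.3f} ms one way, "
              f"~{2 * one_way_ms:.3f} ms round trip")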
With "only" 2 data centers however, you need to manually decide if t
On 01/29/2018 07:26 PM, Nico Schottelius wrote:
Hey Wido,
> [...]
> Like I said, latency, latency, latency. That's what matters. Bandwidth
> usually isn't a real problem.
I assumed as much.
> What latency do you have with an 8k ping between hosts?
As the link will be set up this week, I cannot tell yet.
However, on a 65 km link we currently have [...]
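Once the link is up, a plain iputils ping should answer Wido's
question; -s sets the ICMP payload (8 KiB here, which will fragment at
a standard 1500-byte MTU) and -c the packet count. The host name is a
placeholder:

    ping -c 20 -s 8192 <remote-host>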
On 01/29/2018 06:33 PM, Nico Schottelius wrote:
Good evening list,
we are soon expanding our data center [0] to a new location [1].
We are mainly offering VPS / VM Hosting, so rbd is our main interest.
We have a low-latency 10 Gbit/s link between our other location [2] and
the new one, and we are wondering what the best practice for expanding
is.
Naturally [...]
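For the rbd pools themselves, the usual building block for a 2-site
setup is a CRUSH rule that spreads replicas across both data centers.
A sketch for a decompiled crushmap, assuming the OSD hosts are grouped
under two "datacenter" buckets (rule name and id are made up):

    # Pick 2 datacenters, then 2 hosts in each; with pool size = 4 this
    # keeps 2 replicas per site, so either site still has 2 copies if
    # the other is lost.
    rule replicated_across_dcs {
        id 1
        type replicated
        min_size 2
        max_size 4
        step take default
        step choose firstn 2 type datacenter
        step chooseleaf firstn 2 type host
        step emit
    }

Compiled and injected with crushtool and ceph osd setcrushmap as usual,
with the pool's size set to 4.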