On Wed, 17 Apr 2019 16:08:34 +0200 Lars Täuber wrote:
Wed, 17 Apr 2019 20:01:28 +0900
Christian Balzer ==> Ceph Users :
On Wed, 17 Apr 2019 11:22:08 +0200 Lars Täuber wrote:
Wed, 17 Apr 2019 10:47:32 +0200
Paul Emmerich ==> Lars Täuber :
> The standard argument that it helps prevent recovery traffic from
> clogging the network and impacting client traffic is misleading:
What do you mean by "it"? I don't know the standard argument.
Do you mean separating the
Quoting Lars Täuber (taeu...@bbaw.de):
> > > This is something I was told to do, because a reconstruction of failed
> > > OSDs/disks would have a heavy impact on the backend network.
> >
> > Opinions vary on running "public" only versus "public" / "backend".
> > Having a separate "backend"
On Wed, Apr 17, 2019 at 7:56 AM Lars Täuber wrote:
>
> Thanks Paul for the judgement.
>
25 Gbit/s doesn't have a significant latency advantage over 10 Gbit/s.
For reference: a point-to-point 10 Gbit/s fiber link takes around 300
ns of processing for rx+tx on standard Intel X520 NICs (measured it),
so not much to save here.
Then there's serialization latency which changes from
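Paul's point about serialization latency can be put in numbers. A minimal sketch of the arithmetic; the 4 KiB payload size is an illustrative assumption, not a Ceph constant:

```python
# Wire (serialization) latency: the time to clock a payload onto the link.
# Assumption for illustration: a 4096-byte payload, overheads ignored.

def serialization_us(payload_bytes: int, gbit_per_s: float) -> float:
    """Microseconds needed to serialize payload_bytes at the given line rate."""
    return payload_bytes * 8 / (gbit_per_s * 1e9) * 1e6

print(round(serialization_us(4096, 10), 2))  # ~3.28 us at 10 Gbit/s
print(round(serialization_us(4096, 25), 2))  # ~1.31 us at 25 Gbit/s
```

So moving from 10 to 25 Gbit/s saves roughly 2 microseconds per 4 KiB transfer, which is small next to typical Ceph OSD request latencies in the hundreds of microseconds or more.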
The standard argument that it helps prevent recovery traffic from
clogging the network and impacting client traffic is misleading:
* write client traffic relies on the backend network for replication
operations: your client (write) traffic is impacted anyway if the
backend network is full
*
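For context, the split being discussed is configured in ceph.conf: a single-network setup defines only a public network, while the split setup adds a cluster network for replication and recovery traffic. A minimal sketch, with hypothetical subnets:

```ini
[global]
# Single-network setup: client and replication traffic share one subnet.
public network = 192.168.10.0/24

# A split setup would additionally route replication/recovery traffic
# over a second subnet (the subnets here are illustrative):
# cluster network = 192.168.20.0/24
```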
Wed, 17 Apr 2019 09:52:29 +0200
Stefan Kooman ==> Lars Täuber :
Quoting Lars Täuber (taeu...@bbaw.de):
> > I'd probably only use the 25G network for both networks instead of
> > using both. Splitting the network usually doesn't help.
>
> This is something I was told to do, because a reconstruction of failed
> OSDs/disks would have a heavy impact on the
Thanks Paul for the judgement.
Tue, 16 Apr 2019 10:13:03 +0200
Paul Emmerich ==> Lars Täuber :
> Seems in line with what I'd expect for the hardware.
>
> Your hardware seems to be way overspecced, you'd be fine with half the
> RAM, half the CPU and way cheaper disks.
Do you mean all the
Seems in line with what I'd expect for the hardware.
Your hardware seems to be way overspecced, you'd be fine with half the
RAM, half the CPU and way cheaper disks.
In fact, a good SATA 4kn disk can be faster than a SAS 512e disk.
I'd probably only use the 25G network for both networks instead
Hi there,
I'm new to Ceph and just got my first cluster running.
Now I'd like to know whether the performance we get is what one should expect.
Is there a website with benchmark results somewhere where I could have a look
to compare with our HW and our results?
These are the results:
rados bench single
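For readers who want to reproduce this kind of measurement: a typical `rados bench` run against a dedicated throwaway pool looks roughly like the following. This is a sketch that assumes a running cluster, admin privileges, and that the pool name `bench` is free:

```shell
# Create a throwaway pool for benchmarking (pg count is an example value).
ceph osd pool create bench 64

# 60 seconds of 4 MiB object writes; keep the objects for the read tests.
rados bench -p bench 60 write --no-cleanup

# Sequential and random read tests against the objects written above.
rados bench -p bench 60 seq
rados bench -p bench 60 rand

# Remove the benchmark objects and the pool.
rados -p bench cleanup
ceph osd pool delete bench bench --yes-i-really-really-mean-it
```

Running the benchmark in its own pool keeps the cleanup from touching production data.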