I will look into that, but:
Is there a rule of thumb to determine the optimal settings for
osd disk threads
and
osd op threads?
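For reference, a minimal sketch of where these live in ceph.conf. The values below are just the filestore-era defaults as I understand them, not tuning recommendations; note that newer releases replaced the single op-thread knob with the sharded queue options osd_op_num_shards and osd_op_num_threads_per_shard, so verify the option names against your release first:

```ini
; Illustrative only -- check your Ceph version's defaults before changing.
[osd]
osd op threads = 2      ; worker threads handling client ops (removed in newer releases)
osd disk threads = 1    ; background disk work such as scrubbing
```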
TIA
On Wed, Jun 12, 2019 at 3:22 PM Paul Emmerich wrote:
>
> On Wed, Jun 12, 2019 at 10:57 AM tim taler wrote:
>>
>> We experience absurdly slow i/o in the VMs
Hi all,
we have a 5 node ceph cluster with 44 OSDs
where all nodes also serve as virtualization hosts,
running about 22 virtual machines with all in all about 75 RBDs
(158 including snapshots).
We experience absurdly slow i/o in the VMs and I suspect
our thread settings in ceph.conf to be one of the causes.
Hi all,
how are your experiences with different disk sizes in one pool
regarding the overall performance?
I hope someone could shed some light on the following scenario:
Let's say I mix an equal amount of 2TB and 8TB disks in one pool,
with a crush map that tries to fill all disks to the same percentage.
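One way to make the concern concrete: if CRUSH weights disks by capacity, an 8TB disk receives 4x the data of a 2TB disk, and for random IO it will also receive roughly 4x the requests while offering about the same spindle IOPS, so the big disks become the bottleneck. A rough sketch of that imbalance (the disk counts are illustrative assumptions, not from the thread):

```python
# Sketch: share of cluster IO each disk absorbs when CRUSH weights
# by capacity. An equal count of 2 TB and 8 TB disks is assumed.
sizes_tb = [2] * 10 + [8] * 10
total = sum(sizes_tb)

# Fraction of objects (and hence of random IO) landing on each disk
# is proportional to its CRUSH weight, i.e. its capacity:
shares = [s / total for s in sizes_tb]

share_2tb = shares[0]
share_8tb = shares[-1]
print(f"2 TB disk: {share_2tb:.1%} of IO, 8 TB disk: {share_8tb:.1%}")
print(f"ratio: {share_8tb / share_2tb:.0f}x")
```

So with capacity-proportional weights the per-disk IO ratio always equals the capacity ratio, regardless of how many disks of each size you have.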
Well connecting an rbd to two servers would be like mapping a block
device from a storage array onto two different hosts,
that is possible and (was) done.
(it would be much more difficult though to connect a single physical
harddisk to two computers)
The point is that as mentioned above you
(the cluster is Luminous, upgraded from Jewel, so we use filestore on
xfs, not bluestore)
TIA
On Tue, Dec 5, 2017 at 11:10 AM, Stefan Kooman <ste...@bit.nl> wrote:
> Quoting tim taler (robur...@gmail.com):
>> And I'm still puzzled about the implication of the cluster size on the
>
> In size=2, losing any 2 disks on different hosts would probably cause data to
> be unavailable / lost, as the pg copies are randomly distributed across the
> osds. Chances are that you can find a pg whose acting set is exactly the two
> failed osds (you lost all your replicas).
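That "chances are" can be put into numbers with a back-of-the-envelope sketch. Assuming roughly 1024 size=2 PGs spread over the 44 OSDs and uniform placement (both assumptions; the real PG count isn't in the thread, and CRUSH's different-hosts constraint shrinks the set of eligible pairs, which only raises the probability):

```python
import math

osds = 44        # from the thread
pgs = 1024       # assumed pool PG count, illustrative
pairs = math.comb(osds, 2)   # possible size=2 acting sets

# Probability that one specific OSD pair backs at least one PG,
# treating each PG as an independent uniform draw (simplification):
p_hit = 1 - (1 - 1 / pairs) ** pgs
print(f"{pairs} pairs; P(a given failed pair shared a PG) ~ {p_hit:.0%}")
```

Even under these generous assumptions a random double failure has better-than-even odds of taking out some PG entirely.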
okay I see, getting the picture.
> The biggest take-away is... if you
> lose all but 1 copy of your data... do you really want to make changes to
> it? I've also noticed that the majority of clusters on the ML that have
> irreparably lost data were running with size=2 min_size=1.
>
> On Mon, Dec 4, 2017 at 6:12 AM tim taler wrote:
> -Always keeping that much free space, so the cluster could lose a host and
> still has space to repair (calculating with the repair max usage % setting).
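The free-space point above reduces to simple arithmetic: if one of N equal hosts fails, the surviving N-1 must absorb its data, so average raw utilization should stay below (N-1)/N of the full threshold. A quick sketch for this cluster's 5 hosts (0.95 is Ceph's default full ratio, assumed unchanged here):

```python
hosts = 5          # from the thread
full_ratio = 0.95  # Ceph's default full threshold (assumption)

# After losing one host its data re-replicates onto the rest,
# multiplying each survivor's utilization by hosts / (hosts - 1).
max_safe = full_ratio * (hosts - 1) / hosts
print(f"keep average utilization below {max_safe:.0%}")
```

With unequal hosts the same idea applies, but you would compute it against the largest host's share of the raw capacity.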
thanks again!
yup, that was helpful
> I hope this helps, and please keep in mind that I'm a noob too :)
>
> Denes.
>
Hi
I'm new to ceph but have the honor to look after a cluster that I haven't
set up myself.
Rushing to the ceph docs and taking a first glimpse at our cluster, I started
worrying about our setup,
so I need some advice and guidance here.
The set up is:
3 machines, each running a ceph-monitor.
all