>
> I am wondering whether it's necessary to drain/fill OSD nodes at
> all, or if this can be done with just a fresh install, without touching
> the OSDs
Absolutely. I've done this both with Trusty -> Bionic and Precise -> RHEL7.
> however I don't know how to perform a fresh installation
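In rough outline, the procedure looks like this (a sketch, not a full runbook;
it assumes the OSD data devices are left untouched by the reinstall and that you
keep a copy of /etc/ceph/ceph.conf and the bootstrap-osd keyring):

  ceph osd set noout                  # keep CRUSH from rebalancing while the node is down
  systemctl stop ceph-osd.target      # stop all OSD daemons on the node
  # reinstall the OS on the system disk only, reinstall the ceph packages,
  # then restore /etc/ceph/ceph.conf and the bootstrap-osd keyring
  ceph-volume lvm activate --all      # LVM OSDs: recreate tmpfs mounts and start the daemons
  # (older ceph-disk OSDs: ceph-volume simple scan && ceph-volume simple activate --all)
  ceph osd unset noout                # let recovery settle once the OSDs are back in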
Hi all,
I have a 39-node, 1404-spinning-disk Ceph Mimic cluster across 6 racks,
for a total of 9.1 PiB raw, about 40% utilized. These storage nodes
started their life on Ubuntu 14.04 and were in-place upgraded to 16.04
two years ago; however, I have started a project to do fresh installs of
each OSD node
Good day,
Write operations (randwrite, 4 kB and 4 MB) over mapped RBD are just too slow. I
am also using librbd over TGT.
fio input:
[global]
rw=randwrite
ioengine=libaio
iodepth=64
size=1g
direct=1
buffered=0
startdelay=5
group_reporting=1
thread=1
ramp_time=5
time_based
disk_util=0
clat_p
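For comparison, fio can also drive the image directly through librbd
(ioengine=rbd), which takes the krbd mapping and TGT out of the path. A minimal
job file along these lines (pool, image and client names are placeholders):

[global]
ioengine=rbd
clientname=admin      ; cephx user, i.e. client.admin
pool=rbd              ; placeholder pool name
rbdname=testimg       ; placeholder image name
rw=randwrite
bs=4k
iodepth=64
direct=1
time_based
runtime=60

[rbd-direct]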
Hi,
huge read amplification for index buckets is unfortunately normal; the
complexity of a read request is O(n), where n is the number of objects in
that bucket.
I've worked on many clusters with huge buckets, and having 10 Gbit/s of
network traffic between the OSDs and radosgw is unfortunately not unusual
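If the pain is concentrated in a few huge buckets, it's worth checking how their
indexes are sharded and resharding the worst offenders. Roughly (bucket name is a
placeholder, and exact output varies by release):

  radosgw-admin bucket stats --bucket=big-bucket    # object count and shard count
  radosgw-admin bucket limit check                  # objects-per-shard fill status for all buckets
  radosgw-admin reshard add --bucket=big-bucket --num-shards=101   # queue a reshard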
On 2020-06-18, at 22:18:08,
Simon Leinen wrote:
> Mariusz Gronczewski writes:
> > listing itself is bugged in the version
> > I'm running: https://tracker.ceph.com/issues/45955
>
> Ouch! Are your OSDs all running the same version as your RadosGW? The
> message looks a bit as if your Rad
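A quick way to compare versions (assuming admin access from a cluster node):

  ceph versions          # running versions, broken down per daemon type
  radosgw --version      # on the gateway host, the RGW binary itself

The first only covers daemons registered with the cluster; the second just shows
the installed package on the gateway.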
Based on v15.2.2: 5 storage nodes (NVMe:OSD = 1:2, Optane as the RocksDB backend),
5 clients.
Test case: fio, 20 images, 4K randread/randwrite
           4K RR      4K RW
default    760700     262500
PR34363    1185500    254
Aha. The error message is in the original complaint. Very well. Carry on,
everyone.
On Fri, 19 Jun 2020 at 4:33 pm, John Zachary Dover
wrote:
> Simon,
>
> Could you post the unhelpful error message? I can’t rewrite cephadm, but I
> can at least document this error message.
>
> Zac Dover
> Documen