Hi,
We are running a Ceph cluster that is currently on Luminous. At this
point most of our clients are also Luminous, but as we provision new
client hosts we are using more recent client versions (e.g. Octopus,
Pacific and, more recently, Quincy). Is this safe? Is there a
known list of
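(A quick, hedged sketch of how one can check what is actually connecting: both commands below exist from Luminous onward and only report versions/feature levels, they change nothing.)

  ceph versions   # releases of the running mon/mgr/osd/mds daemons
  ceph features   # feature/release level reported by connected clients and daemons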
I am looking at using an iSCSI gateway in front of a Ceph setup. However
the warning in the docs is concerning:
The iSCSI gateway is in maintenance as of November 2022. This means that
it is no longer in active development and will not be updated to add new
features.
Does this mean I should
Hi,
I am planning a Luminous to Nautilus upgrade. The instructions state
(very terse version):
- install Nautilus ceph packages
- restart MONs
- restart MGRs
- restart OSDs
We have OSDs running on our MON hosts (essentially all our ceph hosts
are the same chassis). So, if everything goes
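(For illustration, a minimal sketch of how the co-located case can be handled: upgrading the packages does not normally restart the running daemons, so the per-daemon-type systemd targets let you keep the recommended order even on combined MON+OSD hosts. The package command and the two final steps follow the standard Nautilus upgrade notes and are assumptions here, not something verified in this thread.)

  # on each host: upgrade packages only, daemons keep running the old code
  apt-get install --only-upgrade ceph ceph-mon ceph-mgr ceph-osd

  # then, host by host, in this order across the whole cluster:
  systemctl restart ceph-mon.target    # pass 1: MONs
  systemctl restart ceph-mgr.target    # pass 2: MGRs
  systemctl restart ceph-osd.target    # pass 3: OSDs
  ceph versions                        # check progress as you go

  # once everything is on Nautilus:
  ceph osd require-osd-release nautilus
  ceph mon enable-msgr2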
Hi all,
I'm looking at doing a Luminous to Nautilus upgrade. I'd like to
assimilate the config into the mon db. However we do have hosts with
differing [osd] config sections in their current ceph.conf files. I was
looking at using the crush type host:xxx to set these differently if
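(As a rough illustration of that idea once on Nautilus; the hostnames and the option below are made up for the example.)

  # fold the local ceph.conf into the mon config database
  ceph config assimilate-conf -i /etc/ceph/ceph.conf -o /etc/ceph/ceph.conf.leftover

  # then re-apply the per-host differences with a host: mask
  ceph config set osd/host:storage01 osd_max_backfills 1
  ceph config set osd/host:storage02 osd_max_backfills 4
  ceph config dump   # check what ended up where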
I think you might need to set some headers. Here is what we use
(connecting to Swift, but should be generally applicable). We are
running nginx and Swift (the Swift proxy server) on the same host, but
again maybe some useful ideas for you to try (below).
Note that we explicitly stop nginx writing
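(A generic sketch of the sort of stanza meant, assuming the Swift proxy-server listens on 127.0.0.1:8080; the port and the buffering choices are illustrative assumptions, not the exact production config.)

  location / {
      proxy_pass http://127.0.0.1:8080;    # swift proxy-server on the same host
      proxy_http_version 1.1;
      proxy_set_header Host $host;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
      client_max_body_size 0;              # don't cap object uploads
      proxy_request_buffering off;         # stream request bodies straight through
      proxy_buffering off;                 # likewise for responses, no temp files
  }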
' too, but this is not relevant to the
'op_r' + 'op_w' vs 'op_rw' discussion.
On 2/09/20 6:49 pm, Mark Kirkwood wrote:
I did say I'd test using librbd - and this changes my observations.
Using fio configured with the rbd driver:
- a random write workload emits about equal 'op_w' and 'op_rw
I'm seeing a lot of
'op_rw', but any further clarification is appreciated!
regards
Mark
On 2/09/20 6:17 pm, Mark Kirkwood wrote:
Hi,
I'd like to gain a better understanding about what operations emit
which of these performance counters, in particular when is 'op_rw'
incremented instead of 'op_r
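(A minimal sketch of an fio job using the rbd engine as described above; the pool, image and cephx user names are placeholders rather than the actual test setup.)

  [global]
  ioengine=rbd
  clientname=admin      # cephx user, placeholder
  pool=rbd              # placeholder pool
  rbdname=testimg       # placeholder image
  direct=1
  time_based=1
  runtime=60

  [randwrite]
  rw=randwrite
  bs=4k
  iodepth=16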
Hi,
I'd like to gain a better understanding about what operations emit which
of these performance counters, in particular when is 'op_rw' incremented
instead of 'op_r' + 'op_w'?
I've done a little bit of investigation (v12.2.13), running various
workloads and operations against an RBD
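(For anyone wanting to reproduce this, the counters can be read off a running OSD's admin socket, e.g.:)

  # dump all perf counters from OSD 0 and pick out op_r / op_w / op_rw
  ceph daemon osd.0 perf dump | grep -E '"op_(r|w|rw)"'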
Increasing the memory target appears to have solved the issue.
On 26/06/20 11:47 am, Mark Kirkwood wrote:
Progress update:
- tweaked debug_rocksdb to 1/5. *possibly* helped, fewer slow requests
- will increase osd_memory_target from 4 to 16G, and observe
On 24/06/20 1:30 pm, Mark Kirkwood
the root cause of the slow requests, maybe.
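(For the record, the two tweaks amount to something like the following; the debug level can be injected at runtime, while for the memory target I show the ceph.conf route since I am not certain it is runtime-adjustable on 12.2.x. 16G is 17179869184 bytes.)

  # lower rocksdb log verbosity on all OSDs at runtime
  ceph tell osd.* injectargs '--debug_rocksdb 1/5'

  # in ceph.conf on the affected node, then restart its OSDs:
  #   [osd]
  #   osd_memory_target = 17179869184
  systemctl restart ceph-osd.target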
Mark Kirkwood <mailto:mark.kirkw...@catalyst.net.nz>> wrote on Fri, 26 Jun 2020 at 7:47 am:
Progress update:
- tweaked debug_rocksdb to 1/5. *possibly* helped, fewer slow requests
- will increase osd_memory_target from 4 to 16G, and observe
On 2
Progress update:
- tweaked debug_rocksdb to 1/5. *possibly* helped, fewer slow requests
- will increase osd_memory_target from 4 to 16G, and observe
On 24/06/20 1:30 pm, Mark Kirkwood wrote:
Hi,
We have recently added a new storage node to our Luminous (12.2.13)
cluster. The prev nodes
Hi,
We have recently added a new storage node to our Luminous (12.2.13)
cluster. The previous nodes are all set up as Filestore: e.g. 12 OSDs on HDD
(Seagate Constellations) with one NVMe (Intel P4600) journal. With the
new guy we decided to introduce Bluestore so it is configured as: (same
HW) 12
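(For anyone curious about the mechanics, such a layout can be built with ceph-volume along these lines; device names and the NVMe partitioning are illustrative only, not necessarily how this node was actually provisioned.)

  # one bluestore OSD per HDD, rocksdb DB/WAL on a partition of the shared NVMe
  ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1
  ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/nvme0n1p2
  # ...and so on for the remaining 10 HDDs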