[ceph-users] Ceph client version vs server version inter-operability

2023-05-30 Thread Mark Kirkwood
Hi, We are running a ceph cluster that is currently on Luminous. At this point most of our clients are also Luminous, but as we provision new client hosts we are using client versions that are more recent (e.g. Octopus, Pacific and, more recently, Quincy). Is this safe? Is there a known list of
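
The rest of the question is truncated in this preview; for the practical side of seeing what is actually talking to the cluster, a minimal sketch along these lines (assuming the `ceph` CLI and an admin keyring on the host; `ceph versions` and `ceph features` are available from Luminous on) could report daemon versions and the releases of connected clients:

    import json
    import subprocess

    def ceph_json(*args):
        # Run a ceph mon command and return its parsed JSON output.
        out = subprocess.check_output(["ceph", *args, "--format", "json"])
        return json.loads(out)

    # Versions of the running mon/mgr/osd daemons.
    print(json.dumps(ceph_json("versions"), indent=2))

    # Feature groups of everything currently connected, including clients;
    # the exact JSON layout can vary a little between releases.
    features = ceph_json("features")
    for group in features.get("client", []):
        print("client release:", group.get("release"), "count:", group.get("num"))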

[ceph-users] Ceph iscsi gateway semi deprecation warning?

2023-05-26 Thread Mark Kirkwood
I am looking at using an iscsi gateway in front of a ceph setup. However the warning in the docs is concerning: The iSCSI gateway is in maintenance as of November 2022. This means that it is no longer in active development and will not be updated to add new features. Does this mean I should

[ceph-users] What if: Upgrade procedure mistake by restarting OSD before MON?

2021-11-30 Thread Mark Kirkwood
Hi, I am planning a Luminous to Nautilus upgrade. The instructions state (very terse version): - install Nautilus ceph packages - restart MONs - restart MGRs - restart OSDs We have OSDs running on our MON hosts (essentially all our ceph hosts are the same chassis). So, if everything goes
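
The thread is about what happens if that order is violated; for reference, a rough sketch of the documented order on a combined MON+OSD host (the systemd targets are the stock ceph units; `noout` guards against rebalancing while daemons bounce) might look like:

    import subprocess

    def run(cmd):
        # Echo then execute; stop on the first failure.
        print("+", " ".join(cmd))
        subprocess.check_call(cmd)

    # Keep CRUSH from rebalancing while daemons restart.
    run(["ceph", "osd", "set", "noout"])

    # Documented order: monitors first, then managers, then OSDs --
    # even on hosts that run both a MON and OSDs.
    run(["systemctl", "restart", "ceph-mon.target"])
    run(["systemctl", "restart", "ceph-mgr.target"])

    # Only restart this host's OSDs once `ceph -s` / `ceph mon dump` shows
    # every monitor is on Nautilus.
    run(["systemctl", "restart", "ceph-osd.target"])

    # After all OSDs cluster-wide have been restarted on Nautilus:
    #   ceph osd require-osd-release nautilus
    #   ceph osd unset noout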

[ceph-users] Centralized config mask not being applied to host

2021-11-25 Thread Mark Kirkwood
Hi all, I'm looking at doing a Luminous to Nautilus upgrade. I'd like to assimilate the config into the mon db. However we do have hosts with differing [osd] config sections in their current ceph.conf files. I was looking at using the crush type host:xxx to set these differently if
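
A hedged sketch of how that could look once the cluster is on Nautilus (the hostname `storage01` and the option/values are placeholders, not from the original post):

    import subprocess

    def ceph(*args):
        subprocess.check_call(["ceph", *args])

    # Pull the options from the existing ceph.conf into the mon config database.
    ceph("config", "assimilate-conf", "-i", "/etc/ceph/ceph.conf")

    # A cluster-wide default for every OSD...
    ceph("config", "set", "osd", "osd_memory_target", "4294967296")

    # ...and a per-host override using the host: mask, so only OSDs whose
    # CRUSH location includes host=storage01 pick this value up.
    ceph("config", "set", "osd/host:storage01", "osd_memory_target", "8589934592")

    # Check what an individual daemon actually resolves.
    ceph("config", "get", "osd.0", "osd_memory_target")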

[ceph-users] Re: java client cannot visit rgw behind nginx

2020-09-03 Thread Mark Kirkwood
I think you might need to set some headers. Here is what we use (connecting to Swift, but should be generally applicable). We are running nginx and swift (swift proxy server) on the same host, but again maybe some useful ideas for you to try (below). Note that we explicitly stop nginx writing

[ceph-users] Re: Understanding op_r, op_w vs op_rw

2020-09-02 Thread Mark Kirkwood
' too but this is not relevant to the 'op_r' + 'op_w' vs 'op_rw' discussion. On 2/09/20 6:49 pm, Mark Kirkwood wrote: I did say I'd test using librbd - and this changes my observations. Using fio configured with the rbd driver: - a random write workload emits about equal 'op_w' and 'op_rw
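
For readers wanting to reproduce this, a minimal sketch of the kind of fio run described (librbd via fio's rbd ioengine; the pool, image and cephx user names are placeholders and the image must already exist):

    import subprocess

    subprocess.check_call([
        "fio",
        "--name=rbd-randwrite",
        "--ioengine=rbd",       # go through librbd, not a mapped /dev/rbd device
        "--clientname=admin",   # cephx user, i.e. client.admin
        "--pool=rbd",
        "--rbdname=fio-test",
        "--rw=randwrite",
        "--bs=4k",
        "--iodepth=32",
        "--runtime=60",
        "--time_based",
    ])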

[ceph-users] Re: Understanding op_r, op_w vs op_rw

2020-09-02 Thread Mark Kirkwood
I'm seeing a lot of 'op_rw', but any further clarification appreciated! regards Mark On 2/09/20 6:17 pm, Mark Kirkwood wrote: Hi, I'd like to gain a better understanding about what operations emit which of these performance counters, in particular when is 'op_rw' incremented instead of 'op_r

[ceph-users] Understanding op_r, op_w vs op_rw

2020-09-02 Thread Mark Kirkwood
Hi, I'd like to gain a better understanding about what operations emit which of these performance counters, in particular when is 'op_rw' incremented instead of 'op_r' + 'op_w'? I've done a little bit of investigation (v12.2.13), running various workloads and operations against an RBD
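
As a starting point for that sort of investigation, a small sketch that reads the three counters from one OSD's admin socket (the OSD id is a placeholder; run it on the host that OSD lives on):

    import json
    import subprocess

    # Query one OSD's perf counters over its local admin socket.
    raw = subprocess.check_output(["ceph", "daemon", "osd.0", "perf", "dump"])
    perf = json.loads(raw)

    # The op_* counters live in the "osd" section of the dump.
    osd_stats = perf.get("osd", {})
    for name in ("op_r", "op_w", "op_rw"):
        print(name, "=", osd_stats.get(name))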

[ceph-users] Re: Bluestore performance tuning for hdd with nvme db+wal

2020-06-30 Thread Mark Kirkwood
Increasing the memory target appears to have solved the issue. On 26/06/20 11:47 am, Mark Kirkwood wrote: Progress update: - tweaked debug_rocksdb to 1/5. *possibly* helped, fewer slow requests - will increase osd_memory_target from 4 to 16G, and observe On 24/06/20 1:30 pm, Mark Kirkwood
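
A rough sketch of how those two tweaks might be applied on a Luminous-era cluster (runtime via injectargs; the values are the ones mentioned in the thread, 1/5 and 16 GiB, and some options may only fully apply after an OSD restart):

    import subprocess

    def tell_osds(arg):
        # Push a runtime option to every OSD via injectargs.
        subprocess.check_call(["ceph", "tell", "osd.*", "injectargs", arg])

    # Quieter rocksdb logging (reported above as *possibly* reducing slow requests).
    tell_osds("--debug_rocksdb=1/5")

    # Raise the memory budget from the 4 GiB default to 16 GiB (value in bytes).
    # Persist the same setting in ceph.conf under [osd] as well, since injectargs
    # may not fully apply it until the OSDs restart:
    #   osd_memory_target = 17179869184
    tell_osds("--osd_memory_target=17179869184")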

[ceph-users] Re: Bluestore performance tuning for hdd with nvme db+wal

2020-06-29 Thread Mark Kirkwood
the root cause of the slow requests, maybe. Mark Kirkwood <mailto:mark.kirkw...@catalyst.net.nz> wrote on Fri, 26 Jun 2020 at 7:47 AM: Progress update: - tweaked debug_rocksdb to 1/5. *possibly* helped, fewer slow requests - will increase osd_memory_target from 4 to 16G, and observe On 2

[ceph-users] Re: Bluestore performance tuning for hdd with nvme db+wal

2020-06-25 Thread Mark Kirkwood
Progress update: - tweaked debug_rocksdb to 1/5. *possibly* helped, fewer slow requests - will increase osd_memory_target from 4 to 16G, and observe On 24/06/20 1:30 pm, Mark Kirkwood wrote: Hi, We have recently added a new storage node to our Luminous (12.2.13) cluster. The prev nodes

[ceph-users] Bluestore performance tuning for hdd with nvme db+wal

2020-06-23 Thread Mark Kirkwood
Hi, We have recently added a new storage node to our Luminous (12.2.13) cluster. The prev nodes are all set up as Filestore: e.g. 12 osds on hdd (Seagate Constellations) with one NVMe (Intel P4600) journal. With the new guy we decided to introduce Bluestore, so it is configured as (same HW): 12
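
The configuration details are truncated above; as an illustration of the general shape of such an OSD (not the poster's exact commands), one Bluestore OSD with data on an HDD and its DB on an NVMe LV might be created like this, where the device and LV names are placeholders:

    import subprocess

    # With only --block.db given, the WAL is co-located with the DB, so both
    # end up on the NVMe LV. Repeat (or loop) for each of the 12 HDDs.
    subprocess.check_call([
        "ceph-volume", "lvm", "create",
        "--bluestore",
        "--data", "/dev/sdb",            # one of the Seagate HDDs
        "--block.db", "nvme-vg/db-sdb",  # pre-created LV on the Intel P4600
    ])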