[ceph-users] Showing OSD Disk config?

2020-07-01 Thread Lindsay Mathieson
Is there a way to display an OSD's setup - data, data.db and WAL disks/partitions? -- Lindsay
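
Not part of Lindsay's message, but a hedged sketch of commands that typically show this layout (the OSD id and the grep pattern below are illustrative assumptions):

    # On the OSD host: list LVM-managed OSDs with their block/db/wal devices
    ceph-volume lvm list

    # Limit the output to a single OSD, e.g. osd.36
    ceph-volume lvm list 36

    # From an admin node: the devices the daemon itself reports
    ceph osd metadata 36 | grep -E 'devices|bluefs|db|wal'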

[ceph-users] Re: Are there 'tuned profiles' for various ceph scenarios?

2020-07-01 Thread Simon Ironside
Here's an example for SCSI disks (the main benefit vs VirtIO is discard/unmap/TRIM support): [disk XML with discard='unmap' stripped by the archive] You also need a VirtIO-SCSI controller to use these, which will look something like: [controller XML stripped by the archive] Cheers, Simon. On 01/07/2020
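
The XML itself did not survive the archive; as a hedged reconstruction, a VirtIO-SCSI disk with discard support and its controller typically look roughly like this in a libvirt domain (shown RBD-backed; pool, image, monitor and secret UUID are placeholder assumptions and may differ from Simon's exact example):

    <!-- SCSI disk on a virtio-scsi controller, with discard/unmap enabled -->
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='writeback' discard='unmap'/>
      <source protocol='rbd' name='vmpool/vm1-disk0'>
        <host name='mon1.example.com' port='6789'/>
      </source>
      <auth username='libvirt'>
        <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
      </auth>
      <target dev='sda' bus='scsi'/>
    </disk>

    <!-- The matching VirtIO-SCSI controller -->
    <controller type='scsi' index='0' model='virtio-scsi'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>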

[ceph-users] Re: Are there 'tuned profiles' for various ceph scenarios?

2020-07-01 Thread Harry G. Coin
[Resent to correct title] Marc: Here's a template that works here. You'll need to do some steps to create the 'secret' and make the block devs and so on: [libvirt XML template stripped by the archive] Glad I could contribute something. Sure would appreciate leads for the suggested
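
The template itself is missing from the archive; purely as a hedged illustration of the 'secret' step Harry mentions, registering a ceph client key with libvirt usually looks roughly like this (the client name and UUID are placeholder assumptions):

    # ceph-secret.xml
    <secret ephemeral='no' private='no'>
      <uuid>00000000-0000-0000-0000-000000000000</uuid>
      <usage type='ceph'>
        <name>client.libvirt secret</name>
      </usage>
    </secret>

    # Register it and attach the actual key
    virsh secret-define ceph-secret.xml
    virsh secret-set-value --secret 00000000-0000-0000-0000-000000000000 \
        --base64 "$(ceph auth get-key client.libvirt)"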

[ceph-users] Re: *****SPAM***** Are there 'tuned profiles' for various ceph scenarios?

2020-07-01 Thread Harry G. Coin
Marc: Here's a template that works here. You'll need to do some steps to create the 'secret' and make the block devs and so on: [libvirt XML template stripped by the archive] Glad I could contribute something. Sure would appreciate leads for the suggested sysctls/etc either apart or as

[ceph-users] Re: *****SPAM***** Are there 'tuned profiles' for various ceph scenarios?

2020-07-01 Thread Marc Roos
Just curious, what does the libvirt xml part look like for a 'direct virtio->rados link' and a 'kernel-mounted rbd'? -Original Message- To: ceph-users@ceph.io Subject: *SPAM* [ceph-users] Are there 'tuned profiles' for various ceph scenarios? Hi Are there any 'official' or
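
Not from the thread itself, but a hedged sketch of the two variants Marc asks about (pool, image, monitor and secret UUID names are placeholders):

    <!-- Direct virtio->rados: QEMU/librbd talks to the cluster itself -->
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source protocol='rbd' name='vmpool/vm1-disk0'>
        <host name='mon1.example.com' port='6789'/>
      </source>
      <auth username='libvirt'>
        <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
      </auth>
      <target dev='vda' bus='virtio'/>
    </disk>

    <!-- Kernel-mounted rbd: the host runs "rbd map vmpool/vm1-disk0" first,
         and the guest just sees a plain block device -->
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/rbd/vmpool/vm1-disk0'/>
      <target dev='vda' bus='virtio'/>
    </disk>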

[ceph-users] Are there 'tuned profiles' for various ceph scenarios?

2020-07-01 Thread Harry G. Coin
Hi, Are there any 'official' or even 'works for us' pointers to 'tuned profiles' for such common uses as 'ceph baremetal osd host', 'ceph osd + libvirt host', 'ceph mon/mgr', 'guest vm based on a kernel-mounted rbd', 'guest vm based on a direct virtio->rados link'? I suppose there are a few other
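
No official profiles are cited in this thread; purely as a hedged illustration of the tuned format, a custom profile for an OSD host might look like this (the profile name, base profile and values are assumptions, not recommendations):

    # /etc/tuned/ceph-osd-host/tuned.conf
    [main]
    summary=Example profile for a bare-metal Ceph OSD host
    include=throughput-performance

    [sysctl]
    vm.swappiness=10

    [cpu]
    governor=performance

    # activate it with: tuned-adm profile ceph-osd-host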

[ceph-users] Re: find rbd locks by client IP

2020-07-01 Thread Void Star Nill
Thanks so much Jason. On Sun, Jun 28, 2020 at 7:31 AM Jason Dillaman wrote: > On Thu, Jun 25, 2020 at 7:51 PM Void Star Nill > wrote: > > > > Hello, > > > > Is there a way to list all locks held by a client with the given IP > address? > > Negative -- you would need to check every image since
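
A hedged sketch of the per-image scan Jason describes (pool name and IP are placeholders; both 'rbd lock ls' and 'rbd status' print the client address):

    POOL=rbd
    IP=192.0.2.10
    for img in $(rbd ls -p "$POOL"); do
        rbd lock ls "$POOL/$img" 2>/dev/null | grep -q "$IP" && echo "lock:    $POOL/$img"
        rbd status "$POOL/$img"  2>/dev/null | grep -q "$IP" && echo "watcher: $POOL/$img"
    done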

[ceph-users] Re: Advice on SSD choices for WAL/DB?

2020-07-01 Thread Andrei Mikhailovsky
Thanks for the information, Burkhard. My current setup shows a bunch of these warnings (24 osds with spillover out of 36 which have wal/db on the ssd): osd.36 spilled over 1.9 GiB metadata from 'db' device (7.2 GiB used of 30 GiB) to slow device osd.37 spilled over 13 GiB metadata
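
For reference, a hedged sketch of how to inspect the spillover per OSD (osd.36 is the example from the warning above; the 'bluefs' section filter on perf dump is an assumption and can be dropped on releases where it is not accepted):

    # Cluster-wide: which OSDs currently report spillover
    ceph health detail | grep -i spill

    # On the OSD host: BlueFS byte counters for the db vs the slow device
    ceph daemon osd.36 perf dump bluefs | egrep 'db_used_bytes|slow_used_bytes'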

[ceph-users] Advice on SSD choices for WAL/DB?

2020-07-01 Thread Andrei Mikhailovsky
Hello, We are planning to perform a small upgrade to our cluster and slowly start adding 12TB SATA HDD drives. We need to accommodate for additional SSD WAL/DB requirements as well. Currently we are considering the following: HDD Drives - Seagate EXOS 12TB SSD Drives for WAL/DB - Intel D3
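
Not part of Andrei's message: as a rough, hedged sizing sketch, the often-quoted BlueStore guideline of roughly 1-4% of the data device would put the DB for a 12 TB HDD well above the 30 GB partitions mentioned in the follow-up above (verify against the documentation for your release):

    # 12 TB data device, 1-4% DB rule of thumb (hedged, not from this thread)
    echo "1%: $((12000 / 100)) GB    4%: $((12000 * 4 / 100)) GB"
    # -> 1%: 120 GB    4%: 480 GB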

[ceph-users] Re: Upgrade from Luminous to Nautilus 14.2.9 RBD issue?

2020-07-01 Thread Jason Dillaman
On Wed, Jul 1, 2020 at 3:23 AM Daniel Stan - nav.ro wrote: > > Hi, > > We are experiencing a weird issue after upgrading our clusters from ceph > luminous to nautilus 14.2.9 - I am not even sure if this is ceph related > but this started to happen exactly after we upgraded, so, I am trying my >

[ceph-users] Re: Advice on SSD choices for WAL/DB?

2020-07-01 Thread Burkhard Linke
Hi, On 7/1/20 1:57 PM, Andrei Mikhailovsky wrote: Hello, We are planning to perform a small upgrade to our cluster and slowly start adding 12TB SATA HDD drives. We need to accommodate for additional SSD WAL/DB requirements as well. Currently we are considering the following: HDD Drives -

[ceph-users] How to change 'ceph mon metadata' hostname value in octopus.

2020-07-01 Thread Cem Zafer
Hi forum people, What is the best method to change monitor metadata in octopus? Thanks.
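
A hedged sketch of one approach, on the assumption that the hostname field in 'ceph mon metadata' is simply reported by the daemon from the host it runs on (mon id and hostname are placeholders; adjust the restart for cephadm/containerized deployments):

    # See what the mon currently reports
    ceph mon metadata mon1

    # Change the hostname on the mon host, then restart the daemon so it
    # re-registers its metadata
    hostnamectl set-hostname new-name.example.com
    systemctl restart ceph-mon@mon1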

[ceph-users] Re: v14.2.10 Nautilus crash

2020-07-01 Thread Dan van der Ster
Hi Markus, Yes, I think you should open a bug tracker with more from a crashing osd log file (e.g. all the -1> -2> etc. lines before the crash) and also from the mon leader if possible. Something strange is that the mon_warn_on_pool_pg_num_not_power_of_two feature is also present in v14.2.9 (it
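
A hedged sketch of gathering the material Dan asks for (the crash id and log path are placeholders):

    # Recent crashes recorded by the cluster, plus the full backtrace for one
    ceph crash ls
    ceph crash info <crash-id>

    # Log context around the crash in the OSD log, including the numbered
    # -1> / -2> ... event lines
    grep -B2 -A60 -E 'FAILED ceph_assert|\*\*\* Caught signal' /var/log/ceph/ceph-osd.<id>.log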

[ceph-users] Upgrade from Luminous to Nautilus 14.2.9 RBD issue?

2020-07-01 Thread Daniel Stan - nav.ro
Hi, We are experiencing a weird issue after upgrading our clusters from ceph luminous to nautilus 14.2.9 - I am not even sure if this is ceph related but this started to happen exactly after we upgraded, so, I am trying my luck here. We have one ceph rbd pool size 3 min size 2 from all

[ceph-users] v14.2.10 Nautilus crash

2020-07-01 Thread Markus Binz
Hello, yesterday we upgraded a mimic cluster to v14.2.10, and everything was running and ok. There was this new warning, 2 pool(s) have non-power-of-two pg_num, and to get a HEALTH_OK state until we can expand these pools, I found this config option to suppress the warning: ceph config set global
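
The command is cut off by the archive; based on the option Dan names in his reply above, it is presumably:

    ceph config set global mon_warn_on_pool_pg_num_not_power_of_two false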