Is there a way to display an OSD's setup - data, data.db and WAL
disks/partitions?
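One way to see this (a sketch, assuming the OSDs were deployed with ceph-volume;
the OSD id below is just an example):

  # on the OSD node: per-OSD breakdown of block, block.db and block.wal devices
  ceph-volume lvm list

  # or from any admin node, for a single OSD:
  ceph osd metadata 12 | grep -E 'devices|bluefs_db|bluefs_wal'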
--
Lindsay
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
Here's an example for SCSI disks (the main benefit vs VirtIO is
discard/unmap/TRIM support):
discard='unmap'/>
You also need a VirtIO-SCSI controller to use these, which will look
something like:
function='0x0'/>
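Filled out, such a stanza typically looks something like this (a sketch only;
the image name, secret UUID, monitor host and PCI address are placeholders):

  <disk type='network' device='disk'>
    <driver name='qemu' type='raw' cache='writeback' discard='unmap'/>
    <auth username='libvirt'>
      <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
    </auth>
    <source protocol='rbd' name='libvirt-pool/vm1-disk'>
      <host name='mon1.example.com' port='6789'/>
    </source>
    <target dev='sda' bus='scsi'/>
  </disk>

  <controller type='scsi' index='0' model='virtio-scsi'>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
  </controller>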
Cheers,
Simon.
On 01/07/2020
[Resent to correct title]
Marc:
Here's a template that works here. You'll need to do some steps to
create the 'secret' and make the block devs and so on:
Glad I could contribute something. Sure would appreciate leads for the
suggested sysctls/etc either apart or as
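For the 'secret' step, the usual sequence is roughly this (a sketch; the cephx
user, pool name and UUID are made up):

  # 1. cephx user the guests authenticate as
  ceph auth get-or-create client.libvirt \
      mon 'profile rbd' osd 'profile rbd pool=libvirt-pool'

  # 2. secret.xml
  <secret ephemeral='no' private='no'>
    <uuid>00000000-0000-0000-0000-000000000000</uuid>
    <usage type='ceph'>
      <name>client.libvirt secret</name>
    </usage>
  </secret>

  # 3. define the secret and load the cephx key into it
  virsh secret-define --file secret.xml
  virsh secret-set-value --secret 00000000-0000-0000-0000-000000000000 \
      --base64 "$(ceph auth get-key client.libvirt)"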
Just curious, what does the libvirt xml part look like for a 'direct
virtio->rados link' and a 'kernel-mounted rbd'?
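Roughly, the two variants look like this (a sketch; pool/image names, hosts and
device paths are made up, and the auth/secret bits are omitted for brevity):

  <!-- direct virtio->rados: qemu talks to the cluster via librbd -->
  <disk type='network' device='disk'>
    <driver name='qemu' type='raw'/>
    <source protocol='rbd' name='libvirt-pool/vm1-disk'>
      <host name='mon1.example.com' port='6789'/>
    </source>
    <target dev='vda' bus='virtio'/>
  </disk>

  <!-- kernel-mounted rbd: 'rbd map libvirt-pool/vm1-disk' on the host first,
       then hand the resulting block device to the guest -->
  <disk type='block' device='disk'>
    <driver name='qemu' type='raw'/>
    <source dev='/dev/rbd/libvirt-pool/vm1-disk'/>
    <target dev='vda' bus='virtio'/>
  </disk>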
-----Original Message-----
To: ceph-users@ceph.io
Subject: *SPAM* [ceph-users] Are there 'tuned profiles' for
various ceph scenarios?
Hi
Are there any 'official' or
Hi
Are there any 'official' or even 'works for us' pointers to 'tuned
profiles' for such common uses as
'ceph baremetal osd host'
'ceph osd + libvirt host'
'ceph mon/mgr'
'guest vm based on a kernel-mounted rbd'
'guest vm based on a direct virtio->rados link'
I suppose there are a few other
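A tuned profile is just a directory under /etc/tuned, so the 'ceph baremetal
osd host' case could be sketched like this (the profile name and the specific
sysctls/values here are assumptions, not anything blessed upstream):

  # /etc/tuned/ceph-osd-host/tuned.conf
  [main]
  summary=Ceph OSD host tweaks layered on throughput-performance
  include=throughput-performance

  [sysctl]
  vm.swappiness=10
  kernel.pid_max=4194304
  fs.aio-max-nr=1048576

  [vm]
  transparent_hugepages=never

  # activate with: tuned-adm profile ceph-osd-host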
Thanks so much Jason.
On Sun, Jun 28, 2020 at 7:31 AM Jason Dillaman wrote:
> On Thu, Jun 25, 2020 at 7:51 PM Void Star Nill
> wrote:
> >
> > Hello,
> >
> > Is there a way to list all locks held by a client with the given IP
> address?
>
> Negative -- you would need to check every image since
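In practice that means looping over every image in the pool, roughly like this
(a sketch; the pool name and client address below are made up):

  POOL=rbd
  CLIENT=192.168.1.42
  for img in $(rbd ls -p "$POOL"); do
      # 'rbd lock ls' output includes the locker (client.X), lock id and address
      if rbd lock ls "$POOL/$img" | grep -q "$CLIENT"; then
          echo "$POOL/$img"
      fi
  done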
Thanks for the information, Burkhard.
My current setup shows a bunch of these warnings (24 OSDs with spillover, out of
the 36 that have their wal/db on the SSD):
osd.36 spilled over 1.9 GiB metadata from 'db' device (7.2 GiB used of 30
GiB) to slow device
osd.37 spilled over 13 GiB metadata
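For a per-OSD look at what is sitting where, the numbers behind those warnings
are exposed as bluefs perf counters, e.g. on the host of osd.36 (a sketch):

  ceph daemon osd.36 perf dump bluefs | \
      grep -E '"db_total_bytes"|"db_used_bytes"|"slow_used_bytes"'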
Hello,
We are planning to perform a small upgrade to our cluster and slowly start
adding 12TB SATA HDD drives. We also need to accommodate the additional SSD
WAL/DB requirements. Currently we are considering the following:
HDD Drives - Seagate EXOS 12TB
SSD Drives for WAL/DB - Intel D3
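When the new drives go in, the HDD + shared SSD DB layout is typically created
with ceph-volume's batch mode; purely as a sketch (device names are
placeholders, not a sizing recommendation):

  # --report only previews the layout: one DB LV per HDD on the shared SSD
  ceph-volume lvm batch --bluestore --report \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde \
      --db-devices /dev/nvme0n1
  # drop --report to actually create the OSDs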
On Wed, Jul 1, 2020 at 3:23 AM Daniel Stan - nav.ro wrote:
>
> Hi,
>
> We are experiencing a weird issue after upgrading our clusters from ceph
> luminous to nautilus 14.2.9 - I am not even sure if this is ceph related
> but this started to happen exactly after we upgraded, so, I am trying my
>
Hi,
On 7/1/20 1:57 PM, Andrei Mikhailovsky wrote:
> Hello,
> We are planning to perform a small upgrade to our cluster and slowly start
> adding 12TB SATA HDD drives. We also need to accommodate the additional SSD
> WAL/DB requirements. Currently we are considering the following:
> HDD Drives -
Hi forum people,
What is the best method to change monitor metadata in octopus?
Thanks.
Hi Markus,
Yes, I think you should open a bug tracker issue with more detail from a
crashing osd's log file (e.g. all the -1>, -2>, etc. lines before the crash)
and also from the mon leader if possible.
Something strange is that the mon_warn_on_pool_pg_num_not_power_of_two
feature is also present in v14.2.9 (it
Hi,
We are experiencing a weird issue after upgrading our clusters from ceph
luminous to nautilus 14.2.9 - I am not even sure if this is ceph related
but this started to happen exactly after we upgraded, so, I am trying my
luck here.
We have one ceph rbd pool size 3 min size 2 from all
Hello,
Yesterday we upgraded a mimic cluster to v14.2.10, and everything was running
fine. There was this new warning, '2 pool(s) have non-power-of-two pg_num'. To
get back to a HEALTH_OK state until we can expand these pools, I found this
config option to suppress the warning:
ceph config set global
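The full form of that is presumably (worth verifying with
'ceph config help mon_warn_on_pool_pg_num_not_power_of_two' before setting it):

  ceph config set global mon_warn_on_pool_pg_num_not_power_of_two false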