OK, case closed, I found the solution:
1- Connect to the active mgr
2- Show the current config
# ceph daemon mgr.`hostname -s` config show | grep mon_pg_warn_max_object_skew
"mon_pg_warn_max_object_skew": "10.00",
3- Change the value
# ceph config set mgr.`hostname -s` mon_pg_warn_max_object_skew 20
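For anyone finding this thread later, a quick way to double-check that the change took effect (a sketch; it assumes a Nautilus-or-later cluster with the centralized config database, run from the same active mgr host):

ceph config get mgr.`hostname -s` mon_pg_warn_max_object_skew   # should now report the new value (20)
ceph health detail                                              # the skew warning should clear once the mgr re-evaluates health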
Thanks all for the responses. It seems that both original responders relied on
modifying the osd-prestart script. I can confirm that it works for
me too, and I am using it as a temporary solution.
Jan, it seems that the log you mentioned shows the unit attempting to do
the right thing here (cho
CCing the correct mailing list ...
On Thu, Jan 23, 2020 at 3:19 PM Jason Dillaman wrote:
>
> On Mon, Jan 20, 2020 at 8:26 AM Rainer Krienke wrote:
> >
> > Hello,
> >
> > I am fighting with rbd and CEPH_ARGS in order to make typing easier on a
> > client. First I created a keyring on one of the ceph
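For context, the kind of shortcut being discussed looks roughly like the sketch below; the client id, keyring path, and pool name are made up for illustration (CEPH_ARGS is read by the Ceph command-line tools, including rbd):

export CEPH_ARGS="--id client1 --keyring=/etc/ceph/ceph.client.client1.keyring"
rbd ls rbd_pool    # picks up the id and keyring from CEPH_ARGS, so no --id/--keyring is needed on each command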
Hi Dan,
I have opened this bug report for the balancer not working as expected.
https://tracker.ceph.com/issues/43586
Do you experience the same issue?
Regards
Thomas
On 23.01.2020 at 16:05, Dan van der Ster wrote:
> Hi Frank,
>
> No, it is basically balancing the num_pgs per TB (per osd).
>
>
Any other ideas?
> On 15.01.2020 at 15:50, Oskar Malnowicz wrote:
>
> the situation is:
>
> health: HEALTH_WARN
> 1 pools have many more objects per pg than average
>
> $ ceph health detail
> MANY_OBJECTS_PER_PG 1 pools have many more objects per pg than average
> pool cephfs_data o
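Two commands that help quantify what this warning is complaining about (the pool name comes from the health output above; adjust for your cluster):

ceph df detail                          # per-pool object counts, to spot the outlier pool
ceph osd pool get cephfs_data pg_num    # current PG count of the flagged pool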
+ bumping up
Any suggestions/thoughts on this issue?
Regards,
Biswajeet
On Tue, Nov 26, 2019 at 11:28 AM Biswajeet Patra <
biswajeet.pa...@flipkart.com> wrote:
> Hi All,
> I have a query regarding objecter behaviour for a homeless session. In
> situations where all OSDs containing copies (*let's say
Hi Frank,
No, it is basically balancing the num_pgs per TB (per osd).
Cheers, Dan
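For readers who want to try it, turning the upmap balancer on looks roughly like this (a sketch; it assumes all clients are luminous or newer):

ceph osd set-require-min-compat-client luminous   # upmap requires luminous+ clients
ceph balancer mode upmap
ceph balancer on
ceph balancer status                              # check the current mode and plan progress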
On Thu, Jan 23, 2020 at 3:53 PM Frank R wrote:
> Hi all,
>
> Does using the Upmap balancer require that all OSDs be the same size
> (per device class)?
>
> thx
> Frank
On Thu, Jan 23, 2020 at 3:31 PM Hayashida, Mami wrote:
>
> Thanks, Ilya.
>
> First, I was not sure whether to post my question on @ceph.io or
> @lists.ceph.com (I subscribe to both) -- should I use @ceph.io in the future?
Yes. I got the following when I replied to your previous email:
As you
Hi all,
Does using the Upmap balancer require that all OSDs be the same size
(per device class)?
thx
Frank
On Wed, Jan 22, 2020 at 8:58 AM Yoann Moulin wrote:
>
> Hello,
>
> On a fresh install (Nautilus 14.2.6) deploy with ceph-ansible playbook
> stable-4.0, I have an issue with cephfs. I can create a folder and I can
> create empty files, but I cannot write data, as if I'm not allowed to write to
> the
Thanks, Ilya.
First, I was not sure whether to post my question on @ceph.io or
@lists.ceph.com (I subscribe to both) -- should I use @ceph.io in the future?
Second, thanks for your advice on cache-tiering -- I was starting to feel
that way, but it's always good to know what Ceph "experts" would say.
On Thu, Jan 23, 2020 at 2:36 PM Ilya Dryomov wrote:
>
> On Wed, Jan 22, 2020 at 6:18 PM Hayashida, Mami
> wrote:
> >
> > Thanks, Ilya.
> >
> > I just tried modifying the osd cap for client.testuser by getting rid of
> > "tag cephfs data=cephfs_test" part and confirmed this key does work (i.e.
Hello
I apologise if I have missed an announcement, but I can't find anything
regarding whether Ceph will participate in GSoC this year. The only thing I
could find was last year's announcement (see below) and webpage. My
organisation has some ideas for Ceph projects; however, we could submit th
On 23.01.20 at 12:50, Frank Schilder wrote:
You should probably enable the application "cephfs" on the fs-pools.
it's already activated:
pool 1 'cephfs_data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 256 pgp_num 256 autoscale_mode warn last_change 83
lfor 0/0/81
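For completeness, checking and (if needed) enabling the application tag looks like this; the pool name is taken from the dump above:

ceph osd pool application get cephfs_data             # lists the application(s) enabled on the pool
ceph osd pool application enable cephfs_data cephfs   # only needed if nothing is enabled yet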
On Mon, 2020-01-20 at 16:12 +0100, Marc Roos wrote:
> Is it possible to mount a cephfs with a specific uid or gid? To make it
> available to a 'non-root' user?
>
It's not 100% clear what you're asking for here. CephFS is a (mostly)
POSIX filesystem that has permissions and ownership, so non-root
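A minimal sketch of the usual approach (mount as root, then chown a directory to the non-root user; the monitor address, client name, secret file, and paths below are illustrative):

mount -t ceph mon1:6789:/ /mnt/cephfs -o name=client1,secretfile=/etc/ceph/client1.secret
mkdir -p /mnt/cephfs/shared
chown 1000:1000 /mnt/cephfs/shared   # the non-root user (uid/gid 1000 here) can now read and write under this path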
/ Problem ///
I've got a warning on my cluster that I cannot remove:
"1 pools have many more objects per pg than average"
Does somebody have some insight? I think it's normal to have this warning
because I have just one pool in use, but how can
You should probably enable the application "cephfs" on the fs-pools.
In both your cases, the osd caps should read
caps osd = "allow rw tag cephfs data=cephfs_metadata, allow rw pool=cephfs_data"
This is independent of the replication type of cephfs_data.
Best regards,
=
Frank S
Hi Frank,
for some reason, the command "ceph fs authorize" does not add the required
permissions for a FS with data pools any more; older versions did. Now you need to add
these caps by hand. It needs to look something like this:
caps osd = "allow rw tag cephfs pool=cephfs_data, allow rw pool
On Wed, Jan 22, 2020 at 12:00:28PM -0500, Wesley Dillingham wrote:
> After upgrading to Nautilus 14.2.6 from Luminous 12.2.12 we are seeing
> the following behavior on OSDs which were created with "ceph-volume lvm
> create --filestore --osd-id --data --journal "
> Upon restart of the serv
Hi everyone,
I'm still looking for people to help at DevConf, as our booth signup
schedule is still looking bare.
Also great news, we will have a BoF session at FOSDEM:
https://fosdem.org/2020/schedule/event/bof_opensource_storage/
We will also be sharing a booth with the CentOS project, but spa
Sorry, I had a typo. If you have separate meta- and data pools, the data pool
is not added properly. The caps should look like:
caps osd = "allow rw tag cephfs pool=cephfs-meta-pool, allow rw
pool=cephfs-data-pool"
If you don't have a separate data pool, it should work out of the box.
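Applying that by hand would look roughly like this (the client name and the mon/mds caps here are only placeholders -- keep whatever your key already has for those, since "ceph auth caps" replaces all caps at once):

ceph auth caps client.cephfs-user \
  mon 'allow r' \
  mds 'allow rw' \
  osd 'allow rw tag cephfs pool=cephfs-meta-pool, allow rw pool=cephfs-data-pool'
ceph auth get client.cephfs-user   # verify the updated caps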
Hi Yoann,
for some reason, the command "ceph fs authorize" does not add the required
permissions for a FS with data pools any more; older versions did. Now you need
to add these caps by hand. It needs to look something like this:
caps osd = "allow rw tag cephfs pool=cephfs_data, allow rw pool=c