Re: [ceph-users] ceph tell mds.a scrub status "problem getting command descriptions"

2019-12-13 Thread Marc Roos
 
client.admin did not have the correct rights:

 ceph auth caps client.admin mds "allow *" mgr "allow *" mon "allow *" osd "allow *"


-Original Message-
To: ceph-users
Subject: [ceph-users] ceph tell mds.a scrub status "problem getting 
command descriptions"


ceph tell mds.a scrub status

Generates

2019-12-14 00:46:38.782 7fef4affd700  0 client.3744774 ms_handle_reset 
on v2:192.168.10.111:6800/3517983549 Error EPERM: problem getting 
command descriptions from mds.a 


[ceph-users] ceph tell mds.a scrub status "problem getting command descriptions"

2019-12-13 Thread Marc Roos


ceph tell mds.a scrub status

Generates

2019-12-14 00:46:38.782 7fef4affd700  0 client.3744774 ms_handle_reset 
on v2:192.168.10.111:6800/3517983549
Error EPERM: problem getting command descriptions from mds.a


[ceph-users] deleted snap dirs are back as _origdir_1099536400705

2019-12-13 Thread Marc Roos
 
I thought I had deleted these snapshot dirs, but they are still there, just under 
a different name. How do I get rid of them?

[@ .snap]# ls -1
_snap-1_1099536400705
_snap-2_1099536400705
_snap-3_1099536400705
_snap-4_1099536400705
_snap-5_1099536400705
_snap-6_1099536400705
_snap-7_1099536400705
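
If these entries came from a snapshot taken on an ancestor directory, CephFS lists 
them in every subdirectory's .snap as _<snapname>_<inode of the directory the 
snapshot was taken on>, and they normally have to be removed from that directory's 
own .snap. A rough sketch, assuming a CephFS mount at /mnt/cephfs (mount point and 
snapshot names are illustrative):

 # locate the directory the snapshots actually belong to, using the inode
 # number embedded in the entry names above
 find /mnt/cephfs -inum 1099536400705 -type d
 # then remove each snapshot from that directory's own .snap
 rmdir <directory-found-above>/.snap/snap-1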


[ceph-users] ceph-volume sizing osds

2019-12-13 Thread Oscar Segarra
Hi,

I have recently started working with the Ceph Nautilus release and have realized
that OSDs now have to be created on top of LVM (via ceph-volume) instead of with
the "old fashioned" ceph-disk.

In terms of performance and best practices: since I must use LVM, I could create
volume groups that join or extend two or more physical disks. In this scenario
(many disks per server) where ceph-volume is mandatory, does the rule of one OSD
per physical device still apply, or can I reduce the number of OSDs?

Thanks in advance,
Óscar
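
For reference, the usual ceph-volume workflow still creates one OSD per physical 
device, with ceph-volume building the volume group and logical volume itself; a 
minimal sketch, assuming two spare disks /dev/sdb and /dev/sdc (device names are 
hypothetical):

 # one OSD per physical disk; ceph-volume creates the VG/LV internally
 ceph-volume lvm create --data /dev/sdb
 ceph-volume lvm create --data /dev/sdc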


Re: [ceph-users] Ceph assimilated configuration - unable to remove item

2019-12-13 Thread David Herselman
Hi,

I've logged a bug report 
(https://tracker.ceph.com/issues/43296?next_issue_id=43295_issue_id=43297) 
and Alwin from Proxmox was kind enough to provide a workaround:
ceph config rm global rbd_default_features;
ceph config-key rm config/global/rbd_default_features;
ceph config set global rbd_default_features 31;

ceph config dump | grep -e WHO -e rbd_default_features;
WHO     MASK  LEVEL     OPTION                 VALUE  RO
global        advanced  rbd_default_features   31


Regards
David Herselman

-Original Message-
From: Stefan Kooman  
Sent: Wednesday, 11 December 2019 3:05 PM
To: David Herselman 
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph assimilated configuration - unable to remove item

Quoting David Herselman (d...@syrex.co):
> Hi,
> 
> We assimilated our Ceph configuration to store attributes within Ceph 
> itself and subsequently have a minimal configuration file. Whilst this 
> works perfectly we are unable to remove configuration entries 
> populated by the assimilate-conf command.

I had forgotten about this issue, but I ran into it when we upgraded to Mimic, so I 
can confirm this bug. It's possible to have the same key present with different 
values. For our production cluster we decided to stick to ceph.conf for the time 
being. That's also the workaround for now if you want to override the config store: 
just put the setting in your config file and restart the daemon(s).
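
As a concrete illustration of that override (the value 31 comes from the workaround 
earlier in this thread; a local ceph.conf entry takes precedence over the mon config 
store, so this simply pins the value until the stale key is cleaned up):

 # /etc/ceph/ceph.conf on the affected hosts
 [global]
 rbd_default_features = 31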

Gr. Stefan


-- 
| BIT BV  https://www.bit.nl/Kamer van Koophandel 09090351
| GPG: 0xD14839C6   +31 318 648 688 / i...@bit.nl


Re: [ceph-users] Ceph rgw pools per client

2019-12-13 Thread Ed Fisher
You're looking for placements: 
https://docs.ceph.com/docs/master/radosgw/placement/ 


Basically, create as many placements as you want in your zone and then set the 
default placement for the user as needed. However, I don't think there's any 
way to restrict a user from choosing a different placement for their bucket if 
they know the syntax and the name of the other placement. If you need to 
prevent that I'd recommend putting a proxy in front of radosgw and blocking 
bucket create requests with an explicit placement specified.
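
For the two-client example from the original post, a rough sketch of per-client 
placement targets (the zonegroup/zone name "default", the client1 placement id and 
pool names are made up for illustration; the flags are the ones described in the 
placement docs linked above, so double-check them against your release):

 radosgw-admin zonegroup placement add --rgw-zonegroup default --placement-id client1-placement
 radosgw-admin zone placement add --rgw-zone default --placement-id client1-placement \
 --data-pool client1.rgw.data --index-pool client1.rgw.index --data-extra-pool client1.rgw.non-ec
 radosgw-admin user modify --uid client1 --placement-id client1-placement
 # in a multisite setup, also commit the period afterwards:
 radosgw-admin period update --commit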


> On Dec 13, 2019, at 3:23 AM, M Ranga Swami Reddy  wrote:
> 
> Hello - I want to have 2 diff. rgw pools for 2 diff. clients. For ex:
> For client#1 - rgw.data1, rgw.index1, rgw.user1, rgw.metadata1
> For client#2 - rgw.data2, rgw.index2, rgw.user2, rgw.metadata2
> 
> Is the above possible with ceph radosgw?
> 
> Thanks
> Swami


[ceph-users] Ceph rgw pools per client

2019-12-13 Thread M Ranga Swami Reddy
Hello - I want to have 2 diff. rgw pools for 2 diff. clients. For ex:
For client#1 - rgw.data1, rgw.index1, rgw.user1, rgw.metadata1
For client#2 - rgw.data2, rgw.index2, rgw.user2, rgw.metadata2

Is the above possible with ceph radosgw?

Thanks
Swami