[ceph-users] Client admin socket for RBD

2019-06-23 Thread Alex Litvak

Hello everyone,

I encounter this with the Nautilus client but not with Mimic. Removing the admin socket 
entry from the config on the client makes no difference.

Error:

rbd ls -p one
2019-06-23 12:58:29.344 7ff2710b0700 -1 set_mon_vals failed to set admin_socket 
= /var/run/ceph/$name.$pid.asok: Configuration option 'admin_socket' may not be 
modified at runtime
2019-06-23 12:58:29.348 7ff2708af700 -1 set_mon_vals failed to set admin_socket 
= /var/run/ceph/$name.$pid.asok: Configuration option 'admin_socket' may not be 
modified at runtime

I have no issues running other Ceph clients (no messages on the screen with 
ceph -s or ceph iostat from the same box).
I connected to a few other client nodes, and as root I can run the same command:
rbd ls -p one


On all the nodes, when running as the libvirt user, I see the admin_socket messages:

oneadmin@virt3n1-la:~$  rbd ls -p one --id libvirt
2019-06-23 13:16:41.626 7f9ea0ff9700 -1 set_mon_vals failed to set admin_socket 
= /var/run/ceph/$name.$pid.asok: Configuration option 'admin_socket' may not be 
modified at runtime
2019-06-23 13:16:41.626 7f9e8bfff700 -1 set_mon_vals failed to set admin_socket 
= /var/run/ceph/$name.$pid.asok: Configuration option 'admin_socket' may not be 
modified at runtime

Otherwise I can execute all rbd operations on the cluster from the client.  
Commenting out the client section in the config file makes no difference.
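
The "set_mon_vals" prefix suggests the value is being pushed from the monitors' centralized config database rather than read from the local ceph.conf, which would explain why editing the file on the client changes nothing. A minimal sketch of how one might check and clean that up on a Nautilus cluster (the section the option sits under is an assumption; adjust it to whatever the dump actually reports):

# see whether admin_socket is stored in the mon config database at all
ceph config dump | grep admin_socket
# confirm what a plain client would receive
ceph config get client admin_socket
# if it is there, dropping it leaves only the local ceph.conf setting in effect
ceph config rm client admin_socket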

This is an optimized config distributed across the clients; it is almost the 
same as on the servers (there is no libvirt section on the servers):

[client]
admin_socket = /var/run/ceph/$name.$pid.asok

[client.libvirt]
admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok # must be writable by QEMU and allowed by SELinux or AppArmor
log file = /var/log/ceph/qemu-guest-$pid.log # must be writable by QEMU and allowed by SELinux or AppArmor

# Please do not change this file directly since it is managed by Ansible and will be overwritten
[global]
cluster network = 10.0.42.0/23
fsid = 3947ba2d-1b01-4909-8e3a-f9714f427483
log file = /dev/null
mon cluster log file = /dev/null
mon host = [v2:10.0.40.121:3300,v1:10.0.40.121:6789],[v2:10.0.40.122:3300,v1:10.0.40.122:6789],[v2:10.0.40.123:3300,v1:10.0.40.123:6789]
perf = True
public network = 10.0.40.0/23
rocksdb_perf = True
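
For what it's worth, once a client process does manage to create its socket it can be queried directly, which is a quick way to confirm the [client.libvirt] settings above actually take effect. A rough check; the socket filename below is only an illustration of the $cluster-$type.$id.$pid.$cctid pattern and not a real path from this cluster:

# list whatever sockets the clients have created (must be writable by QEMU, per the comment above)
ls /var/run/ceph/
# query one of them over the admin socket interface
ceph --admin-daemon /var/run/ceph/ceph-client.libvirt.12345.94123456.asok version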


Here is the config from the mon:

NAME               VALUE          SOURCE     OVERRIDES   IGNORES
cluster_network    10.0.42.0/23   file                   (mon[10.0.42.0/23])
daemonize          false          override
debug_asok         0/0            mon
debug_auth         0/0            mon
debug_bdev         0/0            mon
debug_bluefs       0/0            mon
debug_bluestore    0/0            mon
debug_buffer       0/0            mon
debug_civetweb     0/0            mon
debug_client       0/0            mon
debug_compressor   0/0            mon
debug_context      0/0            mon
debug_crush        0/0            mon
debug_crypto       0/0            mon
debug_dpdk         0/0            mon
debug_eventtrace   0/0

Re: [ceph-users] Using Ceph Ansible to Add Nodes to Cluster at Weight 0

2019-06-23 Thread ceph
Hello,

I would advise using this script from Dan:
https://github.com/cernceph/ceph-scripts/blob/master/tools/ceph-gentle-reweight

I have used it many times and it works great, also if you want to drain the 
OSDs.

Hth
Mehmet

On 30 May 2019 at 22:59:05 MESZ, Michel Raabe wrote:
>Hi Mike,
>
>On 30.05.19 02:00, Mike Cave wrote:
>> I’d like as little friction for the cluster as possible as it is in 
>> heavy use right now.
>> 
>> I’m running mimic (13.2.5) on CentOS.
>> 
>> Any suggestions on best practices for this?
>
>You can limit the recovery, for example:
>
>* max backfills
>* recovery max active
>* recovery sleep
>
>It will slow down the rebalance but will not hurt the users too much.
>
>
>Michel.
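
For reference, the throttles Michel lists can also be applied at runtime while the new OSDs fill up; a rough sketch using injectargs, where the values are only examples to be tuned to the cluster:

# throttle backfill/recovery on all OSDs during the reweight
ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-sleep 0.5'

The same options can be made persistent with "ceph config set osd ..." on Mimic if that is preferred over injecting them.
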
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com