[ceph-users] Re: how to restart daemons on 15.2 on Debian 10

2020-05-15 Thread Simon Sutter
Hello Michael,


I had the same problems. It's very unfamiliar if you have never worked with
the cephadm tool.

The way I do it is to go into the cephadm container:
# cephadm shell

Here you can list all containers (one container per daemon) with the
orchestration tool:

# ceph orch ps

and then restart it with the orchestration tool:

# ceph orch daemon restart {daemon name from ceph orch ps}
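With cephadm, the daemon names printed by "ceph orch ps" look like osd.10 or
mon.ceph03; a single daemon is restarted with the daemon subcommand, while
"ceph orch restart" takes a whole service. A sketch, using example daemon and
service names:

# ceph orch daemon restart osd.10
# ceph orch daemon restart mon.ceph03
# ceph orch restart mon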


Hope it helps.
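Regarding the missing logs: with cephadm, daemons log to the host's journal by
default instead of /var/log/ceph. A sketch of where to look, reusing the fsid
from your mail and an example daemon name:

# cephadm logs --name osd.10
# journalctl -u ceph-5436dd5d-83d4-4dc8-a93b-60ab5db145df@osd.10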


Cheers,

Simon


From: Ml Ml 
Sent: Friday, 15 May 2020 12:27:09
To: ceph-users
Subject: [ceph-users] how to restart daemons on 15.2 on Debian 10

Hello List,

how do you restart daemons (mgr, mon, osd) on 15.2.1?

It used to be something like:
  systemctl stop ceph-osd@10

Or:
  systemctl start ceph-mon@ceph03

However, those commands do nothing on my setup.

Is this because I use cephadm and that Docker stuff?

The logs also seem to be missing;
/var/log/ceph/5436dd5d-83d4-4dc8-a93b-60ab5db145df is pretty empty.

I feel like I am missing a lot of documentation here. Can anyone point
me to the parts I am missing?

Thanks a lot.

Cheers,
Michael
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] how to restart daemons on 15.2 on Debian 10

2020-05-15 Thread Ml Ml
Hello List,

how do you restart daemons (mgr, mon, osd) on 15.2.1?

It used to be something like:
  systemctl stop ceph-osd@10

Or:
  systemctl start ceph-mon@ceph03

However, those commands do nothing on my setup.

Is this because I use cephadm and that Docker stuff?

The logs also seem to be missing;
/var/log/ceph/5436dd5d-83d4-4dc8-a93b-60ab5db145df is pretty empty.

I feel like I am missing a lot of documentation here. Can anyone point
me to the parts I am missing?

Thanks a lot.

Cheers,
Michael


[ceph-users] Re: EC Plugins Benchmark with Current Intel/AMD CPU

2020-05-15 Thread Lazuardi Nasution
Hi Marc,

I read
https://blog.dachary.org/2015/05/12/ceph-jerasure-and-isa-plugins-benchmarks/
and it seems there is a significant performance difference between the
benchmarked plugins. But that was done with the old Intel E3-1200v2 series. I
have seen other EC benchmark results improve with current CPUs, but I cannot
find any for the Ceph implementation.
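For what it's worth, the numbers in that post came from Ceph's own
microbenchmark tool, which could be re-run on current CPUs. A sketch of an
invocation (flags as used in the blog post; the binary usually ships in the
ceph-test package, and the exact options may vary per release):

  ceph_erasure_code_benchmark --plugin jerasure \
    --parameter k=4 --parameter m=2 --parameter technique=reed_sol_van \
    --workload encode --iterations 1000 --size 4194304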

Best regards,

On Fri, May 15, 2020 at 9:03 PM Marc Roos  wrote:

>
> What percentage of the latency is even CPU-related?
>
>
>
> -Original Message-
> From: Lazuardi Nasution [mailto:mrxlazuar...@gmail.com]
> Sent: 15 May 2020 16:00
> To: ceph-users@ceph.io
> Subject: [ceph-users] EC Plugins Benchmark with Current Intel/AMD CPU
>
> Hi,
>
> Is there any EC plugin benchmark with current Intel/AMD CPUs? It seems
> there are new instructions which may accelerate EC. Let's say we want to
> benchmark the plugins using the Intel 6200 or AMD 7002 series. I hope the
> results are better than what was benchmarked some years ago.
>
> Best regards,
>
>
>


[ceph-users] Re: EC Plugins Benchmark with Current Intel/AMD CPU

2020-05-15 Thread Marc Roos


What percentage of the latency is even CPU-related?



-Original Message-
From: Lazuardi Nasution [mailto:mrxlazuar...@gmail.com] 
Sent: 15 May 2020 16:00
To: ceph-users@ceph.io
Subject: [ceph-users] EC Plugins Benchmark with Current Intel/AMD CPU

Hi,

Is there any EC plugin benchmark with current Intel/AMD CPUs? It seems
there are new instructions which may accelerate EC. Let's say we want to
benchmark the plugins using the Intel 6200 or AMD 7002 series. I hope the
results are better than what was benchmarked some years ago.

Best regards,


[ceph-users] EC Plugins Benchmark with Current Intel/AMD CPU

2020-05-15 Thread Lazuardi Nasution
Hi,

Is there any EC plugin benchmark with current Intel/AMD CPUs? It seems
there are new instructions which may accelerate EC. Let's say we want to
benchmark the plugins using the Intel 6200 or AMD 7002 series. I hope the
results are better than what was benchmarked some years ago.

Best regards,


[ceph-users] Re: Cephfs - NFS Ganesha

2020-05-15 Thread Daniel Gryniewicz
It sounds like you're putting the FSAL_CEPH config in another file in 
/etc/ganesha.  Ganesha only loads one file: /etc/ganesha/ganesha.conf - 
other files need to be included in that file with the %include command. 
For a simple config like yours, just use the single 
/etc/ganesha/ganesha.conf file.
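For reference, a sketch of such an %include line, assuming a hypothetical
extra file name:

%include "/etc/ganesha/ceph-export.conf"

This line goes in /etc/ganesha/ganesha.conf itself; Ganesha then reads the
included file as part of the one config file it loads.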


Daniel

On 5/15/20 4:59 AM, Amudhan P wrote:

Hi Rafael,

I have used the config you provided, but I am still not able to mount NFS.
I don't see any error in the log messages.

Output from ganesha.log
---
15/05/2020 08:50:43 : epoch 5ebe57e3 : strgcntrl : ganesha.nfsd-8732[main]
main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 2.6.0
15/05/2020 08:50:43 : epoch 5ebe57e3 : strgcntrl : ganesha.nfsd-8738[main]
nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file
successfully parsed
15/05/2020 08:50:43 : epoch 5ebe57e3 : strgcntrl : ganesha.nfsd-8738[main]
init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
15/05/2020 08:50:43 : epoch 5ebe57e3 : strgcntrl : ganesha.nfsd-8738[main]
init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
15/05/2020 08:50:43 : epoch 5ebe57e3 : strgcntrl : ganesha.nfsd-8738[main]
lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully
removed for proper quota management in FSAL
15/05/2020 08:50:43 : epoch 5ebe57e3 : strgcntrl : ganesha.nfsd-8738[main]
lower_my_caps :NFS STARTUP :EVENT :currenty set capabilities are: =
cap_chown,cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_linux_immutable,cap_net_bind_service,cap_net_broadcast,cap_net_admin,cap_net_raw,cap_ipc_lock,cap_ipc_owner,cap_sys_module,cap_sys_rawio,cap_sys_chroot,cap_sys_ptrace,cap_sys_pacct,cap_sys_admin,cap_sys_boot,cap_sys_nice,cap_sys_time,cap_sys_tty_config,cap_mknod,cap_lease,cap_audit_write,cap_audit_control,cap_setfcap+ep
15/05/2020 08:50:43 : epoch 5ebe57e3 : strgcntrl : ganesha.nfsd-8738[main]
nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
15/05/2020 08:50:43 : epoch 5ebe57e3 : strgcntrl : ganesha.nfsd-8738[main]
nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
15/05/2020 08:50:43 : epoch 5ebe57e3 : strgcntrl : ganesha.nfsd-8738[main]
nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
15/05/2020 08:50:43 : epoch 5ebe57e3 : strgcntrl : ganesha.nfsd-8738[main]
nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory
(/var/run/ganesha) already exists
15/05/2020 08:50:43 : epoch 5ebe57e3 : strgcntrl : ganesha.nfsd-8738[main]
nfs_rpc_cb_init_ccache :NFS STARTUP :WARN
:gssd_refresh_krb5_machine_credential failed (-1765328160:0)
15/05/2020 08:50:43 : epoch 5ebe57e3 : strgcntrl : ganesha.nfsd-8738[main]
nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
15/05/2020 08:50:43 : epoch 5ebe57e3 : strgcntrl : ganesha.nfsd-8738[main]
nfs_Start_threads :THREAD :EVENT :9P/TCP dispatcher thread was started
successfully
15/05/2020 08:50:43 : epoch 5ebe57e3 : strgcntrl :
ganesha.nfsd-8738[_9p_disp] _9p_dispatcher_thread :9P DISP :EVENT :9P
dispatcher started
15/05/2020 08:50:43 : epoch 5ebe57e3 : strgcntrl : ganesha.nfsd-8738[main]
nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
15/05/2020 08:50:43 : epoch 5ebe57e3 : strgcntrl : ganesha.nfsd-8738[main]
nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
15/05/2020 08:50:43 : epoch 5ebe57e3 : strgcntrl : ganesha.nfsd-8738[main]
nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
15/05/2020 08:50:43 : epoch 5ebe57e3 : strgcntrl : ganesha.nfsd-8738[main]
nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
15/05/2020 08:50:43 : epoch 5ebe57e3 : strgcntrl : ganesha.nfsd-8738[main]
nfs_start :NFS STARTUP :EVENT
:-
15/05/2020 08:50:43 : epoch 5ebe57e3 : strgcntrl : ganesha.nfsd-8738[main]
nfs_start :NFS STARTUP :EVENT : NFS SERVER INITIALIZED
15/05/2020 08:50:43 : epoch 5ebe57e3 : strgcntrl : ganesha.nfsd-8738[main]
nfs_start :NFS STARTUP :EVENT
:-
15/05/2020 08:52:13 : epoch 5ebe57e3 : strgcntrl :
ganesha.nfsd-8738[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server
Now NOT IN GRACE

Regards
Amudhan P

On Fri, May 15, 2020 at 1:01 PM Rafael Lopez  wrote:


Hello Amudhan,

The only Ceph-specific thing required in the ganesha config is to add the
FSAL block to your export; everything else is standard ganesha config as
far as I know. E.g. this would export the root dir of your cephfs as
nfs-server:/cephfs:
EXPORT
{
 Export_ID = 100;
 Path = /;
 Pseudo = /cephfs;
 FSAL {
 Name = CEPH;
 User_Id = cephfs_cephx_user;
 }
 CLIENT {
 Clients =  1.2.3.4;
 Access_type = RW;
 }
}

This will rely on ceph config in /etc/ceph/ceph.conf containing typical
cluster client conn

[ceph-users] Re: Cephfs - NFS Ganesha

2020-05-15 Thread Rafael Lopez
Hello Amudhan,

The only Ceph-specific thing required in the ganesha config is to add the
FSAL block to your export; everything else is standard ganesha config as
far as I know. E.g. this would export the root dir of your cephfs as
nfs-server:/cephfs:
EXPORT
{
    Export_ID = 100;
    Path = /;
    Pseudo = /cephfs;
    FSAL {
        Name = CEPH;
        User_Id = cephfs_cephx_user;
    }
    CLIENT {
        Clients = 1.2.3.4;
        Access_type = RW;
    }
}

This will rely on ceph config in /etc/ceph/ceph.conf containing typical
cluster client connection info (cluster id, mon addresses etc).
You also have to have the specified cephx user configured for cephfs
access, including the keyring file in
/etc/ceph/ceph.client.cephfs_cephx_user.keyring.
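A sketch of creating such a user and then verifying the export from a client;
the user name matches the example above, but the caps, pool name, server name
and mount point are illustrative assumptions, not from this thread:

# ceph auth get-or-create client.cephfs_cephx_user \
    mon 'allow r' mds 'allow rw' osd 'allow rw pool=cephfs_data' \
    -o /etc/ceph/ceph.client.cephfs_cephx_user.keyring
# mount -t nfs nfs-server:/cephfs /mnt/cephfs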

Your cephx user could be the same one you use to mount the FS using kernel
client, but you will need the keyring file in place, and the ceph.conf.

Not sure how many changes have been made to config since ganesha 2.6, but
the 2.6 version of the sample is here:
https://github.com/nfs-ganesha/nfs-ganesha/blob/V2.6-stable/src/config_samples/ceph.conf

You should be able to see if there were any issues loading configuration
params or the ceph fsal in the ganesha log, typically /var/log/ganesha.log
or /var/log/ganesha/ganesha.log.

On Fri, 15 May 2020 at 17:12, Amudhan P  wrote:

> Hi,
>
> I am trying to set up NFS Ganesha on Ceph Nautilus.
>
> On an Ubuntu 18.04 system I have installed the nfs-ganesha (v2.6) and
> nfs-ganesha-ceph packages and followed the steps in
> https://docs.ceph.com/docs/nautilus/cephfs/nfs/, but I am not able to
> export my cephfs volume. There is no error msg in nfs-ganesha, and I also
> doubt whether it is loading the nfs-ganesha-ceph config file from the
> "/etc/ganesha" folder.
>
> From the same system I am able to mount through the ceph kernel client
> without any issue.
>
> How do I make this work?
>
> regards
> Amudhan
>


-- 
*Rafael Lopez*
Devops Systems Engineer
Monash University eResearch Centre
E: rafael.lo...@monash.edu


[ceph-users] Re: Cephfs - NFS Ganesha

2020-05-15 Thread Amudhan P
Hi Rafael,

I have used the config you provided, but I am still not able to mount NFS.
I don't see any error in the log messages.

Output from ganesha.log
---
15/05/2020 08:50:43 : epoch 5ebe57e3 : strgcntrl : ganesha.nfsd-8732[main]
main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 2.6.0
15/05/2020 08:50:43 : epoch 5ebe57e3 : strgcntrl : ganesha.nfsd-8738[main]
nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file
successfully parsed
15/05/2020 08:50:43 : epoch 5ebe57e3 : strgcntrl : ganesha.nfsd-8738[main]
init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
15/05/2020 08:50:43 : epoch 5ebe57e3 : strgcntrl : ganesha.nfsd-8738[main]
init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
15/05/2020 08:50:43 : epoch 5ebe57e3 : strgcntrl : ganesha.nfsd-8738[main]
lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully
removed for proper quota management in FSAL
15/05/2020 08:50:43 : epoch 5ebe57e3 : strgcntrl : ganesha.nfsd-8738[main]
lower_my_caps :NFS STARTUP :EVENT :currenty set capabilities are: =
cap_chown,cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_linux_immutable,cap_net_bind_service,cap_net_broadcast,cap_net_admin,cap_net_raw,cap_ipc_lock,cap_ipc_owner,cap_sys_module,cap_sys_rawio,cap_sys_chroot,cap_sys_ptrace,cap_sys_pacct,cap_sys_admin,cap_sys_boot,cap_sys_nice,cap_sys_time,cap_sys_tty_config,cap_mknod,cap_lease,cap_audit_write,cap_audit_control,cap_setfcap+ep
15/05/2020 08:50:43 : epoch 5ebe57e3 : strgcntrl : ganesha.nfsd-8738[main]
nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
15/05/2020 08:50:43 : epoch 5ebe57e3 : strgcntrl : ganesha.nfsd-8738[main]
nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
15/05/2020 08:50:43 : epoch 5ebe57e3 : strgcntrl : ganesha.nfsd-8738[main]
nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
15/05/2020 08:50:43 : epoch 5ebe57e3 : strgcntrl : ganesha.nfsd-8738[main]
nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory
(/var/run/ganesha) already exists
15/05/2020 08:50:43 : epoch 5ebe57e3 : strgcntrl : ganesha.nfsd-8738[main]
nfs_rpc_cb_init_ccache :NFS STARTUP :WARN
:gssd_refresh_krb5_machine_credential failed (-1765328160:0)
15/05/2020 08:50:43 : epoch 5ebe57e3 : strgcntrl : ganesha.nfsd-8738[main]
nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
15/05/2020 08:50:43 : epoch 5ebe57e3 : strgcntrl : ganesha.nfsd-8738[main]
nfs_Start_threads :THREAD :EVENT :9P/TCP dispatcher thread was started
successfully
15/05/2020 08:50:43 : epoch 5ebe57e3 : strgcntrl :
ganesha.nfsd-8738[_9p_disp] _9p_dispatcher_thread :9P DISP :EVENT :9P
dispatcher started
15/05/2020 08:50:43 : epoch 5ebe57e3 : strgcntrl : ganesha.nfsd-8738[main]
nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
15/05/2020 08:50:43 : epoch 5ebe57e3 : strgcntrl : ganesha.nfsd-8738[main]
nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
15/05/2020 08:50:43 : epoch 5ebe57e3 : strgcntrl : ganesha.nfsd-8738[main]
nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
15/05/2020 08:50:43 : epoch 5ebe57e3 : strgcntrl : ganesha.nfsd-8738[main]
nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
15/05/2020 08:50:43 : epoch 5ebe57e3 : strgcntrl : ganesha.nfsd-8738[main]
nfs_start :NFS STARTUP :EVENT
:-
15/05/2020 08:50:43 : epoch 5ebe57e3 : strgcntrl : ganesha.nfsd-8738[main]
nfs_start :NFS STARTUP :EVENT : NFS SERVER INITIALIZED
15/05/2020 08:50:43 : epoch 5ebe57e3 : strgcntrl : ganesha.nfsd-8738[main]
nfs_start :NFS STARTUP :EVENT
:-
15/05/2020 08:52:13 : epoch 5ebe57e3 : strgcntrl :
ganesha.nfsd-8738[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server
Now NOT IN GRACE

Regards
Amudhan P

On Fri, May 15, 2020 at 1:01 PM Rafael Lopez  wrote:

> Hello Amudhan,
>
> The only Ceph-specific thing required in the ganesha config is to add the
> FSAL block to your export; everything else is standard ganesha config as
> far as I know. E.g. this would export the root dir of your cephfs as
> nfs-server:/cephfs:
> EXPORT
> {
>     Export_ID = 100;
>     Path = /;
>     Pseudo = /cephfs;
>     FSAL {
>         Name = CEPH;
>         User_Id = cephfs_cephx_user;
>     }
>     CLIENT {
>         Clients = 1.2.3.4;
>         Access_type = RW;
>     }
> }
>
> This will rely on ceph config in /etc/ceph/ceph.conf containing typical
> cluster client connection info (cluster id, mon addresses etc).
> You also have to have the specified cephx user configured for cephfs
> access, including the keyring file in
> /etc/ceph/ceph.client.cephfs_cephx_user.keyring.
>
> Your cephx user could be the same one you use to mount the FS using kernel
> client, but you will need the k

[ceph-users] Re: Ceph modules

2020-05-15 Thread Konstantin Shalygin

Hi,

On 5/15/20 2:37 PM, Alfredo De Luca wrote:

Just a quick one. Are there any Ansible modules for Ceph around?


https://github.com/ceph/ceph-ansible



k


[ceph-users] Ceph modules

2020-05-15 Thread Alfredo De Luca
Hi all.
Just a quick one. Are there any Ansible modules for Ceph around?
Cheers

-- 
*/Alfredo*


[ceph-users] Cephfs - NFS Ganesha

2020-05-15 Thread Amudhan P
Hi,

I am trying to set up NFS Ganesha on Ceph Nautilus.

On an Ubuntu 18.04 system I have installed the nfs-ganesha (v2.6) and
nfs-ganesha-ceph packages and followed the steps in
https://docs.ceph.com/docs/nautilus/cephfs/nfs/, but I am not able to
export my cephfs volume. There is no error msg in nfs-ganesha, and I also
doubt whether it is loading the nfs-ganesha-ceph config file from the
"/etc/ganesha" folder.

From the same system I am able to mount through the ceph kernel client
without any issue.

How do I make this work?

regards
Amudhan