Re: [ceph-users] Nautilus - Balancer is always on

2019-08-07 Thread Konstantin Shalygin
ceph mgr module disable balancer Error EINVAL: module 'balancer' cannot be disabled (always-on) What's the way to restart the balancer? Restart the MGR service? I want to suggest to the balancer developers to set up a ceph-balancer.log for this module, to get more information about what it is doing. Maybe
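
A minimal sketch of the usual workaround, assuming a Nautilus cluster: the always-on module cannot be disabled, but the balancer itself can be switched off, and restarting the active mgr restarts the module with it (the mgr id below is a placeholder):

  # pause/resume automatic balancing without touching the module
  ceph balancer off
  ceph balancer status
  ceph balancer on
  # restarting the active mgr daemon restarts the always-on balancer module
  systemctl restart ceph-mgr@mon01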

Re: [ceph-users] Error Mounting CephFS

2019-08-07 Thread DHilsbos
JC; Excellent, thank you! I apologize, normally I'm better about RTFM... Thank you, Dominic L. Hilsbos, MBA Director – Information Technology Perform Air International Inc. dhils...@performair.com www.PerformAir.com From: JC Lopez [mailto:jelo...@redhat.com] Sent: Wednesday, August 07, 20

Re: [ceph-users] Error Mounting CephFS

2019-08-07 Thread JC Lopez
Hi, See https://docs.ceph.com/docs/nautilus/cephfs/kernel/ -o mds_namespace={fsname} Regards JC > On Aug 7, 2019, at 10:24, dhils...@performair.com wrote: > > All; > > Thank you for your assistance, this led me to the fact that I hadn't se
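
A minimal kernel-mount sketch using that option; monitor address, file system name, user and paths are placeholders:

  mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs \
        -o name=cephfs_user,secretfile=/etc/ceph/cephfs_user.secret,mds_namespace=myfs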

Re: [ceph-users] Can kstore be used as an OSD objectstore backend when deploying a Ceph Storage Cluster? If so, how?

2019-08-07 Thread Gregory Farnum
No; KStore is not for real use AFAIK. On Wed, Aug 7, 2019 at 12:24 AM R.R.Yuan wrote: > > Hi, All, > > When deploying a development cluster, there are three types of OSD > objectstore backend: filestore, bluestore and kstore. > But there is no "--kstore" option when using "ceph-de
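
For reference, a hedged sketch of the difference: ceph-deploy only exposes the production backends, while kstore is only wired into the developer vstart.sh environment (flags assumed from ceph-deploy 2.x and a Nautilus source tree):

  # real cluster: only filestore/bluestore are available
  ceph-deploy osd create --data /dev/sdb node1 --bluestore
  # development cluster built from source, where a kstore backend exists
  MON=1 OSD=3 MDS=0 ../src/vstart.sh -n --kstore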

Re: [ceph-users] Error Mounting CephFS

2019-08-07 Thread DHilsbos
All; Thank you for your assistance; this led me to the fact that I hadn't set up the Ceph repo on this client server, and the ceph-common I had installed was version 10. I got all of that squared away, and it all works. I do have a couple of follow-up questions: Can more than one system mount the
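
A sketch of the repo setup that was missing, assuming CentOS 7 and the upstream Nautilus packages at download.ceph.com. Put something like this in /etc/yum.repos.d/ceph.repo:

  [ceph]
  name=Ceph packages
  baseurl=https://download.ceph.com/rpm-nautilus/el7/x86_64
  enabled=1
  gpgcheck=1
  gpgkey=https://download.ceph.com/keys/release.asc

then reinstall the client bits:

  yum install -y ceph-common
  ceph --version   # should now report 14.2.x rather than 10.x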

[ceph-users] FYI: Mailing list domain change

2019-08-07 Thread David Galloway
Hi all, I am in the process of migrating the upstream Ceph mailing lists from Dreamhost to a self-hosted instance of Mailman 3. Please update your address book and mail filters to ceph-us...@ceph.io (notice the Top Level Domain change). You may receive a "Welcome" e-mail as I subscribe you to th

[ceph-users] ceph device list empty

2019-08-07 Thread Gary Molenkamp
I'm testing an upgrade to Nautilus on a development cluster and the command "ceph device ls" is returning an empty list. # ceph device ls DEVICE HOST:DEV DAEMONS LIFE EXPECTANCY # I have walked through the luminous upgrade documentation under https://docs.ceph.com/docs/master/releases/nautilus/
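
Two hedged things to check when the list comes back empty: device health collection has to be enabled, and the daemons have to have registered their devices (daemon/host names below are placeholders):

  # enable collection of device health metrics in the mgr
  ceph device monitoring on
  # see whether individual daemons/hosts report any devices at all
  ceph device ls-by-daemon osd.0
  ceph device ls-by-host ceph-osd01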

Re: [ceph-users] OSDs keep crashing after cluster reboot

2019-08-07 Thread Ansgar Jazdzewski
another update: we have now taken the more destructive route and removed the CephFS pools (luckily we had only test data in the filesystem). Our hope was that during the startup process the OSDs would delete the no-longer-needed PGs, but this is NOT the case. So we still have the same issue; the only diff
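
If it comes to manually dropping the leftover PGs of the deleted pools, the usual (offline and destructive) route is roughly the following; OSD path and PG id are placeholders, and the OSD must be stopped first:

  # with the OSD stopped, list the PGs still present on it
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 --op list-pgs
  # remove a PG belonging to a pool that no longer exists (pool id 7 here)
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 --pgid 7.1a --op remove --force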

Re: [ceph-users] bluestore write iops calculation

2019-08-07 Thread vitalif
I can add RAM, and is there a way to increase RocksDB caching? Can I increase bluestore_cache_size_hdd to a higher value to cache RocksDB? In recent releases it's governed by the osd_memory_target parameter. In previous releases it's bluestore_cache_size_hdd. Check the release notes to know for sure
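
A sketch of raising the target at runtime on a Mimic/Nautilus cluster (the 8 GiB value is just an example):

  # the BlueStore cache is autotuned to stay within this per-OSD budget
  ceph config set osd osd_memory_target 8589934592
  ceph config get osd.0 osd_memory_target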

Re: [ceph-users] New CRUSH device class questions

2019-08-07 Thread Paul Emmerich
On Wed, Aug 7, 2019 at 9:30 AM Robert LeBlanc wrote: >> # ceph osd crush rule dump replicated_racks_nvme >> { >> "rule_id": 0, >> "rule_name": "replicated_racks_nvme", >> "ruleset": 0, >> "type": 1, >> "min_size": 1, >> "max_size": 10, >> "steps": [ >>
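
For context, the device-class mechanism discussed in this thread lets a rule like the one dumped above be defined without a separate SSD/NVMe root (names here mirror the dump and are illustrative):

  # replicated rule choosing one nvme-class OSD per rack under the default root
  ceph osd crush rule create-replicated replicated_racks_nvme default rack nvme
  ceph osd crush rule dump replicated_racks_nvme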

Re: [ceph-users] out of memory bluestore osds

2019-08-07 Thread Mark Nelson
Hi Jaime, we only use the cache size parameters now if you've disabled autotuning.  With autotuning we adjust the cache size on the fly to try and keep the mapped process memory under the osd_memory_target.  You can set a lower memory target than default, though you will have far less cache
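
A hedged ceph.conf sketch for a memory-constrained node, keeping autotuning on but with a lower per-OSD budget (2 GiB here, below the 4 GiB default):

  [osd]
  # leave BlueStore cache autotuning enabled (the default)
  bluestore_cache_autotune = true
  # aim the whole OSD process at roughly 2 GiB of mapped memory
  osd_memory_target = 2147483648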

Re: [ceph-users] 14.2.2 - OSD Crash

2019-08-07 Thread Igor Fedotov
Manuel, well, this is a bit different from the tickets I shared... But still looks like slow DB access. 80+ seconds for submit/commit latency is TOO HIGH, this definitely might cause suicides... Have you had a chance to inspect disk utilization? Introducing NVMe drive for WAL/DB might be
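
A sketch of both follow-ups, with placeholder device and OSD id; the DB migration assumes Nautilus's ceph-bluestore-tool and a stopped OSD:

  # watch per-disk utilization and latency while the OSD is struggling
  iostat -x sdc 1
  # later: attach an NVMe partition as a dedicated DB device for the OSD
  ceph-bluestore-tool bluefs-bdev-new-db --path /var/lib/ceph/osd/ceph-31 --dev-target /dev/nvme0n1p1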

[ceph-users] out of memory bluestore osds

2019-08-07 Thread Jaime Ibar
Hi all, we run a Ceph Luminous 12.2.12 cluster with 7 OSD servers, 12x4TB disks each. Recently we redeployed the OSDs of one of them using the bluestore backend; however, after this we're facing out-of-memory errors (oom-killer invoked) and the OS kills one of the ceph-osd processes. The OSD is restarted a
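
One way to see where the memory of a surviving OSD actually goes, assuming access to the admin socket on the OSD host (OSD id is a placeholder):

  # per-OSD breakdown of mapped memory: bluestore caches, pglog, osdmaps, ...
  ceph daemon osd.3 dump_mempools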

Re: [ceph-users] 14.2.2 - OSD Crash

2019-08-07 Thread EDH - Manuel Rios Fernandez
Hi Igor, yes, we have everything on the same device: [root@CEPH-MON01 ~]# ceph osd df tree ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 31 130.96783 - 131 TiB 114 TiB 114 TiB 14 MiB 204 GiB 17 TiB 86.88 1.03 -

[ceph-users] Nautilus - Balancer is always on

2019-08-07 Thread EDH - Manuel Rios Fernandez
Hi All, ceph mgr module disable balancer Error EINVAL: module 'balancer' cannot be disabled (always-on) What's the way to restart the balancer? Restart the MGR service? I want to suggest to the balancer developers to set up a ceph-balancer.log for this module, to get more information about what it is doing.

Re: [ceph-users] 14.2.2 - OSD Crash

2019-08-07 Thread Igor Fedotov
Hi Manuel, as Brad pointed out timeouts and suicides are rather consequences of some other issues with OSDs. I recall at least two recent relevant tickets: https://tracker.ceph.com/issues/36482 https://tracker.ceph.com/issues/40741 (see last comments) Both had massive and slow reads from Ro
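
To check whether an OSD is in the same situation, the latency counters from its admin socket are a reasonable first look (OSD id is a placeholder; counter names vary slightly between releases):

  # look for the bluestore submit/commit and kv sync latencies in the output
  ceph daemon osd.31 perf dump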

Re: [ceph-users] Error Mounting CephFS

2019-08-07 Thread Frank Schilder
On CentOS 7, the option "secretfile" requires installation of ceph-fuse. Best regards, = Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: ceph-users on behalf of Yan, Zheng Sent: 07 August 2019 10:10:19 To: dhils...@performair.c

Re: [ceph-users] OSDs keep crashing after cluster reboot

2019-08-07 Thread Ansgar Jazdzewski
Hi, as a follow-up: * a full log of one OSD failing to start https://pastebin.com/T8UQ2rZ6 * our EC pool creation in the first place https://pastebin.com/20cC06Jn * ceph osd dump and ceph osd erasure-code-profile get cephfs https://pastebin.com/TRLPaWcH as we try to dig more into it, it looks like

Re: [ceph-users] Error Mounting CephFS

2019-08-07 Thread Yan, Zheng
On Wed, Aug 7, 2019 at 3:46 PM wrote: > > All; > > I have a server running CentOS 7.6 (1810), that I want to set up with CephFS > (full disclosure, I'm going to be running samba on the CephFS). I can mount > the CephFS fine when I use the option secret=, but when I switch to > secretfile=, I g
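
A sketch of the two variants being compared, with placeholder credentials; the file given to secretfile= must contain only the base64 key:

  # extract the client key into a root-only file
  ceph auth get-key client.cephfs_user > /etc/ceph/cephfs_user.secret
  chmod 600 /etc/ceph/cephfs_user.secret
  # inline key vs. key read from a file (the latter relies on the mount.ceph helper)
  mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs -o name=cephfs_user,secret=AQBexamplekey==
  mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs -o name=cephfs_user,secretfile=/etc/ceph/cephfs_user.secret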

Re: [ceph-users] New CRUSH device class questions

2019-08-07 Thread Konstantin Shalygin
On 8/7/19 2:30 PM, Robert LeBlanc wrote: ... plus 11 more hosts just like this Interesting. Please paste the full `ceph osd df tree`. What are your actual NVMe models? Yes, our HDD cluster is much like this, but not Luminous, so we created a separate root with SSD OSDs for the metadata and set
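
The requested output, plus (for the metadata layout question) the device-class way of pinning a pool to fast media without a separate root; the rule name comes from the dump earlier in this thread, the pool name is a placeholder:

  ceph osd df tree
  # move the CephFS metadata pool onto the NVMe-only rule
  ceph osd pool set cephfs_metadata crush_rule replicated_racks_nvme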

Re: [ceph-users] New CRUSH device class questions

2019-08-07 Thread Robert LeBlanc
On Wed, Aug 7, 2019 at 12:08 AM Konstantin Shalygin wrote: > On 8/7/19 1:40 PM, Robert LeBlanc wrote: > > > Maybe it's the lateness of the day, but I'm not sure how to do that. > > Do you have an example where all the OSDs are of class ssd? > Can't parse what you mean. You always should paste you

[ceph-users] Can kstore be used as an OSD objectstore backend when deploying a Ceph Storage Cluster? If so, how?

2019-08-07 Thread R.R.Yuan
Hi, All, When deploying a development cluster, there are three types of OSD objectstore backend: filestore, bluestore and kstore. But there is no "--kstore" option when using the "ceph-deploy osd" command to deploy a real ceph cluster. Can kstore be used as an OSD objectst

Re: [ceph-users] New CRUSH device class questions

2019-08-07 Thread Konstantin Shalygin
On 8/7/19 1:40 PM, Robert LeBlanc wrote: Maybe it's the lateness of the day, but I'm not sure how to do that. Do you have an example where all the OSDs are of class ssd? Can't parse what you mean. You should always paste your `ceph osd tree` first. Yes, we can set quotas to limit space usage
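
A sketch of the quota commands being referred to, with a placeholder pool name and limits:

  # cap a pool at 10 TiB and/or a maximum object count
  ceph osd pool set-quota cephfs_data max_bytes 10995116277760
  ceph osd pool set-quota cephfs_data max_objects 100000000
  ceph osd pool get-quota cephfs_data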