[ceph-users] Re: db_devices doesn't show up in exported osd service spec

2021-02-10 Thread Jens Hyllegaard (Soft Design A/S)
According to your "pvs" output you still have a VG on your sdb device. As long as that
is there, the device will not be available to Ceph. I have had to do an lvremove,
like this:
lvremove ceph-78c78efb-af86-427c-8be1-886fa1d54f8a 
osd-db-72784b7a-b5c0-46e6-8566-74758c297adc

Run the lvs command to see the right parameters.
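As a rough sketch (the VG/LV names below are placeholders; take the real ones from lvs/pvs, and only remove the DB LV that belonged to the OSD you removed):

```
lvs -o lv_name,vg_name,lv_size,lv_tags   # find the stale osd-db-* LV and its VG
lvremove <vg_name>/<osd_db_lv_name>      # remove only the DB LV of the removed OSD
pvs                                      # the SSD should now show the space as free again
```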

Regards

Jens

-Original Message-
From: Tony Liu  
Sent: 10. februar 2021 22:59
To: David Orman 
Cc: Jens Hyllegaard (Soft Design A/S) ; 
ceph-users@ceph.io
Subject: Re: [ceph-users] Re: db_devices doesn't show up in exported osd 
service spec

Hi David,

===
# pvs
  PV         VG                                                   Fmt  Attr PSize    PFree
  /dev/sda3  vg0                                                  lvm2 a--     1.09t       0
  /dev/sdb   ceph-block-dbs-f8d28f1f-2dd3-47d0-9110-959e88405112  lvm2 a--  <447.13g 127.75g
  /dev/sdc   ceph-block-8f85121e-98bf-4466-aaf3-d888bcc938f6      lvm2 a--     2.18t       0
  /dev/sde   ceph-block-0b47f685-a60b-42fb-b679-931ef763b3c8      lvm2 a--     2.18t       0
  /dev/sdf   ceph-block-c526140d-c75f-4b0d-8c63-fbb2a8abfaa2      lvm2 a--     2.18t       0
  /dev/sdg   ceph-block-52b422f7-900a-45ff-a809-69fadabe12fa      lvm2 a--     2.18t       0
  /dev/sdh   ceph-block-da269f0d-ae11-4178-bf1e-6441b8800336      lvm2 a--     2.18t       0
===
After "orch osd rm", which doesn't clean up DB LV on OSD node, I manually clean 
it up by running "ceph-volume lvm zap --osd-id 12", which does the cleanup.
Is "orch device ls" supposed to show SSD device available if there is free 
space?
That could be another issue.

Thanks!
Tony

From: David Orman 
Sent: February 10, 2021 01:19 PM
To: Tony Liu
Cc: Jens Hyllegaard (Soft Design A/S); ceph-users@ceph.io
Subject: Re: [ceph-users] Re: db_devices doesn't show up in exported osd 
service spec

It's displaying sdb (which I assume you want to be used as a DB device) as 
unavailable. What does the "pvs" output look like on that "ceph-osd-1" host? Perhaps 
it is full. I see the other email you sent regarding replacement; I suspect the 
pre-existing LV from your previous OSD is not re-used. You may need to delete 
it; then the service specification should re-create it along with the OSD. If I 
remember correctly, I stopped the automatic application of the service spec 
(ceph orch rm osd.servicespec) when I had to replace a failed OSD, removed the 
OSD, nuked the LV on the db device in question, put in the new drive, then 
re-enabled the service spec (ceph orch apply osd -i) and the OSD + DB/WAL were 
created appropriately. I don't remember the exact sequence, and it may depend 
on the ceph version. I'm also unsure whether "orch osd rm  --replace 
[--force]" will preserve the db/wal mapping; it might be worth 
looking at in the future.
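Roughly, that sequence might look like the sketch below (service name, OSD id and LV names are placeholders, and the exact commands may differ between releases):

```
ceph orch rm osd.osd-spec              # stop cephadm from re-applying the OSD service spec
ceph orch osd rm <osd_id>              # remove the failed OSD
lvremove <db_vg>/<osd_db_lv>           # drop the stale DB LV on the shared SSD
# ...replace the failed data drive...
ceph orch apply osd -i osd-spec.yaml   # re-enable the spec; OSD + DB/WAL get recreated
```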

On Wed, Feb 10, 2021 at 2:22 PM Tony Liu <tonyliu0...@hotmail.com> wrote:
Hi David,

The requested info is below.

# ceph orch device ls ceph-osd-1
HOST        PATH      TYPE   SIZE  DEVICE_ID                           MODEL            VENDOR   ROTATIONAL  AVAIL  REJECT REASONS
ceph-osd-1  /dev/sdd  hdd   2235G  SEAGATE_DL2400MM0159_WBM2VL2G       DL2400MM0159     SEAGATE  1           True
ceph-osd-1  /dev/sda  hdd   1117G  SEAGATE_ST1200MM0099_WFK4NNDY       ST1200MM0099     SEAGATE  1           False  LVM detected, Insufficient space (<5GB) on vgs, locked
ceph-osd-1  /dev/sdb  ssd    447G  ATA_MZ7KH480HAHQ0D3_S5CNNA0N305738  MZ7KH480HAHQ0D3  ATA      0           False  LVM detected, locked
ceph-osd-1  /dev/sdc  hdd   2235G  SEAGATE_DL2400MM0159_WBM2WNSE       DL2400MM0159     SEAGATE  1           False  LVM detected, Insufficient space (<5GB) on vgs, locked
ceph-osd-1  /dev/sde  hdd   2235G  SEAGATE_DL2400MM0159_WBM2WP2S       DL2400MM0159     SEAGATE  1           False  LVM detected, Insufficient space (<5GB) on vgs, locked
ceph-osd-1  /dev/sdf  hdd   2235G  SEAGATE_DL2400MM0159_WBM2VK99       DL2400MM0159     SEAGATE  1           False  LVM detected, Insufficient space (<5GB) on vgs, locked
ceph-osd-1  /dev/sdg  hdd   2235G  SEAGATE_DL2400MM0159_WBM2VJBT       DL2400MM0159     SEAGATE  1           False  LVM detected, Insufficient space (<5GB) on vgs, locked
ceph-osd-1  /dev/sdh  hdd   2235G  SEAGATE_DL2400MM0159_WBM2VMFK       DL2400MM0159     SEAGATE  1           False  LVM detected, Insufficient space (<5GB) on vgs, locked
# cat osd-spec.yaml
service_type: osd
service_id: osd-spec
placement:
 hosts:
 - ceph-osd-1
spec:
  objectstore: bluestore
  #block_db_size: 32212254720
  block_db_size: 64424509440
  data_devices:
#rotational: 1
paths:
- /dev/sdd
  db_devices:
#rotational: 0
size: ":1T"
#unmanaged: true

# ceph orch apply osd -i osd-spec.yaml --dry-run
[dry-run preview table truncated in the archive]

[ceph-users] Re: db_devices doesn't show up in exported osd service spec

2021-02-09 Thread Jens Hyllegaard (Soft Design A/S)
Hi Tony.

I assume they used a size constraint instead of rotational. So if all your 
SSDs are 1 TB or less, and all HDDs are larger than that, you could use:

spec:
  objectstore: bluestore
  data_devices:
rotational: true
  filter_logic: AND
  db_devices:
size: ':1TB'

It was usable in my test environment, and seems to work.
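A dry run should also show whether the filter picks up the SSD as a DB device before anything is created, e.g.:

```
ceph orch apply osd -i osd-spec.yaml --dry-run
```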

Regards

Jens


-Original Message-
From: Tony Liu  
Sent: 9. februar 2021 02:09
To: David Orman 
Cc: ceph-users@ceph.io
Subject: [ceph-users] Re: db_devices doesn't show up in exported osd service 
spec

Hi David,

Could you show me an example of an OSD service spec YAML to work around it by 
specifying size?

Thanks!
Tony

From: David Orman 
Sent: February 8, 2021 04:06 PM
To: Tony Liu
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Re: db_devices doesn't show up in exported osd 
service spec

Adding ceph-users:

We ran into this same issue, and we used a size specification as a workaround for 
now.

Bug and patch:

https://tracker.ceph.com/issues/49014
https://github.com/ceph/ceph/pull/39083

Backport to Octopus:

https://github.com/ceph/ceph/pull/39171

On Sat, Feb 6, 2021 at 7:05 PM Tony Liu <tonyliu0...@hotmail.com> wrote:
Add dev to comment.

With 15.2.8, when applying the OSD service spec, db_devices is gone.
Here is the service spec file.
==
service_type: osd
service_id: osd-spec
placement:
  hosts:
  - ceph-osd-1
spec:
  objectstore: bluestore
  data_devices:
rotational: 1
  db_devices:
rotational: 0
==

Here is the logging from the mon. The message with "Tony" was added by me in the mgr to 
confirm. The audit from the mon shows db_devices is gone.
Is there anything in the mon that filters it out based on host info?
How can I trace it?
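Is there a way to inspect what actually got saved, e.g. something like this (guessing the key name from the audit log below)?

```
ceph orch ls --service_name osd.osd-spec --export
ceph config-key get mgr/cephadm/spec.osd.osd-spec
```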
==
audit 2021-02-07T00:45:38.106171+ mgr.ceph-control-1.nxjnzz (mgr.24142551) 4020 : audit [DBG] from='client.24184218 -' entity='client.admin' cmd=[{"prefix": "orch apply osd", "target": ["mon-mgr", ""]}]: dispatch
cephadm 2021-02-07T00:45:38.108546+ mgr.ceph-control-1.nxjnzz (mgr.24142551) 4021 : cephadm [INF] Marking host: ceph-osd-1 for OSDSpec preview refresh.
cephadm 2021-02-07T00:45:38.108798+ mgr.ceph-control-1.nxjnzz (mgr.24142551) 4022 : cephadm [INF] Saving service osd.osd-spec spec with placement ceph-osd-1
cephadm 2021-02-07T00:45:38.108893+ mgr.ceph-control-1.nxjnzz (mgr.24142551) 4023 : cephadm [INF] Tony: spec: placement=PlacementSpec(hosts=[HostPlacementSpec(hostname='ceph-osd-1', network='', name='')]), service_id='osd-spec', service_type='osd', data_devices=DeviceSelection(rotational=1, all=False), db_devices=DeviceSelection(rotational=0, all=False), osd_id_claims={}, unmanaged=False, filter_logic='AND', preview_only=False)>
audit 2021-02-07T00:45:38.109782+ mon.ceph-control-3 (mon.2) 25 : audit [INF] from='mgr.24142551 10.6.50.30:0/2838166251' entity='mgr.ceph-control-1.nxjnzz' cmd=[{"prefix":"config-key set","key":"mgr/cephadm/spec.osd.osd-spec","val":"{\"created\": \"2021-02-07T00:45:38.108810\", \"spec\": {\"placement\": {\"hosts\": [\"ceph-osd-1\"]}, \"service_id\": \"osd-spec\", \"service_name\": \"osd.osd-spec\", \"service_type\": \"osd\", \"spec\": {\"data_devices\": {\"rotational\": 1}, \"filter_logic\": \"AND\", \"objectstore\": \"bluestore\"}}}"}]: dispatch
audit 2021-02-07T00:45:38.110133+ mon.ceph-control-1 (mon.0) 107 : audit [INF] from='mgr.24142551 ' entity='mgr.ceph-control-1.nxjnzz' cmd=[{"prefix":"config-key set","key":"mgr/cephadm/spec.osd.osd-spec","val":"{\"created\": \"2021-02-07T00:45:38.108810\", \"spec\": {\"placement\": {\"hosts\": [\"ceph-osd-1\"]}, \"service_id\": \"osd-spec\", \"service_name\": \"osd.osd-spec\", \"service_type\": \"osd\", \"spec\": {\"data_devices\": {\"rotational\": 1}, \"filter_logic\": \"AND\", \"objectstore\": \"bluestore\"}}}"}]: dispatch
audit 2021-02-07T00:45:38.152756+ mon.ceph-control-1 (mon.0) 108 : audit [INF] from='mgr.24142551 ' entity='mgr.ceph-control-1.nxjnzz' cmd='[{"prefix":"config-key set","key":"mgr/cephadm/spec.osd.osd-spec","val":"{\"created\": \"2021-02-07T00:45:38.108810\", \"spec\": {\"placement\": {\"hosts\": [\"ceph-osd-

[ceph-users] Re: db_devices doesn't show up in exported osd service spec

2021-02-04 Thread Jens Hyllegaard (Soft Design A/S)
Hi.

I have the same situation. Running 15.2.8
I created a specification that looked just like it. With rotational in the data 
and non-rotational in the db.

The first apply worked fine. Afterwards it only uses the HDDs, and not the SSD.
Also, is there a way to remove an unused osd service?
I managed to create osd.all-available-devices when I tried to stop the 
auto-creation of OSDs, using "ceph orch apply osd --all-available-devices 
--unmanaged=true".

I created the original OSD using the web interface.

Regards

Jens
-Original Message-
From: Eugen Block  
Sent: 3. februar 2021 11:40
To: Tony Liu 
Cc: ceph-users@ceph.io
Subject: [ceph-users] Re: db_devices doesn't show up in exported osd service 
spec

How do you manage the db_sizes of your SSDs? Is that managed automatically by 
ceph-volume? You could try to add another config and see what it does, maybe 
try to add block_db_size?
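For example, something along these lines (just a sketch of the idea; the size is arbitrary):

```
service_type: osd
service_id: osd-spec
placement:
  host_pattern: ceph-osd-[1-3]
data_devices:
  rotational: 1
db_devices:
  rotational: 0
block_db_size: 64G
```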


Zitat von Tony Liu :

> All mon, mgr, crash and osd are upgraded to 15.2.8. It actually fixed 
> another issue (no device listed after adding host).
> But this issue remains.
> ```
> # cat osd-spec.yaml
> service_type: osd
> service_id: osd-spec
> placement:
>   host_pattern: ceph-osd-[1-3]
> data_devices:
>   rotational: 1
> db_devices:
>   rotational: 0
>
> # ceph orch apply osd -i osd-spec.yaml Scheduled osd.osd-spec 
> update...
>
> # ceph orch ls --service_name osd.osd-spec --export
> service_type: osd
> service_id: osd-spec
> service_name: osd.osd-spec
> placement:
>   host_pattern: ceph-osd-[1-3]
> spec:
>   data_devices:
> rotational: 1
>   filter_logic: AND
>   objectstore: bluestore
> ```
> db_devices still doesn't show up.
> Keep scratching my head...
>
>
> Thanks!
> Tony
>> -Original Message-
>> From: Eugen Block 
>> Sent: Tuesday, February 2, 2021 2:20 AM
>> To: ceph-users@ceph.io
>> Subject: [ceph-users] Re: db_devices doesn't show up in exported osd 
>> service spec
>>
>> Hi,
>>
>> I would recommend to update (again), here's my output from a 15.2.8 
>> test
>> cluster:
>>
>>
>> host1:~ # ceph orch ls --service_name osd.default --export
>> service_type: osd
>> service_id: default
>> service_name: osd.default
>> placement:
>>hosts:
>>- host4
>>- host3
>>- host1
>>- host2
>> spec:
>>block_db_size: 4G
>>data_devices:
>>  rotational: 1
>>  size: '20G:'
>>db_devices:
>>  size: '10G:'
>>filter_logic: AND
>>objectstore: bluestore
>>
>>
>> Regards,
>> Eugen
>>
>>
>> Zitat von Tony Liu :
>>
>> > Hi,
>> >
>> > When build cluster Octopus 15.2.5 initially, here is the OSD 
>> > service spec file applied.
>> > ```
>> > service_type: osd
>> > service_id: osd-spec
>> > placement:
>> >   host_pattern: ceph-osd-[1-3]
>> > data_devices:
>> >   rotational: 1
>> > db_devices:
>> >   rotational: 0
>> > ```
>> > After applying it, all HDDs were added and DB of each hdd is 
>> > created on SSD.
>> >
>> > Here is the export of OSD service spec.
>> > ```
>> > # ceph orch ls --service_name osd.osd-spec --export
>> > service_type: osd
>> > service_id: osd-spec
>> > service_name: osd.osd-spec
>> > placement:
>> >   host_pattern: ceph-osd-[1-3]
>> > spec:
>> >   data_devices:
>> > rotational: 1
>> >   filter_logic: AND
>> >   objectstore: bluestore
>> > ```
>> > Why db_devices doesn't show up there?
>> >
>> > When I replace a disk recently, when the new disk was installed and 
>> > zapped, OSD was automatically re-created, but DB was created on 
>> > HDD, not SSD. I assume this is because of that missing db_devices?
>> >
>> > I tried to update service spec, the same result, db_devices doesn't 
>> > show up when export it.
>> >
>> > Is this some known issue or something I am missing?
>> >
>> >
>> > Thanks!
>> > Tony
>> > ___
>> > ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send 
>> > an email to ceph-users-le...@ceph.io
>>
>>
>> ___
>> ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an 
>> email to ceph-users-le...@ceph.io


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] NFS version 4.0

2021-02-04 Thread Jens Hyllegaard (Soft Design A/S)
Hi.

We are trying to set up an NFS server using ceph which needs to be accessed by 
an IBM System i.
As far as I can tell, the IBM System i only supports NFS v4.0.
Looking at the nfs-ganesha deployments, it seems that these only support 4.1 or 
4.2. I have tried editing the configuration file to support 4.0, and it seems to 
work.
Is there a reason that it currently only supports 4.1 and 4.2?

I can of course edit the configuration file, but I would have to do that after 
any deployment or upgrade of the nfs servers.
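The kind of change I mean looks roughly like this (assuming the NFSV4 block's minor_versions option is the relevant knob; I am not sure it is officially supported):

```
NFSV4 {
    # allow 4.0 in addition to the default 4.1/4.2
    minor_versions = 0, 1, 2;
}
```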

Regards

Jens Hyllegaard
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Unable to use ceph command

2021-01-29 Thread Jens Hyllegaard (Soft Design A/S)
Hi.

I think you are right. I suspect that somehow the 
/etc/ceph/ceph.client.admin.keyring file disappeared on all the hosts.

I ended up reinstalling the cluster.

Thank you for your input.

Regards

Jens

-Original Message-
From: Robert Sander  
Sent: 29. januar 2021 13:29
To: ceph-users@ceph.io
Subject: [ceph-users] Re: Unable to use ceph command

Hi,

Am 26.01.21 um 14:58 schrieb Jens Hyllegaard (Soft Design A/S):
> 
> I am not sure why this is not working, but I am now unable to use the ceph 
> command on any of my hosts.
> 
> When I try to launch ceph, I get the following response:
> [errno 13] RADOS permission denied (error connecting to the cluster)

This issue is mostly caused by not having a readable ceph.conf and 
ceph.client.admin.keyring file in /etc/ceph for the user that starts the ceph 
command.
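For example, a quick check on an affected host might look like this (a sketch; "cephadm shell" brings its own config and keyring, so it should still work even if /etc/ceph is incomplete):

```
ls -l /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring   # both must exist and be readable
cephadm shell -- ceph -s                                        # verify the cluster itself is reachable
```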

Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: "ceph orch restart mgr" command creates mgr restart loop

2021-01-27 Thread Jens Hyllegaard (Soft Design A/S)
Hi Chris

Having also recently started exploring Ceph, I too happened upon this problem.
I found that terminating the command with Ctrl-C seemed to stop the looping, 
which, by the way, also happens on all other mgr instances in the cluster.

Regards

Jens

-Original Message-
From: Chris Read  
Sent: 11. januar 2021 21:54
To: ceph-users@ceph.io
Subject: [ceph-users] "ceph orch restart mgr" command creates mgr restart loop

Greetings all...

I'm busy testing out Ceph and have hit this troublesome bug while following the 
steps outlined here:

https://docs.ceph.com/en/octopus/cephadm/monitoring/#configuring-ssl-tls-for-grafana

When I issue the "ceph orch restart mgr" command, it appears the command is not 
cleared from a message queue somewhere (I'm still very unclear on many ceph 
specifics), and so each time the mgr process returns from restart it picks up 
the message again and keeps restarting itself forever (so far it's been stuck 
in this state for 45 minutes).

Watching the logs we see this going on:

$ ceph log last cephadm -w

root@ceph-poc-000:~# ceph log last cephadm -w
  cluster:
id: d23bc326-543a-11eb-bfe0-b324db228b6c
health: HEALTH_OK

  services:
mon: 5 daemons, quorum
ceph-poc-000,ceph-poc-003,ceph-poc-004,ceph-poc-002,ceph-poc-001 (age 2h)
mgr: ceph-poc-000.himivo(active, since 4s), standbys:
ceph-poc-001.unjulx
osd: 10 osds: 10 up (since 2h), 10 in (since 2h)

  data:
pools:   1 pools, 1 pgs
objects: 0 objects, 0 B
usage:   10 GiB used, 5.4 TiB / 5.5 TiB avail
pgs: 1 active+clean


2021-01-11T20:46:32.976606+ mon.ceph-poc-000 [INF] Active manager daemon 
ceph-poc-000.himivo restarted
2021-01-11T20:46:32.980749+ mon.ceph-poc-000 [INF] Activating manager 
daemon ceph-poc-000.himivo
2021-01-11T20:46:33.061519+ mon.ceph-poc-000 [INF] Manager daemon 
ceph-poc-000.himivo is now available
2021-01-11T20:46:39.156420+ mon.ceph-poc-000 [INF] Active manager daemon 
ceph-poc-000.himivo restarted
2021-01-11T20:46:39.160618+ mon.ceph-poc-000 [INF] Activating manager 
daemon ceph-poc-000.himivo
2021-01-11T20:46:39.242603+ mon.ceph-poc-000 [INF] Manager daemon 
ceph-poc-000.himivo is now available
2021-01-11T20:46:45.299953+ mon.ceph-poc-000 [INF] Active manager daemon 
ceph-poc-000.himivo restarted
2021-01-11T20:46:45.304006+ mon.ceph-poc-000 [INF] Activating manager 
daemon ceph-poc-000.himivo
2021-01-11T20:46:45.733495+ mon.ceph-poc-000 [INF] Manager daemon 
ceph-poc-000.himivo is now available
2021-01-11T20:46:51.871903+ mon.ceph-poc-000 [INF] Active manager daemon 
ceph-poc-000.himivo restarted
2021-01-11T20:46:51.877107+ mon.ceph-poc-000 [INF] Activating manager 
daemon ceph-poc-000.himivo
2021-01-11T20:46:51.976190+ mon.ceph-poc-000 [INF] Manager daemon 
ceph-poc-000.himivo is now available
2021-01-11T20:46:58.000720+ mon.ceph-poc-000 [INF] Active manager daemon 
ceph-poc-000.himivo restarted
2021-01-11T20:46:58.006843+ mon.ceph-poc-000 [INF] Activating manager 
daemon ceph-poc-000.himivo
2021-01-11T20:46:58.097163+ mon.ceph-poc-000 [INF] Manager daemon 
ceph-poc-000.himivo is now available
2021-01-11T20:47:04.188630+ mon.ceph-poc-000 [INF] Active manager daemon 
ceph-poc-000.himivo restarted
2021-01-11T20:47:04.193501+ mon.ceph-poc-000 [INF] Activating manager 
daemon ceph-poc-000.himivo
2021-01-11T20:47:04.285509+ mon.ceph-poc-000 [INF] Manager daemon 
ceph-poc-000.himivo is now available
2021-01-11T20:47:10.348099+ mon.ceph-poc-000 [INF] Active manager daemon 
ceph-poc-000.himivo restarted
2021-01-11T20:47:10.352340+ mon.ceph-poc-000 [INF] Activating manager 
daemon ceph-poc-000.himivo
2021-01-11T20:47:10.752243+ mon.ceph-poc-000 [INF] Manager daemon 
ceph-poc-000.himivo is now available

And in the logs for the mgr instance itself we see it keep replaying the 
message over and over:

$ docker logs -f ceph-d23bc326-543a-11eb-bfe0-b324db228b6c-mgr.ceph-poc-000.himivo
debug 2021-01-11T20:47:31.390+ 7f48b0d0d200  0 set uid:gid to 167:167 (ceph:ceph)
debug 2021-01-11T20:47:31.390+ 7f48b0d0d200  0 ceph version 15.2.8 (bdf3eebcd22d7d0b3dd4d5501bee5bac354d5b55) octopus (stable), process ceph-mgr, pid 1
debug 2021-01-11T20:47:31.390+ 7f48b0d0d200  0 pidfile_write: ignore empty --pid-file
debug 2021-01-11T20:47:31.414+ 7f48b0d0d200  1 mgr[py] Loading python module 'alerts'
debug 2021-01-11T20:47:31.486+ 7f48b0d0d200  1 mgr[py] Loading python module 'balancer'
debug 2021-01-11T20:47:31.542+ 7f48b0d0d200  1 mgr[py] Loading python module 'cephadm'
debug 2021-01-11T20:47:31.742+ 7f48b0d0d200  1 mgr[py] Loading python module 'crash'
debug 2021-01-11T20:47:31.798+ 7f48b0d0d200  1 mgr[py] Loading python module 'dashboard'
debug 2021-01-11T20:47:32.258+ 7f48b0d0d200  1 mgr[py] Loading python module 'devicehealth'
debug 2021-01-11T20:47:32.306+ 7f48b0d0d200  1 mgr[py] Loading python module 'diskprediction_local'
debug 

[ceph-users] Re: Unable to use ceph command

2021-01-26 Thread Jens Hyllegaard (Soft Design A/S)
According to the management interface, everything is OK.
There are 3 monitors in quorum.
I am running this on Docker.
Perhaps I should have a look at the containers and see if their information is 
different from what is in /etc/ceph on the hosts.

Regards

Jens

-Original Message-
From: Eugen Block  
Sent: 26. januar 2021 15:43
To: ceph-users@ceph.io
Subject: [ceph-users] Re: Unable to use ceph command

Do you have mon containers running so they can form a quorum? Do your hosts 
still have a (at least) minimal ceph.conf?


Zitat von "Jens Hyllegaard (Soft Design A/S)" :

> Hi.
>
> I am not sure why this is not working, but I am now unable to use the 
> ceph command on any of my hosts.
>
> When I try to launch ceph, I get the following response:
> [errno 13] RADOS permission denied (error connecting to the cluster)
>
> The web management interface is working fine.
>
> I have a suspicion that this started after trying to recreate an nfs 
> cluster I first removed the existing one with: ceph nfs cluster delete 
>  And then tried to create it again with: ceph nfs cluster create 
> cephfs  The command seemed to hang, and after several hours I 
> ended the command with ctrl-c.
> Since then I have been unable to use the ceph command.
>
> This is fortunately a test environment, and it is running Octopus 
> 15.2.8
>
> Does anyone have an idea on how I can get access again?
>
> Regards
>
> Jens Hyllegaard
> ___
> ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an 
> email to ceph-users-le...@ceph.io


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Unable to use ceph command

2021-01-26 Thread Jens Hyllegaard (Soft Design A/S)
Hi.

I am not sure why this is not working, but I am now unable to use the ceph 
command on any of my hosts.

When I try to launch ceph, I get the following response:
[errno 13] RADOS permission denied (error connecting to the cluster)

The web management interface is working fine.

I have a suspicion that this started after trying to recreate an NFS cluster.
I first removed the existing one with: ceph nfs cluster delete 
And then tried to create it again with: ceph nfs cluster create cephfs 
The command seemed to hang, and after several hours I ended the command with 
Ctrl-C.
Since then I have been unable to use the ceph command.

This is fortunately a test environment, and it is running Octopus 15.2.8

Does anyone have an idea on how I can get access again?

Regards

Jens Hyllegaard
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Setting up NFS with Octopus

2020-12-21 Thread Jens Hyllegaard (Soft Design A/S)
Hi.

You are correct. There must be some hardcoded occurrences of nfs-ganesha.
I tried creating a new cluster using the ceph nfs cluster create command.
I was still unable to create an export using the management interface; I still 
got permission errors.
But I created the folder manually and did a chmod 777 on it. I then made the 
NFS export using the management interface and pointed it at the folder.
I am, however, unable to mount the NFS share when specifying only v3 on the 
export. I noticed you mentioned that NFSv3 is not supported?
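Roughly, the manual workaround can look like this (just a sketch; monitor address, secret and paths are placeholders):

```
mkdir -p /mnt/cephfs
mount -t ceph <mon_host>:/ /mnt/cephfs -o name=admin,secret=<admin_key>   # mount the CephFS root
mkdir /mnt/cephfs/objstore      # create the export path by hand
chmod 777 /mnt/cephfs/objstore
umount /mnt/cephfs
```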

Regards

Jens

-Original Message-
From: Eugen Block  
Sent: 21. december 2020 11:40
To: Jens Hyllegaard (Soft Design A/S) 
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Re: Setting up NFS with Octopus

Hi,

> I am still not sure if I need to create two different pools, one for 
> NFS daemon and one for the export?

the pool (and/or namespace) you specify in your nfs.yaml is for the ganesha 
config only (and should be created for you), it doesn't store nfs data since 
that is covered via cephfs (the backend). So the data
pool(s) of your cephfs are also storing your nfs data. The CephFS should 
already be present which seems true in your case.
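For illustration, an nfs.yaml of the kind I mean (just a sketch; the pool/namespace names are taken from your earlier commands, and the placement host is a placeholder):

```
service_type: nfs
service_id: objstore
placement:
  hosts:
  - ceph-storage-1
spec:
  pool: objpool
  namespace: nfs-ns
```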

I'm wondering if the name of your cephfs could be the reason for the failure, 
maybe it's hard-coded somewhere (wouldn't be the first time), but that's not 
more than an assumption. It could be worth a try, it shouldn't take too long to 
tear down the cephfs and recreate it. If you try that you should also tear down 
the ganesha pool just to clean up properly.

The detailed steps are basically covered in [1] (I believe you already 
referenced that in this thread, though). I only noticed that the 'ceph nfs 
export ls' commands don't seem to work, but that information is present in the 
dashboard.

Also keep in mind that NFSv3 is not supported when you create the share.

Regards,
Eugen

[1] https://docs.ceph.com/en/latest/cephfs/fs-nfs-exports/


Zitat von "Jens Hyllegaard (Soft Design A/S)" :

> This is the output from ceph status:
>   cluster:
> id: 9d7bc71a-3f88-11eb-bc58-b9cfbaed27d3
> health: HEALTH_WARN
> 1 pool(s) do not have an application enabled
>
>   services:
> mon: 3 daemons, quorum
> ceph-storage-1.softdesign.dk,ceph-storage-2,ceph-storage-3 (age 4d)
> mgr: ceph-storage-1.softdesign.dk.vsrdsm(active, since 4d),
> standbys: ceph-storage-3.jglzte
> mds: objstore:1 {0=objstore.ceph-storage-1.knaufh=up:active} 1 up:standby
> osd: 3 osds: 3 up (since 3d), 3 in (since 3d)
>
>   task status:
> scrub status:
> mds.objstore.ceph-storage-1.knaufh: idle
>
>   data:
> pools:   4 pools, 97 pgs
> objects: 31 objects, 25 KiB
> usage:   3.1 GiB used, 2.7 TiB / 2.7 TiB avail
> pgs: 97 active+clean
>
>   io:
> client:   170 B/s rd, 0 op/s rd, 0 op/s wr
>
> So everything seems to ok.
>
> I wonder if anyone could guide me from scratch on how to set up the NFS.
> I am still not sure if I need to create two different pools, one for 
> NFS daemon and one for the export?
>
> Regards
>
> Jens
>
> -Original Message-
> From: Eugen Block 
> Sent: 18. december 2020 16:30
> To: ceph-users@ceph.io
> Subject: [ceph-users] Re: Setting up NFS with Octopus
>
> What is the cluster status? The permissions seem correct, maybe the 
> OSDs have a problem?
>
>
> Zitat von "Jens Hyllegaard (Soft Design A/S)"  
> :
>
>> I have tried mounting the cephFS as two different users.
>> I tried creating a user obuser with:
>> fs authorize objstore client.objuser / rw
>>
>> And I tried mounting using the admin user.
>>
>> The mount works as expected, but neither user is able to create files 
>> or folders.
>> Unless I use sudo, then it works for both users.
>>
>> The client.objuser keyring is:
>>
>> client.objuser
>> key: AQCGodxfuuLxCBAAMjaSNM58JtkkUwO8UqGGYw==
>>     caps: [mds] allow rw
>> caps: [mon] allow r
>> caps: [osd] allow rw tag cephfs data=objstore
>>
>> Regards
>>
>> Jens
>>
>> -Original Message-
>> From: Eugen Block 
>> Sent: 18. december 2020 13:25
>> To: Jens Hyllegaard (Soft Design A/S) 
>> Cc: 'ceph-users@ceph.io' 
>> Subject: Re: [ceph-users] Re: Setting up NFS with Octopus
>>
>> Sorry, I was afk. Did you authorize a client against that new cephfs 
>> volume? I'm not sure because I did it slightly different and it's an 
>> upgraded cluster. But a permission denied sounds like no one is 
>> allowed to write into cephfs.
>>
>>
>> Zitat von "Jens Hyllegaard (Soft Design A/S)"
>> :
>>
>>> I found out how to get the in

[ceph-users] Re: Setting up NFS with Octopus

2020-12-21 Thread Jens Hyllegaard (Soft Design A/S)
This is the output from ceph status:
  cluster:
id: 9d7bc71a-3f88-11eb-bc58-b9cfbaed27d3
health: HEALTH_WARN
1 pool(s) do not have an application enabled

  services:
mon: 3 daemons, quorum 
ceph-storage-1.softdesign.dk,ceph-storage-2,ceph-storage-3 (age 4d)
mgr: ceph-storage-1.softdesign.dk.vsrdsm(active, since 4d), standbys: 
ceph-storage-3.jglzte
mds: objstore:1 {0=objstore.ceph-storage-1.knaufh=up:active} 1 up:standby
osd: 3 osds: 3 up (since 3d), 3 in (since 3d)

  task status:
scrub status:
mds.objstore.ceph-storage-1.knaufh: idle

  data:
pools:   4 pools, 97 pgs
objects: 31 objects, 25 KiB
usage:   3.1 GiB used, 2.7 TiB / 2.7 TiB avail
pgs: 97 active+clean

  io:
client:   170 B/s rd, 0 op/s rd, 0 op/s wr

So everything seems to be OK.

I wonder if anyone could guide me from scratch on how to set up NFS.
I am still not sure whether I need to create two different pools, one for the NFS 
daemon and one for the export.

Regards

Jens

-Original Message-
From: Eugen Block  
Sent: 18. december 2020 16:30
To: ceph-users@ceph.io
Subject: [ceph-users] Re: Setting up NFS with Octopus

What is the cluster status? The permissions seem correct, maybe the OSDs have a 
problem?


Zitat von "Jens Hyllegaard (Soft Design A/S)" :

> I have tried mounting the cephFS as two different users.
> I tried creating a user obuser with:
> fs authorize objstore client.objuser / rw
>
> And I tried mounting using the admin user.
>
> The mount works as expected, but neither user is able to create files 
> or folders.
> Unless I use sudo, then it works for both users.
>
> The client.objuser keyring is:
>
> client.objuser
> key: AQCGodxfuuLxCBAAMjaSNM58JtkkUwO8UqGGYw==
> caps: [mds] allow rw
> caps: [mon] allow r
> caps: [osd] allow rw tag cephfs data=objstore
>
> Regards
>
> Jens
>
> -Original Message-----
> From: Eugen Block 
> Sent: 18. december 2020 13:25
> To: Jens Hyllegaard (Soft Design A/S) 
> Cc: 'ceph-users@ceph.io' 
> Subject: Re: [ceph-users] Re: Setting up NFS with Octopus
>
> Sorry, I was afk. Did you authorize a client against that new cephfs 
> volume? I'm not sure because I did it slightly different and it's an 
> upgraded cluster. But a permission denied sounds like no one is 
> allowed to write into cephfs.
>
>
> Zitat von "Jens Hyllegaard (Soft Design A/S)"  
> :
>
>> I found out how to get the information:
>>
>> client.nfs.objstore.ceph-storage-3
>> key: AQBCRNtfsBY8IhAA4MFTghHMT4rq58AvAsPclw==
>> caps: [mon] allow r
>> caps: [osd] allow rw pool=objpool namespace=nfs-ns
>>
>> Regards
>>
>> Jens
>>
>> -Original Message-
>> From: Jens Hyllegaard (Soft Design A/S) 
>> 
>> Sent: 18. december 2020 12:10
>> To: 'Eugen Block' ; 'ceph-users@ceph.io'
>> 
>> Subject: [ceph-users] Re: Setting up NFS with Octopus
>>
>> I am sorry, but I am not sure how to do that? We have just started 
>> working with Ceph.
>>
>> -Original Message-
>> From: Eugen Block 
>> Sent: 18. december 2020 12:06
>> To: Jens Hyllegaard (Soft Design A/S) 
>> Subject: Re: [ceph-users] Re: Setting up NFS with Octopus
>>
>> Oh you're right, it worked for me, I just tried that with a new path 
>> and it was created for me.
>> Can you share the client keyrings? I have two nfs daemons running and 
>> they have these permissions:
>>
>> client.nfs.ses7-nfs.host2
>>  key: AQClNNJf5KHVERAAAzhpp9Mclh5wplrcE9VMkQ==
>>  caps: [mon] allow r
>>  caps: [osd] allow rw pool=nfs-test namespace=ganesha
>> client.nfs.ses7-nfs.host3
>>  key: AQCqNNJf4rlqBhAARGTMkwXAldeprSYgmPEmJg==
>>  caps: [mon] allow r
>>  caps: [osd] allow rw pool=nfs-test namespace=ganesha
>>
>>
>>
>> Zitat von "Jens Hyllegaard (Soft Design A/S)"
>> :
>>
>>> On the Create NFS export page it says the directory will be created.
>>>
>>> Regards
>>>
>>> Jens
>>>
>>>
>>> -Original Message-
>>> From: Eugen Block 
>>> Sent: 18. december 2020 11:52
>>> To: ceph-users@ceph.io
>>> Subject: [ceph-users] Re: Setting up NFS with Octopus
>>>
>>> Hi,
>>>
>>> is the path (/objstore) present within your CephFS? If not you need 
>>> to mount the CephFS root first and create your directory to have NFS 
>>> access it.
>>>
>>>
>>> Zitat von "Jens Hyllegaard (Soft Design A/S)"
>>> :

[ceph-users] Re: Setting up NFS with Octopus

2020-12-18 Thread Jens Hyllegaard (Soft Design A/S)
I have tried mounting the cephFS as two different users.
I tried creating a user obuser with:
fs authorize objstore client.objuser / rw

And I tried mounting using the admin user.

The mount works as expected, but neither user is able to create files or 
folders unless I use sudo; then it works for both users.

The client.objuser keyring is:

client.objuser
key: AQCGodxfuuLxCBAAMjaSNM58JtkkUwO8UqGGYw==
caps: [mds] allow rw
caps: [mon] allow r
caps: [osd] allow rw tag cephfs data=objstore

Regards

Jens

-Original Message-
From: Eugen Block  
Sent: 18. december 2020 13:25
To: Jens Hyllegaard (Soft Design A/S) 
Cc: 'ceph-users@ceph.io' 
Subject: Re: [ceph-users] Re: Setting up NFS with Octopus

Sorry, I was afk. Did you authorize a client against that new cephfs volume? 
I'm not sure because I did it slightly different and it's an upgraded cluster. 
But a permission denied sounds like no one is allowed to write into cephfs.


Zitat von "Jens Hyllegaard (Soft Design A/S)" :

> I found out how to get the information:
>
> client.nfs.objstore.ceph-storage-3
> key: AQBCRNtfsBY8IhAA4MFTghHMT4rq58AvAsPclw==
> caps: [mon] allow r
> caps: [osd] allow rw pool=objpool namespace=nfs-ns
>
> Regards
>
> Jens
>
> -Original Message-
> From: Jens Hyllegaard (Soft Design A/S) 
> 
> Sent: 18. december 2020 12:10
> To: 'Eugen Block' ; 'ceph-users@ceph.io' 
> 
> Subject: [ceph-users] Re: Setting up NFS with Octopus
>
> I am sorry, but I am not sure how to do that? We have just started 
> working with Ceph.
>
> -Original Message-----
> From: Eugen Block 
> Sent: 18. december 2020 12:06
> To: Jens Hyllegaard (Soft Design A/S) 
> Subject: Re: [ceph-users] Re: Setting up NFS with Octopus
>
> Oh you're right, it worked for me, I just tried that with a new path 
> and it was created for me.
> Can you share the client keyrings? I have two nfs daemons running and 
> they have these permissions:
>
> client.nfs.ses7-nfs.host2
>  key: AQClNNJf5KHVERAAAzhpp9Mclh5wplrcE9VMkQ==
>  caps: [mon] allow r
>  caps: [osd] allow rw pool=nfs-test namespace=ganesha
> client.nfs.ses7-nfs.host3
>  key: AQCqNNJf4rlqBhAARGTMkwXAldeprSYgmPEmJg==
>  caps: [mon] allow r
>  caps: [osd] allow rw pool=nfs-test namespace=ganesha
>
>
>
> Zitat von "Jens Hyllegaard (Soft Design A/S)"  
> :
>
>> On the Create NFS export page it says the directory will be created.
>>
>> Regards
>>
>> Jens
>>
>>
>> -Original Message-
>> From: Eugen Block 
>> Sent: 18. december 2020 11:52
>> To: ceph-users@ceph.io
>> Subject: [ceph-users] Re: Setting up NFS with Octopus
>>
>> Hi,
>>
>> is the path (/objstore) present within your CephFS? If not you need 
>> to mount the CephFS root first and create your directory to have NFS 
>> access it.
>>
>>
>> Zitat von "Jens Hyllegaard (Soft Design A/S)"
>> :
>>
>>> Hi.
>>>
>>> We are completely new to Ceph, and are exploring using it as an NFS 
>>> server at first and expand from there.
>>>
>>> However we have not been successful in getting a working solution.
>>>
>>> I have set up a test environment with 3 physical servers, each with 
>>> one OSD using the guide at:
>>> https://docs.ceph.com/en/latest/cephadm/install/
>>>
>>> I created a new replicated pool:
>>> ceph osd pool create objpool replicated
>>>
>>> And then I deployed the gateway:
>>> ceph orch apply nfs objstore objpool nfs-ns
>>>
>>> I then created a new CephFS volume:
>>> ceph fs volume create objstore
>>>
>>> So far so good 
>>>
>>> My problem is when I try to create the NFS export The settings are 
>>> as
>>> follows:
>>> Cluster: objstore
>>> Daemons: nfs.objstore
>>> Storage Backend: CephFS
>>> CephFS User ID: admin
>>> CephFS Name: objstore
>>> CephFS Path: /objstore
>>> NFS Protocol: NFSV3
>>> Access Type: RW
>>> Squash: all_squash
>>> Transport protocol: both UDP & TCP
>>> Client: Any client can access
>>>
>>> However when I click on Create NFS export, I get:
>>> Failed to create NFS 'objstore:/objstore'
>>>
>>> error in mkdirs /objstore: Permission denied [Errno 13]
>>>
>>> Has anyone got an idea as to why this is not working?
>>>
>>> If you need any further information, do not hesitate to say so.
>

[ceph-users] Re: Setting up NFS with Octopus

2020-12-18 Thread Jens Hyllegaard (Soft Design A/S)
I found out how to get the information:

client.nfs.objstore.ceph-storage-3
key: AQBCRNtfsBY8IhAA4MFTghHMT4rq58AvAsPclw==
caps: [mon] allow r
caps: [osd] allow rw pool=objpool namespace=nfs-ns

Regards

Jens

-Original Message-
From: Jens Hyllegaard (Soft Design A/S)  
Sent: 18. december 2020 12:10
To: 'Eugen Block' ; 'ceph-users@ceph.io' 
Subject: [ceph-users] Re: Setting up NFS with Octopus

I am sorry, but I am not sure how to do that? We have just started working with 
Ceph.

-Original Message-
From: Eugen Block 
Sent: 18. december 2020 12:06
To: Jens Hyllegaard (Soft Design A/S) 
Subject: Re: [ceph-users] Re: Setting up NFS with Octopus

Oh you're right, it worked for me, I just tried that with a new path and it was 
created for me.
Can you share the client keyrings? I have two nfs daemons running and they have 
these permissions:

client.nfs.ses7-nfs.host2
 key: AQClNNJf5KHVERAAAzhpp9Mclh5wplrcE9VMkQ==
 caps: [mon] allow r
 caps: [osd] allow rw pool=nfs-test namespace=ganesha
client.nfs.ses7-nfs.host3
 key: AQCqNNJf4rlqBhAARGTMkwXAldeprSYgmPEmJg==
 caps: [mon] allow r
 caps: [osd] allow rw pool=nfs-test namespace=ganesha



Zitat von "Jens Hyllegaard (Soft Design A/S)" :

> On the Create NFS export page it says the directory will be created.
>
> Regards
>
> Jens
>
>
> -Original Message-
> From: Eugen Block 
> Sent: 18. december 2020 11:52
> To: ceph-users@ceph.io
> Subject: [ceph-users] Re: Setting up NFS with Octopus
>
> Hi,
>
> is the path (/objstore) present within your CephFS? If not you need to 
> mount the CephFS root first and create your directory to have NFS 
> access it.
>
>
> Zitat von "Jens Hyllegaard (Soft Design A/S)"  
> :
>
>> Hi.
>>
>> We are completely new to Ceph, and are exploring using it as an NFS 
>> server at first and expand from there.
>>
>> However we have not been successful in getting a working solution.
>>
>> I have set up a test environment with 3 physical servers, each with 
>> one OSD using the guide at:
>> https://docs.ceph.com/en/latest/cephadm/install/
>>
>> I created a new replicated pool:
>> ceph osd pool create objpool replicated
>>
>> And then I deployed the gateway:
>> ceph orch apply nfs objstore objpool nfs-ns
>>
>> I then created a new CephFS volume:
>> ceph fs volume create objstore
>>
>> So far so good 
>>
>> My problem is when I try to create the NFS export The settings are as
>> follows:
>> Cluster: objstore
>> Daemons: nfs.objstore
>> Storage Backend: CephFS
>> CephFS User ID: admin
>> CephFS Name: objstore
>> CephFS Path: /objstore
>> NFS Protocol: NFSV3
>> Access Type: RW
>> Squash: all_squash
>> Transport protocol: both UDP & TCP
>> Client: Any client can access
>>
>> However when I click on Create NFS export, I get:
>> Failed to create NFS 'objstore:/objstore'
>>
>> error in mkdirs /objstore: Permission denied [Errno 13]
>>
>> Has anyone got an idea as to why this is not working?
>>
>> If you need any further information, do not hesitate to say so.
>>
>>
>> Best regards,
>>
>> Jens Hyllegaard
>> Senior consultant
>> Soft Design
>> Rosenkaeret 13 | DK-2860 Søborg | Denmark | +45 39 66 02 00 | 
>> softdesign.dk | synchronicer.com
>>
>>
>> ___
>> ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an 
>> email to ceph-users-le...@ceph.io
>
>
> ___
> ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an 
> email to ceph-users-le...@ceph.io



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Setting up NFS with Octopus

2020-12-18 Thread Jens Hyllegaard (Soft Design A/S)
I am sorry, but I am not sure how to do that. We have just started working with 
Ceph.

-Original Message-
From: Eugen Block  
Sent: 18. december 2020 12:06
To: Jens Hyllegaard (Soft Design A/S) 
Subject: Re: [ceph-users] Re: Setting up NFS with Octopus

Oh you're right, it worked for me, I just tried that with a new path and it was 
created for me.
Can you share the client keyrings? I have two nfs daemons running and they have 
these permissions:

client.nfs.ses7-nfs.host2
 key: AQClNNJf5KHVERAAAzhpp9Mclh5wplrcE9VMkQ==
 caps: [mon] allow r
 caps: [osd] allow rw pool=nfs-test namespace=ganesha
client.nfs.ses7-nfs.host3
 key: AQCqNNJf4rlqBhAARGTMkwXAldeprSYgmPEmJg==
 caps: [mon] allow r
 caps: [osd] allow rw pool=nfs-test namespace=ganesha



Zitat von "Jens Hyllegaard (Soft Design A/S)" :

> On the Create NFS export page it says the directory will be created.
>
> Regards
>
> Jens
>
>
> -Original Message-
> From: Eugen Block 
> Sent: 18. december 2020 11:52
> To: ceph-users@ceph.io
> Subject: [ceph-users] Re: Setting up NFS with Octopus
>
> Hi,
>
> is the path (/objstore) present within your CephFS? If not you need to 
> mount the CephFS root first and create your directory to have NFS 
> access it.
>
>
> Zitat von "Jens Hyllegaard (Soft Design A/S)"  
> :
>
>> Hi.
>>
>> We are completely new to Ceph, and are exploring using it as an NFS
>> server at first and expand from there.
>>
>> However we have not been successful in getting a working solution.
>>
>> I have set up a test environment with 3 physical servers, each with
>> one OSD using the guide at:
>> https://docs.ceph.com/en/latest/cephadm/install/
>>
>> I created a new replicated pool:
>> ceph osd pool create objpool replicated
>>
>> And then I deployed the gateway:
>> ceph orch apply nfs objstore objpool nfs-ns
>>
>> I then created a new CephFS volume:
>> ceph fs volume create objstore
>>
>> So far so good 
>>
>> My problem is when I try to create the NFS export The settings are as
>> follows:
>> Cluster: objstore
>> Daemons: nfs.objstore
>> Storage Backend: CephFS
>> CephFS User ID: admin
>> CephFS Name: objstore
>> CephFS Path: /objstore
>> NFS Protocol: NFSV3
>> Access Type: RW
>> Squash: all_squash
>> Transport protocol: both UDP & TCP
>> Client: Any client can access
>>
>> However when I click on Create NFS export, I get:
>> Failed to create NFS 'objstore:/objstore'
>>
>> error in mkdirs /objstore: Permission denied [Errno 13]
>>
>> Has anyone got an idea as to why this is not working?
>>
>> If you need any further information, do not hesitate to say so.
>>
>>
>> Best regards,
>>
>> Jens Hyllegaard
>> Senior consultant
>> Soft Design
>> Rosenkaeret 13 | DK-2860 Søborg | Denmark | +45 39 66 02 00 |
>> softdesign.dk | synchronicer.com
>>
>>
>> ___
>> ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an
>> email to ceph-users-le...@ceph.io
>
>
> ___
> ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an  
> email to ceph-users-le...@ceph.io



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Setting up NFS with Octopus

2020-12-18 Thread Jens Hyllegaard (Soft Design A/S)
Hi.

We are completely new to Ceph, and are exploring using it as an NFS server at 
first, expanding from there.

However, we have not been successful in getting a working solution.

I have set up a test environment with 3 physical servers, each with one OSD 
using the guide at: https://docs.ceph.com/en/latest/cephadm/install/

I created a new replicated pool:
ceph osd pool create objpool replicated

And then I deployed the gateway:
ceph orch apply nfs objstore objpool nfs-ns

I then created a new CephFS volume:
ceph fs volume create objstore

So far so good.

My problem is when I try to create the NFS export. The settings are as follows:
Cluster: objstore
Daemons: nfs.objstore
Storage Backend: CephFS
CephFS User ID: admin
CephFS Name: objstore
CephFS Path: /objstore
NFS Protocol: NFSV3
Access Type: RW
Squash: all_squash
Transport protocol: both UDP & TCP
Client: Any client can access

However when I click on Create NFS export, I get:
Failed to create NFS 'objstore:/objstore'

error in mkdirs /objstore: Permission denied [Errno 13]

Has anyone got an idea as to why this is not working?

If you need any further information, do not hesitate to say so.


Best regards,

Jens Hyllegaard
Senior consultant
Soft Design
Rosenkaeret 13 | DK-2860 Søborg | Denmark | +45 39 66 02 00 | 
softdesign.dk | synchronicer.com


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io