[ceph-users] Re: deploying Ceph using FQDN for MON / MDS Services

2023-04-21 Thread Konstantin Shalygin
Hi,

Just add a POSIX-compliant FS domain with fstype ceph.
This is the equivalent of mount -t ceph on the oVirt side.
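For illustration, a minimal sketch of how the New Domain dialog might be filled in, assuming the path and credentials quoted below in this thread (the field names are from the oVirt Admin Portal; the values are placeholders):

```
Domain Function:  Data
Storage Type:     POSIX compliant FS
Path:             :/volumes/xyz/conf/<subvolume-uuid>
VFS Type:         ceph
Mount Options:    rw,name=foo,secret=<client-key>
```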


k
Sent from my iPhone

> On 21 Apr 2023, at 05:24, Lokendra Rathour  wrote:
> 
> Hi Robert / Team,
> We are now also trying to integrate Ceph as a storage domain in oVirt 4.4.
> 
> 
> We want to create a storage domain of POSIX-compliant type for mounting a
> ceph-based infrastructure in oVirt.
> As stated, we are able to manually mount CephFS via the ceph-mon nodes using
> the following command on the oVirt deployment hosts:
> 
> sudo mount -t ceph :/volumes/xyz/conf/00593e1d-b674-4b00-a289-20bec06761c9
> /rhev/data-center/mnt/:_volumes_xyz_conf_00593e1d-b674-4b00-a289-20bec06761c9
> -o rw,name=foo,secret=AQABDzRkTar*Lnx6qX/VDA==
> 
> # mount on the node:
>
> [root@deployment-host mnt]# df -kh
> df: /run/user/0/gvfs: Transport endpoint is not connected
> Filesystem                         Size  Used Avail Use% Mounted on
> [abcd:abcd:abcd::51]:6789,[abcd:abcd:abcd::52]:6789,[abcd:abcd:abcd::53]:6789:/volumes/xyz/conf/00593e1d-b674-4b00-a289-20bec06761c9
>                                     19G     0   19G   0% /rhev/data-center/mnt/:_volumes_xyz_conf_00593e1d-b674-4b00-a289-20bec06761c9
> 
> 
> 
> Query:
> 1. Could anyone help us out with storage domain creation in oVirt? We need
> to ensure that the domain stays up and connected even if the active monitor
> fails.
> 
>> On Tue, Apr 18, 2023 at 2:41 PM Lokendra Rathour 
>> wrote:
>> 
>> Yes, thanks, Robert.
>> After installing ceph-common, the mount is working fine.
>> 
>> 
>> On Tue, Apr 18, 2023 at 2:10 PM Robert Sander <
>> r.san...@heinlein-support.de> wrote:
>> 
 On 18.04.23 06:12, Lokendra Rathour wrote:
>>> 
 but if I try mounting from a normal Linux machine that has connectivity
 to the Ceph MON nodes, it gives the error stated before.
>>> 
>>> Have you installed ceph-common on the "normal Linux machine"?
>>> 
>>> Regards
>>> --
>>> Robert Sander
>>> Heinlein Support GmbH
>>> Linux: Akademie - Support - Hosting
>>> http://www.heinlein-support.de
>>> 
>>> Tel: 030-405051-43
>>> Fax: 030-405051-19
>>> 
>>> Zwangsangaben lt. §35a GmbHG:
>>> HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
>>> Geschäftsführer: Peer Heinlein  -- Sitz: Berlin
>>> ___
>>> ceph-users mailing list -- ceph-users@ceph.io
>>> To unsubscribe send an email to ceph-users-le...@ceph.io
>>> 
>> 
>> 
>> --
>> ~ Lokendra
>> skype: lokendrarathour
>> 
>> 
>> 
> 
> -- 
> ~ Lokendra
> skype: lokendrarathour
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: deploying Ceph using FQDN for MON / MDS Services

2023-04-20 Thread Lokendra Rathour
Hi Robert / Team,
We are now also trying to integrate Ceph as a storage domain in oVirt 4.4.


We want to create a storage domain of POSIX-compliant type for mounting a
ceph-based infrastructure in oVirt.
As stated, we are able to manually mount CephFS via the ceph-mon nodes using
the following command on the oVirt deployment hosts:

sudo mount -t ceph :/volumes/xyz/conf/00593e1d-b674-4b00-a289-20bec06761c9
/rhev/data-center/mnt/:_volumes_xyz_conf_00593e1d-b674-4b00-a289-20bec06761c9
-o rw,name=foo,secret=AQABDzRkTar*Lnx6qX/VDA==

# mount on the node:

[root@deployment-host mnt]# df -kh
df: /run/user/0/gvfs: Transport endpoint is not connected
Filesystem                         Size  Used Avail Use% Mounted on
[abcd:abcd:abcd::51]:6789,[abcd:abcd:abcd::52]:6789,[abcd:abcd:abcd::53]:6789:/volumes/xyz/conf/00593e1d-b674-4b00-a289-20bec06761c9
                                    19G     0   19G   0% /rhev/data-center/mnt/:_volumes_xyz_conf_00593e1d-b674-4b00-a289-20bec06761c9



Query:
1. Could anyone help us out with storage domain creation in oVirt? We need
to ensure that the domain stays up and connected even if the active monitor
fails. A sketch of one possible approach is shown below.
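Not a definitive answer, but one approach worth trying: put all three monitors in the mount source (or the domain's Path field) so the kernel client can fail over if one of them goes down. A sketch using the placeholder addresses and key from above:

```
sudo mount -t ceph \
  [abcd:abcd:abcd::51]:6789,[abcd:abcd:abcd::52]:6789,[abcd:abcd:abcd::53]:6789:/volumes/xyz/conf/00593e1d-b674-4b00-a289-20bec06761c9 \
  /rhev/data-center/mnt/<mount-dir> -o rw,name=foo,secret=<client-key>
```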

On Tue, Apr 18, 2023 at 2:41 PM Lokendra Rathour 
wrote:

> Yes, thanks, Robert.
> After installing ceph-common, the mount is working fine.
>
>
> On Tue, Apr 18, 2023 at 2:10 PM Robert Sander <
> r.san...@heinlein-support.de> wrote:
>
>> On 18.04.23 06:12, Lokendra Rathour wrote:
>>
>> > but if I try mounting from a normal Linux machine that has connectivity
>> > to the Ceph MON nodes, it gives the error stated before.
>>
>> Have you installed ceph-common on the "normal Linux machine"?
>>
>> Regards
>> --
>> Robert Sander
>> Heinlein Support GmbH
>> Linux: Akademie - Support - Hosting
>> http://www.heinlein-support.de
>>
>> Tel: 030-405051-43
>> Fax: 030-405051-19
>>
>> Zwangsangaben lt. §35a GmbHG:
>> HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
>> Geschäftsführer: Peer Heinlein  -- Sitz: Berlin
>> ___
>> ceph-users mailing list -- ceph-users@ceph.io
>> To unsubscribe send an email to ceph-users-le...@ceph.io
>>
>
>
> --
> ~ Lokendra
> skype: lokendrarathour
>
>
>

-- 
~ Lokendra
skype: lokendrarathour
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: deploying Ceph using FQDN for MON / MDS Services

2023-04-18 Thread Lokendra Rathour
Yes, thanks, Robert.
After installing ceph-common, the mount is working fine.


On Tue, Apr 18, 2023 at 2:10 PM Robert Sander 
wrote:

> On 18.04.23 06:12, Lokendra Rathour wrote:
>
> > but if I try mounting from a normal Linux machine that has connectivity
> > to the Ceph MON nodes, it gives the error stated before.
>
> Have you installed ceph-common on the "normal Linux machine"?
>
> Regards
> --
> Robert Sander
> Heinlein Support GmbH
> Linux: Akademie - Support - Hosting
> http://www.heinlein-support.de
>
> Tel: 030-405051-43
> Fax: 030-405051-19
>
> Zwangsangaben lt. §35a GmbHG:
> HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
> Geschäftsführer: Peer Heinlein  -- Sitz: Berlin
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>


-- 
~ Lokendra
skype: lokendrarathour
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: deploying Ceph using FQDN for MON / MDS Services

2023-04-18 Thread Robert Sander

On 18.04.23 06:12, Lokendra Rathour wrote:

but if I try mounting from a normal Linux machine that has connectivity
to the Ceph MON nodes, it gives the error stated before.


Have you installed ceph-common on the "normal Linux machine"?
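For reference, a quick way to check and install it (the package is named ceph-common on both RPM- and DEB-based distributions):

```
# is the mount helper present?
command -v mount.ceph
# if not, install it
sudo dnf install ceph-common   # RHEL/Fedora family
sudo apt install ceph-common   # Debian/Ubuntu family
```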

Regards
--
Robert Sander
Heinlein Support GmbH
Linux: Akademie - Support - Hosting
http://www.heinlein-support.de

Tel: 030-405051-43
Fax: 030-405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein  -- Sitz: Berlin
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: deploying Ceph using FQDN for MON / MDS Services

2023-04-17 Thread Lokendra Rathour
Yes, I did.
The command below works fine on the node where ceph-common is installed:

# sudo mount -t ceph
:/volumes/hns/conf/2ee9c2d0-873b-4d04-8c46-4c0da02787b8 /mnt/imgs  -o
name=foo,secret=AQABDzRkTaJCEhAAC7rC6E68ofwULnx6qX/VDA== -v

parsing options: rw,name=foo,secret=AQABDzRkTaJCEhAAC7rC6E68ofwULnx6qX/VDA==
mount.ceph: options "name=foo".
invalid new device string format
Could not discover monitor addresses
mount.ceph: switching to using v1 address
mount.ceph: resolved to:
"[abcd:abcd:abcd::51]:6789,[abcd:abcd:abcd::52]:6789,[abcd:abcd:abcd::53]:6789"
mount.ceph: trying mount with old device syntax:
[abcd:abcd:abcd::51]:6789,[abcd:abcd:abcd::52]:6789,[abcd:abcd:abcd::53]:6789:/volumes/hns/conf/2ee9c2d0-873b-4d04-8c46-4c0da02787b8
mount.ceph: options
"name=foo,key=foo,fsid=1cc50e1a-8069-493d-af66-99e2abcb6a19" will pass to
kernel
[root@ceph-node-client almalinux]#

but if I try mounting from a normal Linux machine that has connectivity to
the Ceph MON nodes, it gives the error stated before.



On Mon, Apr 17, 2023 at 3:34 PM Robert Sander 
wrote:

> On 14.04.23 12:17, Lokendra Rathour wrote:
>
> > mount: /mnt/image: mount point does not exist.
>
> Have you created the mount point?
>
> Regards
> --
> Robert Sander
> Heinlein Consulting GmbH
> Schwedter Str. 8/9b, 10119 Berlin
>
> http://www.heinlein-support.de
>
> Tel: 030 / 405051-43
> Fax: 030 / 405051-19
>
> Zwangsangaben lt. §35a GmbHG:
> HRB 220009 B / Amtsgericht Berlin-Charlottenburg,
> Geschäftsführer: Peer Heinlein -- Sitz: Berlin
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>


-- 
~ Lokendra
skype: lokendrarathour
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: deploying Ceph using FQDN for MON / MDS Services

2023-04-17 Thread Robert Sander

On 14.04.23 12:17, Lokendra Rathour wrote:


mount: /mnt/image: mount point does not exist.


Have you created the mount point?

Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 220009 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: deploying Ceph using FQDN for MON / MDS Services

2023-04-17 Thread Konstantin Shalygin
Hi,

This is because of DNS: something in userland has to resolve the names and
provide IP addresses to the kernel.
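For example, a quick way to see what that userland step involves (hostnames taken from this thread; assumes dig is available on the client):

```
# The kernel's libceph cannot query DNS itself; the mount.ceph helper from
# ceph-common resolves host names and SRV records and hands plain IP
# addresses down to the kernel.
dig +short storagenode1.storage.com AAAA
dig +short _ceph-mon._tcp.storage.com SRV
```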


k
Sent from my iPhone

> On 17 Apr 2023, at 05:56, Lokendra Rathour  wrote:
> 
> Hi Team,
> The mount at the client side should be independent of Ceph, but in this
> case of a DNS SRV-based mount we see that the ceph-common utilities are
> needed.
> What can be the reason for this? Any inputs in this direction would be
> helpful.
> 
> Best Regards,
> Lokendra
> 
> 
>> On Sun, Apr 16, 2023 at 10:11 AM Lokendra Rathour 
>> wrote:
>> 
>> Hi .
>> Any input will be of great help.
>> Thanks once again.
>> Lokendra
>> 
>> On Fri, 14 Apr, 2023, 3:47 pm Lokendra Rathour, 
>> wrote:
>> 
>>> Hi Team,
>>> there is one additional observation.
>>> Mounting as a client works fine from one of the Ceph nodes.
>>> Command: sudo mount -t ceph :/ /mnt/imgs  -o
>>> name=foo,secret=AQABDzRkTaJCEhAAC7rC6E68ofwdfULnx6qX/VDA==
>>>
>>> We are not passing the monitor address; instead, DNS SRV is configured
>>> as per:
>>> https://docs.ceph.com/en/quincy/rados/configuration/mon-lookup-dns/
>>>
>>> The mount works fine in this case.
>>>
>>> But if we try to mount from another location, i.e. from another
>>> VM/client (non-Ceph node), we get the error:
>>>  mount -t  ceph :/ /mnt/imgs  -o
>>> name=foo,secret=AQABDzRkTaJCEhAAC7rC6E68ofwULnx6qX/VDA== -v
>>> mount: /mnt/image: mount point does not exist.
>>>
>>> The document says that if we do not pass the monitor address, it tries
>>> discovering the monitor address from the DNS server, but in practice this
>>> is not happening.
>>> 
>>> 
>>> 
>>> On Tue, Apr 11, 2023 at 6:48 PM Lokendra Rathour <
>>> lokendrarath...@gmail.com> wrote:
>>> 
 Ceph version is Quincy.

 But now I am able to resolve the issue.

 During mount I will not pass any monitor details; they will be
 auto-discovered via SRV.
 
 On Tue, Apr 11, 2023 at 6:09 PM Eugen Block  wrote:
 
> What ceph version is this? Could it be this bug [1]? Although the
> error message is different, not sure if it could be the same issue,
> and I don't have anything to test ipv6 with.
> 
> [1] https://tracker.ceph.com/issues/47300
> 
> Zitat von Lokendra Rathour :
> 
>> Hi All,
>> Requesting any inputs around the issue raised.
>> 
>> Best Regards,
>> Lokendra
>> 
>> On Tue, 24 Jan, 2023, 7:32 pm Lokendra Rathour, <
> lokendrarath...@gmail.com>
>> wrote:
>> 
>>> Hi Team,
>>> 
>>> 
>>> 
>>> We have a ceph cluster with 3 storage nodes:
>>> 
>>> 1. storagenode1 - abcd:abcd:abcd::21
>>> 
>>> 2. storagenode2 - abcd:abcd:abcd::22
>>> 
>>> 3. storagenode3 - abcd:abcd:abcd::23
>>> 
>>> 
>>> 
>>> The requirement is to mount ceph using the domain name of MON node:
>>> 
>>> Note: we resolved the domain name via DNS server.
>>> 
>>> 
>>> For this we are using the command:
>>> 
>>> ```
>>> 
>>> mount -t ceph [storagenode.storage.com]:6789:/  /backup -o
>>> name=admin,secret=AQCM+8hjqzuZEhAAcuQc+onNKReq7MV+ykFirg==
>>> 
>>> ```
>>> 
>>> 
>>> 
>>> We are getting the following logs in /var/log/messages:
>>> 
>>> ```
>>> 
>>> Jan 24 17:23:17 localhost kernel: libceph: resolve '
>>> storagenode.storage.com' (ret=-3): failed
>>> 
>>> Jan 24 17:23:17 localhost kernel: libceph: parse_ips bad ip '
>>> storagenode.storage.com:6789'
>>> 
>>> ```
>>> 
>>> 
>>> 
>>> We also tried mounting ceph storage using IP of MON which is working
> fine.
>>> 
>>> 
>>> 
>>> Query:
>>> 
>>> 
>>> Could you please help us out with how we can mount ceph using FQDN.
>>> 
>>> 
>>> 
>>> My /etc/ceph/ceph.conf is as follows:
>>> 
>>> [global]
>>> 
>>> ms bind ipv6 = true
>>> 
>>> ms bind ipv4 = false
>>> 
>>> mon initial members = storagenode1,storagenode2,storagenode3
>>> 
>>> osd pool default crush rule = -1
>>> 
>>> fsid = 7969b8a3-1df7-4eae-8ccf-2e5794de87fe
>>> 
>>> mon host =
>>> 
> [v2:[abcd:abcd:abcd::21]:3300,v1:[abcd:abcd:abcd::21]:6789],[v2:[abcd:abcd:abcd::22]:3300,v1:[abcd:abcd:abcd::22]:6789],[v2:[abcd:abcd:abcd::23]:3300,v1:[abcd:abcd:abcd::23]:6789]
>>> 
>>> public network = abcd:abcd:abcd::/64
>>> 
>>> cluster network = eff0:eff0:eff0::/64
>>> 
>>> 
>>> 
>>> [osd]
>>> 
>>> osd memory target = 4294967296
>>> 
>>> 
>>> 
>>> [client.rgw.storagenode1.rgw0]
>>> 
>>> host = storagenode1
>>> 
>>> keyring = /var/lib/ceph/radosgw/ceph-rgw.storagenode1.rgw0/keyring
>>> 
>>> log file = /var/log/ceph/ceph-rgw-storagenode1.rgw0.log
>>> 
>>> rgw frontends = beast endpoint=[abcd:abcd:abcd::21]:8080
>>> 
>>> rgw thread pool size = 512
>>> 
>>> --
>>> ~ 

[ceph-users] Re: deploying Ceph using FQDN for MON / MDS Services

2023-04-16 Thread Lokendra Rathour
Hi Team,
The mount at the client side should be independent of Ceph, but in this
case of a DNS SRV-based mount we see that the ceph-common utilities are
needed.
What can be the reason for this? Any inputs in this direction would be
helpful.

Best Regards,
Lokendra


On Sun, Apr 16, 2023 at 10:11 AM Lokendra Rathour 
wrote:

> Hi .
> Any input will be of great help.
> Thanks once again.
> Lokendra
>
> On Fri, 14 Apr, 2023, 3:47 pm Lokendra Rathour, 
> wrote:
>
>> Hi Team,
>> there is one additional observation.
>> Mounting as a client works fine from one of the Ceph nodes.
>> Command: sudo mount -t ceph :/ /mnt/imgs  -o
>> name=foo,secret=AQABDzRkTaJCEhAAC7rC6E68ofwdfULnx6qX/VDA==
>>
>> We are not passing the monitor address; instead, DNS SRV is configured
>> as per:
>> https://docs.ceph.com/en/quincy/rados/configuration/mon-lookup-dns/
>>
>> The mount works fine in this case.
>>
>> But if we try to mount from another location, i.e. from another
>> VM/client (non-Ceph node), we get the error:
>>   mount -t  ceph :/ /mnt/imgs  -o
>> name=foo,secret=AQABDzRkTaJCEhAAC7rC6E68ofwULnx6qX/VDA== -v
>> mount: /mnt/image: mount point does not exist.
>>
>> The document says that if we do not pass the monitor address, it tries
>> discovering the monitor address from the DNS server, but in practice this
>> is not happening.
>>
>>
>>
>> On Tue, Apr 11, 2023 at 6:48 PM Lokendra Rathour <
>> lokendrarath...@gmail.com> wrote:
>>
>>> Ceph version is Quincy.
>>>
>>> But now I am able to resolve the issue.
>>>
>>> During mount I will not pass any monitor details; they will be
>>> auto-discovered via SRV.
>>>
>>> On Tue, Apr 11, 2023 at 6:09 PM Eugen Block  wrote:
>>>
 What ceph version is this? Could it be this bug [1]? Although the
 error message is different, not sure if it could be the same issue,
 and I don't have anything to test ipv6 with.

 [1] https://tracker.ceph.com/issues/47300

 Zitat von Lokendra Rathour :

 > Hi All,
 > Requesting any inputs around the issue raised.
 >
 > Best Regards,
 > Lokendra
 >
 > On Tue, 24 Jan, 2023, 7:32 pm Lokendra Rathour, <
 lokendrarath...@gmail.com>
 > wrote:
 >
 >> Hi Team,
 >>
 >>
 >>
 >> We have a ceph cluster with 3 storage nodes:
 >>
 >> 1. storagenode1 - abcd:abcd:abcd::21
 >>
 >> 2. storagenode2 - abcd:abcd:abcd::22
 >>
 >> 3. storagenode3 - abcd:abcd:abcd::23
 >>
 >>
 >>
 >> The requirement is to mount ceph using the domain name of MON node:
 >>
 >> Note: we resolved the domain name via DNS server.
 >>
 >>
 >> For this we are using the command:
 >>
 >> ```
 >>
 >> mount -t ceph [storagenode.storage.com]:6789:/  /backup -o
 >> name=admin,secret=AQCM+8hjqzuZEhAAcuQc+onNKReq7MV+ykFirg==
 >>
 >> ```
 >>
 >>
 >>
 >> We are getting the following logs in /var/log/messages:
 >>
 >> ```
 >>
 >> Jan 24 17:23:17 localhost kernel: libceph: resolve '
 >> storagenode.storage.com' (ret=-3): failed
 >>
 >> Jan 24 17:23:17 localhost kernel: libceph: parse_ips bad ip '
 >> storagenode.storage.com:6789'
 >>
 >> ```
 >>
 >>
 >>
 >> We also tried mounting ceph storage using IP of MON which is working
 fine.
 >>
 >>
 >>
 >> Query:
 >>
 >>
 >> Could you please help us out with how we can mount ceph using FQDN.
 >>
 >>
 >>
 >> My /etc/ceph/ceph.conf is as follows:
 >>
 >> [global]
 >>
 >> ms bind ipv6 = true
 >>
 >> ms bind ipv4 = false
 >>
 >> mon initial members = storagenode1,storagenode2,storagenode3
 >>
 >> osd pool default crush rule = -1
 >>
 >> fsid = 7969b8a3-1df7-4eae-8ccf-2e5794de87fe
 >>
 >> mon host =
 >>
 [v2:[abcd:abcd:abcd::21]:3300,v1:[abcd:abcd:abcd::21]:6789],[v2:[abcd:abcd:abcd::22]:3300,v1:[abcd:abcd:abcd::22]:6789],[v2:[abcd:abcd:abcd::23]:3300,v1:[abcd:abcd:abcd::23]:6789]
 >>
 >> public network = abcd:abcd:abcd::/64
 >>
 >> cluster network = eff0:eff0:eff0::/64
 >>
 >>
 >>
 >> [osd]
 >>
 >> osd memory target = 4294967296
 >>
 >>
 >>
 >> [client.rgw.storagenode1.rgw0]
 >>
 >> host = storagenode1
 >>
 >> keyring = /var/lib/ceph/radosgw/ceph-rgw.storagenode1.rgw0/keyring
 >>
 >> log file = /var/log/ceph/ceph-rgw-storagenode1.rgw0.log
 >>
 >> rgw frontends = beast endpoint=[abcd:abcd:abcd::21]:8080
 >>
 >> rgw thread pool size = 512
 >>
 >> --
 >> ~ Lokendra
 >> skype: lokendrarathour
 >>
 >>
 >>
 > ___
 > ceph-users mailing list -- ceph-users@ceph.io
 > To unsubscribe send an email to ceph-users-le...@ceph.io

 ___
 ceph-users mailing list -- ceph-users@ceph.io
 

[ceph-users] Re: deploying Ceph using FQDN for MON / MDS Services

2023-04-15 Thread Lokendra Rathour
Hi,
Any input will be of great help.
Thanks once again.
Lokendra

On Fri, 14 Apr, 2023, 3:47 pm Lokendra Rathour, 
wrote:

> Hi Team,
> there is one additional observation.
> Mounting as a client works fine from one of the Ceph nodes.
> Command: sudo mount -t ceph :/ /mnt/imgs  -o
> name=foo,secret=AQABDzRkTaJCEhAAC7rC6E68ofwdfULnx6qX/VDA==
>
> We are not passing the monitor address; instead, DNS SRV is configured as
> per:
> https://docs.ceph.com/en/quincy/rados/configuration/mon-lookup-dns/
>
> The mount works fine in this case.
>
> But if we try to mount from another location, i.e. from another
> VM/client (non-Ceph node), we get the error:
>   mount -t  ceph :/ /mnt/imgs  -o
> name=foo,secret=AQABDzRkTaJCEhAAC7rC6E68ofwULnx6qX/VDA== -v
> mount: /mnt/image: mount point does not exist.
>
> The document says that if we do not pass the monitor address, it tries
> discovering the monitor address from the DNS server, but in practice this
> is not happening.
>
>
>
> On Tue, Apr 11, 2023 at 6:48 PM Lokendra Rathour <
> lokendrarath...@gmail.com> wrote:
>
>> Ceph version is Quincy.
>>
>> But now I am able to resolve the issue.
>>
>> During mount I will not pass any monitor details; they will be
>> auto-discovered via SRV.
>>
>> On Tue, Apr 11, 2023 at 6:09 PM Eugen Block  wrote:
>>
>>> What ceph version is this? Could it be this bug [1]? Although the
>>> error message is different, not sure if it could be the same issue,
>>> and I don't have anything to test ipv6 with.
>>>
>>> [1] https://tracker.ceph.com/issues/47300
>>>
>>> Zitat von Lokendra Rathour :
>>>
>>> > Hi All,
>>> > Requesting any inputs around the issue raised.
>>> >
>>> > Best Regards,
>>> > Lokendra
>>> >
>>> > On Tue, 24 Jan, 2023, 7:32 pm Lokendra Rathour, <
>>> lokendrarath...@gmail.com>
>>> > wrote:
>>> >
>>> >> Hi Team,
>>> >>
>>> >>
>>> >>
>>> >> We have a ceph cluster with 3 storage nodes:
>>> >>
>>> >> 1. storagenode1 - abcd:abcd:abcd::21
>>> >>
>>> >> 2. storagenode2 - abcd:abcd:abcd::22
>>> >>
>>> >> 3. storagenode3 - abcd:abcd:abcd::23
>>> >>
>>> >>
>>> >>
>>> >> The requirement is to mount ceph using the domain name of MON node:
>>> >>
>>> >> Note: we resolved the domain name via DNS server.
>>> >>
>>> >>
>>> >> For this we are using the command:
>>> >>
>>> >> ```
>>> >>
>>> >> mount -t ceph [storagenode.storage.com]:6789:/  /backup -o
>>> >> name=admin,secret=AQCM+8hjqzuZEhAAcuQc+onNKReq7MV+ykFirg==
>>> >>
>>> >> ```
>>> >>
>>> >>
>>> >>
>>> >> We are getting the following logs in /var/log/messages:
>>> >>
>>> >> ```
>>> >>
>>> >> Jan 24 17:23:17 localhost kernel: libceph: resolve '
>>> >> storagenode.storage.com' (ret=-3): failed
>>> >>
>>> >> Jan 24 17:23:17 localhost kernel: libceph: parse_ips bad ip '
>>> >> storagenode.storage.com:6789'
>>> >>
>>> >> ```
>>> >>
>>> >>
>>> >>
>>> >> We also tried mounting ceph storage using IP of MON which is working
>>> fine.
>>> >>
>>> >>
>>> >>
>>> >> Query:
>>> >>
>>> >>
>>> >> Could you please help us out with how we can mount ceph using FQDN.
>>> >>
>>> >>
>>> >>
>>> >> My /etc/ceph/ceph.conf is as follows:
>>> >>
>>> >> [global]
>>> >>
>>> >> ms bind ipv6 = true
>>> >>
>>> >> ms bind ipv4 = false
>>> >>
>>> >> mon initial members = storagenode1,storagenode2,storagenode3
>>> >>
>>> >> osd pool default crush rule = -1
>>> >>
>>> >> fsid = 7969b8a3-1df7-4eae-8ccf-2e5794de87fe
>>> >>
>>> >> mon host =
>>> >>
>>> [v2:[abcd:abcd:abcd::21]:3300,v1:[abcd:abcd:abcd::21]:6789],[v2:[abcd:abcd:abcd::22]:3300,v1:[abcd:abcd:abcd::22]:6789],[v2:[abcd:abcd:abcd::23]:3300,v1:[abcd:abcd:abcd::23]:6789]
>>> >>
>>> >> public network = abcd:abcd:abcd::/64
>>> >>
>>> >> cluster network = eff0:eff0:eff0::/64
>>> >>
>>> >>
>>> >>
>>> >> [osd]
>>> >>
>>> >> osd memory target = 4294967296
>>> >>
>>> >>
>>> >>
>>> >> [client.rgw.storagenode1.rgw0]
>>> >>
>>> >> host = storagenode1
>>> >>
>>> >> keyring = /var/lib/ceph/radosgw/ceph-rgw.storagenode1.rgw0/keyring
>>> >>
>>> >> log file = /var/log/ceph/ceph-rgw-storagenode1.rgw0.log
>>> >>
>>> >> rgw frontends = beast endpoint=[abcd:abcd:abcd::21]:8080
>>> >>
>>> >> rgw thread pool size = 512
>>> >>
>>> >> --
>>> >> ~ Lokendra
>>> >> skype: lokendrarathour
>>> >>
>>> >>
>>> >>
>>> > ___
>>> > ceph-users mailing list -- ceph-users@ceph.io
>>> > To unsubscribe send an email to ceph-users-le...@ceph.io
>>>
>>> ___
>>> ceph-users mailing list -- ceph-users@ceph.io
>>> To unsubscribe send an email to ceph-users-le...@ceph.io
>>>
>>
>>
>> --
>> ~ Lokendra
>> skype: lokendrarathour
>>
>>
>>
>
> --
> ~ Lokendra
> skype: lokendrarathour
>
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: deploying Ceph using FQDN for MON / MDS Services

2023-04-14 Thread Lokendra Rathour
Hi Team,
There is one additional observation.
Mounting as a client works fine from one of the Ceph nodes.
Command: sudo mount -t ceph :/ /mnt/imgs  -o
name=foo,secret=AQABDzRkTaJCEhAAC7rC6E68ofwdfULnx6qX/VDA==

We are not passing the monitor address; instead, DNS SRV is configured as
per:
https://docs.ceph.com/en/quincy/rados/configuration/mon-lookup-dns/

The mount works fine in this case.

But if we try to mount from another location, i.e. from another
VM/client (non-Ceph node), we get the error:
  mount -t  ceph :/ /mnt/imgs  -o
name=foo,secret=AQABDzRkTaJCEhAAC7rC6E68ofwULnx6qX/VDA== -v
mount: /mnt/image: mount point does not exist.

The document says that if we do not pass the monitor address, it tries
discovering the monitor address from the DNS server, but in practice this is
not happening.
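A hedged checklist for the failing client, based on what later turned out in this thread (a missing ceph-common, a missing mount point) plus one assumption about the resolver:

```
# 1. the mount.ceph helper must be installed; the kernel cannot do the
#    SRV discovery on its own
command -v mount.ceph || sudo dnf install ceph-common
# 2. assumption: the search domain must cover the zone holding the SRV records
grep ^search /etc/resolv.conf
dig +short _ceph-mon._tcp.storage.com SRV
# 3. and the mount point itself must exist (the error above says it does not)
mkdir -p /mnt/imgs
```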



On Tue, Apr 11, 2023 at 6:48 PM Lokendra Rathour 
wrote:

> Ceph version is Quincy.
>
> But now I am able to resolve the issue.
>
> During mount I will not pass any monitor details; they will be
> auto-discovered via SRV.
>
> On Tue, Apr 11, 2023 at 6:09 PM Eugen Block  wrote:
>
>> What ceph version is this? Could it be this bug [1]? Although the
>> error message is different, not sure if it could be the same issue,
>> and I don't have anything to test ipv6 with.
>>
>> [1] https://tracker.ceph.com/issues/47300
>>
>> Zitat von Lokendra Rathour :
>>
>> > Hi All,
>> > Requesting any inputs around the issue raised.
>> >
>> > Best Regards,
>> > Lokendra
>> >
>> > On Tue, 24 Jan, 2023, 7:32 pm Lokendra Rathour, <
>> lokendrarath...@gmail.com>
>> > wrote:
>> >
>> >> Hi Team,
>> >>
>> >>
>> >>
>> >> We have a ceph cluster with 3 storage nodes:
>> >>
>> >> 1. storagenode1 - abcd:abcd:abcd::21
>> >>
>> >> 2. storagenode2 - abcd:abcd:abcd::22
>> >>
>> >> 3. storagenode3 - abcd:abcd:abcd::23
>> >>
>> >>
>> >>
>> >> The requirement is to mount ceph using the domain name of MON node:
>> >>
>> >> Note: we resolved the domain name via DNS server.
>> >>
>> >>
>> >> For this we are using the command:
>> >>
>> >> ```
>> >>
>> >> mount -t ceph [storagenode.storage.com]:6789:/  /backup -o
>> >> name=admin,secret=AQCM+8hjqzuZEhAAcuQc+onNKReq7MV+ykFirg==
>> >>
>> >> ```
>> >>
>> >>
>> >>
>> >> We are getting the following logs in /var/log/messages:
>> >>
>> >> ```
>> >>
>> >> Jan 24 17:23:17 localhost kernel: libceph: resolve '
>> >> storagenode.storage.com' (ret=-3): failed
>> >>
>> >> Jan 24 17:23:17 localhost kernel: libceph: parse_ips bad ip '
>> >> storagenode.storage.com:6789'
>> >>
>> >> ```
>> >>
>> >>
>> >>
>> >> We also tried mounting ceph storage using IP of MON which is working
>> fine.
>> >>
>> >>
>> >>
>> >> Query:
>> >>
>> >>
>> >> Could you please help us out with how we can mount ceph using FQDN.
>> >>
>> >>
>> >>
>> >> My /etc/ceph/ceph.conf is as follows:
>> >>
>> >> [global]
>> >>
>> >> ms bind ipv6 = true
>> >>
>> >> ms bind ipv4 = false
>> >>
>> >> mon initial members = storagenode1,storagenode2,storagenode3
>> >>
>> >> osd pool default crush rule = -1
>> >>
>> >> fsid = 7969b8a3-1df7-4eae-8ccf-2e5794de87fe
>> >>
>> >> mon host =
>> >>
>> [v2:[abcd:abcd:abcd::21]:3300,v1:[abcd:abcd:abcd::21]:6789],[v2:[abcd:abcd:abcd::22]:3300,v1:[abcd:abcd:abcd::22]:6789],[v2:[abcd:abcd:abcd::23]:3300,v1:[abcd:abcd:abcd::23]:6789]
>> >>
>> >> public network = abcd:abcd:abcd::/64
>> >>
>> >> cluster network = eff0:eff0:eff0::/64
>> >>
>> >>
>> >>
>> >> [osd]
>> >>
>> >> osd memory target = 4294967296
>> >>
>> >>
>> >>
>> >> [client.rgw.storagenode1.rgw0]
>> >>
>> >> host = storagenode1
>> >>
>> >> keyring = /var/lib/ceph/radosgw/ceph-rgw.storagenode1.rgw0/keyring
>> >>
>> >> log file = /var/log/ceph/ceph-rgw-storagenode1.rgw0.log
>> >>
>> >> rgw frontends = beast endpoint=[abcd:abcd:abcd::21]:8080
>> >>
>> >> rgw thread pool size = 512
>> >>
>> >> --
>> >> ~ Lokendra
>> >> skype: lokendrarathour
>> >>
>> >>
>> >>
>> > ___
>> > ceph-users mailing list -- ceph-users@ceph.io
>> > To unsubscribe send an email to ceph-users-le...@ceph.io
>>
>> ___
>> ceph-users mailing list -- ceph-users@ceph.io
>> To unsubscribe send an email to ceph-users-le...@ceph.io
>>
>
>
> --
> ~ Lokendra
> skype: lokendrarathour
>
>
>

-- 
~ Lokendra
skype: lokendrarathour
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: deploying Ceph using FQDN for MON / MDS Services

2023-04-11 Thread Lokendra Rathour
Ceph version is Quincy.

But now I am able to resolve the issue.

During mount I will not pass any monitor details; they will be
auto-discovered via SRV.
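For anyone finding this later, a sketch of the working invocation, with no monitor list in the device string (the key is a placeholder; SRV records are assumed to be published per the mon-lookup-dns doc and ceph-common installed on the client):

```
sudo mount -t ceph :/ /mnt/imgs -o name=foo,secret=<key>
```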

On Tue, Apr 11, 2023 at 6:09 PM Eugen Block  wrote:

> What ceph version is this? Could it be this bug [1]? Although the
> error message is different, not sure if it could be the same issue,
> and I don't have anything to test ipv6 with.
>
> [1] https://tracker.ceph.com/issues/47300
>
> Zitat von Lokendra Rathour :
>
> > Hi All,
> > Requesting any inputs around the issue raised.
> >
> > Best Regards,
> > Lokendra
> >
> > On Tue, 24 Jan, 2023, 7:32 pm Lokendra Rathour, <
> lokendrarath...@gmail.com>
> > wrote:
> >
> >> Hi Team,
> >>
> >>
> >>
> >> We have a ceph cluster with 3 storage nodes:
> >>
> >> 1. storagenode1 - abcd:abcd:abcd::21
> >>
> >> 2. storagenode2 - abcd:abcd:abcd::22
> >>
> >> 3. storagenode3 - abcd:abcd:abcd::23
> >>
> >>
> >>
> >> The requirement is to mount ceph using the domain name of MON node:
> >>
> >> Note: we resolved the domain name via DNS server.
> >>
> >>
> >> For this we are using the command:
> >>
> >> ```
> >>
> >> mount -t ceph [storagenode.storage.com]:6789:/  /backup -o
> >> name=admin,secret=AQCM+8hjqzuZEhAAcuQc+onNKReq7MV+ykFirg==
> >>
> >> ```
> >>
> >>
> >>
> >> We are getting the following logs in /var/log/messages:
> >>
> >> ```
> >>
> >> Jan 24 17:23:17 localhost kernel: libceph: resolve '
> >> storagenode.storage.com' (ret=-3): failed
> >>
> >> Jan 24 17:23:17 localhost kernel: libceph: parse_ips bad ip '
> >> storagenode.storage.com:6789'
> >>
> >> ```
> >>
> >>
> >>
> >> We also tried mounting ceph storage using IP of MON which is working
> fine.
> >>
> >>
> >>
> >> Query:
> >>
> >>
> >> Could you please help us out with how we can mount ceph using FQDN.
> >>
> >>
> >>
> >> My /etc/ceph/ceph.conf is as follows:
> >>
> >> [global]
> >>
> >> ms bind ipv6 = true
> >>
> >> ms bind ipv4 = false
> >>
> >> mon initial members = storagenode1,storagenode2,storagenode3
> >>
> >> osd pool default crush rule = -1
> >>
> >> fsid = 7969b8a3-1df7-4eae-8ccf-2e5794de87fe
> >>
> >> mon host =
> >>
> [v2:[abcd:abcd:abcd::21]:3300,v1:[abcd:abcd:abcd::21]:6789],[v2:[abcd:abcd:abcd::22]:3300,v1:[abcd:abcd:abcd::22]:6789],[v2:[abcd:abcd:abcd::23]:3300,v1:[abcd:abcd:abcd::23]:6789]
> >>
> >> public network = abcd:abcd:abcd::/64
> >>
> >> cluster network = eff0:eff0:eff0::/64
> >>
> >>
> >>
> >> [osd]
> >>
> >> osd memory target = 4294967296
> >>
> >>
> >>
> >> [client.rgw.storagenode1.rgw0]
> >>
> >> host = storagenode1
> >>
> >> keyring = /var/lib/ceph/radosgw/ceph-rgw.storagenode1.rgw0/keyring
> >>
> >> log file = /var/log/ceph/ceph-rgw-storagenode1.rgw0.log
> >>
> >> rgw frontends = beast endpoint=[abcd:abcd:abcd::21]:8080
> >>
> >> rgw thread pool size = 512
> >>
> >> --
> >> ~ Lokendra
> >> skype: lokendrarathour
> >>
> >>
> >>
> > ___
> > ceph-users mailing list -- ceph-users@ceph.io
> > To unsubscribe send an email to ceph-users-le...@ceph.io
>
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>


-- 
~ Lokendra
skype: lokendrarathour
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: deploying Ceph using FQDN for MON / MDS Services

2023-04-11 Thread Eugen Block
What ceph version is this? Could it be this bug [1]? Although the  
error message is different, not sure if it could be the same issue,  
and I don't have anything to test ipv6 with.


[1] https://tracker.ceph.com/issues/47300

Zitat von Lokendra Rathour :


Hi All,
Requesting any inputs around the issue raised.

Best Regards,
Lokendra

On Tue, 24 Jan, 2023, 7:32 pm Lokendra Rathour, 
wrote:


Hi Team,



We have a ceph cluster with 3 storage nodes:

1. storagenode1 - abcd:abcd:abcd::21

2. storagenode2 - abcd:abcd:abcd::22

3. storagenode3 - abcd:abcd:abcd::23



The requirement is to mount ceph using the domain name of MON node:

Note: we resolved the domain name via DNS server.


For this we are using the command:

```

mount -t ceph [storagenode.storage.com]:6789:/  /backup -o
name=admin,secret=AQCM+8hjqzuZEhAAcuQc+onNKReq7MV+ykFirg==

```



We are getting the following logs in /var/log/messages:

```

Jan 24 17:23:17 localhost kernel: libceph: resolve '
storagenode.storage.com' (ret=-3): failed

Jan 24 17:23:17 localhost kernel: libceph: parse_ips bad ip '
storagenode.storage.com:6789'

```



We also tried mounting ceph storage using IP of MON which is working fine.



Query:


Could you please help us out with how we can mount ceph using FQDN?



My /etc/ceph/ceph.conf is as follows:

[global]

ms bind ipv6 = true

ms bind ipv4 = false

mon initial members = storagenode1,storagenode2,storagenode3

osd pool default crush rule = -1

fsid = 7969b8a3-1df7-4eae-8ccf-2e5794de87fe

mon host =
[v2:[abcd:abcd:abcd::21]:3300,v1:[abcd:abcd:abcd::21]:6789],[v2:[abcd:abcd:abcd::22]:3300,v1:[abcd:abcd:abcd::22]:6789],[v2:[abcd:abcd:abcd::23]:3300,v1:[abcd:abcd:abcd::23]:6789]

public network = abcd:abcd:abcd::/64

cluster network = eff0:eff0:eff0::/64



[osd]

osd memory target = 4294967296



[client.rgw.storagenode1.rgw0]

host = storagenode1

keyring = /var/lib/ceph/radosgw/ceph-rgw.storagenode1.rgw0/keyring

log file = /var/log/ceph/ceph-rgw-storagenode1.rgw0.log

rgw frontends = beast endpoint=[abcd:abcd:abcd::21]:8080

rgw thread pool size = 512

--
~ Lokendra
skype: lokendrarathour




___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: deploying Ceph using FQDN for MON / MDS Services

2023-04-09 Thread Lokendra Rathour
Hi All,
Requesting any inputs around the issue raised.

Best Regards,
Lokendra

On Tue, 24 Jan, 2023, 7:32 pm Lokendra Rathour, 
wrote:

> Hi Team,
>
>
>
> We have a ceph cluster with 3 storage nodes:
>
> 1. storagenode1 - abcd:abcd:abcd::21
>
> 2. storagenode2 - abcd:abcd:abcd::22
>
> 3. storagenode3 - abcd:abcd:abcd::23
>
>
>
> The requirement is to mount ceph using the domain name of MON node:
>
> Note: we resolved the domain name via DNS server.
>
>
> For this we are using the command:
>
> ```
>
> mount -t ceph [storagenode.storage.com]:6789:/  /backup -o
> name=admin,secret=AQCM+8hjqzuZEhAAcuQc+onNKReq7MV+ykFirg==
>
> ```
>
>
>
> We are getting the following logs in /var/log/messages:
>
> ```
>
> Jan 24 17:23:17 localhost kernel: libceph: resolve '
> storagenode.storage.com' (ret=-3): failed
>
> Jan 24 17:23:17 localhost kernel: libceph: parse_ips bad ip '
> storagenode.storage.com:6789'
>
> ```
>
>
>
> We also tried mounting ceph storage using IP of MON which is working fine.
>
>
>
> Query:
>
>
> Could you please help us out with how we can mount ceph using FQDN?
>
>
>
> My /etc/ceph/ceph.conf is as follows:
>
> [global]
>
> ms bind ipv6 = true
>
> ms bind ipv4 = false
>
> mon initial members = storagenode1,storagenode2,storagenode3
>
> osd pool default crush rule = -1
>
> fsid = 7969b8a3-1df7-4eae-8ccf-2e5794de87fe
>
> mon host =
> [v2:[abcd:abcd:abcd::21]:3300,v1:[abcd:abcd:abcd::21]:6789],[v2:[abcd:abcd:abcd::22]:3300,v1:[abcd:abcd:abcd::22]:6789],[v2:[abcd:abcd:abcd::23]:3300,v1:[abcd:abcd:abcd::23]:6789]
>
> public network = abcd:abcd:abcd::/64
>
> cluster network = eff0:eff0:eff0::/64
>
>
>
> [osd]
>
> osd memory target = 4294967296
>
>
>
> [client.rgw.storagenode1.rgw0]
>
> host = storagenode1
>
> keyring = /var/lib/ceph/radosgw/ceph-rgw.storagenode1.rgw0/keyring
>
> log file = /var/log/ceph/ceph-rgw-storagenode1.rgw0.log
>
> rgw frontends = beast endpoint=[abcd:abcd:abcd::21]:8080
>
> rgw thread pool size = 512
>
> --
> ~ Lokendra
> skype: lokendrarathour
>
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: deploying Ceph using FQDN for MON / MDS Services

2023-03-28 Thread Lokendra Rathour
Hi All,
Any help with this issue would be appreciated.
Thanks once again.


On Tue, Jan 24, 2023 at 7:32 PM Lokendra Rathour 
wrote:

> Hi Team,
>
>
>
> We have a ceph cluster with 3 storage nodes:
>
> 1. storagenode1 - abcd:abcd:abcd::21
>
> 2. storagenode2 - abcd:abcd:abcd::22
>
> 3. storagenode3 - abcd:abcd:abcd::23
>
>
>
> The requirement is to mount ceph using the domain name of MON node:
>
> Note: we resolved the domain name via DNS server.
>
>
> For this we are using the command:
>
> ```
>
> mount -t ceph [storagenode.storage.com]:6789:/  /backup -o
> name=admin,secret=AQCM+8hjqzuZEhAAcuQc+onNKReq7MV+ykFirg==
>
> ```
>
>
>
> We are getting the following logs in /var/log/messages:
>
> ```
>
> Jan 24 17:23:17 localhost kernel: libceph: resolve '
> storagenode.storage.com' (ret=-3): failed
>
> Jan 24 17:23:17 localhost kernel: libceph: parse_ips bad ip '
> storagenode.storage.com:6789'
>
> ```
>
>
>
> We also tried mounting ceph storage using IP of MON which is working fine.
>
>
>
> Query:
>
>
> Could you please help us out with how we can mount ceph using FQDN?
>
>
>
> My /etc/ceph/ceph.conf is as follows:
>
> [global]
>
> ms bind ipv6 = true
>
> ms bind ipv4 = false
>
> mon initial members = storagenode1,storagenode2,storagenode3
>
> osd pool default crush rule = -1
>
> fsid = 7969b8a3-1df7-4eae-8ccf-2e5794de87fe
>
> mon host =
> [v2:[abcd:abcd:abcd::21]:3300,v1:[abcd:abcd:abcd::21]:6789],[v2:[abcd:abcd:abcd::22]:3300,v1:[abcd:abcd:abcd::22]:6789],[v2:[abcd:abcd:abcd::23]:3300,v1:[abcd:abcd:abcd::23]:6789]
>
> public network = abcd:abcd:abcd::/64
>
> cluster network = eff0:eff0:eff0::/64
>
>
>
> [osd]
>
> osd memory target = 4294967296
>
>
>
> [client.rgw.storagenode1.rgw0]
>
> host = storagenode1
>
> keyring = /var/lib/ceph/radosgw/ceph-rgw.storagenode1.rgw0/keyring
>
> log file = /var/log/ceph/ceph-rgw-storagenode1.rgw0.log
>
> rgw frontends = beast endpoint=[abcd:abcd:abcd::21]:8080
>
> rgw thread pool size = 512
>
> --
> ~ Lokendra
> skype: lokendrarathour
>
>
>

-- 
~ Lokendra
skype: lokendrarathour
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: deploying Ceph using FQDN for MON / MDS Services

2023-02-04 Thread kushagra . gupta
Hi Robert,

Thank you for the help. We had previously referred to the link:
https://docs.ceph.com/en/quincy/rados/configuration/mon-lookup-dns/
but we were not able to configure mon_dns_srv_name correctly.

We found the following link, which gives a little more information about the
DNS lookup:
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/4/html/configuration_guide/ceph-monitor-configuration

After following the link, we updated the ceph.conf as follows:
```
[root@storagenode3 ~]# cat /etc/ceph/ceph.conf
[global]
ms bind ipv6 = true
ms bind ipv4 = false
mon initial members = storagenode1,storagenode2,storagenode3
osd pool default crush rule = -1
mon dns srv name = ceph-mon
fsid = ce479912-a277-45b6-87b1-203d3e43d776
public network = abcd:abcd:abcd::/64
cluster network = eff0:eff0:eff0::/64

[osd]
osd memory target = 4294967296

[client.rgw.storagenode3.rgw0]
host = storagenode3
keyring = /var/lib/ceph/radosgw/ceph-rgw.storagenode3.rgw0/keyring
log file = /var/log/ceph/ceph-rgw-storagenode3.rgw0.log
rgw frontends = beast endpoint=[abcd:abcd:abcd::23]:8080
rgw thread pool size = 512

[root@storagenode3 ~]#
```

We also updated the dns server as follows:
```
storagenode1.storage.com  IN  AAAA  abcd:abcd:abcd::21
storagenode2.storage.com  IN  AAAA  abcd:abcd:abcd::22
storagenode3.storage.com  IN  AAAA  abcd:abcd:abcd::23

_ceph-mon._tcp.storage.com 60 IN SRV 10 60 6789 storagenode1.storage.com
_ceph-mon._tcp.storage.com 60 IN SRV 10 60 6789 storagenode2.storage.com
_ceph-mon._tcp.storage.com 60 IN SRV 10 60 6789 storagenode3.storage.com
_ceph-mon._tcp.storage.com 60 IN SRV 10 60 3300 storagenode1.storage.com
_ceph-mon._tcp.storage.com 60 IN SRV 10 60 3300 storagenode2.storage.com
_ceph-mon._tcp.storage.com 60 IN SRV 10 60 3300 storagenode3.storage.com
```

But when we run the command ceph -s, we get the following error:

```
[root@storagenode3 ~]# ceph -s
unable to get monitor info from DNS SRV with service name: ceph-mon
2023-02-02T15:18:14.098+0530 7f1313a58700 -1 failed for service _ceph-mon._tcp
2023-02-02T15:18:14.098+0530 7f1313a58700 -1 monclient: get_monmap_and_config 
cannot identify monitors to contact
[errno 2] RADOS object not found (error connecting to the cluster)
[root@storagenode3 ~]#
```
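Two hedged things to check here (assumptions on my part, not confirmed in this thread): the resolver on the node must expand the bare service name _ceph-mon._tcp with a search domain covering the zone that holds the records, and the records must actually resolve from that node:

```
cat /etc/resolv.conf                        # expect a line like: search storage.com
dig +short _ceph-mon._tcp.storage.com SRV   # expect the six records configured above
```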

Could you please help us to configure the server using mon_dns_srv_name 
correctly?

Thanks and Regards
Kushagra Gupta
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: deploying Ceph using FQDN for MON / MDS Services

2023-02-03 Thread Ruidong Gao
Hi Lokendra,

To make the monitors discoverable through DNS, the name ceph-mon also needs
to resolve correctly on the DNS server, just like _ceph-mon._tcp.
And ceph-mon is the default service name, so it does not need to be in the
conf file anyway.
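As a sketch, assuming the SRV records are in place, the client-side ceph.conf can then be as small as this (fsid copied from the conf quoted below; mon dns srv name shown only for clarity, since ceph-mon is the default):

```
[global]
fsid = ce479912-a277-45b6-87b1-203d3e43d776
mon dns srv name = ceph-mon   # optional, this is the default
```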

Ben
> On 3 Feb 2023, at 12:14, Lokendra Rathour  wrote:
> 
> Hi Robert and Team,
> 
> 
> 
> Thank you for the help. We had previously referred to the link:
> https://docs.ceph.com/en/quincy/rados/configuration/mon-lookup-dns/
> But we were not able to configure mon_dns_srv_name correctly.
> 
> 
> 
> We find the following link:
> https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/4/html/configuration_guide/ceph-monitor-configuration
> 
> 
> 
> Which gives just a little more information about the DNS lookup.
> 
> 
> 
> After following the link, we updated the ceph.conf as follows:
> ```
> [root@storagenode3 ~]# cat /etc/ceph/ceph.conf
> [global]
> ms bind ipv6 = true
> ms bind ipv4 = false
> mon initial members = storagenode1,storagenode2,storagenode3
> osd pool default crush rule = -1
> mon dns srv name = ceph-mon
> fsid = ce479912-a277-45b6-87b1-203d3e43d776
> public network = abcd:abcd:abcd::/64
> cluster network = eff0:eff0:eff0::/64
> 
> 
> 
> [osd]
> osd memory target = 4294967296
> 
> 
> 
> [client.rgw.storagenode3.rgw0]
> host = storagenode3
> keyring = /var/lib/ceph/radosgw/ceph-rgw.storagenode3.rgw0/keyring
> log file = /var/log/ceph/ceph-rgw-storagenode3.rgw0.log
> rgw frontends = beast endpoint=[abcd:abcd:abcd::23]:8080
> rgw thread pool size = 512
> 
> 
> 
> [root@storagenode3 ~]#
> ```
> 
> We also updated the dns server as follows:
> 
> ```
> storagenode1.storage.com  IN  AAAA  abcd:abcd:abcd::21
> storagenode2.storage.com  IN  AAAA  abcd:abcd:abcd::22
> storagenode3.storage.com  IN  AAAA  abcd:abcd:abcd::23
> 
> 
> 
> _ceph-mon._tcp.storage.com 60 IN SRV 10 60 6789 storagenode1.storage.com
> _ceph-mon._tcp.storage.com 60 IN SRV 10 60 6789 storagenode2.storage.com
> _ceph-mon._tcp.storage.com 60 IN SRV 10 60 6789 storagenode3.storage.com
> _ceph-mon._tcp.storage.com 60 IN SRV 10 60 3300 storagenode1.storage.com
> _ceph-mon._tcp.storage.com 60 IN SRV 10 60 3300 storagenode2.storage.com
> _ceph-mon._tcp.storage.com 60 IN SRV 10 60 3300 storagenode3.storage.com
> 
> 
> ```
> 
> But when we run the command ceph -s, we get the following error:
> 
> ```
> [root@storagenode3 ~]# ceph -s
> unable to get monitor info from DNS SRV with service name: ceph-mon
> 2023-02-02T15:18:14.098+0530 7f1313a58700 -1 failed for service
> _ceph-mon._tcp
> 2023-02-02T15:18:14.098+0530 7f1313a58700 -1 monclient:
> get_monmap_and_config cannot identify monitors to contact
> [errno 2] RADOS object not found (error connecting to the cluster)
> [root@storagenode3 ~]#
> ```
> 
> Could you please help us to configure the server using mon_dns_srv_name
> correctly?
> 
> 
> 
> On Wed, Jan 25, 2023 at 9:06 PM John Mulligan 
> wrote:
> 
>> On Tuesday, January 24, 2023 9:02:41 AM EST Lokendra Rathour wrote:
>>> Hi Team,
>>> 
>>> 
>>> 
>>> We have a ceph cluster with 3 storage nodes:
>>> 
>>> 1. storagenode1 - abcd:abcd:abcd::21
>>> 
>>> 2. storagenode2 - abcd:abcd:abcd::22
>>> 
>>> 3. storagenode3 - abcd:abcd:abcd::23
>>> 
>>> 
>>> 
>>> The requirement is to mount ceph using the domain name of MON node:
>>> 
>>> Note: we resolved the domain name via DNS server.
>>> 
>>> 
>>> For this we are using the command:
>>> 
>>> ```
>>> 
>>> mount -t ceph [storagenode.storage.com]:6789:/  /backup -o
>>> name=admin,secret=AQCM+8hjqzuZEhAAcuQc+onNKReq7MV+ykFirg==
>>> 
>>> ```
>>> 
>>> 
>>> 
>>> We are getting the following logs in /var/log/messages:
>>> 
>>> ```
>>> 
>>> Jan 24 17:23:17 localhost kernel: libceph: resolve '
>> storagenode.storage.com'
>>> (ret=-3): failed
>>> 
>>> Jan 24 17:23:17 localhost kernel: libceph: parse_ips bad ip '
>>> storagenode.storage.com:6789'
>>> 
>>> ```
>>> 
>> 
>> 
>> I saw a similar log message recently when I had forgotten to install the
>> ceph
>> mount helper.
>> Check to see if you have a binary 'mount.ceph' on the system. If you don't
>> try
>> to install it from packages. On fedora I needed to install 'ceph-common'.
>> 
>> 
>>> 
>>> 
>>> We also tried mounting ceph storage using IP of MON which is working
>> fine.
>>> 
>>> 
>>> 
>>> Query:
>>> 
>>> 
>>> Could you please help us out with how we can mount ceph using FQDN.
>>> 
>>> 
>>> 
>>> My /etc/ceph/ceph.conf is as follows:
>>> 
>>> [global]
>>> 
>>> ms bind ipv6 = true
>>> 
>>> ms bind ipv4 = false
>>> 
>>> mon initial members = storagenode1,storagenode2,storagenode3
>>> 
>>> osd pool default crush rule = -1
>>> 
>>> fsid = 7969b8a3-1df7-4eae-8ccf-2e5794de87fe
>>> 
>>> mon host =
>>> 
>> [v2:[abcd:abcd:abcd::21]:3300,v1:[abcd:abcd:abcd::21]:6789],[v2:[abcd:abcd:a
>>> 
>> bcd::22]:3300,v1:[abcd:abcd:abcd::22]:6789],[v2:[abcd:abcd:abcd::23]:3300,v1
>>> :[abcd:abcd:abcd::23]:6789]
>>> 
>>> public network = abcd:abcd:abcd::/64
>>> 
>>> cluster network = eff0:eff0:eff0::/64
>>> 
>>> 
>>> 
>>> [osd]
>>> 
>>> osd memory target 

[ceph-users] Re: deploying Ceph using FQDN for MON / MDS Services

2023-02-02 Thread Lokendra Rathour
Hi Robert and Team,



Thank you for the help. We had previously referred to the link:
https://docs.ceph.com/en/quincy/rados/configuration/mon-lookup-dns/
But we were not able to configure mon_dns_srv_name correctly.



We found the following link, which gives a little more information about the
DNS lookup:
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/4/html/configuration_guide/ceph-monitor-configuration



After following the link, we updated the ceph.conf as follows:
```
[root@storagenode3 ~]# cat /etc/ceph/ceph.conf
[global]
ms bind ipv6 = true
ms bind ipv4 = false
mon initial members = storagenode1,storagenode2,storagenode3
osd pool default crush rule = -1
mon dns srv name = ceph-mon
fsid = ce479912-a277-45b6-87b1-203d3e43d776
public network = abcd:abcd:abcd::/64
cluster network = eff0:eff0:eff0::/64



[osd]
osd memory target = 4294967296



[client.rgw.storagenode3.rgw0]
host = storagenode3
keyring = /var/lib/ceph/radosgw/ceph-rgw.storagenode3.rgw0/keyring
log file = /var/log/ceph/ceph-rgw-storagenode3.rgw0.log
rgw frontends = beast endpoint=[abcd:abcd:abcd::23]:8080
rgw thread pool size = 512



[root@storagenode3 ~]#
```

 We also updated the dns server as follows:

```
storagenode1.storage.com  IN  AAAA  abcd:abcd:abcd::21
storagenode2.storage.com  IN  AAAA  abcd:abcd:abcd::22
storagenode3.storage.com  IN  AAAA  abcd:abcd:abcd::23



_ceph-mon._tcp.storage.com 60 IN SRV 10 60 6789 storagenode1.storage.com
_ceph-mon._tcp.storage.com 60 IN SRV 10 60 6789 storagenode2.storage.com
_ceph-mon._tcp.storage.com 60 IN SRV 10 60 6789 storagenode3.storage.com
_ceph-mon._tcp.storage.com 60 IN SRV 10 60 3300 storagenode1.storage.com
_ceph-mon._tcp.storage.com 60 IN SRV 10 60 3300 storagenode2.storage.com
_ceph-mon._tcp.storage.com 60 IN SRV 10 60 3300 storagenode3.storage.com


```

But when we run the command ceph -s, we get the following error:

```
[root@storagenode3 ~]# ceph -s
unable to get monitor info from DNS SRV with service name: ceph-mon
2023-02-02T15:18:14.098+0530 7f1313a58700 -1 failed for service
_ceph-mon._tcp
2023-02-02T15:18:14.098+0530 7f1313a58700 -1 monclient:
get_monmap_and_config cannot identify monitors to contact
[errno 2] RADOS object not found (error connecting to the cluster)
[root@storagenode3 ~]#
```

 Could you please help us to configure the server using mon_dns_srv_name
correctly?



On Wed, Jan 25, 2023 at 9:06 PM John Mulligan 
wrote:

> On Tuesday, January 24, 2023 9:02:41 AM EST Lokendra Rathour wrote:
> > Hi Team,
> >
> >
> >
> > We have a ceph cluster with 3 storage nodes:
> >
> > 1. storagenode1 - abcd:abcd:abcd::21
> >
> > 2. storagenode2 - abcd:abcd:abcd::22
> >
> > 3. storagenode3 - abcd:abcd:abcd::23
> >
> >
> >
> > The requirement is to mount ceph using the domain name of MON node:
> >
> > Note: we resolved the domain name via DNS server.
> >
> >
> > For this we are using the command:
> >
> > ```
> >
> > mount -t ceph [storagenode.storage.com]:6789:/  /backup -o
> > name=admin,secret=AQCM+8hjqzuZEhAAcuQc+onNKReq7MV+ykFirg==
> >
> > ```
> >
> >
> >
> > We are getting the following logs in /var/log/messages:
> >
> > ```
> >
> > Jan 24 17:23:17 localhost kernel: libceph: resolve '
> storagenode.storage.com'
> > (ret=-3): failed
> >
> > Jan 24 17:23:17 localhost kernel: libceph: parse_ips bad ip '
> > storagenode.storage.com:6789'
> >
> > ```
> >
>
>
> I saw a similar log message recently when I had forgotten to install the
> ceph
> mount helper.
> Check to see if you have a binary 'mount.ceph' on the system. If you don't
> try
> to install it from packages. On fedora I needed to install 'ceph-common'.
>
>
> >
> >
> > We also tried mounting ceph storage using IP of MON which is working
> fine.
> >
> >
> >
> > Query:
> >
> >
> > Could you please help us out with how we can mount ceph using FQDN.
> >
> >
> >
> > My /etc/ceph/ceph.conf is as follows:
> >
> > [global]
> >
> > ms bind ipv6 = true
> >
> > ms bind ipv4 = false
> >
> > mon initial members = storagenode1,storagenode2,storagenode3
> >
> > osd pool default crush rule = -1
> >
> > fsid = 7969b8a3-1df7-4eae-8ccf-2e5794de87fe
> >
> > mon host =
> >
> [v2:[abcd:abcd:abcd::21]:3300,v1:[abcd:abcd:abcd::21]:6789],[v2:[abcd:abcd:a
> >
> bcd::22]:3300,v1:[abcd:abcd:abcd::22]:6789],[v2:[abcd:abcd:abcd::23]:3300,v1
> > :[abcd:abcd:abcd::23]:6789]
> >
> > public network = abcd:abcd:abcd::/64
> >
> > cluster network = eff0:eff0:eff0::/64
> >
> >
> >
> > [osd]
> >
> > osd memory target = 4294967296
> >
> >
> >
> > [client.rgw.storagenode1.rgw0]
> >
> > host = storagenode1
> >
> > keyring = /var/lib/ceph/radosgw/ceph-rgw.storagenode1.rgw0/keyring
> >
> > log file = /var/log/ceph/ceph-rgw-storagenode1.rgw0.log
> >
> > rgw frontends = beast endpoint=[abcd:abcd:abcd::21]:8080
> >
> > rgw thread pool size = 512
>
>
>
>
>

-- 
~ Lokendra
skype: lokendrarathour
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to 

[ceph-users] Re: deploying Ceph using FQDN for MON / MDS Services

2023-01-25 Thread John Mulligan
On Tuesday, January 24, 2023 9:02:41 AM EST Lokendra Rathour wrote:
> Hi Team,
> 
> 
> 
> We have a ceph cluster with 3 storage nodes:
> 
> 1. storagenode1 - abcd:abcd:abcd::21
> 
> 2. storagenode2 - abcd:abcd:abcd::22
> 
> 3. storagenode3 - abcd:abcd:abcd::23
> 
> 
> 
> The requirement is to mount ceph using the domain name of MON node:
> 
> Note: we resolved the domain name via DNS server.
> 
> 
> For this we are using the command:
> 
> ```
> 
> mount -t ceph [storagenode.storage.com]:6789:/  /backup -o
> name=admin,secret=AQCM+8hjqzuZEhAAcuQc+onNKReq7MV+ykFirg==
> 
> ```
> 
> 
> 
> We are getting the following logs in /var/log/messages:
> 
> ```
> 
> Jan 24 17:23:17 localhost kernel: libceph: resolve 'storagenode.storage.com'
> (ret=-3): failed
> 
> Jan 24 17:23:17 localhost kernel: libceph: parse_ips bad ip '
> storagenode.storage.com:6789'
> 
> ```
> 


I saw a similar log message recently when I had forgotten to install the ceph
mount helper.
Check to see if you have a 'mount.ceph' binary on the system. If you don't,
try to install it from packages. On Fedora I needed to install 'ceph-common'.
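For example (Fedora family, matching the note above):

```
command -v mount.ceph || sudo dnf install ceph-common
```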


> 
> 
> We also tried mounting ceph storage using IP of MON which is working fine.
> 
> 
> 
> Query:
> 
> 
> Could you please help us out with how we can mount ceph using FQDN.
> 
> 
> 
> My /etc/ceph/ceph.conf is as follows:
> 
> [global]
> 
> ms bind ipv6 = true
> 
> ms bind ipv4 = false
> 
> mon initial members = storagenode1,storagenode2,storagenode3
> 
> osd pool default crush rule = -1
> 
> fsid = 7969b8a3-1df7-4eae-8ccf-2e5794de87fe
> 
> mon host =
> [v2:[abcd:abcd:abcd::21]:3300,v1:[abcd:abcd:abcd::21]:6789],[v2:[abcd:abcd:a
> bcd::22]:3300,v1:[abcd:abcd:abcd::22]:6789],[v2:[abcd:abcd:abcd::23]:3300,v1
> :[abcd:abcd:abcd::23]:6789]
> 
> public network = abcd:abcd:abcd::/64
> 
> cluster network = eff0:eff0:eff0::/64
> 
> 
> 
> [osd]
> 
> osd memory target = 4294967296
> 
> 
> 
> [client.rgw.storagenode1.rgw0]
> 
> host = storagenode1
> 
> keyring = /var/lib/ceph/radosgw/ceph-rgw.storagenode1.rgw0/keyring
> 
> log file = /var/log/ceph/ceph-rgw-storagenode1.rgw0.log
> 
> rgw frontends = beast endpoint=[abcd:abcd:abcd::21]:8080
> 
> rgw thread pool size = 512



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: deploying Ceph using FQDN for MON / MDS Services

2023-01-24 Thread Robert Sander

Hi,

you can also use SRV records in DNS to publish the IPs of the MONs.

Read https://docs.ceph.com/en/quincy/rados/configuration/mon-lookup-dns/ 
for more info.
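For a flavour of what the linked page describes, the records look like this (zone and host names here are placeholders; both the v1 port 6789 and the v2 port 3300 can be published):

```
_ceph-mon._tcp.example.com. 60 IN SRV 10 60 6789 mon1.example.com.
_ceph-mon._tcp.example.com. 60 IN SRV 10 60 3300 mon1.example.com.
```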


Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin

https://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Amtsgericht Berlin-Charlottenburg - HRB 220009 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: deploying Ceph using FQDN for MON / MDS Services

2023-01-24 Thread Robert Sander

Hi,

On 24.01.23 15:02, Lokendra Rathour wrote:


My /etc/ceph/ceph.conf is as follows:

[global]
fsid = 7969b8a3-1df7-4eae-8ccf-2e5794de87fe
mon host = 
[v2:[abcd:abcd:abcd::21]:3300,v1:[abcd:abcd:abcd::21]:6789],[v2:[abcd:abcd:abcd::22]:3300,v1:[abcd:abcd:abcd::22]:6789],[v2:[abcd:abcd:abcd::23]:3300,v1:[abcd:abcd:abcd::23]:6789]


Does this ceph.conf also exist on the hosts that want to mount the 
filesystem? Then you do not need to specify a MON host or IP when 
mounting CephFS. Just do


mount -t ceph -o name=admin,secret=XXX :/ /backup
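A minimal sketch of preparing such a client (paths assumed; the conf is copied from one of the MON nodes):

```
# copy the cluster conf from a MON node to the client
scp root@storagenode1:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
# the MON-less mount then works because mount.ceph reads the mon host list
mount -t ceph -o name=admin,secret=XXX :/ /backup
```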

Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin

https://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Amtsgericht Berlin-Charlottenburg - HRB 220009 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io