[ceph-users] Ceph iSCSI login failed due to authorization failure

2017-10-14 Thread Kashif Mumtaz
Hello,
I am trying to configure the Ceph iSCSI gateway on Ceph Luminous, as per the documentation below:
Ceph iSCSI Gateway — Ceph Documentation
The Ceph iSCSI gateways are configured and CHAP auth is set.
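For reference, the CHAP credentials on the host entry were set in gwcli roughly
as below (a sketch only; the username/password shown are placeholders, not the
actual values used, and this is the chap=<username>/<password> form of the
Luminous-era gwcli):

/iscsi-target...-gw:tahir/hosts/iqn.1994-05.com.redhat:rh7-client> auth chap=myiscsiusername/myiscsipassword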



/> ls
o- / ........................................................... [...]
  o- clusters ........................................... [Clusters: 1]
  | o- ceph ............................................. [HEALTH_WARN]
  |   o- pools ............................................. [Pools: 2]
  |   | o- kashif .... [Commit: 0b, Avail: 116G, Used: 1K, Commit%: 0%]
  |   | o- rbd ...... [Commit: 10G, Avail: 116G, Used: 3K, Commit%: 8%]
  |   o- topology ................................. [OSDs: 13,MONs: 3]
  o- disks ............................................ [10G, Disks: 1]
  | o- rbd.disk_1 ..................................... [disk_1 (10G)]
  o- iscsi-target ....................................... [Targets: 1]
    o- iqn.2003-01.com.redhat.iscsi-gw:tahir .......... [Gateways: 2]
      o- gateways ............................ [Up: 2/2, Portals: 2]
      | o- gateway ............................ [192.168.10.37 (UP)]
      | o- gateway2 ........................... [192.168.10.38 (UP)]
      o- hosts ........................................... [Hosts: 1]
        o- iqn.1994-05.com.redhat:rh7-client  [Auth: CHAP, Disks: 1(10G)]
          o- lun 0 ............... [rbd.disk_1(10G), Owner: gateway2]
/>


But initiators are unable to mount it. I tried both on Linux and on ESXi 6.


Below are the error messages in the iSCSI gateway server log file:

Oct 14 19:34:49 gateway kernel: iSCSI Initiator Node: iqn.1998-01.com.vmware:esx0-36c45c69 is not authorized to access iSCSI target portal group: 1.
Oct 14 19:34:49 gateway kernel: iSCSI Login negotiation failed.
Oct 14 19:35:27 gateway kernel: iSCSI Initiator Node: iqn.1994-05.com.redhat:5ef55740c576 is not authorized to access iSCSI target portal group: 1.
Oct 14 19:35:27 gateway kernel: iSCSI Login negotiation failed.

I am supplying the CHAP authentication on the initiator side.
Discovery on the initiator is working:

[root@server1 ~]# iscsiadm -m discovery -t st -p 192.168.10.37
192.168.10.37:3260,1 iqn.2003-01.com.redhat.iscsi-gw:tahir
192.168.10.38:3260,2 iqn.2003-01.com.redhat.iscsi-gw:tahir

But when trying to log in, it gives "iSCSI login failed due to authorization
failure".
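For context, the initiator-side CHAP settings were applied along these lines (a
sketch; myiscsiusername/myiscsipassword are placeholders for whatever was set
in gwcli):

iscsiadm -m node -T iqn.2003-01.com.redhat.iscsi-gw:tahir -o update -n node.session.auth.authmethod -v CHAP
iscsiadm -m node -T iqn.2003-01.com.redhat.iscsi-gw:tahir -o update -n node.session.auth.username -v myiscsiusername
iscsiadm -m node -T iqn.2003-01.com.redhat.iscsi-gw:tahir -o update -n node.session.auth.password -v myiscsipassword

(The same values can also be set globally in /etc/iscsi/iscsid.conf.)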

[root@server1 ~]# iscsiadm -m node -T iqn.2003-01.com.redhat.iscsi-gw:tahir -l
Logging in to [iface: default, target: iqn.2003-01.com.redhat.iscsi-gw:tahir, portal: 192.168.10.37,3260] (multiple)
Logging in to [iface: default, target: iqn.2003-01.com.redhat.iscsi-gw:tahir, portal: 192.168.10.38,3260] (multiple)
iscsiadm: Could not login to [iface: default, target: iqn.2003-01.com.redhat.iscsi-gw:tahir, portal: 192.168.10.37,3260].
iscsiadm: initiator reported error (24 - iSCSI login failed due to authorization failure)
iscsiadm: Could not login to [iface: default, target: iqn.2003-01.com.redhat.iscsi-gw:tahir, portal: 192.168.10.38,3260].
iscsiadm: initiator reported error (24 - iSCSI login failed due to authorization failure)
iscsiadm: Could not log into all portals
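One detail worth comparing: the gateway log above shows the Linux initiator
presenting iqn.1994-05.com.redhat:5ef55740c576, while the host entry in gwcli
is iqn.1994-05.com.redhat:rh7-client. The initiator's actual IQN can be checked
with:

cat /etc/iscsi/initiatorname.iscsi

The IQN in that file must match the host entry on the target, since the target
only authorizes initiators whose IQN is listed.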

Can someone give me an idea of what is missing?




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Configuring Ceph using multiple networks

2017-10-07 Thread Kashif Mumtaz


I have successfully installed Luminous on Ubuntu 16.04 using a single network.
 
Now I am trying to install the same setup using two networks (on different
machines):

public network  = 192.168.10.0/24
cluster network = 172.16.50.0/24

Each node has two interfaces, one on the public network and the other on the
cluster network. While creating the initial monitors I am facing the below
error:
 
“Some monitors have still not reached quorum”


 
Below is the /etc/hosts file on each host:

172.16.50.1 mon1
172.16.50.2 mon2
172.16.50.3 mon3
172.16.50.4 osd1
172.16.50.5 osd2
 
This is the ceph.conf file:

mon1:/home/cephadmin/my-cluster# cat ceph.conf
[global]
fsid = 7f0ffa2b-7528-407b-8a63-360741d80939
mon_initial_members = mon1, mon2, mon3
mon_host = 172.16.50.1,172.16.50.2,172.16.50.3
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

# Public and cluster network
public network = 192.168.10.0/24
cluster network = 172.16.50.0/24
# Write an object 2 times
osd pool default size = 2
# 1 for a multi node cluster in a single rack
osd crush chooseleaf type = 1
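For comparison, monitors bind to the public network, so mon_host normally lists
the monitors' public addresses. A sketch of that layout (only 192.168.10.31
for mon1 is confirmed by the log below; the .32/.33 addresses are assumed):

mon_host = 192.168.10.31,192.168.10.32,192.168.10.33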
 

 

 
While searching on the net, I found the same error discussed in the post below:

https://www.spinics.net/lists/ceph-users/msg24603.html

I am observing the same error in my case: during ceph-deploy mon
create-initial, the node names are not resolving properly.
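A quick way to check what each name resolves to on every host (generic
commands, not from the original thread):

getent hosts mon1 mon2 mon3
host mon1

getent hosts consults /etc/hosts and DNS in nsswitch.conf order, while host
queries DNS directly.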


 
Log:

[2017-10-07 00:31:18,975][mon1][DEBUG ] status for monitor: mon.mon1
[2017-10-07 00:31:18,976][mon1][DEBUG ] {
[2017-10-07 00:31:18,976][mon1][DEBUG ]   "election_epoch": 0,
[2017-10-07 00:31:18,976][mon1][DEBUG ]   "extra_probe_peers": [
[2017-10-07 00:31:18,976][mon1][DEBUG ]     "172.16.50.1:6789/0",
[2017-10-07 00:31:18,977][mon1][DEBUG ]     "172.16.50.2:6789/0",
[2017-10-07 00:31:18,977][mon1][DEBUG ]     "172.16.50.3:6789/0"

[2017-10-07 00:31:18,982][mon1][DEBUG ]       "addr": "192.168.10.31:6789/0",
[2017-10-07 00:31:18,982][mon1][DEBUG ]       "name": "mon1",
[2017-10-07 00:31:18,982][mon1][DEBUG ]       "public_addr": "192.168.10.31:6789/0",
[2017-10-07 00:31:18,982][mon1][DEBUG ]       "rank": 0
[2017-10-07 00:31:18,983][mon1][DEBUG ]   },
[2017-10-07 00:31:18,983][mon1][DEBUG ]   {
[2017-10-07 00:31:18,983][mon1][DEBUG ]     "addr": "0.0.0.0:0/1",
[2017-10-07 00:31:18,983][mon1][DEBUG ]     "name": "mon2",
[2017-10-07 00:31:18,983][mon1][DEBUG ]     "public_addr": "0.0.0.0:0/1",
[2017-10-07 00:31:18,983][mon1][DEBUG ]     "rank": 1
[2017-10-07 00:31:18,984][mon1][DEBUG ]   },
[2017-10-07 00:31:18,984][mon1][DEBUG ]   {
[2017-10-07 00:31:18,984][mon1][DEBUG ]     "addr": "0.0.0.0:0/2",
[2017-10-07 00:31:18,984][mon1][DEBUG ]     "name": "mon3",
[2017-10-07 00:31:18,984][mon1][DEBUG ]     "public_addr": "0.0.0.0:0/2",
[2017-10-07 00:31:18,985][mon1][DEBUG ]     "rank": 2

I also configured DNS, as suggested in the above link, but the issue was not
resolved.


 
Can someone help in this regard?


 
I am not sure whether, in a DNS lookup, the node name should resolve to the
node's public IP or its cluster IP.


 
And in the /etc/hosts file, which IP should be used for each node: the cluster
IP or the public IP?
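For what it's worth, since monitors bind to the public network, one plausible
layout is the host names resolving to the public IPs; a sketch (only mon1's
192.168.10.31 is confirmed by the log above, the other addresses are assumed):

192.168.10.31 mon1
192.168.10.32 mon2
192.168.10.33 mon3
192.168.10.34 osd1
192.168.10.35 osd2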


 

 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph luminous repo not working on Ubuntu xenial

2017-10-01 Thread Kashif Mumtaz
Dear,
Thanks for the help. I am able to install on a single node.
Now I am going to install on multiple nodes. Just want to clarify one small
thing: do the Ceph key and Ceph repository need to be added on every node, or
are they required only on the admin node where we execute the ceph-deploy
command?
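In case it is useful, adding the key and repo on a node by hand looks like this
(the release-key URL is the standard one from the Ceph docs):

wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
sudo apt-add-repository 'deb https://download.ceph.com/debian-luminous/ xenial main'
sudo apt update

Note that "ceph-deploy install" normally adds the repo and key on each target
node for you.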

On Friday, September 29, 2017 9:57 AM, Stefan Kooman <ste...@bit.nl> wrote:
 

 Quoting Kashif Mumtaz (kashif.mum...@yahoo.com):
> 
> Dear User,
> I am striving hard to install the Ceph Luminous version on Ubuntu 16.04.3
> (xenial).
> Its repo is available at https://download.ceph.com/debian-luminous/ 
> I added it like sudo apt-add-repository 'deb 
> https://download.ceph.com/debian-luminous/ xenial main'
> # more  sources.list
> deb https://download.ceph.com/debian-luminous/ xenial main

^^ That looks good. 

> It says no package is available. Has anybody been able to install Luminous
> on Xenial using the repo?

Just checkin': you did an "apt update" after adding the repo?

The repo works fine for me. Is the Ceph gpg key installed?

apt-key list |grep Ceph
uid                  Ceph.com (release key) <secur...@ceph.com>

Make sure you have "apt-transport-https" installed (as the repo uses
TLS).

Gr. Stefan


-- 
| BIT BV  http://www.bit.nl/       Kamer van Koophandel 09090351
| GPG: 0xD14839C6                  +31 318 648 688 / i...@bit.nl


   ___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph luminous repo not working on Ubuntu xenial

2017-09-29 Thread Kashif Mumtaz
Dear Stefan,
Thanks for your help. You are right. I was missing "apt update" after adding
the repo.
After doing apt update I am able to install Luminous:

cadmin@admin:~/my-cluster$ ceph --version
ceph version 12.2.1 (3e7492b9ada8bdc9a5cd0feafd42fbca27f9c38e) luminous (stable)

I am not much in practice with Ubuntu; I use CentOS/RHEL only. This time there
is a specific requirement to install it on Ubuntu.
I want to ask one thing.
Two Ceph versions are now available in the repositories:
1 - Jewel, in the Ubuntu updates repository
2 - the manually added Ceph repository
If one package is available in multiple repositories with different versions,
how can I install a specific version?
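For the record, standard apt mechanics cover this (generic apt usage, nothing
Ceph-specific; the exact version string is assumed here, check apt-cache
policy for the real one):

# show every candidate version and the repo it comes from
apt-cache policy ceph

# install an explicit version
sudo apt install ceph=12.2.1-1xenial

# or pin the Ceph repo via /etc/apt/preferences.d/ceph.pref:
#   Package: *
#   Pin: origin "download.ceph.com"
#   Pin-Priority: 1001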







On Friday, September 29, 2017 9:57 AM, Stefan Kooman <ste...@bit.nl> wrote:
 

 Quoting Kashif Mumtaz (kashif.mum...@yahoo.com):
> 
> Dear User,
> I am striving hard to install the Ceph Luminous version on Ubuntu 16.04.3
> (xenial).
> Its repo is available at https://download.ceph.com/debian-luminous/ 
> I added it like sudo apt-add-repository 'deb 
> https://download.ceph.com/debian-luminous/ xenial main'
> # more  sources.list
> deb https://download.ceph.com/debian-luminous/ xenial main

^^ That looks good. 

> It says no package is available. Has anybody been able to install Luminous
> on Xenial using the repo?

Just checkin': you did an "apt update" after adding the repo?

The repo works fine for me. Is the Ceph gpg key installed?

apt-key list |grep Ceph
uid                  Ceph.com (release key) <secur...@ceph.com>

Make sure you have "apt-transport-https" installed (as the repo uses
TLS).

Gr. Stefan


-- 
| BIT BV  http://www.bit.nl/       Kamer van Koophandel 09090351
| GPG: 0xD14839C6                  +31 318 648 688 / i...@bit.nl


   ___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Ceph luminous repo not working on Ubuntu xenial

2017-09-28 Thread Kashif Mumtaz

Dear User,
I am striving hard to install the Ceph Luminous version on Ubuntu 16.04.3
(xenial).
Its repo is available at https://download.ceph.com/debian-luminous/ 
I added it like sudo apt-add-repository 'deb 
https://download.ceph.com/debian-luminous/ xenial main'
# more  sources.list
deb https://download.ceph.com/debian-luminous/ xenial main
It says no package is available. Has anybody been able to install Luminous on
Xenial using the repo?
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com