[ceph-users] Query regarding integrating Ceph with Vcenter/Clustered Esxi hosts.

2015-04-17 Thread Vivek Varghese Cherian
Hi all,

I have a setup where I can launch VMs from a standalone VMware ESXi host
that acts as an iSCSI initiator against a Ceph RBD block device exported
as an iSCSI target.
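
(For anyone wanting to reproduce this: a minimal sketch of one such export
path, using the kernel RBD client plus the LIO target via targetcli, is
below; the image name and IQN are illustrative, not the ones I actually
use.)

# map the RBD image on the gateway box; it shows up as e.g. /dev/rbd0
rbd map rbd/esxi-datastore

# publish the mapped device as an iSCSI LUN through LIO
targetcli /backstores/block create name=esxi-datastore dev=/dev/rbd0
targetcli /iscsi create iqn.2015-04.com.example:esxi-datastore
targetcli /iscsi/iqn.2015-04.com.example:esxi-datastore/tpg1/luns create /backstores/block/esxi-datastore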

When launching VMs from the standalone ESXi host integrated with Ceph, I
am prompted to choose the datastore to launch the VMs on; all I need to do
is pick the iSCSI datastore backed by the Ceph cluster and the VMs get
launched there.

My client's requirement is not just to launch VMs on a standalone VMware
ESXi host integrated with Ceph: he wants a shared-storage environment of
the Virtual SAN kind offered by NetApp, where VMs can be launched on Ceph
storage from clustered ESXi hosts or from any host across a vCenter.

Is this feasible?

Any pointers/references would be most welcome.

Thanks in advance.

Regards,
-- 
Vivek


[ceph-users] Ceph node operating system high availability and osd restoration best practices.

2015-03-09 Thread Vivek Varghese Cherian
Hi,

I have a 4 node Ceph cluster; the operating system used on the nodes is
Ubuntu 14.04.

The cluster currently has 12 OSDs spread across the 4 nodes. One of the
nodes has just been restored after an operating system file system
corruption that made the node, and the OSDs on it, inaccessible to the
rest of the cluster.

I had to re-install the operating system to make the node accessible
again, and I am currently in the process of restoring the OSDs on the
re-installed node.
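
The restore I am attempting looks roughly like the sketch below (this
assumes a ceph-deploy managed cluster and that the OSD data disks survived
intact, only the OS disk being lost; the hostname is illustrative):

# from the admin node: reinstall the ceph packages, push config and keys
ceph-deploy install ppm-ceph-node2
ceph-deploy admin ppm-ceph-node2

# on the rebuilt node: GPT-tagged OSD partitions carry their own metadata,
# so ceph-disk can reactivate them once packages and keys are in place
ceph-disk activate-all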

I have 3 questions:

1) Is there any mechanism to provide node operating system high
availability on a Ceph cluster?

2) Are there any best practices to follow while restoring the OSDs on a
node that has been rebuilt after an operating system crash?

3) Is there any way to check that the data stored on the Ceph cluster is
safe and was replicated to the other 3 nodes while the one node was down?
(A sketch of the kind of checks I have in mind follows below.)
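
A minimal sketch of those checks (the pool name is illustrative):

ceph health detail            # any degraded/undersized placement groups?
ceph osd pool get rbd size    # replica count configured for the pool
ceph pg dump_stuck unclean    # PGs that have not returned to active+clean
ceph osd tree                 # which OSDs are up/in after the rebuild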


Regards,
-- 
Vivek


Re: [ceph-users] Introducing Learning Ceph : The First ever Book on Ceph

2015-02-17 Thread Vivek Varghese Cherian
On Fri, Feb 6, 2015 at 4:23 AM, Karan Singh karan.si...@csc.fi wrote:

 Hello Community Members

 I am happy to introduce the first book on Ceph with the title *Learning Ceph*.

 Many folks from the publishing house, the technical reviewers and I spent
 several months getting this book compiled and published.

 Finally the book is up for sale; I hope you will like it and will surely
 learn a lot from it.

 Amazon :
 http://www.amazon.com/Learning-Ceph-Karan-Singh/dp/1783985623/ref=sr_1_1?s=books&ie=UTF8&qid=1423174441&sr=1-1&keywords=ceph
 Packtpub : https://www.packtpub.com/application-development/learning-ceph



Hi Karan,

It would have been great if you could release the book under a Creative
Commons or another free/open source license, so that people like me can
download and read it. After all, Ceph is open source; I don't see why a
book on Ceph should not follow the same licensing pattern as Ceph itself.

Regards,
-- 
Vivek Varghese Cherian


Re: [ceph-users] Unable to download files from ceph radosgw node using openstack juno swift client.

2014-12-16 Thread Vivek Varghese Cherian
Hi,

root@ppm-c240-ceph3:/var/run/ceph# ceph --admin-daemon /var/run/ceph/ceph-osd.11.asok config show | less | grep rgw_max_chunk_size
rgw_max_chunk_size: 524288,
root@ppm-c240-ceph3:/var/run/ceph#

That value is 524288 bytes, i.e. 512 KB, which is below 4 MB.
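
A quick unit check with plain shell arithmetic:

echo $((524288 / 1024))       # 512 KB
echo $((4 * 1024 * 1024))     # 4194304 bytes in 4 MB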


Regards,
-- 
Vivek Varghese Cherian


Re: [ceph-users] Unable to start radosgw

2014-12-15 Thread Vivek Varghese Cherian
Hi,



 Do I need to overwrite the existing .db files and .txt file in
 /var/lib/nssdb on the radosgw host  with the ones copied from
 /var/ceph/nss on the Juno node ?


 Yeah - worth a try (we want to rule out any certificate mis-match errors).

 Cheers

 Mark



I have manually copied the keys from the directory /var/ceph/nss on the
Juno node to /var/ceph/nss on my radosgw node, and I have also made the
following changes to my ceph.conf:

#rgw keystone url = 10.x.x.175:35357
rgw keystone url = 10.x.x.175:5000
rgw keystone admin token = password123
rgw keystone accepted roles = Member, admin
rgw keystone token cache size = 1
rgw keystone revocation interval = 15 * 60
rgw s3 auth use keystone = true
#nss db path = /var/lib/nssdb
nss db path = /var/ceph/nss

I have restarted the radosgw and it works.

ceph@ppm-c240-ceph3:~$ ps aux | grep rados
root 19833  0.2  0.0 10324668 33288 ?  Ssl  Dec12   7:30 /usr/bin/radosgw -n client.radosgw.gateway
ceph 28101  0.0  0.0  10464   916 pts/0  S+  02:25   0:00 grep --color=auto rados
ceph@ppm-c240-ceph3:~$
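
For reference, the key copy itself amounted to something like the
following, run on the radosgw node (the Juno node's hostname is as in my
setup):

mkdir -p /var/ceph/nss
scp -r root@ppm-dc-c3sv3-ju:/var/ceph/nss/* /var/ceph/nss/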


IMHO, the document (http://ceph.com/docs/master/radosgw/keystone/) should
explicitly state that the /var/ceph/nss directory should be created on the
radosgw node and not on the OpenStack node.

I had a discussion with Loïc Dachary on IRC, and at his request I have
filed a bug against the documentation.

The ticket url is http://tracker.ceph.com/issues/10305


Btw, thanks Mark for the pointers.


Regards,
---
Vivek Varghese Cherian


[ceph-users] Unable to download files from ceph radosgw node using openstack juno swift client.

2014-12-15 Thread Vivek Varghese Cherian
-swiftclient-2.3.0
10.81.83.175 - - [16/Dec/2014:01:09:20 -0500] GET /swift/v1/AUTH_25bb0caaff834efdafa1c1fcbb6aaf93?format=json HTTP/1.1 500 719 - python-swiftclient-2.3.0
10.81.83.175 - - [16/Dec/2014:01:09:50 -0500] HEAD /swift/v1/AUTH_25bb0caaff834efdafa1c1fcbb6aaf93 HTTP/1.1 500 189 - python-swiftclient-2.3.0
10.81.83.175 - - [16/Dec/2014:01:10:20 -0500] HEAD /swift/v1/AUTH_25bb0caaff834efdafa1c1fcbb6aaf93 HTTP/1.1 500 189 - python-swiftclient-2.3.0
10.81.83.175 - - [16/Dec/2014:01:10:50 -0500] GET /swift/v1/AUTH_25bb0caaff834efdafa1c1fcbb6aaf93?format=json HTTP/1.1 500 719 - python-swiftclient-2.3.0


The radosgw log is as follows:

root@ppm-c240-ceph3:/var/log/radosgw# tail -f ceph-client.admin

2014-12-09 11:39:19.862854 7fdbfa1257c0  0 librados: client.admin initialization error (2) No such file or directory
2014-12-09 11:39:19.863429 7fdbfa1257c0 -1 Couldn't init storage provider (RADOS)
2014-12-09 11:41:25.894934 7ffa8fe087c0  0 ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3), process radosgw, pid 5843
2014-12-09 11:41:25.922673 7ffa8fe087c0 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication
2014-12-09 11:41:25.922807 7ffa8fe087c0  0 librados: client.admin initialization error (2) No such file or directory
2014-12-09 11:41:25.924901 7ffa8fe087c0 -1 Couldn't init storage provider (RADOS)
2014-12-09 11:42:12.110334 7f145f77d7c0  0 ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3), process radosgw, pid 5862
2014-12-09 11:42:12.139115 7f145f77d7c0 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication
2014-12-09 11:42:12.139215 7f145f77d7c0  0 librados: client.admin initialization error (2) No such file or directory
2014-12-09 11:42:12.141559 7f145f77d7c0 -1 Couldn't init storage provider (RADOS)
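
The "missing keyring" entries suggest radosgw cannot find a key for the
user it starts as; for completeness, these are the checks I plan to run
(the paths are the Ceph defaults):

ls -l /etc/ceph/ceph.client.admin.keyring   # is the admin keyring on the node?
ceph auth list                              # is client.radosgw.gateway defined?
ceph -n client.radosgw.gateway -s           # can the gateway user reach the cluster?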


Any pointers as to the cause of the error when downloading files from the
Ceph cluster using the OpenStack Juno swift client would be highly
appreciated.


Regards,
---
Vivek Varghese Cherian


Re: [ceph-users] Unable to start radosgw

2014-12-10 Thread Vivek Varghese Cherian
Hi,


root@ppm-c240-ceph3:~# /usr/bin/radosgw -n client.radosgw.gateway -d --log-to-stderr
 2014-12-09 12:51:31.410944 7f073f6457c0  0 ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3), process radosgw, pid 5958
 common/ceph_crypto.cc: In function 'void ceph::crypto::init(CephContext*)' thread 7f073f6457c0 time 2014-12-09 12:51:31.412682
 common/ceph_crypto.cc: 54: FAILED assert(s == SECSuccess)
   ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3)
   1: (()+0x293ce8) [0x7f073e797ce8]
   2: (common_init_finish(CephContext*, int)+0x10) [0x7f073e76afa0]
   3: (main()+0x340) [0x4665a0]
   4: (__libc_start_main()+0xf5) [0x7f073c932ec5]
   5: /usr/bin/radosgw() [0x4695c7]
   NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
 2014-12-09 12:51:31.413544 7f073f6457c0 -1 common/ceph_crypto.cc: In function 'void ceph::crypto::init(CephContext*)' thread 7f073f6457c0 time 2014-12-09 12:51:31.412682
 common/ceph_crypto.cc: 54: FAILED assert(s == SECSuccess)




 This looks like it could be failing to talk to Keystone via SSL - have you
 set up Keystone to use SSL? If so you'll need the converted certs copied to
 /var/lib/nssdb on your Radosgw host (see the bottom of
 http://ceph.com/docs/master/radosgw/keystone/). If you have already done
 this...then apologies, but it's worth double checking!

 Cheers

 Mark



I have followed these steps from
http://ceph.com/docs/master/radosgw/keystone/ on my Juno node:

mkdir /var/ceph/nss

openssl x509 -in /etc/keystone/ssl/certs/ca.pem -pubkey | \
certutil -d /var/ceph/nss -A -n ca -t TCu,Cu,Tuw

openssl x509 -in /etc/keystone/ssl/certs/signing_cert.pem -pubkey | \
certutil -A -d /var/ceph/nss -n signing_cert -t P,P,P
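
To verify the import, the contents of the NSS database can be listed back
out; both nicknames should show up:

certutil -d /var/ceph/nss -L    # expect entries named 'ca' and 'signing_cert'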


Do you suggest that I manually copy the self-signed certificates
(generated on Dec 4, 2014) from /var/ceph/nss on the Juno node to
/var/lib/nssdb on the radosgw host?

Btw, I can already see the following files (dated Sep 24, 2014) in
/var/lib/nssdb on the radosgw host.

root@ppm-c240-ceph3:/var/lib/nssdb# ls -la
total 52
drwxr-xr-x  2 root root  4096 Oct 29 03:17 .
drwxr-xr-x 44 root root  4096 Nov  6 05:06 ..
-rw-r--r--  1 root root  9216 Sep 24 08:25 cert9.db
-rw-r--r--  1 root root 11264 Sep 24 08:25 key4.db
-rw-r--r--  1 root root   449 Sep 24 08:25 pkcs11.txt
-rw-r--r--  1 root root 16384 Sep 24 08:25 secmod.db
root@ppm-c240-ceph3:/var/lib/nssdb#

Do I need to overwrite the existing .db files and .txt file in
/var/lib/nssdb on the radosgw host with the ones copied from /var/ceph/nss
on the Juno node?

Regards,
-- 
Vivek Varghese Cherian


[ceph-users] Unable to start radosgw

2014-12-09 Thread Vivek Varghese Cherian
 mds_migrator
   0/ 1 buffer
   0/ 1 timer
   0/ 1 filer
   0/ 1 striper
   0/ 1 objecter
   0/ 5 rados
   0/ 5 rbd
   0/ 5 journaler
   0/ 5 objectcacher
   0/ 5 client
   0/ 5 osd
   0/ 5 optracker
   0/ 5 objclass
   1/ 3 filestore
   1/ 3 keyvaluestore
   1/ 3 journal
   0/ 5 ms
   1/ 5 mon
   0/10 monc
   1/ 5 paxos
   0/ 5 tp
   1/ 5 auth
   1/ 5 crypto
   1/ 1 finisher
   1/ 5 heartbeatmap
   1/ 5 perfcounter
  20/20 rgw
   1/ 5 javaclient
   1/ 5 asok
   1/ 1 throttle
  -2/-2 (syslog threshold)
  99/99 (stderr threshold)
  max_recent 1
  max_new 1000
  log_file
--- end dump of recent events ---
Aborted (core dumped)
root@ppm-c240-ceph3:~#
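
To capture more context around the abort, a foreground run with the rgw
debug level raised should show what leads up to the failed assert (generic
ceph debug flags):

/usr/bin/radosgw -n client.radosgw.gateway -d --debug-rgw 20 --debug-ms 1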


Any pointers as to why this is happening would be highly appreciated.


Regards,
-- 
Vivek Varghese Cherian


[ceph-users] Integration of Ceph Object Gateway(radosgw) with OpenStack Juno Keystone

2014-12-06 Thread Vivek Varghese Cherian
Hi,

I am trying to integrate OpenStack Juno Keystone with the Ceph Object
Gateway (radosgw).

I want to use Keystone as the users authority: a user that Keystone
authorizes to access the gateway will also be created on the radosgw, and
tokens that Keystone validates will be considered valid by the rados
gateway.


I have deployed a 4 node Ceph cluster running on Ubuntu 14.04:

Host1: ppm-c240-admin.xyz.com (10.x.x.123)

Host2: ppm-c240-ceph1.xyz.com (10.x.x.124)

Host3: ppm-c240-ceph2.xyz.com (10.x.x.125)

Host4: ppm-c240-ceph3.xyz.com (10.x.x.126)


My ceph -w output is as follows,

ceph@ppm-c240-admin:~/my-cluster$ ceph -w

cluster df18a088-2a70-43f9-b07f-ce8cf7c3349c
health HEALTH_OK
monmap e1: 3 mons at
{ppm-c240-admin=10.x.x.123:6789/0,ppm-c240-ceph1=10.x.x.124:6789/0,ppm-c240-ceph2=10.x.x.125:6789/0},

election epoch 20, quorum 0,1,2
ppm-c240-admin,ppm-c240-ceph1,ppm-c240-ceph2
osdmap e92: 12 osds: 12 up, 12 in
pgmap v461: 704 pgs, 4 pools, 0 bytes data, 0 objects
442 MB used, 44622 GB / 44623 GB avail
704 active+clean
2014-11-04 12:24:37.126783 mon.0 [INF] pgmap v461: 704 pgs: 704
active+clean; 0 bytes data, 442 MB used, 44622 GB / 44623 GB avail

The host ppm-c240-ceph3.xyz.com (10.x.x.126) is running the Ceph Object
Gateway (radosgw).

I can see the radosgw up and running,

ppmuser@ppm-c240-ceph3:~$ ll /var/run/ceph
total 0
drwxr-xr-x  3 root root 140 Dec  4 11:33 ./
drwxr-xr-x 21 root root 800 Dec  5 06:51 ../
srwxr-xr-x  1 root root   0 Nov  3 11:52 ceph-osd.10.asok=
srwxr-xr-x  1 root root   0 Nov  3 11:54 ceph-osd.11.asok=
srwxr-xr-x  1 root root   0 Nov  3 11:49 ceph-osd.9.asok=
srwxrwxrwx  1 root root   0 Nov  7 09:50 ceph.radosgw.gateway.fastcgi.sock=
drwxr-xr-x  2 root root  40 Apr 28  2014 radosgw-agent/
ppmuser@ppm-c240-ceph3:~$


root@ppm-c240-ceph3:~# ceph df detail
GLOBAL:
    SIZE   AVAIL  RAW USED %RAW USED OBJECTS
    44623G 44622G 496M     0         43
POOLS:
    NAME         ID CATEGORY USED %USED MAX AVAIL OBJECTS DIRTY READ WRITE
    data         0  -        0    0     14873G    0       0     0    0
    metadata     1  -        0    0     14873G    0       0     0    0
    rbd          2  -        0    0     14873G    0       0     0    0
    pg_pool      3  -        0    0     14873G    0       0     0    0
    .rgw.root    4  -        840  0     14873G    3       3     2356 3
    .rgw.control 5  -        0    0     14873G    8       8     0    0
    .rgw         6  -        0    0     14873G    0       0     0    0
    .rgw.gc      7  -        0    0     14873G    32      32    121k 83200
    .users.uid   8  -        0    0     14873G    0       0     862  220
    .users       9  -        0    0     14873G    0       0     0    84
    .users.email 10 -        0    0     14873G    0       0     0    42
    .users.swift 11 -        0    0     14873G    0       0     0    2
root@ppm-c240-ceph3:~#


I am following the steps in http://ceph.com/docs/master/radosgw/keystone/
for the radosgw integration with Keystone.

The /etc/ceph/ceph.conf snippet on all 4 Ceph nodes is as follows:

rgw keystone url = 10.x.x.175:35357
rgw keystone admin token = xyz123
rgw keystone accepted roles = Member, admin
rgw keystone token cache size = 1
rgw keystone revocation interval = 15 * 60
rgw s3 auth use keystone = true
nss db path = /var/lib/nssdb

Here 10.x.x.175 is the IP address of the Keystone server (a single-node
Juno install on Ubuntu 14.04).
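
For completeness: after any ceph.conf change the gateway has to be
restarted to pick it up; on my Ubuntu 14.04 install that is the packaged
init script (the name may vary with the packaging):

sudo /etc/init.d/radosgw restart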

Keystone itself is pointing to the Ceph Object Gateway as an object-storage
endpoint:

ppmuser@ppm-dc-c3sv3-ju:~$ keystone service-create --name swift --type object-store --description "Object Storage"

ppmuser@ppm-dc-c3sv3-ju:~$ keystone endpoint-create --service-id a70fbbc539434fa5bf8c0977e36161a4 --publicurl http://ppm-c240-ceph3.xyz.com/swift/v1 --internalurl http://ppm-c240-ceph3.xyz.com/swift/v1 --adminurl http://ppm-c240-ceph3.xyz.com/swift/v1
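
To double-check the registration, the service and its endpoints can be
listed back out with the same Keystone v2 CLI; the object-store row should
carry the three /swift/v1 URLs:

ppmuser@ppm-dc-c3sv3-ju:~$ keystone service-list
ppmuser@ppm-dc-c3sv3-ju:~$ keystone endpoint-list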


I have the following queries:

1) I am following the steps in
http://docs.openstack.org/juno/install-guide/install/apt/content/swift-install-controller-node.html.
Do I need to create a swift user in Keystone on my OpenStack node?

2) Once the swift user is created, do I have to do a $ keystone
user-role-add --user swift --tenant service --role admin?

3) I did not find any documents on how to proceed further and test the
integrated setup, so any pointers would be most welcome (the rough smoke
test I have in mind is sketched below).
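
The smoke test would drive the gateway through the swift CLI pointed at
Keystone, something like this (username, password and tenant here are
placeholders, not my real credentials):

swift --os-auth-url http://10.x.x.175:5000/v2.0 \
      --os-tenant-name service --os-username swift --os-password secret \
      stat
# then the same with: list, upload <container> <file>, download <container> <file>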

PS: I am cross-posting this to the OpenStack and Ceph lists because it
involves an integration of both.

Regards,
---
Vivek Varghese Cherian

Re: [ceph-users] Integration of Ceph Object Gateway(radosgw) with OpenStack Juno Keystone

2014-12-06 Thread Vivek Varghese Cherian
Hi,

I forgot to mention in my previous mail that my Ceph version is Firefly.

Regards,
-- 
Vivek Varghese Cherian