Hi All,
I am following the manual steps to create an OSD node.
While executing the command below, I am facing the error shown:
#ceph-osd -i 1 --mkfs --mkkey
2014-05-14 05:04:12.097585 7f91c99007c0 -1 ** ERROR: error creating empty
object store in /var/lib/ceph/osd/-: (2) No such file or directory
But
Hi Yehuda/All,
I have configured the Rados Gateway as suggested in the Ceph tutorial to
integrate with Keystone, and my ceph.conf looks like below.
[client.radosgw.gateway]
host = mon
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw socket path = /tmp/radosgw.sock
Hi,
I have configured ceph.conf as below for integration of radosgw with
Keystone.
[client.radosgw.gateway]
host = mon
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw socket path = /tmp/radosgw.sock
log file = /var/log/ceph/radosgw.log
rgw keystone url =
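For reference, the Keystone-related options in the radosgw config reference look roughly like the sketch below (the URL, token, and roles are placeholders, not values from this thread):

```ini
# Placeholder values -- substitute your own Keystone endpoint and token
rgw keystone url = http://keystone-host:35357
rgw keystone admin token = {keystone admin token}
rgw keystone accepted roles = Member, admin
rgw keystone token cache size = 500
```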
First of all, you need to create the /var/lib/ceph/osd/ceph-0 folder. Then try again.
Thanks,
Srinivas.
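The directory ceph-osd expects follows the pattern /var/lib/ceph/osd/{cluster}-{id}; a small sketch, assuming the default cluster name "ceph" and the osd id 1 from the failing command:

```shell
# ceph-osd -i <id> --mkfs expects its data directory to exist already.
# Path layout (assuming the default cluster name "ceph"):
cluster=ceph
osd_id=1
osd_data="/var/lib/ceph/osd/${cluster}-${osd_id}"
echo "$osd_data"
# On the real node (needs root):
#   mkdir -p "$osd_data"
#   ceph-osd -i "$osd_id" --mkfs --mkkey
```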
On Tue, May 6, 2014 at 5:41 PM, Sakhi Hadebe shad...@csir.co.za wrote:
Hi,
I am doing a fresh ceph installation and I am hit by the error below
when restarting the ceph service:
root@nodeA#
Hi All,
I would like to share info on how to set up RADOS Gateway with the latest
Apache 2.4.x versions.
Please find below additional steps, which are updates to the Ceph
documentation.
Steps:
1) If you are planning to use Apache 2.4 versions, you cannot build the FastCGI
module
Hi Yehuda,
I have configured the cluster and its health status is shown below:

root@mon:/etc/ceph# ceph status
    cluster a7f64266-0894-4f1e-a635-d0aeaca0e993
     health HEALTH_WARN 110 pgs stuck unclean
     monmap e1: 1 mons at {mon=192.168.0.102:6789/0
Hi All,
I would like to use lighttpd instead of Apache for the rados gateway
configuration, but I am facing issues with the syntax for rgw.conf.
Could you please share details on how I can prepare rgw.conf for lighttpd?
Please also suggest a version of mod_fastcgi for Apache version 2.4.3.
Thanks,
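In case it helps as a starting point, a lighttpd equivalent of the Apache FastCGI hookup might look like the sketch below (untested; only the socket path is taken from the ceph.conf snippets in this thread, everything else is an assumption):

```
# /etc/lighttpd/conf.d/rgw.conf -- untested sketch; socket path matches
# "rgw socket path = /tmp/radosgw.sock", other values are hypothetical
server.modules += ( "mod_rewrite", "mod_fastcgi" )

fastcgi.server = ( "/" =>
  (( "socket"       => "/tmp/radosgw.sock",
     "check-local"  => "disable",
     "disable-time" => 0
  ))
)
```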
Hi,
My monitor node and OSD nodes are running fine, but my cluster health is
stale+active+clean:
root@node1:/etc/ceph# ceph status
cluster a7f64266-0894-4f1e-a635-d0aeaca0e993
health HEALTH_WARN 2856 pgs stale; 2856 pgs stuck stale
monmap e1: 1 mons at {mon=192.168.0.102:6789/0},
Hi All,
I was able to create a cluster with 1 monitor node and 2 OSD nodes on our
proprietary distribution. Ceph health is OK and active.
root@mon:/etc/ceph# ceph -s
    cluster a7f64266-0894-4f1e-a635-d0aeaca0e993
     health HEALTH_OK
     monmap e1: 1 mons at {mon=192.168.0.102:6789/0
(\/) in the secret key. I see that your secret key has slashes.
Perhaps generate a new gateway user, specifying keys using:
--access-key= and --secret=
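One way to guarantee a slash-free secret is to generate an alphanumeric one yourself and pass it in explicitly (the uid, display name, and access key below are hypothetical):

```shell
# Generate a 20-character secret containing only A-Za-z0-9, so no '/' or '\'
# can end up in it (base64-generated secrets may contain '/' and '+').
secret=$(head -c 100 /dev/urandom | base64 | tr -dc 'A-Za-z0-9' | head -c 20)
echo "$secret"
# Then create the gateway user with explicit keys (needs a running cluster):
#   radosgw-admin user create --uid=gateway --display-name="rgw user" \
#     --access-key=GATEWAYACCESSKEY --secret="$secret"
```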
On 04/23/2014 02:30 PM, Srinivasa Rao Ragolu wrote:
Hi All,
I was able to create a cluster with 1 monitor node and 2 OSD nodes
this issue.
Thanks,
Srinivas.
On Wed, Apr 23, 2014 at 7:13 PM, Srinivasa Rao Ragolu srag...@mvista.comwrote:
Even after creating a new secret key, I am facing the issue. Could you
please let me know if there are any other mistakes?
Thanks,
Srinivas.
On Wed, Apr 23, 2014 at 7:03 PM, Peter ptier
some of these settings under your gateway config in ceph.conf:
http://ceph.com/docs/master/radosgw/config-ref/#swift-settings
or
rgw dns name =
On 04/23/2014 02:57 PM, Srinivasa Rao Ragolu wrote:
My Swift command-line outputs are as below:
root@mon:/etc/ceph# radosgw-admin user info
Hi Yehuda and all,
I am using Apache version 2.4.3, and with this version I could not
load mod_fastcgi version 2.4.6.
So I have used Apache without FastCGI, and added ServerName and
mod_rewrite.so entries to /etc/apache2/conf/httpd.conf.
I was able to run apache2 and radosgw
Hi All,
I am creating a Ceph rados gateway on the monitor node of my cluster. When I
try to add the created keyring to the cluster, I am facing the issue below.
Please help me resolve it.
#sudo ceph -k /etc/ceph/ceph.client.admin.keyring auth add
client.radosgw.gateway -i
Issue got resolved. Thanks
On Mon, Apr 21, 2014 at 6:56 PM, Srinivasa Rao Ragolu srag...@mvista.comwrote:
Hi All,
I am creating a Ceph rados gateway on the monitor node of my cluster. When I
try to add the created keyring to the cluster, I am facing the issue below.
Please help me resolve it.
Hi All,
I was able to successfully create a Ceph cluster on our proprietary
distribution with manual ceph commands.
*ceph.conf*
[global]
fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
mon initial members = mon
mon host = 192.168.0.102
public network = 192.168.0.0/22
auth cluster required = cephx
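For comparison, the manual-deployment docs pair that auth line with two siblings; a minimal sketch of the rest of a [global] section (the pool-default values are illustrative assumptions for a small 2-OSD cluster, not from this mail):

```ini
auth service required = cephx
auth client required = cephx
# Illustrative assumption for a 2-OSD test cluster:
osd pool default size = 2
osd pool default min size = 1
```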
Hi all,
I have created a Ceph cluster with 1 monitor node and 2 OSD nodes. Cluster
health is OK and active.
My deployment is on our private distribution with Linux kernel 3.10.33, and
the Ceph version is 0.72.2.
I was able to create an image with the command rbd create sample --size 200,
and inserted rbd.ko.
192 active+clean
client io 13 B/s rd, 0 op/s
After this, the monitor daemon gets killed and needs to be started again.
Thanks,
Srinivas.
On Wed, Apr 16, 2014 at 3:18 PM, Wido den Hollander w...@42on.com wrote:
On 04/16/2014 11:41 AM, Srinivasa Rao Ragolu wrote:
Hi all,
I have created
:59 GMT+04:00 Srinivasa Rao Ragolu srag...@mvista.com:
Hi Wido,
The output of the info command is given below:
root@mon:/etc/ceph# rbd info sample
rbd: error opening image sample: (95) Operation not supported
2014-04-16 09:57:24.575279 7f661c6e5780 -1 librbd: Error listing
snapshots: (95) Operation
:
On Wed, Apr 16, 2014 at 2:13 PM, Srinivasa Rao Ragolu
srag...@mvista.com wrote:
Thanks. Please see the output of above command
root@mon:/etc/ceph# rbd ls -l
rbd: error opening blk2: (95) Operation not supported
2014-04-16 10:12:13.947625 7f3a2a0c7780 -1 librbd: Error listing snapshots
root@node1:/etc/ceph# ceph daemon osd.0 config get osd_class_dir
{ "osd_class_dir": "/usr/lib64/rados-classes" }
Thanks,
Srinivas.
On Wed, Apr 16, 2014 at 4:37 PM, Ilya Dryomov ilya.dryo...@inktank.comwrote:
On Wed, Apr 16, 2014 at 2:45 PM, Srinivasa Rao Ragolu
srag...@mvista.com wrote:
root
.
On Wed, Apr 16, 2014 at 5:12 PM, Srinivasa Rao Ragolu srag...@mvista.comwrote:
root@node1:/etc/ceph# ceph daemon osd.0 config get osd_class_dir
{ "osd_class_dir": "/usr/lib64/rados-classes" }
Thanks,
Srinivas.
On Wed, Apr 16, 2014 at 4:37 PM, Ilya Dryomov ilya.dryo...@inktank.comwrote:
On Wed
Yes Ilya. From that command output, I assumed this must be a
rados-classes issue. After that I copied it to the exact location and
restarted all the nodes.
Thanks,
Srinivas.
On Wed, Apr 16, 2014 at 5:50 PM, Ilya Dryomov ilya.dryo...@inktank.comwrote:
On Wed, Apr 16, 2014 at 4:00 PM, Srinivasa Rao
Hi All,
On our private distribution, I have compiled Ceph and was able to install
it.
Now I have added /etc/ceph/ceph.conf as
[global]
fsid = e5a14ff4-148a-473a-8721-53bda59c74a2
mon initial members = mon
mon host = 192.168.0.102
auth cluster required = cephx
auth service required = cephx
Correction: the error is as below:
*Error EINVAL: entity osd.1 exists but key does not match*
On Tue, Apr 8, 2014 at 7:51 PM, Srinivasa Rao Ragolu srag...@mvista.comwrote:
Hi,
I am trying to setup ceph cluster without using ceph-deploy.
Followed the link http://ceph.com/docs/master/install/manual
Hi,
I am trying to setup ceph cluster without using ceph-deploy.
Followed the link http://ceph.com/docs/master/install/manual-deployment/
I was successfully able to create the monitor node and the results are as expected.
I have copied *ceph.conf* and *ceph.client.admin.keyring* from the monitor node
to the OSD
Hi All,
I have built a distribution using Yocto and have written bitbake recipes for
ceph and ceph-deploy, deployed into the same distribution. But...
While trying to create the monitor node, I am getting "Unsupported platform" as
below. Please suggest how I can use ceph and ceph-deploy independent of
The utilities you specified are for Ubuntu distributions, so there is no need
for them.
First restart httpd and check whether you can reach the HTTP endpoint of your
fully qualified domain name, like http://{fqdn}:80.
If it works, then add rgw.conf to the /etc/httpd/conf.d folder and restart
httpd. Now do the same
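For reference, an rgw.conf dropped into /etc/httpd/conf.d typically looks something like the sketch below (the ServerName and paths are placeholders; FastCgiExternalServer requires mod_fastcgi to be loaded, which, as discussed elsewhere in this thread, is a problem on Apache 2.4):

```apache
<VirtualHost *:80>
    ServerName gateway.example.com
    DocumentRoot /var/www

    RewriteEngine On
    RewriteRule ^/(.*) /s3gw.fcgi?%{QUERY_STRING} [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]

    # mod_fastcgi: hand requests to the radosgw socket
    FastCgiExternalServer /var/www/s3gw.fcgi -socket /tmp/radosgw.sock
</VirtualHost>
```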
Please check the kernel version. Only kernel versions 3.10 and above support
creating format type 2 images.
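That check can be scripted; the helper below is a sketch (the function name is made up) that compares a kernel version string's major.minor against the 3.10 minimum given above:

```shell
# Returns success if a kernel version string is >= 3.10, the minimum the
# advice above gives for format type 2 rbd images.
kernel_at_least_3_10() {
  major=${1%%.*}
  rest=${1#*.}
  minor=${rest%%.*}
  [ "$major" -gt 3 ] || { [ "$major" -eq 3 ] && [ "$minor" -ge 10 ]; }
}

kernel_at_least_3_10 "$(uname -r)" && echo "format-2 images can be used here"
```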
On Tue, Mar 11, 2014 at 7:16 PM, Kasper Dieter dieter.kas...@ts.fujitsu.com
wrote:
When using rbd create ... --image-format 2, in some cases this command is
rejected by
EINVAL with
only the Ceph Userland services should be involved, shouldn't it ?
-Dieter
BTW the kernel version on the nodes hosting the OSDs processes is
2.6.32-358.el6.x86_64
but I can also boot with a 3.10.32 kernel.
On Tue, Mar 11, 2014 at 02:57:05PM +0100, Srinivasa Rao Ragolu wrote
Saravanan,
Please activate the OSDs which are running on specific nodes from the monitor
node, like below:
# ceph-deploy osd activate ceph-node1:sdb1
Then restart the ceph service on the same node:
#sudo service ceph restart
A simple check: your OSDs will be mounted on the respective nodes.
Please see
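The mount check can be expressed as a tiny helper; this is a sketch (the function name is made up), and on a live OSD node you would feed it /proc/mounts:

```shell
# Hypothetical helper: given the text of a mount table, report whether any
# OSD data directory is mounted (the "simple check" described above).
has_osd_mount() {
  printf '%s\n' "$1" | grep -q '/var/lib/ceph/osd/'
}

# On a live OSD node you would run: has_osd_mount "$(cat /proc/mounts)"
```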
Hi Karan,
First of all, many thanks for your blog on OpenStack integration with Ceph.
I was able to integrate OpenStack Cinder with Ceph successfully and
attach volumes to running VMs.
But I am facing an issue with the Glance service while uploading an image, as
shown below.
Hi Yehuda/Joseph/Any,
My ceph object storage cluster setup is node1(OSD1), node2(OSD2) and
monitor(MON).
From any client I can access the cluster if I copy ceph.conf and
ceph.client.keyring. Ceph HEALTH is OK and active+clean.
My requirement is to run Swift APIs initially and later
on a separate host outside the cluster' is a right
practice. And I can see it running:
$ ps -ef|grep -i rado
root 11004 1 0 Feb19 ?00:17:57 /usr/bin/radosgw -n
client.radosgw.gateway
On Feb 26, 2014, at 6:07 AM, Srinivasa Rao Ragolu srag...@mvista.com
wrote:
Hi Yehuda/Joseph/Any
else: for the Swift API, what is the auth URL? The
command swift -A already works fine for me. I can't find the Swift
auth URL on the doc site.
From: Srinivasa Rao Ragolu srag...@mvista.com
Date: Thursday, February 20, 2014 10:05 PM
To: Microsoft Office User larry@disney.com
Cc
Please create the /var/lib/ceph/{osd,mon,mds,bootstrap-osd} folders.
Be in the /etc/ceph directory. Execute ceph-deploy gatherkeys on the
monitor node. Then you should be able to see the below keyrings in /etc/ceph:
-rw-r--r-- 1 root root 72 Oct 25 16:33 ceph.bootstrap-mds.keyring
-rw-r--r-- 1
Please cross-verify by following this blog, written in a very detailed manner:
http://karan-mj.blogspot.in/2013/12/ceph-storage-part-2.html
and
http://karan-mj.blogspot.in/2013/12/ceph-installation-part-2.html
It will definitely help you resolve the issue. Please follow every
step mentioned on your
Yes Sahana,
*First of all, uninstall the ceph packages from your node.*
*Then:*
*Approach for rpm-based distros:*
Just open /etc/yum.repos.d/ceph.repo and
replace {ceph-stable-release} with emperor and {distro} with your rpm-based
distro:
baseurl=http://ceph.com/rpm-{ceph-stable-release}/{distro}/noarch
url) configurable?
On 2/19/14 7:42 AM, Yehuda Sadeh yeh...@inktank.com wrote:
On Wed, Feb 19, 2014 at 2:37 AM, Srinivasa Rao Ragolu
srag...@mvista.com wrote:
Hi all,
I have set up the cluster successfully and am using one node to set up the
rados gateway.
The machine is Fedora 19 (all nodes).
Steps I