My PC had problems with the quick install, so I followed the Installation (Manual)
guide, but when I reached the step "populate the monitor daemon(s) with the
monitor map and keyring", an error occurred and the output is:
IO error: /var/lib/ceph/mon/ceph-node1/store.db/LOCK: No such file or directory
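Not from the original message, but that LOCK error usually means the monitor's data directory was never populated. A hedged sketch of the manual-deployment step, assuming the hostname is node1 and the monmap/keyring files from the earlier guide steps:

```shell
# Sketch, assuming hostname "node1" and the /tmp/monmap and
# /tmp/ceph.mon.keyring files produced by the earlier manual-deployment steps.
mon_dir=/var/lib/ceph/mon/ceph-node1

# only attempt this on a node where the ceph-mon binary actually exists
if command -v ceph-mon >/dev/null 2>&1; then
  sudo mkdir -p "$mon_dir"
  sudo ceph-mon --mkfs -i node1 \
    --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
fi
```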
I was building a small test cluster and noticed a difference with trying
to rbd map depending on whether the cluster was built using fedora or
CentOS.
When I used CentOS osds, and tried to rbd map from arch linux or fedora,
I would get rbd: add failed: (34) Numerical result out of range. It
On Mon, Jun 9, 2014 at 11:48 AM, lists+c...@deksai.com wrote:
I was building a small test cluster and noticed a difference with trying
to rbd map depending on whether the cluster was built using fedora or
CentOS.
When I used CentOS osds, and tried to rbd map from arch linux or fedora,
I
Hi all,
I am adding a new ceph-data host, but
#ceph -s -k /etc/ceph/ceph.client.admin.keyring
2014-06-09 17:39:51.686082 7fade4f14700 0 librados: client.admin
authentication error (1) Operation not permitted
Error connecting to cluster: PermissionError
my ceph.conf:
[global]
auth cluster
I solved this by exporting the key via ceph auth export... :D
Regarding the question above: I had been using a key in the old format version.
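For anyone hitting the same "Operation not permitted" error, a hedged sketch of the fix described above (re-exporting the admin key so the keyring on disk is in the current format):

```shell
# Sketch of the self-reported fix: re-export the client.admin key and
# verify the cluster is reachable with it.
keyring=/etc/ceph/ceph.client.admin.keyring

# only meaningful on a node with a working ceph CLI and monitor quorum
if command -v ceph >/dev/null 2>&1; then
  ceph auth export client.admin -o "$keyring"
  ceph -s -k "$keyring"
fi
```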
On 06/09/2014 05:44 PM, Ta Ba Tuan wrote:
Hi all,
I am adding a new ceph-data host, but
#ceph -s -k /etc/ceph/ceph.client.admin.keyring
2014-06-09 17:39:51.686082 7fade4f14700 0
Hi all,
I installed Ceph Firefly and now I am playing with RBD snapshots.
I created a pool (libvirt-pool) with two images:
libvirtimage1 (format 1)
image2 (format 2).
When I try to protect the first image:
rbd --pool libvirt-pool snap protect --image libvirtimage1 --snap
libvirt-snap
it gives me
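Not part of the message above, but likely the cause: rbd snap protect is a format 2 feature, so protecting the format 1 image libvirtimage1 fails. A sketch of the same sequence against the format 2 image, assuming the snapshot still needs to be created:

```shell
# Sketch: snapshot protection only works on format 2 images, so use
# image2 here; the snapshot is created first, then protected.
pool=libvirt-pool

if command -v rbd >/dev/null 2>&1; then
  rbd --pool "$pool" snap create --image image2 --snap libvirt-snap
  rbd --pool "$pool" snap protect --image image2 --snap libvirt-snap
fi
```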
On Mon, Jun 9, 2014 at 3:01 PM, Ignazio Cassano
ignaziocass...@gmail.com wrote:
Hi all,
I installed Ceph Firefly and now I am playing with RBD snapshots.
I created a pool (libvirt-pool) with two images:
libvirtimage1 (format 1)
image2 (format 2).
When I try to protect the first image:
rbd
Many thanks...
Can I create a format 2 image (with support for linear snapshots) using the
qemu-img command?
2014-06-09 13:05 GMT+02:00 Ilya Dryomov ilya.dryo...@inktank.com:
On Mon, Jun 9, 2014 at 3:01 PM, Ignazio Cassano
ignaziocass...@gmail.com wrote:
Hi all,
I installed Ceph Firefly and
On 06/09/2014 02:00 PM, Ignazio Cassano wrote:
Many thanks...
Can I create a format 2 image (with support for linear snapshot) using
qemu-img command ?
Yes:
qemu-img create -f raw rbd:rbd/image1:rbd_default_format=2 10G
'rbd_default_format' is a Ceph setting which is passed down to librbd
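As an aside (not stated in the thread), the same thing can be done with the rbd CLI itself; a sketch, using a hypothetical 10 GB image in the default rbd pool:

```shell
# Sketch: create a 10 GB (10240 MB) format 2 image directly with rbd,
# rather than passing rbd_default_format through qemu-img.
image=rbd/image1

if command -v rbd >/dev/null 2>&1; then
  rbd create "$image" --size 10240 --image-format 2
fi
```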
Many thanks
2014-06-09 14:04 GMT+02:00 Wido den Hollander w...@42on.com:
On 06/09/2014 02:00 PM, Ignazio Cassano wrote:
Many thanks...
Can I create a format 2 image (with support for linear snapshot) using
qemu-img command ?
Yes:
qemu-img create -f raw
We have an NFS to RBD gateway with a large number of smaller RBDs. In
our use case we are allowing users to request their own RBD containers
that are then served up via NFS into a mixed cluster of clients. Our
gateway is quite beefy, probably more than it needs to be, 2x8 core
cpus and 96GB
Hi All,
We've experienced a lot of issues since EPEL started packaging a
0.80.1-2 version that yum sees as higher than 0.80.1 and therefore
chooses to install the EPEL one.
That package has some issues from what we have seen and in most cases
will break the installation process.
There
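One common workaround (an assumption on my part, not something stated in this message) is to stop EPEL from providing Ceph packages at all, so yum always takes them from the ceph.com repository regardless of version comparison:

```ini
# /etc/yum.repos.d/epel.repo (fragment): never pick ceph packages from EPEL
[epel]
exclude=ceph*
```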
Hi,
I am trying to run schedule_suite.sh on our custom Ceph build to leverage
Inktank suites in our testing. Can someone help me with using this shell script
so that I can provide my own targets instead of the script picking them from the
Ceph lab? Also, kindly let me know if anyone has set up a lock
Thanks Alfredo, happy to see your email.
I was a victim of this problem; I hope 1.5.4 will take away my pain :-)
- Karan Sing -
On 09 Jun 2014, at 15:33, Alfredo Deza alfredo.d...@inktank.com wrote:
http://ceph.com/ceph-deploy/docs/changelog.html#id1
More detail on this. I recently upgraded my Ceph cluster from Emperor to
Firefly. After the upgrade was done, I noticed one of the OSDs not coming
back to life. While troubleshooting, I rebooted the OSD server
and the keyring shifted.
My $ENV.
4x OSD servers (each has 12, 1
Miki,
osd crush chooseleaf type is set to 1 by default, which means that it looks
to peer with placement groups on another node, not the same node. You would
need to set that to 0 for a 1-node cluster.
John
On Sun, Jun 8, 2014 at 10:40 PM, Miki Habryn dic...@rcpt.to wrote:
I set up a
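John's point above translates to a one-line ceph.conf setting; a sketch for a single-node test cluster (it needs to be in place before the cluster is created, or the CRUSH rule edited afterwards):

```ini
# ceph.conf fragment for a 1-node test cluster: let PGs peer with OSDs
# on the same host instead of requiring a second node
[global]
osd crush chooseleaf type = 0
```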
Barring a newly-introduced bug (doubtful), that assert basically means
that your computer lied to the ceph monitor about the durability or
ordering of data going to disk, and the store is now inconsistent. If
you don't have data you care about on the cluster, by far your best
option is:
1) Figure
I've correlated a large deep scrubbing operation to cluster stability
problems.
My primary cluster does a small amount of deep scrubs all the time, spread
out over the whole week. It has no stability problems.
My secondary cluster doesn't spread them out. It saves them up, and tries
to do all
On Mon, Jun 9, 2014 at 3:22 PM, Craig Lewis cle...@centraldesktop.com wrote:
I've correlated a large deep scrubbing operation to cluster stability
problems.
My primary cluster does a small amount of deep scrubs all the time, spread
out over the whole week. It has no stability problems.
My
Craig,
I've struggled with the same issue for quite a while. If your i/o is
similar to mine, I believe you are on the right track. For the past
month or so, I have been running this cronjob:
* * * * * for strPg in `ceph pg dump | egrep
'^[0-9]\.[0-9a-f]{1,4}' | sort -k20 | awk '{
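The cron job above is cut off; a hypothetical reconstruction of the same idea, assuming column 20 of ceph pg dump holds the last-deep-scrub timestamp, that deep-scrubs the longest-unscrubbed PG each run:

```shell
# Hypothetical reconstruction of the truncated cron job: from `ceph pg dump`
# output, keep only PG lines, sort by column 20 (assumed here to be the
# last-deep-scrub timestamp), and print the PG deep-scrubbed longest ago.
oldest_pg() {
  egrep '^[0-9]\.[0-9a-f]{1,4}' | sort -k20 | head -n 1 | awk '{print $1}'
}

# only touch a live cluster when the ceph CLI is actually present
if command -v ceph >/dev/null 2>&1; then
  strPg=$(ceph pg dump 2>/dev/null | oldest_pg)
  [ -n "$strPg" ] && ceph pg deep-scrub "$strPg"
fi
```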
Hi,
I am failing to get OpenStack and Ceph working together.
I set things up based on this URL:
http://ceph.com/docs/next/rbd/rbd-openstack/
I can see the state of the Ceph cluster from OpenStack (the Ceph client),
but the failure occurs at cinder create.
Ceph Cluster:
CentOS release 6.5
Ceph 0.80.1
OpenStack:
Ubuntu
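The message is cut off, but since it follows the rbd-openstack guide, the Cinder side usually looks roughly like this (a sketch of the cinder.conf fragment from that guide; the rbd_secret_uuid value is a placeholder you must fill in from your own libvirt secret):

```ini
# cinder.conf fragment (sketch) for the RBD backend described in the guide
[DEFAULT]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <uuid-of-your-libvirt-secret>
```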