Hello there,
I am deploying a development system with 3 hosts. I want to deploy a monitor on
each of those hosts and several osds, 1 per disk.
In addition I have an admin machine to use ceph-deploy from. So far I have 1
mon on ceph01 and a total of 6 osds on ceph01 and ceph02 in a healthy
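For reference, a minimal sketch of that layout with ceph-deploy could look like the
following; the hostnames ceph01-ceph03 and the sdb/sdc device names are assumptions,
not details from the original setup:

  ceph-deploy new ceph01 ceph02 ceph03        # writes the initial ceph.conf and mon keyring
  ceph-deploy install ceph01 ceph02 ceph03
  ceph-deploy mon create ceph01 ceph02 ceph03
  ceph-deploy gatherkeys ceph01
  # one OSD per data disk on each host
  ceph-deploy osd create ceph01:sdb ceph01:sdc
  ceph-deploy osd create ceph02:sdb ceph02:sdc
  ceph-deploy osd create ceph03:sdb ceph03:sdc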
On Thu, Aug 15, 2013 at 7:45 AM, Nico Massenberg
nico.massenb...@kontrast.de wrote:
Hello there,
I am deploying a development system with 3 hosts. I want to deploy a
monitor on each of those hosts and several osds, 1 per disk.
In addition I have an admin machine to use ceph-deploy from. So
On Wed, Aug 14, 2013 at 4:27 PM, Jim Summers jbsumm...@gmail.com wrote:
Hello All,
I just re-installed the ceph-release package on my RHEL system in an
effort to get dumpling installed.
After doing that I cannot yum install ceph-deploy. Then I yum installed
ceph but still no
On Wed, Aug 14, 2013 at 04:24:55PM -0700, Josh Durgin wrote:
On 08/14/2013 02:22 PM, Michael Morgan wrote:
Hello Everyone,
I have a Ceph test cluster doing storage for an OpenStack Grizzly platform
(also testing). Upgrading to 0.67 went fine on the Ceph side with the
cluster showing
Hi all,
I would like to use ceph in our company and had some test setups running.
Now all of a sudden ceph-deploy is not in the repos anymore
This is my sources.list:
...
deb http://archive.ubuntu.com/ubuntu raring main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu
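For what it's worth, ceph-deploy for that release series was published in the ceph.com
repository rather than the Ubuntu archive, so a sketch of the extra source line (the
dumpling release name and the raring codename here are assumptions based on the list
above) would be:

  deb http://ceph.com/debian-dumpling/ raring main

followed by an apt-get update before installing ceph-deploy again.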
Hi,
Did anyone manage to use striped rbd volumes with OpenStack Cinder (Grizzly)? I
noticed in the current OpenStack master code that there are options for
striping the new _backup_ volumes, but there's still nothing to do with
striping in the master Cinder rbd driver. Is there a way to set
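For comparison, striping can be requested directly with the rbd CLI on format 2
images; a rough sketch, with the volumes pool name and the sizes as placeholders:

  rbd create volumes/striped-test --size 10240 --image-format 2 \
      --stripe-unit 65536 --stripe-count 16

The Grizzly-era Cinder rbd driver, as noted above, exposes no equivalent options, so
this only helps for images created outside Cinder.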
Mount cephfs to /mnt/mycephfs on Debian 7, kernel 3.10.
E.g. I have one file:
root@test-debian:/mnt/mycephfs# ls -i test.txt
1099511627776 test.txt
root@test-debian:/mnt/mycephfs# ceph osd map volumes test.txt
osdmap e351 pool 'volumes' (3) object 'test.txt' -> pg 3.8b0b6108 (3.8) ->
up [5,3] acting [5,3]
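One caveat worth keeping in mind (my understanding, not something stated above): ceph
osd map simply hashes whatever object name you pass it, so the command above maps the
literal string test.txt, not the objects CephFS actually created for the file. CephFS
stores file data as objects named <inode-in-hex>.<block-number> in its data pool
(named 'data' by default), so for inode 1099511627776 (0x10000000000) the first object
would be checked with something like:

  ceph osd map data 10000000000.00000000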
On Thu, Aug 15, 2013 at 11:10 AM, Jim Summers jbsumm...@gmail.com wrote:
Hello All,
Since the release of dumpling, a couple of things are now not working. I
cannot yum install ceph-deploy, and earlier I tried to manually modify the
ceph.repo file. That did get ceph-deploy installed but it
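In case it helps, a sketch of a ceph.repo pointing at the dumpling RPMs; the el6 path
is an assumption about the RHEL version:

  [ceph]
  name=Ceph packages
  baseurl=http://ceph.com/rpm-dumpling/el6/$basearch
  enabled=1
  gpgcheck=1
  gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

  [ceph-noarch]
  name=Ceph noarch packages
  baseurl=http://ceph.com/rpm-dumpling/el6/noarch
  enabled=1
  gpgcheck=1
  gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

ceph-deploy itself is a noarch package, so the second section is the one yum needs in
order to find it.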
I ran:
ceph-deploy mon create chost0 chost1
It seemed to be working and then hung at:
[chost0][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-chost0/done
[chost0][INFO ] create a done file to avoid re-doing the mon deployment
[chost0][INFO ] create the init path if it does not exist
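If it hangs there again, one way to see what the monitor itself thinks (assuming the
daemon actually started on chost0) is to query its admin socket on that host:

  ceph --admin-daemon /var/run/ceph/ceph-mon.chost0.asok mon_status

which reports the monitor's state (probing, electing, peon, leader) and the monmap it
currently has; a monitor stuck probing usually cannot reach its peers.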
On Monday, August 5, 2013, Kevin Weiler wrote:
Thanks for looking Sage,
I came to this conclusion myself as well and this seemed to work. I'm
trying to manually replicate a ceph cluster that was made with
ceph-deploy. I noted that these capabilities entries were not in the
ceph-deploy
Thanks a lot for your reply. I know that in [5,3], osd.5 is the primary
OSD, since my replica size is 2. In my testing cluster, test.txt is the
only file.
I just mount -t cephfs 192.168.250.15:6789:/ , so does that mean the pool
'data' is used by default?
##The acting OSDs however are the OSD
They're unclean because CRUSH isn't generating an acting set of
sufficient size so the OSDs/monitors are keeping them remapped in
order to maintain replication guarantees. Look in the docs for the
crush tunables options for a discussion on this.
-Greg
Software Engineer #42 @ http://inktank.com |
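For context, if decompiling and editing the map by hand is a hassle, newer releases
also accept a tunables profile directly, e.g.:

  ceph osd crush tunables optimal

though it is worth checking the tunables documentation for older kernel client
compatibility caveats before switching profiles.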
That cluster was not deployed by ceph-deploy; ceph-deploy has never put
entries for the daemons into ceph.conf.
On 08/06/2013 12:08 PM, Kevin Weiler wrote:
Hi again Ceph devs,
I'm trying to deploy ceph using puppet and I'm hoping to add my osds
non-sequentially. I spoke with dmick on #ceph
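For reference, the manual path that ceph-deploy automates is roughly the following
(taken from the docs of that era; the caps strings, the example id 3, and the host
name mynode are assumptions):

  ceph osd create                      # allocates and prints the next free osd id
  ceph auth add osd.3 osd 'allow *' mon 'allow rwx' \
      -i /var/lib/ceph/osd/ceph-3/keyring
  ceph osd crush add osd.3 1.0 root=default host=mynode

Because ceph osd create always hands back the lowest unused id, ids come out
sequential unless placeholder entries are created first.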
Many thanks. I did, and resolved it by:
#ceph osd getcrushmap -o /tmp/crush
#crushtool -i /tmp/crush --enable-unsafe-tunables \
  --set-choose-local-tries 0 --set-choose-local-fallback-tries 0 \
  --set-choose-total-tries 50 -o /tmp/crush.new
root@ceph-admin:/etc/ceph# ceph osd setcrushmap -i