When the time comes to replace an OSD, I've used the following procedure:
1) Stop/down/out the OSD and replace the drive
2) Create the ceph OSD directory: ceph-osd -i N --mkfs
3) Copy the OSD key out of the authorized keys list
4) ceph osd crush rm osd.N
5) ceph osd crush add osd.N $osd_size root=
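For reference, a rough sketch of that flow with the sysvinit tooling of the era; the keyring path, auth caps, weight, and crush location below are my assumptions, not part of the original procedure:

    # stop the failed OSD (osd.N), mark it out, swap the drive
    service ceph stop osd.N
    ceph osd out N
    # recreate the data directory and key on the new disk
    ceph-osd -i N --mkfs --mkkey
    ceph auth add osd.N osd 'allow *' mon 'allow rwx' \
        -i /var/lib/ceph/osd/ceph-N/keyring
    # put the OSD back into the crush map and start it
    ceph osd crush rm osd.N
    ceph osd crush add osd.N 2.0 root=default host=$(hostname -s)
    service ceph start osd.N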
http://www.sebastien-han.fr/blog/2013/06/03/ceph-integration-in-openstack-grizzly-update-and-roadmap-for-havana/
suggests it is possible to run openstack instances (not only images) off
of RBDs in grizzly and havana (which I'm running), and to use RBDs in
lieu of a shared file system.
I've fo
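The post linked above covers booting Nova instances from RBD-backed disks. From memory of the Havana-era option names (the flags below are my assumption and should be checked against the post), the compute-node side of it looked roughly like:

    # nova.conf on each compute node -- option names assumed, not verified
    libvirt_images_type=rbd
    libvirt_images_rbd_pool=vms
    libvirt_images_rbd_ceph_conf=/etc/ceph/ceph.conf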
Thanks for the quick and accurate response.
-Gaylord
On 10/24/2013 08:11 AM, Sage Weil wrote:
Try passing --cluster csceph instead of the config file path and I
suspect it will work.
sage
Gaylord Holder wrote:
I'm trying to bring up a ceph cluster not named ceph.
I'm running version 0.61.
From my reading of the documentation, the $cluster metavariable is set
by the basename of the configuration file: specifying the configuration
file "/etc/ceph/mycluster.conf" sets the $cluster metavariable to
"myclus
On Tue, Oct 8, 2013 at 10:19 AM, Gaylord Holder wrote:
I'm testing how many rbds I can map on a single server.
I've created 10,000 rbds in the rbd pool, but I can only actually map 230.
Mapping the 230th one fails with:
rbd: add failed: (16) Device or resource busy
Is there a way to bump this up?
-Gaylord
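In case it helps anyone reproduce the numbers, the test boils down to something like this (image names and sizes are made up):

    # create a pile of small images, then map them until it fails
    for i in $(seq 1 10000); do
        rbd create "test$i" --size 128           # 128 MB each
    done
    for i in $(seq 1 10000); do
        rbd map "rbd/test$i" || { echo "stopped at image $i"; break; }
    done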
On 09/22/2013 02:12 AM, yy-nm wrote:
On 2013/9/10 6:38, Gaylord Holder wrote:
Indeed, that pool was created with the default 8 pg_num.
8 pg_num * 2 TB/OSD / 2 replicas ~ 8 TB, which is about how far I got.
I bumped up the pg_num to 600 for that pool and nothing happened.
I bumped up the pgp_num to
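For anyone following along, the two knobs involved are roughly these (the pool name is a placeholder):

    # raise the placement group count, then raise pgp_num so data actually redistributes
    ceph osd pool set mypool pg_num 600
    ceph osd pool set mypool pgp_num 600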
There are a lot of numbers ceph status prints.
Is there any documentation on what they are?
I'm particularly curious about what seems to be the total data.
ceph status says I have 314TB, when I calculate I have 24TB.
It also says:
10615 GB used, 8005 GB / 18621 GB avail;
which I take to be 10TB used/
On Mon, Sep 9, 2013 at 10:32 AM, Gaylord Holder wrote:
I'm starting to load up my ceph cluster.
I currently have 12 2TB drives (10 up and in, 2 defined but down and out).
rados df
says I have 8TB free, but I have 2 nearly full OSDs.
I don't understand how/why these two disks are filled while the others
are relatively empty.
How do I tell ceph t
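A few era-appropriate ways to see where the data is actually landing, plus the crude lever for nudging it (the OSD id and weight below are made up):

    ceph osd tree                  # crush weights and up/in state per OSD
    ceph pg dump | less            # which OSDs each pg maps to, and pg sizes
    df -h /var/lib/ceph/osd/*      # raw fill level on each host
    # if one OSD is carrying too much, lowering its crush weight is one option
    ceph osd crush reweight osd.3 1.5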
Is it possible to know if an RBD is mapped by a machine?
-Gaylord
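One trick that gets suggested for this is asking RADOS who is watching the image's header object, since the kernel client holds a watch while the image is mapped; a sketch, assuming a format 1 image named myimage in the rbd pool:

    # list clients watching the image header (format 1 header object is <name>.rbd)
    rados -p rbd listwatchers myimage.rbd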
n for unsticking me.
-Gaylord
On 09/03/2013 10:44 AM, Sage Weil wrote:
On Sun, 1 Sep 2013, Gaylord Holder wrote:
I created a pool with no replication and an RBD within that pool. I
mapped the RBD to a machine, formatted it with a file system and dumped
data on it.
Just to see what kind of trouble I can get into, I stopped the OSD the
RBD was using, marked the OSD as out, and reformatted the OSD tree.
Is it possible to find out which machines are mapping an RBD?
-Gaylord
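For anyone wanting to recreate the scenario described above, a rough sketch with made-up pool and image names:

    # single-copy pool (no replication) with one rbd image on it
    ceph osd pool create scratchpool 128 128
    ceph osd pool set scratchpool size 1
    rbd create scratchpool/scratch --size 10240    # 10 GB
    rbd map scratchpool/scratch
    mkfs.ext4 /dev/rbd/scratchpool/scratch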
Sage Weil wrote:
On Mon, 22 Jul 2013, Gaylord Holder wrote:
Sage,
The crush tunables did the trick.
why? Could you explain what was causing the problem?
This has a good explanation, I think:
http://ceph.com/docs/master/rados/operations/crush-map/#tunables
I haven't inst
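For context, flipping the tunables is a one-liner; presumably something along these lines was what did the trick (the profile name is my guess):

    # switch the cluster's crush tunables profile
    ceph osd crush tunables optimal
    # older kernel clients may instead need: ceph osd crush tunables legacy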
On 07/22/2013 02:27 PM, Sage Weil wrote:
On Mon, 22 Jul 2013, Gaylord Holder wrote:
I have a 12 OSD / 3 host setup, and have been stuck with a bunch of stuck
pgs.
I've verified the OSDs are all up and in. The crushmap looks fine.
I've tried restarting all the daemons.
root@never:/var/lib/ceph/mon# ceph status
health HEALTH_WARN 139 pgs degraded; 461 pgs stuck unclean; r
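Purely for reference, the usual commands for poking at stuck pgs (the pgid below is a placeholder taken from whatever health detail reports):

    ceph health detail             # lists the individual degraded/stuck pgs
    ceph pg dump_stuck unclean
    ceph pg 2.1f query             # drill into one pg's peering state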
On 07/17/2013 05:49 PM, Josh Durgin wrote:
[please keep replies on the list]
On 07/17/2013 04:04 AM, Gaylord Holder wrote:
On 07/16/2013 09:22 PM, Josh Durgin wrote:
On 07/16/2013 06:06 PM, Gaylord Holder wrote:
I had RBDs working and mapping working. Then I grew the cluster and
increased the OSDs.
Now whenever I try to map an RBD to a machine, mon0 complains:
feature set mismatch, my 2 < server's 2040002, missing 204
missing required protocol features.
I don't see any other problems with the cl
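Tying this back to the tunables thread above: a "feature set mismatch" from the kernel client usually means the cluster is using crush tunables (or similar features) newer than the client kernel advertises. The options generally come down to a newer kernel on the mapping host, or relaxing the tunables, roughly:

    # let older kernel clients talk to the cluster again
    ceph osd crush tunables legacy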