Re: [ceph-users] Ceph Maintenance

2017-08-01 Thread Richard Hesketh
On 01/08/17 12:41, Osama Hasebou wrote: > Hi, > > What would be the best possible and efficient way for big Ceph clusters when > maintenance needs to be performed? > > Let's say that we have 3 copies of data, and one of the servers needs to be > maintained, and maintenance might take 1-2 days d

[ceph-users] Ceph Maintenance

2017-08-01 Thread Osama Hasebou
Hi, What would be the best possible and efficient way for big Ceph clusters when maintenance needs to be performed? Let's say that we have 3 copies of data, and one of the servers needs to be maintained, and maintenance might take 1-2 days due to some unforeseen issues that come up. Settin
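The question is cut off at "Settin[g]...", most likely about setting the noout flag. For reference, a minimal sketch of the commonly used noout approach to planned node maintenance (not necessarily what the truncated replies recommend; assumes the node's OSDs are managed by systemd's ceph-osd.target):

    # keep CRUSH from marking the node's OSDs out and rebalancing while they are down
    ceph osd set noout

    # stop the OSD daemons on the node being serviced
    systemctl stop ceph-osd.target

    # ... perform the maintenance, reboot, etc. ...

    # once the node is back and its OSDs have rejoined, clear the flag and re-check health
    ceph osd unset noout
    ceph -s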

Re: [ceph-users] Ceph Maintenance

2016-11-29 Thread Mike Jacobacci
Hi Vasu, Thank you, that is good to know! I am running ceph version 10.2.3 and CentOS 7.2.1511 (Core) minimal. Cheers, Mike On Tue, Nov 29, 2016 at 7:26 PM, Vasu Kulkarni wrote: > you can ignore that, it's a known issue http://tracker.ceph.com/issues/15990 > > regardless, what version of ceph

Re: [ceph-users] Ceph Maintenance

2016-11-29 Thread Vasu Kulkarni
You can ignore that, it's a known issue: http://tracker.ceph.com/issues/15990. Regardless, what version of ceph are you running, and what are the details of the OS version you updated to? On Tue, Nov 29, 2016 at 7:12 PM, Mike Jacobacci wrote: > Found some more info, but getting weird... All three OSD n
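The version details asked for here can be gathered with commands along these lines (a sketch; the release file name assumes a CentOS/RHEL node):

    ceph --version              # installed ceph version, e.g. 10.2.x "jewel"
    cat /etc/centos-release     # OS release string
    uname -r                    # running kernel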

Re: [ceph-users] Ceph Maintenance

2016-11-29 Thread Mike Jacobacci
Found some more info, but it's getting weird... All three OSD nodes show the same unknown cluster message on all the OSD disks. I don't know where it came from; all the nodes were configured using ceph-deploy on the admin node. In any case, the OSDs seem to be up and running and the health is ok. no c
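A quick way to double-check that the OSDs really are up and the cluster is healthy despite that message (a sketch using standard status commands, not taken from the thread):

    ceph -s          # overall health, monitor quorum, PG states
    ceph osd tree    # which OSDs are up/down and where they sit in the CRUSH map
    ceph osd stat    # terse up/in counts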

Re: [ceph-users] Ceph Maintenance

2016-11-29 Thread Mike Jacobacci
Sorry about that... Here is the output of ceph-disk list:

ceph-disk list
/dev/dm-0 other, xfs, mounted on /
/dev/dm-1 swap, swap
/dev/dm-2 other, xfs, mounted on /home
/dev/sda :
 /dev/sda2 other, LVM2_member
 /dev/sda1 other, xfs, mounted on /boot
/dev/sdb :
 /dev/sdb1 ceph journal
 /dev/sdb2 cep

Re: [ceph-users] Ceph Maintenance

2016-11-29 Thread Mike Jacobacci
I forgot to add: On Tue, Nov 29, 2016 at 6:28 PM, Mike Jacobacci wrote: > So it looks like the journal partition is mounted: > > ls -lah /var/lib/ceph/osd/ceph-0/journal > lrwxrwxrwx. 1 ceph ceph 9 Oct 10 16:11 /var/lib/ceph/osd/ceph-0/journal -> /dev/sdb1 > > Here is the output of journalctl

Re: [ceph-users] Ceph Maintenance

2016-11-29 Thread Mike Jacobacci
So it looks like the journal partition is mounted:

ls -lah /var/lib/ceph/osd/ceph-0/journal
lrwxrwxrwx. 1 ceph ceph 9 Oct 10 16:11 /var/lib/ceph/osd/ceph-0/journal -> /dev/sdb1

Here is the output of journalctl -xe when I try to start the ceph-disk@dev-sdb1 service:

sh[17481]: mount_activate: Fai
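For completeness (the thread later points at tracker issue 15990 for these ceph-disk@ unit failures), the activation can also be re-run by hand to see the full error; a sketch, with example device names:

    # confirm the journal symlink points at the expected partition
    ls -l /var/lib/ceph/osd/ceph-0/journal

    # re-run activation against the OSD's data partition (not the journal partition)
    ceph-disk activate /dev/sdc1

    # or try to activate every prepared OSD ceph-disk can find
    ceph-disk activate-all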

Re: [ceph-users] Ceph Maintenance

2016-11-29 Thread Mike Jacobacci
Hi John, Thanks, I wasn't sure if something happened to the journal partitions or not. Right now, the ceph-osd.0-9 services are back up and the cluster health is good, but none of the ceph-disk@dev-sd* services are running. How can I get the journal partitions mounted again? Cheers, Mike On Tu

Re: [ceph-users] Ceph Maintenance

2016-11-29 Thread John Petrini
Also, don't run sgdisk again; that's just for creating the journal partitions. ceph-disk is a service used for prepping disks; only the OSD services need to be running, as far as I know. Are the ceph-osd@x services running now that you've mounted the disks? ___ John Petrini NOC Systems Administr
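Checking and starting those with the standard Jewel systemd units would look roughly like this (a sketch; the OSD ID is an example):

    systemctl status ceph-osd@0      # state of one specific OSD daemon
    systemctl start ceph-osd@0       # start it if it is not running
    systemctl start ceph-osd.target  # or start all OSDs on the node at once
    ceph osd tree                    # verify they report up/in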

Re: [ceph-users] Ceph Maintenance

2016-11-29 Thread John Petrini
What command are you using to start your OSDs? ___ John Petrini NOC Systems Administrator // *CoreDial, LLC* // coredial.com

Re: [ceph-users] Ceph Maintenance

2016-11-29 Thread Mike Jacobacci
I was able to bring the OSDs up by looking at my other OSD node, which has the exact same hardware/disks, and finding out which disks map where. But I still can't bring up any of the ceph-disk@dev-sd* services... When I first installed the cluster and got the OSDs up, I had to run the following: #
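As an aside, the OSD-to-disk mapping can also be read directly off the node rather than inferred from a twin machine; a sketch, not from the thread:

    ceph-disk list                       # which partition belongs to which osd.N, and its journal
    cat /var/lib/ceph/osd/ceph-*/whoami  # OSD id recorded in each mounted data directory
    lsblk -o NAME,SIZE,MOUNTPOINT        # how the partitions are actually laid out and mounted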

Re: [ceph-users] Ceph Maintenance

2016-11-29 Thread Mike Jacobacci
OK, I am in some trouble now and would love some help! After updating, none of the OSDs on the node will come back up:

● ceph-disk@dev-sdb1.service loaded failed failed Ceph disk activation: /dev/sdb1
● ceph-disk@dev-sdb2.service loaded failed failed Ceph disk activation: /dev/sdb2
● ceph-d
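When units show up as failed like this, the underlying error is usually in the unit's journal; a sketch of digging it out, using the unit names from the listing above:

    systemctl status ceph-disk@dev-sdb1      # load/active state plus the last few log lines
    journalctl -u ceph-disk@dev-sdb1 -b      # full activation log since the last boot
    ceph-disk list                           # what ceph-disk currently thinks each partition is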

Re: [ceph-users] Ceph Maintenance

2016-11-29 Thread David Turner
From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of Mike Jacobacci [mi...@flowjo.com] Sent: Tuesday, November 29, 2016 11:41 AM To: ceph-users Subject: [ceph-users] Ceph Maintenance Hello, I would like to install OS updates on the ceph clust

[ceph-users] Ceph Maintenance

2016-11-29 Thread Mike Jacobacci
Hello, I would like to install OS updates on the Ceph cluster and activate a second 10Gb port on the OSD nodes, so I wanted to verify the correct steps to perform maintenance on the cluster. We are only using rbd to back our XenServer VMs at this point, and our cluster consists of 3 OSD nodes, 3
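On the second-port question: if the new 10Gb interface is meant to carry OSD replication and heartbeat traffic, the usual mechanism is a cluster network setting in ceph.conf on the OSD nodes, followed by a rolling OSD restart. A sketch only; the subnets are examples and whether a separate cluster network is the right fit depends on the setup:

    # /etc/ceph/ceph.conf (example subnets; substitute the real ones)
    [global]
        public network  = 10.0.1.0/24   # client- and monitor-facing traffic
        cluster network = 10.0.2.0/24   # OSD replication/heartbeat over the new port

    # then restart OSDs one node at a time so they bind to the new network
    systemctl restart ceph-osd.target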