Re: [ceph-users] OSD down
Hi Daniel,

Could you be more precise about your issue, please? Which OS is your Ceph cluster running under, and which Ceph version are you currently running?

Anyway, I have experienced an issue that looks like yours. I installed and configured a small "microceph" cluster on my PC for quick demos. On this cluster I have 4 OSDs and 1 MON; there is no MDS. I wrote a script that starts the cluster. In this script I start the monitor:

    ceph-mon -c /path/to/yourceph/confile -i <id>

I also start the 4 OSDs manually, like this:

    ceph-osd -c /path/to/yourceph/confile -i <id>

I also forced the OSDs to be "in" after the start. Right now it works fine, but I don't think it is the right way to proceed (starting the OSDs manually and marking them in). Maybe it can give you an idea of where to start investigating.

Regards,
Alex

On 05/02/2015 11:29, Daniel Takatori Ohara wrote:

Hi, can anyone help me, please?

I have a cluster with 4 OSDs, 1 MDS and 1 MON. osd.3 was down, and I needed to restart it on the host with the command /etc/init.d/ceph restart osd.3. osd.0 is sometimes marked down, but it comes back up automatically.

    [ceph@ceph-admin my-cluster]$ ceph osd tree
    # id    weight  type name              up/down  reweight
    -1      50.63   root default
    -2      13.84       host ceph-osd1
    0       13.84           osd.0          up       1
    -3      14.76       host ceph-osd2
    1       14.76           osd.1          up       1
    -4      22.03       host ceph-osd3
    2       10.09           osd.2          up       0.8
    3       11.94           osd.3          down     0

Can anyone help me, please?

Thanks,

---
Daniel Takatori Ohara.
System Administrator - Lab. of Bioinformatics
Molecular Oncology Center
Instituto Sírio-Libanês de Ensino e Pesquisa
Hospital Sírio-Libanês
Phone: +55 11 3155-0200 (extension 1927)
R: Cel. Nicolau dos Santos, 69
São Paulo-SP, 01308-060
http://www.bioinfo.mochsl.org.br

--
*Alexis KOALLA*
Orange/IMT/OLPS/ASE/DAPI/CSE
Specialist in Technologies/Cloud Storage Services & Platforms
Tel: +33(0) 299 124 939 / +33 670 698 929
alexis.koa...@orange.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
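The flapping osd.0 and the down osd.3 in the tree above can be spotted by filtering the `ceph osd tree` output for "down" entries. A minimal sketch; the `list_down_osds` helper name is mine, and the demo input is the tree output quoted in the thread:

```shell
# Print the name of every OSD that `ceph osd tree` reports as down.
# On a live cluster:  ceph osd tree | list_down_osds
list_down_osds() {
    awk '$3 ~ /^osd\./ && $4 == "down" { print $3 }'
}

# Demo against the tree output quoted above.
list_down_osds <<'EOF'
-1 50.63 root default
-2 13.84 host ceph-osd1
0 13.84 osd.0 up 1
-3 14.76 host ceph-osd2
1 14.76 osd.1 up 1
-4 22.03 host ceph-osd3
2 10.09 osd.2 up 0.8
3 11.94 osd.3 down 0
EOF
# prints: osd.3
```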
[ceph-users] JCloud on Ceph
Hi all,

Is anyone using JCloud on Ceph? Any feedback on the topic is welcome and will be much appreciated.

Regards,
Alex
Re: [ceph-users] [Solved] No auto-mount of OSDs after server reboot
Hi Anthony,

Thanks for the solution you described. I will test your proposal later. For now, I put the mount commands in place (copy/paste from /etc/mtab to /etc/fstab), and after the reboot the mounts are done successfully.

Regards,
Alex

On 30/01/2015 23:43, Anthony D'Atri wrote:

One thing that can cause this is messed-up partition IDs / typecodes. Check out the ceph-disk script to see how they get applied.

I have a few systems that somehow got messed up -- at boot they don't get started, but if I mounted them manually on /mnt, checked out the whoami file, remounted accordingly, and then started them, they ran fine.

    # for i in b c d e f g h i j k ; do sgdisk --typecode=1:4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D /dev/sd$i ; done
    # for i in b c d e f g h i j k ; do sgdisk --typecode=2:45B0969E-9B03-4F30-B4C6-B4B80CEFF106 /dev/sd$i ; done

One system I botched by setting all the GUIDs to a constant; I went back and fixed that:

    # for i in b c d e f g h i j k ; do sgdisk --typecode=2:45B0969E-9B03-4F30-B4C6-B4B80CEFF106 --partition-guid=$(uuidgen -r) /dev/sd$i ; done

Note that I have not yet rebooted these systems to validate this approach, so YMMV; proceed at your own risk -- this advice is not FDIC-insured and may lose value.

    # sgdisk -i 1 /dev/sdb
    Partition GUID code: 4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D (Unknown)
    Partition unique GUID: 61397DDD-E203-4D9A-9256-24E0F5F97344
    First sector: 20973568 (at 10.0 GiB)
    Last sector: 5859373022 (at 2.7 TiB)
    Partition size: 5838399455 sectors (2.7 TiB)
    Attribute flags:
    Partition name: 'ceph data'

    # sgdisk -i 2 /dev/sdb
    Partition GUID code: 45B0969E-9B03-4F30-B4C6-B4B80CEFF106 (Unknown)
    Partition unique GUID: EF292AB7-985E-40A2-B185-DD5911D17BD7
    First sector: 2048 (at 1024.0 KiB)
    Last sector: 20971520 (at 10.0 GiB)
    Partition size: 20969473 sectors (10.0 GiB)
    Attribute flags:
    Partition name: 'ceph journal'

--aad
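Anthony's typecode fix can be double-checked read-only by comparing the "Partition GUID code" line of `sgdisk -i` output against the well-known Ceph data typecode quoted above. A minimal sketch; the helper name is mine:

```shell
# Typecode that marks a partition as Ceph OSD data (see sgdisk output above).
CEPH_DATA_TYPECODE="4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D"

# Read `sgdisk -i <partnum> <device>` output on stdin; succeed only if
# the partition carries the Ceph data typecode.
is_ceph_data_partition() {
    code=$(sed -n 's/^Partition GUID code: \([0-9A-F-]*\).*/\1/p')
    [ "$code" = "$CEPH_DATA_TYPECODE" ]
}

# On a live host you would run:
#   sgdisk -i 1 /dev/sdb | is_ceph_data_partition && echo "ceph data OK"
```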
Re: [ceph-users] No auto-mount of OSDs after server reboot
Hi Lindsay and Daniel,

Thanks for your replies, and apologies for not specifying my LAB environment details. Here they are:

    OS: Ubuntu 14.04 LTS, kernel 3.8.0-29-generic
    Ceph version: Firefly 0.80.8
    Env: LAB

@Lindsay: I am wondering whether putting the mount command in fstab is new, or whether it has been the recommendation since the beginning of Ceph. Anyway, I plan to copy-paste the mount commands from /etc/mtab to /etc/fstab, and I hope that will fix the issue.

@Daniel: I have checked the top level of each OSD, but there is no file named "sysvinit" in the directory where the "whoami" file is located. Should I create it manually, or is there a way to auto-generate this file, please?

Thanks for your help.

Best and kindest regards,
Alexis

On 29/01/2015 15:11, Lindsay Mathieson wrote:
On Thu, 29 Jan 2015 03:05:41 PM Alexis KOALLA wrote:

Hi, Today we encountered an issue in our Ceph cluster in the LAB. Issue: the servers that host the OSDs rebooted, and we observed that after the reboot the OSD devices are not auto-mounted; we need to perform the mount manually and then start the OSD, as below:

    1- [root@osd.0] mount /dev/sdb2 /var/lib/ceph/osd/ceph-0
    2- [root@osd.0] start ceph-osd id=0

As far as I'm aware, Ceph does not handle mounting of the base filesystem -- it's up to you to create an fstab entry for it. The OSD should autostart, but it will of course fail if the filesystem is not mounted.
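On the sysvinit question: to the best of my knowledge, the Firefly-era /etc/init.d/ceph script only auto-starts daemons whose data directory contains a marker file named after the init system, so creating an empty `sysvinit` file next to `whoami` is the usual fix (ceph-disk normally creates it when it prepares a disk). A sketch against a throwaway directory; the real path would be /var/lib/ceph/osd/ceph-<id>:

```shell
# Illustrative only: uses a scratch directory instead of /var/lib/ceph.
OSD_DIR="$(mktemp -d)/osd/ceph-0"
mkdir -p "$OSD_DIR"
echo 0 > "$OSD_DIR/whoami"      # the init script reads the OSD id from whoami

# Empty marker file: tells /etc/init.d/ceph this daemon is sysvinit-managed,
# so "service ceph start" (or a reboot) will start it automatically.
touch "$OSD_DIR/sysvinit"

ls "$OSD_DIR"   # sysvinit  whoami
```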
[ceph-users] No auto-mount of OSDs after server reboot
Hi,

Today we encountered an issue in our Ceph cluster in the LAB.

Issue: the servers that host the OSDs rebooted, and we observed that after the reboot the OSD devices are not auto-mounted; we need to perform the mount manually and then start the OSD, as below:

    1- [root@osd.0] mount /dev/sdb2 /var/lib/ceph/osd/ceph-0
    2- [root@osd.0] start ceph-osd id=0

After performing the two commands above, the OSD is up again.

The question: is this the normal behaviour of an OSD server, or is something wrong in our configuration?

Any help or idea will be appreciated.

Thanks and regards,
Alex
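The copy-from-mtab step this thread converged on can be scripted: filter /etc/mtab for Ceph OSD mounts and append those lines to /etc/fstab. A minimal sketch; the helper name is mine, and the demo devices and mount options are illustrative, so review the output before touching a real /etc/fstab:

```shell
# Print the mount lines for Ceph OSD data directories from mtab-format input.
# On a live host (review the output first!):
#   ceph_mount_lines < /etc/mtab >> /etc/fstab
ceph_mount_lines() {
    awk '$2 ~ /^\/var\/lib\/ceph\/osd\// { print }'
}

# Demo with sample mtab lines (device and options are illustrative).
ceph_mount_lines <<'EOF'
/dev/sda1 / ext4 rw,errors=remount-ro 0 0
/dev/sdb2 /var/lib/ceph/osd/ceph-0 xfs rw,noatime,inode64 0 0
EOF
# prints: /dev/sdb2 /var/lib/ceph/osd/ceph-0 xfs rw,noatime,inode64 0 0
```

After adding such a line, `mount -a` (or a reboot) mounts the data directory, and the OSD start command from the thread can then succeed.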