[ceph-users] image map failed

2016-06-17 Thread Ishmael Tsoaela
Hi, will someone please assist? I am new to Ceph and I am trying to map an image, and this happens: cluster-admin@nodeB:~/.ssh/ceph-cluster$ rbd map data_01 --pool data rbd: sysfs write failed In some cases useful info is found in syslog - try "dmesg | tail" or so. rbd: map failed: (13) Permission denied
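A minimal troubleshooting sketch for this "(13) Permission denied" failure: the sysfs write behind rbd map needs root privileges, and the cephx client being used needs access to the pool. The image and pool names come from the thread; the client name and caps below are assumptions for illustration only.

  # map as root, pointing rbd at an explicit client key
  sudo rbd map data_01 --pool data --id admin --keyring /etc/ceph/ceph.client.admin.keyring

  # if a dedicated client is used instead, its caps must allow the pool
  # (hypothetical client name and example caps)
  ceph auth get-or-create client.rbduser mon 'allow r' osd 'allow rwx pool=data'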

Re: [ceph-users] image map failed

2016-06-17 Thread Ishmael Tsoaela
Hi, thank you for the response, but with sudo all it does is freeze: rbd map data_01 --pool data cluster-admin@nodeB:~/.ssh/ceph-cluster$ date && sudo rbd map data_01 --pool data && date Fri Jun 17 14:36:41 SAST 2016 On Fri, Jun 17, 2016 at 2:01 PM, Ishmael Tsoaela wrote:
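When the map hangs rather than failing outright, the kernel client's log lines are usually the quickest clue, as the earlier error text itself suggests. A small check sequence, assuming nothing beyond a standard setup:

  # look for rbd/libceph messages from the kernel client
  dmesg | tail -n 20

  # confirm the cluster is reachable and healthy from this node
  ceph -s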

[ceph-users] cluster ceph -s error

2016-06-17 Thread Ishmael Tsoaela
Hi All, please assist with fixing this error: 1 x admin, 2 x admin (hosting admin as well), 4 OSDs per node. cluster a04e9846-6c54-48ee-b26f-d6949d8bacb4 health HEALTH_ERR 819 pgs are stuck inactive for more than 300 seconds 883 pgs degraded 64 pgs stale

Re: [ceph-users] cluster ceph -s error

2016-06-19 Thread Ishmael Tsoaela
> drives, network links working? More detail please. Any/all of the following > would help: > > ceph health detail > ceph osd stat > ceph osd tree > Your ceph.conf > Your crushmap > > On 17 Jun 2016 14:14, "Ishmael Tsoaela" wrote: > > > > Hi
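The diagnostics requested in that reply can be collected in one pass; the CRUSH map has to be dumped and decompiled before it is readable. A sketch, assuming the default config path:

  ceph health detail
  ceph osd stat
  ceph osd tree
  cat /etc/ceph/ceph.conf
  # dump and decompile the CRUSH map
  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt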

[ceph-users] image map failed

2016-06-23 Thread Ishmael Tsoaela
Hi All, I have created an image but cannot map it; does anybody know what the problem could be? sudo rbd map data/data_01 rbd: sysfs write failed RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable". In some cases useful info is found
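This mismatch usually means the image was created with features the kernel RBD driver does not support, so everything except layering has to be turned off before the map can succeed. A hedged sketch of the fix the error message points at; the feature list is the typical Jewel default set, not something confirmed in the truncated message:

  # show which features the image currently carries
  rbd info data/data_01

  # disable what the kernel client cannot handle, then retry the map
  rbd feature disable data/data_01 exclusive-lock object-map fast-diff deep-flatten
  sudo rbd map data/data_01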

Re: [ceph-users] image map failed

2016-06-23 Thread Ishmael Tsoaela
It worked, thanks: cluster_master@nodeC:~$ sudo rbd map data/data_01 /dev/rbd0 On Thu, Jun 23, 2016 at 4:37 PM, Jason Dillaman wrote: > On Thu, Jun 23, 2016 at 10:16 AM, Ishmael Tsoaela > wrote: > > cluster_master@nodeC:~$ rbd --image data_01 -p data info > > rbd image &

Re: [ceph-users] image map failed

2016-06-27 Thread Ishmael Tsoaela
> # rbd create --image pool-name/image-name --size 15G --image-feature > layering > # rbd map --image pool-name/image-name > > Thanks > Rakesh Parkiti > On Jun 23, 2016 19:46, Ishmael Tsoaela wrote: > > Hi All, > > I have created an image but cannot map the im
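The quoted advice amounts to creating the image with only the layering feature enabled, so the kernel client can map it without any features having to be disabled afterwards. A minimal sketch using the pool and image names from this thread:

  rbd create data/data_01 --size 15G --image-feature layering
  sudo rbd map data/data_01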

[ceph-users] ceph not replicating to all osds

2016-06-27 Thread Ishmael Tsoaela
Hi All, any help with this issue would be much appreciated. I have created an image on one client and mounted it on both of the two clients I have set up. When I write data on one client, I cannot access the data on the other client; what could be causing this issue? root@nodeB:/mnt# ceph osd tree I

Re: [ceph-users] ceph not replicating to all osds

2016-06-27 Thread Ishmael Tsoaela
Hello, > > On Mon, 27 Jun 2016 17:00:42 +0200 Ishmael Tsoaela wrote: > > > Hi ALL, > > > > Anyone can help with this issue would be much appreciated. > > > Your subject line has nothing to do with your "problem". > > You're alluding to OSD r

Re: [ceph-users] ceph not replicating to all osds

2016-06-28 Thread Ishmael Tsoaela
Thanks Brad, I have looked through OCFS2 and it does exactly what I wanted. On Tue, Jun 28, 2016 at 1:04 PM, Brad Hubbard wrote: > On Tue, Jun 28, 2016 at 4:17 PM, Ishmael Tsoaela > wrote: > > Hi, > > > > I am new to Ceph and most of the concepts are new. > > >
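For the record, the underlying issue in this thread is that an RBD image is just a shared block device: formatting it with a non-cluster filesystem and mounting it on two clients at once means neither client sees the other's writes, and risks corrupting the filesystem. OCFS2 is cluster-aware, which is why it resolves this. A rough sketch, assuming the o2cb cluster stack is already configured on both nodes (the device path and label are illustrative):

  # on one node only: format the mapped RBD device with OCFS2
  sudo mkfs.ocfs2 -L rbd_shared /dev/rbd0

  # on every node that maps the image
  sudo rbd map data/data_01
  sudo mount -t ocfs2 /dev/rbd0 /mnt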

[ceph-users] osd reweight

2016-08-30 Thread Ishmael Tsoaela
Hi All, is there a way to have Ceph reweight OSDs automatically? Also, could an OSD reaching 92% full cause the entire cluster to reboot? Thank you in advance, Ishmael Tsoaela
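On the first question, Ceph does not reweight OSDs on its own, but Jewel-era releases ship a helper that adjusts reweights based on utilisation; the test variant shows what would change before anything is applied. A sketch (the threshold of 120 is the command's usual default, given only as an illustration). On the second question, an OSD hitting the full ratio blocks client writes but should not, by itself, reboot servers.

  # dry run: report which OSDs would be reweighted and by how much
  ceph osd test-reweight-by-utilization 120

  # apply the adjustment
  ceph osd reweight-by-utilization 120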

Re: [ceph-users] osd reweight

2016-08-30 Thread Ishmael Tsoaela
Hi Wido, thanks for the response. I had a weird incident where all the servers in the cluster rebooted, and I can't pinpoint what the cause could be. Thanks again. On Tue, Aug 30, 2016 at 11:06 AM, Wido den Hollander wrote: > > > On 30 August 2016 at 10:16, Ishmael Tsoa

[ceph-users] ceph warning

2016-09-01 Thread Ishmael Tsoaela
Hi All, can someone please decipher these errors for me? All nodes in my cluster rebooted on Monday and the warning has not gone away. Will the warning ever clear? cluster df3f96d8-3889-4baa-8b27-cc2839141425 health HEALTH_WARN 2 pgs backfill_toofull 532 pgs backfill
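The backfill_toofull state means recovery wants to copy PGs onto OSDs that are already above the backfill-full threshold, so those backfills wait instead of running; the warning clears once the target OSDs have room. A hedged sketch of how this is typically inspected and, if necessary, worked around on a Jewel-era cluster (the ratio below is only an example, not a recommendation):

  # per-OSD utilisation, to spot the full ones
  ceph osd df

  # which PGs are stuck, and on which OSDs
  ceph health detail

  # temporarily raise the backfill-full threshold (pre-Luminous option)
  ceph tell osd.* injectargs '--osd-backfill-full-ratio 0.9'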

Re: [ceph-users] ceph warning

2016-09-01 Thread Ishmael Tsoaela
osd.19 up 1.0 1.0 On Thu, Sep 1, 2016 at 10:56 AM, Christian Balzer wrote: > > > Hello, > > On Thu, 1 Sep 2016 10:18:39 +0200 Ishmael Tsoaela wrote: > > > Hi All, > > > > Can someone please decipher this errors

Re: [ceph-users] ceph warning

2016-09-01 Thread Ishmael Tsoaela
Thank you again. I will add 3 more OSDs today and leave the cluster untouched, maybe over the weekend. On Thu, Sep 1, 2016 at 1:16 PM, Christian Balzer wrote: > > Hello, > > On Thu, 1 Sep 2016 11:20:33 +0200 Ishmael Tsoaela wrote: > >> thanks for the response >> >> >>

Re: [ceph-users] ceph warning

2016-09-01 Thread Ishmael Tsoaela
meant for that OSD should be copied to the OSD? If so, then why do PGs get full if they were not full before the OSD went down? On Thu, Sep 1, 2016 at 1:29 PM, Ishmael Tsoaela wrote: > Thank you again. > > I will add 3 more OSDs today and leave the cluster untouched, maybe over the weekend. > > On Thu, S
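On the question here: when an OSD goes down and is marked out, the PGs it held are remapped and backfilled onto the remaining OSDs, which is exactly how OSDs that had headroom before can end up too full afterwards. For planned reboots, a common way to avoid that rebalancing (offered as a general note, not something advised in the truncated thread) is to set the noout flag for the duration:

  # before the maintenance window
  ceph osd set noout

  # ... reboot / work on the node ...

  # afterwards, return to normal behaviour
  ceph osd unset noout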

Re: [ceph-users] ceph warning

2016-09-01 Thread Ishmael Tsoaela
84/3096070 objects misplaced (40.664%) recovery now: recovery 8917/3217724 objects degraded (0.277%) recovery 1120479/3217724 objects misplaced (34.822%) On Thu, Sep 1, 2016 at 4:13 PM, Christian Balzer wrote: > > Hello, > > On Thu, 1 Sep 2016 14:00:53 +0200 Ishm

Re: [ceph-users] ceph warning

2016-09-01 Thread Ishmael Tsoaela
930G 571G 358G 61.43 1.02 109 19 0.90868 1.0 930G 566G 363G 60.89 1.01 116 11 0.90869 1.0 930G 530G 400G 57.00 0.95 104 On Fri, Sep 2, 2016 at 2:59 AM, Christian Balzer wrote: > > Hello, > > On Thu, 1 Sep 2016 16:24:28 +0200 Ishmael Tsoaela wrote: > >&g