Re: [ceph-users] un-even data filled on OSDs

2016-06-07 Thread M Ranga Swami Reddy
That's great.. Will try this.. Thanks Swami On Wed, Jun 8, 2016 at 10:38 AM, Blair Bethwaite wrote: > It runs by default in dry-run mode, which IMHO should be the > default for operations like this. IIRC you add "-d -r" to make it > actually apply the re-weighting. > > Cheers, > > On 8 Ju

Re: [ceph-users] un-even data filled on OSDs

2016-06-07 Thread Blair Bethwaite
It runs by default in dry-run mode, which IMHO should be the default for operations like this. IIRC you add "-d -r" to make it actually apply the re-weighting. Cheers, On 8 June 2016 at 15:04, M Ranga Swami Reddy wrote: > Blair - Thanks for the script... Btw, does this script have an option for
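
For reference, a minimal usage sketch of the CERN script, based only on the flags mentioned in this thread; the actual option names should be confirmed against the script's own help output before running it against a live cluster:

    # dry run (the default): only print the proposed weight changes
    ./crush-reweight-by-utilization.py

    # reportedly applies the re-weighting, per the "-d -r" note above
    ./crush-reweight-by-utilization.py -d -r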

Re: [ceph-users] un-even data filled on OSDs

2016-06-07 Thread M Ranga Swami Reddy
Blair - Thanks for the script... Btw, does this script have an option for a dry run? Thanks Swami On Wed, Jun 8, 2016 at 6:35 AM, Blair Bethwaite wrote: > Swami, > > Try > https://github.com/cernceph/ceph-scripts/blob/master/tools/crush-reweight-by-utilization.py, > that'll work with Firefly and allow y

[ceph-users] monitor clock skew warning when date/time is the same

2016-06-07 Thread pixelfairy
Test cluster running on VMware Fusion. All 3 nodes are both monitor and OSD, and are running openntpd. $ ansible ceph1 -a "ceph -s" ceph1 | SUCCESS | rc=0 >> cluster d7d2a02c-915f-4725-8d8d-8d42fcd87242 health HEALTH_WARN clock skew detected on mon.ceph2, mon.ceph3 M
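
A hedged ceph.conf sketch for the case where the clocks really are in sync and only the warning threshold needs relaxing while debugging; the values below are illustrative (the default for mon_clock_drift_allowed is 0.05 s), and keeping openntpd/NTP healthy is still the proper fix:

    [mon]
    mon clock drift allowed = 0.5      # warn only when skew exceeds 0.5 s
    mon clock drift warn backoff = 30  # back off repeated skew warnings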

Re: [ceph-users] Must host bucket name be the same as hostname?

2016-06-07 Thread Christian Balzer
Hello, you will want to read: https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/ especially section III and IV. Another approach w/o editing the CRUSH map is here: https://elkano.org/blog/ceph-sata-ssd-pools-server-editing-crushmap/ Christian On Wed, 8 Jun
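
As a rough illustration of the approach described in the first link (all names, IDs and weights below are hypothetical), the decompiled CRUSH map ends up with one root per media type and a rule that draws from each root:

    root sas {
        id -10
        alg straw
        hash 0
        item robert-a-sas weight 4.000
    }
    root ssd {
        id -11
        alg straw
        hash 0
        item robert-a-ssd weight 1.000
    }
    rule sas_rule {
        ruleset 3
        type replicated
        min_size 1
        max_size 10
        step take sas
        step chooseleaf firstn 0 type host
        step emit
    }

An analogous rule points at the ssd root, and each pool is then assigned with something like "ceph osd pool set sas-pool crush_ruleset 3" (crush_ruleset is the pre-Luminous option name).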

[ceph-users] Must host bucket name be the same as hostname?

2016-06-07 Thread ????
Hi all, There are SAS disks & SSDs in my nodes at the same time. Now I want to divide them into 2 groups, one composed of SAS disks and one containing only SSDs. When I configure CRUSH rulesets, segment below: # buckets host robert-a { id -2 # do not change unn

Re: [ceph-users] RBD rollback error message

2016-06-07 Thread Jason Dillaman
OK -- looks like it's an innocent error message. I just wanted to ensure that "flags" didn't include "object map invalid" since that would indicate a real issue. I'll update that ticket to include the other use-case where it appears. On Tue, Jun 7, 2016 at 10:03 PM, Brendan Moloney wrote: > Her

Re: [ceph-users] RBD rollback error message

2016-06-07 Thread Brendan Moloney
Here you go:

rbd image 'aircds':
        size 8192 MB in 2048 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.39202eb141f2
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        flags:
        parent: rbd/xenial-bas

Re: [ceph-users] RBD rollback error message

2016-06-07 Thread Jason Dillaman
Can you run "rbd info" against that image? I suspect it is a harmless but alarming error message. I actually just opened a tracker ticket this morning for a similar issue for rbd-mirror [1] when it bootstraps an image to a peer cluster. In that case, it was a harmless error message that we will
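
If "rbd info" did show the "object map invalid" flag, the usual remedy in Jewel is to rebuild the object map; a hedged sketch using the image name from this thread:

    rbd info rbd/aircds                  # check the "flags:" line for "object map invalid"
    rbd object-map rebuild rbd/aircds    # only needed if the invalid flag is actually set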

Re: [ceph-users] un-even data filled on OSDs

2016-06-07 Thread Blair Bethwaite
Swami, Try https://github.com/cernceph/ceph-scripts/blob/master/tools/crush-reweight-by-utilization.py, that'll work with Firefly and allow you to only tune down the weight of a specific number of overfull OSDs. Cheers, On 7 June 2016 at 23:11, M Ranga Swami Reddy wrote: > OK, understood... > To f

[ceph-users] Filestore update script?

2016-06-07 Thread WRIGHT, JON R (JON R)
I'm trying to recover an OSD after running xfs_repair on the disk. It seems to be ok now. There is a log message that includes the following: "Please run the FileStore update script before starting the OSD, or set filestore_update_to to 4" What is the FileStore update script? Google search d
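
For context, filestore_update_to is an ordinary OSD config option, so the workaround from the log message can be expressed in ceph.conf; a hedged sketch (the OSD id is hypothetical, and the section can also be plain [osd] to apply cluster-wide):

    [osd.12]
    filestore update to = 4    # allow FileStore to upgrade its on-disk format up to version 4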

[ceph-users] RBD rollback error message

2016-06-07 Thread Brendan Moloney
Hi, I am trying out a Ceph 10.2.1 cluster and noticed this message almost every time I do a snap rollback: 12:18:08.203349 7f11ee46c700 -1 librbd::object_map::LockRequest: failed to lock object map: (17) File exists The rollback still seems to work fine. Nothing else should be accessing the RB

[ceph-users] Disk failures

2016-06-07 Thread Gandalf Corvotempesta
Hi, How does Ceph detect and manage disk failures? What happens if some data is written to a bad sector? Is there any chance of the bad sector getting "distributed" across the cluster due to replication? Is Ceph able to remove the OSD bound to the failed disk automatically?
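
Briefly: Ceph does not remove a failed OSD on its own; the OSD is marked down and, after mon_osd_down_out_interval, out, and bad sectors normally surface as inconsistent PGs during (deep) scrub rather than being replicated onward. A hedged sketch of the usual manual removal steps once a disk is declared dead (OSD id hypothetical):

    ceph osd out 12
    ceph osd crush remove osd.12
    ceph auth del osd.12
    ceph osd rm 12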

Re: [ceph-users] New user questions with radosgw with Jewel 10.2.1

2016-06-07 Thread Karol Mroz
Hi Eric, Please see inline... On Tue, Jun 07, 2016 at 05:14:25PM +, Sylvain, Eric wrote: > > Yes, my system is set to run as “ceph”: > > /etc/systemd/system/ceph-radosgw.target.wants/ceph-radosgw@rgw.p6-os1-mon7.service

Re: [ceph-users] New user questions with radosgw with Jewel 10.2.1

2016-06-07 Thread Sylvain, Eric
Hi JC, Thanks for the reply. Yes, my system is set to run as “ceph”: /etc/systemd/system/ceph-radosgw.target.wants/ceph-radosgw@rgw.p6-os1-mon7.service ExecStart=/usr/bin/radosgw -f --cluster ${C
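
For comparison, the ExecStart line in a stock Jewel ceph-radosgw@.service unit typically looks roughly like the line below; this is an assumption about the packaged default, not a quote from Eric's system, and exact arguments may differ by distro:

    ExecStart=/usr/bin/radosgw -f --cluster ${CLUSTER} --name client.%i --setuser ceph --setgroup ceph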

Re: [ceph-users] un-even data filled on OSDs

2016-06-07 Thread M Ranga Swami Reddy
In my cluster: 351 OSDs of the same size and 8192 PGs per pool. And 60% RAW space used. Thanks Swami On Tue, Jun 7, 2016 at 7:22 PM, Corentin Bonneton wrote: > Hello, > How many PGs do your pools have? At first glance it looks like you have left them too big. > > -- > Regards, > Corentin BONNETON > > > Le 7 juin

Re: [ceph-users] un-even data filled on OSDs

2016-06-07 Thread Markus Blank-Burian
Hello Sage, are there any development plans to improve PG distribution to OSDs? We use CephFS on Infernalis and objects are distributed very well across the PGs. But the automatic PG distribution creates large fluctuations, even in the simplest case for same-sized OSDs and a flat hierarchy using

Re: [ceph-users] un-even data filled on OSDs

2016-06-07 Thread Corentin Bonneton
Hello, How many PGs do your pools have? At first glance it looks like you have left them too big. -- Regards, Corentin BONNETON > On 7 June 2016 at 15:21, Sage Weil wrote: > > On Tue, 7 Jun 2016, M Ranga Swami Reddy wrote: >> OK, understood... >> To fix the nearfull warn, I am reducing the weight of a specif

Re: [ceph-users] un-even data filled on OSDs

2016-06-07 Thread Sage Weil
On Tue, 7 Jun 2016, M Ranga Swami Reddy wrote: > OK, understood... > To fix the nearfull warn, I am reducing the weight of a specific OSD, > which filled >85%.. > Is this work-around advisable? Sure. This is what reweight-by-utilization does for you, but automatically. sage > > Thanks > Swami
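
A hedged sketch of the manual approach being discussed (OSD id and weight are illustrative; lowering the override weight in small steps limits the resulting data movement):

    ceph osd reweight 42 0.90        # reduce the override weight of the overfull OSD
    ceph osd tree | grep osd.42      # confirm the REWEIGHT column changed
    ceph -s                          # watch backfill progress and the nearfull warning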

Re: [ceph-users] un-even data filled on OSDs

2016-06-07 Thread M Ranga Swami Reddy
OK, understood... To fix the nearfull warn, I am reducing the weight of a specific OSD, which filled >85%.. Is this work-around advisable? Thanks Swami On Tue, Jun 7, 2016 at 6:37 PM, Sage Weil wrote: > On Tue, 7 Jun 2016, M Ranga Swami Reddy wrote: >> Hi Sage, >> >Jewel and the latest hammer po

Re: [ceph-users] un-even data filled on OSDs

2016-06-07 Thread Sage Weil
On Tue, 7 Jun 2016, M Ranga Swami Reddy wrote: > Hi Sage, > >Jewel and the latest hammer point release have an improved > >reweight-by-utilization (ceph osd test-reweight-by-utilization ... to dry > > run) to correct this. > > Thank you. But not planning to upgrade the cluster soon. > So, in thi

Re: [ceph-users] un-even data filled on OSDs

2016-06-07 Thread M Ranga Swami Reddy
Hi Sage, >Jewel and the latest hammer point release have an improved >reweight-by-utilization (ceph osd test-reweight-by-utilization ... to dry > run) to correct this. Thank you. But we are not planning to upgrade the cluster soon. So, in this case, are there any tunable options that will help? Like "crush
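
For clusters that are on a release with the improved command, a hedged sketch of the dry-run-then-apply flow Sage refers to; 110 is the default overload threshold (percent of mean utilization) and is only illustrative:

    ceph osd test-reweight-by-utilization 110    # dry run: report the weight changes it would make
    ceph osd reweight-by-utilization 110         # apply the same changes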

Re: [ceph-users] un-even data filled on OSDs

2016-06-07 Thread Sage Weil
On Tue, 7 Jun 2016, M Ranga Swami Reddy wrote: > Hello, > I have around 100 OSDs in my ceph cluster. In this cluster a few OSDs are filled > with >85% of data and a few OSDs are filled with ~60%-70% of data. > > Any reason why the uneven OSD filling happened? Do I need to make any > tweaks to the configuration to fix the

Re: [ceph-users] CephFS mount via internet

2016-06-07 Thread João Castro
Thank you, Wido! Anyway, it is the same story: if it cannot see the OSDs I cannot mount it :( argh.

Re: [ceph-users] CephFS mount via internet

2016-06-07 Thread Wido den Hollander
> On 7 June 2016 at 10:59, João Castro wrote: > > > Hello guys, > Some information: > > ceph version 10.2.1 > 72 OSD (24x per machine) > 3 monitor > 2 MDS > > I have a few outside servers I need to connect CephFS to. My monitors have 2 > interfaces, one private and one public (eth0 and eth

[ceph-users] CephFS mount via internet

2016-06-07 Thread João Castro
Hello guys, Some information: ceph version 10.2.1 72 OSD (24x per machine) 3 monitor 2 MDS I have a few outside servers I need to connect CephFS to. My monitors have 2 interfaces, one private and one public (eth0 and eth1). I am trying to mount CephFS via eth1 on monitor01 from an outside serve
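
For reference, a hedged kernel-client mount sketch (addresses, paths and keyring are hypothetical); the crucial point in this thread is that the client must also be able to reach every OSD (and the MDS), because the monitor only hands out the cluster map and all data I/O then goes directly to the OSDs:

    mount -t ceph 203.0.113.10:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret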

Re: [ceph-users] no osds in jewel

2016-06-07 Thread Jaemyoun Lee
Thanks for the feedback. I removed "ceph-deploy mon create + ceph-deploy gatherkeys." And my system disk is sde. As you suggested, the disk could not be unmounted when purgedata was run. Is this a bug on Ubuntu 16.04? *$ ssh csAnt lsblk* NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:00 3.7

[ceph-users] un-even data filled on OSDs

2016-06-07 Thread M Ranga Swami Reddy
Hello, I have around 100 OSDs in my ceph cluster. In this cluster a few OSDs are filled with >85% of data and a few OSDs are filled with ~60%-70% of data. Any reason why the uneven OSD filling happened? Do I need to make any tweaks to the configuration to fix the above? Please advise. PS: Ceph version is 0.80.7 Thanks

Re: [ceph-users] Migrating files from ceph fs from cluster a to cluster b with low downtime

2016-06-07 Thread Eneko Lacunza
On 06/06/16 at 20:53, Oliver Dzombic wrote: Hi, thank you for your suggestion. Rsync will copy the whole file anew if the size is different. Since we are talking about raw image files of virtual servers, rsync is not an option. We need something which will copy just the deltas inside a file.
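
One hedged note on the rsync point: for transfers over the network rsync's delta algorithm is already the default (only local copies need --no-whole-file), and --inplace updates the destination image in place instead of rewriting it; paths below are hypothetical:

    rsync -av --inplace --partial /var/lib/images/ root@cluster-b:/var/lib/images/

Even so, rsync still has to read every file end to end on both sides to compute the deltas, which may be the real bottleneck with many large raw images.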