Re: [ceph-users] Permanente Mount RBD blocs device RHEL7

2015-03-07 Thread Gian
Hi, are you using /etc/ceph/rbdmount as a 'mapping fstab', plus your mountpoints in the normal fstab, plus the systemctl service? Gian > On 07 Mar 2015, at 05:26, Jesus Chavez (jeschave) wrote: > > Still not working. Does anybody know how to automap and mount an rbd image on > redhat? > > Regards
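For reference, a minimal sketch of that approach (the standard file is /etc/ceph/rbdmap; the pool, image and mountpoint names are illustrative, and it assumes your Ceph packages ship an rbdmap service or init script):

    # /etc/ceph/rbdmap -- one "pool/image  options" entry per line
    rbd/myimage  id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

    # /etc/fstab -- rbdmap maps the device at boot; mount it via the udev symlink
    /dev/rbd/rbd/myimage  /mnt/myimage  xfs  noauto,_netdev  0 0

    # enable the service (systemd unit or SysV script, depending on the packages)
    systemctl enable rbdmap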

Re: [ceph-users] tgt and krbd

2015-03-07 Thread Steffen W Sørensen
On 06/03/2015, at 22.47, Jake Young wrote: > > I wish there was a way to incorporate a local cache device into tgt with > > librbd backends. > What about a RAM disk device like RapidDisk + RapidCache in front of your rbd block > device? > > http://www.rapiddisk.org/?page_id=15#rapiddisk > > /Steffe

[ceph-users] problem with yum install ceph-deploy command

2015-03-07 Thread khyati joshi
Hello ceph-users, I am new to ceph. I am using centos-5.11 (i386) for deploying ceph, and epel-release-5.4.noarch.rpm is successfully installed. But running "yum install ceph-deploy" is giving the following error: ceph-deploy-1.5.21-0.noarch from ceph-noarch has depsolving problem -->m

Re: [ceph-users] Extreme slowness in SSD cluster with 3 nodes and 9 OSD with 3.16-3 kernel

2015-03-07 Thread mad Engineer
*Update:*
*Hardware:*
- RAID controller upgraded to LSI MegaRAID 9341 (12 Gbps)
- 3x Samsung 840 EVO - was showing 45K IOPS in a 4k fio test with 7 threads in *JBOD* mode
- CPU: 16 cores @ 2.27 GHz
- RAM: 24 GB
- NIC: 10 Gbit with *under 1 ms latency*; iperf shows 9.18 Gbps between host and client
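For anyone wanting to reproduce a comparable raw-device baseline, a hedged sketch of a 4k fio run with 7 threads (the exact job file is not shown in the thread; the target device /dev/sdX and the randwrite pattern are assumptions):

    fio --name=4k-jbod-test --filename=/dev/sdX --direct=1 --sync=1 \
        --rw=randwrite --bs=4k --numjobs=7 --iodepth=1 \
        --runtime=60 --time_based --group_reporting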

Re: [ceph-users] Extreme slowness in SSD cluster with 3 nodes and 9 OSD with 3.16-3 kernel

2015-03-07 Thread Nick Fisk
You are hitting serial latency limits. For a 4kb sync write to happen it has to:
1. Travel across the network from the client to the primary OSD
2. Be processed by Ceph
3. Get written to the primary OSD
4. Ack travels across the network back to the client
At 4kb these 4 steps take up a very high percentage of the actual proces
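As a rough illustration of why this caps queue-depth-1 IOPS (the per-step numbers below are assumptions for illustration, not measurements from this thread):

    # client -> primary OSD network hop   ~0.1 ms
    # Ceph processing on the OSD          ~0.4 ms
    # write hitting the primary OSD       ~0.1 ms
    # ack back to the client              ~0.1 ms
    # total per 4k sync write             ~0.7 ms
    # => at best roughly 1 / 0.0007 s = ~1400 IOPS for a single synchronous stream,
    #    no matter how fast the SSDs themselves are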

Re: [ceph-users] Permanente Mount RBD blocs device RHEL7

2015-03-07 Thread Jesus Chavez (jeschave)
Sorry I couldn’t answer back sooner. I made it work with /etc/systemd/system/rbd-{ceph_pool}-{ceph_image}.service; I had some mistakes in the name of the config file:

[Unit]
Description=RADOS block device mapping for "{ceph_pool}"/"{ceph_image}"
Conflicts=shutdown.target
Wants=network-online.target
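For completeness, a hedged sketch of what the full unit might look like (only the [Unit] section above is from the original message; the [Service] and [Install] sections, and the pool/image/mountpoint names, are illustrative):

    [Unit]
    Description=RADOS block device mapping for "rbd"/"myimage"
    Conflicts=shutdown.target
    Wants=network-online.target
    After=network-online.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/usr/bin/rbd map myimage --pool rbd --id admin
    ExecStart=/usr/bin/mount /dev/rbd/rbd/myimage /mnt/myimage
    ExecStop=/usr/bin/umount /mnt/myimage
    ExecStop=/usr/bin/rbd unmap /dev/rbd/rbd/myimage

    [Install]
    WantedBy=multi-user.target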

[ceph-users] [rbd cache experience - given]

2015-03-07 Thread Andrija Panic
Hi there, just wanted to share some benchmark experience with RBD caching that I have just (partially) implemented. These are not nicely formatted results, just raw numbers to understand the difference. *INFRASTRUCTURE:* - 3 hosts with: 12 x 4TB drives, 6 journals on 1 SSD, 6 journals on se
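The post doesn't show the exact client settings used; a minimal sketch of the usual client-side RBD cache knobs in ceph.conf (the values shown are the defaults, not necessarily the poster's):

    [client]
    rbd cache = true
    rbd cache writethrough until flush = true
    rbd cache size = 33554432          # 32 MB
    rbd cache max dirty = 25165824     # 24 MB
    rbd cache target dirty = 16777216  # 16 MB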

[ceph-users] adding osd node best practice

2015-03-07 Thread tombo
Hi guys, I have a few questions regarding adding another osd node to the cluster. I already have a production cluster with 7 mons and 72 osds; we mainly use librados to interact with objects stored in ceph. Our osds are 3TB WD disks and they reside on two servers (36 osds per server), so long story

[ceph-users] EC Pool and Cache Tier Tuning

2015-03-07 Thread Nick Fisk
Hi All, I have been experimenting with EC pools and cache tiers to make them more useful for more active data sets on RBD volumes, and I thought I would share my findings so far, as they have made quite a significant difference. My Ceph cluster comprises 4 nodes, each with the following: 12x2.1g
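For context, a hedged sketch of the kind of EC pool plus writeback cache tier setup being discussed (pool names, PG counts, k/m and tuning values are illustrative, not Nick's actual configuration):

    ceph osd erasure-code-profile set ecprofile k=4 m=2
    ceph osd pool create ecpool 1024 1024 erasure ecprofile
    ceph osd pool create cachepool 512 512
    ceph osd tier add ecpool cachepool
    ceph osd tier cache-mode cachepool writeback
    ceph osd tier set-overlay ecpool cachepool
    ceph osd pool set cachepool hit_set_type bloom
    ceph osd pool set cachepool hit_set_count 1
    ceph osd pool set cachepool hit_set_period 3600
    ceph osd pool set cachepool target_max_bytes 1000000000000   # ~1 TB, size to the cache tier's capacity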

Re: [ceph-users] Cascading Failure of OSDs

2015-03-07 Thread Quentin Hartman
Now that I have a better understanding of what's happening, I threw together a little one-liner to create a report of the errors that the OSDs are seeing. Lots of missing / corrupted pg shards: https://gist.github.com/qhartman/174cc567525060cb462e I've experimented with exporting / importing the
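For readers following along, a hedged sketch of the export/import workflow with ceph-objectstore-tool (OSD ids, paths and the pgid are illustrative; both OSDs must be stopped first, and on some older releases the tool is named ceph_objectstore_tool):

    # export a pg shard from a stopped source OSD
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 \
        --journal-path /var/lib/ceph/osd/ceph-12/journal \
        --pgid 3.1a --op export --file /tmp/3.1a.export

    # import it into a stopped destination OSD
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-7 \
        --journal-path /var/lib/ceph/osd/ceph-7/journal \
        --op import --file /tmp/3.1a.export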

Re: [ceph-users] problem with yum install ceph-deploy command

2015-03-07 Thread Travis Rhoden
Hi Khyati, On Sat, Mar 7, 2015 at 5:18 AM, khyati joshi wrote: > Hello ceph-users, > > I am new to ceph. I am using centos-5.11 (i386) for deploying ceph, > and epel-release-5.4.noarch.rpm is successfully installed. Ceph (and ceph-deploy) is not packaged for CentOS 5. You'll need to use 6 o
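A hedged sketch of the usual route on CentOS 6/7 (the repo baseurl depends on the Ceph release and distro version; the hammer/el7 path below is an example from that era and may need adapting):

    # /etc/yum.repos.d/ceph.repo
    [ceph-noarch]
    name=Ceph noarch packages
    baseurl=http://ceph.com/rpm-hammer/el7/noarch
    enabled=1
    gpgcheck=1
    gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

    # then:
    yum install ceph-deploy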

Re: [ceph-users] Cascading Failure of OSDs

2015-03-07 Thread Quentin Hartman
So I'm not sure what has changed, but in the last 30 minutes the errors, which were all over the place, have finally settled down to this: http://pastebin.com/VuCKwLDp The only thing I can think of is that I also set the noscrub flag in addition to nodeep-scrub when I first got here, and that f
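For anyone in the same situation, the flags in question and their inverse, pausing scrubbing cluster-wide while things recover:

    ceph osd set noscrub
    ceph osd set nodeep-scrub
    # later, once the cluster has settled:
    ceph osd unset noscrub
    ceph osd unset nodeep-scrub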

[ceph-users] correct hardware configuration

2015-03-07 Thread Andy Seltzer
We're a highly technical server & Linux partner, so if people are having problems caused by hardware configs that do not meet the best-case reference architecture for Ceph, feel free to contact us. Thanks, Andy Andy Seltzer Even Enterprises aselt...@

Re: [ceph-users] correct hardware configuration

2015-03-07 Thread Mark Nelson
Hi Andy, Please do feel free to share any hardware guidance you'd like to offer the Ceph community on the list. :) Mark On 03/07/2015 04:10 PM, Andy Seltzer wrote: We’re a highly technical server & Linux partner so if people are having problems caused by hardware configs that do not meet the

Re: [ceph-users] Prioritize Heartbeat packets

2015-03-07 Thread Daniel Swarbrick
Judging by the commit, this ought to do the trick: osd heartbeat use min delay socket = true On 07/03/15 01:20, Robert LeBlanc wrote: I see that Jian Wen has done work on this for 0.94. I tried looking through the code to see if I can figure out how to configure this new option, but it all went
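That is, a minimal ceph.conf sketch (assuming Ceph 0.94 or later, where the option exists):

    [osd]
    osd heartbeat use min delay socket = true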

Re: [ceph-users] Firefly, cephfs issues: different unix rights depending on the client and ls are slow

2015-03-07 Thread Francois Lafont
Hello, Thanks to Jcsp (John Spray, I guess) who helped me on IRC. On 06/03/2015 04:04, Francois Lafont wrote: >> ~# mkdir /cephfs >> ~# mount -t ceph 10.0.2.150,10.0.2.151,10.0.2.152:/ /cephfs/ -o >> name=cephfs,secretfile=/etc/ceph/ceph.client.cephfs.secret >> >> Then in ceph-testfs, I do: >>
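For a persistent kernel-client mount, a hedged fstab sketch reusing the monitors, name and secretfile from the thread (the extra mount options are illustrative):

    # /etc/fstab
    10.0.2.150,10.0.2.151,10.0.2.152:/  /cephfs  ceph  name=cephfs,secretfile=/etc/ceph/ceph.client.cephfs.secret,noatime,_netdev  0 0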

Re: [ceph-users] adding osd node best practice

2015-03-07 Thread Anthony D'Atri
1) That's an awful lot of mons. Are they VMs or something? My sense is that mons >5 have diminishing returns at best.
2) Only two OSD nodes? Assume you aren't running 3 copies of data or racks.
3) The new nodes will have fewer OSDs? Be careful with host / OSD weighting to avoid a gro
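On point 3, a hedged sketch of the gradual-weighting approach when bringing a new node's OSDs in (the OSD id, weights and throttle values are illustrative):

    # add the new OSD with a low crush weight, then ramp up in steps
    ceph osd crush reweight osd.72 0.2
    # ... wait for backfill to settle, then step towards the final weight (e.g. ~2.73 for a 3 TB disk)
    ceph osd crush reweight osd.72 1.0
    # optionally throttle recovery while data moves
    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'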