RDMA/Infiniband

2015-12-07 Thread Gandalf Corvotempesta
Hi to all. Any update about InfiniBand / RDMA support in the latest Ceph version? I have read that RDMA support was planned some time ago; is it now supported?

Re: [ceph-users] Red Hat to acquire Inktank

2014-04-30 Thread Gandalf Corvotempesta
2014-05-01 0:20 GMT+02:00 Matt W. Benjamin:
> Hi,
>
> Sure, that's planned for integration in Giant (see Blueprints).

Great. Any ETA? Firefly was planned for February :)

Re: [ceph-users] Red Hat to acquire Inktank

2014-04-30 Thread Gandalf Corvotempesta
2014-05-01 0:11 GMT+02:00 Mark Nelson:
> Usable is such a vague word. I imagine it's testable after a fashion. :D

Ok, but I'd prefer "official" support, with IB integrated in the main ceph repo.

Re: [ceph-users] Red Hat to acquire Inktank

2014-04-30 Thread Gandalf Corvotempesta
2014-04-30 22:27 GMT+02:00 Mark Nelson:
> Check out the xio work that the linuxbox/mellanox folks are working on.
> Matt Benjamin has posted quite a bit of info to the list recently!

Is that usable?

Re: [ceph-users] Red Hat to acquire Inktank

2014-04-30 Thread Gandalf Corvotempesta
2014-04-30 14:18 GMT+02:00 Sage Weil:
> Today we are announcing some very big news: Red Hat is acquiring Inktank.

Great news. Any chance of getting native InfiniBand support in ceph, like in GlusterFS?

Re: Ceph Ansible Repo

2014-03-06 Thread Gandalf Corvotempesta
2014-03-06 17:31 GMT+01:00 Sebastien Han:
> They can both host journals, and you usually want to manage them with lvm,
> this is easier than managing partition.
> I just open an issue as a feature/enhancement request.

Good idea! But first, check my request :-D When implemented I'll try the ansib

Re: Ceph Ansible Repo

2014-03-06 Thread Gandalf Corvotempesta
2014-03-06 13:07 GMT+01:00 David McBride:
> This causes the IO load to be nicely balanced across the two SSDs,
> removing any hot spots, at the cost of enlarging the failure domain of
> the loss of an SSD from half a node to a full node.

This is not a solution for me. Why not use LVM with a VG
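For illustration, one way such an LVM layout could be carved up is sketched below; the device names, VG name, and journal sizes are hypothetical, not a recommendation from the thread:

    # one VG spanning both journal SSDs, one small LV per OSD journal
    pvcreate /dev/sda /dev/sdb
    vgcreate journals /dev/sda /dev/sdb
    for i in $(seq 0 11); do
        lvcreate -L 10G -n journal-osd-$i journals
    done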

Re: Ceph Ansible Repo

2014-03-06 Thread Gandalf Corvotempesta
2014-03-06 11:42 GMT+01:00 Sebastien Han:
> Cool :)
>
> Yes you will be able to pre-provision all the disks.
> During the next run it will retry all the disks, some will be installed some
> will not therefore it will configure them :).

Perfect. Now the only missing thing is https://github.com/ce

Re: Ceph Ansible Repo

2014-03-06 Thread Gandalf Corvotempesta
2014-03-06 11:14 GMT+01:00 Sebastien Han:
> No this is currently not supported.
> We can only select on drive per server.
> Can you submit an issue on Github for enhancement?

I'll do in a couple of minutes.

> Yes they must be identical. If a device doesn't exist Ansible will simply
> hang on th

Re: Ceph Ansible Repo

2014-03-06 Thread Gandalf Corvotempesta
Hi to all. Some questions about the ansible repo.

1. Is it possible to specify multiple journal devices for each OSD node? We have OSD nodes with 12 spinning disks and 2 SSDs as journals. We would like to use SSD1 for OSDs 1 to 6 and SSD2 for OSDs 7 to 12. Is this possible? (A hypothetical sketch of such a mapping follows below.)

2. With this repo, each OSD node must be
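For illustration only, the mapping described in point 1 might be expressed roughly like this in a per-host variables file; the variable names below are hypothetical and are not the actual ceph-ansible interface:

    # host_vars/osd-node1.yml -- hypothetical variable names, illustrative devices
    osd_journal_layout:
      - { data: /dev/sdc, journal: /dev/sda }   # OSDs 1-6  -> SSD1
      - { data: /dev/sdd, journal: /dev/sda }
      - { data: /dev/sde, journal: /dev/sdb }   # OSDs 7-12 -> SSD2
      - { data: /dev/sdf, journal: /dev/sdb }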

Re: [ceph-users] Help needed porting Ceph to RSockets

2014-02-05 Thread Gandalf Corvotempesta
2013-10-31 Hefty, Sean:
> Can you please try the attached patch in place of all previous patches?

Any updates on ceph with rsockets?

Re: creation of the puppet-ceph repository

2014-01-24 Thread Gandalf Corvotempesta
2014/1/24 Loic Dachary:
> Hi,
>
> At the moment the module can only be used to deploy mons. It's making slow
> progress but a number of patches went in Ceph and should significantly
> simplify writing the puppet module, starting in Firefly.

Based on docs, OSD seems to be deployable from puppet

Re: creation of the puppet-ceph repository

2014-01-24 Thread Gandalf Corvotempesta
2013/12/14 Loic Dachary:
> Hi,
>
> Could someone with sufficient permission please create the puppet-ceph
> repository (empty, no README or .gitignore) at http://github.com/ceph/ ?
>
> A few weeks ago it was decided to host the development of the module on
> stackforge ( https://github.com/stack

Re: [ceph-users] Help needed porting Ceph to RSockets

2013-09-16 Thread Gandalf Corvotempesta
2013/9/12 Andreas Bluemle:
> I have not yet done any performance testing.
>
> The next step I have to take is more related to setting up
> a larger cluster with sth. like 150 osd's without hitting any
> resource limitations.

How do you manage failover? Will you use multiple HBAs (or dual-port HBA

Re: [ceph-users] Help needed porting Ceph to RSockets

2013-09-12 Thread Gandalf Corvotempesta
2013/9/10 Andreas Bluemle:
> Since I have added these workarounds to my version of the librdmacm
> library, I can at least start up ceph using LD_PRELOAD and end up in
> a healthy ceph cluster state.

Have you seen any performance improvement by using LD_PRELOAD with ceph? Which throughput are you
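For anyone trying to reproduce that setup: the rsockets preload library shipped with librdmacm is typically loaded roughly like this (a sketch only; the library path varies by distribution and the OSD id is illustrative):

    # path differs per distro, often under an rsocket/ subdirectory of the lib dir
    LD_PRELOAD=/usr/lib64/rsocket/librspreload.so ceph-osd -i 0 -c /etc/ceph/ceph.conf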

Re: [ceph-users] RBD vs RADOS benchmark performance

2013-05-13 Thread Gandalf Corvotempesta
2013/5/13 Greg:
> thanks a lot for pointing this out, it indeed makes a *huge* difference !
>
>> # dd if=/mnt/t/1 of=/dev/zero bs=4M count=100
>> 100+0 records in
>> 100+0 records out
>> 419430400 bytes (419 MB) copied, 5.12768 s, 81.8 MB/s
>
> (caches dropped before each test of course)

What
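For reference, "caches dropped" in the quote normally means flushing the page cache on the client before each run, so dd measures the storage path rather than RAM, e.g.:

    sync
    echo 3 > /proc/sys/vm/drop_caches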

Re: Request for ceph.conf environment extension

2013-05-06 Thread Gandalf Corvotempesta
2013/5/6 Mark Nelson:
> On the front end
> portion of the network you'll always have client<->server communication, so
> the pattern there will be less all-to-all than the backend traffic (more 1st
> half <-> 2nd half).

What do you suggest for the frontend portion? 10GbE or 2Gb (2x 1GbE bonded t
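Should the 2x 1GbE route be taken, a bonded front-end interface on Debian/Ubuntu might look roughly like this (a sketch assuming the ifenslave package, illustrative addresses, and a switch that supports 802.3ad):

    # /etc/network/interfaces (excerpt)
    auto bond0
    iface bond0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        bond-slaves eth0 eth1
        bond-mode 802.3ad
        bond-miimon 100
        bond-xmit-hash-policy layer3+4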

Re: Request for ceph.conf environment extension

2013-05-06 Thread Gandalf Corvotempesta
2013/5/6 Jens Kristian Søgaard:
> I was actually going to put both the public and private network on the same
> 10 GbE. Why do you think I need more features?

I'm just supposing; I'm also evaluating the same network topology as you. There is a low-cost 12-port 10GBase-T switch from Netgear. I'm al
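For reference, whether the public and cluster traffic share one 10GbE segment or are split is controlled in ceph.conf; a minimal sketch with illustrative subnets:

    [global]
        public network  = 192.168.10.0/24   # client <-> mon/osd traffic
        cluster network = 192.168.10.0/24   # replication traffic; point this at a second subnet to split it out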

Re: Request for ceph.conf environment extension

2013-05-06 Thread Gandalf Corvotempesta
2013/5/6 Jens Kristian Søgaard:
> So an 8 port switch would be approx. 920$ USD.
>
> I'm looking at just a bare-bones switchs that does VLANs, jumbo frames and
> port trunking. The network would be used exclusively for Ceph.

You should also consider 10GbE for the public network, and there you sho

Re: Request for ceph.conf environment extension

2013-05-06 Thread Gandalf Corvotempesta
2013/5/6 Jens Kristian Søgaard:
> My costs for a cobber-based 10 GbE setup per server would be approximate:
>
> Switch: 115$
> NIC: 345$
> Cable: 2$

115$ for a 10 GbE switch? Which kind of switch?

Re: Request for ceph.conf environment extension

2013-05-06 Thread Gandalf Corvotempesta
2013/5/6 Mark Nelson:
> It would be very interesting to hear how SDP does. With IPoIB I've gotten
> about 2GB/s on QDR with Ceph, which is roughly also what I can get in an
> ideal round-robin setup with 2 bonded 10GbE links.

Yes, but IB costs 1/4 of what 10GbE does and will be much more expandable in the future
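For comparison, IPoIB throughput on QDR depends heavily on the transport mode and MTU; a typical tuning sketch (interface name illustrative):

    # connected mode allows a much larger MTU than datagram mode
    echo connected > /sys/class/net/ib0/mode
    ip link set ib0 mtu 65520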

Re: Request for ceph.conf environment extension

2013-05-06 Thread Gandalf Corvotempesta
2013/5/6 Andreas Friedrich:
> To enable the LD_PRELOAD mechanism for the Ceph daemons only, a little
> generic extension in the global section of /etc/ceph/ceph.conf would
> be helpful, e.g.:
>
> [global]
> environment = LD_PRELOAD=/usr/lib64/libsdp.so.1
>
> The appending patch adds 5 line
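Whichever way the preload ends up being set, it can be checked against a running daemon; a sketch, assuming a single local ceph-osd:

    # confirm the variable actually reached the daemon's environment
    tr '\0' '\n' < /proc/$(pidof -s ceph-osd)/environ | grep LD_PRELOAD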

Re: ceph and efficient access of distributed resources

2013-04-17 Thread Gandalf Corvotempesta
On 16 Apr 2013 at 22:44, "Mark Kampe" wrote:
>
> The client does a 12MB read, which (because of the striping)
> gets broken into 3 separate 4MB reads, each of which is sent,
> all in parallel, to 3 distinct OSDs. The only bottle-neck
> in such an operation is the client-NIC.

Thank you,
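For context, the 12 MB -> 3 x 4 MB split in the quote comes from RBD's default object size of 4 MB (order 22), chosen at image creation time; a sketch, assuming the default rbd pool:

    # create a 10 GB image striped over 4 MB objects (the default, order 22)
    rbd create test --size 10240 --order 22
    # a 12 MB sequential read then spans objects 0, 1 and 2
    # (offsets 0-4M, 4M-8M, 8M-12M), fetched from their OSDs in parallel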

Re: ceph and efficient access of distributed resources

2013-04-16 Thread Gandalf Corvotempesta
2013/4/16 Mark Kampe:
> RADOS is the underlying storage cluster, but the access methods (block,
> object, and file) stripe their data across many RADOS objects, which
> CRUSH very effectively distributes across all of the servers. A 100MB
> read or write turns into dozens of parallel operations t

Re: ceph and efficient access of distributed resources

2013-04-16 Thread Gandalf Corvotempesta
2013/4/16 Mark Kampe:
> The entire web is richly festooned with cache servers whose
> sole raison d'etre is to solve precisely this problem. They
> are so good at it that back-bone providers often find it more
> cash-efficient to buy more cache servers than to lay more
> fiber.

Cache servers don

Re: ceph and efficient access of distributed resources

2013-04-15 Thread Gandalf Corvotempesta
2013/4/12 Mark Nelson:
> Currently reads always come from the primary OSD in the placement group
> rather than a secondary even if the secondary is closer to the client.

In this way, only one OSD will be involved in reading an object; this will result in a bottleneck if multiple clients need t