Corey,
Make sure to test with direct I/O, otherwise caching can give you
unrealistic expectations of your actual throughput. Typically, using the IPoIB
driver is not recommended with InfiniBand, since you will introduce unnecessary
overhead via TCP.
Knowing how you have Gluster configured [...]
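A minimal sketch of the kind of direct-I/O test meant here; the mount point
/mnt/gluster, the sizes, and the volume/server names are placeholders, not
details taken from this thread:

    # Sequential write that bypasses the page cache (oflag=direct)
    dd if=/dev/zero of=/mnt/gluster/testfile bs=1M count=4096 oflag=direct

    # The same test with fio, which also reports latency
    fio --name=seqwrite --filename=/mnt/gluster/testfile \
        --rw=write --bs=1M --size=4g --direct=1

    # To avoid IPoIB entirely, a volume can be created with the native
    # RDMA transport instead of TCP (volume and brick names are examples)
    gluster volume create testvol transport rdma \
        server1:/export/brick server2:/export/brick server3:/export/brick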
From: Fernando Frediani (Qube)
Sent: Monday, September 10, 2012 8:14 AM
To: 'Stephan von Krawczynski'; 'Whit Blauvelt'
Cc: 'gluster-users@gluster.org'; 'Brian Candler'
Subject: Re: [Gluster-users] Throughout over infiniband
Well, I would say there is a reason, if t[...]
On 09/10/2012 08:56 AM, Stephan von Krawczynski wrote:
> On Mon, 10 Sep 2012 08:06:51 -0400
> Whit Blauvelt wrote:
> > On Mon, Sep 10, 2012 at 11:13:11AM +0200, Stephan von Krawczynski wrote:
> > > [...]
> > > If you're lucky you reach something like 1/3 of the NFS
> > > performance.
> > [Gluster NFS Client]
> > Whit
Subject: Re: [Gluster-users] Throughout over infiniband
On Mon, 10 Sep 2012 08:06:51 -0400
Whit Blauvelt wrote:
> On Mon, Sep 10, 2012 at 11:13:11AM +0200, Stephan von Krawczynski wrote:
> > [...]
> > If you're lucky you reach something like 1/3 of the NFS performance.
> [Gluster NFS Client]
> Whit
There is a reason why one would switch from NFS [...]
On Mon, Sep 10, 2012 at 11:13:11AM +0200, Stephan von Krawczynski wrote:
> If you have small files you are busted, if you have workload on the clients
> you are busted, and if you have lots of concurrent FS action on the client you
> are busted. Which leaves you with test cases nowhere near real life. [...]
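To make the small-file point concrete, a sketch of that kind of workload;
the path and file count are placeholder values:

    # 10,000 4 KB files: per-file create/lookup round-trips dominate,
    # so metadata latency, not raw bandwidth, sets the pace
    mkdir -p /mnt/gluster/smallfiles
    for i in $(seq 1 10000); do
        dd if=/dev/zero of=/mnt/gluster/smallfiles/f$i bs=4k count=1 2>/dev/null
    done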
On Mon, 10 Sep 2012 09:44:26 +0100
Brian Candler wrote:
> On Mon, Sep 10, 2012 at 10:03:14AM +0200, Stephan von Krawczynski wrote:
> > > Yes - so in workloads where you have many concurrent clients, this isn't a
> > > problem. It's only a problem if you have a single client doing a lot of
> > > sequential operations.
On Mon, Sep 10, 2012 at 10:03:14AM +0200, Stephan von Krawczynski wrote:
> > Yes - so in workloads where you have many concurrent clients, this isn't a
> > problem. It's only a problem if you have a single client doing a lot of
> > sequential operations.
>
> That is not correct for most cases. Gl[...]
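To illustrate the single-client vs. many-clients distinction, one way to
drive the same direct-I/O write from several clients at once; the hostnames
and paths here are hypothetical:

    # Start the identical sequential write on two clients simultaneously,
    # then compare the per-client figures against a single-client run
    for h in client1 client2; do
        ssh "$h" 'dd if=/dev/zero of=/mnt/gluster/test.$HOSTNAME bs=1M count=4096 oflag=direct' &
    done
    wait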
On Mon, 10 Sep 2012 08:48:03 +0100
Brian Candler wrote:
> On Sun, Sep 09, 2012 at 09:28:47PM +0100, Andrei Mikhailovsky wrote:
> > While trying to figure out the cause of the bottleneck I've realised
> > that the bottleneck is coming from the client side, as running a
> > concurrent test from two clients would give me about 650MB/s per client.
On Sun, Sep 09, 2012 at 09:28:47PM +0100, Andrei Mikhailovsky wrote:
> While trying to figure out the cause of the bottleneck I've realised
> that the bottleneck is coming from the client side, as running a
> concurrent test from two clients would give me about 650MB/s per client.
Ok, now you can see why I am talking about dropping the long-gone unix
versions (BSD/Solaris/name-one) and concentrating on a Linux kernel module
for glusterfs without the FUSE overhead. It is the _only_ way to make this
project a really successful one. Everything happening now is just a project pr[...]
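For reference, the two client paths being compared in this thread; a sketch
with hypothetical server and volume names, using mount styles from the
GlusterFS 3.x era:

    # Native client: gluster protocol via FUSE (extra user/kernel crossings)
    mount -t glusterfs server1:/testvol /mnt/gluster

    # Gluster's built-in NFS server: plain kernel NFSv3 client, no FUSE
    mount -t nfs -o vers=3,tcp server1:/testvol /mnt/gluster-nfs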
[...] glusterfs to perform worse than nfs.
I would be grateful if anyone would share their experience with glusterfs over
InfiniBand and their tips on improving performance.
cheers
Andrei
----- Original Message -----
From: "Corey Kovacs"
To: gluster-users@gluster.org
Sent: Friday, 7 September 2012 [...]
Folks,
I finally got my hands on a 4x FDR (56Gb) InfiniBand switch and 4 cards to
do some testing of GlusterFS over that interface.
So far, I am not getting the throughput I _think_ I should see.
My config is made up of:
4 DL360 G8s (three bricks and one client)
4 4x FDR dual-port IB cards (o[...]
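Before blaming Gluster, it is worth confirming what the raw IB link delivers;
a sketch using the standard OFED diagnostics (the hostname is a placeholder):

    # Check that the ports actually negotiated 4x FDR (Rate: 56)
    ibstat

    # Measure raw RDMA write bandwidth with the perftest tools:
    ib_write_bw              # on one host (server mode)
    ib_write_bw server1      # on the other host, pointing at the first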