Re: [Gluster-users] Infiniband performance issues answered?

2012-12-18 Thread Sabuj Pattanayek
I think qperf just writes to and from memory on both systems, so that it
tests the network rather than the disks, and then tosses the packets
away.
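
To illustrate the idea, here's a minimal loopback sketch of that kind of
memory-to-memory test (a hypothetical illustration of the technique, not
qperf's actual implementation; the host, port, buffer size, and duration
are made-up values): the sender streams a buffer straight out of memory,
and the receiver reads into a scratch buffer and discards the bytes, so
no disk is ever touched.

# qperf-style memory-to-memory throughput sketch (hypothetical, not qperf's code)
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 50007   # assumed loopback endpoint
CHUNK = 1 << 20                   # 1 MiB buffer, reused for every send/recv
DURATION = 3                      # seconds to stream

def server():
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        buf = bytearray(CHUNK)
        with conn:
            # Read into the same scratch buffer and toss the bytes away.
            while conn.recv_into(buf):
                pass

def client():
    payload = b"\0" * CHUNK       # the data comes straight from memory
    sent = 0
    with socket.create_connection((HOST, PORT)) as conn:
        deadline = time.time() + DURATION
        while time.time() < deadline:
            conn.sendall(payload)
            sent += CHUNK
    print(f"~{sent / DURATION / 1e9:.2f} GB/s over loopback")

threading.Thread(target=server, daemon=True).start()
time.sleep(0.2)                   # crude wait for the listener to come up
client()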

On Tue, Dec 18, 2012 at 3:34 AM, Andrew Holway wrote:
>
> On Dec 18, 2012, at 2:15 AM, Sabuj Pattanayek wrote:
>
>> I have R610s with a similar setup but with HT turned on, and I'm
>> getting 3.5GB/s for one-way RDMA tests between two QDR-connected
>> clients using Mellanox ConnectX x4 PCI-E cards in x8 slots, and 1GB/s
>> with IPoIB connections (which seem to be limited to 10GbE). Note: I had
>> problems with the 1.x branch of OFED and am using the latest 3.x RC.
>
> What are you writing to and from?
>
>
>
>>
>> On Mon, Dec 17, 2012 at 6:44 PM, Joe Julian wrote:
>>> In IRC today, someone who was hitting that same IB performance ceiling that
>>> occasionally gets reported had this to say:
>>>
>>> [11:50]  first, I ran Fedora, which is not supported by the Mellanox OFED
>>> distro
>>> [11:50]  so I moved to CentOS 6.3
>>> [11:51]  next I removed all distribution-related InfiniBand RPMs and
>>> built the latest OFED package
>>> [11:52]  disabled ServerSpeed service
>>> [11:52]  disabled BIOS hyperthreading
>>> [11:52]  disabled BIOS power mgmt
>>> [11:53]  ran ib_write_test and got 5000MB/s
>>> [11:53]  got 5000MB/s on localhost
>>>
>>> fwiw, if someone's encountering that issue, between this and the changes
>>> since 3.4.0qa5 it might be worth knowing about.
>>>
>>> http://irclog.perlgeek.de/gluster/2012-12-17#i_6251387


Re: [Gluster-users] Infiniband performance issues answered?

2012-12-18 Thread Andrew Holway

On Dec 18, 2012, at 2:15 AM, Sabuj Pattanayek wrote:

> I have R610s with a similar setup but with HT turned on, and I'm
> getting 3.5GB/s for one-way RDMA tests between two QDR-connected
> clients using Mellanox ConnectX x4 PCI-E cards in x8 slots, and 1GB/s
> with IPoIB connections (which seem to be limited to 10GbE). Note: I had
> problems with the 1.x branch of OFED and am using the latest 3.x RC.

What are you writing to and from? 



> 
> On Mon, Dec 17, 2012 at 6:44 PM, Joe Julian wrote:
>> In IRC today, someone who was hitting that same IB performance ceiling that
>> occasionally gets reported had this to say:
>> 
>> [11:50]  first, I ran Fedora, which is not supported by the Mellanox OFED
>> distro
>> [11:50]  so I moved to CentOS 6.3
>> [11:51]  next I removed all distribution-related InfiniBand RPMs and
>> built the latest OFED package
>> [11:52]  disabled ServerSpeed service
>> [11:52]  disabled BIOS hyperthreading
>> [11:52]  disabled BIOS power mgmt
>> [11:53]  ran ib_write_test and got 5000MB/s
>> [11:53]  got 5000MB/s on localhost
>> 
>> fwiw, if someone's encountering that issue, between this and the changes
>> since 3.4.0qa5 it might be worth knowing about.
>> 
>> http://irclog.perlgeek.de/gluster/2012-12-17#i_6251387


Re: [Gluster-users] Infiniband performance issues answered?

2012-12-18 Thread Bryan Whitehead
Sorry, I meant to ask whether someone had the latest OFED packages bundled into
RPMs for CentOS 6.3 (or Red Hat 6.3).


On Mon, Dec 17, 2012 at 8:48 PM, Bryan Whitehead wrote:

> Does anyone have 3.4.0qa5 RPMs available? I'd like to give them a whirl.
>
>
> On Mon, Dec 17, 2012 at 5:17 PM, Sabuj Pattanayek wrote:
>
>> And yes, on some Dells you'll get strange network and RAID controller
>> performance characteristics if you turn on BIOS power management.
>>
>> On Mon, Dec 17, 2012 at 7:15 PM, Sabuj Pattanayek wrote:
>> > I have R610s with a similar setup but with HT turned on, and I'm
>> > getting 3.5GB/s for one-way RDMA tests between two QDR-connected
>> > clients using Mellanox ConnectX x4 PCI-E cards in x8 slots, and 1GB/s
>> > with IPoIB connections (which seem to be limited to 10GbE). Note: I had
>> > problems with the 1.x branch of OFED and am using the latest 3.x RC.
>> >
>> > On Mon, Dec 17, 2012 at 6:44 PM, Joe Julian wrote:
>> >> In IRC today, someone who was hitting that same IB performance ceiling that
>> >> occasionally gets reported had this to say:
>> >>
>> >> [11:50]  first, I ran Fedora, which is not supported by the Mellanox OFED
>> >> distro
>> >> [11:50]  so I moved to CentOS 6.3
>> >> [11:51]  next I removed all distribution-related InfiniBand RPMs and
>> >> built the latest OFED package
>> >> [11:52]  disabled ServerSpeed service
>> >> [11:52]  disabled BIOS hyperthreading
>> >> [11:52]  disabled BIOS power mgmt
>> >> [11:53]  ran ib_write_test and got 5000MB/s
>> >> [11:53]  got 5000MB/s on localhost
>> >>
>> >> fwiw, if someone's encountering that issue, between this and the changes
>> >> since 3.4.0qa5 it might be worth knowing about.
>> >>
>> >> http://irclog.perlgeek.de/gluster/2012-12-17#i_6251387

Re: [Gluster-users] Infiniband performance issues answered?

2012-12-17 Thread Bryan Whitehead
Does anyone have 3.4.0qa5 RPMs available? I'd like to give them a whirl.


On Mon, Dec 17, 2012 at 5:17 PM, Sabuj Pattanayek wrote:

> And yes, on some Dells you'll get strange network and RAID controller
> performance characteristics if you turn on BIOS power management.
>
> On Mon, Dec 17, 2012 at 7:15 PM, Sabuj Pattanayek wrote:
> > I have R610s with a similar setup but with HT turned on, and I'm
> > getting 3.5GB/s for one-way RDMA tests between two QDR-connected
> > clients using Mellanox ConnectX x4 PCI-E cards in x8 slots, and 1GB/s
> > with IPoIB connections (which seem to be limited to 10GbE). Note: I had
> > problems with the 1.x branch of OFED and am using the latest 3.x RC.
> >
> > On Mon, Dec 17, 2012 at 6:44 PM, Joe Julian wrote:
> >> In IRC today, someone who was hitting that same IB performance ceiling that
> >> occasionally gets reported had this to say:
> >>
> >> [11:50]  first, I ran Fedora, which is not supported by the Mellanox OFED
> >> distro
> >> [11:50]  so I moved to CentOS 6.3
> >> [11:51]  next I removed all distribution-related InfiniBand RPMs and
> >> built the latest OFED package
> >> [11:52]  disabled ServerSpeed service
> >> [11:52]  disabled BIOS hyperthreading
> >> [11:52]  disabled BIOS power mgmt
> >> [11:53]  ran ib_write_test and got 5000MB/s
> >> [11:53]  got 5000MB/s on localhost
> >>
> >> fwiw, if someone's encountering that issue, between this and the changes
> >> since 3.4.0qa5 it might be worth knowing about.
> >>
> >> http://irclog.perlgeek.de/gluster/2012-12-17#i_6251387

Re: [Gluster-users] Infiniband performance issues answered?

2012-12-17 Thread Sabuj Pattanayek
And yes, on some Dells you'll get strange network and RAID controller
performance characteristics if you turn on BIOS power management.
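
If you want to spot-check that from a running box rather than rebooting
into the BIOS, the Linux cpufreq sysfs files are one place power
management shows up. A hedged sketch (standard sysfs paths assumed; not
every kernel or driver exposes all of these files):

# Compare current vs. maximum core frequency to spot power-management clocking.
from pathlib import Path

def khz(p: Path) -> int:
    return int(p.read_text().strip())

for cpu in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*")):
    freq = cpu / "cpufreq"
    if not freq.is_dir():
        continue                      # cpufreq not exposed for this CPU
    governor = (freq / "scaling_governor").read_text().strip()
    cur, top = khz(freq / "scaling_cur_freq"), khz(freq / "cpuinfo_max_freq")
    note = "" if cur >= 0.95 * top else "  <-- clocked down"
    print(f"{cpu.name}: {governor} {cur / 1e6:.2f}/{top / 1e6:.2f} GHz{note}")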

On Mon, Dec 17, 2012 at 7:15 PM, Sabuj Pattanayek wrote:
> I have R610s with a similar setup but with HT turned on, and I'm
> getting 3.5GB/s for one-way RDMA tests between two QDR-connected
> clients using Mellanox ConnectX x4 PCI-E cards in x8 slots, and 1GB/s
> with IPoIB connections (which seem to be limited to 10GbE). Note: I had
> problems with the 1.x branch of OFED and am using the latest 3.x RC.
>
> On Mon, Dec 17, 2012 at 6:44 PM, Joe Julian wrote:
>> In IRC today, someone who was hitting that same IB performance ceiling that
>> occasionally gets reported had this to say:
>>
>> [11:50]  first, I ran Fedora, which is not supported by the Mellanox OFED
>> distro
>> [11:50]  so I moved to CentOS 6.3
>> [11:51]  next I removed all distribution-related InfiniBand RPMs and
>> built the latest OFED package
>> [11:52]  disabled ServerSpeed service
>> [11:52]  disabled BIOS hyperthreading
>> [11:52]  disabled BIOS power mgmt
>> [11:53]  ran ib_write_test and got 5000MB/s
>> [11:53]  got 5000MB/s on localhost
>>
>> fwiw, if someone's encountering that issue, between this and the changes
>> since 3.4.0qa5 it might be worth knowing about.
>>
>> http://irclog.perlgeek.de/gluster/2012-12-17#i_6251387


Re: [Gluster-users] Infiniband performance issues answered?

2012-12-17 Thread Sabuj Pattanayek
I have R610s with a similar setup but with HT turned on, and I'm
getting 3.5GB/s for one-way RDMA tests between two QDR-connected
clients using Mellanox ConnectX x4 PCI-E cards in x8 slots, and 1GB/s
with IPoIB connections (which seem to be limited to 10GbE). Note: I had
problems with the 1.x branch of OFED and am using the latest 3.x RC.
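
Those numbers line up with the back-of-the-envelope link math. A quick
sanity check (assuming QDR signaling at 10Gb/s per lane with 8b/10b
encoding on a 4X link, and roughly 500MB/s of usable bandwidth per
PCIe 2.0 lane):

# Rough ceilings for the hardware described above (assumptions noted inline).
qdr_4x_GBps = 10 * (8 / 10) * 4 / 8     # 10Gb/s/lane * 8b/10b * 4 lanes -> GB/s
pcie2_x8_GBps = 0.5 * 8                 # ~500MB/s usable per PCIe 2.0 lane
tengbe_GBps = 10 / 8                    # 10GbE line rate in GB/s

print(f"QDR 4X data rate    ~{qdr_4x_GBps:.1f} GB/s")    # ~4.0 GB/s
print(f"PCIe 2.0 x8 ceiling ~{pcie2_x8_GBps:.1f} GB/s")  # ~4.0 GB/s
print(f"10GbE ceiling       ~{tengbe_GBps:.2f} GB/s")    # ~1.25 GB/s

So ~3.5GB/s one-way RDMA is close to both the link and slot ceilings,
and an IPoIB path behaving like 10GbE would top out near 1.25GB/s,
which matches the ~1GB/s observation.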

On Mon, Dec 17, 2012 at 6:44 PM, Joe Julian wrote:
> In IRC today, someone who was hitting that same IB performance ceiling that
> occasionally gets reported had this to say:
>
> [11:50]  first, I ran Fedora, which is not supported by the Mellanox OFED
> distro
> [11:50]  so I moved to CentOS 6.3
> [11:51]  next I removed all distribution-related InfiniBand RPMs and
> built the latest OFED package
> [11:52]  disabled ServerSpeed service
> [11:52]  disabled BIOS hyperthreading
> [11:52]  disabled BIOS power mgmt
> [11:53]  ran ib_write_test and got 5000MB/s
> [11:53]  got 5000MB/s on localhost
>
> fwiw, if someone's encountering that issue, between this and the changes
> since 3.4.0qa5 it might be worth knowing about.
>
> http://irclog.perlgeek.de/gluster/2012-12-17#i_6251387


[Gluster-users] Infiniband performance issues answered?

2012-12-17 Thread Joe Julian
In IRC today, someone who was hitting that same IB performance ceiling
that occasionally gets reported had this to say:


[11:50]  first, I ran Fedora, which is not supported by the Mellanox
OFED distro
[11:50]  so I moved to CentOS 6.3
[11:51]  next I removed all distribution-related InfiniBand RPMs
and built the latest OFED package
[11:52]  disabled ServerSpeed service
[11:52]  disabled BIOS hyperthreading
[11:52]  disabled BIOS power mgmt
[11:53]  ran ib_write_test and got 5000MB/s
[11:53]  got 5000MB/s on localhost

fwiw, if someone's encountering that issue, between this and the changes 
since 3.4.0qa5 it might be worth knowing about.


http://irclog.perlgeek.de/gluster/2012-12-17#i_6251387
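
For anyone wanting to repeat that last measurement, the bandwidth test
in the perftest package is ib_write_bw, which is presumably what
"ib_write_test" in the log refers to. A minimal hedged sketch (the
server hostname is a placeholder, and the peer must already be running
ib_write_bw with no arguments):

# Run perftest's RDMA write bandwidth test against a peer and show its table.
import subprocess

SERVER = "ib-server.example"    # hypothetical peer running `ib_write_bw`

result = subprocess.run(["ib_write_bw", SERVER],
                        capture_output=True, text=True, check=True)
print(result.stdout)            # output includes a "BW average[MB/sec]" column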