Re: [Gluster-users] 40 gig ethernet

2013-06-21 Thread Maik Kulbe
On 21 Jun 2013, at 14:00, Shawn Nock wrote: And why do makers of RAID cards make it so hard to update firmware? They persist in requiring DOS, Java or even Windows; I almost always have to resort to some unsupported hack in order to get updates done on Linux. I'm pretty sure with the 3ware con
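
For what it's worth, the 3ware/LSI cards can at least be queried from Linux with the stock tw_cli utility before and after a flash; a minimal sketch (the controller ID /c0 is an assumption, and the flash itself still needs the vendor's firmware image):

  # list the controllers tw_cli can see
  tw_cli show

  # dump controller details and pick out the running firmware version
  tw_cli /c0 show all | grep -i firmware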

Re: [Gluster-users] 40 gig ethernet

2013-06-21 Thread Marcus Bointon
On 21 Jun 2013, at 14:00, Shawn Nock wrote: > I had to keep a stock of spares in-house until I migrated to 3ware (now > LSI). I haven't had any trouble with these cards in several years (and > haven't needed to RMA or contact support). I've got a 3Ware 9650SE-8LPML SATA RAID controller that's

Re: [Gluster-users] 40 gig ethernet

2013-06-20 Thread Bryan Whitehead
Weird, I have a bunch of servers with Areca ARC-1680 8-port cards and they have never given me a problem. The first thing I did was update the firmware to the latest - my brand new cards shipped with two-year-old firmware and didn't recognize disks > 1TB. On Thu, Jun 20, 2013 at 7:11 AM, Shawn Nock wrote: >

Re: [Gluster-users] 40 gig ethernet

2013-06-20 Thread Shawn Nock
Justin Clift writes: >> The other issue I have is with hardware RAID. I'm not sure if folks >> are using that with gluster or if they're using software RAID, but >> the closed source nature and crappy proprietary tools annoys all the >> devops guys I know. What are you all doing for your gluster s
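
One alternative that keeps everything open is a plain mdadm array under each brick; a rough sketch (device names /dev/sd[b-e], the RAID level and the brick path are all assumptions, not a recommendation):

  # assemble a 4-disk RAID10 array for a gluster brick
  mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

  # format with the inode size commonly suggested for gluster bricks on XFS, then mount
  mkfs.xfs -i size=512 /dev/md0
  mkdir -p /export/brick1
  mount /dev/md0 /export/brick1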

Re: [Gluster-users] 40 gig ethernet

2013-06-19 Thread James
On Wed, Jun 19, 2013 at 4:00 PM, Justin Clift wrote: > > > >> PS: FWIW I wrote a puppet module to manage LSI RAID. It drove me crazy >> using their tool on some supermicro hardware I had. If anyone shows >> interest, I can post the code. > > That corresponds to this blog post, doesn't it? :) > >
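
(Not the puppet module itself, just a sketch of the kind of MegaCli calls such a tool ends up wrapping; the MegaCli64 install path and adapter number 0 are assumptions:)

  # adapter, virtual drive and physical disk status from LSI's CLI
  /opt/MegaRAID/MegaCli/MegaCli64 -AdpAllInfo -a0
  /opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -Lall -a0
  /opt/MegaRAID/MegaCli/MegaCli64 -PDList -a0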

Re: [Gluster-users] 40 gig ethernet

2013-06-19 Thread Justin Clift
On 17/06/2013, at 4:01 AM, James wrote: > I have to jump in here and add that I'm with you for the drivers > aspect. I had a lot of problems with the 10gE drivers when getting > gluster going. I haven't tested recently, but it's a huge worry when > buying hardware. Even RedHat had a lot of trouble

Re: [Gluster-users] 40 gig ethernet

2013-06-17 Thread Bryan Whitehead
I'm using the inbuilt Infiniband drivers that come with CentOS 6.x. I did go through the pain of downloading an ISO from Mellanox and installing all their specially built tools, went through their tuning guide, and saw no speed improvements at all. The IPoIB module cannot push the speeds like nati
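
For reference, the usual IPoIB knobs on the stock CentOS stack are connected mode and a large MTU; a minimal sketch (interface name ib0 assumed, and as noted above this may not buy much in practice):

  # switch to connected mode (datagram mode caps the MTU at 2044) and raise the MTU
  echo connected > /sys/class/net/ib0/mode
  ip link set ib0 mtu 65520

  # confirm the mode took effect
  cat /sys/class/net/ib0/mode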

Re: [Gluster-users] 40 gig ethernet

2013-06-16 Thread James
On Sun, Jun 16, 2013 at 9:03 PM, sal poliandro wrote: > On a side note, the native Linux drivers have not really kept up with the 40gb > cards. Linux still has issues with some 10gb cards. If you are going 40gb, > talk to the people that license the dna driver. They are in Paramus, NJ and > do a lot

Re: [Gluster-users] 40 gig ethernet

2013-06-16 Thread sal poliandro
Most of the 40gb stuff is designed for East/West traffic, as that tends to be the majority of traffic in the datacenter these days. All the big guys make platforms that can keep full port to port across the platform between 4-7. 40gb has not fallen that far where it is not still a decent siz

Re: [Gluster-users] 40 gig ethernet

2013-06-16 Thread Nathan Stratton
On Fri, Jun 14, 2013 at 2:13 PM, Bryan Whitehead wrote: > I'm using 40G Infiniband with IPoIB for gluster. Here are some ping > times (from host 172.16.1.10): > > --- 172.16.1.11 ping statistics --- > 10 packets transmitted, 10 received, 0% packet loss, time 8999ms > rtt min/avg/max/mdev = 0.093/0

Re: [Gluster-users] 40 gig ethernet

2013-06-15 Thread Justin Clift
On 14/06/2013, at 8:13 PM, Bryan Whitehead wrote: > I'm using 40G Infiniband with IPoIB for gluster. Here are some ping > times (from host 172.16.1.10): > > [root@node0.cloud ~]# ping -c 10 172.16.1.11 > PING 172.16.1.11 (172.16.1.11) 56(84) bytes of data. > 64 bytes from 172.16.1.11: icmp_seq=1 t

Re: [Gluster-users] 40 gig ethernet

2013-06-15 Thread Robert Hajime Lanning
On 06/15/13 00:50, Stephan von Krawczynski wrote: Uh, you should throw away your GigE switch. Example: # ping 192.168.83.1 PING 192.168.83.1 (192.168.83.1) 56(84) bytes of data. 64 bytes from 192.168.83.1: icmp_seq=1 ttl=64 time=0.310 ms 64 bytes from 192.168.83.1: icmp_seq=2 ttl=64 time=0.199 m

Re: [Gluster-users] 40 gig ethernet

2013-06-15 Thread Stephan von Krawczynski
On Fri, 14 Jun 2013 14:35:26 -0700 Bryan Whitehead wrote: > GigE is slower. Here is ping from same boxes but using the 1GigE cards: > > [root@node0.cloud ~]# ping -c 10 10.100.0.11 > PING 10.100.0.11 (10.100.0.11) 56(84) bytes of data. > 64 bytes from 10.100.0.11: icmp_seq=1 ttl=64 time=0.628 ms

Re: [Gluster-users] 40 gig ethernet

2013-06-14 Thread Bryan Whitehead
GigE is slower. Here is ping from same boxes but using the 1GigE cards: [root@node0.cloud ~]# ping -c 10 10.100.0.11 PING 10.100.0.11 (10.100.0.11) 56(84) bytes of data. 64 bytes from 10.100.0.11: icmp_seq=1 ttl=64 time=0.628 ms 64 bytes from 10.100.0.11: icmp_seq=2 ttl=64 time=0.283 ms 64 bytes f

Re: [Gluster-users] 40 gig ethernet

2013-06-14 Thread Stephan von Krawczynski
On Fri, 14 Jun 2013 12:13:53 -0700 Bryan Whitehead wrote: > I'm using 40G Infiniband with IPoIB for gluster. Here are some ping > times (from host 172.16.1.10): > > [root@node0.cloud ~]# ping -c 10 172.16.1.11 > PING 172.16.1.11 (172.16.1.11) 56(84) bytes of data. > 64 bytes from 172.16.1.11: ic

Re: [Gluster-users] 40 gig ethernet

2013-06-14 Thread Bryan Whitehead
I'm using 40G Infiniband with IPoIB for gluster. Here are some ping times (from host 172.16.1.10): [root@node0.cloud ~]# ping -c 10 172.16.1.11 PING 172.16.1.11 (172.16.1.11) 56(84) bytes of data. 64 bytes from 172.16.1.11: icmp_seq=1 ttl=64 time=0.093 ms 64 bytes from 172.16.1.11: icmp_seq=2 ttl=
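
Ping only gives a rough idea; a more direct latency/bandwidth check over the same links can be done with qperf (packaged in CentOS); a minimal sketch reusing the addresses from this thread:

  # on 172.16.1.11: start the listener
  qperf

  # from 172.16.1.10: TCP latency and bandwidth over IPoIB
  qperf 172.16.1.11 tcp_lat tcp_bw

  # native RDMA (reliable-connected) latency and bandwidth for comparison
  qperf 172.16.1.11 rc_lat rc_bw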

[Gluster-users] 40 gig ethernet

2013-06-14 Thread Nathan Stratton
I have been playing around with Gluster on and off for the last 6 years or so. Most of the things that have kept me from using it have been related to latency. In the past I have been using 10 gig Infiniband or 10 gig ethernet; recently the price of 40 gig ethernet has fallen quite a bit w