Re: [Gluster-users] 40 gig ethernet

2013-06-21 Thread Marcus Bointon

On 21 Jun 2013, at 14:00, Shawn Nock n...@nocko.se wrote:

 I had to keep a stock of spares in-house until I migrated to 3ware (now
 LSI). I haven't had any trouble with these cards in several years (and
 haven't needed to RMA or contact support).

I've got a 3Ware 9650SE-8LPML SATA RAID controller that's been a bit
troublesome. It was working fine but died on a scheduled reboot, in such a way
that even the BIOS wouldn't POST! 3Ware were good about replacing it, but the
replacement they sent was DOA; the second one worked OK. I still find reboots
on this machine very stressful!

And why do makers of RAID cards make it so hard to update firmware? They
persist in requiring DOS, Java or even Windows, so I almost always have to
resort to some unsupported hack to get updates done on Linux.

Marcus
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] 40 gig ethernet

2013-06-21 Thread Maik Kulbe

On 21 Jun 2013, Marcus Bointon wrote:

And why do makers of RAID cards make it so hard to update firmware? They
persist in requiring DOS, Java or even Windows, so I almost always have to
resort to some unsupported hack to get updates done on Linux.


I'm pretty sure that with the 3ware controllers (or at least most of the newer
ones from the 9xxx series) you can flash under Linux with their CLI utility. If
I remember correctly, one of our 3ware SAS controllers even had an update
button in the 3dm2 web panel.
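
For reference, here's a rough sketch of how that looks with the tw_cli tool
(untested here; it assumes the controller shows up as /c0 and that the firmware
image downloaded from LSI is called prom0006.img - adjust both for your setup):

# list the controllers to confirm the ID (assumed to be /c0 below)
tw_cli show

# flash the new image onto controller /c0, then reboot for it to take effect
tw_cli /c0 update fw=prom0006.img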




___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] 40 gig ethernet

2013-06-20 Thread Shawn Nock
Justin Clift jcl...@redhat.com writes:
 The other issue I have is with hardware RAID. I'm not sure if folks
 are using that with gluster or if they're using software RAID, but
 the closed source nature and crappy proprietary tools annoys all the
 devops guys I know. What are you all doing for your gluster setups?
 Is there some magical RAID controller that has Free tools, or are
 people using mdadm, or are people just unhappy or ?

 Before joining Red Hat I was using Areca hardware.  But Areca (the
 company) was weird/dishonest when I tried to RMA a card that went bad.

 So, I advise people to keep away from that crowd.  Haven't tried any
 others in depth since. :/

I second the thoughts on Areca. They are a terrible company; avoid at
all costs. I've RMA'd every card of theirs I've installed that had been
in service for more than 6 months, and some servers have had RMA-returned
cards fail again within months.

Their only US support option is "we'll ship it to Taiwan for repair and
return it in 6-8 weeks". There is no option to pay for advance
replacement.

I had to keep a stock of spares in-house until I migrated to 3ware (now
LSI). I haven't had any trouble with these cards in several years (and
haven't needed to RMA or contact support).

-- 
Shawn Nock (OpenPGP: 0x65118FA5)


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] 40 gig ethernet

2013-06-20 Thread Bryan Whitehead
Weird, I have a bunch of servers with Areca ARC-1680 8-ports and they
have never given me a problem.

The first thing I did was update the firmware to the latest - my brand
new cards shipped with two-year-old firmware and didn't recognize disks larger than 1 TB.

On Thu, Jun 20, 2013 at 7:11 AM, Shawn Nock n...@nocko.se wrote:
 Justin Clift jcl...@redhat.com writes:
 The other issue I have is with hardware RAID. I'm not sure if folks
 are using that with gluster or if they're using software RAID, but
 the closed source nature and crappy proprietary tools annoys all the
 devops guys I know. What are you all doing for your gluster setups?
 Is there some magical RAID controller that has Free tools, or are
 people using mdadm, or are people just unhappy or ?

 Before joining Red Hat I was using Areca hardware.  But Areca (the
 company) was weird/dishonest when I tried to RMA a card that went bad.

 So, I advise people to keep away from that crowd.  Haven't tried any
 others in depth since. :/

 I second the thoughts on Areca. They are a terrible company; avoid at
 all costs. I've RMA'd every card of theirs I've installed that had been
 in service for more than 6 months, and some servers have had RMA-returned
 cards fail again within months.

 Their only US support option is "we'll ship it to Taiwan for repair and
 return it in 6-8 weeks". There is no option to pay for advance
 replacement.

 I had to keep a stock of spares in-house until I migrated to 3ware (now
 LSI). I haven't had any trouble with these cards in several years (and
 haven't needed to RMA or contact support).

 --
 Shawn Nock (OpenPGP: 0x65118FA5)

 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] 40 gig ethernet

2013-06-19 Thread Justin Clift
On 17/06/2013, at 4:01 AM, James wrote:
snip
 I have to jump in here and add that I'm with you for the drivers
 aspect. I had a lot of problems with the 10gE drivers when getting
 gluster going. I haven't tested recently, but it's a huge worry when
 buying hardware. Even RedHat had a lot of trouble confirming if
 certain chips would work!

That's good to know about.  In my personal home dev/test lab here
I'm just using Mellanox DDR ConnectX cards ($39 on eBay! :) and
running things with either IPoIB or in RDMA mode.  I did try switching
the cards into 10GbE mode (worked fine), but don't really see the
point of running these cards at half speed and worse (10GbE) in a
home lab. :)
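
For anyone who wants to try the same mode switch, a rough sketch of how the
port type can be flipped with the stock mlx4 driver (the PCI address below is
only an example - check yours with lspci first):

# find the ConnectX device address (say it turns out to be 0000:03:00.0)
lspci | grep -i mellanox

# check the current type of port 1, then switch it between "ib" and "eth"
cat /sys/bus/pci/devices/0000:03:00.0/mlx4_port1
echo eth > /sys/bus/pci/devices/0000:03:00.0/mlx4_port1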


 The other issue I have is with hardware RAID. I'm not sure if folks
 are using that with gluster or if they're using software RAID, but the
 closed source nature and crappy proprietary tools annoys all the
 devops guys I know. What are you all doing for your gluster setups? Is
 there some magical RAID controller that has Free tools, or are people
 using mdadm, or are people just unhappy or ?

Before joining Red Hat I was using Areca hardware.  But Areca (the
company) was weird/dishonest when I tried to RMA a card that went bad.

So, I advise people to keep away from that crowd.  Haven't tried any
others in depth since. :/


 PS: FWIW I wrote a puppet module to manage LSI RAID. It drove me crazy
 using their tool on some supermicro hardware I had. If anyone shows
 interest, I can post the code.

That corresponds to this blog post, doesn't it? :)

  https://ttboj.wordpress.com/2013/06/17/puppet-lsi-hardware-raid-module/

Regards and best wishes,

Justin Clift

--
Open Source and Standards @ Red Hat

twitter.com/realjustinclift

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] 40 gig ethernet

2013-06-19 Thread James
On Wed, Jun 19, 2013 at 4:00 PM, Justin Clift jcl...@redhat.com wrote:



 PS: FWIW I wrote a puppet module to manage LSI RAID. It drove me crazy
 using their tool on some supermicro hardware I had. If anyone shows
 interest, I can post the code.

 That corresponds to this blog post doesn't it? :)

   https://ttboj.wordpress.com/2013/06/17/puppet-lsi-hardware-raid-module/

Yup, just posted it after some people emailed me asking for it.
Hope it helps.
Feedback welcome.

James
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] 40 gig ethernet

2013-06-17 Thread Bryan Whitehead
I'm using the inbuilt Infiniband drivers that come with CentOS 6.x. I
did go through the pain of downloading an ISO from Mellanox and
installing all their specially built tools, went through their tuning
guide, and saw no speed improvements at all.

The IPoIB module cannot push the speeds that native RDMA can, but I've
not been able to get gluster to work with InfiniBand correctly. (I get
massive CPU spikes from glusterd, filesystem stalls, and terrible
speeds - basically native RDMA was unusable.) I've not tried the 3.4
branch yet (my native RDMA attempts have all been with the 3.3.x
series). Anyway, I can completely blow out the raw speed of my
underlying RAID10 arrays across my boxes with IPoIB/InfiniBand, so it
doesn't matter.
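
For anyone who wants to reproduce the native RDMA attempts, a minimal sketch
(volume and brick names are made up; the transport option is the relevant part,
and if I recall correctly a 3.4 client selects RDMA by appending .rdma to the
volume name):

# create a 2-way replicated volume over RDMA instead of TCP
gluster volume create gv0 replica 2 transport rdma node0:/export/brick0 node1:/export/brick0
gluster volume start gv0

# on a client, mount it over RDMA
mount -t glusterfs node0:/gv0.rdma /mnt/gv0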

I chose InfiniBand because overall it was far cheaper than 10G cards
and the associated switches (2 years ago). Prices have not moved enough
for me to bother with 10G.

On Sat, Jun 15, 2013 at 5:34 PM, Justin Clift jcl...@redhat.com wrote:
 On 14/06/2013, at 8:13 PM, Bryan Whitehead wrote:
 I'm using 40G Infiniband with IPoIB for gluster. Here are some ping
 times (from host 172.16.1.10):

 [root@node0.cloud ~]# ping -c 10 172.16.1.11
 PING 172.16.1.11 (172.16.1.11) 56(84) bytes of data.
 64 bytes from 172.16.1.11: icmp_seq=1 ttl=64 time=0.093 ms
 64 bytes from 172.16.1.11: icmp_seq=2 ttl=64 time=0.113 ms
 64 bytes from 172.16.1.11: icmp_seq=3 ttl=64 time=0.163 ms
 64 bytes from 172.16.1.11: icmp_seq=4 ttl=64 time=0.125 ms
 64 bytes from 172.16.1.11: icmp_seq=5 ttl=64 time=0.125 ms
 64 bytes from 172.16.1.11: icmp_seq=6 ttl=64 time=0.125 ms
 64 bytes from 172.16.1.11: icmp_seq=7 ttl=64 time=0.198 ms
 64 bytes from 172.16.1.11: icmp_seq=8 ttl=64 time=0.171 ms
 64 bytes from 172.16.1.11: icmp_seq=9 ttl=64 time=0.194 ms
 64 bytes from 172.16.1.11: icmp_seq=10 ttl=64 time=0.115 ms


 Out of curiosity, are you using connected mode or datagram mode
 for this?  Also, are you using the inbuilt OS infiniband drivers,
 or Mellanox's OFED? (Or Intel/QLogic's equivalent if using
 their stuff)

 Asking because I haven't yet seen any real best practise stuff
 on ways to set this up for Gluster (yet). ;)

 Regards and best wishes,

 Justin Clift

 --
 Open Source and Standards @ Red Hat

 twitter.com/realjustinclift

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] 40 gig ethernet

2013-06-16 Thread Nathan Stratton
On Fri, Jun 14, 2013 at 2:13 PM, Bryan Whitehead dri...@megahappy.net wrote:

 I'm using 40G Infiniband with IPoIB for gluster. Here are some ping
 times (from host 172.16.1.10):

 --- 172.16.1.11 ping statistics ---
 10 packets transmitted, 10 received, 0% packet loss, time 8999ms
 rtt min/avg/max/mdev = 0.093/0.142/0.198/0.035 ms


Interesting, you have a lower min and slightly lower avg, but your max is
actually higher than what I am seeing on my 10 gig setup. Since GlusterFS uses
a lot of small packets, it does not look like it is worth upgrading from 10
to 40 gig ethernet...

--- virt1.exarionetworks.com ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 8999ms
rtt min/avg/max/mdev = 0.112/0.163/0.191/0.027 ms

Also note that I am running 10GBase-T; if I were running over fiber, I
would expect my overall numbers to be even lower than your 40 gig InfiniBand.
How can that be?
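
As an aside, single-shot ping RTTs at this scale are quite noisy; if anyone
wants to compare setups more rigorously, something like the following averages
over far more small packets (just a suggested sketch - replace <peer> with the
other host):

# flood ping: average RTT over 10000 back-to-back 64-byte packets (run as root)
ping -f -q -c 10000 <peer>

# or, with qperf running server-side on the peer, measure TCP latency and bandwidth
qperf <peer> tcp_lat tcp_bw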

-- 

Nathan Stratton   Founder, CTO
Exario Networks, Inc.
nathan at robotics.net nathan at
exarionetworks.com
http://www.robotics.net
http://www.exarionetworks.com/

Building the WebRTC solutions today that your customers will demand
tomorrow.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] 40 gig ethernet

2013-06-16 Thread sal poliandro
Most of the 40gb stuff is designed for mostly east/west traffic, as that
tends to be the majority of traffic in the datacenter these days. All the
big guys make platforms that can keep port-to-port latency across the
platform in the 4-7 microsecond range.

40gb has not fallen so far in price that it isn't still a decent-sized
investment to do right, and as someone who keeps trying to fit gluster into
production I have found that other storage platforms always beat out gluster
on top-end hardware. When 40gb is more high end and 100gb starts to take
market share, gluster may work in some environments, but when running
top-of-the-line network and server gear the TCO of a commercial storage
product (and the support that comes with it) always wins, at least for me.

On a side note, the native Linux drivers have not really kept up with the
40gb cards. Linux still has issues with some 10gb cards. If you are going
40gb, talk to the people that license the DNA driver. They are in Paramus,
NJ and do a lot of higher-end networks with proper Linux drivers. The media
(DAC cables or OM4 MTP) doesn't seem to affect performance much as long as
you don't push DAC longer than 3-5 meters.

Salvatore Popsikle Poliandro

Sent from my mobile, please excuse any typos. One day we will have mobile
devices where we don't need this footer :)
 On Jun 14, 2013 10:04 AM, Nathan Stratton nat...@robotics.net wrote:

 I have been playing around with Gluster on and off for the last 6 years or
 so. Most of the things that have been keeping me from using it have been
 related to latency.

 In the past I have been using 10 gig infiniband or 10 gig ethernet,
 recently the price of 40 gig ethernet has fallen quite a bit with guys like
 Arista.

 My question is, is this worth it at all for something like Gluster? The
 port to port latency looks impressive at under 4 microseconds, but I don't
 yet know what total system to system latency would look like assuming QSFP+
 copper cables and linux stack.

 --
 
 Nathan Stratton   Founder, CTO
 Exario Networks, Inc.
 nathan at robotics.net nathan at
 exarionetworks.com
 http://www.robotics.net
 http://www.exarionetworks.com/

 Building the WebRTC solutions today that your customers will demand
 tomorrow.

 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] 40 gig ethernet

2013-06-16 Thread James
On Sun, Jun 16, 2013 at 9:03 PM, sal poliandro popsi...@gmail.com wrote:
 On a side note, the native Linux drivers have not really kept up with the 40gb
 cards. Linux still has issues with some 10gb cards. If you are going 40gb,
 talk to the people that license the DNA driver. They are in Paramus, NJ and
 do a lot of higher-end networks with proper Linux drivers. The media (DAC
 cables or OM4 MTP) doesn't seem to affect performance much as long as you
 don't push DAC longer than 3-5 meters.

I have to jump in here and add that I'm with you on the drivers
aspect. I had a lot of problems with the 10GbE drivers when getting
gluster going. I haven't tested recently, but it's a huge worry when
buying hardware. Even Red Hat had a lot of trouble confirming whether
certain chips would work!

The other issue I have is with hardware RAID. I'm not sure if folks
are using that with gluster or if they're using software RAID, but the
closed-source nature and crappy proprietary tools annoy all the
devops guys I know. What are you all doing for your gluster setups? Is
there some magical RAID controller that has Free tools, or are people
using mdadm, or are people just unhappy, or...?
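
For anyone weighing the mdadm route, a rough sketch of building a
software-RAID brick (device names and the brick path below are only examples):

# build a 4-disk RAID10 array
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# put XFS on it (512-byte inodes leave room for gluster's xattrs) and mount it as a brick
mkfs.xfs -i size=512 /dev/md0
mkdir -p /export/brick0
mount /dev/md0 /export/brick0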

Cheers,
James

PS: FWIW I wrote a puppet module to manage LSI RAID. It drove me crazy
using their tool on some supermicro hardware I had. If anyone shows
interest, I can post the code.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] 40 gig ethernet

2013-06-15 Thread Stephan von Krawczynski
On Fri, 14 Jun 2013 14:35:26 -0700
Bryan Whitehead dri...@megahappy.net wrote:

 GigE is slower. Here is ping from same boxes but using the 1GigE cards:
 
 [root@node0.cloud ~]# ping -c 10 10.100.0.11
 PING 10.100.0.11 (10.100.0.11) 56(84) bytes of data.
 64 bytes from 10.100.0.11: icmp_seq=1 ttl=64 time=0.628 ms
 64 bytes from 10.100.0.11: icmp_seq=2 ttl=64 time=0.283 ms
 64 bytes from 10.100.0.11: icmp_seq=3 ttl=64 time=0.307 ms
 64 bytes from 10.100.0.11: icmp_seq=4 ttl=64 time=0.275 ms
 64 bytes from 10.100.0.11: icmp_seq=5 ttl=64 time=0.313 ms
 64 bytes from 10.100.0.11: icmp_seq=6 ttl=64 time=0.278 ms
 64 bytes from 10.100.0.11: icmp_seq=7 ttl=64 time=0.309 ms
 64 bytes from 10.100.0.11: icmp_seq=8 ttl=64 time=0.197 ms
 64 bytes from 10.100.0.11: icmp_seq=9 ttl=64 time=0.267 ms
 64 bytes from 10.100.0.11: icmp_seq=10 ttl=64 time=0.187 ms
 
 --- 10.100.0.11 ping statistics ---
 10 packets transmitted, 10 received, 0% packet loss, time 9000ms
 rtt min/avg/max/mdev = 0.187/0.304/0.628/0.116 ms
 
 Note: The Infiniband interfaces have a constant load of traffic from
 glusterfs. The Nic cards comparatively have very little traffic.

Uh, you should throw away your GigE switch. Example:

# ping 192.168.83.1
PING 192.168.83.1 (192.168.83.1) 56(84) bytes of data.
64 bytes from 192.168.83.1: icmp_seq=1 ttl=64 time=0.310 ms
64 bytes from 192.168.83.1: icmp_seq=2 ttl=64 time=0.199 ms
64 bytes from 192.168.83.1: icmp_seq=3 ttl=64 time=0.119 ms
64 bytes from 192.168.83.1: icmp_seq=4 ttl=64 time=0.115 ms
64 bytes from 192.168.83.1: icmp_seq=5 ttl=64 time=0.099 ms
64 bytes from 192.168.83.1: icmp_seq=6 ttl=64 time=0.082 ms
64 bytes from 192.168.83.1: icmp_seq=7 ttl=64 time=0.091 ms
64 bytes from 192.168.83.1: icmp_seq=8 ttl=64 time=0.096 ms
64 bytes from 192.168.83.1: icmp_seq=9 ttl=64 time=0.097 ms
64 bytes from 192.168.83.1: icmp_seq=10 ttl=64 time=0.095 ms
64 bytes from 192.168.83.1: icmp_seq=11 ttl=64 time=0.097 ms
64 bytes from 192.168.83.1: icmp_seq=12 ttl=64 time=0.102 ms
64 bytes from 192.168.83.1: icmp_seq=13 ttl=64 time=0.103 ms
64 bytes from 192.168.83.1: icmp_seq=14 ttl=64 time=0.108 ms
64 bytes from 192.168.83.1: icmp_seq=15 ttl=64 time=0.098 ms
64 bytes from 192.168.83.1: icmp_seq=16 ttl=64 time=0.093 ms
64 bytes from 192.168.83.1: icmp_seq=17 ttl=64 time=0.099 ms
64 bytes from 192.168.83.1: icmp_seq=18 ttl=64 time=0.102 ms
64 bytes from 192.168.83.1: icmp_seq=19 ttl=64 time=0.092 ms
64 bytes from 192.168.83.1: icmp_seq=20 ttl=64 time=0.111 ms
64 bytes from 192.168.83.1: icmp_seq=21 ttl=64 time=0.112 ms
64 bytes from 192.168.83.1: icmp_seq=22 ttl=64 time=0.099 ms
64 bytes from 192.168.83.1: icmp_seq=23 ttl=64 time=0.092 ms
64 bytes from 192.168.83.1: icmp_seq=24 ttl=64 time=0.102 ms
64 bytes from 192.168.83.1: icmp_seq=25 ttl=64 time=0.108 ms
^C
--- 192.168.83.1 ping statistics ---
25 packets transmitted, 25 received, 0% packet loss, time 23999ms
rtt min/avg/max/mdev = 0.082/0.112/0.310/0.047 ms

That is _loaded_.

-- 
Regards,
Stephan

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] 40 gig ethernet

2013-06-15 Thread Robert Hajime Lanning

On 06/15/13 00:50, Stephan von Krawczynski wrote:

Uh, you should throw away your GigE switch. Example:

# ping 192.168.83.1
PING 192.168.83.1 (192.168.83.1) 56(84) bytes of data.
64 bytes from 192.168.83.1: icmp_seq=1 ttl=64 time=0.310 ms
64 bytes from 192.168.83.1: icmp_seq=2 ttl=64 time=0.199 ms
64 bytes from 192.168.83.1: icmp_seq=3 ttl=64 time=0.119 ms
64 bytes from 192.168.83.1: icmp_seq=4 ttl=64 time=0.115 ms


What is the make and model of your GigE switch?

I get:
114 packets transmitted, 114 received, 0% packet loss, time 113165ms
rtt min/avg/max/mdev = 0.350/0.380/0.608/0.027 ms

On an unloaded WS-C3560X-48. Though it might not be the switch; it
could be the NIC on either side of the ping, or anything up through
the kernel, where the ping response is generated.


Granted, my numbers are at home, between an Atom 330 and an AMD G-T56N,
both with on-board Realtek NICs.


AMD G-T56N = RealTek = WS-C3560X-48 = RealTek = Atom 330

So, now data from work:
48 packets transmitted, 48 received, 0% packet loss, time 47828ms
rtt min/avg/max/mdev = 0.110/0.158/0.187/0.022 ms

That is through a WS-C6513-E with a Sup2T supervisor card, then through the TOR
WS-C3560X-48. So I have lower latency with the ADDITION of the 6513
(not a replacement - an extra switch hop), which means the NICs and everything
up the stack through the kernel are the major players here.


Work ping is between two identical HP DL360s (Xeon E5649, with Broadcom 
NetXtreme II GigE)


Xeon E5649 = Broadcom = WS-C6513-E = WS-C3560X-48 = Broadcom = Xeon E5649
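
If the NICs and their interrupt handling really are the dominant factor,
interrupt coalescing is worth checking; a rough sketch (eth0 is just an example
interface, and whether these knobs are supported depends on the driver -
disabling coalescing trades CPU load for latency):

# show the current coalescing settings on the NIC
ethtool -c eth0

# minimize coalescing for lower small-packet latency (at the cost of more interrupts)
ethtool -C eth0 rx-usecs 0 rx-frames 1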


--
Mr. Flibble
King of the Potato People
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] 40 gig ethernet

2013-06-15 Thread Justin Clift
On 14/06/2013, at 8:13 PM, Bryan Whitehead wrote:
 I'm using 40G Infiniband with IPoIB for gluster. Here are some ping
 times (from host 172.16.1.10):
 
 [root@node0.cloud ~]# ping -c 10 172.16.1.11
 PING 172.16.1.11 (172.16.1.11) 56(84) bytes of data.
 64 bytes from 172.16.1.11: icmp_seq=1 ttl=64 time=0.093 ms
 64 bytes from 172.16.1.11: icmp_seq=2 ttl=64 time=0.113 ms
 64 bytes from 172.16.1.11: icmp_seq=3 ttl=64 time=0.163 ms
 64 bytes from 172.16.1.11: icmp_seq=4 ttl=64 time=0.125 ms
 64 bytes from 172.16.1.11: icmp_seq=5 ttl=64 time=0.125 ms
 64 bytes from 172.16.1.11: icmp_seq=6 ttl=64 time=0.125 ms
 64 bytes from 172.16.1.11: icmp_seq=7 ttl=64 time=0.198 ms
 64 bytes from 172.16.1.11: icmp_seq=8 ttl=64 time=0.171 ms
 64 bytes from 172.16.1.11: icmp_seq=9 ttl=64 time=0.194 ms
 64 bytes from 172.16.1.11: icmp_seq=10 ttl=64 time=0.115 ms


Out of curiosity, are you using connected mode or datagram mode
for this?  Also, are you using the inbuilt OS infiniband drivers,
or Mellanox's OFED? (Or Intel/QLogic's equivalent if using
their stuff)

Asking because I haven't yet seen any real best-practice material
on ways to set this up for Gluster. ;)
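
For what it's worth, with the stock drivers the connected/datagram choice is
exposed through sysfs; a minimal sketch of checking and switching it (ib0 is
the usual interface name, and 65520 is the MTU connected mode allows):

# see whether the IPoIB interface is in datagram or connected mode
cat /sys/class/net/ib0/mode

# switch to connected mode and raise the MTU accordingly
echo connected > /sys/class/net/ib0/mode
ip link set ib0 mtu 65520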

Regards and best wishes,

Justin Clift

--
Open Source and Standards @ Red Hat

twitter.com/realjustinclift

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] 40 gig ethernet

2013-06-14 Thread Bryan Whitehead
I'm using 40G Infiniband with IPoIB for gluster. Here are some ping
times (from host 172.16.1.10):

[root@node0.cloud ~]# ping -c 10 172.16.1.11
PING 172.16.1.11 (172.16.1.11) 56(84) bytes of data.
64 bytes from 172.16.1.11: icmp_seq=1 ttl=64 time=0.093 ms
64 bytes from 172.16.1.11: icmp_seq=2 ttl=64 time=0.113 ms
64 bytes from 172.16.1.11: icmp_seq=3 ttl=64 time=0.163 ms
64 bytes from 172.16.1.11: icmp_seq=4 ttl=64 time=0.125 ms
64 bytes from 172.16.1.11: icmp_seq=5 ttl=64 time=0.125 ms
64 bytes from 172.16.1.11: icmp_seq=6 ttl=64 time=0.125 ms
64 bytes from 172.16.1.11: icmp_seq=7 ttl=64 time=0.198 ms
64 bytes from 172.16.1.11: icmp_seq=8 ttl=64 time=0.171 ms
64 bytes from 172.16.1.11: icmp_seq=9 ttl=64 time=0.194 ms
64 bytes from 172.16.1.11: icmp_seq=10 ttl=64 time=0.115 ms

--- 172.16.1.11 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 8999ms
rtt min/avg/max/mdev = 0.093/0.142/0.198/0.035 ms

On Fri, Jun 14, 2013 at 7:03 AM, Nathan Stratton nat...@robotics.net wrote:
 I have been playing around with Gluster on and off for the last 6 years or
 so. Most of the things that have been keeping me from using it have been
 related to latency.

 In the past I have been using 10 gig infiniband or 10 gig ethernet, recently
 the price of 40 gig ethernet has fallen quite a bit with guys like Arista.

 My question is, is this worth it at all for something like Gluster? The port
 to port latency looks impressive at under 4 microseconds, but I don't yet
 know what total system to system latency would look like assuming QSFP+
 copper cables and linux stack.

 --

 Nathan Stratton   Founder, CTO
 Exario Networks, Inc.
 nathan at robotics.net nathan at
 exarionetworks.com
 http://www.robotics.net
 http://www.exarionetworks.com/

 Building the WebRTC solutions today that your customers will demand
 tomorrow.

 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] 40 gig ethernet

2013-06-14 Thread Stephan von Krawczynski
On Fri, 14 Jun 2013 12:13:53 -0700
Bryan Whitehead dri...@megahappy.net wrote:

 I'm using 40G Infiniband with IPoIB for gluster. Here are some ping
 times (from host 172.16.1.10):
 
 [root@node0.cloud ~]# ping -c 10 172.16.1.11
 PING 172.16.1.11 (172.16.1.11) 56(84) bytes of data.
 64 bytes from 172.16.1.11: icmp_seq=1 ttl=64 time=0.093 ms
 64 bytes from 172.16.1.11: icmp_seq=2 ttl=64 time=0.113 ms
 64 bytes from 172.16.1.11: icmp_seq=3 ttl=64 time=0.163 ms
 64 bytes from 172.16.1.11: icmp_seq=4 ttl=64 time=0.125 ms
 64 bytes from 172.16.1.11: icmp_seq=5 ttl=64 time=0.125 ms
 64 bytes from 172.16.1.11: icmp_seq=6 ttl=64 time=0.125 ms
 64 bytes from 172.16.1.11: icmp_seq=7 ttl=64 time=0.198 ms
 64 bytes from 172.16.1.11: icmp_seq=8 ttl=64 time=0.171 ms
 64 bytes from 172.16.1.11: icmp_seq=9 ttl=64 time=0.194 ms
 64 bytes from 172.16.1.11: icmp_seq=10 ttl=64 time=0.115 ms
 
 --- 172.16.1.11 ping statistics ---
 10 packets transmitted, 10 received, 0% packet loss, time 8999ms
 rtt min/avg/max/mdev = 0.093/0.142/0.198/0.035 ms

So what you're saying is that there is no significant difference compared to
GigE, right?
Anyone got a ping between two kvm-qemu virtio-net cards at hand?

-- 
Regards,
Stephan

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] 40 gig ethernet

2013-06-14 Thread Bryan Whitehead
GigE is slower. Here is ping from same boxes but using the 1GigE cards:

[root@node0.cloud ~]# ping -c 10 10.100.0.11
PING 10.100.0.11 (10.100.0.11) 56(84) bytes of data.
64 bytes from 10.100.0.11: icmp_seq=1 ttl=64 time=0.628 ms
64 bytes from 10.100.0.11: icmp_seq=2 ttl=64 time=0.283 ms
64 bytes from 10.100.0.11: icmp_seq=3 ttl=64 time=0.307 ms
64 bytes from 10.100.0.11: icmp_seq=4 ttl=64 time=0.275 ms
64 bytes from 10.100.0.11: icmp_seq=5 ttl=64 time=0.313 ms
64 bytes from 10.100.0.11: icmp_seq=6 ttl=64 time=0.278 ms
64 bytes from 10.100.0.11: icmp_seq=7 ttl=64 time=0.309 ms
64 bytes from 10.100.0.11: icmp_seq=8 ttl=64 time=0.197 ms
64 bytes from 10.100.0.11: icmp_seq=9 ttl=64 time=0.267 ms
64 bytes from 10.100.0.11: icmp_seq=10 ttl=64 time=0.187 ms

--- 10.100.0.11 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9000ms
rtt min/avg/max/mdev = 0.187/0.304/0.628/0.116 ms

Note: The InfiniBand interfaces have a constant load of traffic from
glusterfs. The 1GigE NICs comparatively carry very little traffic.

On Fri, Jun 14, 2013 at 12:40 PM, Stephan von Krawczynski
sk...@ithnet.com wrote:
 On Fri, 14 Jun 2013 12:13:53 -0700
 Bryan Whitehead dri...@megahappy.net wrote:

 I'm using 40G Infiniband with IPoIB for gluster. Here are some ping
 times (from host 172.16.1.10):

 [root@node0.cloud ~]# ping -c 10 172.16.1.11
 PING 172.16.1.11 (172.16.1.11) 56(84) bytes of data.
 64 bytes from 172.16.1.11: icmp_seq=1 ttl=64 time=0.093 ms
 64 bytes from 172.16.1.11: icmp_seq=2 ttl=64 time=0.113 ms
 64 bytes from 172.16.1.11: icmp_seq=3 ttl=64 time=0.163 ms
 64 bytes from 172.16.1.11: icmp_seq=4 ttl=64 time=0.125 ms
 64 bytes from 172.16.1.11: icmp_seq=5 ttl=64 time=0.125 ms
 64 bytes from 172.16.1.11: icmp_seq=6 ttl=64 time=0.125 ms
 64 bytes from 172.16.1.11: icmp_seq=7 ttl=64 time=0.198 ms
 64 bytes from 172.16.1.11: icmp_seq=8 ttl=64 time=0.171 ms
 64 bytes from 172.16.1.11: icmp_seq=9 ttl=64 time=0.194 ms
 64 bytes from 172.16.1.11: icmp_seq=10 ttl=64 time=0.115 ms

 --- 172.16.1.11 ping statistics ---
 10 packets transmitted, 10 received, 0% packet loss, time 8999ms
 rtt min/avg/max/mdev = 0.093/0.142/0.198/0.035 ms

 What you like to say is that there is no significant difference compared to
 GigE, right?
 Anyone got a ping between two kvm-qemu virtio-net cards at hand?

 --
 Regards,
 Stephan

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users