Re: 10G with Intel card - GBIC options

2014-01-24 Thread Henning Brauer
* Andy a...@brandwatch.com [2013-12-02 10:05]:
 Henning, could you please confirm for us if the 32bit bandwidth limit was
 lifted in the new queuing subsystem, or if it is just still in place whilst
 dual-running the new and the old?

otoh it is still there, but rather easy to lift.

-- 
Henning Brauer, h...@bsws.de, henn...@openbsd.org
BS Web Services GmbH, http://bsws.de, Full-Service ISP
Secure Hosting, Mail and DNS Services. Dedicated Servers, Root to Fully Managed
Henning Brauer Consulting, http://henningbrauer.com/



Re: 10G with Intel card - GBIC options

2014-01-24 Thread Andy

On Fri 24 Jan 2014 09:59:04 GMT, Henning Brauer wrote:

* Andy a...@brandwatch.com [2013-12-02 10:05]:

Henning, could you please confirm for us if the 32bit bandwidth limit was
lifted in the new queuing subsystem, or if it is just still in place whilst
dual-running the new and the old?


otoh it is still there, but rather easy to lift.



Cool, ok. We'll wait until the next release, when the old queueing system 
is removed, allowing for a code clean-up, and this limit is lifted :)


Thanks for your work, Andy.



Re: 10G with Intel card - GBIC options

2014-01-07 Thread Hrvoje Popovski
On 2.12.2013. 10:05, Andy wrote:
 Hmm surprised by that!
 
 Henning, could you please confirm for us if the 32bit bandwidth limit
 was lifted in the new queuing subsystem, or if it is just still in place
 whilst dual-running the new and the old?
 
 I guess considering Hrvoje's findings the limit is still in place until
 ALTQ is removed completely in 5.5??
 
 Cheers, Andy.
 



Hi,
the second ix (82599) card is here and I have directly connected two servers.

With kern.pool_debug=0, net.inet.ip.ifq.maxlen=1024 and mtu 16110 on the ix
cards, bandwidth is ~7Gbps.
tcpbench runs with -B 262144 -S 262144
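
For completeness, a sketch of the full invocation; the peer address here is
only an example, not the real one:

# on the receiving server
tcpbench -s -B 262144 -S 262144
# on the sending server
tcpbench -B 262144 -S 262144 10.22.22.2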

pf.conf:
set skip on lo
block
pass


10Gbps queue:
pf.conf
queue queue@ix0 on ix0 bandwidth 10G max 10G
queue bulk@ix0 parent queue@ix0 bandwidth 10G default

pfctl -vsq
queue queue@ix0 on ix0 bandwidth 1G, max 1G qlimit 50
queue bulk@ix0 parent queue@ix0 on ix0 bandwidth 1G default qlimit 50

tcpbench shows 1404Mbps


9Gbps queue:
pf.conf
queue queue@ix0 on ix0 bandwidth 9G max 9G
queue bulk@ix0 parent queue@ix0 default bandwidth 9G

pfctl -vsq
queue queue@ix0 on ix0 bandwidth 410M, max 410M qlimit 50
queue bulk@ix0 parent queue@ix0 on ix0 bandwidth 410M default qlimit 50

tcpbench shows 206Mbps


8Gbps queue:
pf.conf
queue queue@ix0 on ix0 bandwidth 8G max 8G
queue bulk@ix0 parent queue@ix0 default bandwidth 8G

pfctl -vsq
queue queue@ix0 on ix0 bandwidth 3G, max 3G qlimit 50
queue bulk@ix0 parent queue@ix0 on ix0 bandwidth 3G default qlimit 50

tcpbench shows 3690Mbps


7Gbps queue:
pf.conf
queue queue@ix0 on ix0 bandwidth 7G max 7G
queue bulk@ix0 parent queue@ix0 default bandwidth 7G

pfctl -vsq
queue queue@ix0 on ix0 bandwidth 2G, max 2G qlimit 50
queue bulk@ix0 parent queue@ix0 on ix0 bandwidth 2G default qlimit 50

tcpbench shows 2695Mbps


6Gbps queue:
queue queue@ix0 on ix0 bandwidth 6G max 6G
queue bulk@ix0 parent queue@ix0 default bandwidth 6G

pfctl -vsq
queue queue@ix0 on ix0 bandwidth 1G, max 1G qlimit 50
queue bulk@ix0 parent queue@ix0 on ix0 bandwidth 1G default qlimit 50

tcpbench shows 1699Mbps


5Gbps queue
pf.conf
queue queue@ix0 on ix0 bandwidth 5G max 5G
queue bulk@ix0 parent queue@ix0 default bandwidth 5G

pfctl -vsq
queue queue@ix0 on ix0 bandwidth 705M, max 705M qlimit 50
queue bulk@ix0 parent queue@ix0 on ix0 bandwidth 705M default qlimit 50

tcpbench shows 218Mbps


4Gbps queue:
pf.conf
queue queue@ix0 on ix0 bandwidth 4G max 4G
queue bulk@ix0 parent queue@ix0 bandwidth 4G default

pfctl -vsq
queue queue@ix0 on ix0 bandwidth 4G, max 4G qlimit 50
queue bulk@ix0 parent queue@ix0 on ix0 bandwidth 4G default qlimit 50

tcpbench shows 3986Mbps, which is 99.65% of 4000Mbps.
Could this 0.35%, or 14Mbps, bandwidth loss be interpreted as queue overhead?
If yes, then this is wonderful :)


3Gbps queue:
pf.conf
queue queue@ix0 on ix0 bandwidth 3G max 3G
queue bulk@ix0 parent queue@ix0 default bandwidth 3G

pfctl -vsq
queue queue@ix0 on ix0 bandwidth 3G, max 3G qlimit 50
queue bulk@ix0 parent queue@ix0 on ix0 bandwidth 3G default qlimit 50

tcpbench shows 2988Mbps, which is a 0.40% bandwidth loss.


2Gbps queue:
pf.conf
queue queue@ix0 on ix0 bandwidth 2G max 2G
queue bulk@ix0 parent queue@ix0 default bandwidth 2G

pfctl -vsq
queue queue@ix0 on ix0 bandwidth 2G, max 2G qlimit 50
queue bulk@ix0 parent queue@ix0 on ix0 bandwidth 2G default qlimit 50

tcpbench shows 1993Mbps, which is a 0.35% bandwidth loss.


1Gbps queue:
pf.conf
queue queue@ix0 on ix0 bandwidth 1G max 1G
queue bulk@ix0 parent queue@ix0 default bandwidth 1G

pfctl -vsq
queue queue@ix0 on ix0 bandwidth 1G, max 1G qlimit 50
queue bulk@ix0 parent queue@ix0 on ix0 bandwidth 1G default qlimit 50

tcpbench shows 996Mbps, which is a 0.4% bandwidth loss.
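
For what it's worth, the odd figures above 4G look consistent with the
configured rate wrapping at 2^32 bits per second; this is just arithmetic on
my side, not a claim about the code (64-bit shell arithmetic assumed):

$ echo $((10000000000 % 4294967296))   # 10G -> ~1.41G (pfctl shows 1G, tcpbench ~1404Mbps)
1410065408
$ echo $((9000000000 % 4294967296))    # 9G -> ~410M
410065408
$ echo $((5000000000 % 4294967296))    # 5G -> ~705M
705032704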



Re: 10G with Intel card - GBIC options

2013-12-03 Thread Stuart Henderson
On 2013-12-02, Kapetanakis Giannis bil...@edu.physics.uoc.gr wrote:
 I would love to go for the SFP+ path but we cannot afford it,

flexoptix do a xenpak-sfp+ converter (and lots of other interesting things)



Re: 10G with Intel card - GBIC options

2013-12-02 Thread Andy

Hmm surprised by that!

Henning, could you please confirm for us if the 32bit bandwidth limit 
was lifted in the new queuing subsystem, or if it is just still in 
place whilst dual-running the new and the old?


I guess considering Hrvoje's findings the limit is still in place until 
ALTQ is removed completely in 5.5??


Cheers, Andy.

On Fri 29 Nov 2013 22:10:20 GMT, Hrvoje Popovski wrote:

On 29.11.2013. 17:08, Andy wrote:

PS; I hope you have reeaaaly fast servers..
NB; ALTQ is currently 32bit so you cannot queue faster than 4 and a bit
gig, unless you go for Henning's new queueing system which I'm still yet
to try when I actually find time..



Hi,

I'm not sure if the new queueing system is faster than 4.3Gbps, or if
pfctl -nvf pf.conf is lying, or if the interface must be up and running to
see the real bandwidth with pfctl -vvsq.
I can't test it because I have only one ix card. Will try to borrow another
ix card to see.

# ifconfig ix0
ix0: flags=28843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,NOINET6> mtu 1500
 lladdr 90:e2:ba:19:29:a8
 priority: 0
 media: Ethernet autoselect
 status: no carrier
 inet 10.22.22.1 netmask 0xffffff00 broadcast 10.22.22.255



pf.conf with 10G on ix0:
queue queue@ix0 on ix0 bandwidth 10G max 10G
queue ackn@ix0 parent queue@ix0 bandwidth 5G
queue bulk@ix0 parent queue@ix0 bandwidth 5G default
match on ix0 set ( queue (bulk@ix0, ackn@ix0), prio (1,7) )

pfctl -nvf pf.conf
queue queue@ix0 on ix0 bandwidth 1G, max 1G
queue ackn@ix0 parent queue@ix0 on ix0 bandwidth 705M
queue bulk@ix0 parent queue@ix0 on ix0 bandwidth 705M default

pfctl -vvsq
queue queue@ix0 on ix0 bandwidth 1G, max 1G qlimit 50
queue ackn@ix0 parent queue@ix0 on ix0 bandwidth 705M qlimit 50
queue bulk@ix0 parent queue@ix0 on ix0 bandwidth 705M default qlimit 50



pf.conf with 6G on ix0:
queue queue@ix0 on ix0 bandwidth 6G max 6G
queue ackn@ix0 parent queue@ix0 bandwidth 3G
queue bulk@ix0 parent queue@ix0 bandwidth 3G default
match on ix0 set ( queue (bulk@ix0, ackn@ix0), prio (1,7) )

pfctl -nvf pf.conf
queue queue@ix0 on ix0 bandwidth 1G, max 1G
queue ackn@ix0 parent queue@ix0 on ix0 bandwidth 3G
queue bulk@ix0 parent queue@ix0 on ix0 bandwidth 3G default

pfctl -vvsq
queue queue@ix0 on ix0 bandwidth 1G, max 1G qlimit 50
queue ackn@ix0 parent queue@ix0 on ix0 bandwidth 3G qlimit 50
queue bulk@ix0 parent queue@ix0 on ix0 bandwidth 3G default qlimit 50



pf.conf with 4G on ix0:
queue queue@ix0 on ix0 bandwidth 4G max 4G
queue ackn@ix0 parent queue@ix0 bandwidth 2G
queue bulk@ix0 parent queue@ix0 bandwidth 2G default
match on ix0 set ( queue (bulk@ix0, ackn@ix0), prio (1,7) )

pfctl -nvf pf.conf
queue queue@ix0 on ix0 bandwidth 4G, max 4G
queue ackn@ix0 parent queue@ix0 on ix0 bandwidth 2G
queue bulk@ix0 parent queue@ix0 on ix0 bandwidth 2G default

pfctl -vvsq
queue queue@ix0 on ix0 bandwidth 4G, max 4G qlimit 50
queue ackn@ix0 parent queue@ix0 on ix0 bandwidth 2G qlimit 50
queue bulk@ix0 parent queue@ix0 on ix0 bandwidth 2G default qlimit 50




Re: 10G with Intel card - GBIC options

2013-12-02 Thread Kapetanakis Giannis

On 29/11/13 19:16, Andy wrote:

On Fri 29 Nov 2013 16:19:26 GMT, Kapetanakis Giannis wrote:

Unfortunately on the Cisco part I don't have SFP+.
I have the XENPAK option only, which gives me 3 choices:

SR ~ 3K GPL
LRM ~ 1.5K GPL (I can't find any LRM GBIC for Intel side)
CX4 ~ 600 GPL


I'd avoid CX4; you won't find a CX4 NIC working well with OpenBSD, nor 
would you want one tbh.. Stick with well-known supported cards for 
OpenBSD..


Thanks for all the replies Andy.

Are we totally sure about this?
I'm talking about Intel - CX4 support on OpenBSD with ix(4).

The manual page lists these:
   o   Intel 82598EB 10GbE Adapter (10GbaseCX4)
   o   Intel 82598EB Dual Port 10GbE Adapter (10GbaseCX4)
   o   Intel 82599EB 10GbE Adapter (10GbaseCX4)

Thanks

Giannis



Re: 10G with Intel card - GBIC options

2013-12-02 Thread Jonathan Gray
On Mon, Dec 02, 2013 at 11:36:31AM +0200, Kapetanakis Giannis wrote:
 On 29/11/13 19:16, Andy wrote:
 On Fri 29 Nov 2013 16:19:26 GMT, Kapetanakis Giannis wrote:
 Unfortunately on the Cisco part I don't have SFP+.
 I have the XENPAK option only, which gives me 3 choices:
 
 SR ~ 3K GPL
 LRM ~ 1.5K GPL (I can't find any LRM GBIC for Intel side)
 CX4 ~ 600 GPL
 
 I'd avoid CX4; you won't find a CX4 NIC working well with OpenBSD,
 nor would you want one tbh.. Stick with well-known supported cards
 for OpenBSD..
 
 Thanks for all the replies Andy.
 
 Are we totally sure about this?
 I'm talking about Intel - CX4 support on OpenBSD with ix(4).
 
 The manual page lists these:
o   Intel 82598EB 10GbE Adapter (10GbaseCX4)
o   Intel 82598EB Dual Port 10GbE Adapter (10GbaseCX4)
o   Intel 82599EB 10GbE Adapter (10GbaseCX4)
 

CX4 should work fine but has mostly been replaced by
SFP+ direct attach/copper and 10GBase-T with new cards.
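
If in doubt, a quick sanity check once a card is installed (generic commands,
nothing specific to the adapters listed above):

# attach lines from the ix(4) driver
dmesg | grep '^ix'
# media and link status for the first port
ifconfig ix0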



Re: 10G with Intel card - GBIC options

2013-12-02 Thread Andy
Yea, CX4 will work; it's the chipset that matters. But CX4 is short-range 
and superseded, and by using SFP+ you can pick and choose your 
transceivers for fibre or CAT cabling etc.



On Mon 02 Dec 2013 10:10:37 GMT, Jonathan Gray wrote:

On Mon, Dec 02, 2013 at 11:36:31AM +0200, Kapetanakis Giannis wrote:

On 29/11/13 19:16, Andy wrote:

On Fri 29 Nov 2013 16:19:26 GMT, Kapetanakis Giannis wrote:

Unfortunately on the Cisco part I don't have SFP+.
I have the XENPAK option only, which gives me 3 choices:

SR ~ 3K GPL
LRM ~ 1.5K GPL (I can't find any LRM GBIC for Intel side)
CX4 ~ 600 GPL


I'd avoid CX4; you won't find a CX4 NIC working well with OpenBSD,
nor would you want one tbh.. Stick with well-known supported cards
for OpenBSD..


Thanks for all the replies Andy.

Are we totally sure about this?
I'm talking about Intel - CX4 support on OpenBSD with ix(4).

The manual page lists these:
o   Intel 82598EB 10GbE Adapter (10GbaseCX4)
o   Intel 82598EB Dual Port 10GbE Adapter (10GbaseCX4)
o   Intel 82599EB 10GbE Adapter (10GbaseCX4)



CX4 should work fine but has mostly been replaced by
SFP+ direct attach/copper and 10GBase-T with new cards.




Re: 10G with Intel card - GBIC options

2013-12-02 Thread Kapetanakis Giannis

On 02/12/13 17:15, Andy wrote:
Yea, CX4 will work; it's the chipset that matters. But CX4 is 
short-range and superseded, and by using SFP+ you can pick and choose your 
transceivers for fibre or CAT cabling etc.




Well, the Cisco CX4 costs ~$600 list price,
while the SR one costs $3,000 list price.

That's my main problem...

I would love to go for the SFP+ path but we cannot afford it,
so the CX4 seems like my only choice so far if it's ok with OBSD.

G



Re: 10G with Intel card - GBIC options

2013-12-02 Thread Andy

The choice is of course yours.. ;)

It would be worth trying a Cisco 'compatible' first before spending the 
big bucks on 'branded' optics..

http://www.gbics.com/xenpak-10gb-sr/?gclid=CKv_96G-irsCFSX4wgodQDEAdA

Anyway, this is quite a personal decision and does affect support..


On Mon 02 Dec 2013 15:52:07 GMT, Kapetanakis Giannis wrote:

On 02/12/13 17:15, Andy wrote:

Yea, CX4 will work; it's the chipset that matters. But CX4 is
short-range and superseded, and by using SFP+ you can pick and choose your
transceivers for fibre or CAT cabling etc.



Well, the Cisco CX4 costs ~$600 list price,
while the SR one costs $3,000 list price.

That's my main problem...

I would love to go for the SFP+ path but we cannot afford it,
so the CX4 seems like my only choice so far if it's ok with OBSD.

G




Re: 10G with Intel card - GBIC options

2013-12-02 Thread Chris Cappuccio
Kapetanakis Giannis [bil...@edu.physics.uoc.gr] wrote:
 On 02/12/13 17:15, Andy wrote:
 Yea, CX4 will work; it's the chipset that matters. But CX4 is short-range
 and superseded, and by using SFP+ you can pick and choose your
 transceivers for fibre or CAT cabling etc.
 
 
 Well, the Cisco CX4 costs ~$600 list price,
 while the SR one costs $3,000 list price.
 
 That's my main problem...
 
 I would love to go for the SFP+ path but we cannot afford it,
 so the CX4 seems like my only choice so far if it's ok with OBSD.
 

ebay for a cisco CX4 Xenpak for less than $100 USD



10G with Intel card - GBIC options

2013-11-29 Thread Kapetanakis Giannis

Hi,

I've just received a Cisco 6704 for my 10G uplinks.
I'm looking for a network adapter to put on my OpenBSD primary firewall.
I had in mind to use Intel X520-SR2 but the SR module of Cisco is too 
expensive...


So I'm looking for either LRM or CX4 GBIC options to put on the C6704.

Has anyone connected to these transceivers with any quality Intel card?

I can't find any LRM GBIC from intel. I found a CX4 one but the card is EOL.

Thanks for any feedback.

G



Re: 10G with Intel card - GBIC options

2013-11-29 Thread Andy
We bought the Intel x520-DA2 cards as they give you the flexibility of 
using any SFP+ transceiver.. If you buy the SR2 you are locked to using 
short range fibre and the optics for the other end can get expensive!
NB; There is a whole world of compatible optics out there which are 
just as good but won't have support from the switch vendor and will need 
careful testing with your kit..


If you get the DA2 card, also buy a couple of the Intel optics which 
when added to the order cost the same as the SR2 and you end up with 
exactly the same product, but with the flexibility of being able to use 
'SFP+ Direct Connect cables', SR optics, LR optics etc..

http://www.intel.com/support/network/adapter/pro100/sb/CS-030612.htm

We bought the optics for the link to the WAN provider, and SFP+ Direct 
Connect cables for the link to our HP 8206zl switches.


Intel cards support most of the official branded Direct Connect cables 
(don't get real cheap ones) so get Cisco branded as you have a Cisco 
switch.. Much cheaper than Cisco optics but the same end result.


Andy.


On Fri 29 Nov 2013 15:07:34 GMT, Kapetanakis Giannis wrote:

Hi,

I've just received a Cisco 6704 for my 10G uplinks.
I'm looking for a network adapter to put on my OpenBSD primary firewall.
I had in mind to use Intel X520-SR2 but the SR module of Cisco is too
expensive...

So I'm looking for either LRM or CX4 GBIC options to put on the C6704.

Has anyone connected to these transceivers with any quality Intel card?

I can't find any LRM GBIC from intel. I found a CX4 one but the card
is EOL.

Thanks for any feedback.

G




Re: 10G with Intel card - GBIC options

2013-11-29 Thread Andy

PS; I hope you have reeaaaly fast servers..
NB; ALTQ is currently 32bit so you cannot queue faster than 4 and a bit 
gig, unless you go for Henning's new queueing system which I'm still yet 
to try when I actually find time..
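
For context, "4 and a bit gig" is simply what you get if the rate lives in an
unsigned 32-bit bits-per-second field (the exact mechanism is my assumption;
the 32-bit limit itself is confirmed elsewhere in the thread):

$ echo $((1 << 32))   # 2^32, needs 64-bit shell arithmetic
4294967296

i.e. roughly 4.29 Gbit/s.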



On Fri 29 Nov 2013 16:05:35 GMT, Andy wrote:

We bought the Intel x520-DA2 cards as they give you the flexibility
of using any SFP+ transceiver.. If you buy the SR2 you are locked to
using short range fibre and the optics for the other end can get
expensive!
NB; There is a whole world of compatible optics out there which are
just as good but won't have support from the switch vendor and will
need careful testing with your kit..

If you get the DA2 card, also buy a couple of the Intel optics which
when added to the order cost the same as the SR2 and you end up with
exactly the same product, but with the flexibility of being able to
use 'SFP+ Direct Connect cables', SR optics, LR optics etc..
http://www.intel.com/support/network/adapter/pro100/sb/CS-030612.htm

We bought the optics for the link to the WAN provider, and SFP+ Direct
Connect cables for the link to our HP 8206zl switches.

Intel cards support most of the official branded Direct Connect cables
(don't get real cheap ones) so get Cisco branded as you have a Cisco
switch.. Much cheaper than Cisco optics but the same end result.

Andy.


On Fri 29 Nov 2013 15:07:34 GMT, Kapetanakis Giannis wrote:

Hi,

I've just received a Cisco 6704 for my 10G uplinks.
I'm looking for a network adapter to put on my OpenBSD primary firewall.
I had in mind to use Intel X520-SR2 but the SR module of Cisco is too
expensive...

So I'm looking for either LRM or CX4 GBIC options to put on the C6704.

Has anyone connected to these transceivers with any quality Intel card?

I can't find any LRM GBIC from intel. I found a CX4 one but the card
is EOL.

Thanks for any feedback.

G




Re: 10G with Intel card - GBIC options

2013-11-29 Thread Kapetanakis Giannis

On 29/11/13 18:05, Andy wrote:
We bought the Intel x520-DA2 cards as they give you the flexibility 
of using any SFP+ transceiver.. If you buy the SR2 you are locked to 
using short range fibre and the optics for the other end can get 
expensive!
NB; There is a whole world of compatible optics out there which are 
just as good but won't have support from the switch vendor and will 
need careful testing with your kit..


If you get the DA2 card, also buy a couple of the Intel optics which 
when added to the order cost the same as the SR2 and you end up with 
exactly the same product, but with the flexibility of being able to 
use 'SFP+ Direct Connect cables', SR optics, LR optics etc..

http://www.intel.com/support/network/adapter/pro100/sb/CS-030612.htm

We bought the optics for the link to the WAN provider, and SFP+ Direct 
Connect cables for the link to our HP 8206zl switches.


Intel cards support most of the official branded Direct Connect cables 
(don't get real cheap ones) so get Cisco branded as you have a Cisco 
switch.. Much cheaper than Cisco optics but the same end result.


Andy.




Unfortunately on the Cisco part I don't have SFP+.
I have the XENPAK option only, which gives me 3 choices:

SR ~ 3K GPL
LRM ~ 1.5K GPL (I can't find any LRM GBIC for Intel side)
CX4 ~ 600 GPL

I'm probably going for the CX4 option, so I have to find an Intel server 
card that has CX4.


My options for CX4 so far look like these:
the 82599EB, which supports all kinds of interfaces. I don't know if this is a 
server adapter or not (it looks very cheap).

Also it might need a CX4 interface to be attached.
http://ark.intel.com/products/32207/Intel-82599EB-10-Gigabit-Ethernet-Controller
Is the 82599 chip the same on all the adapters (EN, EB, X520, X540)? Is it 
the same card with different optics?

Also there is this NetEffect Ethernet Server Cluster Adapter CX4
http://ark.intel.com/products/55362/NetEffect-Ethernet-Server-Cluster-Adapter-CX4
but I don't know about OpenBSD's support for this. Also it's PCIe v1.1, but
I think that is the least of my problems.

G
ps. thanks for the reply



Re: 10G with Intel card - GBIC options

2013-11-29 Thread Andy

On Fri 29 Nov 2013 16:19:26 GMT, Kapetanakis Giannis wrote:

On 29/11/13 18:05, Andy wrote:

We bought the Intel x520-DA2 cards as they give you the flexibility
of using any SFP+ transceiver.. If you buy the SR2 you are locked to
using short range fibre and the optics for the other end can get
expensive!
NB; There is a whole world of compatible optics out there which are
just as good but won't have support from the switch vendor and will
need careful testing with your kit..

If you get the DA2 card, also buy a couple of the Intel optics which
when added to the order cost the same as the SR2 and you end up with
exactly the same product, but with the flexibility of being able to
use 'SFP+ Direct Connect cables', SR optics, LR optics etc..
http://www.intel.com/support/network/adapter/pro100/sb/CS-030612.htm

We bought the optics for the link to the WAN provider, and SFP+
Direct Connect cables for the link to our HP 8206zl switches.

Intel cards support most of the official branded Direct Connect
cables (don't get real cheap ones) so get Cisco branded as you have a
Cisco switch.. Much cheaper than Cisco optics but the same end result.

Andy.




Unfortunately on the Cisco part I don't have SFP+.
I have the XENPAK option only, which gives me 3 choices:

SR ~ 3K GPL
LRM ~ 1.5K GPL (I can't find any LRM GBIC for Intel side)
CX4 ~ 600 GPL


I'd avoid CX4; you won't find a CX4 NIC working well with OpenBSD, nor 
would you want one tbh.. Stick with well-known supported cards for 
OpenBSD..


At a guess, I'd use an SR XENPAK on the Cisco side and connect to the 
Intel optic transceiver on the OBSD side, and use an LC-to-SC OM4 fibre 
etc..


NB; Try a 'Cisco compatible' XENPAK SR if you can't afford the Cisco one, 
for example;

http://www.gbics.com/xenpak-10gb-sr/?gclid=CKv_96G-irsCFSX4wgodQDEAdA

http://www.intel.com/content/dam/doc/product-brief/ethernet-sfp-optics-brief.pdf

Note the standards compatibility; 10GBASE-SR

You will need to spend some money on a pair to try them (I don't know if 
these work, but they look like they do); if they work, get the others.. 
It'll either work perfectly or it won't work at all..


Good luck, Andy



I'm probably going for the CX4 option, so I have to find an Intel
server card that has CX4.

My options for CX4 so far look like these:
the 82599EB, which supports all kinds of interfaces. I don't know if this is
a server adapter or not (it looks very cheap).
Also it might need a CX4 interface to be attached.
http://ark.intel.com/products/32207/Intel-82599EB-10-Gigabit-Ethernet-Controller

Is the 82599 chip the same on all the adapters (EN, EB, X520, X540)? Is
it the same card with different optics?

Also there is this NetEffect Ethernet Server Cluster Adapter CX4
http://ark.intel.com/products/55362/NetEffect-Ethernet-Server-Cluster-Adapter-CX4

but I don't know about OpenBSD's support for this. Also it's PCIe v1.1, but
I think that is the least of my problems.

G
ps. thanks for the reply




Re: 10G with Intel card - GBIC options

2013-11-29 Thread Hrvoje Popovski
On 29.11.2013. 17:08, Andy wrote:
 PS; I hope you have reeaaaly fast servers..
 NB; ALTQ is currently 32bit so you cannot queue faster than 4 and a bit
 gig, unless you go for Henning's new queueing system which I'm still yet
 to try when I actually find time..
 

Hi,

I'm not sure if the new queueing system is faster than 4.3Gbps, or if
pfctl -nvf pf.conf is lying, or if the interface must be up and running to
see the real bandwidth with pfctl -vvsq.
I can't test it because I have only one ix card. Will try to borrow another
ix card to see.

# ifconfig ix0
ix0: flags=28843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,NOINET6> mtu 1500
lladdr 90:e2:ba:19:29:a8
priority: 0
media: Ethernet autoselect
status: no carrier
inet 10.22.22.1 netmask 0xffffff00 broadcast 10.22.22.255



pf.conf with 10G on ix0:
queue queue@ix0 on ix0 bandwidth 10G max 10G
queue ackn@ix0 parent queue@ix0 bandwidth 5G
queue bulk@ix0 parent queue@ix0 bandwidth 5G default
match on ix0 set ( queue (bulk@ix0, ackn@ix0), prio (1,7) )

pfctl -nvf pf.conf
queue queue@ix0 on ix0 bandwidth 1G, max 1G
queue ackn@ix0 parent queue@ix0 on ix0 bandwidth 705M
queue bulk@ix0 parent queue@ix0 on ix0 bandwidth 705M default

pfctl -vvsq
queue queue@ix0 on ix0 bandwidth 1G, max 1G qlimit 50
queue ackn@ix0 parent queue@ix0 on ix0 bandwidth 705M qlimit 50
queue bulk@ix0 parent queue@ix0 on ix0 bandwidth 705M default qlimit 50



pf.conf with 6G on ix0:
queue queue@ix0 on ix0 bandwidth 6G max 6G
queue ackn@ix0 parent queue@ix0 bandwidth 3G
queue bulk@ix0 parent queue@ix0 bandwidth 3G default
match on ix0 set ( queue (bulk@ix0, ackn@ix0), prio (1,7) )

pfctl -nvf pf.conf
queue queue@ix0 on ix0 bandwidth 1G, max 1G
queue ackn@ix0 parent queue@ix0 on ix0 bandwidth 3G
queue bulk@ix0 parent queue@ix0 on ix0 bandwidth 3G default

pfctl -vvsq
queue queue@ix0 on ix0 bandwidth 1G, max 1G qlimit 50
queue ackn@ix0 parent queue@ix0 on ix0 bandwidth 3G qlimit 50
queue bulk@ix0 parent queue@ix0 on ix0 bandwidth 3G default qlimit 50



pf.conf with 4G on ix0:
queue queue@ix0 on ix0 bandwidth 4G max 4G
queue ackn@ix0 parent queue@ix0 bandwidth 2G
queue bulk@ix0 parent queue@ix0 bandwidth 2G default
match on ix0 set ( queue (bulk@ix0, ackn@ix0), prio (1,7) )

pfctl -nvf pf.conf
queue queue@ix0 on ix0 bandwidth 4G, max 4G
queue ackn@ix0 parent queue@ix0 on ix0 bandwidth 2G
queue bulk@ix0 parent queue@ix0 on ix0 bandwidth 2G default

pfctl -vvsq
queue queue@ix0 on ix0 bandwidth 4G, max 4G qlimit 50
queue ackn@ix0 parent queue@ix0 on ix0 bandwidth 2G qlimit 50
queue bulk@ix0 parent queue@ix0 on ix0 bandwidth 2G default qlimit 50