So I'm doing some testing with an SRX650 cluster (11.2R6.3) and am starting to see some odd throughput issues. Being that this is my first SRX cluster, I'm most likely overlooking something minor.
Physical setup is pretty basic: each SRX has multiple links connected into its own switch, as well as two cross-connects between the pair of SRXes for control and fabric. The switches are connected with a 4-port port channel. All interfaces are gig and negotiated to 1000m. Switches are Catalyst 4948s, if that matters.

SRX-node0 ------------ sw0
    |                   |
    |                   |
SRX-node1 ------------ sw1

I have two servers connected into sw0. When they're on the same VLAN, iperf UDP tests show 900 Mbits/sec and TCP tests show 940+ Mbits/sec... so far so good. Moving one box to another VLAN and a separate reth, so traffic now traverses srx-node0, traffic plummets: iperf UDP shows 500 Mbits/sec and TCP 317 Mbits/sec. Prior to setting up the cluster I tested via a single SRX and saw 900+ Mbits/sec for both TCP and UDP.

Both hosts are on the same switch and traversing node0, which is also on the same switch. Each VLAN terminates on its own reth.

Any suggestions as to where to look next?

_______________________________________________
juniper-nsp mailing list
[email protected]
https://puck.nether.net/mailman/listinfo/juniper-nsp
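For what it's worth, here are the standard Junos operational commands I'd start with for this kind of cluster throughput question: a sketch of a checklist, not a definitive diagnosis. The interface name reth0 is an assumption standing in for whichever reth the test traffic actually uses.

```
## Confirm cluster health and which node is primary for each redundancy group
show chassis cluster status
show chassis cluster interfaces

## Watch fabric-link counters while the iperf test runs; traffic crossing
## the fabric between nodes is a common cause of a large throughput drop
show chassis cluster statistics

## Check the reth (and its member links) for errors, drops, and negotiation
show interfaces reth0 extensive

## Verify sessions land on the node you expect
show security flow session summary
```

If the fabric counters climb during the test, traffic is being forwarded between nodes rather than handled entirely on node0, which would be a good place to dig next.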

