Yes. My HW ID 2 links were even worse than the ID 1s. The GUI via in-band management wouldn't even load, and I couldn't get the links to push more than about 140Mbps. Trango put out 3.1.5, but it didn't work on my radios for whatever reason. They finished up 3.2.0 (which runs on ID 2 OMUs only, and also won't talk to 3.1.5). They said they improved the internal switch/modem flow control. I'm able to get full throughput out of a 40MHz/256QAM link and a 56MHz/256QAM link now.

I haven't updated the ID 1s to 3.1.5 yet. I bench tested it on a couple of spare radios and in-band management (IBM) has issues: I get no ARP reply from the radios. They said they reproduced it in the lab. All of our ApexPlus links have only the data port cabled, so we have to use IBM, and I cannot have it break.
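If anyone else wants to bench test this before pushing firmware, here's roughly how I check for ARP replies: a quick Python/scapy sketch. The radio IP and interface name below are just placeholders for whatever your management subnet actually uses, not anything Trango-specific.

    # Minimal ARP reachability probe. Assumes scapy is installed and
    # the script runs as root. RADIO_IP and IFACE are placeholders.
    from scapy.all import ARP, Ether, srp

    RADIO_IP = "192.168.100.100"  # hypothetical in-band management IP
    IFACE = "eth0"                # hypothetical port facing the radio

    def arp_probe(ip, iface, timeout=2):
        # Broadcast an ARP who-has and collect any replies.
        pkt = Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=ip)
        answered, _ = srp(pkt, iface=iface, timeout=timeout, verbose=False)
        return [rcv.hwsrc for _, rcv in answered]

    if __name__ == "__main__":
        macs = arp_probe(RADIO_IP, IFACE)
        if macs:
            print("ARP reply from %s (%s)" % (RADIO_IP, macs[0]))
        else:
            print("No ARP reply from %s -- IBM looks broken" % RADIO_IP)

If that prints no reply while the same probe works against the radio's out-of-band port, it's the same IBM bug I'm seeing.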

On 3/6/2016 4:49 PM, Bill Prince wrote:
Are you saying there is some buffer bloat (or some other buffer issue) in the ApexPlus?

bp
<part15sbs{at}gmail{dot}com>

On 3/6/2016 1:42 PM, George Skorup wrote:
What about sync/GPS? And are you on 2.5 or 5ms framing?

Nate @ Blast and I had a similar issue. We had 450 + Ubiquiti + Trango ApexPlus. Throughput was fine on the PMP450. Directly attaching to the switch/router at the Trango link showed no apparent issues. Across a Rocket link, the clients behind it (some UBNT, FSK and 450) would see a burst of 10-12Mbps and then it would dwindle to 3-5Mbps. We swapped the Rockets for a set of ePMP. Throughput got much better with the link in ePTP (2.5ms) mode. So my first thought was that the Rocket link just sucked, and it did, but we were still getting some speed complaints and not even at peak utilization. Something else was going on.

And then Nate posted about very similar issues. He had a client on a 450 and an ePMP. Throughput on the 450 was fine, ePMP not so much. And really only when he had the traffic going across his Trango link. That's when Gino mentioned the ApexPlus ethernet packet buffer issue. The shorter frame duration would allow the Trango buffer to empty a little faster, hiding the problem just a little bit. Ultimately, Trango needed to get their buffer issue sorted out, and they did.
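To illustrate the frame duration effect, here's a toy Python model of a small ethernet buffer that only gets drained once per TDD frame. The buffer size and rates are made-up guesses for illustration, not Trango's actual numbers.

    # Toy model of a per-frame-drained ethernet buffer in a TDD radio.
    # BUF_BYTES and IN_RATE are illustrative assumptions only.
    BUF_BYTES = 32 * 1024     # assumed tiny packet buffer
    IN_RATE = 100e6 / 8       # 100 Mbps wire ingress, in bytes/sec

    def goodput_mbps(frame_ms):
        """Buffer fills for one frame, then the modem drains it."""
        arrived = IN_RATE * frame_ms / 1000.0   # bytes queued per frame
        delivered = min(arrived, BUF_BYTES)     # overflow gets dropped
        return delivered / (frame_ms / 1000.0) * 8 / 1e6

    for frame_ms in (2.5, 5.0):
        print("%.1f ms frame: ~%.0f Mbps goodput"
              % (frame_ms, goodput_mbps(frame_ms)))

With those guesses, the 2.5ms frame passes the full 100 Mbps because each frame's worth of arrivals fits in the buffer, while the 5ms frame caps out around 52 Mbps of raw goodput, and TCP reacting to the drops makes the real-world number far worse. Different numbers, same shape of problem.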

I wrote all that to ask this... what's farther upstream from this site?

On 3/6/2016 10:26 AM, Justin Wilson wrote:
So I have an issue. The tower is a 4-sector ePMP setup with two backhauls. We are seeing poor throughput when connected to the APs (all of them). Signals are great. Quality and capacity are great. When going through the AP we consistently see 4-5 megs of download out to a known speedtest server. If we plug into the same wired port that the AP (which gave us the poor throughput) is on, we can max out the 100 meg uplink. The issue only happens when going through the wireless. Here is what I know:


1. APs are set to 75/25. An SM link test gets 50meg x 12meg, so RF is good. Isolated an AP so only one client was on it. Same great results.

2. Firmware does not seem to be a factor. Can reproduce this on 2.6.1, 2.5.1, and 2.4.3.

3. Have replaced the POE injectors 3 times with injectors from different manufacturers.

4. Does not matter if it is DHCP or PPPoE. Hard wired is fine. Wireless stinks.


The AP is plugged into a POE injector which then plugs into a Mikrotik 2011. If I plug into the same exact port the APs plug into, speeds are great. The only things left are the patch cable to the POE, the cable to the AP, or the AP itself. I refuse to think 6 cables on the tower all do the exact same thing; the odds of that are Powerball-winning crazy. Speedtests from a laptop hooked to an SM to the 2011 are poor: 4-5 megs consistent with spikes up to 10-13, but all over the place. Acts like a negotiation problem. Have set the Mikrotik to auto and to 100 meg full; it only accepted 100 meg on auto.
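One way I try to tell a shaper (flat ceiling) from drops or duplex trouble (wild swings) is to log throughput per second rather than trusting one speedtest number. Rough Python sketch below; the sink host and port are placeholders for a box plugged into the 2011 running something like nc -lk 5001 > /dev/null:

    # Crude per-second TCP throughput sampler. A flat line suggests
    # rate limiting; big swings suggest drops or duplex trouble.
    # SINK is a placeholder for a machine behind the 2011.
    import socket, time

    SINK = ("10.0.0.1", 5001)   # hypothetical: nc -lk 5001 > /dev/null
    CHUNK = b"\x00" * 65536
    DURATION = 15               # seconds to run

    sock = socket.create_connection(SINK)
    start = last = time.time()
    sent_this_sec = 0
    while time.time() - start < DURATION:
        sock.sendall(CHUNK)
        sent_this_sec += len(CHUNK)
        now = time.time()
        if now - last >= 1.0:
            print("%5.1fs  %6.1f Mbps"
                  % (now - start, sent_this_sec * 8 / (now - last) / 1e6))
            sent_this_sec, last = 0, now
    sock.close()

Run it once from the laptop on the wire and once from behind an SM; the shape of the two traces tells you more than the averages do.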

Customers who have 5 meg packages or below don’t see this. Those with 10 meg packages are the ones seeing 4-5 meg speeds, and the 10 meg packages did work before.

Clients are mainly at 2.6.1, but we can reproduce this with 2.5 and 2.4.x SMs. Nothing has changed in the network, which tests out just fine up to the point it gets handed off to the ePMP.

Anyone run into this? Any thoughts?

Justin