Hi

On 21/08/2019 3:32 pm, adamv0...@netconsultings.com wrote:
> Thank you, much appreciated.
> Out of curiosity what latency you get when pinging through the vMX please?

It's less than 1/10th of a millisecond (while routing roughly 3 Gbit/s of traffic, and that via a GRE tunnel running over IPsec terminated on the vMX). I haven't done more testing to get exact figures, though, as this is good enough for my needs.

I am actually curious, though: why not use the vmx.sh script to start/stop it? I don't think JTAC will support more than basic troubleshooting with that configuration, but I could be wrong.
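
From memory the standard invocations look roughly like the below (the flags and config path are as I recall them from the vMX getting-started docs, so double-check them against your bundle):

    cd /opt/vmx                                    # wherever the vMX bundle is unpacked
    ./vmx.sh -lv --install --cfg config/vmx.conf   # deploy and start the vCP/vFP
    ./vmx.sh --status --cfg config/vmx.conf        # check it is running
    ./vmx.sh --stop --cfg config/vmx.conf          # shut it down again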

The only thing that annoys me slightly with vmx.sh is that the management interface on the host used for OOB access to the vFP/vRE loses its IPv6 address when the IPs are moved to the bridge interface it creates. It's not a big deal, as I use a different interface for host management anyway and IPv6 continues to work fine there.
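
If you ever do need v6 on that bridge, re-adding the address by hand after vmx.sh has run should do it; something along these lines (I haven't needed this myself, and the bridge name and addresses below are examples only):

    # Re-add the IPv6 address/route that vmx.sh does not carry over to the
    # external bridge it creates (names and addresses are examples only).
    ip -6 addr add 2001:db8::10/64 dev br-ext
    ip -6 route add default via 2001:db8::1 dev br-ext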

If you are doing a new deployment I strongly recommend you jump to 19.1R1 or higher. The reason is that the Juniper-supplied drivers for i40e (and ixgbe) are no longer required (in fact they are deprecated). With every release before 19.1R1 I had constant issues with the vFP crashing, and the closest thing to a fix I got was a software package that would restart the vFPC automatically. When the crash occurred, the host's kernel log would show that a PF reset had occurred. This happened across multiple Ubuntu and CentOS releases. Since deploying 19.1R1 with the latest Intel-supplied i40e and iavf (the replacement for i40evf) drivers it has been stable for me.
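
A quick way to confirm which driver build you are actually running (the interface name below is just an example):

    # Driver and version bound to the physical port, plus the versions of the
    # Intel out-of-tree modules if they are installed.
    ethtool -i enp59s0f0
    modinfo i40e | grep -i ^version
    modinfo iavf | grep -i ^version

    # The PF resets were visible in the host kernel log, e.g.:
    dmesg -T | grep -i reset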

Since deploying 19.1R1, on startup I create the VFs and mark them as trusted myself instead of letting the vmx.sh script handle it. Happy to supply the startup script I made if it's helpful; a rough sketch of the idea is below.
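
The gist of it is just the standard sysfs/iproute2 steps, roughly like this (PF names and VF counts are examples, and the spoofchk/MTU lines are typical extras rather than necessarily what's in my script):

    #!/bin/bash
    # Rough sketch only, not the exact script: create one VF per physical port
    # for the vFP and mark it trusted, which is what vmx.sh would otherwise do.
    for pf in enp59s0f0 enp59s0f1; do
        echo 0 > /sys/class/net/$pf/device/sriov_numvfs   # clear any existing VFs
        echo 1 > /sys/class/net/$pf/device/sriov_numvfs   # create the VF
        ip link set dev $pf vf 0 trust on                 # mark it trusted for the vFP
        ip link set dev $pf vf 0 spoofchk off             # typical extra, adjust as needed
        ip link set dev $pf mtu 9500                      # ditto
    done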
