On 30 December 2018 18:12:43 CET, Aaron1 <aar...@gvtc.com> wrote:
>With vMX I understand that as more performance is needed, more vcpu,
>network card(s) and memory are needed.  As you scale up, a single vcpu
>is still used for control plane, any additional vcpu's are used for
>forwarding plane.  The assignment of resources is automatic and not
>configurable.

I think this depends on which kind of vMX you use: the nested version allocates 
a fixed amount of vCPU and memory to the vCP machine, so adding more doesn't help 
there. If you run separate vCP and vFP machines, then more resources for the vCP 
help in an RR scenario and more resources for the vFP help in a vPE scenario 
(I don't know what the ceiling of either scenario is, though; can Junos make 
"good" use of 20 BGP threads, for example?).


>> On Dec 30, 2018, at 2:53 AM, Robert Hass <robh...@gmail.com> wrote:
>> 
>> Hi
>> I have few questions regarding vMX deployed on platform:
>> - KVM+Ubuntu as Host/Hypervisor
>> - server with 2 CPUs, 8 core each, HT enabled

If you want high throughput in a vPE-style deployment, disabling HT is recommended.
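
On a reasonably new kernel you can check and flip SMT/HT on the host without a 
trip to the BIOS; the sysfs knob below only appeared around 4.19, so on older 
kernels do it in the BIOS instead:

  # Is HT/SMT currently active?
  cat /sys/devices/system/cpu/smt/active
  # Disable it at runtime ("forceoff" additionally blocks re-enabling until reboot):
  echo off | sudo tee /sys/devices/system/cpu/smt/control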

>> - DualPort (2x10G) Intel X520 NIC (SR-IOV mode)
>> - DualPort Intel i350 NIC
>> - vMX performance-mode (SR-IOV only)
>> - 64GB RAM (4GB Ubuntu, 8GB vCP, 52GB vFPC)

I use vMX in the lab but haven't tried to test its performance. You might need 
to enable hugepages on your KVM host if you haven't already; KVM/QEMU supports 
backing guest memory with hugepages (I haven't used that feature myself).
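
If it helps, the gist of what I'd do on the host is roughly this (sizes are 
examples; size the pool to what the vFP actually needs, e.g. your 52G vFPC):

  # Reserve 1G pages at boot: add to GRUB_CMDLINE_LINUX in /etc/default/grub,
  # then run update-grub and reboot:
  #   default_hugepagesz=1G hugepagesz=1G hugepages=56
  # Or reserve 2M pages at runtime (less ideal for allocations this big):
  sudo sysctl -w vm.nr_hugepages=28672
  # Make sure hugetlbfs is mounted so qemu/libvirt can actually use the pool:
  sudo mount -t hugetlbfs hugetlbfs /dev/hugepages
  # Check what got reserved:
  grep -i huge /proc/meminfo

The guest also needs hugepage backing in its libvirt domain XML 
(<memoryBacking><hugepages/></memoryBacking>); I don't recall whether the vMX 
install scripts add that for you.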

>> - JunOS 18.2R1-S1.5 (but I can upgrade to 18.3 or even 18.4)
>> 
>> 1) vMX is using CPU-pinning technique. Can vMX use two CPUs for vFPC

Technically yes. If you manually assign cores from both physical CPUs to the vFP 
machine, it will look like one multi-core CPU to the vFP if you configure that 
VM in KVM to have only one CPU. However, this is generally a bad idea. When you 
have two physical CPUs with cores from both assigned to the same VM, you will 
take a NUMA locality performance penalty (cache misses). You should place all 
the vCPUs of the same VM on the same NUMA node (you're not dealing with TBs of 
memory here). High-performance VMs should use cores from the same physical CPU 
if possible, and HT should be disabled in the case of vMX. The vFP uses DPDK to 
continuously poll the NIC queues for new packets, so the cores allocated to NIC 
queue processing are locked at 99% CPU usage all the time, whether you have 1pps 
of traffic or 1Mpps. HT doesn't work very well when you want to lock cores like 
this and often adds a performance penalty due to the high rate of context 
switches.
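
To see where cores and the NIC actually live, and to pin the vFP accordingly, 
something like this (the domain name vfp-vmx1 and the core numbers are just 
examples; use whatever names your install created):

  # Host topology: which cores sit on which socket/NUMA node:
  lscpu | egrep 'Socket|Thread|NUMA'
  # Which NUMA node the X520 port hangs off (replace eth4 with your port):
  cat /sys/class/net/eth4/device/numa_node
  # Pin the vFP's vCPUs to cores on that same node:
  sudo virsh vcpupin vfp-vmx1 0 2
  sudo virsh vcpupin vfp-vmx1 1 3
  # ...and keep its memory there too:
  sudo virsh numatune vfp-vmx1 --nodeset 0 --live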

>>   Eg. machine with two CPUs, 6 cores each. Total 12 cores. Will vMX
>>   use secondary CPU for packet processing ?

As above, depending on how you configure the VM, it might see all the cores as 
belonging to the same CPU.
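
Easy enough to sanity-check: compare the topology libvirt hands the guest with 
what the guest reports (again, vfp-vmx1 is just an example domain name):

  # On the host, the <cpu><topology .../> element is what the guest will see:
  sudo virsh dumpxml vfp-vmx1 | grep -A 2 '<cpu'
  # Inside a Linux guest, lscpu then typically shows a single socket:
  lscpu | egrep 'Socket\(s\)|Core\(s\)|NUMA node\(s\)'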

>> 2) Performance mode for VFP requires cores=(4*number-of-ports)+3.
>>   So in my case (2x10GE SR-IOV) it's (4*2)+3=11. Will vMX count the
>>   cores resulting from HT (not physical) in that case?

If you have 4 physical cores (8 with HT) and allocate 6 to a VM, it just sees 6 
cores and doesn't differentiate between "real" cores and HT cores, but as above, 
HT is generally disabled for high-performance VMs. You can oversubscribe with 
KVM: if your host has 4 cores, with or without HT, you can have two VMs with 4 
vCPUs each, but then they'll be fighting for physical CPU resources. In the case 
of vMX you don't want to oversubscribe because, as above, DPDK locks the NIC 
queue cores at 99%.
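
Spelling out the arithmetic against your box (2 CPUs x 8 cores), keeping in mind 
that nproc counts HT siblings as cores, which is exactly why HT muddies the 
picture:

  PORTS=2
  VFP_CORES=$(( 4 * PORTS + 3 ))   # (4*2)+3 = 11 for performance mode
  VCP_CORES=1
  echo "vMX wants $(( VFP_CORES + VCP_CORES )) cores, host shows $(nproc)"
  # HT off: 16 real cores, so 11+1 fits with no oversubscription.
  # HT on: nproc reports 32, but half of those are siblings, not real cores.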

>> 3) How JunOS Upgrade process looks like on vMX ? Is it regular
>>   request system software add ...

Sorry don't know :)

I often make notes and never get around to publishing them online anywhere. 
Nearly 2 years ago (where did the time go?) I was testing CSR1000v performance. 
This link might have some useful info under the HugePages and Virtualization 
sections: 
https://docs.google.com/document/d/1YUwU3T5GNgmi6e2JwgViFRO_QoyUXiaDGnA-cixAaRY/edit?usp=drivesdk

This page has some notes on NUMA affinity; it's important (IMO) to understand 
why it causes problems: 
https://null.53bits.co.uk/index.php?page=numa-and-queue-affinity


Cheers,
James.