I ran 'echo NO_NONTASK_CAPACITY > /sys/kernel/debug/sched_features' in both guests.
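To confirm the write took effect, the flag can be read back from the same
file (a minimal sanity check; assumes debugfs is already mounted):

  # after disabling, the feature list should contain the NO_ form
  grep -qw NO_NONTASK_CAPACITY /sys/kernel/debug/sched_features \
      && echo "NONTASK_CAPACITY disabled"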
Results:
VM1: STA disabled -- no change, still a little below the expected 90%.
VM2: STA enabled -- the result changed, but it is still bad. Hard to say
whether it is better or worse; it tends to get stuck at the quarters
(100%, 75%, 50%, 25%).
Output is attached.
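
For reference, the per-VM load is roughly the following (a minimal bash
sketch of what I described, not the exact tool; the cgroup v1 paths, the
group names and the 10 s measurement window are illustrative):

  # 10%/90% cpu.shares split between two groups, as in the runs above
  mkdir -p /sys/fs/cgroup/cpu/low /sys/fs/cgroup/cpu/high
  echo 128  > /sys/fs/cgroup/cpu/low/cpu.shares     # ~10%
  echo 1152 > /sys/fs/cgroup/cpu/high/cpu.shares    # ~90%

  spin() {   # busy-loop for 10 s and report iterations completed
      local n=0 end=$((SECONDS + 10))
      while (( SECONDS < end )); do (( n++ )); done
      echo "$1 $n"
  }

  # one spinner per vCPU in each group so the guest stays saturated
  echo $$ > /sys/fs/cgroup/cpu/low/tasks
  for i in $(seq "$(nproc)"); do spin low & done
  echo $$ > /sys/fs/cgroup/cpu/high/tasks
  for i in $(seq "$(nproc)"); do spin high & done
  wait

Under fair scheduling the summed "low" counts should come out at about
1/9 of the summed "high" counts.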

Thanks,
Alexey

On Tue, Oct 20, 2015 at 5:39 AM, Wanpeng Li <kernel...@gmail.com> wrote:
> Cc Peterz,
> 2015-10-20 5:58 GMT+08:00 Alexey Makhalov <makhal...@gmail.com>:
>>
>> Hi,
>>
>> I benchmarked scheduler fairness with steal time accounting (STA)
>> enabled in KVM, and the results are really interesting.
>>
>> It looks like STA gives worse scheduler fairness than having it
>> disabled (the no-steal-acc cmdline param).
>>
>> I created a benchmark. The main idea: two cgroups with a cpu.shares
>> proportion of 1:9, run identical work in both groups, and expect the
>> amount of work done to follow the same 1:9 proportion. Condition: CPU
>> overcommit.
>>
>> On bare metal it is fair, +/- a few percent of fluctuation.
>> On KVM with no STA it's less fair. With STA enabled the results are
>> ugly! Once again: in a CPU overcommit situation.
>>
>>
>> Host: Ubuntu 14.04, Intel i5 (2x2 CPUs).
>> 2 VMs (4 vCPUs each) running in parallel: 2:1 CPU overcommit.
>>
>> Each VM runs the benchmark:
>> the cgroups' cpu.shares proportion is 128/1152 (10%/90%); the work is
>> spinning in a loop, counting the number of cycles completed.
>
>
> Could you try whether 'echo NO_NONTASK_CAPACITY >
> /sys/kernel/debug/sched_features' in the guests helps?
>
> Regards,
> Wanpeng Li
--- attached output (VM2, STA enabled) ---
90% 28539
75% 31907
76% 30348
75% 43677
75% 29765
63% 30351
74% 28535
70% 29590
65% 31281
87% 29476
75% 29505
75% 29737
75% 29534
75% 29627
89% 30411
75% 29069
74% 29786
74% 29776
74% 29633
74% 29470
74% 29406
90% 30274
90% 29615
96% 30497
100% 29263
100% 29241
100% 29748
100% 29790
99% 29769
74% 66239
52% 118436
50% 117409
50% 118167
50% 118990
50% 38479
48% 29641
50% 29757
50% 29754
50% 29688
50% 29671
50% 29745
50% 29831
50% 29596
50% 29645
50% 29661
50% 29567
50% 29511
50% 29763
50% 29919
51% 28823
51% 29944
50% 29501
50% 29406
50% 29410
49% 29519
51% 30109
49% 29586
49% 29621
49% 29579
43% 80183
49% 118185
49% 113732
45% 29698
25% 29528
24% 30078
36% 29716
32% 28976
43% 77606
19% 131592
0% 39503
12% 29709
25% 29717
24% 29720
25% 29695
25% 29703
25% 29570
25% 29492
25% 29790
25% 29677
25% 29451
25% 29536
25% 29476
25% 29613
24% 30045
24% 30227
37% 52553
43% 29986
50% 29741
49% 30003
50% 29982
35% 29907
39% 30156
48% 30590
66% 58939
--- benchmark was stopped on the first VM; VM1 was idle from here ---
90% 58977
89% 58952
90% 58945
89% 58945
90% 58948
89% 59001
89% 58966
90% 58966
90% 59005
89% 58998
90% 58998
90% 59010
90% 59028
90% 59070
90% 58558
90% 58948
89% 59080
89% 58601
90% 58385
90% 59003
89% 57695
90% 58843
89% 59104
90% 59006
90% 57110
--- to here ---
97% 30300
100% 29335
100% 29541
100% 29609
100% 29880
100% 29925
100% 29517
100% 29723
100% 28467
99% 29597
100% 29817
100% 29772
100% 29761
100% 29547
100% 29556
100% 29848
100% 29713
100% 29616
100% 29659
100% 29709
100% 29757
100% 29806
100% 30357
100% 29784
100% 29812
100% 29679
100% 29839
100% 29633
100% 29650
100% 29730
100% 29964
100% 29993
100% 29899
100% 29951
100% 29924
100% 29938
100% 29784
100% 30043
100% 29308
99% 30625
95% 29907
99% 29811
100% 29626
100% 30054
100% 29744
100% 29693
100% 30008
100% 29935
100% 29825
100% 29684
100% 30010
100% 30163
93% 51597
89% 58972
89% 58692
90% 57975
100% 27946
100% 29624
100% 30365
98% 30468
99% 30971
100% 30907
100% 28079
100% 30074
100% 29001
100% 30366
99% 29814
100% 30405
97% 29866
100% 29003
100% 28545
100% 29693
100% 30013
100% 28179
100% 28644
100% 30081
100% 28191
100% 31325
100% 30223
100% 29848
100% 30057
100% 29675
100% 29859
100% 28160
100% 29679
100% 29689
100% 29727
100% 29715
100% 30547
100% 29281
100% 29761
100% 29596
100% 29797
100% 29664
100% 29768
100% 29708
100% 29614
100% 29627
100% 30158
100% 29801
100% 28569
100% 29118
100% 30540
100% 29335
100% 29188
99% 30413
100% 29438
100% 28382
100% 29492
100% 28555
100% 29700
99% 30374
75% 29759
75% 29954
75% 29588
75% 29540
75% 29616
75% 29890
75% 29457
75% 29466
75% 29724
74% 29395
74% 29613
75% 29776
75% 29455
75% 29788
75% 29592
75% 29679
75% 29749
75% 29715
75% 29992
75% 29271
73% 30173
74% 29480
75% 29415
75% 28416
75% 29786
75% 29086
75% 29293
75% 29670
75% 29898
75% 29756
75% 29498
75% 29946
75% 29328
75% 29120
75% 29821
75% 29765
75% 29673
75% 29682
75% 29657
74% 30045
75% 29641
75% 29662
75% 29595
75% 29635
75% 29720
73% 30315
75% 29589
75% 29607
75% 29629
75% 29738
75% 30068
72% 28898
75% 26530
74% 30075
73% 29182
74% 28506
73% 28135
74% 29083
75% 29609
73% 27956
74% 28921
74% 29045
74% 29576
74% 29228
74% 29494
74% 29218
73% 27872
74% 29787
74% 29633
68% 32062
98% 31992
98% 32120
90% 32984
98% 32568
98% 32307
97% 32300
98% 32262
99% 30436
94% 30023
96% 32511
99% 31075
99% 30112
100% 29835
100% 30079
100% 29980
100% 29954
100% 29771
100% 29993
100% 29796
100% 29895
100% 29748
100% 29846
100% 29950
100% 29900
100% 29871
100% 29826
100% 29912
100% 29892
100% 29975
100% 29889
100% 29909
100% 29789
100% 30011
100% 29810
100% 29788
100% 30089
100% 30807
100% 29916
100% 29806
100% 29807
98% 30052
100% 29610
100% 29595
100% 30051
100% 29552
99% 30988
100% 28300
100% 30214
100% 31421
100% 31879
100% 30244
100% 30461
100% 26227
100% 32101
100% 31911
100% 28771
100% 29708
100% 28853
100% 29722
100% 29518
100% 29798
100% 28860
100% 29670
100% 30006
100% 30149
100% 29480
100% 29508
100% 29624
100% 29798
100% 30669
100% 29555
100% 29566
100% 29495
100% 29493
100% 29571
100% 29510
100% 29635
100% 29655
100% 29516
100% 29507
100% 29683
100% 29508
100% 29526
100% 29633
100% 29580
100% 29583
100% 29457
100% 29605
100% 29658
100% 29558
100% 29511
100% 29392
100% 29519
100% 29349
100% 29505
100% 39608
