Does upstream know about this?
Stefan
Excuse my typos, sent from my mobile phone.
> On 12.05.2016 at 12:51, Wolfgang Bumiller wrote:
>
> The default behavior introduced by kernel commit aca6ff29c
> (KVM: dynamic halt-polling) causes a spike in cpu usage and
> massive performance degradation with virtio network under
> network load.
> 2M : /run/hugepages/kvm
> 1G : /run/hugepages/kvm_1GB
>
> for example
>
> (It's possible to have /dev/hugepages and /run/hugepages/kvm at the same
> time, as the files are allocated by the process using the pages, so there is
> no conflict)
would be OK for me.
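(For reference, a two-mount setup like the one proposed above could be created roughly as follows; pagesize= is the standard hugetlbfs mount option, the paths are simply the ones suggested, and this is an untested sketch:

mkdir -p /run/hugepages/kvm /run/hugepages/kvm_1GB
mount -t hugetlbfs -o pagesize=2M hugetlbfs /run/hugepages/kvm
mount -t hugetlbfs -o pagesize=1G hugetlbfs /run/hugepages/kvm_1GB

The 1G mount of course requires 1GB hugepages to have been reserved.)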
> a simple
>
> echo x >
> /sys/devices/system/node/nodeX/hugepages/hugepages-1048576kB/nr_hugepages
> echo x >
> /sys/devices/system/node/nodeX/hugepages/hugepages-2048kB/nr_hugepages
>
> is enough (needs to be done for each NUMA node)
>
> free hugepages can be checked
>
> cat
>
> Technically, it's possible to allocate, for example, 1 page of 1GB and 250
> pages of 2MB, with 2 mount points.
>
> But it's a little bit more complex.
>
> Opinions ?
Why not. Also, allocation of 1GB pages may fail due to fragmentation, while
allocation of 2MB pages still works?
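(A minimal sketch of the per-node allocation plus a check, untested; the node glob and page counts are only placeholders:

for node in /sys/devices/system/node/node*; do
    echo 2   > $node/hugepages/hugepages-1048576kB/nr_hugepages
    echo 512 > $node/hugepages/hugepages-2048kB/nr_hugepages
done
# read back how many pages were actually reserved and how many are free;
# on fragmented memory the 1GB count can come back lower than requested
grep '' /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages
grep '' /sys/devices/system/node/node*/hugepages/hugepages-*/free_hugepages
)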
>> who mounts that?
# mount | grep huge
cgroup on /sys/fs/cgroup/hugetlb type cgroup
(rw,nosuid,nodev,noexec,relatime,hugetlb,release_agent=/run/cgmanager/agents/cgm-release-agent.hugetlb,nsroot=/)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
cgmanager.service mounts the hugetlb cgroup.
>> hugetlb on
I have tested dynamic allocation/deallocation of 2M and 1G hugepages;
it's working for me (at least if memory is not too fragmented).
A simple

echo x > /sys/devices/system/node/nodeX/hugepages/hugepages-1048576kB/nr_hugepages
echo x > /sys/devices/system/node/nodeX/hugepages/hugepages-2048kB/nr_hugepages

is enough (needs to be done for each NUMA node).
Another question:
currently my patch manages only one size of hugepages,
i.e. one /dev/hugepages mount, with either 2MB or 1GB hugepages.
That means that if the user wants to use 1GB hugepages, he can only define VM
memory as a multiple of 1GB (e.g. 1.5G will not work).
Technically, it's possible to allocate, for example, 1 page of 1GB plus 250
pages of 2MB (1GB + 500MB = 1.5GB), with 2 mount points, but it's a little bit
more complex.
> but I have a default
>
> hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
I wonder what program creates that? Will try to find out ...
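(On a systemd host that default mount usually comes from the dev-hugepages.mount unit; easy enough to check with something like:

findmnt /dev/hugepages
systemctl status dev-hugepages.mount
)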
>> # mount | grep huge
>> cgroup on /sys/fs/cgroup/hugetlb type cgroup
>> (rw,nosuid,nodev,noexec,relatime,hugetlb,release_agent=/run/cgmanager/agents/cgm-release-agent.hugetlb,nsroot=/)
>> hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
>> hugetlb on /run/lxcfs/controllers/hugetlb type cgroup
looking at the old article
https://lwn.net/Articles/376606/
vm.nr_overcommit_hugepages
"Knowing the exact huge page requirements in advance may not be possible. For
example, the huge page requirements may be expected to vary throughout the
lifetime of the system. In this case, the maximum
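(The vm.nr_overcommit_hugepages knob mentioned there is an ordinary sysctl; illustrative value only:

# allow up to 256 surplus hugepages of the default size to be allocated on demand
sysctl -w vm.nr_overcommit_hugepages=256
)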
>> Normally hugepages are set up via sysctl.conf. The default size is always 2 MB.
>> No need for kernel command-line editing.
For 2MB yes; for 1GB hugepages I'm not sure you can enable them through sysctl.
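(Rough illustration of the difference, values are placeholders only:

# 2MB pages: can be reserved from sysctl.conf
vm.nr_hugepages = 1024

# 1GB pages: usually reserved on the kernel command line instead
GRUB_CMDLINE_LINUX_DEFAULT="quiet default_hugepagesz=1G hugepagesz=1G hugepages=4"

Runtime allocation of 1GB pages via sysfs can work too, but only as long as
memory is not too fragmented, as noted elsewhere in this thread.)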
>>Why exactly are you using hugepages?
needed for ovs + vhost-user + dpdk
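(Background: vhost-user requires the guest RAM to be a shared, file-backed mapping, which with DPDK in practice means hugetlbfs. A hypothetical qemu invocation, with socket path and sizes made up for illustration:

qemu-system-x86_64 ... \
  -object memory-backend-file,id=mem0,size=4G,mem-path=/dev/hugepages,share=on \
  -numa node,memdev=mem0 \
  -chardev socket,id=chr0,path=/var/run/openvswitch/vhost-user0 \
  -netdev type=vhost-user,id=net0,chardev=chr0 \
  -device virtio-net-pci,netdev=net0
)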
>> Can KVM handle hugepages? Normally hugepages implies no KSM, isn't that right?
On Thu, May 12, 2016 at 3:05 PM, Wolfgang Bumiller
wrote:
> On Thu, May 12, 2016 at 02:30:11PM +0300, Igor Vlasenko wrote:
>> On Thu, May 12, 2016 at 2:08 PM, Wolfgang Bumiller
>> wrote:
>> > Could you review the following modified version of your
>> who mounts that? Do I still need the above setup? How do I know the number
>> of required hugepages in advance? How can we make that more convenient for
>> the user?
It's quite possible to increase/decrease hugepages online, at least for 2MB
hugepages, through the /proc and /sys interfaces:

echo X > /proc/sys/vm/nr_hugepages
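(Whether the allocation succeeded can be verified in /proc/meminfo, e.g.:

grep -i '^hugepages' /proc/meminfo    # HugePages_Total, HugePages_Free, ...
)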
On Thu, May 12, 2016 at 02:30:11PM +0300, Igor Vlasenko wrote:
> On Thu, May 12, 2016 at 2:08 PM, Wolfgang Bumiller
> wrote:
> > On Thu, May 12, 2016 at 11:42:29AM +0300, Igor Vlasenko wrote:
> >> On Wed, May 11, 2016 at 10:56 PM, Igor Vlasenko
> >>
Normally hugepages are set up via sysctl.conf. The default size is always 2 MB.
No need for kernel command-line editing.
Why exactly are you using hugepages? Can KVM handle hugepages? Normally
hugepages implies no KSM, isn't that right?
On Thu, May 12, 2016 at 12:57 PM, Dietmar Maurer
On Thu, May 12, 2016 at 2:08 PM, Wolfgang Bumiller
wrote:
> On Thu, May 12, 2016 at 11:42:29AM +0300, Igor Vlasenko wrote:
>> On Wed, May 11, 2016 at 10:56 PM, Igor Vlasenko wrote:
>> > This is an improved version of my previous patch
>> > [ support for udev-style physical interface names (like enp3s0),
>> > http://pve.proxmox.com/pipermail/pve-devel/2016-May/020958.html ]
applied
On Thu, May 12, 2016 at 11:42:29AM +0300, Igor Vlasenko wrote:
> On Wed, May 11, 2016 at 10:56 PM, Igor Vlasenko wrote:
> > This is an improved version of my previous patch
> > [ support for udev-style physical interface names (like enp3s0),
> >
> host configuration
> --
> hugepages need to be allocated at boot
>
> for 4GB of 2M hugepages
>
> /etc/default/grub
> -
> GRUB_CMDLINE_LINUX_DEFAULT="quiet hugepagesz=2M hugepages=2048"
>
> /etc/fstab
> --
> hugetlbfs /dev/hugepages hugetlbfs
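(For completeness, a typical full fstab entry would look something like

hugetlbfs /dev/hugepages hugetlbfs defaults 0 0

with the exact mount options depending on the setup, e.g. pagesize= for
non-default page sizes.)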
The default behavior introduced by kernel commit aca6ff29c
(KVM: dynamic halt-polling) causes a spike in cpu usage and
massive performance degradation with virtio network under
network load. This patch changes the newly introduced kvm
module parameters to reflect the old behavior.
---
Makefile
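(The same knobs are also exposed as kvm module parameters at runtime; as a sketch only, the value 0 here simply disables halt polling and is not necessarily what the patch sets:

echo 0 > /sys/module/kvm/parameters/halt_poll_ns

or persistently via e.g. /etc/modprobe.d/kvm.conf:

options kvm halt_poll_ns=0
)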
I prefer the following code:
my @f100 = sort @{$ifaces->{vmbr0}->{families}};
die "invalid families defined for vmbr0"
if (scalar(@f100) != 2) || ($f100[0] ne 'inet') || ($f100[1] ne 'inet6');
> +# Compare two arrays of strings
> +sub strarray_equals($$) {
> +my ($left, $right) = @_;
> +
applied
(This is only a minor fix since these functions are run
while chrooted into the container directory anyway.)
---
src/PVE/LXC/Setup/Base.pm | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/src/PVE/LXC/Setup/Base.pm b/src/PVE/LXC/Setup/Base.pm
index d54c0cd..927f779
On Wed, May 11, 2016 at 10:56 PM, Igor Vlasenko wrote:
> This is an improved version of my previous patch
> [ support for udev-style physical interface names (like enp3s0),
> http://pve.proxmox.com/pipermail/pve-devel/2016-May/020958.html ]
> thanks to Wolfgang.
Yesterday I
applied
applied