Re: [pve-devel] [PATCH] add numa options

2015-01-06 Thread Alexandre DERUMIER
Hi,

>>As I have a VM running MS-SQL Server (with 246 GB of RAM dedicated to MS-SQL
>>Server), the DBA says that MS-SQL Server can manage its own NUMA placement
>>better than QEMU, and as I guess many other applications can also manage
>>their own NUMA placement better than QEMU, I would like to request that the
>>PVE GUI offer an option to enable or disable the automatic management of
>>NUMA, while still keeping the possibility of live migration.

I'm not sure I understand what you mean by
"MS-SQL Server can manage its own NUMA placement better than QEMU".

NUMA nodes are not processes; NUMA is an architecture that groups CPUs with
local memory banks for fast memory access.


There are two parts:

1) Currently, QEMU exposes the virtual NUMA nodes to the guest
(each NUMA node = X cores with X amount of memory).

With the latest patches this can be enabled simply with numa: 1
(it will create one NUMA node per virtual socket, and split the RAM amount between
the nodes).


Or, if you want to customize memory access, cores per node, or map specific
virtual NUMA nodes to specific host NUMA nodes,
you can do it with
numa0: ...,
numa1: ...
using "cpus=<...>[,hostnodes=<...>][,policy=<...>]"


But it is always the application inside the guest that manages its own memory
access.


2) With kernel 3.10 we now also have automatic NUMA balancing on the host side.
It will try, where possible, to map the virtual NUMA nodes onto host NUMA nodes.

You can disable this feature with "echo 0 > /proc/sys/kernel/numa_balancing".
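For reference, a quick way to check and toggle it on the host (same /proc path
as above; the change is not persistent across reboots):

cat /proc/sys/kernel/numa_balancing       # 1 = enabled, 0 = disabled
echo 0 > /proc/sys/kernel/numa_balancing  # disable
echo 1 > /proc/sys/kernel/numa_balancing  # re-enable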


So from my point of view, numa: 1 + auto NUMA balancing should already give you
good results,
and it allows live migration between hosts with different NUMA architectures.


With only one VM, you could also try to manually map the virtual nodes to
specific host nodes.

I'm interested in seeing results for both methods (maybe you want the latest
qemu-server deb from git?).



I plan to add a GUI for part 1.




- Original Message -
From: "Cesar Peschiera" 
To: "aderumier" , "dietmar" 
Cc: "pve-devel" 
Sent: Tuesday, January 6, 2015 06:35:15
Subject: Re: [pve-devel] [PATCH] add numa options

Hi Alexandre and developer team,

I would like to request a feature for the next release of pve-manager:

As I have a VM running MS-SQL Server (with 246 GB of RAM dedicated to MS-SQL
Server), the DBA says that MS-SQL Server can manage its own NUMA placement
better than QEMU, and as I guess many other applications can also manage their
own NUMA placement better than QEMU, I would like to request that the PVE GUI
offer an option to enable or disable the automatic management of NUMA, while
still keeping the possibility of live migration.

Moreover, if you can add such a feature, I will be able to run a test with
MS-SQL Server to find out which of the two options gives better results, and
publish it (with the wait times for each case).

@Alexandre: 
Moreover, with your temporary patches for managing NUMA, in MS-SQL Server I saw
results come back two to three times faster (which is fantastic, a great
difference). But as I have not yet finished the tests (there are still some
changes to make in the hardware BIOS, HugePages managed by Windows Server,
etc.), I have not yet published a detailed summary of them. I guess I will do
it soon (I depend on third parties, and the PVE host must not lose cluster
communication).

And speaking of losing cluster communication: since I enabled the "I/OAT DMA
engine" in the hardware BIOS, the node has never again lost cluster
communication, but I must do some extensive testing to confirm it.

Best regards 
Cesar 

- Original Message - 
From: "Alexandre DERUMIER"  
To: "Dietmar Maurer"  
Cc:  
Sent: Tuesday, December 02, 2014 8:17 PM 
Subject: Re: [pve-devel] [PATCH] add numa options 


> Ok, 
> 
> Finally I found the last pieces of the puzzle: 
> 
> to have auto NUMA balancing, we just need: 
> 
> 2 sockets - 2 cores - 2 GB RAM 
> 
> -object memory-backend-ram,size=1024M,id=ram-node0 
> -numa node,nodeid=0,cpus=0-1,memdev=ram-node0 
> -object memory-backend-ram,size=1024M,id=ram-node1 
> -numa node,nodeid=1,cpus=2-3,memdev=ram-node1 
> 
> Like this, the host kernel will try to balance the NUMA nodes. 
> This command line also works if the host doesn't support NUMA. 
> 
> 
> 
> Now, if we want to bind guest NUMA nodes to specific host NUMA nodes: 
> 
> -object 
> memory-backend-ram,size=1024M,id=ram-node0,host-nodes=0,policy=preferred 
> -numa node,nodeid=0,cpus=0-1,memdev=ram-node0 
> -object 
> memory-backend-ram,size=1024M,id=ram-node1,host-nodes=1,policy=bind \ 
> -numa node,nodeid=1,cpus=2-3,memdev=ram-node1 
> 
> This requires that host-nodes=X exists on the physical host, 
> and also needs the qemu-kvm --enable-numa flag. 
> 
> 
> 
> So, 
> I think we could add: 
> 
> numa: 0|1 
> 
> which generates the first config: create one NUMA node per socket, and share 
> the RAM across the nodes 
> 
>

Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and VMs turns off

2015-01-06 Thread Alexandre DERUMIER
>>I guess that with my Dell manuals and a look at your configuration, I will 
>>be able to do it well. 

http://www.dell.com/downloads/global/products/pwcnt/en/app_note_6.pdf

PowerConnect 52xx: 
console# config 
console(config)# ip igmp snooping 
console(config)# ip igmp snooping querier 

Cisco is similar to this; it's really simple.

You can also enable it on your routers if they support it.


Try to have at least 2 queriers on your network for failover
(the master querier election is done automatically).
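For comparison, a rough sketch of the equivalent on a Cisco IOS switch (exact
commands vary by model and IOS version, so treat this only as a starting point;
on many models snooping is already on by default):

Switch# configure terminal
Switch(config)# ip igmp snooping
Switch(config)# ip igmp snooping querier
Switch(config)# end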


- Original Message -
From: "Cesar Peschiera" 
To: "aderumier" 
Cc: "pve-devel" 
Sent: Tuesday, January 6, 2015 05:41:49
Subject: Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and VMs turns off

Many thanks, Alexandre!!! It is the rule I had been searching for for a long
time; I will add it to the rc.local file. 

Moreover, if you can: as I need to permit multicast for some Windows server
VMs, for workstations on the local network, and for the PVE nodes, can you show
me the configuration of your managed switch in terms of IGMP snooping and
querier? (Dell managed switches have configurations very similar to Cisco.) I
ask because I have no practice with this exercise and I need a model as a
starting point. 

I guess that with my Dell manuals and a look at your configuration, I will be
able to do it well. 

Best regards 
Cesar 

- Original Message - 
From: "Alexandre DERUMIER"  
To: "Cesar Peschiera"  
Cc: "dietmar" ; "pve-devel"  
Sent: Tuesday, January 06, 2015 12:37 AM 
Subject: Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and 
VMsturns off 


>>And about your suggestion: 
>>-A PVEFW-HOST-IN -s yournetwork/24 -p udp -m addrtype --dst-type MULTICAST -m udp --dport 5404:5405 -j RETURN 

Note that this is the rule for HOST-IN; 
if you want to adapt it for forwarded traffic you can use 

-A FORWARD -s yournetwork/24 -p udp -m addrtype --dst-type MULTICAST -m udp --dport 5404:5405 -j DROP 

>>1) Will such a rule keep the cluster communication away from the VMs? 
>>2) Will such a rule avoid interfering with the normal use of the IGMP 
>>protocol by the Windows systems in the VMs? 

This will block multicast traffic on UDP ports 5404:5405 (the corosync default 
ports) coming from your source network. 

>>3) If both answers are yes, where should I put the rule that you suggest? 

Currently it's not possible to do this with the Proxmox firewall, 
but you can add it in rc.local, for example. 

iptables -A FORWARD -s yournetwork/24 -p udp -m addrtype --dst-type MULTICAST -m udp --dport 5404:5405 -j DROP 

The Proxmox firewall doesn't override custom rules. 
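For illustration, a minimal sketch of how that rc.local entry could look
(192.0.2.0/24 is a placeholder for your own cluster network):

#!/bin/sh -e
# /etc/rc.local - block corosync multicast (UDP 5404:5405) from being forwarded to the VMs
iptables -A FORWARD -s 192.0.2.0/24 -p udp -m addrtype --dst-type MULTICAST -m udp --dport 5404:5405 -j DROP
exit 0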


- Mail original - 
De: "Cesar Peschiera"  
À: "aderumier" , "dietmar"  
Cc: "pve-devel"  
Envoyé: Mardi 6 Janvier 2015 00:09:17 
Objet: Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and 
VMsturns off 

Hi to all 

Recently I verified at the company that IGMP is necessary for the VMs (tested
with tcpdump). The company has Windows Servers as VMs and several Windows
workstations on the local network, so I can tell you that I need the IGMP
protocol enabled in some VMs so that the Windows systems in the company work
perfectly. 

And about your suggestion: 
-A PVEFW-HOST-IN -s yournetwork/24 -p udp -m addrtype --dst-type MULTICAST -m udp --dport 5404:5405 -j RETURN 

I would like to ask some questions: 
1) Will such a rule keep the cluster communication away from the VMs? 
2) Will such a rule avoid interfering with the normal use of the IGMP protocol 
by the Windows systems in the VMs? 
3) If both answers are yes, where should I put the rule that you suggest? 


- Original Message - 
From: "Alexandre DERUMIER"  
To: "dietmar"  
Cc: "pve-devel"  
Sent: Monday, January 05, 2015 6:18 AM 
Subject: Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and 
VMsturns off 


>>>Following rule on your pve nodes should prevent igmp packages flooding 
>>>your bridge: 
>>>iptables -t filter -A FORWARD -i vmbr0 -p igmp -j DROP 
>>> 
>>>If something happens you can remove the rule this way: 
>>>iptables -t filter -D FORWARD -i vmbr0 -p igmp -j DROP 
> 
> Just be careful that it'll block all IGMP, so if you need multicast 
> inside your VMs, 
> it'll block that too. 
> 
> Currently, we have a default rule for IN|OUT host communication 
> 
> -A PVEFW-HOST-IN -s yournetwork/24 -p udp -m addrtype --dst-type MULTICAST -m udp --dport 5404:5405 -j RETURN 
> 
> to open multicast between the nodes. 
> 
> But indeed, currently, in the Proxmox firewall we can't define a global rule 
> in FORWARD. 
> 
> 
> 
> 
> @Dietmar: maybe we can add a default drop rule in -A PVEFW-FORWARD, to 
> drop multicast traffic coming from the host? 
> 
> Or maybe better, allow creating rules at the datacenter level, and put them 
> in -A PVEFW-FORWARD? 
> 
> 
> 
> - Mail original - 
> De: "datanom.net"  
> À: "pve-devel"  
> Envoyé: Dimanche 4 Janvier 2015 03:34:57 
> Objet: Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and 
> VMsturns off 
> 
> On Sat, 3 Jan 2015 21:32:54 -0300 
> "Cesar P

Re: [pve-devel] [PATCH] add numa options

2015-01-06 Thread Cesar Peschiera

Hi Alexandre

Please excuse me if I don't express myself properly; I meant the CPU pinning
that pve-manager and QEMU will have in the next release. That is, I would like
to have the option in the PVE GUI to enable or disable the CPU pinning that
QEMU can apply to each VM. That way I could choose whether QEMU or the
application inside the VM manages the CPU pinning across the NUMA nodes. And
the DBA says that MS-SQL Server will manage the CPU pinning better than QEMU,
and I would like to do some tests to confirm it.

Moreover, as I have 2 servers with identical hardware on which this single VM
runs, I would also like to keep the option of live migration.



>I'm interested in seeing results for both methods

With pleasure, I will report the results.

Moreover, regarding downloading the qemu-server deb from git: as this server
will very soon be in production, I would prefer to wait until the package is
in the "pve-no-subscription" repository before applying an upgrade. That way I
run less risk of downtime, unless you tell me you have already tested it and
it is very stable.


- Original Message - 
From: "Alexandre DERUMIER" 

To: "Cesar Peschiera" 
Cc: "dietmar" ; "pve-devel" 
Sent: Tuesday, January 06, 2015 5:02 AM
Subject: Re: [pve-devel] [PATCH] add numa options


Hi,


As i have running a VM with MS-SQL Server (and with 246 GB RAM exclusive
for
MS-SQL Server), the DBA of MS-SQL Server says that MS-SQL Server can
manage
his own numa-processes better than QEMU, and as i guess that also will
exist
many applications that will manage his own numa-processes better than
QEMU,
is that i would like to order that PVE GUI has a option of enable or
disable
the automatic administration of the numa-processes, also with the
possibility of do live migration.


I'm not sure to understand what do you mean by
"says that MS-SQL Server can manage his own numa-processes better than
QEMU,"


Numa are not process, it's an architecture to regroup cpus with memory
bank,for fast memory access.


They are 2 parts:

1)currently, qemu expose the virtual numa nodes to the guest.
(each numa node = X cores  with X memory)

This can be simply enabled with numa:1  with last patches,
(I'll create 1 numa node by virtual socket, and split the ram amount between
each node


or if you want to custom memory access, cores by nodes,or setup specific
virtual numa nodes to specific host numa nodes
you can do it with
numa0: ,
numa1:
"cpus=[[,hostnodes=][,policy=]]"


But this is always the application inside the guest which manage the memory
access.


2) Now with kernel 3.10, we have also auto numabalancing at the host side.
I'll try to map if possible the virtual numa nodes to host numa node.

you can disable this feature with "echo 0 > /proc/sys/kernel/numa_balancing"


So for my point of view, numa:1 + auto numa balancing should give you
already good results,
and it's allow live migration between different hosts numa architecture


Maybe with only 1vm, you can try to manually map virtual nodes to specific
nodes.

I'm interested to see results between both method (Maybe do you want last
qemu-server deb from git ?)



I plan to add gui for part1.




- Mail original -
De: "Cesar Peschiera" 
À: "aderumier" , "dietmar" 
Cc: "pve-devel" 
Envoyé: Mardi 6 Janvier 2015 06:35:15
Objet: Re: [pve-devel] [PATCH] add numa options

Hi Alexandre and developers team.

I would like to order a feature for the next release of pve-manager:

As i have running a VM with MS-SQL Server (and with 246 GB RAM exclusive for
MS-SQL Server), the DBA of MS-SQL Server says that MS-SQL Server can manage
his own numa-processes better than QEMU, and as i guess that also will exist
many applications that will manage his own numa-processes better than QEMU,
is that i would like to order that PVE GUI has a option of enable or disable
the automatic administration of the numa-processes, also with the
possibility of do live migration.

Moreover, if you can to add such feature, i will can to run a test with
MS-SQL Server for know which of the two options give me better results and
publish it (with the times of wait for each case)

@Alexandre:
Moreover, with your temporal patches for manage the numa-processes, in
MS-SQL Server i saw a difference of time between two to three times more
quick for get the results (that it is fantastic, a great difference), but as
i yet don't finish of do the tests (talking about of do some changes in the
Bios Hardware, HugePages managed for the Windows Server, etc), is that yet i
don't publish a resume very detailed of the tests. I guess that soon i will
do it (I depend on third parties, and the PVE host not must lose the cluster
communication).

And talking about of lose the cluster communication, from that i have "I/OAT
DMA engine" enabled in the Hardware Bios, the node never more lost the
cluster communication, but i must do some extensive testing to confirm it.

Best regards
Cesar

- Original Message 

Re: [pve-devel] [PATCH] add numa options

2015-01-06 Thread Alexandre DERUMIER
>>Please excuse me if I don't express myself properly; I meant the CPU pinning
>>that pve-manager and QEMU will have in the next release. That is, I would like
>>to have the option in the PVE GUI to enable or disable the CPU pinning that
>>QEMU can apply to each VM. That way I could choose whether QEMU or the
>>application inside the VM manages the CPU pinning across the NUMA nodes. And
>>the DBA says that MS-SQL Server will manage the CPU pinning better than QEMU,
>>and I would like to do some tests to confirm it.

Oh, OK.

So numa: 1 should do the trick; it creates NUMA nodes but doesn't pin CPUs.

(Note that I don't see how mssql could pin vCPUs to real host CPUs.)

The host kernel 3.10 auto-NUMA balancing does the auto-pinning, so you can try to disable it.



About qemu-server on pve-no-subscription, I don't know whether Dietmar plans to
release it before the next Proxmox release,
because big changes are coming to this package this week.



- Original Message -
From: "Cesar Peschiera" 
To: "aderumier" 
Cc: "dietmar" , "pve-devel" 
Sent: Tuesday, January 6, 2015 17:33:42
Subject: Re: [pve-devel] [PATCH] add numa options

Hi Alexandre 

Please excuse me if i don't talk with property, i meant the cpu pinning that 
will have pve-manager and QEMU in the next release. Ie, that i would like to 
have the option of enable or disable in PVE GUI the cpu pinning that QEMU 
can apply for each VM, if so, i will can to choose if i want that QEMU or 
the application inside of the VM managed the cpu pinning with the numa 
nodes. And the DBA says that the MS-SQL Server will manage better the cpu 
pinning that QEMU, and i would like to do some tests for confirm it. 

Moreover, as i have 2 servers identical in Hardware, where is running this 
unique VM, i would like also to have the option of live migration enabled. 

>I'm interested to see results between both method 
With pleasure i will report the results 

Moreover, talking about of the download of qemu-server deb from git, as very 
soon this server will be in production, i would like to wait that this 
package is in the "pve-no-subscription" repository for apply a upgrade, that 
being well, I will run less risks of down times, unless you tell me you have 
already tested and is very stable. 


- Original Message - 
From: "Alexandre DERUMIER"  
To: "Cesar Peschiera"  
Cc: "dietmar" ; "pve-devel"  
Sent: Tuesday, January 06, 2015 5:02 AM 
Subject: Re: [pve-devel] [PATCH] add numa options 


Hi, 

>>As i have running a VM with MS-SQL Server (and with 246 GB RAM exclusive 
>>for 
>>MS-SQL Server), the DBA of MS-SQL Server says that MS-SQL Server can 
>>manage 
>>his own numa-processes better than QEMU, and as i guess that also will 
>>exist 
>>many applications that will manage his own numa-processes better than 
>>QEMU, 
>>is that i would like to order that PVE GUI has a option of enable or 
>>disable 
>>the automatic administration of the numa-processes, also with the 
>>possibility of do live migration. 

I'm not sure to understand what do you mean by 
"says that MS-SQL Server can manage his own numa-processes better than 
QEMU," 


Numa are not process, it's an architecture to regroup cpus with memory 
bank,for fast memory access. 


They are 2 parts: 

1)currently, qemu expose the virtual numa nodes to the guest. 
(each numa node = X cores with X memory) 

This can be simply enabled with numa:1 with last patches, 
(I'll create 1 numa node by virtual socket, and split the ram amount between 
each node 


or if you want to custom memory access, cores by nodes,or setup specific 
virtual numa nodes to specific host numa nodes 
you can do it with 
numa0: , 
numa1: 
"cpus=[[,hostnodes=][,policy=]]"
 


But this is always the application inside the guest which manage the memory 
access. 


2) Now with kernel 3.10, we have also auto numabalancing at the host side. 
I'll try to map if possible the virtual numa nodes to host numa node. 

you can disable this feature with "echo 0 > /proc/sys/kernel/numa_balancing" 


So for my point of view, numa:1 + auto numa balancing should give you 
already good results, 
and it's allow live migration between different hosts numa architecture 


Maybe with only 1vm, you can try to manually map virtual nodes to specific 
nodes. 

I'm interested to see results between both method (Maybe do you want last 
qemu-server deb from git ?) 



I plan to add gui for part1. 




- Mail original - 
De: "Cesar Peschiera"  
À: "aderumier" , "dietmar"  
Cc: "pve-devel"  
Envoyé: Mardi 6 Janvier 2015 06:35:15 
Objet: Re: [pve-devel] [PATCH] add numa options 

Hi Alexandre and developers team. 

I would like to order a feature for the next release of pve-manager: 

As i have running a VM with MS-SQL Server (and with 246 GB RAM exclusive for 
MS-SQL Server), the DBA of MS-SQL Server says that MS-SQL Server can manage 
his own numa-processes better than QEMU, and as i guess that also will exist 
many applications that will manage his own numa-processes better than QEMU, 

Re: [pve-devel] [PATCH] add numa options

2015-01-06 Thread Cesar Peschiera

>(Note that I don't see how mssql could pin vCPUs to real host CPUs.)

With mssql running inside the VM, the DBA showed me that mssql can see the NUMA
nodes, and mssql has its own way of distributing its own processes across the
NUMA nodes to get better performance. That is why I think it would be better if
the PVE GUI had the option to enable or disable the CPU pinning for each VM;
obviously I would like to do some tests to compare which of the two options is
better.


>host kernel 3.10 autonuma is doing autopinning, so you can try to disable it.

If auto-NUMA isn't customizable per VM, I guess it would be better to leave it
as is, but I am not sure, because then we would have two systems doing the auto
balancing: the 3.10 kernel and mssql inside the VM ???

Maybe it would be better to do one more test, disabling auto-NUMA in the 3.10 kernel.
Question:
How can I disable auto-NUMA in the /etc/default/grub file?

Note:
The tests that I did in the past were with and without your patches, always
with the 3.10 kernel (without changing its configuration), and with your
patches the performance was far better (two to three times faster in several
tests, never less than two, speaking in database terms).


- Original Message - 
From: "Alexandre DERUMIER" 

To: "Cesar Peschiera" 
Cc: "dietmar" ; "pve-devel" 
Sent: Tuesday, January 06, 2015 2:31 PM
Subject: Re: [pve-devel] [PATCH] add numa options



ase excuse me if i don't talk with property, i meant the cpu pinning that
will have pve-manager and QEMU in the next release. Ie, that i would like 
to

have the option of enable or disable in PVE GUI the cpu pinning that QEMU
can apply for each VM, if so, i will can to choose if i want that QEMU or
the application inside of the VM managed the cpu pinning with the numa
nodes. And the DBA says that the MS-SQL Server will manage better the cpu
pinning that QEMU, and i would like to do some tests for confirm it.


Oh,ok.

so numa:1 should do the trick, it's create numa nodes but don't pin cpu.

(Note that I don't see how mssql can pin vcpus on real host cpu).

host kernel 3.10 autonuma is doing autopinning,so you can try to disable it.



About qemu-server pve-no-subscription, I don't known if Dietmar plan to 
release it until next proxmox release.

Because big changes are coming in this package this week.



- Mail original -
De: "Cesar Peschiera" 
À: "aderumier" 
Cc: "dietmar" , "pve-devel" 
Envoyé: Mardi 6 Janvier 2015 17:33:42
Objet: Re: [pve-devel] [PATCH] add numa options

Hi Alexandre

Please excuse me if i don't talk with property, i meant the cpu pinning that
will have pve-manager and QEMU in the next release. Ie, that i would like to
have the option of enable or disable in PVE GUI the cpu pinning that QEMU
can apply for each VM, if so, i will can to choose if i want that QEMU or
the application inside of the VM managed the cpu pinning with the numa
nodes. And the DBA says that the MS-SQL Server will manage better the cpu
pinning that QEMU, and i would like to do some tests for confirm it.

Moreover, as i have 2 servers identical in Hardware, where is running this
unique VM, i would like also to have the option of live migration enabled.


I'm interested to see results between both method

With pleasure i will report the results

Moreover, talking about of the download of qemu-server deb from git, as very
soon this server will be in production, i would like to wait that this
package is in the "pve-no-subscription" repository for apply a upgrade, that
being well, I will run less risks of down times, unless you tell me you have
already tested and is very stable.


- Original Message - 
From: "Alexandre DERUMIER" 

To: "Cesar Peschiera" 
Cc: "dietmar" ; "pve-devel" 
Sent: Tuesday, January 06, 2015 5:02 AM
Subject: Re: [pve-devel] [PATCH] add numa options


Hi,


As i have running a VM with MS-SQL Server (and with 246 GB RAM exclusive
for
MS-SQL Server), the DBA of MS-SQL Server says that MS-SQL Server can
manage
his own numa-processes better than QEMU, and as i guess that also will
exist
many applications that will manage his own numa-processes better than
QEMU,
is that i would like to order that PVE GUI has a option of enable or
disable
the automatic administration of the numa-processes, also with the
possibility of do live migration.


I'm not sure to understand what do you mean by
"says that MS-SQL Server can manage his own numa-processes better than
QEMU,"


Numa are not process, it's an architecture to regroup cpus with memory
bank,for fast memory access.


They are 2 parts:

1)currently, qemu expose the virtual numa nodes to the guest.
(each numa node = X cores with X memory)

This can be simply enabled with numa:1 with last patches,
(I'll create 1 numa node by virtual socket, and split the ram amount between
each node


or if you want to custom memory access, cores by nodes,or setup specific
virtual numa nodes to specific host numa nodes
you can do it with
numa0: ,
numa1:
"cpus=[[,

Re: [pve-devel] [PATCH] add numa options

2015-01-06 Thread Alexandre DERUMIER
>>With mssql running inside the VM, the DBA showed me that mssql can see the NUMA
>>nodes, and mssql has its own way of distributing its own processes across the
>>NUMA nodes to get better performance. That is why I think it would be better if
>>the PVE GUI had the option to enable or disable the CPU pinning for each VM;
>>obviously I would like to do some tests to compare which of the two options is
>>better.

The mssql process will see the virtual NUMA nodes of the VM, so indeed it will 
manage virtual memory and virtual CPUs.

But that doesn't mean that, physically, the virtual CPUs|NUMA nodes will be 
mapped to the correct physical CPUs.

For this, you need:

- manual pinning
or
- auto NUMA balancing


Maybe read this presentation, page 45:
http://www.linux-kvm.org/wiki/images/7/75/01x07b-NumaAutobalancing.pdf
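For what it's worth, one way to check on the host where a VM's memory actually
ended up is the numastat tool from the numactl package (assuming it is
installed on the host; <vmid> is a placeholder):

numastat -c kvm                                      # per-node memory of all kvm processes
numastat -p $(cat /var/run/qemu-server/<vmid>.pid)   # one specific VM by PID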


>>Maybe it would be better to do one more test, disabling auto-NUMA in the 3.10 kernel. 
>>Question: 
>>How can I disable auto-NUMA in the /etc/default/grub file? 

I don't know; maybe just put a simple "echo 0 >..." in rc.local.

(But I'm pretty sure you'll get lower performance without pinning and without 
NUMA balancing.)
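A minimal sketch of that rc.local approach, reusing the /proc path from earlier
in the thread (whether the kernel also accepts a boot parameter for this in
/etc/default/grub depends on the kernel build, so I leave that question open):

# in /etc/rc.local, before the final "exit 0":
if [ -w /proc/sys/kernel/numa_balancing ]; then
    echo 0 > /proc/sys/kernel/numa_balancing
fi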



- Original Message -
From: "Cesar Peschiera" 
To: "aderumier" 
Cc: "dietmar" , "pve-devel" 
Sent: Tuesday, January 6, 2015 20:52:50
Subject: Re: [pve-devel] [PATCH] add numa options

>(Note that I don't see how mssql can pin vcpus on real host cpu). 
Being mssql into the VM, the DBA showed me as mssql can see the numa nodes, 
and mssql has his own form of manage his own processes between the numa 
nodes for get a better performance. It is for it that i think will be better 
that in the PVE GUI we have the option of enable or disable the cpu pinning 
for each VM, and obviously i would like to do some tests for compare which 
of the two options is better. 

>host kernel 3.10 autonuma is doing autopinning, so you can try to disable 
>it. 
If the autonuma isn't customizable for each VM, i guess that will be 
better leave it as is, but i am not sure, due that we will have two systems 
doing the auto balance: the 3.10 Kernel and the mssql into the VM ??? 

Maybe will be better do a more test, disabling autonuma in the 3.10 kernel. 
Question: 
How can i disable autonuma in the /etc/default/grub file? 

Note: 
The test that i did in the past, was with and without your patches, always 
with the 3.10 kernel (without do changes on his configuration), and with 
your patches, the performance was very top (two a three times more quick in 
several tests, never minus of two, talking in terms of data base). 


- Original Message - 
From: "Alexandre DERUMIER"  
To: "Cesar Peschiera"  
Cc: "dietmar" ; "pve-devel"  
Sent: Tuesday, January 06, 2015 2:31 PM 
Subject: Re: [pve-devel] [PATCH] add numa options 


>>ase excuse me if i don't talk with property, i meant the cpu pinning that 
>>will have pve-manager and QEMU in the next release. Ie, that i would like 
>>to 
>>have the option of enable or disable in PVE GUI the cpu pinning that QEMU 
>>can apply for each VM, if so, i will can to choose if i want that QEMU or 
>>the application inside of the VM managed the cpu pinning with the numa 
>>nodes. And the DBA says that the MS-SQL Server will manage better the cpu 
>>pinning that QEMU, and i would like to do some tests for confirm it. 

Oh,ok. 

so numa:1 should do the trick, it's create numa nodes but don't pin cpu. 

(Note that I don't see how mssql can pin vcpus on real host cpu). 

host kernel 3.10 autonuma is doing autopinning,so you can try to disable it. 



About qemu-server pve-no-subscription, I don't known if Dietmar plan to 
release it until next proxmox release. 
Because big changes are coming in this package this week. 



- Mail original - 
De: "Cesar Peschiera"  
À: "aderumier"  
Cc: "dietmar" , "pve-devel"  
Envoyé: Mardi 6 Janvier 2015 17:33:42 
Objet: Re: [pve-devel] [PATCH] add numa options 

Hi Alexandre 

Please excuse me if i don't talk with property, i meant the cpu pinning that 
will have pve-manager and QEMU in the next release. Ie, that i would like to 
have the option of enable or disable in PVE GUI the cpu pinning that QEMU 
can apply for each VM, if so, i will can to choose if i want that QEMU or 
the application inside of the VM managed the cpu pinning with the numa 
nodes. And the DBA says that the MS-SQL Server will manage better the cpu 
pinning that QEMU, and i would like to do some tests for confirm it. 

Moreover, as i have 2 servers identical in Hardware, where is running this 
unique VM, i would like also to have the option of live migration enabled. 

>I'm interested to see results between both method 
With pleasure i will report the results 

Moreover, talking about of the download of qemu-server deb from git, as very 
soon this server will be in production, i would like to wait that this 
package is in the "pve-no-subscription" repository for apply a upgrade, that 
being well, I will run less risks of down times, unless you tell me you have 
already tested an

Re: [pve-devel] [PATCH] add numa options

2015-01-06 Thread Cesar Peschiera

>But that doesn't mean that, physically, the virtual CPUs|NUMA nodes will be
>mapped to the correct physical CPUs.

Many thanks for your reply, and please let me ask a question:
Then why do I see a great difference in performance between before and after
installing your patches, if I never touched the kernel or its configuration?
(The packages installed are pve-qemu-kvm_2.2-2_amd64.deb and
qemu-server_3.3-5_amd64.deb.)


Best regards
Cesar



[pve-devel] MySQL-Installation in Container / Debian-Wheezy - Problem openVZ

2015-01-06 Thread Detlef Bracker
Dear all,

There is a problem installing mysql-server 5.5 on Debian wheezy
in a Proxmox container, the same as in OpenVZ.

Only the following helps with the problem, but why?

This site gives a workaround:
http://unix.stackexchange.com/questions/152146/problems-installing-mysql-on-debian

>>The problem you are having is that the default configuration for newer
>>versions of MySQL does not work inside of an OpenVZ container. After the
>>failure you can try adding |innodb_use_native_aio = 0| to |/etc/mysql/my.cnf|
>>and then run |dpkg --configure -a|.
<<
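For illustration, a sketch of that workaround inside the container (standard
Debian wheezy paths; InnoDB options belong in the [mysqld] section):

# /etc/mysql/my.cnf
[mysqld]
innodb_use_native_aio = 0

# then finish the interrupted installation:
dpkg --configure -a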

I searched for many hours for this solution. Before that, I got errors every
time during the installation, and I also found many admins on the internet with
the same problem, but I found this fix on only one site!

I have this problem only on Debian wheezy 7.6 on amd64 - never before with
i386!

Regards

Detlef




Re: [pve-devel] MySQL-Installation in Container / Debian-Wheezy - Problem openVZ

2015-01-06 Thread Michael Rasmussen
On Wed, 07 Jan 2015 01:26:38 +0100
Detlef Bracker  wrote:

> 
> >>The problem you are having is that the default configuration for newer 
> >>versions of MySQL do not work
> inside of an OpenVZ container. After the failure you can try
> adding |innodb_use_native_aio = 0| to |/etc/mysql/my.cnf| and then
> run |dpkg --configure -a|.
> <<
> 
There are in fact two solutions. See
http://forum.openvz.org/index.php?t=msg&th=10181&goto=47380&S=4814f1b69ddeba17f6296ab9f349c7f5#msg_47380

For solution 2 see:
http://www.lowendtalk.com/discussion/40213/openvz-fs-aio-max-nr

Since users can change the settings in my.cnf themselves, I think that is not
the proper solution. Solution 2 is safe since the setting is transparent
to users.
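For illustration, a sketch of what solution 2 could look like on the host node,
assuming (from the URL above) that it means raising the fs.aio-max-nr limit so
InnoDB's native AIO works inside the containers (the value is only an example):

sysctl -w fs.aio-max-nr=1048576
echo "fs.aio-max-nr = 1048576" >> /etc/sysctl.conf   # make it persistent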

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael  rasmussen  cc
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
mir  datanom  net
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
mir  miras  org
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
--
/usr/games/fortune -es says:
Work is the curse of the drinking classes.
-- Mike Romanoff




Re: [pve-devel] [PATCH 2/7] add optionnal current param to config api

2015-01-06 Thread Dietmar Maurer

On 01/02/2015 03:15 PM, Alexandre Derumier wrote:

@@ -685,11 +702,6 @@ __PACKAGE__->register_method({
type => 'string',
optional => 1,
},
-   pending => {
-   description => "Pending value.",
-   type => 'string',
-   optional => 1,
-   },
delete => {
description => "Indicated a pending delete request.",
type => 'boolean',
@@ -1156,6 +1168,10 @@ __PACKAGE__->register_method({
maxLength => 40,
optional => 1,
},
+   pending => {
+   optional => 1,
+   type => 'boolean',
+   }
}),
  },
  returns => { type => 'null' },


I don't really understand that change?



Re: [pve-devel] [PATCH 2/7] add optionnal current param to config api

2015-01-06 Thread Dietmar Maurer

On 01/02/2015 03:15 PM, Alexandre Derumier wrote:

+
+   if(!$param->{current} || (defined($param->{current}) && 
$param->{current} == 0)) {
+   foreach my $opt (keys $conf->{pending}) {
+   foreach my $opt 
(PVE::Tools::split_list($conf->{pending}->{delete})) {
+   delete $conf->{$opt} if $conf->{$opt};
+   }
+   next if ref($conf->{pending}->{$opt}); # just to be sure
+   $conf->{$opt} = $conf->{pending}->{$opt};
+   }
+   }
+


This also looks a bit strange (loop inside loop?). I changed it to:

if (!$param->{current}) {
    foreach my $opt (keys $conf->{pending}) {
        next if $opt eq 'delete';
        my $value = $conf->{pending}->{$opt};
        next if ref($value); # just to be sure
        $conf->{$opt} = $value;
    }
    foreach my $opt (PVE::Tools::split_list($conf->{pending}->{delete})) {
        delete $conf->{$opt} if $conf->{$opt};
    }
}




[pve-devel] [PATCH] Fix: root can now be disabled in GUI.

2015-01-06 Thread Wolfgang Link

Signed-off-by: Wolfgang Link 
---
 PVE/AccessControl.pm |7 ---
 1 file changed, 7 deletions(-)

diff --git a/PVE/AccessControl.pm b/PVE/AccessControl.pm
index db85d08..29dba39 100644
--- a/PVE/AccessControl.pm
+++ b/PVE/AccessControl.pm
@@ -356,8 +356,6 @@ sub check_user_enabled {
 
 return 1 if $data->{enable};
 
-return 1 if $username eq 'root@pam'; # root is always enabled
-
 die "user '$username' is disabled\n" if !$noerr;
  
 return undef;
@@ -694,11 +692,6 @@ sub userconfig_force_defaults {
 foreach my $r (keys %$special_roles) {
$cfg->{roles}->{$r} = $special_roles->{$r};
 }
-
-# fixme: remove 'root' group (not required)?
-
-# add root user 
-$cfg->{users}->{'root@pam'}->{enable} = 1;
 }
 
 sub parse_user_config {
-- 
1.7.10.4




Re: [pve-devel] [PATCH 2/7] add optionnal current param to config api

2015-01-06 Thread Alexandre DERUMIER
Oh yes, sorry, it does look better with your code.


(I'm very busy today - it's the first day of the sales in France, so there is a
lot of activity on the ecommerce websites.)

- Original Message -
From: "dietmar" 
To: "pve-devel" , "aderumier" 
Sent: Wednesday, January 7, 2015 08:35:00
Subject: Re: [pve-devel] [PATCH 2/7] add optionnal current param to config api

On 01/02/2015 03:15 PM, Alexandre Derumier wrote: 
> + 
> + if(!$param->{current} || (defined($param->{current}) && $param->{current} 
> == 0)) { 
> + foreach my $opt (keys $conf->{pending}) { 
> + foreach my $opt (PVE::Tools::split_list($conf->{pending}->{delete})) { 
> + delete $conf->{$opt} if $conf->{$opt}; 
> + } 
> + next if ref($conf->{pending}->{$opt}); # just to be sure 
> + $conf->{$opt} = $conf->{pending}->{$opt}; 
> + } 
> + } 
> + 

This looks also a bit strange (loop inside loop?).I changed this to: 

if (!$param->{current}) { 
foreach my $opt (keys $conf->{pending}) { 
next if $opt eq 'delete'; 
my $value = $conf->{pending}->{$opt}; 
next if ref($value); # just to be sure 
$conf->{$opt} = $value; 
} 
foreach my $opt 
(PVE::Tools::split_list($conf->{pending}->{delete})) { 
delete $conf->{$opt} if $conf->{$opt}; 
} 
} 
