Re: [pve-devel] memory consumption

2015-08-19 Thread Alexandre DERUMIER
>>So customized client settings (for cache etc.) will never work on other host 
>>servers which have no mons or OSDs installed. 

If you want to customize client settings, you can still add an /etc/ceph/ceph.conf
and only add a [client] section to it
(or add a symlink from /etc/ceph/ceph.conf to /etc/pve/ceph.conf).

When qemu starts, it simply reads the [client] section of /etc/ceph/ceph.conf.
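
For illustration, a minimal client-only config on a node without mons or OSDs could
look roughly like this (monitor addresses and cache values are placeholders):

    # /etc/ceph/ceph.conf on a pure RBD-client node
    [global]
    mon host = 10.0.0.1,10.0.0.2,10.0.0.3

    [client]
    rbd cache = true
    rbd cache size = 33554432   # 32 MiB per image

    # or simply reuse the cluster-wide config managed by PVE:
    # ln -s /etc/pve/ceph.conf /etc/ceph/ceph.conf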


- Original Message -
From: "VELARTIS Philipp Dürhammer" 
To: "dietmar", "aderumier" 
Cc: "pve-devel" 
Sent: Wednesday 19 August 2015 10:33:45
Subject: Re: [pve-devel] memory consumption



Re: [pve-devel] memory consumption

2015-08-19 Thread VELARTIS Philipp Dürhammer
So customized client settings (for cache etc.) will never work on other host 
servers which have no mons or OSDs installed, unless you know that you have to 
symlink that file. I didn't know that, and it took me a while to find out why my 
VMs were behaving differently on machines with Ceph installed versus machines 
without Ceph that just access the Ceph machines for RBD storage. 
I think it is a normal use case to have some Ceph machines for storage (and 
maybe some VMs) and also some other compute nodes with lots of VMs. And I 
would expect that RBD storage has the same settings and behaves the same on 
all hosts. 

-Original Message- 
From: Dietmar Maurer [mailto:diet...@proxmox.com] 
Sent: Wednesday, 19 August 2015 06:27 
To: 'Alexandre DERUMIER'; VELARTIS Philipp Dürhammer 
Cc: pve-devel 
Subject: Re: [pve-devel] memory consumption 




Re: [pve-devel] memory consumption

2015-08-18 Thread Dietmar Maurer

> Proxmox bare metal now installs the ceph client by default but does not symlink
> /etc/ceph/ceph.conf to /etc/pve/ceph.conf
> So on all rbd client nodes qemu runs with the standard ceph conf because it is
> not able to find the file

What's wrong with the standard ceph conf? The file /etc/pve/ceph.conf is only used if
you run the ceph server on PVE nodes.



Re: [pve-devel] memory consumption

2015-08-18 Thread VELARTIS Philipp Dürhammer
Ok, thank you. 
The strange behaviour with the same result despite different cache settings 
confused me even more...

But one thing that I think still needs improvement:

Proxmox bare metal now installs the Ceph client by default but does not symlink 
/etc/ceph/ceph.conf to /etc/pve/ceph.conf. 
So on all RBD client nodes qemu runs with the standard Ceph conf because it is not 
able to find the file.

-Original Message-
From: Alexandre DERUMIER [mailto:aderum...@odiso.com] 
Sent: Tuesday, 18 August 2015 20:28
To: VELARTIS Philipp Dürhammer
Cc: Eneko Lacunza; pve-devel
Subject: Re: [pve-devel] memory consumption


Re: [pve-devel] memory consumption

2015-08-18 Thread Alexandre DERUMIER
>>To be more precise, some values like network IPs etc. are still shown and seem 
>>to work, but not the cache settings and also not the backfill settings

The cache setting is a client-side setting; the monitors don't know anything about it.
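
As a side note, one way to see what a running RBD client actually uses is the librbd
admin socket, assuming an admin socket path is enabled in the [client] section (the
socket file name below is a placeholder that depends on the client name and process):

    [client]
    admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok

    # then, while the VM is running:
    ceph --admin-daemon /var/run/ceph/ceph-client.admin.<pid>.<cctid>.asok config show | grep rbd_cache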





- Original Message -
From: "VELARTIS Philipp Dürhammer" 
To: "aderumier" 
Cc: "Eneko Lacunza", "pve-devel" 
Sent: Tuesday 18 August 2015 20:20:30
Subject: Re: [pve-devel] memory consumption


Re: [pve-devel] memory consumption

2015-08-18 Thread VELARTIS Philipp Dürhammer
OK,

I was testing around a bit and my conclusion is: 
because the cache size was set too big, the VMs used a lot more memory than 
expected. 
1) Because of the bug described by Alexandre, changing the cache setting didn't 
change anything. 
2) My ceph.conf values do not show up in "ceph -n mon.2 --show-config". 
3) But they still seem to take effect on the Ceph OSD nodes. 
4) But not on Ceph client nodes! 

So on a Ceph OSD node the RBD cache size changes seem to take effect, but 
not on other nodes where I just use the Ceph client to mount the RBD. 

So there are two things that I don't understand: 
>>> Why is "ceph -n mon.2 --show-config" not showing the ceph.conf values, 
>>> but the file still takes effect (and only!) on nodes with Ceph OSDs and mons, 
>>> not on client nodes? 

To be more precise, some values like network IPs etc. are still shown and seem 
to work, but not the cache settings and also not the backfill settings.
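
A rough back-of-the-envelope check, with an assumed disk count: librbd keeps one
cache per opened RBD image, so with rbd cache size = 268435456 (256 MiB) a VM with,
say, 8 RBD disks could pin roughly 8 x 256 MiB = 2 GiB of extra RSS in the kvm
process on top of the guest RAM, which is the same order of magnitude as the
overshoot described in this thread.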


-Original Message- 
From: Alexandre DERUMIER [mailto:aderum...@odiso.com] 
Sent: Tuesday, 18 August 2015 20:08 
To: VELARTIS Philipp Dürhammer 
Cc: Eneko Lacunza; pve-devel 
Subject: Re: [pve-devel] memory consumption 



Re: [pve-devel] memory consumption

2015-08-18 Thread Alexandre DERUMIER
>>if cache=none is set, this should override the ceph config (rbd cache = 
>>true => false), 
>>but still this machine is consuming 20G instead of 16G

Be careful, there is a bug in the current qemu librbd: setting 
"rbd_cache=true|false" in ceph.conf always overrides the qemu cache setting.

This is fixed in qemu 2.4 (for Proxmox 4).

But for the moment, the best way is to remove "rbd_cache" from ceph.conf.
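
A sketch of what that could look like (values are illustrative): keep a [client]
section without any "rbd cache" line, so that the per-disk cache= setting from the
VM config decides whether librbd caching is used.

    [client]
    # no "rbd cache = ..." entry here; qemu's cache= option decides
    rbd cache size = 33554432
    rbd cache max dirty = 25165824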


- Original Message - 
From: "VELARTIS Philipp Dürhammer" 
To: "VELARTIS Philipp Dürhammer", "Eneko Lacunza", "pve-devel" 
Sent: Tuesday 18 August 2015 16:34:53 
Subject: Re: [pve-devel] memory consumption 





Re: [pve-devel] memory consumption

2015-08-18 Thread VELARTIS Philipp Dürhammer
Hi,

debugging and reading more:
if cache=none is set, this should override the ceph config (rbd cache = true => false),
but still this machine is consuming 20G instead of 16G
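
For reference, the per-disk cache mode is the one set on the disk line in the VM
config; the storage name, VM id and size below are just placeholders:

    # /etc/pve/qemu-server/101.conf (illustrative)
    virtio0: ceph-rbd:vm-101-disk-1,cache=none,size=100G

As Alexandre notes above, with qemu before 2.4 an explicit rbd_cache entry in
ceph.conf still wins over this cache= setting.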


From: pve-devel [mailto:pve-devel-boun...@pve.proxmox.com] On behalf of 
VELARTIS Philipp Dürhammer
Sent: Tuesday, 18 August 2015 16:01
To: 'Eneko Lacunza'; pve-devel@pve.proxmox.com
Subject: Re: [pve-devel] memory consumption



Re: [pve-devel] memory consumption

2015-08-18 Thread VELARTIS Philipp Dürhammer
Hi Eneko!

I use Proxmox Ceph Server.
I found out that my config for the rbd cache size was too big, which could explain 
the problem.
But when I run:
ceph -n mon.2 --show-config
it shows me Ceph's default cache size, and also that rbd cache = false. Very 
strange.
Is this not the way to get the running conf?

My client section in the /etc/pve/ceph.conf looks like this:
[client]
rbd cache = true
rbd cache size = 268435456
rbd cache max dirty = 33554432
rbd cache max dirty age = 3

but it seems to be ignored?
I am using cache=writeback and cache=none, with the same results.
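
One thing worth noting here, as an aside: "ceph -n mon.2 --show-config" asks for the
config as the mon.2 entity, so options placed only in a [client] section are not
applied to it. Querying as a client entity is probably closer to what the VMs see,
assuming the node can find the config file at /etc/ceph/ceph.conf at all:

    ceph -n client.admin --show-config | grep rbd_cache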

From: pve-devel [mailto:pve-devel-boun...@pve.proxmox.com] On behalf of Eneko 
Lacunza
Sent: Tuesday, 18 August 2015 14:36
To: pve-devel@pve.proxmox.com
Subject: Re: [pve-devel] memory consumption



Re: [pve-devel] memory consumption

2015-08-18 Thread Alexandre DERUMIER
Hi,

>>) it only happens on machines with ceph osd installed
>>) which also are using rbd client for virtual hdds

I have never seen this; maybe it could be a tcmalloc bug not releasing memory.

Is it the kvm process which uses a lot more RAM?
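
A generic way to check that, with standard Linux tools (nothing PVE-specific
assumed; the VM processes are simply named kvm here):

    # resident set size (KiB) of every kvm process, largest first
    ps -C kvm -o pid,rss,args --sort=-rss | head

    # or for a single VM, using its kvm pid
    grep VmRSS /proc/<kvm pid>/status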


- Original Message -
From: "VELARTIS Philipp Dürhammer" 
To: "pve-devel" 
Sent: Tuesday 18 August 2015 14:22:25
Subject: [pve-devel] memory consumption





Re: [pve-devel] memory consumption

2015-08-18 Thread Eneko Lacunza

Hi Philipp,

What Ceph packages?

We have 3 small clusters using Ceph/RBD and I'm not seeing this. 
The biggest VM is 6GB of RAM though, the biggest disk is 200GB, and one VM has a 
total of 425GB in 5 disks.


Are you using Proxmox Ceph Server?
What cache setting is in the VM hard disk config?

Cheers
Eneko

On 18/08/15 at 14:22, VELARTIS Philipp Dürhammer wrote:


Hi,

some of my virtual machines on different servers are consuming a lot 
more RAM than they should!


(up to twice) which is a horror if the machines have about 15-20G but 
consume 30-40GB


I was checking:

) all machines have the same pve packages installed

) it only happens on machines with ceph osd installed

) which also are using rbd client for virtual hdds

It does not happen if they are using the NFS backend for virtual HDDs, or on 
other machines which are just using the RBD client but don't have OSDs…


Has somebody experienced something like this also?

Any ideas how to debug further?

Machines config:

proxmox-ve-2.6.32: 3.3-147 (running kernel: 3.10.0-7-pve)

pve-manager: 3.4-9 (running version: 3.4-9/4b51d87a)

pve-kernel-3.10.0-7-pve: 3.10.0-27

pve-kernel-2.6.32-32-pve: 2.6.32-136

pve-kernel-2.6.32-30-pve: 2.6.32-130

pve-kernel-2.6.32-37-pve: 2.6.32-150

pve-kernel-2.6.32-29-pve: 2.6.32-126

pve-kernel-3.10.0-3-pve: 3.10.0-11

lvm2: 2.02.98-pve4

clvm: 2.02.98-pve4

corosync-pve: 1.4.7-1

openais-pve: 1.1.4-3

libqb0: 0.11.1-2

redhat-cluster-pve: 3.2.0-2

resource-agents-pve: 3.9.2-4

fence-agents-pve: 4.0.10-3

pve-cluster: 3.0-18

qemu-server: 3.4-6

pve-firmware: 1.1-4

libpve-common-perl: 3.0-24

libpve-access-control: 3.0-16

libpve-storage-perl: 3.0-33

pve-libspice-server1: 0.12.4-3

vncterm: 1.1-8

vzctl: 4.0-1pve6

vzprocps: 2.0.11-2

vzquota: 3.1-2

pve-qemu-kvm: 2.2-11

ksm-control-daemon: 1.1-1

glusterfs-client: 3.5.2-1






--
Zuzendari Teknikoa / Director Técnico
Binovo IT Human Project, S.L.
Telf. 943575997
  943493611
Astigarraga bidea 2, planta 6 dcha., ofi. 3-2; 20180 Oiartzun (Gipuzkoa)
www.binovo.es

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel