Hi Dmitry
After reading about Ceph RBD, my impressions were extremely good, even
better than CephFS for ephemeral storage. Are you using the qcow2 or raw
image type? I prefer qcow2, but in that case we cannot enable the write
cache in the cluster, which reduces performance a bit. I should test the
Ceph RBD p
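For reference, ephemeral RBD backing is configured through nova's libvirt driver; a minimal sketch of the relevant nova.conf options (the pool name and cache mode here are assumptions, not from this thread, and option availability should be double-checked against Icehouse):

```ini
[libvirt]
# Back instance disks with Ceph RBD (raw format end to end).
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
# Enable the RBD client write cache for network-backed disks.
disk_cachemodes = "network=writeback"
```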
My instances are using much more memory than expected. The amount of free
memory (free + cached) is under 3G on my servers, even though the compute
nodes are configured to reserve 32G.
Here's my setup:
Release: Icehouse
Server mem: 256G
Qemu version: 2.0.0+dfsg-2ubuntu1.1
Networking: Contrail 1.20
B
Not totally sure I am following - the output of free would help a lot.
However, the number you should care about is free + buffers/cache. The
reason for your discrepancy is that you are including the file system
content that Linux caches in memory to improve performance. On boxes wi
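To put a number on that, the "really free" figure can be pulled straight from /proc/meminfo (a sketch; this is essentially how free(1) computes its -/+ buffers/cache row):

```shell
# MemFree + Buffers + Cached, in kB: memory the kernel can hand back
# to applications without swapping. The ^-anchors keep lines like
# SwapCached from being counted.
awk '/^MemFree|^Buffers|^Cached/ {sum += $2} END {print sum " kB"}' /proc/meminfo
```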
In addition to what Kris said, here are two other ways to see memory usage
of qemu processes:
The first is with "nova diagnostics <instance>". By default this is an
admin-only command.
The second is by running "virsh dommemstat <domain>" directly on the
compute node.
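A third cross-check from the host side is to sum the resident set size of the qemu processes themselves (a sketch; the process name varies by distro, and qemu-system-x86_64 here is an assumption for Ubuntu):

```shell
# Total RSS of all qemu processes on this compute node, in kB.
# Prints "0 kB" if no qemu processes are running.
ps -C qemu-system-x86_64 -o rss= | awk '{sum += $1} END {print sum + 0 " kB"}'
```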
Note that it's possible for the used memory (r
Kris,
Sorry for the confusion: when I refer to "free mem", it's the free
column of the -/+ buffers/cache row, e.g.:
root@node-2:/etc/libvirt/qemu# free -g
             total       used       free     shared    buffers     cached
Mem:           251        250          1          0          0          1
Deepti, sorry for replying off-list before, that was an accident. I have
some new info though:
I ran some numbers on this today just from a general benchmark POV. We had
900 revocation events in our system as the result of some automated
testing. I found that this reduced token validation performa
One more reminder that Mike Dorman and I will be talking about this with the
devs at the Neutron mid-cycle. If you have a use case for Network Segmentation
that is not covered and/or you have a different Ideal Situation please update
the etherpad [1]. I would like to make sure that your use ca