On 11/29/2010 11:13 AM, Sirtaj Singh Kang wrote:
I am configuring a Xen host with 4-5 guest VMs and have a question about
allocation of memory for buffer and cache. As I understand it, all guest
disk IO is routed through dom0 first. Does this mean that dom0
buffer/cache is used for guest I/O as well, and guest buffer/cache is
redundant?

It's been a while since I looked at blockdev stuff in Xen, but I seem to remember that host-side buffering depends on what sort of storage setup you have in place. If it's a generic file-as-disk exported from dom0 -> domU, there is some level of caching that the host can contribute. However, if it's a blockdev -> domU (e.g. a logical volume or a physical disk), you won't get any filesystem-level caching on the host, though there might still be an opportunity to run with seriously high device buffers (if you so desire and trust your setup!).
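To make the distinction concrete, here's a rough sketch of the two kinds of disk lines in a classic xm-style domU config (the image path, LV name and device names below are just made up for illustration):

    # file-backed disk: guest I/O goes through a file in dom0, so dom0's
    # page cache sits under whatever caching the guest does itself
    disk = [ 'file:/var/lib/xen/images/guest1.img,xvda,w' ]

    # blockdev-backed disk: the LV/disk is handed straight to the guest,
    # so there is no filesystem-level caching for it in dom0
    # disk = [ 'phy:/dev/vg0/guest1-root,xvda,w' ]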

Things get a bit more complex once you move to remote storage, e.g. a SAN or even just a remote Linux iSCSI target, since you can then tweak a few things on that end as well.
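For instance, assuming something like iSCSI Enterprise Target (IET) on the storage box, the LUN type decides whether the target machine's page cache gets involved at all (the target name and LV path here are invented):

    Target iqn.2010-11.org.example:guest1
        # Type=fileio serves the LUN through the target's page cache,
        # Type=blockio bypasses it and hits the block device directly
        Lun 0 Path=/dev/vg0/guest1-root,Type=blockio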

I have seen at least one commercial Xen VPS vendor disabling buffer and
cache for guests. Is this simply a tradeoff of memory vs. performance,
or is there really no benefit to having two levels of buffers and cache?

Are you sure it was a Xen-based VPS? I've seen people using OpenVZ try stuff like that. In the end it boils down to what level of service you need. If you have nearline storage, with block devices being exported directly under Xen in HVM mode, there won't be two levels of buffers anyway.
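One easy way to check whether dom0 is actually caching anything for a guest is to watch dom0's own Buffers/Cached counters while the guest hammers its disk. A tiny Python sketch (nothing Xen-specific, it just polls /proc/meminfo in dom0; run it during a big dd or similar inside the guest):

    # If Buffers/Cached barely move while the guest does heavy I/O,
    # dom0 isn't buffering the guest's blocks and the second cache
    # level simply isn't there.
    import time

    def meminfo():
        info = {}
        with open('/proc/meminfo') as f:
            for line in f:
                key, value = line.split(':', 1)
                info[key] = int(value.strip().split()[0])  # value in kB
        return info

    while True:
        m = meminfo()
        print("Buffers: %d kB   Cached: %d kB" % (m['Buffers'], m['Cached']))
        time.sleep(5)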

I know this does not directly answer your question, but hopefully gets you going in the right direction.

- KB
