On Tue, Jan 19, 2016 at 10:56:21PM +0100, lee wrote:
> Alec Ten Harmsel <a...@alectenharmsel.com> writes:
> >
> > Depends on how the load is. Right now I have a 500GB HDD at work. I use
> > VirtualBox and vagrant for testing various software. Every VM in
> > VirtualBox gets a 50GB hard disk, and I generally have 7 or 8 at a time.
> > Add in all the other stuff on my system, which includes a 200GB dataset,
> > and the disk is overcommitted. Of course, none of the VirtualBox disks
> > use anywhere near 50GB.
> 
> True, that's for testing when you do know that the disk space will not
> be used and have no trouble when it is.  When you have the VMs in
> production and users (employees) using them, you don't know when they
> will run out of disk space and trouble ensues.

Almost. Here is a comparable example: I am an admin on an HPC cluster.
We have a shared Lustre filesystem, around 1PB of capacity, where people
store work files while their jobs are running. As strange as it may
sound, that filesystem is overcommitted: with 20,000 cores, that is only
about 52GB per core, not even close to enough for more than half a year
of data accumulation. The only reason it can be overcommitted is that
unused data is deleted after 90 days.
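
If you want to check that arithmetic, it is just capacity divided by
core count. A quick Python throwaway (the 1PB and 20,000-core figures
are the same round numbers as above, nothing exact):

    # Rough back-of-the-envelope for the Lustre example above.
    capacity_bytes = 1 * 1024**5   # ~1 PiB of shared scratch space
    cores = 20_000                 # cluster core count

    per_core_gib = capacity_bytes / cores / 1024**3
    print(f"{per_core_gib:.1f} GiB per core")   # prints ~52.4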

Extending this to a more realistic example, one without automatic data
deletion, is straightforward. Imagine you are a web hosting provider.
You allow each client unlimited disk space, so you are overcommitted by
definition. Even though any one client may increase their usage very
quickly, aggregate usage rises slowly, which gives you more than enough
time to grow whatever backing filesystem is hosting their files.
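
Operationally that just means watching actual usage against physical
capacity instead of against what you have promised. A minimal sketch of
the idea in Python (all of the numbers and the threshold are made up):

    # Toy illustration: with "unlimited" plans you cannot budget against
    # promises, so you compare real usage to real capacity and expand early.
    physical_capacity_gb = 10_000                    # what the backing filesystem really has
    clients_used_gb = [12.3, 250.0, 4.7, 80.1, 1.2]  # made-up per-client usage

    used_gb = sum(clients_used_gb)
    utilization = used_gb / physical_capacity_gb

    EXPAND_THRESHOLD = 0.80   # arbitrary; pick it from how fast aggregate usage grows
    if utilization > EXPAND_THRESHOLD:
        print("time to add capacity to the backing filesystem")
    else:
        print(f"utilization {utilization:.1%}: plenty of headroom")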

> > All Joost is saying is that most resources can be overcommitted, since
> > all the users will not be using all their resources at the same time.
> 
> How do you overcommit disk space and then shrink the VMs automatically
> when disk usage gets lower again?
> 

Sorry, my previous example was a bad fit for your question: as far as I
know, the normal strategy is to expand the backing storage when
necessary rather than to shrink it automatically afterwards. See above.
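
As an aside on the expand-instead-of-shrink point: with dynamically
allocated images you can see the overcommitment directly by comparing
what the guests were promised with what their image files actually
occupy. A rough Python sketch (the directory path and extensions are
placeholders; the 50GB figure is the one from the quoted text above):

    # Sum the space a directory of VM disk images actually occupies and
    # compare it with the nominal size handed to each guest.
    import os

    IMAGE_DIR = "/path/to/VirtualBox VMs"   # placeholder
    NOMINAL_GB_PER_VM = 50                  # the 50GB figure from above

    total_bytes = 0
    count = 0
    for root, _dirs, files in os.walk(IMAGE_DIR):
        for name in files:
            if name.endswith((".vdi", ".vmdk", ".qcow2")):
                st = os.stat(os.path.join(root, name))
                total_bytes += st.st_blocks * 512   # blocks actually allocated on disk
                count += 1

    print(f"{count} images, promised {count * NOMINAL_GB_PER_VM} GB, "
          f"actually using {total_bytes / 1024**3:.1f} GiB")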

Alec
