Re: space issues, I have yet to encounter problems with the VM Working directory path -- its usage tends to stay pretty constant (it's more or less a function of the number of running computers). The Virtual Disk path, on the other hand, is constantly filling up (and having to be managed).
As for whether to put the Virtual Disk path on SSDs or to use them for the VM Working directory path, I would encourage you to look at the actual utilization patterns of your existing infrastructure: when you encounter high I/O latency, and what the patterns are for read and write operations. Keep in mind that the Virtual Disk path is used primarily for reads, while the VM Working directory sees both reads and writes, and the more concurrent reservations you have running, the more pronounced those numbers will be. In our experience, we hit I/O bottlenecks on the VM Working directory during heavy classroom use until we moved it onto a faster disk backend (our Virtual Disk path is on comparatively slower disk). But I would encourage you to make that determination based on what you observe in your own infrastructure.

Regards,
Aaron Coburn

On Jun 9, 2014, at 4:02 PM, David DeMizio <[email protected]> wrote:

> Thanks Aaron,
>
> In my case, I just have one management node, so I think using the 4 local
> SSDs (RAID 5) for the Virtual Disk path should be fine, and then using the
> iSCSI datastore for my repository and VM Working directory path. Do you
> have any other recommendations for my proposed setup? It would be nice to
> use those 4 300 GB SSDs as two RAID 1 arrays so I could use one for the VM
> Working directory path and the other for the Virtual Disk path -- just not
> sure if I would run into space issues.
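For anyone wanting to put numbers behind the "look at your read and write patterns" advice, here is a minimal sketch (my own, not anything shipped with VCL) that parses /proc/diskstats on a Linux management node and reports cumulative read vs. write sectors per block device. It assumes the standard /proc/diskstats field layout; sampling it twice during a heavy classroom load and diffing the counters will show which backend is read-heavy and which is write-heavy:

```python
# Sketch: summarize cumulative read/write activity per block device by
# parsing /proc/diskstats (Linux). Run it twice during a busy period and
# diff the counters to see the read/write mix per disk backend.

def parse_diskstats(text):
    """Return {device: (sectors_read, sectors_written)} from diskstats text."""
    stats = {}
    for line in text.splitlines():
        fields = line.split()
        if len(fields) < 10:
            continue
        device = fields[2]
        sectors_read = int(fields[5])     # field 6: sectors read
        sectors_written = int(fields[9])  # field 10: sectors written
        stats[device] = (sectors_read, sectors_written)
    return stats

if __name__ == "__main__":
    with open("/proc/diskstats") as f:
        for dev, (rd, wr) in sorted(parse_diskstats(f.read()).items()):
            total = rd + wr
            if total:
                print(f"{dev}: {rd} sectors read, {wr} written "
                      f"({100 * rd // total}% reads)")
```

`iostat -x` from sysstat gives the same picture (plus latency columns like await) if you'd rather not script it, but the counters above are enough to confirm whether your Virtual Disk path really is mostly reads before you commit the SSDs one way or the other.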
