On 07/27/2012 07:55 AM, David Nalley wrote:
On Tue, Jul 24, 2012 at 4:30 AM, Vladimir Melnik <v.mel...@uplink.ua> wrote:
Good day!
What would you recommend to use as primary storage based on a Linux server?
iSCSI or NFS?
Am I right that NFS will be too slow for keeping VM images?
Thanks in advance!
So while my inner storage geek really likes iSCSI, and there is a lot of
performance tuning you can do with it, I find that most people are not
willing to do that level of tuning, and until the advent of 10GbE I am not
sure it really mattered, as network bandwidth tended to be the limiting
factor for arrays of a given size.
There have also been a number of NFS vs iSCSI performance reports done for
various hypervisors (google for 'NFS vs iSCSI + $hypervisor_of_choice'), and
they typically show that iSCSI is marginally faster on average, but that the
overhead isn't worth it. So if you are a large iSCSI shop already, feel free
to use it, but NFS is typically a lot easier to set up and gets you 95% of
the speed benefit, IMO.
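As an illustration of that setup cost, an NFS primary store on a Linux box
is essentially the following (the path, client network, and export options
here are only placeholders, not anything CloudStack-specific):

  # /etc/exports on the storage server
  /export/primary  10.0.0.0/24(rw,async,no_root_squash,no_subtree_check)

  exportfs -ra    # apply the new export

  # quick sanity check from a hypervisor or any client
  mount -t nfs storage-server:/export/primary /mnt/primary

When the primary storage is added in CloudStack you just point it at the NFS
server and path; the hypervisors handle the mounting themselves.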
--David
I am doing this comparison as we speak, and was led to believe by Citrix
that NFS was the storage method of choice for CloudStack. Once I got into
it, what I realized is that NFS creates fewer support issues for Citrix, and
that is probably the main reason for the recommendation.

So far I have found that NFS creates a far higher processor load on the
storage unit than iSCSI, to the point that I can saturate a quad-core Xeon
with one instance of bonnie++ in a VM. With iSCSI, that workload is
distributed across the hypervisors, as the HVs and VMs are doing the file
system metadata processing, not the storage unit. On the SAN/NAS itself,
with iSCSI you are just doing block-level storage, and if the network card
does TCP offload, or is an iSCSI HBA, you reduce the CPU load even more.
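(For reference, a single bonnie++ run of the sort described above is nothing
exotic; something like the following inside a test VM would do, where the
mount point, file size, and user are only examples:

  bonnie++ -d /mnt/test -s 16384 -n 0 -u nobody   # 16384 MiB test file

The -s size should be at least twice the VM's RAM so results aren't served
from the page cache, and -n 0 skips the small-file creation phase.)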
I will be rebuilding my NFS volume as a straight LVM logical volume, then
creating an iSCSI target on top of it (a rough sketch of which is below),
and will reply with my comparative results.
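A sketch of that rebuild, assuming an existing LVM volume group and the tgt
iSCSI target framework (the volume group name, size, and IQN below are
placeholders):

  # carve a logical volume out of the existing volume group
  lvcreate -L 500G -n lv_primary vg_storage

  # define an iSCSI target and expose the LV as LUN 1
  tgtadm --lld iscsi --op new --mode target --tid 1 \
         -T iqn.2012-07.com.example:storage.primary
  tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
         -b /dev/vg_storage/lv_primary

  # allow initiators to connect (ALL is fine for a test,
  # restrict it for production)
  tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL

The hypervisors then log in to the target with their iSCSI initiators, and
the file system work lands on their side rather than on the storage unit.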
--
Regards,
Nik