Good point, Carlos, though for the sake of clarity (to confirm it's not a bug), I'd check whether the port groups for storage have any network throttle applied.

In VMware it's easy to check; I assume Xen would be similar.


On 8/18/14, 3:29 PM, Carlos Reátegui wrote:
Ilya,
Isn’t the network throttling for the guest/public network?  How does it affect 
the primary storage network?


On Aug 18, 2014, at 3:17 PM, ilya musayev <[email protected]> wrote:

As someone probably already mentioned, check whether throttling is enabled/set in global 
settings as well as under the network offering.
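As a quick sanity check against observed benchmark numbers: CloudStack's throttle settings (e.g. network.throttling.rate) are expressed in Mbit/s, while disk benchmarks usually report MB/s, so a small conversion shows what ceiling a throttle would impose. This is just an illustrative sketch; the 200 Mbit/s value below is a commonly cited default, not necessarily what your install uses.

```python
def throttle_ceiling_mb_per_s(rate_mbit_per_s: float) -> float:
    """MB/s ceiling implied by a throttle rate given in Mbit/s."""
    return rate_mbit_per_s / 8.0

# A 200 Mbit/s throttle (a commonly cited default) would cap traffic
# near 25 MB/s, regardless of how fast the underlying storage is:
print(throttle_ceiling_mb_per_s(200))  # 25.0
```

If your measured throughput plateaus right around this ceiling, a throttle is a likely suspect; if it's far below it, look elsewhere.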

Regards
ilya

On 8/18/14, 1:01 PM, Carlos Reátegui wrote:
Are you sure the VR is traversed for NFS traffic?  In my setup the NAS subnet is 
completely separate from any that CS uses.  The hosts know about it, but none of 
the system VMs do.

In my setup I am using a shared network, so the VR is not involved in network 
traffic.

One of my setups:
NAS (Ubuntu NFS, HW RAID10 with SSD cache) connected with 10GbE on a subnet 
that CS does not know about, other than the IP of the NFS server.
XenServer hosts: 4 x 1GbE for primary storage, 4 x 1GbE for CloudStack (e.g. 
guest, management, secondary storage)

Using bonnie++ I am seeing ~135Mbps read / ~109Mbps write from an Ubuntu 12.04 VM.
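For comparison across the numbers in this thread, it helps to put everything in the same units. Assuming those bonnie++ figures are actually MB/s (bonnie++ reports throughput in K/sec), 135 MB/s works out to roughly 1.08 Gbit/s, i.e. about one saturated 1GbE link, which is what a single NFS TCP stream typically gets out of a 4 x 1GbE bond. A sketch of the conversion:

```python
def mb_per_s_to_gbit_per_s(mb_s: float) -> float:
    """Convert MB/s (as bonnie++ reports) to Gbit/s of wire throughput."""
    return mb_s * 8.0 / 1000.0

# ~135 MB/s read is roughly one fully utilized 1GbE link:
print(mb_per_s_to_gbit_per_s(135))  # 1.08
```

Note this ignores protocol overhead, so the real link utilization is a bit higher than the raw payload figure suggests.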


On Aug 18, 2014, at 10:20 AM, Jeff Crystal <[email protected]> wrote:

Management server: HP ProLiant ML350 G5, 18GB RAM, dual quad-core 2.0GHz
SAN: HP ProLiant ML350 G6, 28GB RAM, dual quad-core 2.66GHz, running Open-E 
DSS v7 Lite
Virtual hosts (2 identical servers):
HP ProLiant ML350 G5, 24GB RAM, dual quad-core 2.66GHz, with (4) gigabit NICs
Public, Guest, Storage, and Management networks are all assigned dedicated NICs 
(cloudbr0-3)

Using NFS, I'm getting 6-7Mbps write and 45-50Mbps read speeds with this setup.

Using Microsoft software iSCSI from a Windows VM running in this environment 
and attached to the same SAN, I get 13-14Mbps read/write speeds.  (Access to 
the SAN traverses the virtual router; I'm not sure whether this affects the 
speed.)

From: Carlos Reátegui [mailto:[email protected]]
Sent: Monday, August 18, 2014 1:07 PM
To: [email protected]
Subject: Re: Disk performance

What is your network setup?


On Aug 18, 2014, at 10:04 AM, Jeff Crystal <[email protected]> wrote:

No, I need a shared storage solution. I'm wondering what others are using in 
place of NFS. OCFS2? GFS2? GlusterFS? I tried setting up CLVM, but it seems 
very problematic (the server won't shut down without manual intervention to 
leave the cluster, and it won't join the cluster on boot without manual 
commands. Not very enterprisey!) I'll have to give Ceph a look...

-----Original Message-----
From: Ahmad Emneina [mailto:[email protected]]
Sent: Monday, August 18, 2014 11:52 AM
To: Cloudstack users mailing list
Subject: Re: Disk performance

Local storage is probably your most performant storage type... you don't get the 
awesomeness of HA or easy volume recovery, but if all you're after is performance, 
that's the one.


On Mon, Aug 18, 2014 at 8:47 AM, Randy Smith <[email protected]> wrote:

Jeff,

I'm a big fan of ceph for clustered storage for block devices.

Beyond that, there are a bunch of crazy things you can do to tune NFS
but it's rarely worth it.
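For anyone who does want to try the tuning route anyway, the usual knobs are the NFS client mount options. An illustrative fstab line (the export path and values below are examples only, not recommendations; rsize/wsize are negotiated down to whatever the server supports, so measure against your own workload):

```
# /etc/fstab -- illustrative NFS client options for KVM primary storage
# (hypothetical server/export names; tune and benchmark before trusting)
nas:/export/primary  /mnt/primary  nfs  rw,hard,intr,tcp,noatime,rsize=65536,wsize=65536  0  0
```

Server-side settings (number of nfsd threads, sync vs. async exports) tend to matter at least as much as the client options, and async in particular trades safety for speed.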


On Mon, Aug 18, 2014 at 9:10 AM, Jeff Crystal <[email protected]> wrote:

Anyone have any suggestions for improving disk performance with
CloudStack and KVM? Using NFS is pretty craptastic, even with
dedicated network adapters and switches for storage traffic.









--
Randall Smith
Computing Services
Adams State University
http://www.adams.edu/
719-587-7741
