Doron Fediuck wrote:
----- Original Message -----
| From: "Itamar Heim" <ih...@redhat.com>
| To: "Doron Fediuck" <dfedi...@redhat.com>
| Cc: "Christian Kolquist" <ckolqu...@rgmadvisors.com>, users@ovirt.org, "Dan 
Kenigsberg" <dan...@redhat.com>
| Sent: Tuesday, July 16, 2013 3:04:11 PM
| Subject: Re: [Users] SPM and VM migrations
|
| On 07/08/2013 02:07 PM, Doron Fediuck wrote:
| >
| > ----- Original Message -----
| > | From: "Christian Kolquist" <ckolqu...@rgmadvisors.com>
| > | To: users@ovirt.org
| > | Sent: Tuesday, July 2, 2013 11:01:55 PM
| > | Subject: [Users] SPM and VM migrations
| > |
| > | I currently have an issue where migrating more than one VM from the
| > | host that is the SPM to another host causes connectivity to the SPM
| > | host to fail, so the ovirt engine sees the host as down. It then
| > | reboots the SPM host and stops all of the VMs running there.
| > |
| > |
| > | Our setup
| > |
| > | Nodes: Fedora 18
| > | em1: ovirtmgmt (mgmt, storage and migrations)
| > | em2: VM network trunks
| > |
| > | Engine: Fedora 17 (will be upgrading that to fedora 18 shortly)
| > |
| > | Both NICs are 1 Gb. The storage is NFS, which is on the same VLAN and
| > | subnet as the hosts. The ovirt-engine is on a standalone server but is
| > | NOT on the same VLAN/subnet yet.
| > |
| > | Is it normal behavior for the SPM to have issues when migrating VMs to
| > | and from it? I don't have any further network interfaces to add to the
| > | hosts at this time (we are planning on adding a 2x 10Gb card to each
| > | node in the future, but we don't have that option right now). Is there
| > | any way to limit the number of active migrations, and to give migration
| > | traffic a lower priority than other traffic?
| > |
| > |
| > | Thanks
| > | Christian
| > |
| > |
| > |
| >
| > Hi Christian,
| > Apparently the migration is eating up your bandwidth.
| > Currently it is possible to hard-limit migration bandwidth in
| > /etc/vdsm/vdsm.conf; the option is documented in the
| > /usr/share/doc/<vdsm-version>/vdsm.conf.sample file:
| >
| > # Maximum bandwidth for migration, in mbps, 0 means libvirt's default
| > # (30mbps?).
| > # migration_max_bandwidth = 0
|
| why not change this default from 0 to something more defensive?
|
IIRC the limitation has a very bad effect in most cases, and it may
also get you to a point where the migration process does not converge.
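
For reference, a minimal sketch of what such a hard limit would look like
on a host (the value is illustrative, not a recommendation, and vdsmd
needs a restart after changing it):

    # /etc/vdsm/vdsm.conf
    [vars]
    # cap outgoing migration bandwidth so the shared 1G ovirtmgmt link
    # keeps headroom for NFS storage and engine heartbeats
    migration_max_bandwidth = 85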

| >
| > In the coming oVirt 3.3 we should be able to handle this by separating
| > out the migration network and then using
| > http://www.ovirt.org/Features/Network_QoS
| >
| > Doron
| >
I asked about this parameter the other day and have used it on two hosts so far, BUT it has an unmentioned side effect. There are two parameters:
- migration_max_bandwidth
- max_outgoing_migrations

I thought that setting the first to 85 would be enough, but it isn't. The settings mean the following: if max_outgoing_migrations is set to 2 or higher, the end result will be max_outgoing_migrations * migration_max_bandwidth, meaning that in my case anything higher than 1 will saturate my 1G line. So max_outgoing_migrations=3 and migration_max_bandwidth=70 will consume 210M, if the line has that capacity and you select 3 or more VMs to migrate.
So migration_max_bandwidth is PER VM and not an absolute limit.
0 means take as much as you can, not 30, since one VM will easily saturate 1G if it's a big one.

In our case we will probably go for 1 VM at a time, but at 85-90M, since our use case is usually a single migration anyway.
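
As a concrete sketch of that setup (illustrative values; since the bandwidth
cap is per VM, the aggregate is always max_outgoing_migrations *
migration_max_bandwidth):

    # /etc/vdsm/vdsm.conf
    [vars]
    # per-VM migration cap
    migration_max_bandwidth = 85
    # with 1 concurrent migration the aggregate stays at 1 * 85 = 85M
    max_outgoing_migrations = 1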

Joop


