VR Loses Instance IP Address

2019-08-12 Thread li jerry
Hi All
  After upgrading our ACS to 4.11.3, VMs on the shared network frequently lose 
their IP addresses (the guest VM cannot obtain an IP address).

Analyzing cloud.log in the VR, we found that restarting or deleting VM A 
sometimes causes VM B's entry to disappear from /etc/dhcphosts.txt.
Because /etc/dhcphosts.txt then no longer matches the data in 
/var/lib/misc/dnsmasq.leases, delete_leases removes the IP address of VM B 
from the DHCP server.
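
For context, the reconciliation that removes the lease looks roughly like the 
sketch below: the VR compares the MAC addresses in /var/lib/misc/dnsmasq.leases 
against those in /etc/dhcphosts.txt and calls dhcp_release for any lease with 
no matching host entry. This is a simplified reconstruction in Python, not the 
actual CsDhcp.py code; the hard-coded eth0 interface and the file parsing are 
assumptions for illustration only.

import subprocess

DHCPHOSTS = "/etc/dhcphosts.txt"          # static entries written by the VR: mac,ip,name,...
LEASES = "/var/lib/misc/dnsmasq.leases"   # active leases: expiry mac ip hostname client-id

def macs_in_dhcphosts():
    with open(DHCPHOSTS) as f:
        return {line.split(",")[0].strip().lower() for line in f if line.strip()}

def stale_leases():
    known = macs_in_dhcphosts()
    with open(LEASES) as f:
        for line in f:
            fields = line.split()
            if len(fields) >= 3 and fields[1].lower() not in known:
                yield fields[1], fields[2]   # (mac, ip) with no matching dhcphosts entry

def release_stale(interface="eth0"):         # interface is an assumption in this sketch
    for mac, ip in stale_leases():
        # dhcp_release <interface> <address> <MAC> tells dnsmasq to drop the lease
        subprocess.call(["dhcp_release", interface, ip, mac])

If VM B's line is missing from /etc/dhcphosts.txt at the moment this runs (as 
in the log below), its lease is released even though the VM is still running, 
which matches the behaviour we see.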



2019-07-25 12:03:58,519  CsHelper.py execute:193 Command 'ip link show eth0 | 
grep 'state DOWN'' returned non-zero exit status 1
2019-07-25 12:03:58,529  CsRoute.py add_network_route:73 Adding route: dev eth0 
table: Table_eth0 network: 10.40.51.0/24 if not present
2019-07-25 12:03:58,530  CsHelper.py execute:188 Executing: ip route show type 
throw 10.40.51.0/24 table Table_eth0 proto static
2019-07-25 12:03:58,544  CsHelper.py execute:188 Executing: sudo ip route flush 
cache
2019-07-25 12:03:58,582  CsHelper.py execute:188 Executing: systemctl start 
cloud-password-server@10.40.51.252
2019-07-25 12:03:58,603  CsHelper.py service:225 Service 
cloud-password-server@10.40.51.252 start
2019-07-25 12:03:58,604  CsRoute.py defaultroute_exists:115 Checking if default 
ipv4 route is present
2019-07-25 12:03:58,604  CsHelper.py execute:188 Executing: ip -4 route list 0/0
2019-07-25 12:03:58,617  CsRoute.py defaultroute_exists:119 Default route 
found: default via 10.40.51.1 dev eth0
2019-07-25 12:03:58,619  CsHelper.py execute:188 Executing: ip addr show
2019-07-25 12:03:58,635  CsFile.py commit:60 Nothing to commit. The 
/etc/dnsmasq.d/cloud.conf file did not change
2019-07-25 12:03:58,635  CsFile.py commit:66 Wrote edited file 
/etc/dhcphosts.txt
2019-07-25 12:03:58,635  CsFile.py commit:68 Updated file in-cache configuration
2019-07-25 12:03:58,635  CsFile.py commit:60 Nothing to commit. The 
/etc/dhcpopts.txt file did not change
2019-07-25 12:03:58,636  CsDhcp.py delete_leases:122 Attempting to delete entries from dnsmasq.leases file for VMs which are not on dhcphosts file
2019-07-25 12:03:58,636  CsDhcp.py delete_leases:133 dhcp_release $(ip route get 10.40.51.231 | grep eth | head -1 | awk '{print $3}') 10.40.51.231 1e:00:94:00:04:40
2019-07-25 12:03:58,636  CsHelper.py execute:188 Executing: dhcp_release $(ip route get 10.40.51.231 | grep eth | head -1 | awk '{print $3}') 10.40.51.231 1e:00:94:00:04:40
2019-07-25 12:03:58,660  CsDhcp.py delete_leases:137 Deleted 1 entries from 
dnsmasq.leases file
2019-07-25 12:03:58,661  CsFile.py commit:66 Wrote edited file /etc/hosts
2019-07-25 12:03:58,661  CsFile.py commit:68 Updated file in-cache configuration
2019-07-25 12:03:58,661  CsDhcp.py write_hosts:156 Updated hosts file
2019-07-25 12:03:58,662  CsHelper.py execute:188 Executing: systemctl restart 
dnsmasq
2019-07-25 12:03:58,772  CsHelper.py service:225 Service dnsmasq restart
2019-07-25 12:03:58,772  CsHelper.py execute:188 Executing: systemctl stop 
conntrackd
2019-07-25 12:03:58,793  CsHelper.py service:225 Service conntrackd stop
2019-07-25 12:03:58,793  CsHelper.py execute:188 Executing: systemctl stop 
keepalived
2019-07-25 12:03:58,813  CsHelper.py service:225 Service keepalived stop
2019-07-25 12:03:58,813  CsHelper.py execute:188 Executing: mount
2019-07-25 12:04:31,229  update_config.py :146 update_config.py :: Processing incoming file => vm_dhcp_entry.json.41460506-6ea7-4474-a970-b923726889b8



Re: Dedicated hosts for Domain/Account

2019-08-12 Thread Rakesh Venkatesh
Thanks for the quick reply.
I was browsing through the code and found the following


// check affinity group of type Explicit dedication exists. If No put
// dedicated pod/cluster/host in avoid list
List<AffinityGroupVMMapVO> vmGroupMappings =
        _affinityGroupVMMapDao.findByVmIdType(vm.getId(), "ExplicitDedication");

if (vmGroupMappings != null && !vmGroupMappings.isEmpty()) {
    isExplicit = true;
}


So this feature will only work if the VMs are associated with affinity groups.
I created two VMs with the same affinity group, and after enabling
maintenance mode they were migrated to the other dedicated machines.
So no need to create a GitHub issue, I guess.
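
For anyone hitting the same thing, the association can also be done through the
API. A rough sketch using the third-party "cs" Python client; the endpoint,
keys and VM UUID below are placeholders, and updateVMAffinityGroup requires the
VM to be stopped:

from cs import CloudStack

api = CloudStack(endpoint="https://cloud.example.com/client/api",
                 key="API_KEY", secret="SECRET_KEY")

# Find the ExplicitDedication affinity group created when the host was dedicated
groups = api.listAffinityGroups(type="ExplicitDedication")
group_id = groups["affinitygroup"][0]["id"]

# Attach the (stopped) VM to that group so the planner treats it as dedicated
api.updateVMAffinityGroup(id="VM_UUID", affinitygroupids=group_id)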

On Mon, Aug 12, 2019 at 5:04 PM Andrija Panic 
wrote:

> Considering that manual VM LIVE migrations via CloudStack from
> non-dedicated to dedicated SHOULD/DOES work - then I would say this is an
> "unhandled" case, which indeed should be handled and live migration should
> happen instead of stopping the VMs.
>
> I assume someone else might jump in - but if not, please raise GitHub
> issues as a bug report.
>
>
> Thx
>
> On Mon, 12 Aug 2019 at 16:52, Rakesh Venkatesh 
> wrote:
>
> > Hello
> >
> > In my cloudstack setup, I have three KVM hypervisors out of which two
> > hypervisors are dedicated to Root/admin account and the third is not
> > dedicated. When I enable the maintenance mode on the dedicated
> hypervisor,
> > it will always migrate the vm's from dedicated to non dedicated
> hypervisor
> > but not to second dedicated hypervisor. I dont think this is the expected
> > behavior. Can any one please verify? The dedicated hypervisors will be
> > added to avoid set and the deployment planning manager skips these
> > hypervisors.
> >
> > If I dedicate the third hypervisor to different domain and enable the
> > maintenance mode on the first hypervisor then all the vm's will be
> stopped
> > instead of migrating to second dedicated hypervisor of the same
> > domain/account.
> >
> >
> > I have highlighted the necessary logs in red. You can see from the logs
> > that host with id 17 and 20 are dedicated but not 26. When maintenance
> mode
> > is enabled on host id 20, it skips 17 and 20 and migrates vm's to host id
> > 26

Re: Dedicated hosts for Domain/Account

2019-08-12 Thread Andrija Panic
Considering that manual VM LIVE migrations via CloudStack from
non-dedicated to dedicated SHOULD/DOES work - then I would say this is an
"unhandled" case, which indeed should be handled and live migration should
happen instead of stopping the VMs.

I assume someone else might jump in - but if not, please raise a GitHub
issue as a bug report.


Thx

On Mon, 12 Aug 2019 at 16:52, Rakesh Venkatesh 
wrote:

> Hello
>
> In my cloudstack setup, I have three KVM hypervisors out of which two
> hypervisors are dedicated to Root/admin account and the third is not
> dedicated. When I enable the maintenance mode on the dedicated hypervisor,
> it will always migrate the vm's from dedicated to non dedicated hypervisor
> but not to second dedicated hypervisor. I dont think this is the expected
> behavior. Can any one please verify? The dedicated hypervisors will be
> added to avoid set and the deployment planning manager skips these
> hypervisors.
>
> If I dedicate the third hypervisor to different domain and enable the
> maintenance mode on the first hypervisor then all the vm's will be stopped
> instead of migrating to second dedicated hypervisor of the same
> domain/account.
>
>
> I have highlighted the necessary logs in red. You can see from the logs
> that host with id 17 and 20 are dedicated but not 26. When maintenance mode
> is enabled on host id 20, it skips 17 and 20 and migrates vm's to host id
> 26

Dedicated hosts for Domain/Account

2019-08-12 Thread Rakesh Venkatesh
Hello

In my CloudStack setup, I have three KVM hypervisors, of which two are
dedicated to the Root/admin account and the third is not dedicated. When I
enable maintenance mode on a dedicated hypervisor, it always migrates the VMs
from the dedicated to the non-dedicated hypervisor, but never to the second
dedicated hypervisor. I don't think this is the expected behavior. Can anyone
please verify? The dedicated hypervisors are added to the avoid set, and the
deployment planning manager skips them.

If I dedicate the third hypervisor to a different domain and enable
maintenance mode on the first hypervisor, then all the VMs are stopped
instead of being migrated to the second dedicated hypervisor of the same
domain/account.


You can see from the logs below that the hosts with IDs 17 and 20 are
dedicated, but host 26 is not. When maintenance mode is enabled on host 20,
the planner skips 17 and 20 and migrates the VMs to host 26.
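
To make the suspected behaviour concrete, here is a rough pseudocode sketch (in
Python, not CloudStack code) of how the avoid set appears to interact with
dedicated hosts during planning; it assumes the findByVmIdType("ExplicitDedication")
check quoted earlier in the thread is what decides whether dedicated hosts stay
in the avoid set:

# Hedged pseudocode of the planner behaviour under discussion, not actual CloudStack code.
def candidate_hosts(all_hosts, dedicated_hosts, vm_has_explicit_dedication, maintenance_host):
    avoid = {maintenance_host}                 # the host entering maintenance is always avoided
    if not vm_has_explicit_dedication:
        # without an ExplicitDedication affinity group on the VM,
        # every dedicated host is added to the avoid set
        avoid |= set(dedicated_hosts)
    return [h for h in all_hosts if h not in avoid]

# This scenario: hosts 17 and 20 are dedicated, 26 is not, host 20 enters maintenance
print(candidate_hosts([17, 20, 26], [17, 20], False, 20))  # -> [26]
print(candidate_hosts([17, 20, 26], [17, 20], True, 20))   # -> [17, 26]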



2019-08-12 14:35:23,754 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
(Work-Job-Executor-9:ctx-786e4f7a job-246740/job-246905 ctx-73b6368c)
(logid:a16d7711) Deploy avoids pods: null, clusters: null, hosts: [20],
pools: null
2019-08-12 14:35:23,757 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
(Work-Job-Executor-9:ctx-786e4f7a job-246740/job-246905 ctx-73b6368c)
(logid:a16d7711) DeploymentPlanner allocation algorithm:
com.cloud.deploy.FirstFitPlanner@6fecace4
2019-08-12 14:35:23,757 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
(Work-Job-Executor-9:ctx-786e4f7a job-246740/job-246905 ctx-73b6368c)
(logid:a16d7711) Trying to allocate a host and storage pools from dc:8,
pod:8,cluster:null, requested cpu: 16000, requested ram: 8589934592
2019-08-12 14:35:23,757 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
(Work-Job-Executor-9:ctx-786e4f7a job-246740/job-246905 ctx-73b6368c)
(logid:a16d7711) Is ROOT volume READY (pool already allocated)?: Yes
2019-08-12 14:35:23,757 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
(Work-Job-Executor-9:ctx-786e4f7a job-246740/job-246905 ctx-73b6368c)
(logid:a16d7711) This VM has last host_id specified, trying to choose the
same host: 20
2019-08-12 14:35:23,759 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
(Work-Job-Executor-9:ctx-786e4f7a job-246740/job-246905 ctx-73b6368c)
(logid:a16d7711) The last host of this VM is in avoid set
2019-08-12 14:35:23,759 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
(Work-Job-Executor-9:ctx-786e4f7a job-246740/job-246905 ctx-73b6368c)
(logid:a16d7711) Cannot choose the last host to deploy this VM
2019-08-12 14:35:23,759 DEBUG [c.c.d.FirstFitPlanner]
(Work-Job-Executor-9:ctx-786e4f7a job-246740/job-246905 ctx-73b6368c)
(logid:a16d7711) Searching resources only under specified Pod: 8
2019-08-12 14:35:23,759 DEBUG [c.c.d.FirstFitPlanner]
(Work-Job-Executor-9:ctx-786e4f7a job-246740/job-246905 ctx-73b6368c)
(logid:a16d7711) Listing clusters in order of aggregate capacity, that have
(atleast one host with) enough CPU and RAM capacity under this Pod: 8
2019-08-12 14:35:23,761 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
(Work-Job-Executor-7:ctx-9f4363d1 job-473/job-246899 ctx-cef9b496)
(logid:bbb870bf) Deploy avoids pods: [], clusters: [], hosts: [17, 20],
pools: null
2019-08-12 14:35:23,763 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
(Work-Job-Executor-7:ctx-9f4363d1 job-473/job-246899 ctx-cef9b496)
(logid:bbb870bf) DeploymentPlanner allocation algorithm:
com.cloud.deploy.FirstFitPlanner@6fecace4
2019-08-12 14:35:23,763 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
(Work-Job-Executor-7:ctx-9f4363d1 job-473/job-246899 ctx-cef9b496)
(logid:bbb870bf) Trying to allocate a host and storage pools from dc:8,
pod:8,cluster:null, requested cpu: 500, requested ram: 536870912
2019-08-12 14:35:23,763 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
(Work-Job-Executor-7:ctx-9f4363d1 job-473/job-246899 ctx-cef9b496)
(logid:bbb870bf) Is ROOT volume READY (pool already allocated)?: Yes
2019-08-12 14:35:23,763 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
(Work-Job-Executor-7:ctx-9f4363d1 job-473/job-246899 ctx-cef9b496)
(logid:bbb870bf) This VM has last host_id specified, trying to choose the
same host: 26
2019-08-12 14:35:23,763 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
(Work-Job-Executor-8:ctx-1cc07ab1 job-246119/job-246902 ctx-9dbb7241)
(logid:b7e8e3a2) Deploy avoids pods: [], clusters: [], hosts: [17, 20],
pools: null
2019-08-12 14:35:23,766 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
(Work-Job-Executor-8:ctx-1cc07ab1 job-246119/job-246902 ctx-9dbb7241)
(logid:b7e8e3a2) DeploymentPlanner allocation algorithm:
com.cloud.deploy.FirstFitPlanner@6fecace4
2019-08-12 14:35:23,766 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
(Work-Job-Executor-8:ctx-1cc07ab1 job-246119/job-246902 ctx-9dbb7241)
(logid:b7e8e3a2) Trying to allocate a host and storage pools from dc:8,
pod:8,cluster:null, requested cpu: 500, requested ram: 536870912
2019-08-12 14:35:23,766 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
(Work-Job-Executor-8:ctx-1cc07ab1 job-2461

Re: Question about Basic and Advanced Network

2019-08-12 Thread Jon Marshall
1) NetScaler provides load-balancing functions rather than IPs. For both basic 
and advanced networking you can either assign IPs statically to your VMs or use 
DHCP on your virtual routers to provide the IPs.

Public vs. private IPs doesn't really make any difference.
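
On the API side of question 1, public IP ranges can be inspected, and in
advanced zones acquired, purely through the CloudStack API. A rough sketch with
the third-party "cs" Python client; the endpoint, keys and UUIDs are placeholders:

from cs import CloudStack

api = CloudStack(endpoint="https://cloud.example.com/client/api",
                 key="API_KEY", secret="SECRET_KEY")

# List the public IPs the zone already knows about
for ip in api.listPublicIpAddresses(zoneid="ZONE_UUID").get("publicipaddress", []):
    print(ip["ipaddress"], ip.get("state"))

# In an advanced zone, acquire an additional public IP for a network
acquired = api.associateIpAddress(networkid="NETWORK_UUID")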

2) You can set up your CloudStack using Advanced networking with Security Groups, 
which is pretty much basic networking but with multiple subnets/VLANs.

However, if you use Advanced networking (without Security Groups) then no, you 
cannot have isolated networks using SGs, but Advanced networking does support 
firewalling for isolated and VPC networks.

Jon


From: Francisco Germano 
Sent: 11 August 2019 22:51
To: 'users@cloudstack.apache.org' 
Subject: Question about Basic and Advanced Network

Greetings,

My team and I are working on open-source software, and our next step is to 
implement an integration with CloudStack. We are implementing the network 
context and have some questions. Could you help us?

About Basic Network:
1 - A Citrix NetScaler provides Public IPs, right? Is it possible to control the 
Public IPs using just the CloudStack API? If so, how?

About Advanced Network:
2 - Is it possible to use Security Groups in an Isolated Network?

Best regards,
Francisco Germano