Hi ilya,

        Thanks to all for the support. The issue is fixed (both creating a 
template from the ROOT volume and deploying a VM from the default template now 
work).

        I found three interfaces on the SSVM, two of them with private IPs in 
the same subnet. To fix this, I added a separate subnet for storage networking 
at the pod level. (Everything is fine right now.)
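
        In case it helps someone else hitting the same thing, the overlapping 
subnets are easy to spot from the SSVM shell - a rough sketch of the checks 
(interface names may differ in your setup):

# ip addr show | grep "inet "   # two NICs in the same subnet is the red flag
# ip route show                 # duplicate routes for that subnet confirm it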

Regards,
Cristian
On 28.01.2016 22:31:08, ilya <ilya.mailing.li...@gmail.com> wrote:
See response in-line:

On 1/28/16 5:11 AM, Cristian Ciobanu wrote:
> Hello again,
>
> I found something strange.
>
> 1. I did a telnet test from the SSVM to vCenter: not able to ping, and not 
> able to connect via telnet on port 80.

From the SSVM log:

2016-01-27 11:23:07,117 ERROR
[storage.resource.VmwareSecondaryStorageResourceHandler]
(agentRequest-Handler-5:job-29/job-30, cmd: CopyCommand) Unexpected
exception com.sun.xml.internal.ws.client.ClientTransportException: HTTP
transport error: java.net.ConnectException: Connection timed out
com.sun.xml.internal.ws.client.ClientTransportException: HTTP transport
error: java.net.ConnectException: Connection timed out
at
com.sun.xml.internal.ws.transport.http.client.HttpClientTransport.getOutput(HttpClientTransport.java:117)



Try the following on SSVM:

# service iptables stop

Try connecting to vCenter again via telnet or "nc -z"; note the port is 443, not 80.
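
For example (a sketch; "vcenter.example.local" is a placeholder for your
vCenter address):

# nc -zvw5 vcenter.example.local 443 && echo reachable || echo blocked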

If it does not work, check your routing table - something is funky with
your FW or routes.
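
For example:

# ip route show
# netstat -rn

Both should show a sane default route, and no two interfaces should claim
the same subnet.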

Explain how your network is set up - are there any firewalls in place?

Besides vCenter, I would test the connectivity to the actual ESXi host that
the VM resides on; try "nc -z esxhost 443".

There can also be an "extra NIC" issue that causes routing and
connectivity problems. Try "ifdown eth3" on the SSVM to see if it helps.

All of this must work in order for CloudStack to function properly.

Regards
ilya


> 2. After restarting the SSVM, I was able to ping vCenter and connect via 
> telnet.
> 3. Again, the ping test from SSVM -> vCenter worked; at the same time I 
> started a VM deploy, but I got the same error, and when I checked the SSVM I 
> saw the ping test failing (no packets received from vCenter).
>
> I have no idea what the issue could be.
>
>
> Thanks
>
>
> Regards,
> Cristian
> On 28.01.2016 14:44:49, Cristian Ciobanu wrote:
> Hi Somesh,
>
>
> I did the requested test, and it is not able to connect (I did a telnet test 
> on ports 80, 8080, and 443); it is also not responding to ping.
>
> I also did the same test from the management server (CloudStack), and it works.
>
> I'm not sure why it is not working from the SSVM, because I'm able to ping 
> Google from it.
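>
> For reference, the same checks can be repeated in one line (a sketch; 
> replace VCENTER_IP with your vCenter address):
>
> for p in 80 8080 443; do nc -zvw3 VCENTER_IP $p; done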
>
> Thank you !
>
>
> Regards,
> Cristian
> On 28.01.2016 01:42:15, Somesh Naidu wrote:
> Can you check if SSVM is able to talk to the vCenter on port 80?
>
> Regards,
> Somesh
>
>
> -----Original Message-----
> From: Cristian Ciobanu [mailto:cristian.c@istream.today]
> Sent: Wednesday, January 27, 2016 4:32 PM
> To: users@cloudstack.apache.org
> Subject: Re: CloudStack and VMware VM deploy issue
>
> Hello again,
>
> Just want to add additional logs from the SSVM: http://pastebin.com/1J5ejc6D
>
> Please help me solve this issue.
>
> Thanks All
>
>
> Regards,
> Cristian
> On 27.01.2016 17:52:32, Cristian Ciobanu wrote:
> Hello to All,
>
> I rebuilt this configuration and I have the same issue on the first VM 
> deploy. Please see the full log: http://pastebin.com/ybEybCQJ (you may see 
> another error in the first part of the log; that one was related to the host 
> IP configuration, which I forgot to change, but it is fixed now).
>
> Thank you!
>
>
> Regards,
> Cristian
> On 27.01.2016 03:42:35, ilya wrote:
> err..
>
> Looks like my message was trimmed, do
>
> cat /dev/null > /var/log/cloudstack/management/management-server.log
>
> or similar to start with minimal log.
>
> On 1/26/16 1:53 PM, Cristian Ciobanu wrote:
>> I will come back in a few hours with more details; I will start again from 
>> scratch.
>>
>> Thank you!
>>
>>
>> Regards,
>> Cristian
>> On 26.01.2016 23:43:54, ilya wrote:
>> Can you post full log to pastebin like site?
>>
>> Ideally, if you can trim the log before you simulate the error - it
>> would be great.
>>
>> do
>>
>>> /var/log/cloudstack/management/management-server.log
>>
>> Then simulate the issue and upload to http://pastebin.com
>>
>> Thanks
>> ilya
>>
>>
>>
>> On 1/26/16 1:33 PM, Cristian Ciobanu wrote:
>>> Hi ilya
>>>
>>>
>>> Yes, before deploying I enabled local storage in "Global Settings"; I also 
>>> set it in the zone-level settings, removed all disk offerings with the 
>>> shared option, and added new ones with local storage. (CS 4.5.2 ShapeBlue)
>>>
>>> I did this in CS 4.5.2; I also used this version for KVM with local 
>>> storage and it works, but on VMware I have this issue.
>>>
>>>
>>>
>>>
>>> B. I get this error with the following task (CS 4.5.2 ShapeBlue - VMware 
>>> 5.5 - local storage enabled, basic networking):
>>>
>>> I installed Windows 2008 from an ISO file; after that I tried to create a 
>>> template from the ROOT volume, but I get this error: Failed to create 
>>> templatecreate template from volume exception: Exception: 
>>> java.lang.NullPointerException
>>> Message: null
>>>
>>>
>>>
>>>
>>> A. I see this error when I try to deploy a VM from the default template 
>>> (CS 4.5.2 ShapeBlue - VMware 5.5 - local storage enabled, basic 
>>> networking):
>>>
>>> [root@ns52xxxx management]# grep -r job-79 --color
>>> management-server.log:2016-01-26 06:28:27,608 DEBUG 
>>> [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-14:ctx-107b2062 job-78 
>>> ctx-9e2c938d) Sync job-79 execution on object VmWorkJobQueue.12
>>> management-server.log:2016-01-26 06:28:29,292 DEBUG 
>>> [o.a.c.f.j.i.AsyncJobManagerImpl] (AsyncJobMgr-Heartbeat-1:ctx-336c2f61) 
>>> Schedule queued job-79
>>> management-server.log:2016-01-26 06:28:29,307 INFO 
>>> [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79) Add job-79 into job monitoring
>>> management-server.log:2016-01-26 06:28:29,320 DEBUG 
>>> [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79) Executing AsyncJobVO {id:79, userId: 2, accountId: 2, 
>>> instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, 
>>> cmdInfo: 
>>> rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKcHNyABFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}
>>> management-server.log:2016-01-26 06:28:29,321 DEBUG 
>>> [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79) Run VM work job: com.cloud.vm.VmWorkStart for VM 12, job 
>>> origin: 78
>>> management-server.log:2016-01-26 06:28:29,323 DEBUG 
>>> [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) Execute VM work job: 
>>> com.cloud.vm.VmWorkStart{"dcId":1,"podId":1,"clusterId":1,"hostId":1,"rawParams":{"VmPassword":"rO0ABXQADnNhdmVkX3Bhc3N3b3Jk"},"userId":2,"accountId":2,"vmId":12,"handlerName":"VirtualMachineManagerImpl"}
>>> management-server.log:2016-01-26 06:28:29,334 DEBUG 
>>> [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) VM state transitted from :Stopped to Starting 
>>> with event: StartRequestedvm's original host id: null new host id: null 
>>> host id before state transition: null
>>> management-server.log:2016-01-26 06:28:29,334 DEBUG 
>>> [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) Successfully transitioned to start state for 
>>> VM[User|i-2-12-VM] reservation id = ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777
>>> management-server.log:2016-01-26 06:28:29,350 DEBUG 
>>> [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) Trying to deploy VM, vm has dcId: 1 and podId: 
>>> null
>>> management-server.log:2016-01-26 06:28:29,350 DEBUG 
>>> [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) advanceStart: DeploymentPlan is provided, using 
>>> dcId:1, podId: 1, clusterId: 1, hostId: 1, poolId: null
>>> management-server.log:2016-01-26 06:28:29,350 DEBUG 
>>> [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) Deploy avoids pods: null, clusters: null, 
>>> hosts: null
>>> management-server.log:2016-01-26 06:28:29,360 DEBUG 
>>> [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) Deploy avoids pods: [], clusters: [], hosts: []
>>> management-server.log:2016-01-26 06:28:29,361 DEBUG 
>>> [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) DeploymentPlanner allocation algorithm: 
>>> com.cloud.deploy.FirstFitPlanner@46c88784
>>> management-server.log:2016-01-26 06:28:29,361 DEBUG 
>>> [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) Trying to allocate a host and storage pools 
>>> from dc:1, pod:1,cluster:1, requested cpu: 500, requested ram: 536870912
>>> management-server.log:2016-01-26 06:28:29,361 DEBUG 
>>> [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) Is ROOT volume READY (pool already allocated)?: 
>>> No
>>> management-server.log:2016-01-26 06:28:29,361 DEBUG 
>>> [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) DeploymentPlan has host_id specified, choosing 
>>> this host and making no checks on this host: 1
>>> management-server.log:2016-01-26 06:28:29,363 DEBUG 
>>> [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) Looking for suitable pools for this host under 
>>> zone: 1, pod: 1, cluster: 1
>>> management-server.log:2016-01-26 06:28:29,366 DEBUG 
>>> [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) Checking suitable pools for volume (Id, Type): 
>>> (15,ROOT)
>>> management-server.log:2016-01-26 06:28:29,367 DEBUG 
>>> [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) We need to allocate new storagepool for this 
>>> volume
>>> management-server.log:2016-01-26 06:28:29,368 DEBUG 
>>> [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) Calling StoragePoolAllocators to find suitable 
>>> pools
>>> management-server.log:2016-01-26 06:28:29,368 DEBUG 
>>> [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) LocalStoragePoolAllocator trying to find 
>>> storage pool to fit the vm
>>> management-server.log:2016-01-26 06:28:29,371 DEBUG 
>>> [o.a.c.s.a.AbstractStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) Checking if storage pool is suitable, name: 
>>> null ,poolId: 1
>>> management-server.log:2016-01-26 06:28:29,373 INFO 
>>> [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 
>>> ctx-67168166) Storage pool null (1) does not supply IOPS capacity, assuming 
>>> enough capacity
>>> management-server.log:2016-01-26 06:28:29,374 DEBUG 
>>> [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 
>>> ctx-67168166) Checking pool 1 for storage, totalSize: 3992708972544, 
>>> usedBytes: 10096738304, usedPct: 0.002528793952534624, disable threshold: 
>>> 0.85
>>> management-server.log:2016-01-26 06:28:29,380 DEBUG 
>>> [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 
>>> ctx-67168166) Found storage pool ns52xxx.ip-1xx-xx-2xx.net Local Storage of 
>>> type VMFS with overprovisioning factor 1
>>> management-server.log:2016-01-26 06:28:29,380 DEBUG 
>>> [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 
>>> ctx-67168166) Total over provisioned capacity calculated is 1 * 
>>> 3992708972544
>>> management-server.log:2016-01-26 06:28:29,380 DEBUG 
>>> [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 
>>> ctx-67168166) Total capacity of the pool ns52xxx.ip-1xx-xx-2xx.net Local 
>>> Storage id: 1 is 3992708972544
>>> management-server.log:2016-01-26 06:28:29,381 DEBUG 
>>> [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 
>>> ctx-67168166) Checking pool: 1 for volume allocation [Vol[15|vm=12|ROOT]], 
>>> maxSize : 3992708972544, totalAllocatedSize : 3145728000, askingSize : 
>>> 2147483648, allocated disable threshold: 0.85
>>> management-server.log:2016-01-26 06:28:29,382 DEBUG 
>>> [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) Found suitable local storage pool 1, adding to 
>>> list
>>> management-server.log:2016-01-26 06:28:29,382 DEBUG 
>>> [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) LocalStoragePoolAllocator returning 1 suitable 
>>> storage pools
>>> management-server.log:2016-01-26 06:28:29,382 DEBUG 
>>> [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) Trying to find a potenial host and associated 
>>> storage pools from the suitable host/pool lists for this VM
>>> management-server.log:2016-01-26 06:28:29,382 DEBUG 
>>> [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) Checking if host: 1 can access any suitable 
>>> storage pool for volume: ROOT
>>> management-server.log:2016-01-26 06:28:29,383 DEBUG 
>>> [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) Host: 1 can access pool: 1
>>> management-server.log:2016-01-26 06:28:29,383 DEBUG 
>>> [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) Found a potential host id: 1 name: 
>>> ns52xxx.ip-1xx-xx-2xx.net and associated storage pools for this VM
>>> management-server.log:2016-01-26 06:28:29,384 DEBUG 
>>> [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) Returning Deployment Destination: 
>>> Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))]
>>>  : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
>>> management-server.log:2016-01-26 06:28:29,384 DEBUG 
>>> [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) Deployment found - P0=VM[User|i-2-12-VM], 
>>> P0=Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))]
>>>  : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
>>> management-server.log:2016-01-26 06:28:29,408 DEBUG 
>>> [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) VM state transitted from :Starting to Starting 
>>> with event: OperationRetryvm's original host id: null new host id: 1 host 
>>> id before state transition: null
>>> management-server.log:2016-01-26 06:28:29,415 DEBUG 
>>> [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) Hosts's actual total CPU: 29592 and CPU after 
>>> applying overprovisioning: 29592
>>> management-server.log:2016-01-26 06:28:29,415 DEBUG 
>>> [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) We are allocating VM, increasing the used 
>>> capacity of this host:1
>>> management-server.log:2016-01-26 06:28:29,415 DEBUG 
>>> [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) Current Used CPU: 3000 , Free CPU:26592 
>>> ,Requested CPU: 500
>>> management-server.log:2016-01-26 06:28:29,415 DEBUG 
>>> [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) Current Used RAM: 3758096384 , Free 
>>> RAM:30489239552 ,Requested RAM: 536870912
>>> management-server.log:2016-01-26 06:28:29,415 DEBUG 
>>> [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) CPU STATS after allocation: for host: 1, old 
>>> used: 3000, old reserved: 0, actual total: 29592, total with 
>>> overprovisioning: 29592; new used:3500, reserved:0; requested 
>>> cpu:500,alloc_from_last:false
>>> management-server.log:2016-01-26 06:28:29,415 DEBUG 
>>> [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) RAM STATS after allocation: for host: 1, old 
>>> used: 3758096384, old reserved: 0, total: 34247335936; new used: 
>>> 4294967296, reserved: 0; requested mem: 536870912,alloc_from_last:false
>>> management-server.log:2016-01-26 06:28:29,432 DEBUG 
>>> [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) VM is being created in podId: 1
>>> management-server.log:2016-01-26 06:28:29,440 DEBUG 
>>> [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) Network id=204 is already implemented
>>> management-server.log:2016-01-26 06:28:29,466 DEBUG 
>>> [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 
>>> ctx-67168166) Service SecurityGroup is not supported in the network id=204
>>> management-server.log:2016-01-26 06:28:29,491 DEBUG 
>>> [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) Changing active number of nics for network 
>>> id=204 on 1
>>> management-server.log:2016-01-26 06:28:29,510 DEBUG 
>>> [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) Asking VirtualRouter to prepare for 
>>> Nic[26-12-ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777-1xx.xx.xx.235]
>>> management-server.log:2016-01-26 06:28:29,521 DEBUG 
>>> [c.c.n.r.VirtualNetworkApplianceManagerImpl] 
>>> (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Lock is 
>>> acquired for network id 204 as a part of router startup in 
>>> Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))]
>>>  : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
>>> management-server.log:2016-01-26 06:28:29,524 DEBUG 
>>> [c.c.n.r.VirtualNetworkApplianceManagerImpl] 
>>> (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Lock is 
>>> released for network id 204 as a part of router startup in 
>>> Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))]
>>>  : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
>>> management-server.log:2016-01-26 06:28:29,549 DEBUG 
>>> [c.c.n.r.VirtualNetworkApplianceManagerImpl] 
>>> (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Applying 
>>> dhcp entry in network Ntwk[204|Guest|7]
>>> management-server.log:2016-01-26 06:28:29,564 DEBUG [c.c.a.t.Request] 
>>> (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 
>>> 1-1380071810812347953: Sending { Cmd , MgmtId: 37591044139110, via: 
>>> 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, 
>>> [{"com.cloud.agent.api.routing.DhcpEntryCommand":{"vmMac":"06:80:4c:00:00:3b","vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","defaultRouter":"1xx.xx.xx.238","defaultDns":"1xx.xx.xx.230","duid":"00:03:00:01:06:80:4c:00:00:3b","isDefault":true,"executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.guest.ip":"1xx.xx.xx.230","router.ip":"0.0.0.0","router.name":"r-11-VM"},"wait":0}}]
>>>  }
>>> management-server.log:2016-01-26 06:28:29,565 DEBUG [c.c.a.t.Request] 
>>> (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 
>>> 1-1380071810812347953: Executing: { Cmd , MgmtId: 37591044139110, via: 
>>> 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, 
>>> [{"com.cloud.agent.api.routing.DhcpEntryCommand":{"vmMac":"06:80:4c:00:00:3b","vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","defaultRouter":"1xx.xx.xx.238","defaultDns":"1xx.xx.xx.230","duid":"00:03:00:01:06:80:4c:00:00:3b","isDefault":true,"executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.guest.ip":"1xx.xx.xx.230","router.ip":"0.0.0.0","router.name":"r-11-VM"},"wait":0}}]
>>>  }
>>> management-server.log:2016-01-26 06:28:29,565 DEBUG 
>>> [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 
>>> ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) In Basic 
>>> zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
>>> management-server.log:2016-01-26 06:28:29,566 DEBUG 
>>> [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 
>>> ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) Run 
>>> command on VR: 1xx.xx.xx.230, script: edithosts.sh with args: -m 
>>> 06:80:4c:00:00:3b -4 1xx.xx.xx.235 -h TST11 -d 1xx.xx.xx.238 -n 
>>> 1xx.xx.xx.230
>>> management-server.log:2016-01-26 06:28:29,902 DEBUG 
>>> [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 
>>> ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) 
>>> edithosts.sh execution result: true
>>> management-server.log:2016-01-26 06:28:29,904 DEBUG [c.c.a.t.Request] 
>>> (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 
>>> 1-1380071810812347953: Received: { Ans: , MgmtId: 37591044139110, via: 1, 
>>> Ver: v1, Flags: 10, { Answer } }
>>> management-server.log:2016-01-26 06:28:29,917 DEBUG 
>>> [c.c.n.r.VirtualNetworkApplianceManagerImpl] 
>>> (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Applying 
>>> userdata and password entry in network Ntwk[204|Guest|7]
>>> management-server.log:2016-01-26 06:28:29,940 DEBUG [c.c.a.t.Request] 
>>> (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 
>>> 1-1380071810812347954: Sending { Cmd , MgmtId: 37591044139110, via: 
>>> 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, 
>>> [{"com.cloud.agent.api.routing.SavePasswordCommand":{,"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}},{"com.cloud.agent.api.routing.VmDataCommand":{"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}}]
>>>  }
>>> management-server.log:2016-01-26 06:28:29,943 DEBUG [c.c.a.t.Request] 
>>> (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 
>>> 1-1380071810812347954: Executing: { Cmd , MgmtId: 37591044139110, via: 
>>> 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, 
>>> [{"com.cloud.agent.api.routing.SavePasswordCommand":{,"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}},{"com.cloud.agent.api.routing.VmDataCommand":{"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}}]
>>>  }
>>> management-server.log:2016-01-26 06:28:29,944 DEBUG 
>>> [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 
>>> ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) In 
>>> Basic zone mode, use router's guest IP for SSH control. guest IP : 
>>> 1xx.xx.xx.230
>>> management-server.log:2016-01-26 06:28:29,944 DEBUG 
>>> [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 
>>> ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) Run 
>>> command on VR: 1xx.xx.xx.230, script: savepassword.sh with args: -v 
>>> 1xx.xx.xx.235 -p saved_password
>>> management-server.log:2016-01-26 06:28:30,220 DEBUG 
>>> [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 
>>> ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) 
>>> savepassword.sh execution result: true
>>> management-server.log:2016-01-26 06:28:30,222 DEBUG 
>>> [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 
>>> ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) In Basic zone 
>>> mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
>>> management-server.log:2016-01-26 06:28:30,223 DEBUG 
>>> [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 
>>> ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) Run command 
>>> on VR: 1xx.xx.xx.230, script: vmdata.py with args: -d 
>>> eyIxNTguNjkuMTcuMjM1IjpbWyJ1c2VyZGF0YSIsInVzZXItZGF0YSIsbnVsbF0sWyJtZXRhZGF0YSIsInNlcnZpY2Utb2ZmZXJpbmciLCJTbWFsbCBJbnN0YW5jZSJdLFsibWV0YWRhdGEiLCJhdmFpbGFiaWxpdHktem9uZSIsIlBPRDAwMSJdLFsibWV0YWRhdGEiLCJsb2NhbC1pcHY0IiwiMTU4LjY5LjE3LjIzNSJdLFsibWV0YWRhdGEiLCJsb2NhbC1ob3N0bmFtZSIsIlRTVDExIl0sWyJtZXRhZGF0YSIsInB1YmxpYy1pcHY0IiwiMTU4LjY5LjE3LjIzNSJdLFsibWV0YWRhdGEiLCJwdWJsaWMtaG9zdG5hbWUiLCJUU1QxMSJdLFsibWV0YWRhdGEiLCJpbnN0YW5jZS1pZCIsIjhkMmI5NzVlLTYyNWItNDI0Mi04YTg2LTNjMTA0MzkzYmIyMCJdLFsibWV0YWRhdGEiLCJ2bS1pZCIsIjhkMmI5NzVlLTYyNWItNDI0Mi04YTg2LTNjMTA0MzkzYmIyMCJdLFsibWV0YWRhdGEiLCJwdWJsaWMta2V5cyIsbnVsbF0sWyJtZXRhZGF0YSIsImNsb3VkLWlkZW50aWZpZXIiLCJDbG91ZFN0YWNrLXtiNWE2M2FlMi1jOWU0LTQ2YmMtODA3ZS03NjkwMWM0ZTNlOGV9Il1dfQ==
>>> management-server.log:2016-01-26 06:28:30,447 DEBUG 
>>> [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 
>>> ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) vmdata.py 
>>> execution result: true
>>> management-server.log:2016-01-26 06:28:30,449 DEBUG [c.c.a.t.Request] 
>>> (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 
>>> 1-1380071810812347954: Received: { Ans: , MgmtId: 37591044139110, via: 1, 
>>> Ver: v1, Flags: 10, { Answer, Answer } }
>>> management-server.log:2016-01-26 06:28:30,450 DEBUG 
>>> [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 
>>> ctx-67168166) Service SecurityGroup is not supported in the network id=204
>>> management-server.log:2016-01-26 06:28:30,453 DEBUG 
>>> [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) Checking if we need to prepare 1 volumes for 
>>> VM[User|i-2-12-VM]
>>> management-server.log:2016-01-26 06:28:30,465 DEBUG 
>>> [o.a.c.s.i.TemplateDataFactoryImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) template 7 is already in store:1, type:Image
>>> management-server.log:2016-01-26 06:28:30,469 DEBUG 
>>> [o.a.c.s.d.PrimaryDataStoreImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) Not found (templateId:7poolId:1) in 
>>> template_spool_ref, persisting it
>>> management-server.log:2016-01-26 06:28:30,493 DEBUG 
>>> [o.a.c.s.i.TemplateDataFactoryImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) template 7 is already in store:1, type:Primary
>>> management-server.log:2016-01-26 06:28:30,495 DEBUG 
>>> [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) Found template centos53-x64 in storage pool 1 
>>> with VMTemplateStoragePool id: 15
>>> management-server.log:2016-01-26 06:28:30,496 DEBUG 
>>> [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) Acquire lock on VMTemplateStoragePool 15 with 
>>> timeout 3600 seconds
>>> management-server.log:2016-01-26 06:28:30,498 INFO 
>>> [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) lock is acquired for VMTemplateStoragePool 15
>>> management-server.log:2016-01-26 06:28:30,517 DEBUG 
>>> [o.a.c.s.m.AncientDataMotionStrategy] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) copyAsync inspecting src type TEMPLATE 
>>> copyAsync inspecting dest type TEMPLATE
>>> management-server.log:2016-01-26 06:28:30,557 DEBUG [c.c.a.t.Request] 
>>> (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 
>>> 3-5184769071010284298: Sending { Cmd , MgmtId: 37591044139110, via: 
>>> 3(s-4-VM), Ver: v1, Flags: 100111, 
>>> [{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"template/tmpl/1/7/34da6edf-8871-3509-9fad-9001d7f9575a.ova","origUrl":"http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova","uuid":"f164ac64-c393-11e5-8b5b-0cc47a7d402a","id":7,"format":"OVA","accountId":1,"checksum":"f6f881b7f2292948d8494db837fe0f47","hvm":false,"displayText":"CentOS 5.3(64-bit) no GUI (vSphere)","imageDataStore":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://172.11.0.2/secondary","_role":"Image"}},"name":"centos53-x64","hypervisorType":"VMware"}},"destTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"origUrl":"http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova","uuid":"f164ac64-c393-11e5-8b5b-0cc47a7d402a","id":7,"format":"OVA","accountId":1,"checksum":"f6f881b7f2292948d8494db837fe0f47","hvm":false,"displayText":"CentOS 5.3(64-bit) no GUI (vSphere)","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"39f5d196-1569-41a7-996c-06abb1325d23","id":1,"poolType":"VMFS","host":"VMFS datastore: datastore-159","path":"datastore-159","port":0,"url":"VMFS://VMFS datastore: datastore-159/datastore-159/?ROLE=Primary&STOREUUID=39f5d196-1569-41a7-996c-06abb1325d23"}},"name":"centos53-x64","hypervisorType":"VMware"}},"executeInSequence":true,"options":{},"wait":10800}}]
>>>  }
>>> management-server.log:2016-01-26 06:29:33,767 DEBUG [c.c.a.t.Request] 
>>> (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 
>>> 3-5184769071010284298: Received: { Ans: , MgmtId: 37591044139110, via: 3, 
>>> Ver: v1, Flags: 110, { CopyCmdAnswer } }
>>> management-server.log:2016-01-26 06:29:33,819 INFO 
>>> [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) releasing lock for VMTemplateStoragePool 15
>>> management-server.log:2016-01-26 06:29:33,821 DEBUG 
>>> [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) Unable to create Vol[15|vm=12|ROOT]:Failed to 
>>> create a Vmware context, check the management server logs or the ssvm log 
>>> for details
>>> management-server.log:2016-01-26 06:29:33,821 INFO 
>>> [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) Unable to contact resource.
>>> management-server.log:2016-01-26 06:29:33,836 DEBUG 
>>> [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) Cleaning up resources for the vm 
>>> VM[User|i-2-12-VM] in Starting state
>>> management-server.log:2016-01-26 06:29:33,841 DEBUG [c.c.a.t.Request] 
>>> (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 
>>> 1-1380071810812347957: Sending { Cmd , MgmtId: 37591044139110, via: 
>>> 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100111, 
>>> [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}}]
>>>  }
>>> management-server.log:2016-01-26 06:29:33,841 DEBUG [c.c.a.t.Request] 
>>> (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 
>>> 1-1380071810812347957: Executing: { Cmd , MgmtId: 37591044139110, via: 
>>> 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100111, 
>>> [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}}]
>>>  }
>>> management-server.log:2016-01-26 06:29:33,842 INFO 
>>> [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 
>>> ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) Executing 
>>> resource StopCommand: 
>>> {"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}
>>> management-server.log:2016-01-26 06:29:33,867 DEBUG [c.c.h.v.m.HostMO] 
>>> (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: 
>>> StopCommand) find VM i-2-12-VM on host
>>> management-server.log:2016-01-26 06:29:33,867 INFO [c.c.h.v.m.HostMO] 
>>> (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: 
>>> StopCommand) VM i-2-12-VM not found in host cache
>>> management-server.log:2016-01-26 06:29:33,867 DEBUG [c.c.h.v.m.HostMO] 
>>> (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: 
>>> StopCommand) load VM cache on host
>>> management-server.log:2016-01-26 06:29:33,896 INFO 
>>> [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 
>>> ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM is 
>>> no longer in vSphere
>>> management-server.log:2016-01-26 06:29:33,897 DEBUG [c.c.a.t.Request] 
>>> (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 
>>> 1-1380071810812347957: Received: { Ans: , MgmtId: 37591044139110, via: 1, 
>>> Ver: v1, Flags: 110, { StopAnswer } }
>>> management-server.log:2016-01-26 06:29:33,908 DEBUG 
>>> [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 
>>> ctx-67168166) Service SecurityGroup is not supported in the network id=204
>>> management-server.log:2016-01-26 06:29:33,911 DEBUG 
>>> [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) Changing active number of nics for network 
>>> id=204 on -1
>>> management-server.log:2016-01-26 06:29:33,945 DEBUG 
>>> [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) Asking VirtualRouter to release 
>>> NicProfile[26-12-ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777-1xx.xx.xx.235-vlan://untagged
>>> management-server.log:2016-01-26 06:29:33,946 DEBUG 
>>> [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) Successfully released network resources for the 
>>> vm VM[User|i-2-12-VM]
>>> management-server.log:2016-01-26 06:29:33,946 DEBUG 
>>> [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) Successfully cleanued up resources for the vm 
>>> VM[User|i-2-12-VM] in Starting state
>>> management-server.log:2016-01-26 06:29:33,956 DEBUG 
>>> [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) Deploy avoids pods: [], clusters: [], hosts: [1]
>>> management-server.log:2016-01-26 06:29:33,957 DEBUG 
>>> [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) DeploymentPlanner allocation algorithm: 
>>> com.cloud.deploy.FirstFitPlanner@46c88784
>>> management-server.log:2016-01-26 06:29:33,957 DEBUG 
>>> [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) Trying to allocate a host and storage pools 
>>> from dc:1, pod:1,cluster:1, requested cpu: 500, requested ram: 536870912
>>> management-server.log:2016-01-26 06:29:33,957 DEBUG 
>>> [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) Is ROOT volume READY (pool already allocated)?: 
>>> No
>>> management-server.log:2016-01-26 06:29:33,957 DEBUG 
>>> [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) DeploymentPlan has host_id specified, choosing 
>>> this host and making no checks on this host: 1
>>> management-server.log:2016-01-26 06:29:33,958 DEBUG 
>>> [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) The specified host is in avoid set
>>> management-server.log:2016-01-26 06:29:33,959 DEBUG 
>>> [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) Cannnot deploy to specified host, returning.
>>> management-server.log:2016-01-26 06:29:34,012 DEBUG 
>>> [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) VM state transitted from :Starting to Stopped 
>>> with event: OperationFailedvm's original host id: null new host id: null 
>>> host id before state transition: 1
>>> management-server.log:2016-01-26 06:29:34,020 DEBUG 
>>> [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) Hosts's actual total CPU: 29592 and CPU after 
>>> applying overprovisioning: 29592
>>> management-server.log:2016-01-26 06:29:34,020 DEBUG 
>>> [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) Hosts's actual total RAM: 34247335936 and RAM 
>>> after applying overprovisioning: 34247335936
>>> management-server.log:2016-01-26 06:29:34,020 DEBUG 
>>> [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) release cpu from host: 1, old used: 
>>> 3500,reserved: 0, actual total: 29592, total with overprovisioning: 29592; 
>>> new used: 3000,reserved:0; movedfromreserved: false,moveToReserveredfalse
>>> management-server.log:2016-01-26 06:29:34,020 DEBUG 
>>> [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) release mem from host: 1, old used: 
>>> 4294967296,reserved: 0, total: 34247335936; new used: 
>>> 3758096384,reserved:0; movedfromreserved: false,moveToReserveredfalse
>>> management-server.log:2016-01-26 06:29:34,079 ERROR 
>>> [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) Invocation exception, caused by: 
>>> com.cloud.exception.InsufficientServerCapacityException: Unable to create a 
>>> deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; 
>>> id=1
>>> management-server.log:2016-01-26 06:29:34,079 INFO 
>>> [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79 ctx-67168166) Rethrow exception 
>>> com.cloud.exception.InsufficientServerCapacityException: Unable to create a 
>>> deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; 
>>> id=1
>>> management-server.log:2016-01-26 06:29:34,080 DEBUG 
>>> [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79) Done with run of VM work job: com.cloud.vm.VmWorkStart for 
>>> VM 12, job origin: 78
>>> management-server.log:2016-01-26 06:29:34,080 ERROR 
>>> [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79) Unable to complete AsyncJobVO {id:79, userId: 2, accountId: 
>>> 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, 
>>> cmdInfo: 
>>> rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKcHNyABFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}, job origin:78
>>> management-server.log:2016-01-26 06:29:34,083 DEBUG 
>>> [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79) Complete async job-79, jobStatus: FAILED, resultCode: 0, 
>>> result: 
>>> rO0ABXNyADdjb20uY2xvdWQuZXhjZXB0aW9uLkluc3VmZmljaWVudFNlcnZlckNhcGFjaXR5RXhjZXB0aW9uAAAAAFZNT3ACAAFaABVhZmZpbml0eUdyb3Vwc0FwcGxpZWR4cgAxY29tLmNsb3VkLmV4Y2VwdGlvbi5JbnN1ZmZpY2llbnRDYXBhY2l0eUV4Y2VwdGlvbgAAAABWTU91AgACTAACaWR0ABBMamF2YS9sYW5nL0xvbmc7TAAFc2NvcGV0ABFMamF2YS9sYW5nL0NsYXNzO3hyACJjb20uY2xvdWQuZXhjZXB0aW9uLkNsb3VkRXhjZXB0aW9ueeiOG_HkRMcCAAJMAAtjc0Vycm9yQ29kZXQAE0xqYXZhL2xhbmcvSW50ZWdlcjtMAAZpZExpc3R0ABVMamF2YS91dGlsL0FycmF5TGlzdDt4cgATamF2YS5sYW5nLkV4Y2VwdGlvbtD9Hz4aOxzEAgAAeHIAE2phdmEubGFuZy5UaHJvd2FibGXVxjUnOXe4ywMABEwABWNhdXNldAAVTGphdmEvbGFuZy9UaHJvd2FibGU7TAANZGV0YWlsTWVzc2FnZXQAEkxqYXZhL2xhbmcvU3RyaW5nO1sACnN0YWNrVHJhY2V0AB5bTGphdmEvbGFuZy9TdGFja1RyYWNlRWxlbWVudDtMABRzdXBwcmVzc2VkRXhjZXB0aW9uc3QAEExqYXZhL3V0aWwvTGlzdDt4cHEAfgANdAA0VW5hYmxlIHRvIGNyZWF0ZSBhIGRlcGxveW1lbnQgZm9yIFZNW1VzZXJ8aS0yLTEyLVZNXXVyAB5bTGphdmEubGFuZy5TdGFja1RyYWNlRWxlbWVudDsCRio8PP0iOQIAAHhwAAAAFXNyABtqYXZhLmxhbmcuU3RhY2tUcmFjZUVsZW1lbnRhCcWaJjbdhQIABEkACmxpbmVOdW1iZXJMAA5kZWNsYXJpbmdDbGFzc3EAfgAKTAAIZmlsZU5hbWVxAH4ACkwACm1ldGhvZE5hbWVxAH4ACnhwAAADs3QAJmNvbS5jbG91ZC52bS5WaXJ0dWFsTWFjaGluZU1hbmFnZXJJbXBsdAAeVmlydHVhbE1hY2hpbmVNYW5hZ2VySW1wbC5qYXZhdAAQb3JjaGVzdHJhdGVTdGFydHNxAH4AEQAAEYFxAH4AE3EAfgAUcQB-ABVzcQB-ABH____-dAAkc3VuLnJlZmxlY3QuTmF0aXZlTWV0aG9kQWNjZXNzb3JJbXBsdAAdTmF0aXZlTWV0aG9kQWNjZXNzb3JJbXBsLmphdmF0AAdpbnZva2Uwc3EAfgARAAAAOXEAfgAYcQB-ABl0AAZpbnZva2VzcQB-ABEAAAArdAAoc3VuLnJlZmxlY3QuRGVsZWdhdGluZ01ldGhvZEFjY2Vzc29ySW1wbHQAIURlbGVnYXRpbmdNZXRob2RBY2Nlc3NvckltcGwuamF2YXEAfgAcc3EAfgARAAACXnQAGGphdmEubGFuZy5yZWZsZWN0Lk1ldGhvZHQAC01ldGhvZC5qYXZhcQB-ABxzcQB-ABEAAABrdAAiY29tLmNsb3VkLnZtLlZtV29ya0pvYkhhbmRsZXJQcm94eXQAGlZtV29ya0pvYkhhbmRsZXJQcm94eS5qYXZhdAAPaGFuZGxlVm1Xb3JrSm9ic3EAfgARAAASHXEAfgATcQB-ABRxAH4AJnNxAH4AEQAAAGd0ACBjb20uY2xvdWQudm0uVm1Xb3JrSm9iRGlzcGF0Y2hlcnQAGFZtV29ya0pvYkRpc3BhdGNoZXIuamF2YXQABnJ1bkpvYnNxAH4AEQAAAhl0AD9vcmcuYXBhY2hlLmNsb3Vkc3RhY2suZnJhbWV3b3JrLmpvYnMuaW1wbC5Bc3luY0pvYk1hbmFnZXJJbXBsJDV0ABhBc3luY0pvYk1hbmFnZXJJbXBsLmphdmF0AAxydW5JbkNvbnRleHRzcQB-ABEAAAAxdAA-b3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5NYW5hZ2VkQ29udGV4dFJ1bm5hYmxlJDF0ABtNYW5hZ2VkQ29udGV4dFJ1bm5hYmxlLmphdmF0AANydW5zcQB-ABEAAAA4dABCb3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5pbXBsLkRlZmF1bHRNYW5hZ2VkQ29udGV4dCQxdAAaRGVmYXVsdE1hbmFnZWRDb250ZXh0LmphdmF0AARjYWxsc3EAfgARAAAAZ3QAQG9yZy5hcGFjaGUuY2xvdWRzdGFjay5tYW5hZ2VkLmNvbnRleHQuaW1wbC5EZWZhdWx0TWFuYWdlZENvbnRleHRxAH4ANnQAD2NhbGxXaXRoQ29udGV4dHNxAH4AEQAAADVxAH4AOXEAfgA2dAAOcnVuV2l0aENvbnRleHRzcQB-ABEAAAAudAA8b3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5NYW5hZ2VkQ29udGV4dFJ1bm5hYmxlcQB-ADJxAH4AM3NxAH4AEQAAAe5xAH4ALXEAfgAucQB-ADNzcQB-ABEAAAHXdAAuamF2YS51dGlsLmNvbmN1cnJlbnQuRXhlY3V0b3JzJFJ1bm5hYmxlQWRhcHRlcnQADkV4ZWN1dG9ycy5qYXZhcQB-ADdzcQB-ABEAAAEGdAAfamF2YS51dGlsLmNvbmN1cnJlbnQuRnV0dXJlVGFza3QAD0Z1dHVyZVRhc2suamF2YXEAfgAzc3EAfgARAAAEeXQAJ2phdmEudXRpbC5jb25jdXJyZW50LlRocmVhZFBvb2xFeGVjdXRvcnQAF1RocmVhZFBvb2xFeGVjdXRvci5qYXZhdAAJcnVuV29ya2Vyc3EAfgARAAACZ3QALmphdmEudXRpbC5jb25jdXJyZW50LlRocmVhZFBvb2xFeGVjdXRvciRXb3JrZXJxAH4ASHEAfgAzc3EAfgARAAAC6XQAEGphdmEubGFuZy5UaHJlYWR0AAtUaHJlYWQuamF2YXEAfgAzc3IAJmphdmEudXRpbC5Db2xsZWN0aW9ucyRVbm1vZGlmaWFibGVMaXN0_A8lMbXsjhACAAFMAARsaXN0cQB-AAx4cgAsamF2YS51dGlsLkNvbGxlY3Rpb25zJFVubW9kaWZpYWJsZUNvbGxlY3Rpb24ZQgCAy173HgIAAUwAAWN0ABZMamF2YS91dGlsL0NvbGxlY3Rpb247eHBzcgATamF2YS51dGlsLkFycmF5TGlzdHiB0h2Zx2GdAwABSQAEc2l6ZXhwAAAAAHcEAAAAAHhxAH4AVHhzcgARamF2YS5sYW5nLkludGVnZXIS4qCk94GHOAIAAUkABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAQ73NxAH4AUwAAAAB3BAAAAAB4c3IADmphdmEubGFuZy5Mb25nO4vkkMyPI98CAAFKAAV2YWx1ZXhxAH4AVgAAAAAAAAABdnIAF2NvbS5jbG91ZC5kYy5EYXRhQ2VudGVysvEK66fneL4CAAB4cAA
>>> management-server.log:2016-01-26 06:29:34,084 DEBUG 
>>> [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79) Publish async job-79 complete on message bus
>>> management-server.log:2016-01-26 06:29:34,084 DEBUG 
>>> [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79) Wake up jobs related to job-79
>>> management-server.log:2016-01-26 06:29:34,085 DEBUG 
>>> [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79) Update db status for job-79
>>> management-server.log:2016-01-26 06:29:34,087 DEBUG 
>>> [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79) Wake up jobs joined with job-79 and disjoin all subjobs 
>>> created from job- 79
>>> management-server.log:2016-01-26 06:29:34,111 DEBUG 
>>> [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79) Done executing com.cloud.vm.VmWorkStart for job-79
>>> management-server.log:2016-01-26 06:29:34,113 INFO 
>>> [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f 
>>> job-78/job-79) Remove job-79 from job monitoring
>>> catalina.out:INFO [o.a.c.f.j.i.AsyncJobMonitor] 
>>> (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Add job-79 into job 
>>> monitoring
>>> catalina.out:INFO [c.c.s.StorageManagerImpl] 
>>> (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Storage pool 
>>> null (1) does not supply IOPS capacity, assuming enough capacity
>>> catalina.out:INFO [o.a.c.s.v.VolumeServiceImpl] 
>>> (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) lock is 
>>> acquired for VMTemplateStoragePool 15
>>> catalina.out:INFO [o.a.c.s.v.VolumeServiceImpl] 
>>> (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) releasing 
>>> lock for VMTemplateStoragePool 15
>>> catalina.out:INFO [c.c.v.VirtualMachineManagerImpl] 
>>> (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to 
>>> contact resource.
>>> catalina.out:INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 
>>> ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) Executing 
>>> resource StopCommand: 
>>> {"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}
>>> catalina.out:INFO [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 
>>> ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM 
>>> not found in host cache
>>> catalina.out:INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 
>>> ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM is 
>>> no longer in vSphere
>>> catalina.out:ERROR [c.c.v.VmWorkJobHandlerProxy] 
>>> (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Invocation 
>>> exception, caused by: 
>>> com.cloud.exception.InsufficientServerCapacityException: Unable to create a 
>>> deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; 
>>> id=1
>>> catalina.out:INFO [c.c.v.VmWorkJobHandlerProxy] 
>>> (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Rethrow 
>>> exception com.cloud.exception.InsufficientServerCapacityException: Unable 
>>> to create a deployment for VM[User|i-2-12-VM]Scope=interface 
>>> com.cloud.dc.DataCenter; id=1
>>> catalina.out:ERROR [c.c.v.VmWorkJobDispatcher] 
>>> (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Unable to complete 
>>> AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: 
>>> null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: 
>>> rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKcHNyABFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}, job origin:78
>>> catalina.out:INFO [o.a.c.f.j.i.AsyncJobMonitor] 
>>> (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Remove job-79 from job 
>>> monitoring
>>>
>>> ============================================================================================
>>>
>>> I just want to mention that I also tested CS 4.6 and 4.7, but there I hit 
>>> a different issue with the Virtual Router: it never starts (I just see 
>>> "Starting" as the VR status).
>>>
>>> Thank you
>>>
>>> Regards,
>>> Cristian
>>> On 26.01.2016 23:08:00, ilya wrote:
>>> Cristian
>>>
>>> Not entirely clear on the strange part.
>>>
>>> If you want to use local storage, you must have all components reference
>>> local storage.
>>>
>>> That includes SystemVMs - there is a zone-level setting that enables
>>> local storage, and the service and disk offerings must be set to local.
>>>
>>> Regards
>>> ilya
>>>
>>> On 1/25/16 11:51 PM, Cristian Ciobanu wrote:
>>>> Hi Timothy
>>>>
>>>> This is very strange. I configured for local storage; I also removed all 
>>>> disk offerings and created the same ones with local storage.
>>>>
>>>>
>>>>
>>>> Thank you!
>>>>
>>>> Regards,
>>>> Cristian
>>>> On 26.01.2016 09:11:35, Timothy Lothering wrote:
>>>> Hi Cristian,
>>>>
>>>> As per Ilya's mail, check that the DATA disk offering is set to use 
>>>> "Storage Type - Local"; the logs provided indicate that no shared storage 
>>>> can be found:
>>>>
>>>> “No storage pools available for shared volume allocation, returning”
>>>>
>>>> Change this under Service Offering -> Disk Offering.
>>>>
>>>> Regards,
>>>>
>>>> Timothy Lothering
>>>> Solutions Architect
>>>> Managed Services
>>>> T: +27877415535
>>>> F: +27877415100
>>>> C: +27824904099
>>>> E: tlother...@datacentrix.co.za
>>>> www.datacentrix.co.za
>>>>
>>>>
>>>> Serious about performance, passionate about value
>>>>
>>>>
>>>> From: Glenn Wagner [mailto:glenn.wag...@shapeblue.com]
>>>> Sent: Tuesday, 26 January 2016 7:27 AM
>>>> To: users@cloudstack.apache.org
>>>> Subject: RE: CloudStack and VMware VM deploy issue
>>>>
>>>> Hi Cristian,
>>>>
>>>> Like ilya said, try creating a new VM without a data drive when using the 
>>>> wizard, and check that your disk offerings are all set to local storage.
>>>>
>>>> If you are just testing, you can also change your storage overprovisioning 
>>>> factor in the Global Settings from 2 to, say, 4 or 6.
>>>>
>>>> Error
>>>> No suitable pools found for volume: Vol[8|vm=7|DATADISK] under cluster: 1
>>>>
>>>> Thanks
>>>> Glenn
>>>>
>>>>
>>>>
>>>> Glenn Wagner
>>>> Senior Consultant, ShapeBlue
>>>> s: +27 21 527 0091 | m: +27 73 917 4111
>>>> e: glenn.wag...@shapeblue.com
>>>> w: www.shapeblue.com
>>>> a: 2nd Floor, Oudehuis Centre, 122 Main Rd, Somerset West, Cape Town 7130, 
>>>> South Africa
>>>>
>>>> -----Original Message-----
>>>> From: ilya [mailto:ilya.mailing.li...@gmail.com]
>>>> Sent: Tuesday, 26 January 2016 1:29 AM
>>>> To: users@cloudstack.apache.org
>>>> Subject: Re: CloudStack and VMware VM deploy issue
>>>>
>>>> Cristian
>>>>
>>>> See response in-line
>>>> On 1/25/16 11:51 AM, Cristian Ciobanu wrote:
>>>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>>>> job-33 ctx-480f2e68) No suitable pools found for volume:
>>>>> Vol[8|vm=7|DATADISK] under cluster: 1
>>>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>>>> job-33 ctx-480f2e68) No suitable pools found
>>>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>>>> job-33 ctx-480f2e68) No suitable storagePools found under this
>>>>> Cluster: 1
>>>>
>>>> Can you deploy without a data disk? That is, when going through the 
>>>> wizard, don't select any additional disks.
>>>>
>>>> If it works, make sure the disk offering you create is also type "local"
>>>> and not shared or clustered.
>>>>
>>>> PS: I would also change the global setting vm.instancename.flag to true. 
>>>> This makes VMware use the names given in CloudStack, instead of the 
>>>> pre-generated "i-xxx-xxxx" names. But this has nothing to do with your 
>>>> issue above.
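>>>>
>>>> For example, a sketch using cloudmonkey, assuming it is installed and 
>>>> pointed at your management server (the same change can be made in the UI 
>>>> under Global Settings; it needs a management server restart to take 
>>>> effect):
>>>>
>>>> update configuration name=vm.instancename.flag value=true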
>>>>
>>>> Regards
>>>> ilya
>>>>
>>>>
>>>
>>
>
