RE: CloudStack Collab in Miami

2017-01-25 Thread Giles Sirett
This is great to see - thanks for all your hard work on this, Will.

I hope as many people as possible can make it to Miami

Kind Regards
Giles


giles.sir...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London WC2N 4HS, UK
@shapeblue
  
 


-----Original Message-----
From: Will Stevens [mailto:sw...@apache.org] 
Sent: 24 January 2017 21:28
To: users@cloudstack.apache.org; d...@cloudstack.apache.org; 
market...@cloudstack.apache.org
Subject: CloudStack Collab in Miami

Hello Everyone,
It is that time of year again.  We are in the thick of planning the 
next CloudStack Collaboration Conference (CCC).

We are happy to announce that the first CloudStack Collab of 2017 will be 
taking place on May 16-18 in Miami.  CloudStack has partnered with ApacheCon to 
bring you a great event.

More information about the conference is available here:
http://us.cloudstackcollab.org/

Here are some of the important details:

- The *speaker submission deadline is Feb 11th*, so get your talks in early.  
Details here: http://us.cloudstackcollab.org/#get-involved

- Registration will be taken care of by ApacheCon.  Details here:
http://us.cloudstackcollab.org/#attend

- Travel information and details about the venue can be found here:
http://us.cloudstackcollab.org/#location

- Consider sponsoring the event to make it even better. More info here:
http://us.cloudstackcollab.org/#sponsors

If you have any questions, please respond to this email and I will make sure 
your questions are answered.

Looking forward to seeing you all in Miami...

Cheers,

Will


Re: Host allocation

2017-01-25 Thread Alessandro Caviglione
Yes, but here I configure the cluster, not the host...
There should be a "memory.allocated.capacity.disablethreshold" setting for
each individual host, because
cluster.memory.allocated.capacity.disablethreshold
will disable allocation for the whole cluster at 90% of its total capacity,
but I could have some hosts completely filled (100%) and one at half (50%)...

Am I right?
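To put numbers on the concern (a hypothetical sketch with invented figures, not CloudStack code), a cluster can sit below a 0.90 threshold while individual hosts are already full:

```python
# Sketch of cluster-level vs. per-host allocation ratios (invented numbers).
THRESHOLD = 0.90

# (allocated_gb, total_gb) of RAM per host -- figures are made up.
hosts = {"host1": (148, 148), "host2": (148, 148), "host3": (74, 148)}

cluster_allocated = sum(a for a, _ in hosts.values())  # 370 GB
cluster_total = sum(t for _, t in hosts.values())      # 444 GB

cluster_ratio = cluster_allocated / cluster_total      # ~0.83
cluster_blocked = cluster_ratio > THRESHOLD            # cluster still accepts VMs

# ...yet two of the three hosts are already 100% allocated:
full_hosts = [h for h, (a, t) in hosts.items() if a / t >= 1.0]
```

So a cluster-wide threshold alone cannot prevent an individual host from being filled to 100%.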

On Wed, Jan 18, 2017 at 12:31 PM, Dag Sonstebo 
wrote:

> Hi Alessandro,
>
> To achieve this you would probably have to:
>
> - Set  global setting “host.capacityType.to.order.clusters” to RAM
> (default is CPU)
> - Set *cluster setting* “cluster.memory.allocated.capacity.disablethreshold”
> to 0.90.
> - You may also want to review cluster settings 
> cluster.cpu.allocated.capacity.disablethreshold,
> cluster.cpu.allocated.capacity.notificationthreshold and
> cluster.memory.allocated.capacity.notificationthreshold.
>
> Hope this helps.
>
> Regards,
> Dag Sonstebo
> Cloud Architect
> ShapeBlue
>
> On 18/01/2017, 11:24, "Alessandro Caviglione" 
> wrote:
>
> Hi guys,
> just a question about host allocation.
> My infrastructure is based on CS 4.5 and XS 6.5; host allocation is
> "random", but I see that CS allocates hosts until their RAM is
> completely full.
> I have some hosts with 148 GB RAM usable and 148 GB used, completely
> filled.
> This obviously means that XS will swap to disk and instance performance
> will be reduced...
> How can I tell CS to allocate each host only up to 90% of its RAM?
>
> Thank you!
>
>
>
> dag.sonst...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London WC2N 4HS, UK
> @shapeblue
>
>
>
>


cs 4.5.1, vm stopped when putting xen hosts in maintenance

2017-01-25 Thread Francois Scheurer

Dear CS contributors


We use CS 4.5.1 with 3 clusters on XenServer 6.5 and shared primary 
storage.


When we put a host in maintenance, CS evacuates all VMs to other hosts.

But it happens regularly that some VMs are stopped instead of live-migrated.


If we restart those stopped VMs, CS will then stop them again later.

We need to start them several times until they stay up.

Is this a known issue? Is it fixed in CS 4.9.2?


Migrating all VMs manually works, but it is impractical because it is too 
time-consuming.



Many thanks in advance for your help.



Best Regards
Francois






--


EveryWare AG
François Scheurer
Senior Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich

tel: +41 44 466 60 00
fax: +41 44 466 60 10
mail: francois.scheu...@everyware.ch
web: http://www.everyware.ch




Re: cs 4.5.1, vm stopped when putting xen hosts in maintenance

2017-01-25 Thread Rafael Weingärtner
Did you check the ACS log files for exceptions around the time the
migration of this specific VM started?

On Wed, Jan 25, 2017 at 9:49 AM, Francois Scheurer <
francois.scheu...@everyware.ch> wrote:

> Dear CS contributors
>
>
> We use CS 4.5.1 with 3 clusters on XenServer 6.5 and shared primary
> storage.
>
> When we put a host in maintenance, CS evacuates all VMs to other hosts.
>
> But it happens regularly that some VMs are stopped instead of live-migrated.
>
> If we restart those stopped VMs, CS will then stop them again later.
>
> We need to start them several times until they stay up.
>
> Is this a known issue? Is it fixed in CS 4.9.2?
>
>
> Migrating all VMs manually works, but it is impractical because it is too
> time-consuming.
>
>
> Many thanks in advance for your help.
>
>
>
> Best Regards
> Francois
>
>
>
>
>
>
> --
>
>
> EveryWare AG
> François Scheurer
> Senior Systems Engineer
> Zurlindenstrasse 52a
> CH-8003 Zürich
>
> tel: +41 44 466 60 00
> fax: +41 44 466 60 10
> mail: francois.scheu...@everyware.ch
> web: http://www.everyware.ch
>



-- 
Rafael Weingärtner


Re: cs 4.5.1, vm stopped when putting xen hosts in maintenance

2017-01-25 Thread Francois Scheurer

Hello Rafael


I think the important log lines are these:

ewcstack-man02-prod: 2017-01-24 18:05:28,407 INFO 
[c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-156:ctx-14b86f5e 
job-60216/job-191380 ctx-175d37df) Migration cancelled because state has 
changed: VM[User|i-638-1736-VM]
ewcstack-man02-prod: 2017-01-24 18:05:28,407 DEBUG 
[c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-156:ctx-14b86f5e 
job-60216/job-191380 ctx-175d37df) Unable to migrate VM due to: 
Migration cancelled because state has changed: VM[User|i-638-1736-VM]
ewcstack-man02-prod: 2017-01-24 18:05:28,427 DEBUG 
[c.c.c.CapacityManagerImpl] (Work-Job-Executor-156:ctx-14b86f5e 
job-60216/job-191380 ctx-175d37df) VM state transitted from :Running to 
Stopping with event: StopRequestedvm's original host id: 224 new host 
id: 15 host id before state transition: 15


The really disturbing thing is that CS stops the VM again if we start it 
manually, apparently because the migration job is still running.


Note that the VM's HA flag is enabled.



Thank you.


Best Regards.

Francois






more details:



ewcstack-man02-prod: 2017-01-24 18:05:28,148 DEBUG 
[o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-156:ctx-14b86f5e 
job-60216/job-191380 ctx-175d37df) Preparing 3 volumes for 
VM[User|i-638-1736-VM]
ewcstack-man02-prod: 2017-01-24 18:05:28,155 DEBUG 
[c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-122:ctx-a8bacc64 
job-14953/job-191351) Done with run of VM work job: 
com.cloud.vm.VmWorkMigrateAway for VM 1411, job origin: 14953
ewcstack-man02-prod: 2017-01-24 18:05:28,155 DEBUG 
[o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-122:ctx-a8bacc64 
job-14953/job-191351) Done executing com.cloud.vm.VmWorkMigrateAway for 
job-191351
ewcstack-man02-prod: 2017-01-24 18:05:28,160 INFO 
[o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-122:ctx-a8bacc64 
job-14953/job-191351) Remove job-191351 from job monitoring
ewcstack-man02-prod: 2017-01-24 18:05:28,168 INFO 
[c.c.h.HighAvailabilityManagerImpl] (HA-Worker-2:ctx-f3bcf43c work-1989) 
Completed HAWork[1989-Migration-1411-Running-Migrating]
ewcstack-man02-prod: 2017-01-24 18:05:28,246 DEBUG 
[c.c.a.m.ClusteredAgentAttache] (Work-Job-Executor-156:ctx-14b86f5e 
job-60216/job-191380 ctx-175d37df) Seq 19-8526158519542516375: 
Forwarding Seq 19-8526158519542516375:  { Cmd , MgmtId: 345049103441, 
via: 19(ewcstack-vh010-prod), Ver: v1, Flags: 100111, [{
"com.cloud.agent.api.PrepareForMigrationCommand":{"vm":{"id":1736,"name":"i-638-1736-VM","bootloader":"PyGrub","type":"User","cpus":4,"minSpeed":2100,"maxSpeed":2100,"minRam":17179869184,"maxRam":17179869184,"arch":"x86_64","os":"Other 
(64-bit)","platformEmulator":"Other install media","bootArgs":"","enable

HA":true,"limitCpuUse":true,"enableDynamicallyScaleVm":true,"vncPassword":"pC1WUC7h1DBH9J36q3H9pg==","params":{"memoryOvercommitRatio":"1.0","platform":"viridian:true;acpi:1;apic:true;pae:true;nx:true;timeoffset:-3601","keyboard":"us","Message.ReservedCapacityFreed.Flag":"false","cpuOvercommitRatio":"4.0","
hypervisortoolsversion":"xenserver61"},"uuid":"5bff01de-c033-4925-9e47-30dd14539272","disks":[{"data":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"1ab514b2-f2de-4d85-8e0d-9768184a1349","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"b8751b89-00f
6-5079-ed7a-c073579c7563","id":64,"poolType":"PreSetup","host":"localhost","path":"/b8751b89-00f6-5079-ed7a-c073579c7563","port":0,"url":"PreSetup://localhost/b8751b89-00f6-5079-ed7a-c073579c7563/?ROLE=Primary&STOREUUID=b8751b89-00f6-5079-ed7a-c073579c7563"}},"name":"ROOT-1736","size":53687091200,"path":"d3
f87e3c-1efe-42ad-aa93-609d5b980a34","volumeId":8655,"vmName":"i-638-1736-VM","accountId":638,"format":"VHD","provisioningType":"THIN","id":8655,"deviceId":0,"hypervisorType":"XenServer"}},"diskSeq":0,"path":"d3f87e3c-1efe-42ad-aa93-609d5b980a34","type":"ROOT","_details":{"managed":"false","storagePort":"0",
"storageHost":"localhost","volumeSize":"53687091200"}},{"data":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"733df9dc-5e24-4ab9-b22b-bce78322428e","volumeType":"DATADISK","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"b8751b89-00f6-5079-ed7a-c073579c7563","id":64,"
poolType":"PreSetup","host":"localhost","path":"/b8751b89-00f6-5079-ed7a-c073579c7563","port":0,"url":"PreSetup://localhost/b8751b89-00f6-5079-ed7a-c073579c7563/?ROLE=Primary&STOREUUID=b8751b89-00f6-5079-ed7a-c073579c7563"}},"name":"WELLA-DB1-NCSA_D","size":268435456000,"path":"265f168a-bb92-4591-a32b-ee3e2
2c8cb68","volumeId":8656,"vmName":"i-638-1736-VM","accountId":638,"format":"VHD","provisioningType":"THIN","id":8656,"deviceId":1,"hypervisorType":"XenServer"}},"diskSeq":1,"path":"265f168a-bb92-4591-a32b-ee3e22c8cb68","type":"DATADISK","_details":{"managed":"false","storagePort":"0","storageHost":"localhos
t","volumeSize":"268435456000"}},{"data":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"bda6f335-b8a3-479c-8

Re: cs 4.5.1, vm stopped when putting xen hosts in maintenance

2017-01-25 Thread Rafael Weingärtner
I do not have access to ACS code right now, but I suggest starting
debugging here:

> 2017-01-24 18:05:28,427 DEBUG [c.c.c.CapacityManagerImpl]
> (Work-Job-Executor-156:ctx-14b86f5e job-60216/job-191380 ctx-175d37df) VM
> state transitted from :Running to Stopping with event: StopRequestedvm's
> original host id: 224 new host id: 15 host id before state transition: 15
>

I would try to understand first why ACS requested the hypervisor to stop
the VM. I mean the following: check the source code for conditions that
would make ACS request the shutdown of VMs.
BTW: are you migrating within a cluster or across clusters?

I will be able to analyze this further only after work hours, so if you
find anything, keep me posted.
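As a quick triage aid, the fields of that log line can be pulled apart (a sketch; the regex is invented for illustration and is not part of ACS):

```python
import re

# Key log line from this thread (abridged to the message part).
line = ("VM state transitted from :Running to Stopping with event: "
        "StopRequestedvm's original host id: 224 new host id: 15 "
        "host id before state transition: 15")

# Invented pattern: the log prints the event name and "vm's" fused
# together ("StopRequestedvm's"), so the event name is matched lazily.
m = re.search(
    r"from :(\w+) to (\w+) with event: (\w+?)vm's original host id: (\d+) "
    r"new host id: (\d+) host id before state transition: (\d+)",
    line,
)
old_state, new_state, event, orig_host, new_host, before_host = m.groups()
# old_state/new_state show the Running -> Stopping transition; the host
# ids show the stop was requested while the VM sat on host 15, with 224
# recorded as its pre-migration host.
```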

On Wed, Jan 25, 2017 at 11:45 AM, Francois Scheurer <
francois.scheu...@everyware.ch> wrote:

> Hello Rafael
>
>
> I think the important log lines are these:
>
> ewcstack-man02-prod: 2017-01-24 18:05:28,407 INFO
> [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-156:ctx-14b86f5e
> job-60216/job-191380 ctx-175d37df) Migration cancelled because state has
> changed: VM[User|i-638-1736-VM]
> ewcstack-man02-prod: 2017-01-24 18:05:28,407 DEBUG
> [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-156:ctx-14b86f5e
> job-60216/job-191380 ctx-175d37df) Unable to migrate VM due to: Migration
> cancelled because state has changed: VM[User|i-638-1736-VM]
> ewcstack-man02-prod: 2017-01-24 18:05:28,427 DEBUG
> [c.c.c.CapacityManagerImpl] (Work-Job-Executor-156:ctx-14b86f5e
> job-60216/job-191380 ctx-175d37df) VM state transitted from :Running to
> Stopping with event: StopRequestedvm's original host id: 224 new host id:
> 15 host id before state transition: 15
>
> The really disturbing thing is that CS stops the VM again if we start it
> manually, apparently because the migration job is still running.
>
> Note that the VM's HA flag is enabled.
>
>
>
> Thank you.
>
>
> Best Regards.
>
> Francois
>
>
>
>
>
>
> more details:
>
>
>
> [detailed log output snipped]

Re: cs 4.5.1, vm stopped when putting xen hosts in maintenance

2017-01-25 Thread Rafael Weingärtner
I found the code that is executed.
Can you confirm whether you are migrating within a cluster or across
clusters?

On Wed, Jan 25, 2017 at 11:57 AM, Rafael Weingärtner <
rafaelweingart...@gmail.com> wrote:

> I do not have access to ACS code right now, but I suggest starting
> debugging here:
>
>> 2017-01-24 18:05:28,427 DEBUG [c.c.c.CapacityManagerImpl]
>> (Work-Job-Executor-156:ctx-14b86f5e job-60216/job-191380 ctx-175d37df)
>> VM state transitted from :Running to Stopping with event: StopRequestedvm's
>> original host id: 224 new host id: 15 host id before state transition: 15
>>
>
> I would try to understand first why ACS requested the hypervisor to stop
> the VM. I mean the following: check the source code for conditions that
> would make ACS request the shutdown of VMs.
> BTW: are you migrating within a cluster or across clusters?
>
> I will be able to analyze this further only after work hours, so if you
> find anything, keep me posted.
>
> On Wed, Jan 25, 2017 at 11:45 AM, Francois Scheurer <
> francois.scheu...@everyware.ch> wrote:
>
>> Hello Rafael
>>
>>
>> I think the important log lines are these:
>>
>> ewcstack-man02-prod: 2017-01-24 18:05:28,407 INFO
>> [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-156:ctx-14b86f5e
>> job-60216/job-191380 ctx-175d37df) Migration cancelled because state has
>> changed: VM[User|i-638-1736-VM]
>> ewcstack-man02-prod: 2017-01-24 18:05:28,407 DEBUG
>> [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-156:ctx-14b86f5e
>> job-60216/job-191380 ctx-175d37df) Unable to migrate VM due to: Migration
>> cancelled because state has changed: VM[User|i-638-1736-VM]
>> ewcstack-man02-prod: 2017-01-24 18:05:28,427 DEBUG
>> [c.c.c.CapacityManagerImpl] (Work-Job-Executor-156:ctx-14b86f5e
>> job-60216/job-191380 ctx-175d37df) VM state transitted from :Running to
>> Stopping with event: StopRequestedvm's original host id: 224 new host id:
>> 15 host id before state transition: 15
>>
>> The really disturbing thing is that CS stops the VM again if we start it
>> manually, apparently because the migration job is still running.
>>
>> Note that the VM's HA flag is enabled.
>>
>>
>>
>> Thank you.
>>
>>
>> Best Regards.
>>
>> Francois
>>
>>
>>
>>
>>
>>
>> more details:
>>
>>
>>
>> [detailed log output snipped]

Public IaaS advanced zone

2017-01-25 Thread Chiradeep Vittal
Can somebody recommend a CloudStack-based public IaaS cloud? I need:
- advanced zone
- API access
- VPC support with multiple subnets
- preferably KVM / XenServer
- credit card payment
- hourly billing

Sent from my iPhone