Re: launch instance error

2019-06-17 Thread Alejandro Ruiz Bermejo
Ok, then it's done. Thank you very much. I have a functional LXC cluster
configured in my CS environment. You told me you were interested in getting
this to work, so if you have any questions I'll be more than happy to help.

Regards,
Alejandro

On Sunday, June 16, 2019, Nicolas Vazquez 
wrote:

> Hi Alejandro,
>
> AFAIK it is not possible to add data disks unless you add a Ceph storage
> pool.
>
> References:
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/LXC+Enhancements
> https://issues.apache.org/jira/browse/CLOUDSTACK-8574
>
> Regards,
> Nicolas Vazquez
>
> From: Alejandro Ruiz Bermejo
> Sent: Saturday, June 15, 16:15
> Subject: Re: launch instance error
> To: users@cloudstack.apache.org
>
>
> Hi,
> I made a fresh install of the CloudStack environment with 2 nodes:
> management and server. The primary and secondary storage are inside the
> management server, both using NFS.
>
> 1 Zone
> 1 Pod
> 1 Cluster
> 1 Host (LXC)
>
> The secondary and console proxy VMs were created successfully and I
> created an LXC template according to
>
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/LXC+Template+creation#LXCTemplatecreation-CreatingtemplateonUbuntu
>
>
> When I try to launch the new instance I get errors again; this is the log
> <https://pastebin.com/rdCYMYDZ>. As I see it now, the manager allocates
> CPU, RAM and root disk in a storage pool but can't find a storage pool for
> the data disk.
>
> I can now create an instance without assigning a data disk. How can I
> correct this?
>
> On Fri, Jun 14, 2019 at 5:54 PM Andrija Panic 
> wrote:
>
> > I would go with the 2nd approach - I don't expect the "same" issues
> > actually - you can either add a new pod/cluster or just a new cluster in
> > the same pod.
> >
> > On Fri, 14 Jun 2019 at 21:25, Alejandro Ruiz Bermejo <
> > arbermejo0...@gmail.com> wrote:
> >
> > > I created a new zone, pod and cluster and added the LXC host to the new
> > > cluster. CloudStack did everything for me. Since I am in a test
> > > environment I used the same subnet for both zones.
> > >
> > > I can do 2 things:
> > > 1. I will wipe the host and do a fresh install (only on the compute
> > > host), but I will probably have the same issues. I will add it to the
> > > same zone, in the same pod as the KVM cluster, and in an LXC cluster as
> > > I previously did.
> > >
> > > 2. I can also do a fresh CloudStack install (management server
> > > included) and use only LXC as hypervisor technology in order to find
> > > where the troubles are. After I have LXC working I can later add KVM.
> > >
> > > Which approach would be more helpful to find where the trouble is?
> > >
> > > Regards,
> > > Alejandro
> > >
> > >
> > > On Fri, Jun 14, 2019 at 2:59 PM Andrija Panic  >
> > > wrote:
> > >
> > > > I'm not sure how you moved the host to another Zone ? (Please
> describe)
> > > >
> > > > But anyway, I would wipe that host (perhaps some settings are kept
> > > locally,
> > > > etc), and add it as a fresh host to a new cluster (could be in same
> Pod
> > > as
> > > > well, or a new one), i.e. start from scratch please - since your setup
> > > seems
> > > > problematic at this moment.
> > > >
> > > > I would be happy to learn if you managed to have it working - so
> please
> > > let
> > > > me know.
> > > >
> > > > Best
> > > > Andrija
> > > >
> > > > On Fri, Jun 14, 2019, 20:03 Alejandro Ruiz Bermejo <
> > > > arbermejo0...@gmail.com>
> > > > wrote:
> > > >
> > > > > Yes, trying to find solutions I did create a new zone, pod and
> > > > > cluster and moved the LXC host to it, but I had the same errors.
> > > > > So I moved it back to my original LXC cluster in my original zone.
> > > > > I guess that's why it shows those records.
> > > > >
> > > > > I made all the movements using the UI; it seems like the values in
> > > > > the DB didn't update. How do I change that?
> > > > >
> > > > > On Fri, Jun 14, 2019 at 1:32 PM Andrija Panic <
> > andrija.pa...@gmail.com
> > > >
> > > > > wrote:
> > > > >
> > > > > > So, there are a couple of problems I see, based on

Re: launch instance error

2019-06-16 Thread Andrija Panic
Well, I'm glad that you managed to fix it, Alejandro.

Though, as Nicolas pointed out, there seem to be some limitations in
place (that I also wasn't aware of).

Best
Andrija

On Sun, Jun 16, 2019, 10:07 Nicolas Vazquez 
wrote:

> Hi Alejandro,
>
> AFAIK it is not possible to add data disks unless you add a Ceph storage
> pool.
>
> References:
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/LXC+Enhancements
> https://issues.apache.org/jira/browse/CLOUDSTACK-8574
>
> Regards,
> Nicolas Vazquez
>
> From: Alejandro Ruiz Bermejo
> Sent: Saturday, June 15, 16:15
> Subject: Re: launch instance error
> To: users@cloudstack.apache.org
>
>
> Hi,
> I made a fresh install of the CloudStack environment with 2 nodes:
> management and server. The primary and secondary storage are inside the
> management server, both using NFS.
>
> 1 Zone
> 1 Pod
> 1 Cluster
> 1 Host (LXC)
>
> The secondary and console proxy VMs were created successfully and I
> created an LXC template according to
>
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/LXC+Template+creation#LXCTemplatecreation-CreatingtemplateonUbuntu
>
>
> When I try to launch the new instance I get errors again; this is the log
> <https://pastebin.com/rdCYMYDZ>. As I see it now, the manager allocates
> CPU, RAM and root disk in a storage pool but can't find a storage pool for
> the data disk.
>
> I can now create an instance without assigning a data disk. How can I
> correct this?
>
> On Fri, Jun 14, 2019 at 5:54 PM Andrija Panic 
> wrote:
>
> > I would go with the 2nd approach - I don't expect the "same" issues
> > actually - you can either add a new pod/cluster or just a new cluster in
> > the same pod.
> >
> > On Fri, 14 Jun 2019 at 21:25, Alejandro Ruiz Bermejo <
> > arbermejo0...@gmail.com> wrote:
> >
> > > I created a new zone, pod and cluster and added the LXC host to the new
> > > cluster. CloudStack did everything for me. Since I am in a test
> > > environment I used the same subnet for both zones.
> > >
> > > I can do 2 things:
> > > 1. I will wipe the host and do a fresh install (only on the compute
> > > host), but I will probably have the same issues. I will add it to the
> > > same zone, in the same pod as the KVM cluster, and in an LXC cluster as
> > > I previously did.
> > >
> > > 2. I can also do a fresh CloudStack install (management server
> > > included) and use only LXC as hypervisor technology in order to find
> > > where the troubles are. After I have LXC working I can later add KVM.
> > >
> > > Which approach would be more helpful to find where the trouble is?
> > >
> > > Regards,
> > > Alejandro
> > >
> > >
> > > On Fri, Jun 14, 2019 at 2:59 PM Andrija Panic  >
> > > wrote:
> > >
> > > > I'm not sure how you moved the host to another Zone ? (Please
> describe)
> > > >
> > > > But anyway, I would wipe that host (perhaps some settings are kept
> > > locally,
> > > > etc), and add it as a fresh host to a new cluster (could be in same
> Pod
> > > as
> > > > well, or a new one), i.e. start from scratch please - since your setup
> > > seems
> > > > problematic at this moment.
> > > >
> > > > I would be happy to learn if you managed to have it working - so
> please
> > > let
> > > > me know.
> > > >
> > > > Best
> > > > Andrija
> > > >
> > > > On Fri, Jun 14, 2019, 20:03 Alejandro Ruiz Bermejo <
> > > > arbermejo0...@gmail.com>
> > > > wrote:
> > > >
> > > > > Yes, trying to find solutions I did create a new zone, pod and
> > > > > cluster and moved the LXC host to it, but I had the same errors.
> > > > > So I moved it back to my original LXC cluster in my original zone.
> > > > > I guess that's why it shows those records.
> > > > >
> > > > > I made all the movements using the UI; it seems like the values in
> > > > > the DB didn't update. How do I change that?
> > > > >
> > > > > On Fri, Jun 14, 2019 at 1:32 PM Andrija Panic <
> > andrija.pa...@gmail.com
> > > >
> > > > > wrote:
> > > > >
> > > > > > So, there are a couple of problems I see, based on your storage
> > pool
> > > DB
> &

Re: launch instance error

2019-06-16 Thread Nicolas Vazquez
Hi Alejandro,

AFAIK it is not possible to add data disks unless you add a Ceph storage pool.

References:
https://cwiki.apache.org/confluence/display/CLOUDSTACK/LXC+Enhancements
https://issues.apache.org/jira/browse/CLOUDSTACK-8574
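
For anyone following along: once a Ceph pool is added, it should show up in
the same storage_pool table queried elsewhere in this thread. A minimal
sketch to verify (assuming CloudStack records Ceph-backed pools with
pool_type 'RBD'):

-- Sketch: list any Ceph/RBD primary storage pools visible to the allocator.
SELECT id, name, pool_type, scope, hypervisor, status
FROM storage_pool
WHERE pool_type = 'RBD'
  AND removed IS NULL;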

Regards,
Nicolas Vazquez

From: Alejandro Ruiz Bermejo
Sent: Saturday, June 15, 16:15
Subject: Re: launch instance error
To: users@cloudstack.apache.org


Hi,
I made a fresh install of the CloudStack environment with 2 nodes:
management and server. The primary and secondary storage are inside the
management server, both using NFS.

1 Zone
1 Pod
1 Cluster
1 Host (LXC)

The secondary and console proxy VMs were created successfully and I
created an LXC template according to
https://cwiki.apache.org/confluence/display/CLOUDSTACK/LXC+Template+creation#LXCTemplatecreation-CreatingtemplateonUbuntu


When I try to launch the new instance I get errors again; this is the log
<https://pastebin.com/rdCYMYDZ>. As I see it now, the manager allocates CPU,
RAM and root disk in a storage pool but can't find a storage pool for the
data disk.

I can now create an instance without assigning a data disk. How can I
correct this?

On Fri, Jun 14, 2019 at 5:54 PM Andrija Panic 
wrote:

> I would go with the 2nd approach - I don't expect the "same" issues
> actually - you can either add a new pod/cluster or just a new cluster in
> the same pod.
>
> On Fri, 14 Jun 2019 at 21:25, Alejandro Ruiz Bermejo <
> arbermejo0...@gmail.com> wrote:
>
> > I created a new zone, pod and cluster and added the LXC host to the new
> > cluster. CloudStack did everything for me. Since I am in a test
> > environment I used the same subnet for both zones.
> >
> > I can do 2 things:
> > 1. I will wipe the host and do a fresh install (only on the compute
> > host), but I will probably have the same issues. I will add it to the
> > same zone, in the same pod as the KVM cluster, and in an LXC cluster as I
> > previously did.
> >
> > 2. I can also do a fresh CloudStack install (management server included)
> > and use only LXC as hypervisor technology in order to find where the
> > troubles are. After I have LXC working I can later add KVM.
> >
> > Which approach would be more helpful to find where the trouble is?
> >
> > Regards,
> > Alejandro
> >
> >
> > On Fri, Jun 14, 2019 at 2:59 PM Andrija Panic 
> > wrote:
> >
> > > I'm not sure how you moved the host to another Zone ? (Please describe)
> > >
> > > But anyway, I would wipe that host (perhaps some settings are kept
> > locally,
> > > etc), and add it as a fresh host to a new cluster (could be in same Pod
> > as
> > > well, or a new one), i.e. start from scratch please - since your setup
> > seems
> > > problematic at this moment.
> > >
> > > I would be happy to learn if you managed to have it working - so please
> > let
> > > me know.
> > >
> > > Best
> > > Andrija
> > >
> > > On Fri, Jun 14, 2019, 20:03 Alejandro Ruiz Bermejo <
> > > arbermejo0...@gmail.com>
> > > wrote:
> > >
> > > > Yes, trying to find solutions I did create a new zone, pod and
> > > > cluster and moved the LXC host to it, but I had the same errors. So I
> > > > moved it back to my original LXC cluster in my original zone. I guess
> > > > that's why it shows those records.
> > > >
> > > > I made all the movements using the UI; it seems like the values in
> > > > the DB didn't update. How do I change that?
> > > >
> > > > On Fri, Jun 14, 2019 at 1:32 PM Andrija Panic <
> andrija.pa...@gmail.com
> > >
> > > > wrote:
> > > >
> > > > > So, there are a couple of problems I see, based on your storage
> pool
> > DB
> > > > > records:
> > > > >
> > > > > your storage (for LXC) is CLUSTER wide, so back to first SQL I
> share,
> > > as
> > > > > following:
> > > > > SELECT storage_pool.id, storage_pool.name, storage_pool.uuid,
> > > > > storage_pool.pool_type, storage_pool.created, storage_pool.removed,
> > > > > storage_pool.update_time, storage_pool.data_center_id,
> > > > storage_pool.pod_id,
> > > > > storage_pool.used_bytes, storage_pool.capacity_bytes,
> > > > storage_pool.status,
> > > > > storage_pool.storage_provider_name, storage_pool.host_address,
> > > > > storage_pool.path, storage_pool.port, storage_pool.user_info,
> > > > > storage_pool.cluster_id, 

Re: launch instance error

2019-06-15 Thread Alejandro Ruiz Bermejo
Hi,
I made a fresh install of the CloudStack environment with 2 nodes:
management and server. The primary and secondary storage are inside the
management server, both using NFS.

1 Zone
1 Pod
1 Cluster
1 Host (LXC)

The secondary and console proxy VMs were created successfully and I
created an LXC template according to
https://cwiki.apache.org/confluence/display/CLOUDSTACK/LXC+Template+creation#LXCTemplatecreation-CreatingtemplateonUbuntu


When I try to launch the new instance I get errors again; this is the log
<https://pastebin.com/rdCYMYDZ>. As I see it now, the manager allocates CPU,
RAM and root disk in a storage pool but can't find a storage pool for the
data disk.

I can now create an instance without assigning a data disk. How can I
correct this?
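
For reference, a query like the one below against the cloud database can
show why the data-disk search comes up empty - a sketch only, selecting the
columns the pool allocator filters on in the queries quoted later in this
thread:

-- Sketch: list active pools with the fields the allocator matches on.
SELECT id, name, scope, hypervisor, pool_type,
       data_center_id, pod_id, cluster_id
FROM storage_pool
WHERE status = 'Up'
  AND removed IS NULL;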

On Fri, Jun 14, 2019 at 5:54 PM Andrija Panic 
wrote:

> I would go with the 2nd approach - I don't expect the "same" issues
> actually - you can either add a new pod/cluster or just a new cluster in
> the same pod.
>
> On Fri, 14 Jun 2019 at 21:25, Alejandro Ruiz Bermejo <
> arbermejo0...@gmail.com> wrote:
>
> > I created a new zone, pod and cluster and added the LXC host to the new
> > cluster. CloudStack did everything for me. Since I am in a test
> > environment I used the same subnet for both zones.
> >
> > I can do 2 things:
> > 1. I will wipe the host and do a fresh install (only on the compute
> > host), but I will probably have the same issues. I will add it to the
> > same zone, in the same pod as the KVM cluster, and in an LXC cluster as I
> > previously did.
> >
> > 2. I can also do a fresh CloudStack install (management server included)
> > and use only LXC as hypervisor technology in order to find where the
> > troubles are. After I have LXC working I can later add KVM.
> >
> > Which approach would be more helpful to find where the trouble is?
> >
> > Regards,
> > Alejandro
> >
> >
> > On Fri, Jun 14, 2019 at 2:59 PM Andrija Panic 
> > wrote:
> >
> > > I'm not sure how you moved the host to another Zone ? (Please describe)
> > >
> > > But anyway, I would wipe that host (perhaps some settings are kept
> > locally,
> > > etc), and add it as a fresh host to a new cluster (could be in same Pod
> > as
> > > well, or a new one), i.e. start from scratch please - since your setup
> > seems
> > > problematic at this moment.
> > >
> > > I would be happy to learn if you managed to have it working - so please
> > let
> > > me know.
> > >
> > > Best
> > > Andrija
> > >
> > > On Fri, Jun 14, 2019, 20:03 Alejandro Ruiz Bermejo <
> > > arbermejo0...@gmail.com>
> > > wrote:
> > >
> > > > Yes, trying to find solutions I did create a new zone, pod and
> > > > cluster and moved the LXC host to it, but I had the same errors. So I
> > > > moved it back to my original LXC cluster in my original zone. I guess
> > > > that's why it shows those records.
> > > >
> > > > I made all the movements using the UI; it seems like the values in
> > > > the DB didn't update. How do I change that?
> > > >
> > > > On Fri, Jun 14, 2019 at 1:32 PM Andrija Panic <
> andrija.pa...@gmail.com
> > >
> > > > wrote:
> > > >
> > > > > So, there are a couple of problems I see, based on your storage
> pool
> > DB
> > > > > records:
> > > > >
> > > > > your storage (for LXC) is CLUSTER wide, so back to the first SQL I
> > > > > shared, as follows:
> > > > > SELECT storage_pool.id, storage_pool.name, storage_pool.uuid,
> > > > > storage_pool.pool_type, storage_pool.created, storage_pool.removed,
> > > > > storage_pool.update_time, storage_pool.data_center_id,
> > > > storage_pool.pod_id,
> > > > > storage_pool.used_bytes, storage_pool.capacity_bytes,
> > > > storage_pool.status,
> > > > > storage_pool.storage_provider_name, storage_pool.host_address,
> > > > > storage_pool.path, storage_pool.port, storage_pool.user_info,
> > > > > storage_pool.cluster_id, storage_pool.scope, storage_pool.managed,
> > > > > storage_pool.capacity_iops, storage_pool.hypervisor FROM
> storage_pool
> > > > > WHERE storage_pool.data_center_id = 1
> > > > > AND storage_pool.status = 'Up'
> > > > > AND storage_pool.scope = 'CLUSTER'
> > > > > AND  ( storage_pool.pod_id IS NULL  OR storage_pool.pod_id = 1  )
> > > > > AND  ( storage_pool.cluster_id IS NULL  OR storage_pool.cluster_id
> =
> > > 2  )
> > > > > AND storage_pool.removed IS NULL
> > > > >
> > > > > If you compare the WHERE clause conditions, you will see your
> > > > > data_center=2 instead of 1, your pod_id=2 instead of 1, and your
> > > > > cluster_id=3 instead of 2.
> > > > > So you have probably been adding the primary storage pool for LXC
> > > > > back and forth, or something fishy is there...
> > > > >
> > > > > Please investigate - based on your records, it seems you have a
> zone
> > > for
> > > > > KVM, another zone for LXC, and then there are more clusters/pods
> > than 1
> > > > in
> > > > > each of them...
> > > > > We could easily change DB records, but you need to understand why
> > these
> > > > > records are "shifted" by one - i.e. wrong zone, wrong 

Re: launch instance error

2019-06-14 Thread Andrija Panic
I would go with the 2nd approach - I don't expect the "same" issues
actually - you can either add a new pod/cluster or just a new cluster in the
same pod.

On Fri, 14 Jun 2019 at 21:25, Alejandro Ruiz Bermejo <
arbermejo0...@gmail.com> wrote:

> I created a new zone, pod and cluster and added the LXC host to the new
> cluster. CloudStack did everything for me. Since I am in a test environment
> I used the same subnet for both zones.
>
> I can do 2 things:
> 1. I will wipe the host and do a fresh install (only on the compute host),
> but I will probably have the same issues. I will add it to the same zone,
> in the same pod as the KVM cluster, and in an LXC cluster as I previously
> did.
>
> 2. I can also do a fresh CloudStack install (management server included)
> and use only LXC as hypervisor technology in order to find where the
> troubles are. After I have LXC working I can later add KVM.
>
> Which approach would be more helpful to find where the trouble is?
>
> Regards,
> Alejandro
>
>
> On Fri, Jun 14, 2019 at 2:59 PM Andrija Panic 
> wrote:
>
> > I'm not sure how you moved the host to another Zone ? (Please describe)
> >
> > But anyway, I would wipe that host (perhaps some settings are kept
> locally,
> > etc), and add it as a fresh host to a new cluster (could be in same Pod
> as
> > well, or a new one), i.e. start from scratch please - since your setup
> seems
> > problematic at this moment.
> >
> > I would be happy to learn if you managed to have it working - so please
> let
> > me know.
> >
> > Best
> > Andrija
> >
> > On Fri, Jun 14, 2019, 20:03 Alejandro Ruiz Bermejo <
> > arbermejo0...@gmail.com>
> > wrote:
> >
> > > Yes, trying to find solutions I did create a new zone, pod and cluster
> > > and moved the LXC host to it, but I had the same errors. So I moved it
> > > back to my original LXC cluster in my original zone. I guess that's why
> > > it shows those records.
> > >
> > > I made all the movements using the UI; it seems like the values in the
> > > DB didn't update. How do I change that?
> > >
> > > On Fri, Jun 14, 2019 at 1:32 PM Andrija Panic  >
> > > wrote:
> > >
> > > > So, there are a couple of problems I see, based on your storage pool
> DB
> > > > records:
> > > >
> > > > your storage (for LXC) is CLUSTER wide, so back to the first SQL I
> > > > shared, as follows:
> > > > SELECT storage_pool.id, storage_pool.name, storage_pool.uuid,
> > > > storage_pool.pool_type, storage_pool.created, storage_pool.removed,
> > > > storage_pool.update_time, storage_pool.data_center_id,
> > > storage_pool.pod_id,
> > > > storage_pool.used_bytes, storage_pool.capacity_bytes,
> > > storage_pool.status,
> > > > storage_pool.storage_provider_name, storage_pool.host_address,
> > > > storage_pool.path, storage_pool.port, storage_pool.user_info,
> > > > storage_pool.cluster_id, storage_pool.scope, storage_pool.managed,
> > > > storage_pool.capacity_iops, storage_pool.hypervisor FROM storage_pool
> > > > WHERE storage_pool.data_center_id = 1
> > > > AND storage_pool.status = 'Up'
> > > > AND storage_pool.scope = 'CLUSTER'
> > > > AND  ( storage_pool.pod_id IS NULL  OR storage_pool.pod_id = 1  )
> > > > AND  ( storage_pool.cluster_id IS NULL  OR storage_pool.cluster_id =
> > 2  )
> > > > AND storage_pool.removed IS NULL
> > > >
> > > > If you compare the WHERE clause conditions, you will see your
> > > > data_center=2 instead of 1, your pod_id=2 instead of 1, and your
> > > > cluster_id=3 instead of 2.
> > > > So you have probably been adding the primary storage pool for LXC
> > > > back and forth, or something fishy is there...
> > > >
> > > > Please investigate - based on your records, it seems you have a zone
> > for
> > > > KVM, another zone for LXC, and then there are more clusters/pods
> than 1
> > > in
> > > > each of them...
> > > > We could easily change DB records, but you need to understand why
> these
> > > > records are "shifted" by one - i.e. wrong zone, wrong pod, wrong
> > cluster
> > > > expected by CloudStack versus the actual records in storage_pool
> table.
> > > >
> > > > Best
> > > > Andrija
> > > >
> > > >
> > > > On Fri, 14 Jun 2019 at 18:53, Alejandro Ruiz Bermejo <
> > > > arbermejo0...@gmail.com> wrote:
> > > >
> > > > > These were the outputs:
> > > > >
> > > > > mysql> SELECT storage_pool.id, storage_pool.name,
> storage_pool.uuid,
> > > > > -> storage_pool.pool_type, storage_pool.created,
> > > > storage_pool.removed,
> > > > > -> storage_pool.update_time, storage_pool.data_center_id,
> > > > > storage_pool.pod_id,
> > > > > -> storage_pool.used_bytes, storage_pool.capacity_bytes,
> > > > > storage_pool.status,
> > > > > -> storage_pool.storage_provider_name,
> storage_pool.host_address,
> > > > > -> storage_pool.path, storage_pool.port,
> storage_pool.user_info,
> > > > > -> storage_pool.cluster_id, storage_pool.scope,
> > > storage_pool.managed,
> > > > > -> storage_pool.capacity_iops, storage_pool.hypervisor FROM
> > > > > storage_pool WHERE
> > > > > -> 

Re: launch instance error

2019-06-14 Thread Alejandro Ruiz Bermejo
I created a new zone, pod and cluster and added the LXC host to the new
cluster. CloudStack did everything for me. Since I am in a test environment
I used the same subnet for both zones.

I can do 2 things:
1. I will wipe the host and do a fresh install (only on the compute host),
but I will probably have the same issues. I will add it to the same zone, in
the same pod as the KVM cluster, and in an LXC cluster as I previously did.

2. I can also do a fresh CloudStack install (management server included)
and use only LXC as hypervisor technology in order to find where the
troubles are. After I have LXC working I can later add KVM.

Which approach would be more helpful to find where the trouble is?

Regards,
Alejandro


On Fri, Jun 14, 2019 at 2:59 PM Andrija Panic 
wrote:

> I'm not sure how you moved the host to another Zone ? (Please describe)
>
> But anyway, I would wipe that host (perhaps some settings are kept locally,
> etc), and add it as a fresh host to a new cluster (could be in same Pod as
> well, or a new one), i.e. start from scratch please - since your setup seems
> problematic at this moment.
>
> I would be happy to learn if you managed to have it working - so please let
> me know.
>
> Best
> Andrija
>
> On Fri, Jun 14, 2019, 20:03 Alejandro Ruiz Bermejo <
> arbermejo0...@gmail.com>
> wrote:
>
> > Yes, trying to find solutions I did create a new zone, pod and cluster
> > and moved the LXC host to it, but I had the same errors. So I moved it
> > back to my original LXC cluster in my original zone. I guess that's why
> > it shows those records.
> >
> > I made all the movements using the UI; it seems like the values in the
> > DB didn't update. How do I change that?
> >
> > On Fri, Jun 14, 2019 at 1:32 PM Andrija Panic 
> > wrote:
> >
> > > So, there are a couple of problems I see, based on your storage pool DB
> > > records:
> > >
> > > your storage (for LXC) is CLUSTER wide, so back to the first SQL I
> > > shared, as follows:
> > > SELECT storage_pool.id, storage_pool.name, storage_pool.uuid,
> > > storage_pool.pool_type, storage_pool.created, storage_pool.removed,
> > > storage_pool.update_time, storage_pool.data_center_id,
> > storage_pool.pod_id,
> > > storage_pool.used_bytes, storage_pool.capacity_bytes,
> > storage_pool.status,
> > > storage_pool.storage_provider_name, storage_pool.host_address,
> > > storage_pool.path, storage_pool.port, storage_pool.user_info,
> > > storage_pool.cluster_id, storage_pool.scope, storage_pool.managed,
> > > storage_pool.capacity_iops, storage_pool.hypervisor FROM storage_pool
> > > WHERE storage_pool.data_center_id = 1
> > > AND storage_pool.status = 'Up'
> > > AND storage_pool.scope = 'CLUSTER'
> > > AND  ( storage_pool.pod_id IS NULL  OR storage_pool.pod_id = 1  )
> > > AND  ( storage_pool.cluster_id IS NULL  OR storage_pool.cluster_id =
> 2  )
> > > AND storage_pool.removed IS NULL
> > >
> > > If you compare the WHERE clause conditions, you will see your
> > > data_center=2 instead of 1, your pod_id=2 instead of 1, and your
> > > cluster_id=3 instead of 2.
> > > So you have probably been adding the primary storage pool for LXC back
> > > and forth, or something fishy is there...
> > >
> > > Please investigate - based on your records, it seems you have a zone
> for
> > > KVM, another zone for LXC, and then there are more clusters/pods than 1
> > in
> > > each of them...
> > > We could easily change DB records, but you need to understand why these
> > > records are "shifted" by one - i.e. wrong zone, wrong pod, wrong
> cluster
> > > expected by CloudStack versus the actual records in storage_pool table.
> > >
> > > Best
> > > Andrija
> > >
> > >
> > > On Fri, 14 Jun 2019 at 18:53, Alejandro Ruiz Bermejo <
> > > arbermejo0...@gmail.com> wrote:
> > >
> > > > These were the outputs:
> > > >
> > > > mysql> SELECT storage_pool.id, storage_pool.name, storage_pool.uuid,
> > > > -> storage_pool.pool_type, storage_pool.created,
> > > storage_pool.removed,
> > > > -> storage_pool.update_time, storage_pool.data_center_id,
> > > > storage_pool.pod_id,
> > > > -> storage_pool.used_bytes, storage_pool.capacity_bytes,
> > > > storage_pool.status,
> > > > -> storage_pool.storage_provider_name, storage_pool.host_address,
> > > > -> storage_pool.path, storage_pool.port, storage_pool.user_info,
> > > > -> storage_pool.cluster_id, storage_pool.scope,
> > storage_pool.managed,
> > > > -> storage_pool.capacity_iops, storage_pool.hypervisor FROM
> > > > storage_pool WHERE
> > > > -> storage_pool.data_center_id = 1  AND storage_pool.status =
> 'Up'
> > > AND
> > > > -> storage_pool.scope = 'ZONE'  AND storage_pool.hypervisor =
> 'LXC'
> > > > AND
> > > > -> storage_pool.removed IS NULL;
> > > > Empty set (0.00 sec)
> > > >
> > > > mysql> select * from storage_pool;
> > > >
> > > >
> > >
> >
> 

Re: launch instance error

2019-06-14 Thread Andrija Panic
I'm not sure how you moved the host to another Zone? (Please describe.)

But anyway, I would wipe that host (perhaps some settings are kept locally,
etc.), and add it as a fresh host to a new cluster (could be in the same Pod
as well, or a new one), i.e. start from scratch please - since your setup
seems problematic at this moment.

I would be happy to learn if you managed to have it working - so please let
me know.

Best
Andrija

On Fri, Jun 14, 2019, 20:03 Alejandro Ruiz Bermejo 
wrote:

> Yes, trying to find solutions I did create a new zone, pod and cluster and
> moved the LXC host to it, but I had the same errors. So I moved it back to
> my original LXC cluster in my original zone. I guess that's why it shows
> those records.
>
> I made all the movements using the UI; it seems like the values in the DB
> didn't update. How do I change that?
>
> On Fri, Jun 14, 2019 at 1:32 PM Andrija Panic 
> wrote:
>
> > So, there are a couple of problems I see, based on your storage pool DB
> > records:
> >
> > your storage (for LXC) is CLUSTER wide, so back to the first SQL I
> > shared, as follows:
> > SELECT storage_pool.id, storage_pool.name, storage_pool.uuid,
> > storage_pool.pool_type, storage_pool.created, storage_pool.removed,
> > storage_pool.update_time, storage_pool.data_center_id,
> storage_pool.pod_id,
> > storage_pool.used_bytes, storage_pool.capacity_bytes,
> storage_pool.status,
> > storage_pool.storage_provider_name, storage_pool.host_address,
> > storage_pool.path, storage_pool.port, storage_pool.user_info,
> > storage_pool.cluster_id, storage_pool.scope, storage_pool.managed,
> > storage_pool.capacity_iops, storage_pool.hypervisor FROM storage_pool
> > WHERE storage_pool.data_center_id = 1
> > AND storage_pool.status = 'Up'
> > AND storage_pool.scope = 'CLUSTER'
> > AND  ( storage_pool.pod_id IS NULL  OR storage_pool.pod_id = 1  )
> > AND  ( storage_pool.cluster_id IS NULL  OR storage_pool.cluster_id = 2  )
> > AND storage_pool.removed IS NULL
> >
> > If you compare the WHERE clause conditions, you will see your
> > data_center=2 instead of 1, your pod_id=2 instead of 1, and your
> > cluster_id=3 instead of 2.
> > So you have probably been adding the primary storage pool for LXC back
> > and forth, or something fishy is there...
> >
> > Please investigate - based on your records, it seems you have a zone for
> > KVM, another zone for LXC, and then there are more clusters/pods than 1
> in
> > each of them...
> > We could easily change DB records, but you need to understand why these
> > records are "shifted" by one - i.e. wrong zone, wrong pod, wrong cluster
> > expected by CloudStack versus the actual records in storage_pool table.
> >
> > Best
> > Andrija
> >
> >
> > On Fri, 14 Jun 2019 at 18:53, Alejandro Ruiz Bermejo <
> > arbermejo0...@gmail.com> wrote:
> >
> > > These were the outputs:
> > >
> > > mysql> SELECT storage_pool.id, storage_pool.name, storage_pool.uuid,
> > > -> storage_pool.pool_type, storage_pool.created,
> > storage_pool.removed,
> > > -> storage_pool.update_time, storage_pool.data_center_id,
> > > storage_pool.pod_id,
> > > -> storage_pool.used_bytes, storage_pool.capacity_bytes,
> > > storage_pool.status,
> > > -> storage_pool.storage_provider_name, storage_pool.host_address,
> > > -> storage_pool.path, storage_pool.port, storage_pool.user_info,
> > > -> storage_pool.cluster_id, storage_pool.scope,
> storage_pool.managed,
> > > -> storage_pool.capacity_iops, storage_pool.hypervisor FROM
> > > storage_pool WHERE
> > > -> storage_pool.data_center_id = 1  AND storage_pool.status = 'Up'
> > AND
> > > -> storage_pool.scope = 'ZONE'  AND storage_pool.hypervisor = 'LXC'
> > > AND
> > > -> storage_pool.removed IS NULL;
> > > Empty set (0.00 sec)
> > >
> > > mysql> select * from storage_pool\G
> > > *************************** 1. row ***************************
> > >                    id: 1
> > >                  name: primary
> > >                  uuid: 672631ab-5c14-3f60-97c6-1d633e60f7bd
> > >             pool_type: NetworkFilesystem
> > >                  port: 2049
> > >        data_center_id: 1
> > >                pod_id: NULL
> > >            cluster_id: NULL
> > >            used_bytes: 3413114880
> 

Re: launch instance error

2019-06-14 Thread Alejandro Ruiz Bermejo
Yes trying to find solutions i did create a new zone pod and cluster and
moved the lxc host to it, but i had the same errors. So i moved it back to
my original LXC cluster into my original zone. I guess thats why it shows
those records.

I made all the movements using the UI; it seems like the values in the DB
didn't update. How do I change that?
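
For reference, the kind of manual correction being discussed would be an
UPDATE against the storage_pool table - a sketch only, assuming the pool
with id=3 (the values quoted below) is the one to repoint, and that the
management server is stopped and the DB backed up first:

-- Sketch: move the misplaced LXC pool to the zone/pod/cluster the allocator
-- actually searches (1/1/2, per the WHERE clause quoted below) and set the
-- hypervisor column that the ZONE-scope search filters on.
UPDATE storage_pool
SET data_center_id = 1,
    pod_id = 1,
    cluster_id = 2,
    hypervisor = 'LXC'
WHERE id = 3
  AND removed IS NULL;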

On Fri, Jun 14, 2019 at 1:32 PM Andrija Panic 
wrote:

> So, there are a couple of problems I see, based on your storage pool DB
> records:
>
> your storage (for LXC) is CLUSTER wide, so back to the first SQL I shared,
> as follows:
> SELECT storage_pool.id, storage_pool.name, storage_pool.uuid,
> storage_pool.pool_type, storage_pool.created, storage_pool.removed,
> storage_pool.update_time, storage_pool.data_center_id, storage_pool.pod_id,
> storage_pool.used_bytes, storage_pool.capacity_bytes, storage_pool.status,
> storage_pool.storage_provider_name, storage_pool.host_address,
> storage_pool.path, storage_pool.port, storage_pool.user_info,
> storage_pool.cluster_id, storage_pool.scope, storage_pool.managed,
> storage_pool.capacity_iops, storage_pool.hypervisor FROM storage_pool
> WHERE storage_pool.data_center_id = 1
> AND storage_pool.status = 'Up'
> AND storage_pool.scope = 'CLUSTER'
> AND  ( storage_pool.pod_id IS NULL  OR storage_pool.pod_id = 1  )
> AND  ( storage_pool.cluster_id IS NULL  OR storage_pool.cluster_id = 2  )
> AND storage_pool.removed IS NULL
>
> If you compare the WHERE clause conditions, you will see your
> data_center=2 instead of 1, your pod_id=2 instead of 1, and your
> cluster_id=3 instead of 2.
> So you have probably been adding the primary storage pool for LXC back and
> forth, or something fishy is there...
>
> Please investigate - based on your records, it seems you have a zone for
> KVM, another zone for LXC, and then there are more clusters/pods than 1 in
> each of them...
> We could easily change DB records, but you need to understand why these
> records are "shifted" by one - i.e. wrong zone, wrong pod, wrong cluster
> expected by CloudStack versus the actual records in storage_pool table.
>
> Best
> Andrija
>
>
> On Fri, 14 Jun 2019 at 18:53, Alejandro Ruiz Bermejo <
> arbermejo0...@gmail.com> wrote:
>
> > These were the outputs:
> >
> > mysql> SELECT storage_pool.id, storage_pool.name, storage_pool.uuid,
> > -> storage_pool.pool_type, storage_pool.created,
> storage_pool.removed,
> > -> storage_pool.update_time, storage_pool.data_center_id,
> > storage_pool.pod_id,
> > -> storage_pool.used_bytes, storage_pool.capacity_bytes,
> > storage_pool.status,
> > -> storage_pool.storage_provider_name, storage_pool.host_address,
> > -> storage_pool.path, storage_pool.port, storage_pool.user_info,
> > -> storage_pool.cluster_id, storage_pool.scope, storage_pool.managed,
> > -> storage_pool.capacity_iops, storage_pool.hypervisor FROM
> > storage_pool WHERE
> > -> storage_pool.data_center_id = 1  AND storage_pool.status = 'Up'
> AND
> > -> storage_pool.scope = 'ZONE'  AND storage_pool.hypervisor = 'LXC'
> > AND
> > -> storage_pool.removed IS NULL;
> > Empty set (0.00 sec)
> >
> > mysql> select * from storage_pool\G
> > *************************** 1. row ***************************
> >                    id: 1
> >                  name: primary
> >                  uuid: 672631ab-5c14-3f60-97c6-1d633e60f7bd
> >             pool_type: NetworkFilesystem
> >                  port: 2049
> >        data_center_id: 1
> >                pod_id: NULL
> >            cluster_id: NULL
> >            used_bytes: 3413114880
> >        capacity_bytes: 984374837248
> >          host_address: 10.8.9.235
> >             user_info: NULL
> >                  path: /export/primary
> >               created: 2019-06-06 20:05:26
> >               removed: NULL
> >           update_time: NULL
> >                status: Up
> > storage_provider_name: DefaultPrimary
> >                 scope: ZONE
> >            hypervisor: KVM
> >               managed: 0
> >         capacity_iops: NULL
> > *************************** 2. row ***************************
> >                    id: 3
> >                  name: lxc
> >                  uuid: 3b062c9d-0333-41c4-9ba8-f71f75a81d1f
> >             pool_type: SharedMountPoint
> >                  port: 0
> >        data_center_id: 2
> >                pod_id: 2
> >            cluster_id: 3
> >            used_bytes: 3413114880
> >        capacity_bytes: 984374837248
> >          host_address: localhost
> >             user_info: NULL
> >                  path: /mnt/primary
> >               created: 2019-06-13 18:32:26
> >               removed: NULL
> >           update_time: NULL
> >                status: Up
> > storage_provider_name: DefaultPrimary
> >                 scope: CLUSTER
> >            hypervisor: NULL
> >               managed: 0
> >         capacity_iops: NULL
> >
> >
> 

Re: launch instance error

2019-06-14 Thread Alejandro Ruiz Bermejo
These were the outputs:

mysql> SELECT storage_pool.id, storage_pool.name, storage_pool.uuid,
-> storage_pool.pool_type, storage_pool.created, storage_pool.removed,
-> storage_pool.update_time, storage_pool.data_center_id,
storage_pool.pod_id,
-> storage_pool.used_bytes, storage_pool.capacity_bytes,
storage_pool.status,
-> storage_pool.storage_provider_name, storage_pool.host_address,
-> storage_pool.path, storage_pool.port, storage_pool.user_info,
-> storage_pool.cluster_id, storage_pool.scope, storage_pool.managed,
-> storage_pool.capacity_iops, storage_pool.hypervisor FROM
storage_pool WHERE
-> storage_pool.data_center_id = 1  AND storage_pool.status = 'Up'  AND
-> storage_pool.scope = 'ZONE'  AND storage_pool.hypervisor = 'LXC'  AND
-> storage_pool.removed IS NULL;
Empty set (0.00 sec)

mysql> select * from storage_pool\G
*************************** 1. row ***************************
                   id: 1
                 name: primary
                 uuid: 672631ab-5c14-3f60-97c6-1d633e60f7bd
            pool_type: NetworkFilesystem
                 port: 2049
       data_center_id: 1
               pod_id: NULL
           cluster_id: NULL
           used_bytes: 3413114880
       capacity_bytes: 984374837248
         host_address: 10.8.9.235
            user_info: NULL
                 path: /export/primary
              created: 2019-06-06 20:05:26
              removed: NULL
          update_time: NULL
               status: Up
storage_provider_name: DefaultPrimary
                scope: ZONE
           hypervisor: KVM
              managed: 0
        capacity_iops: NULL
*************************** 2. row ***************************
                   id: 3
                 name: lxc
                 uuid: 3b062c9d-0333-41c4-9ba8-f71f75a81d1f
            pool_type: SharedMountPoint
                 port: 0
       data_center_id: 2
               pod_id: 2
           cluster_id: 3
           used_bytes: 3413114880
       capacity_bytes: 984374837248
         host_address: localhost
            user_info: NULL
                 path: /mnt/primary
              created: 2019-06-13 18:32:26
              removed: NULL
          update_time: NULL
               status: Up
storage_provider_name: DefaultPrimary
                scope: CLUSTER
           hypervisor: NULL
              managed: 0
        capacity_iops: NULL
2 rows in set (0.00 sec)


On Fri, Jun 14, 2019 at 12:51 PM Andrija Panic 
wrote:

> Right... based on the logs, it used a different SQL query to search for
> ZONE-wide storage; execute this one please:
>
> SELECT storage_pool.id, storage_pool.name, storage_pool.uuid,
> storage_pool.pool_type, storage_pool.created, storage_pool.removed,
> storage_pool.update_time, storage_pool.data_center_id, storage_pool.pod_id,
> storage_pool.used_bytes, storage_pool.capacity_bytes, storage_pool.status,
> storage_pool.storage_provider_name, storage_pool.host_address,
> storage_pool.path, storage_pool.port, storage_pool.user_info,
> storage_pool.cluster_id, storage_pool.scope, storage_pool.managed,
> storage_pool.capacity_iops, storage_pool.hypervisor FROM storage_pool WHERE
> storage_pool.data_center_id = 1  AND storage_pool.status = 'Up'  AND
> storage_pool.scope = 'ZONE'  AND storage_pool.hypervisor = 'LXC'  AND
> storage_pool.removed IS NULL
>
> This SHOULD return 1 pool, unless there are some funny issues...
> I expect the storage_pool.hypervisor might be NULL instead of "LXC"
>
>  Anyway, execute the following:   "select * from storage_pool;"
>  to see what you have in DB...
>
> Best
> Andrija
>
>
> On Fri, 14 Jun 2019 at 18:13, Alejandro Ruiz Bermejo <
> arbermejo0...@gmail.com> wrote:
>
> > Hi and thanks again for your help,
> >
> > So I ran the query with both storage scope levels:
> >
> > with storage_pool.scope = 'CLUSTER'
> >
> > output:
> > Empty set (0.00 sec)
> >
> > with storage_pool.scope = 'ZONE'
> >
> >
> >
> ++-+--+---+-+-+-++++++---+--+-+--+---++---+-+---++
> > | id | name| uuid | pool_type
>  |
> > created | removed | update_time | data_center_id | pod_id |
> > used_bytes | capacity_bytes | status | storage_provider_name |
> host_address
> > | path| port | user_info | cluster_id | scope | managed |
> > capacity_iops | 

Re: launch instance error

2019-06-14 Thread Andrija Panic
Right... based on the logs, it used a different SQL query to search for
ZONE-wide storage; execute this one please:

SELECT storage_pool.id, storage_pool.name, storage_pool.uuid,
storage_pool.pool_type, storage_pool.created, storage_pool.removed,
storage_pool.update_time, storage_pool.data_center_id, storage_pool.pod_id,
storage_pool.used_bytes, storage_pool.capacity_bytes, storage_pool.status,
storage_pool.storage_provider_name, storage_pool.host_address,
storage_pool.path, storage_pool.port, storage_pool.user_info,
storage_pool.cluster_id, storage_pool.scope, storage_pool.managed,
storage_pool.capacity_iops, storage_pool.hypervisor FROM storage_pool WHERE
storage_pool.data_center_id = 1  AND storage_pool.status = 'Up'  AND
storage_pool.scope = 'ZONE'  AND storage_pool.hypervisor = 'LXC'  AND
storage_pool.removed IS NULL

This SHOULD return 1 pool, unless there are some funny issues...
I expect the storage_pool.hypervisor might be NULL instead of "LXC"

 Anyway, execute the following:   "select * from storage_pool;"
 to see what you have in DB...

Best
Andrija


On Fri, 14 Jun 2019 at 18:13, Alejandro Ruiz Bermejo <
arbermejo0...@gmail.com> wrote:

> Hi and thanks again for your help,
>
> So I ran the query with both storage scope levels:
>
> with storage_pool.scope = 'CLUSTER'
>
> output:
> Empty set (0.00 sec)
>
> with storage_pool.scope = 'ZONE'
>
>
> *************************** 1. row ***************************
>                    id: 1
>                  name: primary
>                  uuid: 672631ab-5c14-3f60-97c6-1d633e60f7bd
>             pool_type: NetworkFilesystem
>               created: 2019-06-06 20:05:26
>               removed: NULL
>           update_time: NULL
>        data_center_id: 1
>                pod_id: NULL
>            used_bytes: 3413114880
>        capacity_bytes: 984374837248
>                status: Up
> storage_provider_name: DefaultPrimary
>          host_address: 10.8.9.235
>                  path: /export/primary
>                  port: 2049
>             user_info: NULL
>            cluster_id: NULL
>                 scope: ZONE
>               managed: 0
>         capacity_iops: NULL
>            hypervisor: KVM
> 1 row in set (0.00 sec)
>
>
>
> On Fri, Jun 14, 2019 at 11:57 AM Andrija Panic 
> wrote:
>
> > Execute this one (from the logs) against your DB - does it return/find a
> > storage pool?
> >
> > SELECT storage_pool.id, storage_pool.name, storage_pool.uuid,
> > storage_pool.pool_type, storage_pool.created, storage_pool.removed,
> > storage_pool.update_time, storage_pool.data_center_id,
> storage_pool.pod_id,
> > storage_pool.used_bytes, storage_pool.capacity_bytes,
> storage_pool.status,
> > storage_pool.storage_provider_name, storage_pool.host_address,
> > storage_pool.path, storage_pool.port, storage_pool.user_info,
> > storage_pool.cluster_id, storage_pool.scope, storage_pool.managed,
> > storage_pool.capacity_iops, storage_pool.hypervisor FROM storage_pool
> WHERE
> > storage_pool.data_center_id = 1  AND storage_pool.status = 'Up'  AND
> > storage_pool.scope = 'CLUSTER'  AND  ( storage_pool.pod_id IS NULL  OR
> > storage_pool.pod_id = 1  )  AND  ( storage_pool.cluster_id IS NULL  OR
> > storage_pool.cluster_id = 2  )  AND storage_pool.removed IS NULL
> >
> > Tip from the query - it seems to be looking for CLUSTER-level, not
> > ZONE-level, storage scope.
> >
> >
> >
> > On Thu, 13 Jun 2019 at 16:35, Alejandro Ruiz Bermejo <
> > arbermejo0...@gmail.com> wrote:
> >
> > > Hi,
> > >
> > > I really need to get LXC working; I'm very interested in using CS and
> > > I need to move LXC instances to the new cloud. That's why I'm insisting
> > > on it. So please, if you see any errors or can help me, I will
> > > appreciate it.
> > >
> > > 1. I haven't used any Tag till now (should I?)
> > > 2. Already set the TRACE logging level; here is the content for an
> > > instance creation (https://pastebin.com/Yd9as8uF)
> > >
> > > Thanks for the help
> > > Alejandro
> > >
> > > On Wed, Jun 12, 2019 at 5:06 PM Andrija Panic  >
> > > wrote:
> > >
> > > > Well, this one still doesn't provide any useful info.
> > > >
> > > > 1. Can you confirm if you have tags set on LXC 

Re: launch instance error

2019-06-14 Thread Alejandro Ruiz Bermejo
t;
> > > > > >>
> > > > >
> > > >
> > >
> >
> com.cloud.offering.DiskOffering\":\"74099b7a-d7bd-44a8-b7da-b01e8e6fa2ed\",\"interface
> > > > > >> >
> > > > > >> >
> > > > > >>
> > > > >
> > > >
> > >
> >
> com.cloud.template.VirtualMachineTemplate\":\"574ce32a-cb06-4f9e-b423-cb2a3e053950\"}","_":"1560188536922"},
> > > > > >> > cmdVersion: 0, status: IN_PROGRESS, processStatus: 0,
> > resultCode:
> > > 0,
> > > > > >> > result: null, initMsid: 207380201932, completeMsid: null,
> > > > lastUpdated:
> > > > > >> > null, lastPolled: null, created: null}
> > > > > >> > 2019-06-10 13:40:11,248 DEBUG
> [o.a.c.f.j.i.AsyncJobManagerImpl]
> > > > > >> > (qtp895947612-329:ctx-37027106 ctx-58323b07) (logid:396f8582)
> > > submit
> > > > > >> async
> > > > > >> > job-41, details: AsyncJobVO {id:41, userId: 2, accountId: 2,
> > > > > >> instanceType:
> > > > > >> > VirtualMachine, instanceId: 10, cmd:
> > > > > >> > org.apache.cloudstack.api.command.admin.vm.DeployVMCmdByAdmin,
> > > > > cmdInfo:
> > > > > >> >
> > > > > >> >
> > > > > >>
> > > > >
> > > >
> > >
> >
> {"httpmethod":"GET","templateid":"574ce32a-cb06-4f9e-b423-cb2a3e053950","ctxAccountId":"2","uuid":"e5106e36-c36d-49e1-b8e8-4a82ce6fc025","cmdEventType":"VM.CREATE","diskofferingid":"74099b7a-d7bd-44a8-b7da-b01e8e6fa2ed","serviceofferingid":"57675b96-98fa-49a3-9437-24f9dbf3fd90","response":"json","ctxUserId":"2","hypervisor":"LXC","zoneid":"5b03beea-ffdd-45ea-84eb-64110f3ff0d0","ctxStartEventId":"108","id":"10","ctxDetails":"{\"interface
> > > > > >> >
> > > > > >> >
> > > > > >>
> > > > >
> > > >
> > >
> >
> com.cloud.vm.VirtualMachine\":\"e5106e36-c36d-49e1-b8e8-4a82ce6fc025\",\"interface
> > > > > >> >
> > > > > >> >
> > > > > >>
> > > > >
> > > >
> > >
> >
> com.cloud.offering.ServiceOffering\":\"57675b96-98fa-49a3-9437-24f9dbf3fd90\",\"interface
> > > > > >> >
> > > > > >> >
> > > > > >>
> > > > >
> > > >
> > >
> >
> com.cloud.dc.DataCenter\":\"5b03beea-ffdd-45ea-84eb-64110f3ff0d0\",\"interface
> > > > > >> >
> > > > > >> >
> > > > > >>
> > > > >
> > > >
> > >
> >
> com.cloud.offering.DiskOffering\":\"74099b7a-d7bd-44a8-b7da-b01e8e6fa2ed\",\"interface
> > > > > >> >
> > > > > >> >
> > > > > >>
> > > > >
> > > >
> > >
> >
> com.cloud.template.VirtualMachineTemplate\":\"574ce32a-cb06-4f9e-b423-cb2a3e053950\"}","_":"1560188536922"},
> > > > > >> > cmdVersion: 0, status: IN_PROGRESS, processStatus: 0,
> > resultCode:
> > > 0,
> > > > > >> > result: null, initMsid: 207380201932, completeMsid: null,
> > > > lastUpdated:
> > > > > >> > null, lastPolled: null, created: null}
> > > > > >> > 2019-06-10 13:40:11,347 DEBUG [c.c.n.NetworkModelImpl]
> > > > > >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> > > > > (logid:0a0b5a30)
> > > > > >> > Service SecurityGroup is not supported in the network id=204
> > > > > >> > 2019-06-10 13:40:11,349 DEBUG [c.c.n.NetworkModelImpl]
> > > > > >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> > > > > (logid:0a0b5a30)
> > > > > >> > Service SecurityGroup is not supported in the network id=204
> > > > > >> > 2019-06-10 13:40:11,3

Re: launch instance error

2019-06-14 Thread Andrija Panic
 (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> > > > (logid:0a0b5a30)
> > > > >> > Deploy avoids pods: [], clusters: [], hosts: []
> > > > >> > 2019-06-10 13:40:11,361 DEBUG [c.c.d.FirstFitPlanner]
> > > > >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> > > > (logid:0a0b5a30)
> > > > >> > Searching all possible resources under this Zone: 1
> > > > >> > 2019-06-10 13:40:11,362 DEBUG [c.c.d.FirstFitPlanner]
> > > > >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> > > > (logid:0a0b5a30)
> > > > >> > Listing clusters in order of aggregate capacity, that have
> > (atleast
> > > > one
> > > > >> > host with) enough CPU and RAM capacity under this Zone: 1
> > > > >> > 2019-06-10 13:40:11,364 DEBUG [c.c.d.FirstFitPlanner]
> > > > >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> > > > (logid:0a0b5a30)
> > > > >> > Removing from the clusterId list these clusters from avoid set:
> []
> > > > >> > 2019-06-10 13:40:11,371 DEBUG
> > [c.c.d.DeploymentPlanningManagerImpl]
> > > > >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> > > > (logid:0a0b5a30)
> > > > >> > Checking resources in Cluster: 2 under Pod: 1
> > > > >> > 2019-06-10 13:40:11,371 DEBUG [c.c.a.m.a.i.FirstFitAllocator]
> > > > >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7
> > > > >> > FirstFitRoutingAllocator) (logid:0a0b5a30) Looking for hosts in
> > dc:
> > > 1
> > > > >> >  pod:1  cluster:2
> > > > >> > 2019-06-10 13:40:11,372 DEBUG [c.c.a.m.a.i.FirstFitAllocator]
> > > > >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7
> > > > >> > FirstFitRoutingAllocator) (logid:0a0b5a30) FirstFitAllocator
> has 1
> > > > >> hosts to
> > > > >> > check for allocation: [Host[-4-Routing]]
> > > > >> > 2019-06-10 13:40:11,373 DEBUG [c.c.a.m.a.i.FirstFitAllocator]
> > > > >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7
> > > > >> > FirstFitRoutingAllocator) (logid:0a0b5a30) Not considering
> hosts:
> > > > >> > [Host[-4-Routing]]  to deploy template:
> > > > >> > Tmpl[201-TAR-201-2-ff22b996-a840-3cce-bec2-9ea6ab3081da as they
> > are
> > > > not
> > > > >> HVM
> > > > >> > enabled
> > > > >> > 2019-06-10 13:40:11,374 DEBUG [c.c.a.m.a.i.FirstFitAllocator]
> > > > >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7
> > > > >> > FirstFitRoutingAllocator) (logid:0a0b5a30) Found 0 hosts for
> > > > allocation
> > > > >> >
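
The "not HVM enabled" rejection in the log above depends on the capability
string the host registered; a minimal sketch to inspect it, assuming the
host table's capabilities column in the cloud database:

-- Sketch: see what capabilities the LXC host actually reported.
SELECT id, name, hypervisor_type, capabilities
FROM host
WHERE type = 'Routing'
  AND removed IS NULL;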

Re: launch instance error

2019-06-13 Thread Alejandro Ruiz Bermejo
oymentPlanningManagerImpl]
> > > >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> > > (logid:0a0b5a30)
> > > >> > Deploy avoids pods: [], clusters: [], hosts: []
> > > >> > 2019-06-10 13:40:11,361 DEBUG [c.c.d.FirstFitPlanner]
> > > >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> > > (logid:0a0b5a30)
> > > >> > Searching all possible resources under this Zone: 1
> > > >> > 2019-06-10 13:40:11,362 DEBUG [c.c.d.FirstFitPlanner]
> > > >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> > > (logid:0a0b5a30)
> > > >> > Listing clusters in order of aggregate capacity, that have
> (atleast
> > > one
> > > >> > host with) enough CPU and RAM capacity under this Zone: 1
> > > >> > 2019-06-10 13:40:11,364 DEBUG [c.c.d.FirstFitPlanner]
> > > >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> > > (logid:0a0b5a30)
> > > >> > Removing from the clusterId list these clusters from avoid set: []
> > > >> > 2019-06-10 13:40:11,371 DEBUG
> [c.c.d.DeploymentPlanningManagerImpl]
> > > >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> > > (logid:0a0b5a30)
> > > >> > Checking resources in Cluster: 2 under Pod: 1
> > > >> > 2019-06-10 13:40:11,371 DEBUG [c.c.a.m.a.i.FirstFitAllocator]
> > > >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7
> > > >> > FirstFitRoutingAllocator) (logid:0a0b5a30) Looking for hosts in
> dc:
> > 1
> > > >> >  pod:1  cluster:2
> > > >> > 2019-06-10 13:40:11,372 DEBUG [c.c.a.m.a.i.FirstFitAllocator]
> > > >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7
> > > >> > FirstFitRoutingAllocator) (logid:0a0b5a30) FirstFitAllocator has 1
> > > >> hosts to
> > > >> > check for allocation: [Host[-4-Routing]]
> > > >> > 2019-06-10 13:40:11,373 DEBUG [c.c.a.m.a.i.FirstFitAllocator]
> > > >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7
> > > >> > FirstFitRoutingAllocator) (logid:0a0b5a30) Not considering hosts:
> > > >> > [Host[-4-Routing]]  to deploy template:
> > > >> > Tmpl[201-TAR-201-2-ff22b996-a840-3cce-bec2-9ea6ab3081da as they
> are
> > > not
> > > >> HVM
> > > >> > enabled
> > > >> > 2019-06-10 13:40:11,374 DEBUG [c.c.a.m.a.i.FirstFitAllocator]
> > > >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7
> > > >> > FirstFitRoutingAllocator) (logid:0a0b5a30) Found 0 hosts for
> > > allocation
> > > >> > after prioritization: []
> > > >> > 2019-06-10 13:40:11,374 DEBUG [c.c.a.m.a.i.FirstFitAllocator]
> > > >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7
> > > >> > FirstFitRoutingAllocator) (logid:0a0b5a30) Looking for
> > speed=1000Mhz,
> > > >> > Ram=1024
> > > >> > 2019-06-10 13:40:11,374 DEBUG [c.c.a.m.a.i.FirstFitAllocator]
> > > >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7
> > > >> > FirstFitRoutingAllocator) (logid:0a0b5a30) Host Allocator
> returning
> > 0
> > > >> > suitable hosts
> > > >> > 2019-06-10 13:40:11,374 DEBUG
> [c.c.d.DeploymentPlanningManagerImpl]
> > > >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> > > (logid:0a0b5a30)
> > > >> No
> > > >> > suitable hosts found
> > > >> > 2019-06-10 13:40:11,374 DEBUG
> [c.c.d.DeploymentPlanningManagerImpl]
> > > >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> > > (logid:0a0b5a30)
> > > >> No
> > > >> > suitable hosts found under this Cluster: 2
> > > >> > 2019-06-10 13:40:11,375 DEBUG
> [c.c.d.DeploymentPlanningManagerImpl]
> > > >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> > > (logid:0a0b5a30)
> > > >> > Cluster: 1 has HyperVisorType that does not match the VM, skipping
> > > this
> > > >> > cluster
> > > >> > 2019-06-10 13:40:11,375 DEBUG
> [c.c.d.DeploymentPlanningManagerImpl]
> > > >> > 
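
The same "not HVM enabled" check also has a template side; a sketch to
inspect the registered LXC template's HVM flag, assuming the vm_template
table's hvm column:

-- Sketch: check whether the LXC template was registered as requiring HVM.
SELECT id, name, hypervisor_type, hvm
FROM vm_template
WHERE hypervisor_type = 'LXC'
  AND removed IS NULL;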

Re: launch instance error

2019-06-12 Thread Andrija Panic
upported in the network id=204
> > >> > 2019-06-10 13:40:11,356 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
> > >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> > (logid:0a0b5a30)
> > >> > DeploymentPlanner allocation algorithm: null
> > >> > 2019-06-10 13:40:11,356 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
> > >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> > (logid:0a0b5a30)
> > >> > Trying to allocate a host and storage pools from dc:1,
> > >> > pod:null,cluster:null, requested cpu: 1000, requested ram:
> 1073741824
> > >> > 2019-06-10 13:40:11,356 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
> > >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> > (logid:0a0b5a30)
> > >> Is
> > >> > ROOT volume READY (pool already allocated)?: No
> > >> > 2019-06-10 13:40:11,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
> > >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> > (logid:0a0b5a30)
> > >> > Deploy avoids pods: [], clusters: [], hosts: []
> > >> > 2019-06-10 13:40:11,361 DEBUG [c.c.d.FirstFitPlanner]
> > >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> > (logid:0a0b5a30)
> > >> > Searching all possible resources under this Zone: 1
> > >> > 2019-06-10 13:40:11,362 DEBUG [c.c.d.FirstFitPlanner]
> > >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> > (logid:0a0b5a30)
> > >> > Listing clusters in order of aggregate capacity, that have (atleast
> > one
> > >> > host with) enough CPU and RAM capacity under this Zone: 1
> > >> > 2019-06-10 13:40:11,364 DEBUG [c.c.d.FirstFitPlanner]
> > >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> > (logid:0a0b5a30)
> > >> > Removing from the clusterId list these clusters from avoid set: []
> > >> > 2019-06-10 13:40:11,371 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
> > >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> > (logid:0a0b5a30)
> > >> > Checking resources in Cluster: 2 under Pod: 1
> > >> > 2019-06-10 13:40:11,371 DEBUG [c.c.a.m.a.i.FirstFitAllocator]
> > >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7
> > >> > FirstFitRoutingAllocator) (logid:0a0b5a30) Looking for hosts in dc:
> 1
> > >> >  pod:1  cluster:2
> > >> > 2019-06-10 13:40:11,372 DEBUG [c.c.a.m.a.i.FirstFitAllocator]
> > >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7
> > >> > FirstFitRoutingAllocator) (logid:0a0b5a30) FirstFitAllocator has 1
> > >> hosts to
> > >> > check for allocation: [Host[-4-Routing]]
> > >> > 2019-06-10 13:40:11,373 DEBUG [c.c.a.m.a.i.FirstFitAllocator]
> > >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7
> > >> > FirstFitRoutingAllocator) (logid:0a0b5a30) Not considering hosts:
> > >> > [Host[-4-Routing]]  to deploy template:
> > >> > Tmpl[201-TAR-201-2-ff22b996-a840-3cce-bec2-9ea6ab3081da as they are
> > not
> > >> HVM
> > >> > enabled
> > >> > 2019-06-10 13:40:11,374 DEBUG [c.c.a.m.a.i.FirstFitAllocator]
> > >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7
> > >> > FirstFitRoutingAllocator) (logid:0a0b5a30) Found 0 hosts for
> > allocation
> > >> > after prioritization: []
> > >> > 2019-06-10 13:40:11,374 DEBUG [c.c.a.m.a.i.FirstFitAllocator]
> > >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7
> > >> > FirstFitRoutingAllocator) (logid:0a0b5a30) Looking for
> speed=1000Mhz,
> > >> > Ram=1024
> > >> > 2019-06-10 13:40:11,374 DEBUG [c.c.a.m.a.i.FirstFitAllocator]
> > >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7
> > >> > FirstFitRoutingAllocator) (logid:0a0b5a30) Host Allocator returning
> 0
> > >> > suitable hosts
> > >> > 2019-06-10 13:40:11,374 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
> > >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> > (logid:0a0b5a30)
> > >> No
> > >> > suitable hosts found
> > >> > 2019-06-10 13:40:11,374 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
> > >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> > (logid:0a0b5a30)
> > >> No
> > >> > suitable hosts found under this Cluster: 2

Re: launch instance error

2019-06-12 Thread Alejandro Ruiz Bermejo
> >> > 2019-06-10 13:40:11,374 DEBUG [c.c.a.m.a.i.FirstFitAllocator]
> >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7
> >> > FirstFitRoutingAllocator) (logid:0a0b5a30) Found 0 hosts for
> >> > allocation after prioritization: []
> >> > 2019-06-10 13:40:11,374 DEBUG [c.c.a.m.a.i.FirstFitAllocator]
> >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7
> >> > FirstFitRoutingAllocator) (logid:0a0b5a30) Looking for speed=1000Mhz,
> >> > Ram=1024
> >> > 2019-06-10 13:40:11,374 DEBUG [c.c.a.m.a.i.FirstFitAllocator]
> >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7
> >> > FirstFitRoutingAllocator) (logid:0a0b5a30) Host Allocator returning 0
> >> > suitable hosts
> >> > 2019-06-10 13:40:11,374 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
> >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> (logid:0a0b5a30)
> >> No
> >> > suitable hosts found
> >> > 2019-06-10 13:40:11,374 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
> >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> (logid:0a0b5a30)
> >> No
> >> > suitable hosts found under this Cluster: 2
> >> > 2019-06-10 13:40:11,375 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
> >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> (logid:0a0b5a30)
> >> > Cluster: 1 has HyperVisorType that does not match the VM, skipping
> this
> >> > cluster
> >> > 2019-06-10 13:40:11,375 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
> >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> (logid:0a0b5a30)
> >> > Could not find suitable Deployment Destination for this VM under any
> >> > clusters, returning.
> >> > 2019-06-10 13:40:11,375 DEBUG [c.c.d.FirstFitPlanner]
> >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> (logid:0a0b5a30)
> >> > Searching all possible resources under this Zone: 1
> >> > 2019-06-10 13:40:11,376 DEBUG [c.c.d.FirstFitPlanner]
> >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> (logid:0a0b5a30)
> >> > Listing clusters in order of aggregate capacity, that have (atleast
> one
> >> > host with) enough CPU and RAM capacity under this Zone: 1
> >> > 2019-06-10 13:40:11,378 DEBUG [c.c.d.FirstFitPlanner]
> >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> (logid:0a0b5a30)
> >> > Removing from the clusterId list these clusters from avoid set: [1, 2]
> >> > 2019-06-10 13:40:11,378 DEBUG [c.c.d.FirstFitPlanner]
> >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> (logid:0a0b5a30)
> >> No
> >> > clusters found after removing disabled clusters and clusters in avoid
> >> list,
> >> > returning.
> >> > 2019-06-10 13:40:11,380 DEBUG [c.c.v.UserVmManagerImpl]
> >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> (logid:0a0b5a30)
> >> > Destroying vm VM[User|i-2-10-VM] as it failed to create on Host with
> >> > Id:null
> >> > 2019-06-10 13:40:11,690 DEBUG [c.c.c.CapacityManagerImpl]
> >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> (logid:0a0b5a30)
> >> VM
> >> > state transitted from :Stopped to Error with event:
> >> > OperationFailedToErrorvm's original host id: null new host id: null
> >> host id
> >> > before state transition: null
> >> > 2019-06-10 13:40:11,964 DEBUG [c.c.r.ResourceLimitManagerImpl]
> >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> (logid:0a0b5a30)
> >> > Updating resource Type = volume count for Account = 2 Operation =
> >> > decreasing Amount = 1
> >> > 2019-06-10 13:40:12,064 DEBUG [c.c.r.ResourceLimitManagerImpl]
> >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> (logid:0a0b5a30)
> >> > Updating resource Type = primary_storage count for Account = 2
> >> Operation =
> >> > decreasing Amount = 358010880
> >> > 2019-06-10 13:40:12,457 DEBUG [c.c.r.ResourceLimitManagerImpl]
> >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> (logid:0a0b5a30)
> >> > Updating resource Type = volume count for Account = 2 Operation =
> >> > decreasing Amount = 1
> >> > 2019-06-10 13:40:12,598 DEBUG [c.c.r.ResourceLimitManagerImpl]
> >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> (logid:0a0b5a30)
> >> > Updating resource Type = primary_storage count for Account = 2
> >> Operation =
> >> > decreasing Amount = 21474836480
> >> > 2019-06-10 13:40:12,707 WARN  [o.a.c.alerts]
> >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> (logid:0a0b5a30)
> >> > AlertType:: 8 | dataCenterId:: 1 | podId:: null | clusterId:: null |
> >> > message:: Failed to deploy Vm with Id: 10, on Host with Id: null
> >> > 2019-06-10 13:40:12,790 DEBUG [c.c.r.ResourceLimitManagerImpl]
> >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> (logid:0a0b5a30)
> >> > Updating resource Type = user_vm count for Account = 2 Operation =
> >> > decreasing Amount = 1
> >> > 2019-06-10 13:40:12,873 DEBUG [c.c.r.ResourceLimitManagerImpl]
> >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> (logid:0a0b5a30)
> >> > Updating resource Type = cpu count for Account = 2 Operation =
> >> decreasing
> >> > Amount = 1
> >> > 2019-06-10 13:40:12,957 DEBUG [c.c.r.ResourceLimitManagerImpl]
> >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> (logid:0a0b5a30)
> >> > Updating resource Type = memory count for Account = 2 Operation =
> >> > decreasing Amount = 1024
> >> > 2019-06-10 13:40:13,124 INFO  [o.a.c.a.c.a.v.DeployVMCmdByAdmin]
> >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> (logid:0a0b5a30)
> >> > com.cloud.exception.InsufficientServerCapacityException: Unable to
> >> create a
> >> > deployment for VM[User|i-2-10-VM]Scope=interface
> >> com.cloud.dc.DataCenter;
> >> > id=1
> >> > 2019-06-10 13:40:13,124 INFO  [o.a.c.a.c.a.v.DeployVMCmdByAdmin]
> >> > (API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
> (logid:0a0b5a30)
> >> > Unable to create a deployment for VM[User|i-2-10-VM]
> >> > 2019-06-10 13:40:13,124 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
> >> > (API-Job-Executor-14:ctx-b92e08df job-41) (logid:0a0b5a30) Complete
> >> async
> >> > job-41, jobStatus: FAILED, resultCode: 530, result:
> >> >
> >> >
> >>
> org.apache.cloudstack.api.response.ExceptionResponse/null/{"uuidList":[],"errorcode":533,"errortext":"Unable
> >> > to create a deployment for VM[User|i-2-10-VM]"}
> >> > 2019-06-10 13:40:13,125 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
> >> > (API-Job-Executor-14:ctx-b92e08df job-41) (logid:0a0b5a30) Publish
> async
> >> > job-41 complete on message bus
> >> > 2019-06-10 13:40:13,125 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
> >> > (API-Job-Executor-14:ctx-b92e08df job-41) (logid:0a0b5a30) Wake up
> jobs
> >> > related to job-41
> >> > 2019-06-10 13:40:13,125 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
> >> > (API-Job-Executor-14:ctx-b92e08df job-41) (logid:0a0b5a30) Update db
> >> status
> >> > for job-41
> >> > 2019-06-10 13:40:13,125 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
> >> > (API-Job-Executor-14:ctx-b92e08df job-41) (logid:0a0b5a30) Wake up
> jobs
> >> > joined with job-41 and disjoin all subjobs created from job- 41
> >> > 2019-06-10 13:40:13,207 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
> >> > (API-Job-Executor-14:ctx-b92e08df job-41) (logid:0a0b5a30) Done
> >> executing
> >> > org.apache.cloudstack.api.command.admin.vm.DeployVMCmdByAdmin for
> job-41
> >> > 2019-06-10 13:40:13,207 INFO  [o.a.c.f.j.i.AsyncJobMonitor]
> >> > (API-Job-Executor-14:ctx-b92e08df job-41) (logid:0a0b5a30) Remove
> job-41
> >> > from job monitoring
> >> >
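
The "Not considering hosts ... as they are not HVM enabled" line above is the
telling one: the template appears to have been registered as requiring HVM,
which an LXC host cannot provide. A minimal sketch of re-registering it
without that requirement via cloudmonkey (the name, URL, and the zone/OS-type
UUIDs below are placeholders):

  cloudmonkey register template name=ubuntu-lxc displaytext="Ubuntu LXC" \
      url=http://<webserver>/ubuntu-lxc.tar.gz format=TAR hypervisor=LXC \
      zoneid=<zone-uuid> ostypeid=<ostype-uuid> requireshvm=false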


Re: launch instance error

2019-06-11 Thread Andrija Panic


Re: launch instance error

2019-06-11 Thread Alejandro Ruiz Bermejo

Re: launch instance error

2019-06-10 Thread Andrija Panic

Re: launch instance error

2019-06-10 Thread Alejandro Ruiz Bermejo


Re: launch instance error

2019-06-10 Thread Alejandro Ruiz Bermejo
Hi, thanks for your answer. As an update on my issue, I've kept digging into
it and found the log for the job. It seems the allocation server can't find
my LXC host inside the LXC cluster, so I'm no longer sure my trouble is
caused by the checksum. I attach here the file with the log output.

In case it is still a checksum problem, how can I verify which one was
expected?
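
For reference, one way to settle it is to recompute the MD5 of the template
file on secondary storage and compare it against both values from the alert
(a sketch; the NFS mount point and the template path are placeholders):

  md5sum /mnt/secondary/template/tmpl/2/201/*.tar.gz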

root@cloudstack-manager:/home/team# grep -i "job-41" 
/var/log/cloudstack/management/management-server.log
2019-06-10 13:40:11,238 INFO  [o.a.c.f.j.i.AsyncJobMonitor] 
(API-Job-Executor-14:ctx-b92e08df job-41) (logid:5a0455dd) Add job-41 into job 
monitoring
2019-06-10 13:40:11,246 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
(API-Job-Executor-14:ctx-b92e08df job-41) (logid:0a0b5a30) Executing AsyncJobVO 
{id:41, userId: 2, accountId: 2, instanceType: VirtualMachine, instanceId: 10, 
cmd: org.apache.cloudstack.api.command.admin.vm.DeployVMCmdByAdmin, cmdInfo: 
{"httpmethod":"GET","templateid":"574ce32a-cb06-4f9e-b423-cb2a3e053950","ctxAccountId":"2","uuid":"e5106e36-c36d-49e1-b8e8-4a82ce6fc025","cmdEventType":"VM.CREATE","diskofferingid":"74099b7a-d7bd-44a8-b7da-b01e8e6fa2ed","serviceofferingid":"57675b96-98fa-49a3-9437-24f9dbf3fd90","response":"json","ctxUserId":"2","hypervisor":"LXC","zoneid":"5b03beea-ffdd-45ea-84eb-64110f3ff0d0","ctxStartEventId":"108","id":"10","ctxDetails":"{\"interface
 
com.cloud.vm.VirtualMachine\":\"e5106e36-c36d-49e1-b8e8-4a82ce6fc025\",\"interface
 
com.cloud.offering.ServiceOffering\":\"57675b96-98fa-49a3-9437-24f9dbf3fd90\",\"interface
 com.cloud.dc.DataCenter\":\"5

Re: launch instance error

2019-06-10 Thread Nicolas Vazquez
Hi Alejandro,

Can you verify whether the expected checksum is correct and, in that case,
update the checksum column in the vm_template table for that template and
restart the management server?
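
In shell terms that would be roughly the following (a minimal sketch,
assuming the default 'cloud' database and credentials; the UUID is the
templateid that appears in the deploy job log, and the MD5 is the value the
download actually produced):

  # write the verified checksum back to the template row
  mysql -u cloud -p cloud -e "UPDATE vm_template \
      SET checksum='292bfea2667a3a23a71d42d53d68b8fc' \
      WHERE uuid='574ce32a-cb06-4f9e-b423-cb2a3e053950';"
  # then restart the management server so it picks up the change
  systemctl restart cloudstack-management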

Regards,
Nicolas Vazquez




launch instance error

2019-06-10 Thread Alejandro Ruiz Bermejo
Hi, I'm working with CloudStack 4.11.2.0.

This is my environment:
1 Zone
1 Pod
2 Clusters (LXC and KVM)
2 Hosts (one in each cluster)

I can launch VMs on the KVM cluster perfectly, but when I try to launch
a new VM with an LXC template I get this error:

2019-06-10 11:24:28,961 INFO  [a.c.c.a.ApiServer]
(qtp895947612-18:ctx-e1a5bd5e ctx-bc704f1b) (logid:2635d81d) (userId=2
accountId=2 sessionId=node0a04x8pj44gz8mf2hb5ikizk20) 10.8.2.116 -- GET
command=listAlerts&response=json&page=1&pagesize=4&_=1560180395194 200
{"listalertsresponse":{"count":16,"alert":[{"id":"c896ae0a-764b-4976-9dc1-7868f9f61e3c","type":8,"name":"ALERT.USERVM","description":"Failed
to deploy Vm with Id: 5, on Host with Id:
null","sent":"2019-06-10T11:15:19-0400"},{"id":"42988261-9238-430f-af54-1582073b35d0","type":8,"name":"ALERT.USERVM","description":"Failed
to deploy Vm with Id: 4, on Host with Id:
null","sent":"2019-06-10T10:36:40-0400"},{"id":"fc89bba6-9890-4aa7-b57a-3dc3a2e14643","type":8,"name":"ALERT.USERVM","description":"Failed
to deploy Vm with Id: 3, on Host with Id:
null","sent":"2019-06-10T10:31:05-0400"},{"id":"e3ab7e4b-5ce7-4c50-893e-0432c23f684f","type":28,"name":"ALERT.UPLOAD.FAILED","description":"Failed
to register template: 59eaea2c-8883-11e9-bc81-003048d2c1cc with error:
Failed post download script: checksum
\"{MD5}292bfea2667a3a23a71d42d53d68b8fc\" didn't match the given value,
\"{MD5}c2c4fa2d0978121c7977db571f132d6e\"","sent":"2019-06-10T10:28:19-0400"}]}}

I'm seeing two errors here: one is the host ID (*Host with Id: null*), and
the other one is a checksum verification failure.

When I created the LXC template I had to change the download keyserver for
the lxc-create -t download step, so I don't know if that is causing the
checksum error.
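
For illustration, the keyserver override was along these lines (the container
name, distribution, release, and keyserver here are placeholder values):

  lxc-create -t download -n lxc-template -- \
      -d ubuntu -r xenial -a amd64 \
      --keyserver hkp://keyserver.ubuntu.com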

Any help with this would be great.