Re: Backup VM on Cloudstack

2024-06-12 Thread benoit lair
Hi Giles,

Is there something similar to Backroll (https://github.com/DIMSI-IS/BackROLL) for
XenServer/XCP-ng?

Regards, Benoit Lair

On Wed, 12 Jun 2024 at 17:45, Joffrey LUANGSAYSANA wrote:

> Hi Jimmy,
>
> I am Joffrey from the Backroll team.
> That is correct: Backroll needs to have access to the storage. If for any
> reason you do not want the local storage exposed to Backroll, then
> it will not be able to perform backups.
>
> Regards,
>
> *Joffrey LUANGSAYSANA* Ingénieur Cloud
> *@ : j.luangsays...@dimsi.fr *
> *Paris* • Lorient • Lyon
> Microsoft Power Platform | Agilité | Cloud | Modern Apps | Code
>
>
> --
> *De :* Jimmy Huybrechts 
> *Envoyé :* mardi 11 juin 2024 11:20
> *À :* users@cloudstack.apache.org 
> *Objet :* Re: Backup VM on Cloudstack
>
> Hi Khang,
>
> Once tested, let us know how it went. I was looking at this too, but I think
> I was misled by this:
>
> To perform backup and restore tasks, Backroll's workers need an access to
> the VMs storage and to a backup storage.
>
> I only use local storage so that is not an option.
>
> --
> Jimmy
>
> From: Khang Nguyen Phuc 
> Date: Tuesday, 11 June 2024 at 04:18
> To: users@cloudstack.apache.org 
> Subject: Re: Backup VM on Cloudstack
> Hello everyone,
> Thank you for all your answers. I missed a few messages, so I couldn't
> reply in time. Thank you, Giles Sirett. I am experimenting with BackROLL
> and will review it as soon as possible.
> Best Regards,
>
> On Wed, May 29, 2024 at 11:04 PM Jimmy Huybrechts 
> wrote:
>
> > It says:
> > To perform backup and restore tasks, Backroll's workers need an access to
> > the VMs storage and to a backup storage.
> >
> > In case you use local storage for your vm’s it’s pretty useless then?
> >
> > --
> > Jimmy
> >
> > From: Giles Sirett 
> > Date: Wednesday, 29 May 2024 at 17:40
> > To: users@cloudstack.apache.org 
> > Subject: RE: Backup VM on Cloudstack
> > Hi Khang
> > The dummy provider is just that - it is a dummy provider for testing the
> > Backup and Recovery Framework - it doesn’t actually do any backups
> >
> > As Joao says - you can use Snapshots
> >
> > In terms of non commercial providers for the Backup & Recovery Framework,
> > have a look at Backroll
> > https://github.com/DIMSI-IS/BackROLL
> >
> >
> > Kind Regards
> > Giles
> >
> >
> >
> >
> > -Original Message-
> > From: João Jandre Paraquetti 
> > Sent: Friday, May 24, 2024 7:53 PM
> > To: users@cloudstack.apache.org
> > Subject: Re: Backup VM on Cloudstack
> >
> > Hello, Khang
> >
> > If your intention is to back up the VM's volumes, you can use the
> > 'Snapshot' feature on KVM, which will create a copy of the volume; this
> > can be done while the VM is stopped or running. With the
> > `snapshot.backup.to.secondary` configuration, the Snapshot will be backed
> > up to your secondary storage during the Snapshot process. Thus, if you
> > lose your primary storage, your secondary storage will still have backups of
> > your VM's volumes.
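
(For reference, a minimal sketch of driving this from the command line, assuming CloudMonkey (cmk) is configured against the management server; the volume UUID is a placeholder and the exact scope of the setting may differ by version:)

cmk update configuration name=snapshot.backup.to.secondary value=true
# take a volume snapshot; with the setting above it should also be copied to secondary storage
cmk create snapshot volumeid=<volume-uuid>
# check the snapshot state
cmk list snapshots volumeid=<volume-uuid>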
> >
> > Furthermore, version 4.20.0.0 will come with some optimizations of the
> > snapshot process, as well as introduce the concept of incremental snapshots
> > when using KVM as the hypervisor, which tends to be a lot faster and to save
> > storage space.
> >
> > Best Regards,
> > João Jandre.
> >
> > On 5/24/24 04:32, Khang Nguyen Phuc wrote:
> > > Hello everyone,
> > >
> > > I'm looking for some advice on solutions for backing up VMs running on
> > > KVM in Cloudstack. I found two plugins in the documentation, but they
> > > are both paid. I also saw the dummy backup, but introductions to
> > > Cloudstack mention that it is only for testing the API's
> > > functionality. Can I consider dummy backup as a "native" backup
> > > solution for Cloudstack? I see two offerings, 'Gold' and 'Silver,' but
> > > there is no information on how many backups it makes or for how long...
> > >
> > > Can you suggest a backup solution for me or provide clearer
> > > documentation about the dummy backup?
> > >
> > > Thank you very much.
> > >
> >
>


Re: ACS 4.16 - Change SystemVM template for CKS

2024-04-12 Thread benoit lair
I succeeded in installing the 4.19 systemvm template as the serving template for
the control and worker nodes with CKS community ISO 1.28.4.
When the cluster was stuck at "Starting", I connected to the controller node via p and saw
that docker was not installed, but containerd.io was.

I've done the following :

apt install docker-ce
cp /etc/containerd/config.toml /etc/containerd/config.toml.bck
containerd config default | tee /etc/containerd/config.toml
/opt/bin/setup-kube-system
/opt/bin/deploy-kube-system

on the control node, and then the same on the worker node.
The CS UI shows the Kubernetes config YAML, and
a "kubectl.exe --kubeconfig kube.conf.conf get nodes" returns:

NAME   STATUS   ROLES   AGEVERSION
k8s-cks-cl19-control-18ed18311c9   Readycontrol-plane   3h1m   v1.28.4
k8s-cks-cl19-node-18ed1850433  Ready  170m   v1.28.4

However, CS says the cluster is in Alert state,
and the dashboard is not working.

Any advice ?

when executing this on my laptop :
kubectl.exe --kubeconfig cl19_k8s_1.28.4.conf proxy

and opening this :
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

I have this result :
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "no endpoints available for service \"kubernetes-dashboard\"",
  "reason": "ServiceUnavailable",
  "code": 503
}

Has somebody else hit this problem with the dashboard?

Did I miss something when installing the cluster manually with the ACS tooling?
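
(For what it's worth, a 503 "no endpoints available" from the proxy usually means the kubernetes-dashboard pods never became Ready. A quick way to check, as a sketch reusing the same kubeconfig and assuming the dashboard lives in the kubernetes-dashboard namespace shown in the URL above:)

kubectl.exe --kubeconfig kube.conf.conf get pods -n kubernetes-dashboard -o wide
kubectl.exe --kubeconfig kube.conf.conf get events -n kubernetes-dashboard --sort-by=.lastTimestamp
kubectl.exe --kubeconfig kube.conf.conf describe deployment kubernetes-dashboard -n kubernetes-dashboard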

On Thu, 11 Apr 2024 at 17:40, benoit lair wrote:

> Hi Wei,
>
> Thanks for sharing, I also tried to install the 4.19 systemvm template.
> I have the control node and worker node on the systemvm 4.19 template (
> http://download.cloudstack.org/systemvm/4.19/systemvmtemplate-4.19.0-xen.vhd.bz2
> )
> I tried with community CKS ISOs 1.25.0 and 1.28.9.
> On systemvm 4.19, docker was not present by default, just containerd in
> a 1.6.x version.
> I installed docker-ce and docker-ce-cli :
>
> apt install -y apt-transport-https ca-certificates curl
> software-properties-common
> curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg
> --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
> echo "deb [arch=$(dpkg --print-architecture)
> signed-by=/usr/share/keyrings/docker-archive-keyring.gpg]
> https://download.docker.com/linux/debian $(lsb_release -cs) stable" |
> sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
> remove the duplicate docker entry from /etc/apt/sources.list
> apt update
> apt install -y docker-ce
> apt install -y docker-ce-cli
> I mounted the 1.28.4 CKS community ISO in the control VM
>
> and ran /opt/bin/setup-kube-system;
> I got the errors below.
>
> Is this the right way to manually install the kube system?
>
> root@k8s-cks-cl17-control-18ecdc1e5bd:/home/core#
> /opt/bin/setup-kube-system
> Installing binaries from /mnt/k8sdisk/
> 5b1fa8e3e100: Loading layer
> [==>]  803.8kB/803.8kB
> 39c831b1aa26: Loading layer
> [==>]  26.25MB/26.25MB
> Loaded image: apache/cloudstack-kubernetes-autoscaler:latest
> 417cb9b79ade: Loading layer
> [==>]  657.7kB/657.7kB
> 8d323b160d65: Loading layer
> [==>]  24.95MB/24.95MB
> Loaded image: apache/cloudstack-kubernetes-provider:v1.0.0
> 6a4a177e62f3: Loading layer
> [==>]  115.2kB/115.2kB
> 398c9baff0ce: Loading layer
> [==>]  16.07MB/16.07MB
> Loaded image: registry.k8s.io/coredns/coredns:v1.10.1
> bd8a70623766: Loading layer
> [==>]  75.78MB/75.78MB
> c88361932af5: Loading layer
> [==>] 508B/508B
> Loaded image: kubernetesui/dashboard:v2.7.0
> e023e0e48e6e: Loading layer
> [==>]  103.7kB/103.7kB
> 6fbdf253bbc2: Loading layer
> [==>]   21.2kB/21.2kB
> 7bea6b893187: Loading layer
> [==>]  716.5kB/716.5kB
> ff5700ec5418: Loading layer
> [==>] 317B/317B
> d52f02c6501c: Loading layer
> [==>] 198B/198B
> e624a5370eca: Loading layer
> [==>] 113B/113B
> 1a73b54f556b: Loading layer
> [==>] 3

Re: ACS 4.16 - Change SystemVM template for CKS

2024-04-11 Thread benoit lair
>]  16.85MB/16.85MB
Loaded image: registry.k8s.io/kube-proxy:v1.28.4
f35f4c1ae44f: Loading layer
[==>]  17.13MB/17.13MB
Loaded image: registry.k8s.io/kube-scheduler:v1.28.4
d01384fea991: Loading layer
[==>]  19.74MB/19.74MB
bcec7eb9e567: Loading layer
[==>] 530B/530B
Loaded image: kubernetesui/metrics-scraper:v1.0.8
e3e5579ddd43: Loading layer
[==>]  317.6kB/317.6kB
Loaded image: registry.k8s.io/pause:3.9
1b3ee35aacca: Loading layer
[==>]  2.796MB/2.796MB
910ce076f504: Loading layer
[==>]  7.425MB/7.425MB
a8e8b7b8e08a: Loading layer
[==>]  8.101MB/8.101MB
084e56a9c24b: Loading layer
[==>]  4.376MB/4.376MB
334d70dc85ec: Loading layer
[==>] 159B/159B
a13197d8bda5: Loading layer
[==>]  8.216MB/8.216MB
Loaded image: weaveworks/weave-kube:2.8.1
998efb010df6: Loading layer
[==>]  728.2kB/728.2kB
ed37391def99: Loading layer
[==>]   9.28MB/9.28MB
175a472c5f77: Loading layer
[==>] 299B/299B
a8764e32e9fe: Loading layer
[==>] 613B/613B
Loaded image: weaveworks/weave-npc:2.8.1
net.bridge.bridge-nf-call-iptables = 1
[init] Using Kubernetes version: v1.28.4
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running: output:
time="2024-04-11T15:33:26Z" level=fatal msg="validate service connection:
validate CRI v1 runtime API for endpoint
\"unix:///var/run/containerd/containerd.sock\": rpc error: code =
Unimplemented desc = unknown service runtime.v1.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal
with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
[init] Using Kubernetes version: v1.28.4
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running: output:
time="2024-04-11T15:33:26Z" level=fatal msg="validate service connection:
validate CRI v1 runtime API for endpoint
\"unix:///var/run/containerd/containerd.sock\": rpc error: code =
Unimplemented desc = unknown service runtime.v1.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal
with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
[init] Using Kubernetes version: v1.28.4
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running: output:
time="2024-04-11T15:33:27Z" level=fatal msg="validate service connection:
validate CRI v1 runtime API for endpoint
\"unix:///var/run/containerd/containerd.sock\": rpc error: code =
Unimplemented desc = unknown service runtime.v1.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal
with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
Error: kubeadm init failed!
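
(For reference, a minimal sketch of the containerd-side workaround described in the 2024-04-12 follow-up above: regenerate a default containerd config so the CRI v1 service is served, then re-run the CKS setup. The containerd restart and the crictl check are assumptions added here, not part of the original steps:)

cp /etc/containerd/config.toml /etc/containerd/config.toml.bck     # keep the CKS-provided config as a backup
containerd config default | tee /etc/containerd/config.toml        # regenerate a default config with the CRI plugin enabled
systemctl restart containerd                                       # assumption: reload containerd so kubeadm sees the CRI v1 endpoint
crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock info   # assumption: verify the endpoint answers before retrying
/opt/bin/setup-kube-system                                         # re-run the CKS bootstrap
/opt/bin/deploy-kube-system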





On Thu, 11 Apr 2024 at 17:19, Wei ZHOU wrote:

> Hi,
>
> Please refer to
> https://github.com/apache/cloudstack/issues/8681#issuecomment-1999083241
>
> The containerd in the 4.16/4.17 systemvm template is too old.
>
> -Wei
>
>
> On Thursday, April 11, 2024, benoit lair  wrote:
>
> > Hi Rohit,
> >
> > I already tested different ISOs from the CloudStack CKS repos.
> > It is OK on ACS 4.16 with community ISO 1.23.3, but it fails from 1.24.0
> > onwards.
> >
> > I tried to upgrade from 1.23.3 to 1.24.0, and it fails.
> > I tried to bootstrap a k8s cluster with 1.24.0, and also with 1.25, 1.27.3
> > and 1.28.4.
> >
> > As a last try, I tested the k8s community ISO 1.25.0.
> > In the CS UI it says "Create Kubernetes Cluster k8s-cks-cl16 in progress"
> > and then it fails.
> > If I try to ssh to the control node via the VR public IP and port,
> > nothing is being executed.
> >
> > If I try a manual install:
> > core@k8s-cks-cl16-co

Re: ACS 4.16 - Change SystemVM template for CKS

2024-04-11 Thread benoit lair
Rohit, does that mean I can override the integrated Kubernetes provider
plugin of CS 4.16 with CAPC? Can I still manage it via the CS UI, or only
with API calls?

On Thu, 11 Apr 2024 at 13:39, benoit lair wrote:

> Rohit,
>
> Is it possible to install CAPC on CS 4.16?
>
> Le jeu. 11 avr. 2024 à 12:03, Rohit Yadav  a
> écrit :
>
>> Hi Benoit,
>>
>> The CKS feature has been improving over versions, I don't know if what
>> you're trying to achieve is possible with it. Maybe try a different version
>> of the data iso:
>> http://download.cloudstack.org/cks/
>>
>> Alternatively, you can also have a look at the CAPC project:
>> https://cluster-api-cloudstack.sigs.k8s.io
>>
>>
>> Regards.
>>
>>
>>
>>
>> 
>> From: benoit lair 
>> Sent: Thursday, April 11, 2024 14:16
>> To: users@cloudstack.apache.org ; dev <
>> d...@cloudstack.apache.org>
>> Subject: Re: ACS 4.16 - Change SystemVM template for CKS
>>
>> Any advices ?
>>
>> Best regards
>>
>> Le lun. 8 avr. 2024 à 16:53, benoit lair  a écrit
>> :
>>
>> > I am opened to every alternatives of changing system vm templates
>> > I just need to run K8s clyusters 1.28 with my CS 4.16 :)
>> >
>> > Le lun. 8 avr. 2024 à 16:52, benoit lair  a
>> écrit :
>> >
>> >> Hello Folks,
>> >>
>> >> I am trying to install K8s cluster with community iso 1.28.4
>> >> I am with a ACS 4.16.0 environment
>> >> It seems K8S is not working out of the box with v > 1.23.3 due to
>> >> containerd.io version
>> >>
>> >> I would like to release a template who will work with K8s > 1.23.3 on
>> acs
>> >> 4.16
>> >> How can i tell CS to take another system vm template, avoiding to mess
>> >> normal features with VR and VPC VR keeping the systemvm template issued
>> >> with CS 4.16 ?
>> >>
>> >> I am with XCP-NG 8.2.1 in production
>> >>
>> >> Thanks for your help or advises
>> >> Best regards,
>> >> Benoit
>> >>
>> >
>>
>


Re: ACS 4.16 - Change SystemVM template for CKS

2024-04-11 Thread benoit lair
Rohit,

Is it possible to install CAPC on CS 4.16?

On Thu, 11 Apr 2024 at 12:03, Rohit Yadav wrote:

> Hi Benoit,
>
> The CKS feature has been improving over versions, I don't know if what
> you're trying to achieve is possible with it. Maybe try a different version
> of the data iso:
> http://download.cloudstack.org/cks/
>
> Alternatively, you can also have a look at the CAPC project:
> https://cluster-api-cloudstack.sigs.k8s.io
>
>
> Regards.
>
>
>
>
> ____
> From: benoit lair 
> Sent: Thursday, April 11, 2024 14:16
> To: users@cloudstack.apache.org ; dev <
> d...@cloudstack.apache.org>
> Subject: Re: ACS 4.16 - Change SystemVM template for CKS
>
> Any advices ?
>
> Best regards
>
> Le lun. 8 avr. 2024 à 16:53, benoit lair  a écrit :
>
> > I am opened to every alternatives of changing system vm templates
> > I just need to run K8s clyusters 1.28 with my CS 4.16 :)
> >
> > Le lun. 8 avr. 2024 à 16:52, benoit lair  a
> écrit :
> >
> >> Hello Folks,
> >>
> >> I am trying to install K8s cluster with community iso 1.28.4
> >> I am with a ACS 4.16.0 environment
> >> It seems K8S is not working out of the box with v > 1.23.3 due to
> >> containerd.io version
> >>
> >> I would like to release a template who will work with K8s > 1.23.3 on
> acs
> >> 4.16
> >> How can i tell CS to take another system vm template, avoiding to mess
> >> normal features with VR and VPC VR keeping the systemvm template issued
> >> with CS 4.16 ?
> >>
> >> I am with XCP-NG 8.2.1 in production
> >>
> >> Thanks for your help or advises
> >> Best regards,
> >> Benoit
> >>
> >
>


Re: ACS 4.16 - Change SystemVM template for CKS

2024-04-11 Thread benoit lair
errors occurred:
[ERROR CRI]: container runtime is not running: output: E0411
11:38:11.935458   12988 remote_runtime.go:948] "Status from runtime service
failed" err="rpc error: code = Unimplemented desc = unknown service
runtime.v1alpha2.RuntimeService"
time="2024-04-11T11:38:11Z" level=fatal msg="getting status of runtime: rpc
error: code = Unimplemented desc = unknown service
runtime.v1alpha2.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal
with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
[init] Using Kubernetes version: v1.25.0
[preflight] Running pre-flight checks
[WARNING SystemVerification]: missing optional cgroups: blkio
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running: output: E0411
11:38:12.079422   13015 remote_runtime.go:948] "Status from runtime service
failed" err="rpc error: code = Unimplemented desc = unknown service
runtime.v1alpha2.RuntimeService"
time="2024-04-11T11:38:12Z" level=fatal msg="getting status of runtime: rpc
error: code = Unimplemented desc = unknown service
runtime.v1alpha2.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal
with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
[init] Using Kubernetes version: v1.25.0
[preflight] Running pre-flight checks
[WARNING SystemVerification]: missing optional cgroups: blkio
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running: output: E0411
11:38:12.228669   13043 remote_runtime.go:948] "Status from runtime service
failed" err="rpc error: code = Unimplemented desc = unknown service
runtime.v1alpha2.RuntimeService"
time="2024-04-11T11:38:12Z" level=fatal msg="getting status of runtime: rpc
error: code = Unimplemented desc = unknown service
runtime.v1alpha2.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal
with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
Error: kubeadm init failed!


On Thu, 11 Apr 2024 at 12:03, Rohit Yadav wrote:

> Hi Benoit,
>
> The CKS feature has been improving over versions, I don't know if what
> you're trying to achieve is possible with it. Maybe try a different version
> of the data iso:
> http://download.cloudstack.org/cks/
>
> Alternatively, you can also have a look at the CAPC project:
> https://cluster-api-cloudstack.sigs.k8s.io
>
>
> Regards.
>
>
>
>
> ____
> From: benoit lair 
> Sent: Thursday, April 11, 2024 14:16
> To: users@cloudstack.apache.org ; dev <
> d...@cloudstack.apache.org>
> Subject: Re: ACS 4.16 - Change SystemVM template for CKS
>
> Any advices ?
>
> Best regards
>
> Le lun. 8 avr. 2024 à 16:53, benoit lair  a écrit :
>
> > I am opened to every alternatives of changing system vm templates
> > I just need to run K8s clyusters 1.28 with my CS 4.16 :)
> >
> > Le lun. 8 avr. 2024 à 16:52, benoit lair  a
> écrit :
> >
> >> Hello Folks,
> >>
> >> I am trying to install K8s cluster with community iso 1.28.4
> >> I am with a ACS 4.16.0 environment
> >> It seems K8S is not working out of the box with v > 1.23.3 due to
> >> containerd.io version
> >>
> >> I would like to release a template who will work with K8s > 1.23.3 on
> acs
> >> 4.16
> >> How can i tell CS to take another system vm template, avoiding to mess
> >> normal features with VR and VPC VR keeping the systemvm template issued
> >> with CS 4.16 ?
> >>
> >> I am with XCP-NG 8.2.1 in production
> >>
> >> Thanks for your help or advises
> >> Best regards,
> >> Benoit
> >>
> >
>


Re: ACS 4.16 - Change SystemVM template for CKS

2024-04-11 Thread benoit lair
Any advice?

Best regards

On Mon, 8 Apr 2024 at 16:53, benoit lair wrote:

> I am opened to every alternatives of changing system vm templates
> I just need to run K8s clyusters 1.28 with my CS 4.16 :)
>
> Le lun. 8 avr. 2024 à 16:52, benoit lair  a écrit :
>
>> Hello Folks,
>>
>> I am trying to install K8s cluster with community iso 1.28.4
>> I am with a ACS 4.16.0 environment
>> It seems K8S is not working out of the box with v > 1.23.3 due to
>> containerd.io version
>>
>> I would like to release a template who will work with K8s > 1.23.3 on acs
>> 4.16
>> How can i tell CS to take another system vm template, avoiding to mess
>> normal features with VR and VPC VR keeping the systemvm template issued
>> with CS 4.16 ?
>>
>> I am with XCP-NG 8.2.1 in production
>>
>> Thanks for your help or advises
>> Best regards,
>> Benoit
>>
>


Re: ACS 4.16 - Change SystemVM template for CKS

2024-04-08 Thread benoit lair
I am open to any alternative for changing the system VM template.
I just need to run K8s 1.28 clusters with my CS 4.16 :)

On Mon, 8 Apr 2024 at 16:52, benoit lair wrote:

> Hello Folks,
>
> I am trying to install K8s cluster with community iso 1.28.4
> I am with a ACS 4.16.0 environment
> It seems K8S is not working out of the box with v > 1.23.3 due to
> containerd.io version
>
> I would like to release a template who will work with K8s > 1.23.3 on acs
> 4.16
> How can i tell CS to take another system vm template, avoiding to mess
> normal features with VR and VPC VR keeping the systemvm template issued
> with CS 4.16 ?
>
> I am with XCP-NG 8.2.1 in production
>
> Thanks for your help or advises
> Best regards,
> Benoit
>


ACS 4.16 - Change SystemVM template for CKS

2024-04-08 Thread benoit lair
Hello Folks,

I am trying to install a K8s cluster with community ISO 1.28.4.
I am on an ACS 4.16.0 environment.
It seems K8s does not work out of the box with versions > 1.23.3, due to the
containerd.io version.

I would like to build a template that will work with K8s > 1.23.3 on ACS
4.16.
How can I tell CS to use another system VM template for that, without breaking
normal features, i.e. keeping the VR and VPC VR on the systemvm template issued
with CS 4.16?

I am on XCP-ng 8.2.1 in production.

Thanks for your help or advice.
Best regards,
Benoit


Re: ACS 4.16 - Issues after configuring secondary ip on VPC

2022-08-19 Thread benoit lair
Hello,
I tried to reboot with the "Clean up" VPC option;
I still have errors from haproxy_check.py.
It tells me: Missing section for load balancing listen a.b.c.d-1234
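
(For reference, a rough way to compare what the VR actually generated with what the health checks expect, as a sketch run on the VPC VR; the paths are the usual ones on recent Debian-based system VMs and may differ by version:)

grep -n "^listen" /etc/haproxy/haproxy.cfg                 # load-balancing sections that really exist
grep -i 02:00:54:22:00:58 /etc/dhcphosts.txt /etc/hosts    # the MAC that dnsmasq rejects in the log quoted below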


On Tue, 16 Aug 2022 at 18:52, Ricardo Pertuz wrote:

> Try doing a VPC CleanUp
>
> BR,
>
> Ricardo
>
> On 16/08/22, 11:51 AM, "benoit lair"  wrote:
>
> Hello Folks,
>
> I tried to add a secondary ip on a VM (acs 4.16+xcp-ng) which is on a
> VPC
> After what i cant no more have dhcp lease after rebooting vm
>
> I have theses errors on VPC Vr in /var/log/dnsmasq.log
> Aug 16 16:23:51 dnsmasq-dhcp[2993336]: DHCPDISCOVER(eth4)
> 02:00:54:22:00:58
> no address available
> Aug 16 16:23:58 dnsmasq-dhcp[2993336]: DHCPDISCOVER(eth4)
> 02:00:54:22:00:58
> no address available
>
> Any ideas ?
>
> I cant add new vms, network doesnt assign anymore some ips from VR
>
> Also i have errors on dhcp_check, dns_check and haproxy_check.py on VR
>
> dhcp_check : Missing elements in dhcphosts.txt - ...
> dns_check : Missing entries for VMs in /etc/hosts
> haproxy_check : Missing section for load balancing...
>
> Regards, Benoit
>
>


ACS 4.16 - Issues after configuring secondary ip on VPC

2022-08-16 Thread benoit lair
Hello Folks,

I tried to add a secondary IP on a VM (ACS 4.16 + XCP-ng) which is on a VPC.
After that, the VM can no longer get a DHCP lease after rebooting.

I have these errors on the VPC VR in /var/log/dnsmasq.log:
Aug 16 16:23:51 dnsmasq-dhcp[2993336]: DHCPDISCOVER(eth4) 02:00:54:22:00:58
no address available
Aug 16 16:23:58 dnsmasq-dhcp[2993336]: DHCPDISCOVER(eth4) 02:00:54:22:00:58
no address available

Any ideas ?

I can't add new VMs; the network no longer assigns IPs from the VR.

I also have errors from dhcp_check, dns_check and haproxy_check.py on the VR:

dhcp_check : Missing elements in dhcphosts.txt - ...
dns_check : Missing entries for VMs in /etc/hosts
haproxy_check : Missing section for load balancing...

Regards, Benoit


Re: ACS 4.16.1 ::XCP-ng 8.2.1 CS Guest VM can't communicate with virtual routers when they are on different hosts

2022-05-03 Thread benoit lair
Hello Midhun,

I faced an issue during the last update:
updating to XCP-ng 8.2.1 causes bugs in some features.

Take a look at the issue I opened about it. There is a fix I tried that let me
keep working with XCP-ng 8.2.1:

https://github.com/apache/cloudstack/issues/6349

Hope this helps.


On Tue, 12 Apr 2022 at 09:01, Midhun Jose wrote:

> Hi vivek/Nux,
>
> Our network department updated that the ethernet ports on our switch are
> access ports, not trunk ports,  and hence no vlans are allowed.
> They asked us to check the configuration in the virtual router and make
> sure the vlans are allowed.
> could you please suggest anything on this.
>
>
> Midhun Jose
>
>
> - Original Message -
> From: "Vivek Kumar" 
> To: "users" 
> Sent: Thursday, April 7, 2022 1:17:07 PM
> Subject: Re: ACS 4.16.1 ::XCP-ng 8.2.1 CS Guest VM can't communicate with
> virtual routers when they are on different hosts
>
> Hello Midhun,
>
> This typically happens when your guest VLAN range is not allowed in the
> backend switch ports. So allow all of your VLAN range on the ports where
> you have defined your guest traffic.
>
>
>
> Regards,
> Vivek Kumar
>
>
> > On 07-Apr-2022, at 12:13 PM, Midhun Jose 
> wrote:
> >
> > Hi @All,
> >
> > I'm using Cloudstack 4.16.1 with XCP-ng Cluster having 2 hosts.
> > I am facing an issue: when the virtual router is created on host 1 and a guest
> > VM that uses that virtual router is created on host 2, there is no
> > connectivity between the VM and the VR.
> > (refer the screenshot attached.)
> > But when both virtual router and guest VM are created on the same host
> everything works like normal.
> > Did I miss something on configuring the network?
> >
> > Best Regards,
> > Midhun Jose
> >
>
>
> --
> This message is intended only for the use of the individual or entity to
> which it is addressed and may contain confidential and/or privileged
> information. If you are not the intended recipient, please delete the
> original message and any copy of it from your computer system. You are
> hereby notified that any dissemination, distribution or copying of this
> communication is strictly prohibited unless proper authorization has been
> obtained for such action. If you have received this communication in
> error,
> please notify the sender immediately. Although IndiQus attempts to sweep
> e-mail and attachments for viruses, it does not guarantee that both are
> virus-free and accepts no liability for any damage sustained as a result
> of
> viruses.
>


Re: ACS 4.16 and xcp-ng - cant live storage migration

2022-05-03 Thread benoit lair
Hello Wei,

I have opened the issue here: https://github.com/apache/cloudstack/issues/6349

Have a nice day

On Tue, 3 May 2022 at 11:19, benoit lair wrote:

> Hello Wei,
>
> Yes i'm going to open an issue :)
> I am doing some units tests on xcp-ng 8.2.1 with acs 4.16
>
> Le mar. 3 mai 2022 à 10:54, Wei ZHOU  a écrit :
>
>> Good, you have solved the problem.
>>
>> CloudStack supports 8.2.0 but not 8.2.1.
>>
>> Can you add a github issue ? we could support it in future releases.
>>
>> -Wei
>>
>>
>> On Tue, 3 May 2022 at 10:02, benoit lair  wrote:
>>
>> > I precise after adding these 2 two lines into  hypervisor_capabilities
>> and
>> > guest_os_hypervisor this fixed the feature of live storage migration
>> for me
>> >
>> > Le mar. 3 mai 2022 à 10:01, benoit lair  a
>> écrit :
>> >
>> > > Hello Antoine,
>> > >
>> > > I saw that this time my yum update upgraded me to 8.2.1
>> > > You were in 8.2.1 too ?
>> > >
>> > > I tried this fix in ACS :
>> > >
>> > > #add hypervsisor xcp 8.2.1 to acs 4.16
>> > > INSERT IGNORE INTO `cloud`.`hypervisor_capabilities`(uuid,
>> > > hypervisor_type,
>> > > hypervisor_version, max_guests_limit, max_data_volumes_limit,
>> > > max_hosts_per_cluster, storage_motion_supported) values (UUID(),
>> > > 'XenServer',
>> > > '8.2.1', 1000, 253, 64, 1);
>> > >
>> > > +-- Copy XenServer 8.2.0 hypervisor guest OS mappings to XenServer
>> 8.2.1
>> > > +INSERT IGNORE INTO `cloud`.`guest_os_hypervisor`
>> (uuid,hypervisor_type,
>> > > hypervisor_version, guest_os_name, guest_os_id, created,
>> is_user_defined)
>> > > SELECT UUID(),'Xenserver', '8.2.1', guest_os_name, guest_os_id,
>> > > utc_timestamp(), 0 FROM `cloud`.`guest_os_hypervisor` WHERE
>> > > hypervisor_type='Xenserver' AND hypervisor_version='8.2.0';
>> > >
>> > > Theses are the fix used to add xcp-ng 8.2.0 to ACS 4.15
>> > >
>> > > Here i adapted the fix to copy guest os mapping from xcp-ng 8.2.0
>> > > capabilities
>> > >
>> > > I tried to reboot and this is not working on another Cloudstack mgmt
>> > > instance with xcp-ng 8.2 freshly patched to 8.2.1 with yum update
>> > >
>> > >
>> > > Regards, Benoit
>> > >
>> > > Le lun. 2 mai 2022 à 19:46, Antoine Boucher  a
>> > > écrit :
>> > >
>> > >> Bonjour Benoit,
>> > >>
>> > >> I had similar issues after I did a yum update and I was only able to
>> > fitx
>> > >> the issue by rebooting my hosts.
>> > >>
>> > >> -Antoine
>> > >>
>> > >> > On May 2, 2022, at 12:04 PM, benoit lair 
>> > wrote:
>> > >> >
>> > >> > Hello all,
>> > >> >
>> > >> > This is surely due to my yum update which updated to xcp 8.2.1
>> > >> >
>> > >> > Do anybody know how to fix this ? xcp 8.2.1 is compatible ? would
>> it
>> > be
>> > >> > possible to add hypervisor capabilities without doing it in beta
>> mode
>> > ?
>> > >> >
>> > >> > Le lun. 2 mai 2022 à 16:15, benoit lair  a
>> > >> écrit :
>> > >> >
>> > >> >> Hello folks,
>> > >> >>
>> > >> >> I have a several issue
>> > >> >> I try to live migrate my storage vm disks on a xcp-ng 8.2 cluster
>> > and i
>> > >> >> cant live migrate
>> > >> >> When clicking on the "Migrate volume" button, i have the following
>> > >> message
>> > >> >> :
>> > >> >>
>> > >> >> No primary storage pools available for migration
>> > >> >>
>> > >> >> and  it generates this in logs : "the hypervisor doesn't support
>> > >> storage
>> > >> >> motion."
>> > >> >>
>> > >> >> 2022-05-02 15:52:33,120 DEBUG [c.c.a.ApiServlet]
>> > >> >> (qtp1850777594-186961:ctx-2ee90dcf) (logid:1b094155) ===START===
>> > >> >> 192.168.4.30 -- GET
>> > >> >>
>> > >>
>> >
>> id=b8d15b4c-93e9-4931-81ab-26a47ada32d5=fi

Re: ACS 4.16 and xcp-ng - cant live storage migration

2022-05-03 Thread benoit lair
Hello Wei,

Yes, I'm going to open an issue :)
I am doing some unit tests on XCP-ng 8.2.1 with ACS 4.16.

On Tue, 3 May 2022 at 10:54, Wei ZHOU wrote:

> Good, you have solved the problem.
>
> CloudStack supports 8.2.0 but not 8.2.1.
>
> Can you add a github issue ? we could support it in future releases.
>
> -Wei
>
>
> On Tue, 3 May 2022 at 10:02, benoit lair  wrote:
>
> > I precise after adding these 2 two lines into  hypervisor_capabilities
> and
> > guest_os_hypervisor this fixed the feature of live storage migration for
> me
> >
> > Le mar. 3 mai 2022 à 10:01, benoit lair  a écrit
> :
> >
> > > Hello Antoine,
> > >
> > > I saw that this time my yum update upgraded me to 8.2.1
> > > You were in 8.2.1 too ?
> > >
> > > I tried this fix in ACS :
> > >
> > > #add hypervsisor xcp 8.2.1 to acs 4.16
> > > INSERT IGNORE INTO `cloud`.`hypervisor_capabilities`(uuid,
> > > hypervisor_type,
> > > hypervisor_version, max_guests_limit, max_data_volumes_limit,
> > > max_hosts_per_cluster, storage_motion_supported) values (UUID(),
> > > 'XenServer',
> > > '8.2.1', 1000, 253, 64, 1);
> > >
> > > +-- Copy XenServer 8.2.0 hypervisor guest OS mappings to XenServer
> 8.2.1
> > > +INSERT IGNORE INTO `cloud`.`guest_os_hypervisor`
> (uuid,hypervisor_type,
> > > hypervisor_version, guest_os_name, guest_os_id, created,
> is_user_defined)
> > > SELECT UUID(),'Xenserver', '8.2.1', guest_os_name, guest_os_id,
> > > utc_timestamp(), 0 FROM `cloud`.`guest_os_hypervisor` WHERE
> > > hypervisor_type='Xenserver' AND hypervisor_version='8.2.0';
> > >
> > > Theses are the fix used to add xcp-ng 8.2.0 to ACS 4.15
> > >
> > > Here i adapted the fix to copy guest os mapping from xcp-ng 8.2.0
> > > capabilities
> > >
> > > I tried to reboot and this is not working on another Cloudstack mgmt
> > > instance with xcp-ng 8.2 freshly patched to 8.2.1 with yum update
> > >
> > >
> > > Regards, Benoit
> > >
> > > Le lun. 2 mai 2022 à 19:46, Antoine Boucher  a
> > > écrit :
> > >
> > >> Bonjour Benoit,
> > >>
> > >> I had similar issues after I did a yum update and I was only able to
> > fitx
> > >> the issue by rebooting my hosts.
> > >>
> > >> -Antoine
> > >>
> > >> > On May 2, 2022, at 12:04 PM, benoit lair 
> > wrote:
> > >> >
> > >> > Hello all,
> > >> >
> > >> > This is surely due to my yum update which updated to xcp 8.2.1
> > >> >
> > >> > Do anybody know how to fix this ? xcp 8.2.1 is compatible ? would it
> > be
> > >> > possible to add hypervisor capabilities without doing it in beta
> mode
> > ?
> > >> >
> > >> > Le lun. 2 mai 2022 à 16:15, benoit lair  a
> > >> écrit :
> > >> >
> > >> >> Hello folks,
> > >> >>
> > >> >> I have a several issue
> > >> >> I try to live migrate my storage vm disks on a xcp-ng 8.2 cluster
> > and i
> > >> >> cant live migrate
> > >> >> When clicking on the "Migrate volume" button, i have the following
> > >> message
> > >> >> :
> > >> >>
> > >> >> No primary storage pools available for migration
> > >> >>
> > >> >> and  it generates this in logs : "the hypervisor doesn't support
> > >> storage
> > >> >> motion."
> > >> >>
> > >> >> 2022-05-02 15:52:33,120 DEBUG [c.c.a.ApiServlet]
> > >> >> (qtp1850777594-186961:ctx-2ee90dcf) (logid:1b094155) ===START===
> > >> >> 192.168.4.30 -- GET
> > >> >>
> > >>
> >
> id=b8d15b4c-93e9-4931-81ab-26a47ada32d5=findStoragePoolsForMigration=json
> > >> >> 2022-05-02 15:52:33,136 DEBUG [c.c.a.ApiServer]
> > >> >> (qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155)
> > CIDRs
> > >> >> from which account
> 'Acct[a6441eae-68b8-11ec-acb6-96264736f9a1-admin]
> > --
> > >> >> Account {"id": 2, "name": "admin", "uuid":
> > >> >> "a6441eae-68b8-11ec-acb6-96264736f9a1"}' is allowed to perform API
> > >> calls:
> > >> >> 0.0.0.0/0,::/0
> > >> >> 2022-05-02 15:52:33,151 INFO [c.c.s.ManagementServerImpl]
> > >> >> (qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155)
> > >> Volume
> > >> >> Vol[320|vm=191|DATADISK] is attached to any running vm. Looking for
> > >> storage
> > >> >> pools in the cluster to which this volumes can be migrated.
> > >> >> 2022-05-02 15:52:33,157 ERROR [c.c.s.ManagementServerImpl]
> > >> >> (qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155)
> > >> >> Capabilities for host Host {"id": "2", "name":
> "xcp-cluster1-node2",
> > >> >> "uuid": "ae51578b-928c-4d25-9164-3bd7ca0afed4", "type"="Routing"}
> > >> couldn't
> > >> >> be retrieved.
> > >> >> 2022-05-02 15:52:33,157 INFO [c.c.s.ManagementServerImpl]
> > >> >> (qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155)
> > >> Volume
> > >> >> Vol[320|vm=191|DATADISK] is attached to a running vm and the
> > hypervisor
> > >> >> doesn't support storage motion.
> > >> >> 2022-05-02 15:52:33,164 DEBUG [c.c.a.ApiServlet]
> > >> >> (qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155)
> > >> ===END===
> > >> >> 192.168.4.30 -- GET
> > >> >>
> > >>
> >
> id=b8d15b4c-93e9-4931-81ab-26a47ada32d5=findStoragePoolsForMigration=json
> > >> >>
> > >>
> > >>
> >
>


Re: ACS 4.16 and xcp-ng - cant live storage migration

2022-05-03 Thread benoit lair
To be precise: after adding these two rows into hypervisor_capabilities and
guest_os_hypervisor, live storage migration worked again for me.

On Tue, 3 May 2022 at 10:01, benoit lair wrote:

> Hello Antoine,
>
> I saw that this time my yum update upgraded me to 8.2.1
> You were in 8.2.1 too ?
>
> I tried this fix in ACS :
>
> #add hypervsisor xcp 8.2.1 to acs 4.16
> INSERT IGNORE INTO `cloud`.`hypervisor_capabilities`(uuid,
> hypervisor_type,
> hypervisor_version, max_guests_limit, max_data_volumes_limit,
> max_hosts_per_cluster, storage_motion_supported) values (UUID(),
> 'XenServer',
> '8.2.1', 1000, 253, 64, 1);
>
> +-- Copy XenServer 8.2.0 hypervisor guest OS mappings to XenServer 8.2.1
> +INSERT IGNORE INTO `cloud`.`guest_os_hypervisor` (uuid,hypervisor_type,
> hypervisor_version, guest_os_name, guest_os_id, created, is_user_defined)
> SELECT UUID(),'Xenserver', '8.2.1', guest_os_name, guest_os_id,
> utc_timestamp(), 0 FROM `cloud`.`guest_os_hypervisor` WHERE
> hypervisor_type='Xenserver' AND hypervisor_version='8.2.0';
>
> Theses are the fix used to add xcp-ng 8.2.0 to ACS 4.15
>
> Here i adapted the fix to copy guest os mapping from xcp-ng 8.2.0
> capabilities
>
> I tried to reboot and this is not working on another Cloudstack mgmt
> instance with xcp-ng 8.2 freshly patched to 8.2.1 with yum update
>
>
> Regards, Benoit
>
> Le lun. 2 mai 2022 à 19:46, Antoine Boucher  a
> écrit :
>
>> Bonjour Benoit,
>>
>> I had similar issues after I did a yum update and I was only able to fitx
>> the issue by rebooting my hosts.
>>
>> -Antoine
>>
>> > On May 2, 2022, at 12:04 PM, benoit lair  wrote:
>> >
>> > Hello all,
>> >
>> > This is surely due to my yum update which updated to xcp 8.2.1
>> >
>> > Do anybody know how to fix this ? xcp 8.2.1 is compatible ? would it be
>> > possible to add hypervisor capabilities without doing it in beta mode ?
>> >
>> > Le lun. 2 mai 2022 à 16:15, benoit lair  a
>> écrit :
>> >
>> >> Hello folks,
>> >>
>> >> I have a several issue
>> >> I try to live migrate my storage vm disks on a xcp-ng 8.2 cluster and i
>> >> cant live migrate
>> >> When clicking on the "Migrate volume" button, i have the following
>> message
>> >> :
>> >>
>> >> No primary storage pools available for migration
>> >>
>> >> and  it generates this in logs : "the hypervisor doesn't support
>> storage
>> >> motion."
>> >>
>> >> 2022-05-02 15:52:33,120 DEBUG [c.c.a.ApiServlet]
>> >> (qtp1850777594-186961:ctx-2ee90dcf) (logid:1b094155) ===START===
>> >> 192.168.4.30 -- GET
>> >>
>> id=b8d15b4c-93e9-4931-81ab-26a47ada32d5=findStoragePoolsForMigration=json
>> >> 2022-05-02 15:52:33,136 DEBUG [c.c.a.ApiServer]
>> >> (qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155) CIDRs
>> >> from which account 'Acct[a6441eae-68b8-11ec-acb6-96264736f9a1-admin] --
>> >> Account {"id": 2, "name": "admin", "uuid":
>> >> "a6441eae-68b8-11ec-acb6-96264736f9a1"}' is allowed to perform API
>> calls:
>> >> 0.0.0.0/0,::/0
>> >> 2022-05-02 15:52:33,151 INFO [c.c.s.ManagementServerImpl]
>> >> (qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155)
>> Volume
>> >> Vol[320|vm=191|DATADISK] is attached to any running vm. Looking for
>> storage
>> >> pools in the cluster to which this volumes can be migrated.
>> >> 2022-05-02 15:52:33,157 ERROR [c.c.s.ManagementServerImpl]
>> >> (qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155)
>> >> Capabilities for host Host {"id": "2", "name": "xcp-cluster1-node2",
>> >> "uuid": "ae51578b-928c-4d25-9164-3bd7ca0afed4", "type"="Routing"}
>> couldn't
>> >> be retrieved.
>> >> 2022-05-02 15:52:33,157 INFO [c.c.s.ManagementServerImpl]
>> >> (qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155)
>> Volume
>> >> Vol[320|vm=191|DATADISK] is attached to a running vm and the hypervisor
>> >> doesn't support storage motion.
>> >> 2022-05-02 15:52:33,164 DEBUG [c.c.a.ApiServlet]
>> >> (qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155)
>> ===END===
>> >> 192.168.4.30 -- GET
>> >>
>> id=b8d15b4c-93e9-4931-81ab-26a47ada32d5=findStoragePoolsForMigration=json
>> >>
>>
>>


Re: ACS 4.16 and xcp-ng - cant live storage migration

2022-05-03 Thread benoit lair
Hello Antoine,

I saw that this time my yum update upgraded me to 8.2.1.
Were you on 8.2.1 too?

I tried this fix in ACS :

#add hypervisor xcp 8.2.1 to acs 4.16
INSERT IGNORE INTO `cloud`.`hypervisor_capabilities`(uuid, hypervisor_type,
hypervisor_version, max_guests_limit, max_data_volumes_limit,
max_hosts_per_cluster, storage_motion_supported) values (UUID(),
'XenServer',
'8.2.1', 1000, 253, 64, 1);

-- Copy XenServer 8.2.0 hypervisor guest OS mappings to XenServer 8.2.1
INSERT IGNORE INTO `cloud`.`guest_os_hypervisor` (uuid, hypervisor_type,
hypervisor_version, guest_os_name, guest_os_id, created, is_user_defined)
SELECT UUID(),'Xenserver', '8.2.1', guest_os_name, guest_os_id,
utc_timestamp(), 0 FROM `cloud`.`guest_os_hypervisor` WHERE
hypervisor_type='Xenserver' AND hypervisor_version='8.2.0';

These are based on the fix used to add XCP-ng 8.2.0 to ACS 4.15.

Here I adapted that fix to copy the guest OS mappings from the XCP-ng 8.2.0
capabilities.
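
(To double-check that the new rows are present and picked up, a sketch; the database name and credentials are the usual defaults and may differ, a management server restart may be needed, and CloudMonkey (cmk) is an assumption:)

mysql -u cloud -p cloud -e "SELECT hypervisor_type, hypervisor_version, storage_motion_supported FROM hypervisor_capabilities WHERE hypervisor_version IN ('8.2.0','8.2.1');"
# or via the API, e.g. with CloudMonkey:
cmk list hypervisorcapabilities hypervisor=XenServer
systemctl restart cloudstack-management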

I tried to reboot, and this is not working on another CloudStack mgmt
instance with XCP-ng 8.2 freshly patched to 8.2.1 via yum update.


Regards, Benoit

On Mon, 2 May 2022 at 19:46, Antoine Boucher wrote:

> Bonjour Benoit,
>
> > I had similar issues after I did a yum update and I was only able to fix
> the issue by rebooting my hosts.
>
> -Antoine
>
> > On May 2, 2022, at 12:04 PM, benoit lair  wrote:
> >
> > Hello all,
> >
> > This is surely due to my yum update which updated to xcp 8.2.1
> >
> > Do anybody know how to fix this ? xcp 8.2.1 is compatible ? would it be
> > possible to add hypervisor capabilities without doing it in beta mode ?
> >
> > Le lun. 2 mai 2022 à 16:15, benoit lair  a écrit
> :
> >
> >> Hello folks,
> >>
> >> I have a several issue
> >> I try to live migrate my storage vm disks on a xcp-ng 8.2 cluster and i
> >> cant live migrate
> >> When clicking on the "Migrate volume" button, i have the following
> message
> >> :
> >>
> >> No primary storage pools available for migration
> >>
> >> and  it generates this in logs : "the hypervisor doesn't support storage
> >> motion."
> >>
> >> 2022-05-02 15:52:33,120 DEBUG [c.c.a.ApiServlet]
> >> (qtp1850777594-186961:ctx-2ee90dcf) (logid:1b094155) ===START===
> >> 192.168.4.30 -- GET
> >>
> id=b8d15b4c-93e9-4931-81ab-26a47ada32d5=findStoragePoolsForMigration=json
> >> 2022-05-02 15:52:33,136 DEBUG [c.c.a.ApiServer]
> >> (qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155) CIDRs
> >> from which account 'Acct[a6441eae-68b8-11ec-acb6-96264736f9a1-admin] --
> >> Account {"id": 2, "name": "admin", "uuid":
> >> "a6441eae-68b8-11ec-acb6-96264736f9a1"}' is allowed to perform API
> calls:
> >> 0.0.0.0/0,::/0
> >> 2022-05-02 15:52:33,151 INFO [c.c.s.ManagementServerImpl]
> >> (qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155) Volume
> >> Vol[320|vm=191|DATADISK] is attached to any running vm. Looking for
> storage
> >> pools in the cluster to which this volumes can be migrated.
> >> 2022-05-02 15:52:33,157 ERROR [c.c.s.ManagementServerImpl]
> >> (qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155)
> >> Capabilities for host Host {"id": "2", "name": "xcp-cluster1-node2",
> >> "uuid": "ae51578b-928c-4d25-9164-3bd7ca0afed4", "type"="Routing"}
> couldn't
> >> be retrieved.
> >> 2022-05-02 15:52:33,157 INFO [c.c.s.ManagementServerImpl]
> >> (qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155) Volume
> >> Vol[320|vm=191|DATADISK] is attached to a running vm and the hypervisor
> >> doesn't support storage motion.
> >> 2022-05-02 15:52:33,164 DEBUG [c.c.a.ApiServlet]
> >> (qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155)
> ===END===
> >> 192.168.4.30 -- GET
> >>
> id=b8d15b4c-93e9-4931-81ab-26a47ada32d5=findStoragePoolsForMigration=json
> >>
>
>


Re: ACS 4.16 and xcp-ng - cant live storage migration

2022-05-02 Thread benoit lair
Hello all,

This is surely due to my yum update, which upgraded to XCP-ng 8.2.1.

Does anybody know how to fix this? Is XCP-ng 8.2.1 compatible? Would it be
possible to add the hypervisor capabilities without doing it in beta mode?

On Mon, 2 May 2022 at 16:15, benoit lair wrote:

> Hello folks,
>
> I have a several issue
> I try to live migrate my storage vm disks on a xcp-ng 8.2 cluster and i
> cant live migrate
> When clicking on the "Migrate volume" button, i have the following message
> :
>
> No primary storage pools available for migration
>
> and  it generates this in logs : "the hypervisor doesn't support storage
> motion."
>
> 2022-05-02 15:52:33,120 DEBUG [c.c.a.ApiServlet]
> (qtp1850777594-186961:ctx-2ee90dcf) (logid:1b094155) ===START===
> 192.168.4.30 -- GET
> id=b8d15b4c-93e9-4931-81ab-26a47ada32d5=findStoragePoolsForMigration=json
> 2022-05-02 15:52:33,136 DEBUG [c.c.a.ApiServer]
> (qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155) CIDRs
> from which account 'Acct[a6441eae-68b8-11ec-acb6-96264736f9a1-admin] --
> Account {"id": 2, "name": "admin", "uuid":
> "a6441eae-68b8-11ec-acb6-96264736f9a1"}' is allowed to perform API calls:
> 0.0.0.0/0,::/0
> 2022-05-02 15:52:33,151 INFO [c.c.s.ManagementServerImpl]
> (qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155) Volume
> Vol[320|vm=191|DATADISK] is attached to any running vm. Looking for storage
> pools in the cluster to which this volumes can be migrated.
> 2022-05-02 15:52:33,157 ERROR [c.c.s.ManagementServerImpl]
> (qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155)
> Capabilities for host Host {"id": "2", "name": "xcp-cluster1-node2",
> "uuid": "ae51578b-928c-4d25-9164-3bd7ca0afed4", "type"="Routing"} couldn't
> be retrieved.
> 2022-05-02 15:52:33,157 INFO [c.c.s.ManagementServerImpl]
> (qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155) Volume
> Vol[320|vm=191|DATADISK] is attached to a running vm and the hypervisor
> doesn't support storage motion.
> 2022-05-02 15:52:33,164 DEBUG [c.c.a.ApiServlet]
> (qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155) ===END===
> 192.168.4.30 -- GET
> id=b8d15b4c-93e9-4931-81ab-26a47ada32d5=findStoragePoolsForMigration=json
>


ACS 4.16 and xcp-ng - cant live storage migration

2022-05-02 Thread benoit lair
Hello folks,

I have a severe issue.
I am trying to live migrate VM disk volumes on an XCP-ng 8.2 cluster and I
can't live migrate.
When clicking the "Migrate volume" button, I get the following message:

No primary storage pools available for migration

and it generates this in the logs: "the hypervisor doesn't support storage
motion."

2022-05-02 15:52:33,120 DEBUG [c.c.a.ApiServlet]
(qtp1850777594-186961:ctx-2ee90dcf) (logid:1b094155) ===START===
192.168.4.30 -- GET
id=b8d15b4c-93e9-4931-81ab-26a47ada32d5=findStoragePoolsForMigration=json
2022-05-02 15:52:33,136 DEBUG [c.c.a.ApiServer]
(qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155) CIDRs
from which account 'Acct[a6441eae-68b8-11ec-acb6-96264736f9a1-admin] --
Account {"id": 2, "name": "admin", "uuid":
"a6441eae-68b8-11ec-acb6-96264736f9a1"}' is allowed to perform API calls:
0.0.0.0/0,::/0
2022-05-02 15:52:33,151 INFO [c.c.s.ManagementServerImpl]
(qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155) Volume
Vol[320|vm=191|DATADISK] is attached to any running vm. Looking for storage
pools in the cluster to which this volumes can be migrated.
2022-05-02 15:52:33,157 ERROR [c.c.s.ManagementServerImpl]
(qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155)
Capabilities for host Host {"id": "2", "name": "xcp-cluster1-node2",
"uuid": "ae51578b-928c-4d25-9164-3bd7ca0afed4", "type"="Routing"} couldn't
be retrieved.
2022-05-02 15:52:33,157 INFO [c.c.s.ManagementServerImpl]
(qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155) Volume
Vol[320|vm=191|DATADISK] is attached to a running vm and the hypervisor
doesn't support storage motion.
2022-05-02 15:52:33,164 DEBUG [c.c.a.ApiServlet]
(qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155) ===END===
192.168.4.30 -- GET
id=b8d15b4c-93e9-4931-81ab-26a47ada32d5=findStoragePoolsForMigration=json


Re: XCP-ng 8.2 cannot start vm more than 4 core

2022-04-25 Thread benoit lair
Hello Abishek,

I have the same issue too; did you find a solution?

Best regards

On Mon, 19 Jul 2021 at 13:36, Abishek wrote:

> I am very grateful for your response. I have used the following deployment
> scenario.
> 2 host with XCP-ng 8.2
> 1st host has 2CPU socket each of 24 core (96vCPU)
> 2nd host has 2CPU socket each of 6 core (24 vCPU)
>
> The value "xen.vm.vcpu.max" is currently set to 16. And the dynamic scale
> for template is turned off. I will further test as per the details
> provided. I will revert you back after the test.
>
> Thank You very much.
>
> On 2021/07/19 10:51:20, Harikrishna Patnala <
> harikrishna.patn...@shapeblue.com> wrote:
> > Hi Abhishek,
> >
> > There is a global setting "xen.vm.vcpu.max" which can be configured to
> > the desired value. But I think the hosts in the cluster should also have the
> > required number of CPU sockets. The minimum of the CPU socket counts of the
> > hosts in the cluster will be assigned to VCPUs-max. I remember that
> > assigning a greater value to vCPU-max than the CPU socket count of the
> > host results in a VM start error. So in your case, it should be a maximum of
> > 4.
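
(As a sketch, the current limit and what the hosts actually report can be checked like this; CloudMonkey (cmk) and the xe command are assumptions, not part of the advice above:)

cmk list configurations name=xen.vm.vcpu.max    # current global limit on the CloudStack side
xe host-cpu-info                                # on each XCP-ng host: reports the CPU topology (cpu_count, socket_count, ...)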
> >
> > Regards,
> > Harikrishna
> > 
> > From: Abishek 
> > Sent: Monday, July 19, 2021 9:25 AM
> > To: users@cloudstack.apache.org 
> > Subject: XCP-ng 8.2 cannot start vm more than 4 core
> >
> > Hello EveryOne,
> >
> > I have deployed XCP-ng 8.2 hosts with CloudStack 4.15.1. Everything is
> > successfully set up with 2 XCP-ng hosts. But I am facing a problem when
> > deploying a VM with more than 4 cores. Every time I try to start a VM with
> > more than 4 cores I get the error VCPUs-at-startup, 5, value greater than
> > VCPUs-max.
> > I will be very grateful if somebody can help me resolve the issue. We
> are trying to go into production with XCP-ng 8.2.
> >
> > Thank You.
> >
> >
> >
> >
>


Re: ACS 4.15.1 + XCP 8.2 - VM are not reachable

2022-04-22 Thread benoit lair
Hello,

Are you on xcp-ng 8.2 or 8.3 ?
Do you have a single host or a cluster ?
From the ACS management server, can you ping the XCP-ng IP?

Regards, Benoit

On Thu, 21 Apr 2022 at 14:16, Biswajit Banerjee wrote:

> Hi
>
> Newly deployed ACS 4.15.1 on an XCP-ng 8.3 hypervisor. The networking is
> basic, without any VLAN, with a single network interface for the guest and
> management segments.
>
> The system VMs, virtual router and VMs are not reachable from the same segment
> either. From the same host they are reachable.
>
> During the initial build, host addition failed due to an "unable to
> connect due to com.cloud.exception.ConnectionException: Reinitialize
> agent after setup.
> Cannot transit agent status with event AgentDisconnected for host X,
> management server id is XXX, Unable to transition to a new state
> from Creating via AgentDisconnected" error in the log.
>
> I fixed it by changing the XCP-ng network backend to bridge with
> "xe-switch-network-backend bridge" and rebooted the host. The host got
> added. Is that the right way?
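
(As a sketch, the backend currently in use can be checked on the host before and after the switch; /etc/xensource/network.conf as the location is an assumption based on stock XCP-ng, not something from this thread:)

cat /etc/xensource/network.conf      # prints the active backend, e.g. "bridge" or "openvswitch"
xe-switch-network-backend bridge     # the switch described above, followed by a host reboot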
>
> Please guide a newbie.
>
> TIA
>
> Biswajit
>
>


Re: ACS 4.16 - XCP-NG 8.2 - StatsCollector generating some important traffic on storage

2022-04-15 Thread benoit lair
Also, what exactly does the StatsCollector execute against an XCP-ng host? I would like to be sure.

On Fri, 15 Apr 2022 at 15:31, benoit lair wrote:

> Hi ,
>
> Somebody could tell me how much Storage iscsi BW is used with one Xcp-ng
> 8.2 Lvm SR Iscsi polled without prod , just with Cloudstack StatsCollector
> running every 6 ms ?
> I am around 2,5MB/s with 8 SR Iscsi MPIO
>
> Thanks a lot for your help or any advice
>
> Le mer. 13 avr. 2022 à 10:38, benoit lair  a
> écrit :
>
>> Hi Wei,
>>
>> I am going to tweak this value, however which is the purpose of
>> StatsCollector ? Which impact will it have by increasing this value ?
>>
>> I compared with an older ACS management server and saw the footprint
>> generated for StatsCollector was not so consuming
>>
>> I stopped all other management servers on acs 4.16 and kept only one
>> management server, i see StatsCollector entries several times a minute
>>
>> Do the entries i have in logs are normal ?
>>
>> I have this regularly in them :
>>
>> 2022-04-11 00:00:28,430 DEBUG [c.c.a.t.Request]
>> (StatsCollector-6:ctx-ded00a7f) (logid:95bd3e9c) Seq 2-9020710053623117925:
>> Received:  { Ans: , MgmtId: 2955451650215, via: 2(xcp-cluster1-node2), Ver:
>> v1, Flags:
>> 10, { GetStorageStatsAnswer } }
>> 2022-04-11 00:00:28,432 DEBUG [c.c.h.o.r.Ovm3HypervisorGuru]
>> (StatsCollector-6:ctx-ded00a7f) (logid:95bd3e9c) getCommandHostDelegation:
>> class com.cloud.agent.api.GetStorageStatsCommand
>> 2022-04-11 00:00:28,432 DEBUG [c.c.h.XenServerGuru]
>> (StatsCollector-6:ctx-ded00a7f) (logid:95bd3e9c) We are returning the
>> default host to execute commands because the command is not of Copy type.
>> 2022-04-11 00:00:28,436 DEBUG [c.c.a.m.ClusteredAgentAttache]
>> (StatsCollector-6:ctx-ded00a7f) (logid:95bd3e9c) Seq 1-5337328508387473817:
>> Forwarding null to 77026952423534
>> 2022-04-11 00:00:32,114 DEBUG [o.a.c.h.HAManagerImpl]
>> (BackgroundTaskPollManager-3:ctx-80a31965) (logid:046dbcfc) HA health check
>> task is running...
>> 2022-04-11 00:00:32,299 DEBUG [c.c.a.t.Request]
>> (StatsCollector-6:ctx-ded00a7f) (logid:95bd3e9c) Seq 1-5337328508387473817:
>> Received:  { Ans: , MgmtId: 2955451650215, via: 1(xcp-cluster1-node3), Ver:
>> v1, Flags:
>> 10, { GetStorageStatsAnswer } }
>> 2022-04-11 00:00:32,301 DEBUG [c.c.h.o.r.Ovm3HypervisorGuru]
>> (StatsCollector-6:ctx-ded00a7f) (logid:95bd3e9c) getCommandHostDelegation:
>> class com.cloud.agent.api.GetStorageStatsCommand
>> 2022-04-11 00:00:32,301 DEBUG [c.c.h.XenServerGuru]
>> (StatsCollector-6:ctx-ded00a7f) (logid:95bd3e9c) We are returning the
>> default host to execute commands because the command is not of Copy type.
>> 2022-04-11 00:00:32,305 DEBUG [c.c.a.m.ClusteredAgentAttache]
>> (StatsCollector-6:ctx-ded00a7f) (logid:95bd3e9c) Seq 1-5337328508387473818:
>> Forwarding null to 77026952423534
>> 2022-04-11 00:00:35,335 DEBUG [c.c.s.StatsCollector]
>> (StatsCollector-5:ctx-c31d4516) (logid:f7389038) HostStatsCollector is
>> running...
>> 2022-04-11 00:00:35,351 DEBUG [c.c.a.m.ClusteredAgentAttache]
>> (StatsCollector-5:ctx-c31d4516) (logid:f7389038) Seq 1-5337328508387473819:
>> Forwarding null to 77026952423534
>> 2022-04-11 00:00:35,407 DEBUG [c.c.a.t.Request]
>> (StatsCollector-5:ctx-c31d4516) (logid:f7389038) Seq 1-5337328508387473819:
>> Received:  { Ans: , MgmtId: 2955451650215, via: 1(xcp-cluster1-node3), Ver:
>> v1, Flags:
>>
>> Is it normal ?
>>
>>
>> Le lun. 11 avr. 2022 à 20:13, Wei ZHOU  a écrit :
>>
>>> Hi,
>>>
>>> You can change the global setting "storage.stats.interval" to a value which
>>> is suitable to you. The default value is 60000 milliseconds. Do not forget
>>> to restart the management server after the change.
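
(As a sketch of what that looks like in practice, assuming CloudMonkey; 300000 ms, i.e. 5 minutes, is just an arbitrary example value:)

cmk list configurations name=storage.stats.interval     # current value
cmk update configuration name=storage.stats.interval value=300000
systemctl restart cloudstack-management                 # as noted above, restart for the change to take effect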
>>>
>>> -Wei
>>>
>>> On Fri, 8 Apr 2022 at 16:11, benoit lair  wrote:
>>>
>>> > Hello Folks,
>>> >
>>> > I am facing to a strange issue on my xcp-ng cluster with acs 4.16
>>> >
>>> > I have 4 ACS Mgmt servers participating to my Cloud installation
>>> >
>>> > All of them are contacting every time and very (too) regularly my
>>> xcp-ng
>>> > Pool master, generation some load avg and some Iops
>>> >
>>> > From ACS logs i have these entries which occurs very regularly :
>>> >
>>> > 2022-04-08 16:03:47,368 DEBUG [c.c.h.o.r.Ovm3HypervisorGuru]
>>> > (StatsCollector-6:ctx-e75f548d) (logid:09ef2f54)
>>> getComm

Re: ACS 4.16 - XCP-NG 8.2 - StatsCollector generating some important traffic on storage

2022-04-15 Thread benoit lair
Hi ,

Could somebody tell me how much iSCSI storage bandwidth is expected with one XCP-ng
8.2 LVM-over-iSCSI SR being polled with no production load, just with the CloudStack
StatsCollector running every 60000 ms?
I am at around 2.5 MB/s with 8 iSCSI SRs using MPIO.

Thanks a lot for your help or any advice

On Wed, 13 Apr 2022 at 10:38, benoit lair wrote:

> Hi Wei,
>
> I am going to tweak this value, however which is the purpose of
> StatsCollector ? Which impact will it have by increasing this value ?
>
> I compared with an older ACS management server and saw the footprint
> generated for StatsCollector was not so consuming
>
> I stopped all other management servers on acs 4.16 and kept only one
> management server, i see StatsCollector entries several times a minute
>
> Do the entries i have in logs are normal ?
>
> I have this regularly in them :
>
> 2022-04-11 00:00:28,430 DEBUG [c.c.a.t.Request]
> (StatsCollector-6:ctx-ded00a7f) (logid:95bd3e9c) Seq 2-9020710053623117925:
> Received:  { Ans: , MgmtId: 2955451650215, via: 2(xcp-cluster1-node2), Ver:
> v1, Flags:
> 10, { GetStorageStatsAnswer } }
> 2022-04-11 00:00:28,432 DEBUG [c.c.h.o.r.Ovm3HypervisorGuru]
> (StatsCollector-6:ctx-ded00a7f) (logid:95bd3e9c) getCommandHostDelegation:
> class com.cloud.agent.api.GetStorageStatsCommand
> 2022-04-11 00:00:28,432 DEBUG [c.c.h.XenServerGuru]
> (StatsCollector-6:ctx-ded00a7f) (logid:95bd3e9c) We are returning the
> default host to execute commands because the command is not of Copy type.
> 2022-04-11 00:00:28,436 DEBUG [c.c.a.m.ClusteredAgentAttache]
> (StatsCollector-6:ctx-ded00a7f) (logid:95bd3e9c) Seq 1-5337328508387473817:
> Forwarding null to 77026952423534
> 2022-04-11 00:00:32,114 DEBUG [o.a.c.h.HAManagerImpl]
> (BackgroundTaskPollManager-3:ctx-80a31965) (logid:046dbcfc) HA health check
> task is running...
> 2022-04-11 00:00:32,299 DEBUG [c.c.a.t.Request]
> (StatsCollector-6:ctx-ded00a7f) (logid:95bd3e9c) Seq 1-5337328508387473817:
> Received:  { Ans: , MgmtId: 2955451650215, via: 1(xcp-cluster1-node3), Ver:
> v1, Flags:
> 10, { GetStorageStatsAnswer } }
> 2022-04-11 00:00:32,301 DEBUG [c.c.h.o.r.Ovm3HypervisorGuru]
> (StatsCollector-6:ctx-ded00a7f) (logid:95bd3e9c) getCommandHostDelegation:
> class com.cloud.agent.api.GetStorageStatsCommand
> 2022-04-11 00:00:32,301 DEBUG [c.c.h.XenServerGuru]
> (StatsCollector-6:ctx-ded00a7f) (logid:95bd3e9c) We are returning the
> default host to execute commands because the command is not of Copy type.
> 2022-04-11 00:00:32,305 DEBUG [c.c.a.m.ClusteredAgentAttache]
> (StatsCollector-6:ctx-ded00a7f) (logid:95bd3e9c) Seq 1-5337328508387473818:
> Forwarding null to 77026952423534
> 2022-04-11 00:00:35,335 DEBUG [c.c.s.StatsCollector]
> (StatsCollector-5:ctx-c31d4516) (logid:f7389038) HostStatsCollector is
> running...
> 2022-04-11 00:00:35,351 DEBUG [c.c.a.m.ClusteredAgentAttache]
> (StatsCollector-5:ctx-c31d4516) (logid:f7389038) Seq 1-5337328508387473819:
> Forwarding null to 77026952423534
> 2022-04-11 00:00:35,407 DEBUG [c.c.a.t.Request]
> (StatsCollector-5:ctx-c31d4516) (logid:f7389038) Seq 1-5337328508387473819:
> Received:  { Ans: , MgmtId: 2955451650215, via: 1(xcp-cluster1-node3), Ver:
> v1, Flags:
>
> Is it normal ?
>
>
> Le lun. 11 avr. 2022 à 20:13, Wei ZHOU  a écrit :
>
>> Hi,
>>
>> You can change the global setting "storage.stats.interval" to a value
>> which
>> is suitable to you. The default value is 60000 milliseconds. Do not forget
>> to restart the management server after change.
>>
>> -Wei
>>
>> On Fri, 8 Apr 2022 at 16:11, benoit lair  wrote:
>>
>> > Hello Folks,
>> >
>> > I am facing to a strange issue on my xcp-ng cluster with acs 4.16
>> >
>> > I have 4 ACS Mgmt servers participating to my Cloud installation
>> >
>> > All of them are contacting every time and very (too) regularly my xcp-ng
>> > Pool master, generation some load avg and some Iops
>> >
>> > From ACS logs i have these entries which occurs very regularly :
>> >
>> > 2022-04-08 16:03:47,368 DEBUG [c.c.h.o.r.Ovm3HypervisorGuru]
>> > (StatsCollector-6:ctx-e75f548d) (logid:09ef2f54)
>> getCommandHostDelegation:
>> > class com.cloud.agent.api.GetStorageStatsCommand
>> > 2022-04-08 16:03:47,368 DEBUG [c.c.h.XenServerGuru]
>> > (StatsCollector-6:ctx-e75f548d) (logid:09ef2f54) We are returning the
>> > default host to execute commands because the command is not of Copy
>> type.
>> > 2022-04-08 16:03:47,372 DEBUG [c.c.a.m.ClusteredAgentAttache]
>> > (StatsCollector-6:ctx-e75f548d) (logid:09ef2f54) Seq
>> 1-5337328508387459309:
>> > Forwarding null to 77026952423534
>> &g

Re: ACS 4.16 - XCP-NG 8.2 - StatsCollector generating some important traffic on storage

2022-04-13 Thread benoit lair
Hi Wei,

I am going to tweak this value; however, what is the purpose of
StatsCollector, and what impact will increasing this value have?

I compared with an older ACS management server and saw that the footprint
generated by StatsCollector was not as heavy.

I stopped all the other management servers on ACS 4.16 and kept only one
management server; I still see StatsCollector entries several times a minute.

Are the log entries I am seeing normal?

I regularly see the following in them:

2022-04-11 00:00:28,430 DEBUG [c.c.a.t.Request]
(StatsCollector-6:ctx-ded00a7f) (logid:95bd3e9c) Seq 2-9020710053623117925:
Received:  { Ans: , MgmtId: 2955451650215, via: 2(xcp-cluster1-node2), Ver:
v1, Flags:
10, { GetStorageStatsAnswer } }
2022-04-11 00:00:28,432 DEBUG [c.c.h.o.r.Ovm3HypervisorGuru]
(StatsCollector-6:ctx-ded00a7f) (logid:95bd3e9c) getCommandHostDelegation:
class com.cloud.agent.api.GetStorageStatsCommand
2022-04-11 00:00:28,432 DEBUG [c.c.h.XenServerGuru]
(StatsCollector-6:ctx-ded00a7f) (logid:95bd3e9c) We are returning the
default host to execute commands because the command is not of Copy type.
2022-04-11 00:00:28,436 DEBUG [c.c.a.m.ClusteredAgentAttache]
(StatsCollector-6:ctx-ded00a7f) (logid:95bd3e9c) Seq 1-5337328508387473817:
Forwarding null to 77026952423534
2022-04-11 00:00:32,114 DEBUG [o.a.c.h.HAManagerImpl]
(BackgroundTaskPollManager-3:ctx-80a31965) (logid:046dbcfc) HA health check
task is running...
2022-04-11 00:00:32,299 DEBUG [c.c.a.t.Request]
(StatsCollector-6:ctx-ded00a7f) (logid:95bd3e9c) Seq 1-5337328508387473817:
Received:  { Ans: , MgmtId: 2955451650215, via: 1(xcp-cluster1-node3), Ver:
v1, Flags:
10, { GetStorageStatsAnswer } }
2022-04-11 00:00:32,301 DEBUG [c.c.h.o.r.Ovm3HypervisorGuru]
(StatsCollector-6:ctx-ded00a7f) (logid:95bd3e9c) getCommandHostDelegation:
class com.cloud.agent.api.GetStorageStatsCommand
2022-04-11 00:00:32,301 DEBUG [c.c.h.XenServerGuru]
(StatsCollector-6:ctx-ded00a7f) (logid:95bd3e9c) We are returning the
default host to execute commands because the command is not of Copy type.
2022-04-11 00:00:32,305 DEBUG [c.c.a.m.ClusteredAgentAttache]
(StatsCollector-6:ctx-ded00a7f) (logid:95bd3e9c) Seq 1-5337328508387473818:
Forwarding null to 77026952423534
2022-04-11 00:00:35,335 DEBUG [c.c.s.StatsCollector]
(StatsCollector-5:ctx-c31d4516) (logid:f7389038) HostStatsCollector is
running...
2022-04-11 00:00:35,351 DEBUG [c.c.a.m.ClusteredAgentAttache]
(StatsCollector-5:ctx-c31d4516) (logid:f7389038) Seq 1-5337328508387473819:
Forwarding null to 77026952423534
2022-04-11 00:00:35,407 DEBUG [c.c.a.t.Request]
(StatsCollector-5:ctx-c31d4516) (logid:f7389038) Seq 1-5337328508387473819:
Received:  { Ans: , MgmtId: 2955451650215, via: 1(xcp-cluster1-node3), Ver:
v1, Flags:

Is it normal ?


Le lun. 11 avr. 2022 à 20:13, Wei ZHOU  a écrit :

> Hi,
>
> You can change the global setting "storage.stats.interval" to a value which
> is suitable to you. The default value is 60000 milliseconds. Do not forget
> to restart the management server after change.
>
> -Wei
>
> On Fri, 8 Apr 2022 at 16:11, benoit lair  wrote:
>
> > Hello Folks,
> >
> > I am facing to a strange issue on my xcp-ng cluster with acs 4.16
> >
> > I have 4 ACS Mgmt servers participating to my Cloud installation
> >
> > All of them are contacting every time and very (too) regularly my xcp-ng
> > Pool master, generation some load avg and some Iops
> >
> > From ACS logs i have these entries which occurs very regularly :
> >
> > 2022-04-08 16:03:47,368 DEBUG [c.c.h.o.r.Ovm3HypervisorGuru]
> > (StatsCollector-6:ctx-e75f548d) (logid:09ef2f54)
> getCommandHostDelegation:
> > class com.cloud.agent.api.GetStorageStatsCommand
> > 2022-04-08 16:03:47,368 DEBUG [c.c.h.XenServerGuru]
> > (StatsCollector-6:ctx-e75f548d) (logid:09ef2f54) We are returning the
> > default host to execute commands because the command is not of Copy type.
> > 2022-04-08 16:03:47,372 DEBUG [c.c.a.m.ClusteredAgentAttache]
> > (StatsCollector-6:ctx-e75f548d) (logid:09ef2f54) Seq
> 1-5337328508387459309:
> > Forwarding null to 77026952423534
> > 2022-04-08 16:03:50,588 DEBUG [c.c.a.t.Request]
> > (StatsCollector-6:ctx-e75f548d) (logid:09ef2f54) Seq
> 1-5337328508387459309:
> > Received:  { Ans: , MgmtId: 2955451650215, via: 1(xcp-cluster1-node3),
> Ver:
> > v1, Flags: 10, { GetStorageStatsAnswer } }
> > 2022-04-08 16:03:50,597 DEBUG [c.c.h.o.r.Ovm3HypervisorGuru]
> > (StatsCollector-6:ctx-e75f548d) (logid:09ef2f54)
> getCommandHostDelegation:
> > class com.cloud.agent.api.GetStorageStatsCommand
> > 2022-04-08 16:03:50,597 DEBUG [c.c.h.XenServerGuru]
> > (StatsCollector-6:ctx-e75f548d) (logid:09ef2f54) We are returning the
> > default host to execute commands because the command is not of Copy type.
> > 2022-04-08 16:03:50,601 

Re: ACS 4.16 - XCP-NG 8.2 - StatsCollector generating some important traffic on storage

2022-04-11 Thread benoit lair
It seems each host is queried twice a minute by the management server.
This is consuming resources even though there is no production load on this
cluster.

Le lun. 11 avr. 2022 à 14:10, benoit lair  a écrit :

> It seems that accordingly to the number of Management servers, the
> requests with StatsCollector are increasing the same way
> It seems that StatsCollector are not load-balanced across the # of
> Management servers
>
> Do you have a solution to avoid this charge ?
>
> Regards, Benoit
>
> Le lun. 11 avr. 2022 à 12:40, benoit lair  a
> écrit :
>
>> Hello,
>>
>> Nobody has ideas why ACS is scanning so often my Datastores ?
>>
>> Le ven. 8 avr. 2022 à 16:10, benoit lair  a
>> écrit :
>>
>>> Hello Folks,
>>>
>>> I am facing to a strange issue on my xcp-ng cluster with acs 4.16
>>>
>>> I have 4 ACS Mgmt servers participating to my Cloud installation
>>>
>>> All of them are contacting every time and very (too) regularly my xcp-ng
>>> Pool master, generation some load avg and some Iops
>>>
>>> From ACS logs i have these entries which occurs very regularly :
>>>
>>> 2022-04-08 16:03:47,368 DEBUG [c.c.h.o.r.Ovm3HypervisorGuru]
>>> (StatsCollector-6:ctx-e75f548d) (logid:09ef2f54) getCommandHostDelegation:
>>> class com.cloud.agent.api.GetStorageStatsCommand
>>> 2022-04-08 16:03:47,368 DEBUG [c.c.h.XenServerGuru]
>>> (StatsCollector-6:ctx-e75f548d) (logid:09ef2f54) We are returning the
>>> default host to execute commands because the command is not of Copy type.
>>> 2022-04-08 16:03:47,372 DEBUG [c.c.a.m.ClusteredAgentAttache]
>>> (StatsCollector-6:ctx-e75f548d) (logid:09ef2f54) Seq 1-5337328508387459309:
>>> Forwarding null to 77026952423534
>>> 2022-04-08 16:03:50,588 DEBUG [c.c.a.t.Request]
>>> (StatsCollector-6:ctx-e75f548d) (logid:09ef2f54) Seq 1-5337328508387459309:
>>> Received:  { Ans: , MgmtId: 2955451650215, via: 1(xcp-cluster1-node3), Ver:
>>> v1, Flags: 10, { GetStorageStatsAnswer } }
>>> 2022-04-08 16:03:50,597 DEBUG [c.c.h.o.r.Ovm3HypervisorGuru]
>>> (StatsCollector-6:ctx-e75f548d) (logid:09ef2f54) getCommandHostDelegation:
>>> class com.cloud.agent.api.GetStorageStatsCommand
>>> 2022-04-08 16:03:50,597 DEBUG [c.c.h.XenServerGuru]
>>> (StatsCollector-6:ctx-e75f548d) (logid:09ef2f54) We are returning the
>>> default host to execute commands because the command is not of Copy type.
>>> 2022-04-08 16:03:50,601 DEBUG [c.c.a.m.ClusteredAgentAttache]
>>> (StatsCollector-6:ctx-e75f548d) (logid:09ef2f54) Seq 1-5337328508387459310:
>>> Forwarding null to 77026952423534
>>>
>>> From Xcp-ng, this is launching :
>>>
>>> "/usr/bin/python /opt/xensource/sm/LVMoISCSISR
>>> sr_scan."
>>>
>>> I have my /var/log/SMLog which is growing very fastly
>>>
>>> Do you know why ACS is scanning so fastly my xcp-ng storage ?
>>>
>>>
>>> Thanks a lot for your help and ideas
>>>
>>> Best regards
>>> Benoit Lair
>>>
>>


Re: ACS 4.16 - XCP-NG 8.2 - StatsCollector generating some important traffic on storage

2022-04-11 Thread benoit lair
It seems that the StatsCollector requests increase in step with the number of
management servers; StatsCollector does not appear to be load-balanced across
the management servers.

Do you have a solution to avoid this load?

Regards, Benoit
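One hedged way to see how the hosts (and therefore their storage stats
polling) are pinned to management servers is to look at the host table
directly; this assumes direct read access to the "cloud" MySQL database, with
credentials adjusted to your setup:

  # Which management server currently owns each host's agent connection
  mysql -u cloud -p cloud -e \
    "SELECT id, name, type, mgmt_server_id, status FROM host WHERE removed IS NULL;"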

Le lun. 11 avr. 2022 à 12:40, benoit lair  a écrit :

> Hello,
>
> Nobody has ideas why ACS is scanning so often my Datastores ?
>
> Le ven. 8 avr. 2022 à 16:10, benoit lair  a écrit :
>
>> Hello Folks,
>>
>> I am facing to a strange issue on my xcp-ng cluster with acs 4.16
>>
>> I have 4 ACS Mgmt servers participating to my Cloud installation
>>
>> All of them are contacting every time and very (too) regularly my xcp-ng
>> Pool master, generation some load avg and some Iops
>>
>> From ACS logs i have these entries which occurs very regularly :
>>
>> 2022-04-08 16:03:47,368 DEBUG [c.c.h.o.r.Ovm3HypervisorGuru]
>> (StatsCollector-6:ctx-e75f548d) (logid:09ef2f54) getCommandHostDelegation:
>> class com.cloud.agent.api.GetStorageStatsCommand
>> 2022-04-08 16:03:47,368 DEBUG [c.c.h.XenServerGuru]
>> (StatsCollector-6:ctx-e75f548d) (logid:09ef2f54) We are returning the
>> default host to execute commands because the command is not of Copy type.
>> 2022-04-08 16:03:47,372 DEBUG [c.c.a.m.ClusteredAgentAttache]
>> (StatsCollector-6:ctx-e75f548d) (logid:09ef2f54) Seq 1-5337328508387459309:
>> Forwarding null to 77026952423534
>> 2022-04-08 16:03:50,588 DEBUG [c.c.a.t.Request]
>> (StatsCollector-6:ctx-e75f548d) (logid:09ef2f54) Seq 1-5337328508387459309:
>> Received:  { Ans: , MgmtId: 2955451650215, via: 1(xcp-cluster1-node3), Ver:
>> v1, Flags: 10, { GetStorageStatsAnswer } }
>> 2022-04-08 16:03:50,597 DEBUG [c.c.h.o.r.Ovm3HypervisorGuru]
>> (StatsCollector-6:ctx-e75f548d) (logid:09ef2f54) getCommandHostDelegation:
>> class com.cloud.agent.api.GetStorageStatsCommand
>> 2022-04-08 16:03:50,597 DEBUG [c.c.h.XenServerGuru]
>> (StatsCollector-6:ctx-e75f548d) (logid:09ef2f54) We are returning the
>> default host to execute commands because the command is not of Copy type.
>> 2022-04-08 16:03:50,601 DEBUG [c.c.a.m.ClusteredAgentAttache]
>> (StatsCollector-6:ctx-e75f548d) (logid:09ef2f54) Seq 1-5337328508387459310:
>> Forwarding null to 77026952423534
>>
>> From Xcp-ng, this is launching :
>>
>> "/usr/bin/python /opt/xensource/sm/LVMoISCSISR
>> sr_scan."
>>
>> I have my /var/log/SMLog which is growing very fastly
>>
>> Do you know why ACS is scanning so fastly my xcp-ng storage ?
>>
>>
>> Thanks a lot for your help and ideas
>>
>> Best regards
>> Benoit Lair
>>
>


Re: ACS 4.16 - XCP-NG 8.2 - StatsCollector generating some important traffic on storage

2022-04-11 Thread benoit lair
Hello,

Does nobody have an idea why ACS is scanning my datastores so often?

Le ven. 8 avr. 2022 à 16:10, benoit lair  a écrit :

> Hello Folks,
>
> I am facing to a strange issue on my xcp-ng cluster with acs 4.16
>
> I have 4 ACS Mgmt servers participating to my Cloud installation
>
> All of them are contacting every time and very (too) regularly my xcp-ng
> Pool master, generation some load avg and some Iops
>
> From ACS logs i have these entries which occurs very regularly :
>
> 2022-04-08 16:03:47,368 DEBUG [c.c.h.o.r.Ovm3HypervisorGuru]
> (StatsCollector-6:ctx-e75f548d) (logid:09ef2f54) getCommandHostDelegation:
> class com.cloud.agent.api.GetStorageStatsCommand
> 2022-04-08 16:03:47,368 DEBUG [c.c.h.XenServerGuru]
> (StatsCollector-6:ctx-e75f548d) (logid:09ef2f54) We are returning the
> default host to execute commands because the command is not of Copy type.
> 2022-04-08 16:03:47,372 DEBUG [c.c.a.m.ClusteredAgentAttache]
> (StatsCollector-6:ctx-e75f548d) (logid:09ef2f54) Seq 1-5337328508387459309:
> Forwarding null to 77026952423534
> 2022-04-08 16:03:50,588 DEBUG [c.c.a.t.Request]
> (StatsCollector-6:ctx-e75f548d) (logid:09ef2f54) Seq 1-5337328508387459309:
> Received:  { Ans: , MgmtId: 2955451650215, via: 1(xcp-cluster1-node3), Ver:
> v1, Flags: 10, { GetStorageStatsAnswer } }
> 2022-04-08 16:03:50,597 DEBUG [c.c.h.o.r.Ovm3HypervisorGuru]
> (StatsCollector-6:ctx-e75f548d) (logid:09ef2f54) getCommandHostDelegation:
> class com.cloud.agent.api.GetStorageStatsCommand
> 2022-04-08 16:03:50,597 DEBUG [c.c.h.XenServerGuru]
> (StatsCollector-6:ctx-e75f548d) (logid:09ef2f54) We are returning the
> default host to execute commands because the command is not of Copy type.
> 2022-04-08 16:03:50,601 DEBUG [c.c.a.m.ClusteredAgentAttache]
> (StatsCollector-6:ctx-e75f548d) (logid:09ef2f54) Seq 1-5337328508387459310:
> Forwarding null to 77026952423534
>
> From Xcp-ng, this is launching :
>
> "/usr/bin/python /opt/xensource/sm/LVMoISCSISR
> sr_scan."
>
> I have my /var/log/SMLog which is growing very fastly
>
> Do you know why ACS is scanning so fastly my xcp-ng storage ?
>
>
> Thanks a lot for your help and ideas
>
> Best regards
> Benoit Lair
>


ACS 4.16 - XCP-NG 8.2 - StatsCollector generating some important traffic on storage

2022-04-08 Thread benoit lair
Hello Folks,

I am facing a strange issue on my XCP-ng cluster with ACS 4.16.

I have 4 ACS management servers participating in my cloud installation.

All of them are contacting my XCP-ng pool master constantly and very (too)
regularly, generating some load average and IOPS.

From the ACS logs I have these entries, which occur very regularly:

2022-04-08 16:03:47,368 DEBUG [c.c.h.o.r.Ovm3HypervisorGuru]
(StatsCollector-6:ctx-e75f548d) (logid:09ef2f54) getCommandHostDelegation:
class com.cloud.agent.api.GetStorageStatsCommand
2022-04-08 16:03:47,368 DEBUG [c.c.h.XenServerGuru]
(StatsCollector-6:ctx-e75f548d) (logid:09ef2f54) We are returning the
default host to execute commands because the command is not of Copy type.
2022-04-08 16:03:47,372 DEBUG [c.c.a.m.ClusteredAgentAttache]
(StatsCollector-6:ctx-e75f548d) (logid:09ef2f54) Seq 1-5337328508387459309:
Forwarding null to 77026952423534
2022-04-08 16:03:50,588 DEBUG [c.c.a.t.Request]
(StatsCollector-6:ctx-e75f548d) (logid:09ef2f54) Seq 1-5337328508387459309:
Received:  { Ans: , MgmtId: 2955451650215, via: 1(xcp-cluster1-node3), Ver:
v1, Flags: 10, { GetStorageStatsAnswer } }
2022-04-08 16:03:50,597 DEBUG [c.c.h.o.r.Ovm3HypervisorGuru]
(StatsCollector-6:ctx-e75f548d) (logid:09ef2f54) getCommandHostDelegation:
class com.cloud.agent.api.GetStorageStatsCommand
2022-04-08 16:03:50,597 DEBUG [c.c.h.XenServerGuru]
(StatsCollector-6:ctx-e75f548d) (logid:09ef2f54) We are returning the
default host to execute commands because the command is not of Copy type.
2022-04-08 16:03:50,601 DEBUG [c.c.a.m.ClusteredAgentAttache]
(StatsCollector-6:ctx-e75f548d) (logid:09ef2f54) Seq 1-5337328508387459310:
Forwarding null to 77026952423534

From XCP-ng, this is launching:

"/usr/bin/python /opt/xensource/sm/LVMoISCSISR
sr_scan."

My /var/log/SMLog is growing very fast.

Do you know why ACS is scanning my XCP-ng storage so frequently?


Thanks a lot for your help and ideas

Best regards
Benoit Lair
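A quick, hedged way to quantify the scan frequency on the pool master itself
(run on the XCP-ng host; assumes the default SMLog location mentioned above):

  # Count sr_scan entries currently in SMLog
  grep -c "sr_scan" /var/log/SMLog

  # Watch new scans arrive in real time
  tail -f /var/log/SMLog | grep --line-buffered "sr_scan"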


Re: How are you monitoring Cloudstack?

2022-03-07 Thread benoit lair
Thanks Ivet, I will take a look at it with a PoC soon :)

Regards, Benoit

Le lun. 7 mars 2022 à 12:02, Ivet Petrova  a
écrit :

> Maybe this talk from the last CloudStack Collaboration Conference can be
> useful: https://www.youtube.com/watch?v=m8mYdWHoxLY
>
>
> Kind regards,
>
>
>
>
> On 7 Mar 2022, at 12:10, benoit lair <kurushi4...@gmail.com> wrote:
>
> Hi,
>
> We are now using Centreon with a custom autodeclaration feature done with
> our own templates (now acs 4.16 in production)
> If we can use something more out of the box i would enjoy to change it
> We used to use Zenoss for our ACS 4.3 which has a plugin specific to
> Cloudstack, but it is no more maintained
>
> Regards, Benoit
>
> Le dim. 6 mars 2022 à 16:40, Paul Angus <pau...@apache.org> a écrit :
>
> Hi Nux,
>
> At Ticketmaster we use the Prometheus exporter.  We about to work on adding
> more detail to what's exported wrt to VMs, as it very infrastructure
> focused
> out-of-the-box.
>
>
>
> Kind regards
>
> Paul Angus
>
> -Original Message-
> From: Nux <n...@li.nux.ro>
> Sent: 02 March 2022 10:56
> To: users@cloudstack.apache.org; d...@cloudstack.apache.org
> Subject: Re: How are you monitoring Cloudstack?
>
> Hi!
>
> Another nudge on the $subject in case people missed this.
>
> If you have a functioning way of monitoring Cloudstack & co in your
> organisation I'd like to hear about it.
> It doesn't have to be anything exotic, so don't be shy as long as we have
> anything to talk about.
>
> Thanks :)
>
>
> On 2022-02-21 14:38, Nux! wrote:
> Hi folks,
>
> If anyone cares to share (on or off list) with me a few words about
> how they are monitoring Cloudstack and related infrastructure that'd
> be lovely.
> I'm trying to find out what are the choices currently and how we can
> improve the overall experience.
>
> Don't be shy!
>
> Cheers
>
>
>
>
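As a side note on the Prometheus exporter Paul mentions above, a minimal
sketch of enabling CloudStack's built-in exporter, assuming CloudMonkey and
the stock plugin setting names (prometheus.exporter.enable,
prometheus.exporter.port, prometheus.exporter.allowed.ips); the names,
default port and value formats are worth double-checking on your version,
and the IPs in caps are placeholders:

  # Enable the exporter and allow your Prometheus server to scrape it
  cmk update configuration name=prometheus.exporter.enable value=true
  cmk update configuration name=prometheus.exporter.port value=9595
  cmk update configuration name=prometheus.exporter.allowed.ips value=127.0.0.1,PROMETHEUS_SERVER_IP
  systemctl restart cloudstack-management

  # Metrics should then be scrapeable from the management server
  curl -s http://MGMT_SERVER_IP:9595/metrics | head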


Re: How are you monitoring Cloudstack?

2022-03-07 Thread benoit lair
Hi,

We are now using Centreon with a custom auto-declaration feature built with
our own templates (now ACS 4.16 in production).
If we could use something more out of the box, I would be happy to switch.
We used to use Zenoss for our ACS 4.3, which had a plugin specific to
CloudStack, but it is no longer maintained.

Regards, Benoit

Le dim. 6 mars 2022 à 16:40, Paul Angus  a écrit :

> Hi Nux,
>
> At Ticketmaster we use the Prometheus exporter.  We about to work on adding
> more detail to what's exported wrt to VMs, as it very infrastructure
> focused
> out-of-the-box.
>
>
>
> Kind regards
>
> Paul Angus
>
> -Original Message-
> From: Nux 
> Sent: 02 March 2022 10:56
> To: users@cloudstack.apache.org; d...@cloudstack.apache.org
> Subject: Re: How are you monitoring Cloudstack?
>
> Hi!
>
> Another nudge on the $subject in case people missed this.
>
> If you have a functioning way of monitoring Cloudstack & co in your
> organisation I'd like to hear about it.
> It doesn't have to be anything exotic, so don't be shy as long as we have
> anything to talk about.
>
> Thanks :)
>
>
> On 2022-02-21 14:38, Nux! wrote:
> > Hi folks,
> >
> > If anyone cares to share (on or off list) with me a few words about
> > how they are monitoring Cloudstack and related infrastructure that'd
> > be lovely.
> > I'm trying to find out what are the choices currently and how we can
> > improve the overall experience.
> >
> > Don't be shy!
> >
> > Cheers
>
>


Re: ACS 4.16 Can't Add VPX Netscaler - type not supported

2022-01-18 Thread benoit lair
Hello,

In 
plugins/network-elements/netscaler/src/main/java/com/cloud/network/resource/NetscalerResource.java

I see that the check at line 370 is:

> if (_deviceName.equalsIgnoreCase("NetscalerMPXLoadBalancer") && 
> nsHw.get_hwdescription().contains("MPX") ||
> 
> _deviceName.equalsIgnoreCase("NetscalerVPXLoadBalancer") && 
> nsHw.get_hwdescription().contains("NetScaler Virtual Appliance")) {
> return;
> }
> throw new ExecutionException("Netscalar device type 
> specified does not match with the actuall device type.");
> }
>
>
However, Netscaler v12 and v13 report "Netscaler Remote Licensed
Virtual Appliance 45" as the value of hwdescription.

Where can I file an issue about this for ACS 4.16?
Do you have a tweak to get this working?

Regards, Benoit
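A hedged way to see exactly what hwdescription string the appliance reports
(and therefore what the check above compares against) is to ask the NITRO
REST API directly; this assumes the API is reachable, jq is installed, and
the credentials are placeholders to replace:

  # Read the hardware description the VPX advertises
  curl -sk -H "X-NITRO-USER: nsroot" -H "X-NITRO-PASS: YOUR_PASSWORD" \
    "https://10.20.2.225/nitro/v1/config/nshardware" | jq -r '.nshardware.hwdescription'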

Le mar. 18 janv. 2022 à 09:49, benoit lair  a écrit :

> Hello Folks,
>
> When i try to add VPX Netscaler (version 13 or 12) i got an error  message
> on the UI :
> Add Netscaler device
> (Netscaler) Failed to verify device type specified when matching with
> actuall device type due to Netscalar device type specified does not match
> with the actuall device type.
> In the logs i have this :
>
> 2022-01-18 09:48:02,619 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
> (qtp1850777594-16:ctx-44114730 ctx-19ef6a30) (logid:96d8f8a8) submit async
> job-171, details: AsyncJobVO {id:171, userId: 2, accountId: 2,
> instanceType: None, instanceId: null, cmd:
> com.cloud.api.commands.AddNetscalerLoadBalancerCmd, cmdInfo:
> {"physicalnetworkid":"81ab1674-8acb-49bc-9e02-1323e3cd2e3f","httpmethod":"GET","ctxAccountId":"2","uuid":"aaf152ef-f8bd-4071-bfdb-75c6df1a17c5","url":"
> https://10.20.2.225?publicinterface\u003d1/4\u0026privateinterface\u003d1/4\u0026lbdevicededicated\u003dtrue","cmdEventType":"PHYSICAL.LOADBALANCER.ADD","networkdevicetype":"NetscalerVPXLoadBalancer","response":"json","ctxUserId":"2","ctxStartEventId":"1431","gslbprovider":"false","id":"aaf152ef-f8bd-4071-bfdb-75c6df1a17c5","ctxDetails":"{\"interface
> com.cloud.network.PhysicalNetwork\":\"81ab1674-8acb-49bc-9e02-1323e3cd2e3f\"}","username":"nsroot"},
> cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0,
> result: null, initMsid: 2955451650215, completeMsid: null, lastUpdated:
> null, lastPolled: null, created: null, removed: null}
> 2022-01-18 09:48:02,620 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
> (API-Job-Executor-39:ctx-e9b9b2ef job-171) (logid:4fa2b7ad) Executing
> AsyncJobVO {id:171, userId: 2, accountId: 2, instanceType: None,
> instanceId: null, cmd: com.cloud.api.commands.AddNetscalerLoadBalancerCmd,
> cmdInfo:
> {"physicalnetworkid":"81ab1674-8acb-49bc-9e02-1323e3cd2e3f","httpmethod":"GET","ctxAccountId":"2","uuid":"aaf152ef-f8bd-4071-bfdb-75c6df1a17c5","url":"
> https://10.20.2.225?publicinterface\u003d1/4\u0026privateinterface\u003d1/4\u0026lbdevicededicated\u003dtrue","cmdEventType":"PHYSICAL.LOADBALANCER.ADD","networkdevicetype":"NetscalerVPXLoadBalancer","response":"json","ctxUserId":"2","ctxStartEventId":"1431","gslbprovider":"false","id":"aaf152ef-f8bd-4071-bfdb-75c6df1a17c5","ctxDetails":"{\"interface
> com.cloud.network.PhysicalNetwork\":\"81ab1674-8acb-49bc-9e02-1323e3cd2e3f\"}","username":"nsroot"},
> cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0,
> result: null, initMsid: 2955451650215, completeMsid: null, lastUpdated:
> null, lastPolled: null, created: null, removed: null}
> 2022-01-18 09:48:02,622 DEBUG [c.c.a.ApiServlet]
> (qtp1850777594-16:ctx-44114730 ctx-19ef6a30) (logid:96d8f8a8) ===END===
>  192.168.4.31 -- GET
>  
> physicalnetworkid=81ab1674-8acb-49bc-9e02-1323e3cd2e3f=nsroot=NetscalerVPXLoadBalancer=false=https:%2F%2F10.20.2.225%3Fpublicinterface%3D1%2F4%26privateinterface%3D1%2F4%26lbdevicededicated%3Dtrue=aaf152ef-f8bd-4071-bfdb-75c6df1a17c5=addNetscalerLoadBalancer=json
> 2022-01-18 09:48:02,665 DEBUG [c.c.a.ApiServlet]
> (qtp1850777594-19:ctx-5bf4aee9) (logid:46334c3d) ===START===  192.168.4.31
> -- GET
>  
> jobId=4fa2b7ad-6c5a-4f89-b264-6d2003d9fdf2=queryAsyncJobResult=json
> 2022-01-18 09:48:02,686 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
> (API-Job-Executor-39:ctx-e9b9b2ef job-171) (logid:4fa2b7ad) Complete async
> job-171, jobStatus: FAILED, resultCode: 530, result:
> org.apache.cloudstack.api.response.ExceptionResponse/null/{"uuidList":[],"errorcode":"530","errortext":"Failed
> to verify device type specified when matching with actuall device type due
> to Netscalar device type specified does not match with the actuall device
> type."}
>
>
> Anybody succeeded to add VPX to ACS 4.16 ?
>
> Regards, Benoit
>
>
>
>
>


ACS 4.16 Can't Add VPX Netscaler - type not supported

2022-01-18 Thread benoit lair
Hello Folks,

When I try to add a Netscaler VPX (version 13 or 12) I get an error message
in the UI:
Add Netscaler device
(Netscaler) Failed to verify device type specified when matching with
actuall device type due to Netscalar device type specified does not match
with the actuall device type.
In the logs I have this:

2022-01-18 09:48:02,619 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
(qtp1850777594-16:ctx-44114730 ctx-19ef6a30) (logid:96d8f8a8) submit async
job-171, details: AsyncJobVO {id:171, userId: 2, accountId: 2,
instanceType: None, instanceId: null, cmd:
com.cloud.api.commands.AddNetscalerLoadBalancerCmd, cmdInfo:
{"physicalnetworkid":"81ab1674-8acb-49bc-9e02-1323e3cd2e3f","httpmethod":"GET","ctxAccountId":"2","uuid":"aaf152ef-f8bd-4071-bfdb-75c6df1a17c5","url":"
https://10.20.2.225?publicinterface\u003d1/4\u0026privateinterface\u003d1/4\u0026lbdevicededicated\u003dtrue","cmdEventType":"PHYSICAL.LOADBALANCER.ADD","networkdevicetype":"NetscalerVPXLoadBalancer","response":"json","ctxUserId":"2","ctxStartEventId":"1431","gslbprovider":"false","id":"aaf152ef-f8bd-4071-bfdb-75c6df1a17c5","ctxDetails":"{\"interface
com.cloud.network.PhysicalNetwork\":\"81ab1674-8acb-49bc-9e02-1323e3cd2e3f\"}","username":"nsroot"},
cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0,
result: null, initMsid: 2955451650215, completeMsid: null, lastUpdated:
null, lastPolled: null, created: null, removed: null}
2022-01-18 09:48:02,620 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
(API-Job-Executor-39:ctx-e9b9b2ef job-171) (logid:4fa2b7ad) Executing
AsyncJobVO {id:171, userId: 2, accountId: 2, instanceType: None,
instanceId: null, cmd: com.cloud.api.commands.AddNetscalerLoadBalancerCmd,
cmdInfo:
{"physicalnetworkid":"81ab1674-8acb-49bc-9e02-1323e3cd2e3f","httpmethod":"GET","ctxAccountId":"2","uuid":"aaf152ef-f8bd-4071-bfdb-75c6df1a17c5","url":"
https://10.20.2.225?publicinterface\u003d1/4\u0026privateinterface\u003d1/4\u0026lbdevicededicated\u003dtrue","cmdEventType":"PHYSICAL.LOADBALANCER.ADD","networkdevicetype":"NetscalerVPXLoadBalancer","response":"json","ctxUserId":"2","ctxStartEventId":"1431","gslbprovider":"false","id":"aaf152ef-f8bd-4071-bfdb-75c6df1a17c5","ctxDetails":"{\"interface
com.cloud.network.PhysicalNetwork\":\"81ab1674-8acb-49bc-9e02-1323e3cd2e3f\"}","username":"nsroot"},
cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0,
result: null, initMsid: 2955451650215, completeMsid: null, lastUpdated:
null, lastPolled: null, created: null, removed: null}
2022-01-18 09:48:02,622 DEBUG [c.c.a.ApiServlet]
(qtp1850777594-16:ctx-44114730 ctx-19ef6a30) (logid:96d8f8a8) ===END===
 192.168.4.31 -- GET
 
physicalnetworkid=81ab1674-8acb-49bc-9e02-1323e3cd2e3f=nsroot=NetscalerVPXLoadBalancer=false=https:%2F%2F10.20.2.225%3Fpublicinterface%3D1%2F4%26privateinterface%3D1%2F4%26lbdevicededicated%3Dtrue=aaf152ef-f8bd-4071-bfdb-75c6df1a17c5=addNetscalerLoadBalancer=json
2022-01-18 09:48:02,665 DEBUG [c.c.a.ApiServlet]
(qtp1850777594-19:ctx-5bf4aee9) (logid:46334c3d) ===START===  192.168.4.31
-- GET
 
jobId=4fa2b7ad-6c5a-4f89-b264-6d2003d9fdf2=queryAsyncJobResult=json
2022-01-18 09:48:02,686 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
(API-Job-Executor-39:ctx-e9b9b2ef job-171) (logid:4fa2b7ad) Complete async
job-171, jobStatus: FAILED, resultCode: 530, result:
org.apache.cloudstack.api.response.ExceptionResponse/null/{"uuidList":[],"errorcode":"530","errortext":"Failed
to verify device type specified when matching with actuall device type due
to Netscalar device type specified does not match with the actuall device
type."}


Has anybody succeeded in adding a VPX to ACS 4.16?

Regards, Benoit


Re: ACS 4.16/4.15 with Netscaler VPX 13 issues

2022-01-17 Thread benoit lair
I have this error in the management server logs:

2022-01-17 16:37:07,698 DEBUG [c.c.a.ApiServlet]
(qtp1850777594-337:ctx-deeb85b2 ctx-09e01ead) (logid:3930828f) ===END===
 192.168.4.31 -- GET
 
physicalnetworkid=81ab1674-8acb-49bc-9e02-1323e3cd2e3f=nsroot=NetscalerVPXLoadBalancer=false=https:%2F%2F10.20.2.225%3Fpublicinterface%3D1%2F1%26privateinterface%3D1%2F1%26lbdevicededicated%3Dtrue=aaf152ef-f8bd-4071-bfdb-75c6df1a17c5=addNetscalerLoadBalancer=json
2022-01-17 16:37:07,759 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
(API-Job-Executor-24:ctx-5b0b91f8 job-156) (logid:249190c0) Complete async
job-156, jobStatus: FAILED, resultCode: 530, result:
org.apache.cloudstack.api.response.ExceptionResponse/null/{"uuidList":[],"errorcode":"530","errortext":"Failed
to verify device type specified when matching with actuall device type due
to Netscalar device type specified does not match with the actuall device
type."}

Le lun. 17 janv. 2022 à 16:25, benoit lair  a écrit :

> Hello Folks,
>
> I'am trying to add Netscaler vpx device to my 4.16 Acs mgmt server
> i have set all the entries like asked, but it saying me that credentials
> are not corrects :
>
> Add Netscaler device
> (Netscaler) Failed to log in to Netscaler device at a.b.c;d due to Invalid
> username or password
>
> Does somebody use Netscaler with ACS 4.16 ?
> I tried too on acs 4.15 and i have same error
>
> Is VPX version 13 is valid ?
>
> The credentials are well working from a browser
>
> Regards, Benoit
>


ACS 4.16/4.15 with Netscaler VPX 13 issues

2022-01-17 Thread benoit lair
Hello Folks,

I am trying to add a Netscaler VPX device to my ACS 4.16 management server.
I have set all the entries as asked, but it tells me that the credentials
are not correct:

Add Netscaler device
(Netscaler) Failed to log in to Netscaler device at a.b.c;d due to Invalid
username or password

Does somebody use Netscaler with ACS 4.16?
I also tried on ACS 4.15 and I get the same error.

Is VPX version 13 supported?

The credentials work fine from a browser.

Regards, Benoit


Re: AutoScale without using NetScaler

2022-01-17 Thread benoit lair
Hello folks,

From memory, it was Nguyen Anh Tu who implemented autoscaling without
Netscaler in ACS 4.4.

Le mar. 18 mai 2021 à 09:19, Дикевич Евгений Александрович <
evgeniy.dikev...@becloud.by> a écrit :

> Hi
> Thx a lot. I will try it later.
>
> -Original Message-
> From: Rene Moser [mailto:m...@renemoser.net]
> Sent: Tuesday, May 18, 2021 10:04 AM
> To: users@cloudstack.apache.org
> Subject: Re: AutoScale without using NetScaler
>
> Hi
>
> On 4/16/21 3:37 PM, Дикевич Евгений Александрович wrote:
> > Hi all.
> > MB someone configured autoscale without using NetScaler?
> We developed our own generic autoscaler for Clouds: scalr.
>
> It's open source (MIT) and can currently scale CloudStack, Digital Ocean
> and Hetzner Cloud and cloudscale.ch. Easy extendable and customizable.
> Still alpha state though.
>
> Read more about on https://ngine-io.github.io/scalr/
>
> Yours
> René
>
> Attention!
> This email and all attachments to it are confidential and are intended
> solely for use by the person (or persons) referred to (mentioned) as the
> intended recipient (recipients). If you are not the intended recipient of
> this email, do not copy or disclose its contents and delete the message and
> any attachments to it from your e-mail system. Any unauthorized use,
> dissemination, disclosure, printing or copying of this e-mail and files
> attached to it, except by the intended recipient, is illegal and is
> prohibited. Taking into account that data transmission via Internet is not
> secure, we assume no responsibility for any potential damage caused by data
> transmission errors or this message and the files attached to it.
>


ACS 4.16 Multiples management servers

2022-01-11 Thread benoit lair
Hello Folks,

I joined a third ACS server to my pool, configured it and set the "host"
value.
All seems to be fully functional.
However, on the third management server I have this entry in the logs every
10-15 seconds:

2022-01-11 18:33:37,991 DEBUG [c.c.a.m.AgentManagerImpl]
(AgentManager-Handler-8:null) (logid:) SeqA 7-12389: Processing Seq
7-12389:  { Cmd , MgmtId: -1, via: 7, Ver: v1, Flags: 11,
[{"com.cloud.agent.api.ConsoleProxyLoadReportCommand":{"_proxyVmId":"3","_loadInfo":"{
  "connections": [
{
  "id": 5,
  "clientInfo": "",
  "host": "10.20.0.121",
  "port": -1,
  "tag": "04da28e7-818a-4a4d-ae4b-3edc7695a848",
  "createTime": 1641921092321,
  "lastUsedTime": 1641922411160
}
  ]
}","wait":"0","bypassHostMaintenance":"false"}}] }


Do you have an idea why?

Regards, Benoit
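For what it's worth, that entry looks like the console proxy VM's periodic
load report rather than an error; a hedged way to check the cadence against
the relevant global setting (setting name from memory, worth verifying on
4.16):

  # Console proxy load-scan interval, in milliseconds
  cmk list configurations name=consoleproxy.loadscan.interval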


Re: ACS 4.15 - Disaster recovery after secondary storage issue

2021-10-28 Thread benoit lair
I succeeded in re-installing the system template.

Despite the fact that I declared a new template in the GUI, set type = SYSTEM,
and installed the template on the NFS secondary storage with
cloud-install-sys-tmplt, I still had errors when the system tried to
provision the VM template on the storage pool.
The management server log said that no storage pool was available, although I
have sufficient space and no blocking tags on the storage pools.

The new template was still in state "allocated"
I edited the following fields in the database (see the sketch after the list):

state = Ready, (was allocated)
install_path = template/tmpl/1/225/5951639d-e36b-494a-9e49-6a9ce2f3542c.vhd
(was empty),
download_state = DOWNLOADED (was empty),
physical_size = size of template id1 (was empry),
size = size of template id1, (was empty)
downloaded_pct = 100 (was empty)
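A hedged sketch of what those edits could look like as SQL, purely as an
illustration: the table and column names (template_store_ref, download_pct,
etc.) and the ids are assumptions to verify against your own schema, and the
database should be backed up first:

  mysql -u cloud -p cloud -e "
    UPDATE template_store_ref
       SET state = 'Ready',
           download_state = 'DOWNLOADED',
           install_path = 'template/tmpl/1/225/5951639d-e36b-494a-9e49-6a9ce2f3542c.vhd',
           size = 2101252608,          -- placeholder: use the size of template id 1
           physical_size = 2101252608, -- placeholder: use the size of template id 1
           download_pct = 100
     WHERE template_id = 225 AND store_id = 2;"  # store_id 2 is a placeholder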


The system then successfully installed the new system template on my 2
storage pools, and the CPVM and SSVM were successfully recreated.

Do you have any ideas about something I might have skipped?
Is there another (more conventional) way than what I did to recover my
system template?

Best Regards, Benoit

Le jeu. 28 oct. 2021 à 10:49, benoit lair  a écrit :

> I tried also to recreate another system vm template with GUI
> I followed this link :
> https://docs.cloudstack.apache.org/en/latest/adminguide/systemvm.html#changing-the-default-system-vm-template
> I changed the value in Database with type SYSTEM for the ne entry in
> templates
> I changed the router.template.xenserver value with the name of the new
> template
> I launched on ACS mgmt server : cloud-install-sys-tmplt, it created the
> directory with id 225 in tmpl/1/225 and downloaded the vhd template file
> into it
> But the template is still not available in GUI and Database
>
> How could i restore system vm template ?
>
> Best, Benoit
>
> Le jeu. 28 oct. 2021 à 00:51, benoit lair  a
> écrit :
>
>> I tried to free my SR of tags
>> I restarted ACS
>>
>> Here is the log generated about systems vms after the reboot :
>>
>> https://pastebin.com/xJNfA23u
>>
>> The parts of the log which are curious for me :
>>
>> 2021-10-28 00:31:04,462 DEBUG [c.c.h.x.r.XenServerStorageProcessor]
>> (DirectAgent-14:ctx-3eaf758f) (logid:cc3c4e1e) Catch Exception
>> com.xensource.xenapi.Types$UuidInvalid :VDI getByUuid for uuid:
>> 159e620a-575d-43a8-9a57-f3c7f57a1c8a failed due to The uuid you supplied
>> was invalid.
>> 2021-10-28 00:31:04,462 WARN  [c.c.h.x.r.XenServerStorageProcessor]
>> (DirectAgent-14:ctx-3eaf758f) (logid:cc3c4e1e) Unable to create volume;
>> Pool=volumeTO[uuid=e4347562-9454-453d-be04-29dc746aaf33|path=null|datastore=PrimaryDataStoreTO[uuid=fbbf2bf0-ccc8-4df3-9794-c914f418a9d9|name=null|id=2|pooltype=PreSetup]];
>> Disk:
>> com.cloud.utils.exception.CloudRuntimeException: Catch Exception
>> com.xensource.xenapi.Types$UuidInvalid :VDI getByUuid for uuid:
>> 159e620a-575d-43a8-9a57-f3c7f57a1c8a failed due to The uuid you supplied
>> was invalid.
>> at
>> com.cloud.hypervisor.xenserver.resource.XenServerStorageProcessor.getVDIbyUuid(XenServerStorageProcessor.java:655)
>> at
>> com.cloud.hypervisor.xenserver.resource.XenServerStorageProcessor.cloneVolumeFromBaseTemplate(XenServerStorageProcessor.java:843)
>> at
>> com.cloud.storage.resource.StorageSubsystemCommandHandlerBase.execute(StorageSubsystemCommandHandlerBase.java:99)
>> at
>> com.cloud.storage.resource.StorageSubsystemCommandHandlerBase.handleStorageCommands(StorageSubsystemCommandHandlerBase.java:59)
>> at
>> com.cloud.hypervisor.xenserver.resource.wrapper.xenbase.CitrixStorageSubSystemCommandWrapper.execute(CitrixStorageSubSystemCommandWrapper.java:36)
>> at
>> com.cloud.hypervisor.xenserver.resource.wrapper.xenbase.CitrixStorageSubSystemCommandWrapper.execute(CitrixStorageSubSystemCommandWrapper.java:30)
>> at
>> com.cloud.hypervisor.xenserver.resource.wrapper.xenbase.CitrixRequestWrapper.execute(CitrixRequestWrapper.java:122)
>> at
>> com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.executeRequest(CitrixResourceBase.java:1763)
>> at
>> com.cloud.agent.manager.DirectAgentAttache$Task.runInContext(DirectAgentAttache.java:315)
>> at
>> org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:48)
>> at
>> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:55)
>> at
>> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:102)
>> at
>> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext

Re: ACS 4.15 - Disaster recovery after secondary storage issue

2021-10-28 Thread benoit lair
I also tried to recreate another system VM template via the GUI.
I followed this link:
https://docs.cloudstack.apache.org/en/latest/adminguide/systemvm.html#changing-the-default-system-vm-template
I changed the value in the database to type SYSTEM for the new entry in
the templates table.
I changed the router.template.xenserver value to the name of the new
template.
I launched cloud-install-sys-tmplt on the ACS management server; it created
the directory with id 225 in tmpl/1/225 and downloaded the VHD template file
into it.
But the template is still not shown as available in the GUI or the database.

How can I restore the system VM template?

Best, Benoit

Le jeu. 28 oct. 2021 à 00:51, benoit lair  a écrit :

> I tried to free my SR of tags
> I restarted ACS
>
> Here is the log generated about systems vms after the reboot :
>
> https://pastebin.com/xJNfA23u
>
> The parts of the log which are curious for me :
>
> 2021-10-28 00:31:04,462 DEBUG [c.c.h.x.r.XenServerStorageProcessor]
> (DirectAgent-14:ctx-3eaf758f) (logid:cc3c4e1e) Catch Exception
> com.xensource.xenapi.Types$UuidInvalid :VDI getByUuid for uuid:
> 159e620a-575d-43a8-9a57-f3c7f57a1c8a failed due to The uuid you supplied
> was invalid.
> 2021-10-28 00:31:04,462 WARN  [c.c.h.x.r.XenServerStorageProcessor]
> (DirectAgent-14:ctx-3eaf758f) (logid:cc3c4e1e) Unable to create volume;
> Pool=volumeTO[uuid=e4347562-9454-453d-be04-29dc746aaf33|path=null|datastore=PrimaryDataStoreTO[uuid=fbbf2bf0-ccc8-4df3-9794-c914f418a9d9|name=null|id=2|pooltype=PreSetup]];
> Disk:
> com.cloud.utils.exception.CloudRuntimeException: Catch Exception
> com.xensource.xenapi.Types$UuidInvalid :VDI getByUuid for uuid:
> 159e620a-575d-43a8-9a57-f3c7f57a1c8a failed due to The uuid you supplied
> was invalid.
> at
> com.cloud.hypervisor.xenserver.resource.XenServerStorageProcessor.getVDIbyUuid(XenServerStorageProcessor.java:655)
> at
> com.cloud.hypervisor.xenserver.resource.XenServerStorageProcessor.cloneVolumeFromBaseTemplate(XenServerStorageProcessor.java:843)
> at
> com.cloud.storage.resource.StorageSubsystemCommandHandlerBase.execute(StorageSubsystemCommandHandlerBase.java:99)
> at
> com.cloud.storage.resource.StorageSubsystemCommandHandlerBase.handleStorageCommands(StorageSubsystemCommandHandlerBase.java:59)
> at
> com.cloud.hypervisor.xenserver.resource.wrapper.xenbase.CitrixStorageSubSystemCommandWrapper.execute(CitrixStorageSubSystemCommandWrapper.java:36)
> at
> com.cloud.hypervisor.xenserver.resource.wrapper.xenbase.CitrixStorageSubSystemCommandWrapper.execute(CitrixStorageSubSystemCommandWrapper.java:30)
> at
> com.cloud.hypervisor.xenserver.resource.wrapper.xenbase.CitrixRequestWrapper.execute(CitrixRequestWrapper.java:122)
> at
> com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.executeRequest(CitrixResourceBase.java:1763)
> at
> com.cloud.agent.manager.DirectAgentAttache$Task.runInContext(DirectAgentAttache.java:315)
> at
> org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:48)
> at
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:55)
> at
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:102)
> at
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:52)
> at
> org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:45)
> at
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
> at
> java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
> at
> java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
> at
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> at
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> at java.base/java.lang.Thread.run(Thread.java:829)
> Caused by: The uuid you supplied was invalid.
> at com.xensource.xenapi.Types.checkResponse(Types.java:1491)
> at com.xensource.xenapi.Connection.dispatch(Connection.java:395)
> at
> com.cloud.hypervisor.xenserver.resource.XenServerConnectionPool$XenServerConnection.dispatch(XenServerConnectionPool.java:457)
> ... 21 more
>
> Is this normal to have this : Unable to create volume;
> Pool=volumeTO[uuid=e4347562-9454-453d-be04-29dc746aaf33|path=null|datastore=PrimaryDataStoreTO[uuid=fbbf2bf0-ccc8-4df3-9794-c914f418a9d9|name=null|id=2|pooltype=PreSetup]]
> with values null ?
>
> Best, Benoit
>
> Le je

Re: ACS 4.15 - Disaster recovery after secondary storage issue

2021-10-27 Thread benoit lair
I tried freeing my SRs of tags and restarted ACS.

Here is the log generated for the system VMs after the reboot:

https://pastebin.com/xJNfA23u

The parts of the log that look odd to me:

2021-10-28 00:31:04,462 DEBUG [c.c.h.x.r.XenServerStorageProcessor]
(DirectAgent-14:ctx-3eaf758f) (logid:cc3c4e1e) Catch Exception
com.xensource.xenapi.Types$UuidInvalid :VDI getByUuid for uuid:
159e620a-575d-43a8-9a57-f3c7f57a1c8a failed due to The uuid you supplied
was invalid.
2021-10-28 00:31:04,462 WARN  [c.c.h.x.r.XenServerStorageProcessor]
(DirectAgent-14:ctx-3eaf758f) (logid:cc3c4e1e) Unable to create volume;
Pool=volumeTO[uuid=e4347562-9454-453d-be04-29dc746aaf33|path=null|datastore=PrimaryDataStoreTO[uuid=fbbf2bf0-ccc8-4df3-9794-c914f418a9d9|name=null|id=2|pooltype=PreSetup]];
Disk:
com.cloud.utils.exception.CloudRuntimeException: Catch Exception
com.xensource.xenapi.Types$UuidInvalid :VDI getByUuid for uuid:
159e620a-575d-43a8-9a57-f3c7f57a1c8a failed due to The uuid you supplied
was invalid.
at
com.cloud.hypervisor.xenserver.resource.XenServerStorageProcessor.getVDIbyUuid(XenServerStorageProcessor.java:655)
at
com.cloud.hypervisor.xenserver.resource.XenServerStorageProcessor.cloneVolumeFromBaseTemplate(XenServerStorageProcessor.java:843)
at
com.cloud.storage.resource.StorageSubsystemCommandHandlerBase.execute(StorageSubsystemCommandHandlerBase.java:99)
at
com.cloud.storage.resource.StorageSubsystemCommandHandlerBase.handleStorageCommands(StorageSubsystemCommandHandlerBase.java:59)
at
com.cloud.hypervisor.xenserver.resource.wrapper.xenbase.CitrixStorageSubSystemCommandWrapper.execute(CitrixStorageSubSystemCommandWrapper.java:36)
at
com.cloud.hypervisor.xenserver.resource.wrapper.xenbase.CitrixStorageSubSystemCommandWrapper.execute(CitrixStorageSubSystemCommandWrapper.java:30)
at
com.cloud.hypervisor.xenserver.resource.wrapper.xenbase.CitrixRequestWrapper.execute(CitrixRequestWrapper.java:122)
at
com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.executeRequest(CitrixResourceBase.java:1763)
at
com.cloud.agent.manager.DirectAgentAttache$Task.runInContext(DirectAgentAttache.java:315)
at
org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:48)
at
org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:55)
at
org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:102)
at
org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:52)
at
org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:45)
at
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at
java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at
java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: The uuid you supplied was invalid.
at com.xensource.xenapi.Types.checkResponse(Types.java:1491)
at com.xensource.xenapi.Connection.dispatch(Connection.java:395)
at
com.cloud.hypervisor.xenserver.resource.XenServerConnectionPool$XenServerConnection.dispatch(XenServerConnectionPool.java:457)
... 21 more

Is it normal to have this: Unable to create volume;
Pool=volumeTO[uuid=e4347562-9454-453d-be04-29dc746aaf33|path=null|datastore=PrimaryDataStoreTO[uuid=fbbf2bf0-ccc8-4df3-9794-c914f418a9d9|name=null|id=2|pooltype=PreSetup]]
with null values?

Best, Benoit

Le jeu. 28 oct. 2021 à 00:46, benoit lair  a écrit :

> Hello Andrija,
>
> Well seen :)
>
> 2021-10-27 17:59:22,100 DEBUG [c.c.a.m.a.i.FirstFitAllocator]
> (Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8
> FirstFitRoutingAllocator) (logid:ce3ac740) Host name: xcp-cluster1-01,
> hostId: 1 is in avoid set, skipping this and trying other available hosts
> 2021-10-27 17:59:22,109 DEBUG [c.c.c.CapacityManagerImpl]
> (Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8
> FirstFitRoutingAllocator) (logid:ce3ac740) Host: 3 has cpu capability
> (cpu:48, speed:2593) to support requested CPU: 1 and requested speed: 500
> 2021-10-27 17:59:22,109 DEBUG [c.c.c.CapacityManagerImpl]
> (Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8
> FirstFitRoutingAllocator) (logid:ce3ac740) Checking if host: 3 has enough
> capacity for requested CPU: 500 and requested RAM: (512.00 MB) 536870912 ,
> cpuOverprovisioningFactor: 1.0
> 2021-10-27 17:59:22,112 DEBUG [

Re: ACS 4.15 - Disaster recovery after secondary storage issue

2021-10-27 Thread benoit lair
-2552/job-2648 ctx-d1d9ade8)
(logid:ce3ac740) StoragePool is in avoid set, skipping this pool
2021-10-27 17:59:22,125 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator]
(Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8)
(logid:ce3ac740) Checking if storage pool is suitable, name: null ,poolId: 2
2021-10-27 17:59:22,125 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator]
(Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8)
(logid:ce3ac740) StoragePool is in avoid set, skipping this pool
2021-10-27 17:59:22,125 DEBUG [o.a.c.s.a.ClusterScopeStoragePoolAllocator]
(Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8)
(logid:ce3ac740) ClusterScopeStoragePoolAllocator returning 0 suitable
storage pools
2021-10-27 17:59:22,125 DEBUG [o.a.c.s.a.ZoneWideStoragePoolAllocator]
(Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8)
(logid:ce3ac740) ZoneWideStoragePoolAllocator to find storage pool
2021-10-27 17:59:22,128 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
(Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8)
(logid:ce3ac740) No suitable pools found for volume: Vol[211|vm=206|ROOT]
under cluster: 1
2021-10-27 17:59:22,128 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
(Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8)
(logid:ce3ac740) No suitable pools found
2021-10-27 17:59:22,128 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
(Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8)
(logid:ce3ac740) No suitable storagePools found under this Cluster: 1

It says that no storage pool is available.
However, I have enough space (under the 0.85 threshold with an
overprovisioning factor of 1.0), and I have enough CPU and RAM.
I do not understand what is blocking the provisioning of the system VMs.
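A hedged way to cross-check what the allocator sees for the pools it is
skipping, assuming direct read access to the cloud database (column names per
a 4.15-era schema, worth verifying):

  # Declared capacity, usage and state of the primary storage pools
  mysql -u cloud -p cloud -e \
    "SELECT id, name, status, scope, capacity_bytes, used_bytes
       FROM storage_pool WHERE removed IS NULL;"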

Best regards
Benoit

Le mer. 27 oct. 2021 à 18:19, Andrija Panic  a
écrit :

>  No suitable storagePools found under this Cluster: 1
>
> Can you check the mgmt log lines BEFORE this line above - there should be
> clear indication WHY no suitable storage pools are found (this is Primary
> Storage pool)
>
> Best,
>
> On Wed, 27 Oct 2021 at 18:04, benoit lair  wrote:
>
> > Hello guys,
> >
> > I have a important issue with secondary storage
> >
> > I have 2 nfs secondary storage and a ACS Mgmt server
> > I lost the system template vm id1 on both of Nfs sec storage servers
> > The ssvm and cpvm are destroyed
> > The template routing-1 has been deleted on all SR of hypervisors (xcp-ng)
> >
> > I am trying to recover the ACS system template workflow
> >
> > I have tried to reinstall the system vm template from ACS Mgmt server
> with
> > :
> >
> >
> >
> /usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt
> > -m /mnt/secondary -u
> >
> >
> https://download.cloudstack.org/systemvm/4.15/systemvmtemplate-4.15.1-xen.vhd.bz2
> > -h
> > <
> https://download.cloudstack.org/systemvm/4.15/systemvmtemplate-4.15.1-xen.vhd.bz2-h
> >
> > xenserver -s  -F
> >
> > It has recreated on NFS1 the directory tmpl/1/1 and uploaded the vhd file
> > and created the template.properties file
> >
> > I made the same on NFS2
> > on ACS Gui, it says me the template SystemVM Template (XenServer)  is
> ready
> > On nfs the vhd is present
> > But even after restarting the ACS mgmt server, it fails to restart the
> > system vm template with the following error in mgmt log file :
> >
> > 2021-10-27 17:59:22,128 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
> > (Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8)
> > (logid:ce3ac740) No suitable storagePools found under this Cluster: 1
> > 2021-10-27 17:59:22,129 DEBUG [c.c.a.t.Request]
> > (Work-Job-Executor-94:ctx-58cb275b job-2553/job-2649 ctx-fa7b1ea6)
> > (logid:02bb9549) Seq 1-873782770202889: Executing:  { Cmd , MgmtId:
> > 161064792470736, via: 1(xcp-cluster1-01), Ver: v1, Flags: 100111,
> >
> >
> [{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.
> > cloudstack.storage.to
> >
> .TemplateObjectTO":{"path":"159e620a-575d-43a8-9a57-f3c7f57a1c8a","origUrl":"
> >
> >
> https://download.cloudstack.org/systemvm/4.15/systemvmtemplate-4.15.1-xen.vhd.bz2
> >
> ","uuid":"a9151f22-f4bb-4f7a-983e-c8abd01f745b","id":"1","format":"VHD","accountId":"1","checksum":"{MD5}86373992740b1eca8aff8b08ebf3aea5","hvm":"false","displayText":"SystemVM
> > Template
> >
> >
> (XenServer)","imageDataStore":{"org.apach

ACS 4.15 - Disaster recovery after secondary storage issue

2021-10-27 Thread benoit lair
Hello guys,

I have an important issue with secondary storage.

I have 2 NFS secondary storages and an ACS management server.
I lost the system VM template (id 1) on both NFS secondary storage servers.
The SSVM and CPVM are destroyed.
The routing-1 template has been deleted on all SRs of the hypervisors (XCP-ng).

I am trying to recover the ACS system template workflow

I have tried to reinstall the system VM template from the ACS management
server with:

/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt
-m /mnt/secondary -u
https://download.cloudstack.org/systemvm/4.15/systemvmtemplate-4.15.1-xen.vhd.bz2
-h xenserver -s  -F

It recreated the tmpl/1/1 directory on NFS1, uploaded the VHD file and
created the template.properties file.

I did the same on NFS2.
In the ACS GUI, the SystemVM Template (XenServer) is shown as ready.
On NFS the VHD is present.
But even after restarting the ACS management server, it fails to start the
system VMs from the template, with the following error in the management log
file:

2021-10-27 17:59:22,128 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
(Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8)
(logid:ce3ac740) No suitable storagePools found under this Cluster: 1
2021-10-27 17:59:22,129 DEBUG [c.c.a.t.Request]
(Work-Job-Executor-94:ctx-58cb275b job-2553/job-2649 ctx-fa7b1ea6)
(logid:02bb9549) Seq 1-873782770202889: Executing:  { Cmd , MgmtId:
161064792470736, via: 1(xcp-cluster1-01), Ver: v1, Flags: 100111,
[{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"159e620a-575d-43a8-9a57-f3c7f57a1c8a","origUrl":"
https://download.cloudstack.org/systemvm/4.15/systemvmtemplate-4.15.1-xen.vhd.bz2","uuid":"a9151f22-f4bb-4f7a-983e-c8abd01f745b","id":"1","format":"VHD","accountId":"1","checksum":"{MD5}86373992740b1eca8aff8b08ebf3aea5","hvm":"false","displayText":"SystemVM
Template
(XenServer)","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"fbbf2bf0-ccc8-4df3-9794-c914f418a9d9","id":"2","poolType":"PreSetup","host":"localhost","path":"/fbbf2bf0-ccc8-4df3-9794-c914f418a9d9","port":"0","url":"PreSetup://localhost/fbbf2bf0-ccc8-4df3-9794-c914f418a9d9/?ROLE=Primary=fbbf2bf0-ccc8-4df3-9794-c914f418a9d9","isManaged":"false"}},"name":"routing-1","size":"(2.44
GB)
262144","hypervisorType":"XenServer","bootable":"false","uniqueName":"routing-1","directDownload":"false","deployAsIs":"false"}},"destTO":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"edb85ea0-d786-44f3-901b-e530bb2e6030","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"fbbf2bf0-ccc8-4df3-9794-c914f418a9d9","id":"2","poolType":"PreSetup","host":"localhost","path":"/fbbf2bf0-ccc8-4df3-9794-c914f418a9d9","port":"0","url":"PreSetup://localhost/fbbf2bf0-ccc8-4df3-9794-c914f418a9d9/?ROLE=Primary=fbbf2bf0-ccc8-4df3-9794-c914f418a9d9","isManaged":"false"}},"name":"ROOT-207","size":"(2.45
GB)
2626564608","volumeId":"212","vmName":"v-207-VM","accountId":"1","format":"VHD","provisioningType":"THIN","id":"212","deviceId":"0","hypervisorType":"XenServer","directDownload":"false","deployAsIs":"false"}},"executeInSequence":"true","options":{},"options2":{},"wait":"0","bypassHostMaintenance":"false"}}]
}
2021-10-27 17:59:22,129 DEBUG [c.c.a.m.DirectAgentAttache]
(DirectAgent-221:ctx-737e97d0) (logid:7a1a71eb) Seq 1-873782770202889:
Executing request
2021-10-27 17:59:22,132 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
(Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8)
(logid:ce3ac740) Could not find suitable Deployment Destination for this VM
under any clusters, returning.
2021-10-27 17:59:22,133 DEBUG [c.c.d.FirstFitPlanner]
(Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8)
(logid:ce3ac740) Searching all possible resources under this Zone: 1
2021-10-27 17:59:22,134 DEBUG [c.c.d.FirstFitPlanner]
(Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8)
(logid:ce3ac740) Listing clusters in order of aggregate capacity, that have
(at least one host with) enough CPU and RAM capacity under this Zone: 1
2021-10-27 17:59:22,137 DEBUG [c.c.d.FirstFitPlanner]
(Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8)
(logid:ce3ac740) Removing from the clusterId list these clusters from avoid
set: [1]
2021-10-27 17:59:22,138 DEBUG [c.c.h.x.r.XenServerStorageProcessor]
(DirectAgent-221:ctx-737e97d0) (logid:02bb9549) Catch Exception
com.xensource.xenapi.Types$UuidInvalid :VDI getByUuid for uuid:
159e620a-575d-43a8-9a57-f3c7f57a1c8a failed due to The uuid you supplied
was invalid.
2021-10-27 17:59:22,138 WARN  [c.c.h.x.r.XenServerStorageProcessor]
(DirectAgent-221:ctx-737e97d0) (logid:02bb9549) Unable to create volume;
Pool=volumeTO[uuid=edb85ea0-d786-44f3-901b-e530bb2e6030|path=null|datastore=PrimaryDataStoreTO[uuid=fbbf2bf0-ccc8-4df3-9794-c914f418a9d9|name=null|id=2|pooltype=PreSetup]];
Disk:
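
For what it's worth, the "VDI getByUuid ... failed" part suggests ACS still thinks the
systemvm template is present on the primary storage pool and is trying to copy the ROOT
volume from that (now deleted) VDI instead of re-copying the template from secondary
storage. A rough sketch of the checks this points to, assuming shell access to the
XCP-NG pool master and to the cloud database (table and column names as in 4.15, to be
verified, and take a database backup before touching anything):

# on the pool master: confirm the VDI referenced in the log is really gone
xe vdi-list uuid=159e620a-575d-43a8-9a57-f3c7f57a1c8a

# on the database: what ACS thinks is installed on the primary pool for template id 1
mysql -u cloud -p cloud -e "SELECT id, pool_id, template_id, install_path, download_state FROM template_spool_ref WHERE template_id = 1;"

If the VDI is gone but a template_spool_ref row still points at it, that stale row is
usually what has to be cleaned up before ACS will copy routing-1 from secondary storage
again.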

Re: ACS 4.15.1 Migration between NFS Secondary storage servers interrupted

2021-10-27 Thread benoit lair
I found why the other templates were not migrated to the 2nd sec storage
server:
the mgmt server was blocked on template id 1 (the system VM template).

In fact I had initialised the system VM template onto the 2nd sec storage
server from the ACS mgmt server with: cloud-install-sys-tmplt

So the migration was blocked on this operation.
Finally I tried to help the ACS server by moving the template id 1 folder from
the 1st sec storage to the 2nd sec storage.

I finally got the other templates downloaded onto my 2nd sec storage server.

But after restarting the ACS mgmt server, the template id 1 folder has been
deleted on both secondary storage servers.
I still have database entries for template id 1 on store_id 1 and store_id 2,
but the folder does not exist anymore! :/

How can I get back my system VM template? I can't start any system VM
routers.

Thanks for your help

Regards, Benoit

Le mar. 26 oct. 2021 à 16:56, Pearl d'Silva  a
écrit :

> One way to identify it would be to check the vm_template table for
> templates that are marked as public but do not have a url (i.e., null) -
> such templates should have been migrated but may have been skipped: in 4.15,
> public templates aren't migrated, as they get downloaded on all stores in a
> zone. However, such templates, i.e., templates created from volumes /
> snapshots that are marked as public, do not get synced. This was addressed
> in https://github.com/apache/cloudstack/pull/5404
>
>
> Thanks,
> ____
> From: benoit lair 
> Sent: Tuesday, October 26, 2021 8:01 PM
> To: users@cloudstack.apache.org 
> Subject: Re: ACS 4.15.1 Migration between NFS Secondary storage servers
> interrupted
>
> Hi Pearl,
>
> I am checking the logs of the mgmt server
> About the possibility that the template came from a volume, is there a way
> to check this in the database?
>
> Regards, Benoit
>
> Le mar. 26 oct. 2021 à 14:53, Pearl d'Silva  a
> écrit :
>
> > Hi Benoit,
> >
> > Can you please check the logs to see if the specific data objects were
> > skipped from being migrated because they couldn't be accomodated on the
> > destination store. Also, were these templates that were left behind
> created
> > from volumes / snapshots - in that case, in 4.15, it is a known issue to
> > skip those files, and has been addressed in 4.16.
> >
> > Thanks,
> > Pearl
> > 
> > From: benoit lair 
> > Sent: Tuesday, October 26, 2021 5:35 PM
> > To: users@cloudstack.apache.org ;
> > d...@cloudstack.apache.org 
> > Subject: Re: ACS 4.15.1 Migration between NFS Secondary storage servers
> > interrupted
> >
> > Hello Guys,
> >
> > I have still the problem on ACS 4.15
> > I am trying to migrate my first nfs secondary storage server to another
> nfs
> > server
> > ACS says in the events the migration is IMAGE.STORE.MIGRATE.DATA :
> > Successfully
> > completed migrating Image store data. Migrating files/data objects from :
> > NFS Secondary storage 001 to: [NFS Secondary storage 002]
> >
> > However, there are still templates hosted on the primary nfs server
> >
> > any ideas why the migration does not work as expected ?
> >
> > Regards, Benoit
> > Le mer. 20 oct. 2021 à 15:24, benoit lair  a
> écrit
> > :
> >
> > > Hello,
> > >
> > > I am trying to migrate my first NFS secondary storage to a second NFS
> one
> > > I asked for a migration with a migration policy "complete"
> > > The job is working but finishes before migrating all the data
> > >
> > > I had to relaunch the migration which continues
> > >
> > > Any ideas ?
> > >
> > > Regards, Benoit
> > >
> >
> >
> >
> >
>
>
>
>
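
A minimal sketch of the vm_template check described above, run against the cloud
database (column names as in 4.15, worth double-checking on your schema):

mysql -u cloud -p cloud -e "SELECT id, name, public, url, removed FROM vm_template WHERE public = 1 AND url IS NULL AND removed IS NULL;"

Any rows returned would be the public templates created from volumes/snapshots that the
4.15 migration skips, i.e. the case addressed by the pull request linked above.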


Re: ACS 4.15.1 Migration between NFS Secondary storage servers interrupted

2021-10-26 Thread benoit lair
Hi Pearl,

I am checking the logs of the mgmt server
About the possibility that the template came from a volume, is there a way to
check this in the database?

Regards, Benoit

Le mar. 26 oct. 2021 à 14:53, Pearl d'Silva  a
écrit :

> Hi Benoit,
>
> Can you please check the logs to see if the specific data objects were
> skipped from being migrated because they couldn't be accommodated on the
> destination store. Also, were the templates that were left behind created
> from volumes / snapshots? In that case, in 4.15, it is a known issue that
> those files are skipped, and it has been addressed in 4.16.
>
> Thanks,
> Pearl
> ____
> From: benoit lair 
> Sent: Tuesday, October 26, 2021 5:35 PM
> To: users@cloudstack.apache.org ;
> d...@cloudstack.apache.org 
> Subject: Re: ACS 4.15.1 Migration between NFS Secondary storage servers
> interrupted
>
> Hello Guys,
>
> I have still the problem on ACS 4.15
> I am trying to migrate my first nfs secondary storage server to another nfs
> server
> ACS says in the events the migration is IMAGE.STORE.MIGRATE.DATA :
> Successfully
> completed migrating Image store data. Migrating files/data objects from :
> NFS Secondary storage 001 to: [NFS Secondary storage 002]
>
> However, there are still templates hosted on the primary nfs server
>
> any ideas why the migration does not work as expected ?
>
> Regards, Benoit
> Le mer. 20 oct. 2021 à 15:24, benoit lair  a écrit
> :
>
> > Hello,
> >
> > I am trying to migrate my first NFS secondary storage to a second NFS one
> > I asked for a migration with a migration policy "complete"
> > The job is working but finishes before migrating all the data
> >
> > I had to relaunch the migration which continues
> >
> > Any ideas ?
> >
> > Regards, Benoit
> >
>
>
>
>


Re: ACS 4.15.1 Migration between NFS Secondary storage servers interrupted

2021-10-26 Thread benoit lair
Hello Guys,

I still have the problem on ACS 4.15.
I am trying to migrate my first NFS secondary storage server to another NFS
server.
In the events, ACS says the migration is IMAGE.STORE.MIGRATE.DATA: Successfully
completed migrating Image store data. Migrating files/data objects from :
NFS Secondary storage 001 to: [NFS Secondary storage 002]

However, there are still templates hosted on the first (source) NFS secondary
storage server.

Any ideas why the migration does not work as expected?

Regards, Benoit
Le mer. 20 oct. 2021 à 15:24, benoit lair  a écrit :

> Hello,
>
> I am trying to migrate my first NFS secondary storage to a second NFS one
> I asked for a migration with a migration policy "complete"
> The job is working but finishes before migrating all the data
>
> I had to relaunch the migration which continues
>
> Any ideas ?
>
> Regards, Benoit
>


Re: [!!Mass Mail]Re: ACS 4.15.1 / XCP-NG 8.2

2021-10-20 Thread benoit lair
Do you have the CS parameter enable.dynamic.scale.vm set to true?
Does your compute offering have the dynamic scaling checkbox selected?
Do you have the XS tools installed in your VM?

This works for CentOS 7 and CoreOS, but not for Debian at the moment.
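
A quick way to check and change the first point, as a sketch from a cloudmonkey session
pointed at the management server (some global settings only take effect after a
management server restart):

list configurations name=enable.dynamic.scale.vm
update configuration name=enable.dynamic.scale.vm value=true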

Regards, Benoit


Le mer. 20 oct. 2021 à 12:59, Дикевич Евгений Александрович <
evgeniy.dikev...@becloud.by> a écrit :

> Hi!
>
> If you find a solution, please share with community :)
> I have same problem :(
>
>
>
> -Original Message-
> From: Florian Noel [mailto:f.n...@webetsolutions.com]
> Sent: Wednesday, October 20, 2021 11:10 AM
> To: 'users@cloudstack.apache.org' 
> Subject: [!!Mass Mail]Re: ACS 4.15.1 / XCP-NG 8.2
>
> Hello,
> I had the same issue as Benoit with Cloudstack 4.15.1 and XCP-NG 8.2.
> I have read the link below and follow the recommendations but it's still
> impossible to scale up a VM.
> When we deploy a VM, memory-static-min and memory-static-max = memory set
> on the compute offering.
> Has anyone had the same issue and how did you solve it ?
> Thanks for your help.
> Best regards, Florian
>
> On 2021/10/19 14:17:22, nu...@li.nux.ro wrote:
> > Hello,>
> >
> > Give this a read, see if helps.>
> >
> > https://users.cloudstack.apache.narkive.com/f1vzARuw/dynamic-scaling-o
> > f-cpu-and-ram-not-working#post8>
> >
> > HTH>
> >
> > On 2021-10-19 14:42, benoit lair wrote:>
> > > Hello,>
> > > >
> > > I am trying to scale up a Debian VM from 4Go to 8Go ram> I ran into
> > > the following error :>
> > > >
> > > 2021-10-19 15:33:05,803 DEBUG >
> > > [c.c.h.x.r.w.x.CitrixScaleVmCommandWrapper]>
> > > (DirectAgent-156:ctx-62880cee) (logid:e5db9099) Catch exception>
> > > com.cloud.utils.exception.CloudRuntimeException when scaling >
> > > VM:i-2-83-VM> due to
> > > com.cloud.utils.exception.CloudRuntimeException: Cannot scale up >
> > > the>
> > > vm because of memory constraint violation: 0 <=>
> > > memory-static-min(4294967296) <= memory-dynamic-min(8589934592) <=>
> > > memory-dynamic-max(8589934592) <= memory-static-max(4294967296)>
> > > >
> > > Any ideas ?>
> > > >
> > > Regards, Benoit Lair>
> >
>
> Attention!
> This email and all attachments to it are confidential and are intended
> solely for use by the person (or persons) referred to (mentioned) as the
> intended recipient (recipients). If you are not the intended recipient of
> this email, do not copy or disclose its contents and delete the message and
> any attachments to it from your e-mail system. Any unauthorized use,
> dissemination, disclosure, printing or copying of this e-mail and files
> attached to it, except by the intended recipient, is illegal and is
> prohibited. Taking into account that data transmission via Internet is not
> secure, we assume no responsibility for any potential damage caused by data
> transmission errors or this message and the files attached to it.
>


Re: ACS 4.15.1 / XCP-NG 8.2

2021-10-20 Thread benoit lair
Thanks Nux,

We had already configured the parameter for enabling VM scaling.

However, it seems this works with CentOS 7 and CoreOS but not with Debian 10
or Debian 7.

Regards, Benoit

Le mar. 19 oct. 2021 à 16:17,  a écrit :

> Hello,
>
> Give this a read, see if helps.
>
>
> https://users.cloudstack.apache.narkive.com/f1vzARuw/dynamic-scaling-of-cpu-and-ram-not-working#post8
>
> HTH
>
> On 2021-10-19 14:42, benoit lair wrote:
> > Hello,
> >
> > I am trying to scale up a Debian VM from 4Go to 8Go ram
> > I ran into the following error :
> >
> > 2021-10-19 15:33:05,803 DEBUG
> > [c.c.h.x.r.w.x.CitrixScaleVmCommandWrapper]
> > (DirectAgent-156:ctx-62880cee) (logid:e5db9099) Catch exception
> > com.cloud.utils.exception.CloudRuntimeException when scaling
> > VM:i-2-83-VM
> > due to com.cloud.utils.exception.CloudRuntimeException: Cannot scale up
> > the
> > vm because of memory constraint violation: 0 <=
> > memory-static-min(4294967296) <= memory-dynamic-min(8589934592) <=
> > memory-dynamic-max(8589934592) <= memory-static-max(4294967296)
> >
> > Any ideas ?
> >
> > Regards, Benoit Lair
>


Re: [!!Mass Mail]Re: ACS 4.15.1 / XCP-NG 8.2

2021-10-20 Thread benoit lair
Hello evgeniy,

On which guest OS did you try to scale your VM?

Regards, Benoit Lair

Le mer. 20 oct. 2021 à 15:27, benoit lair  a écrit :

> Hello,
>
> Scaling up does not seem to work with Debian 10 (also tested with Debian 7).
> Scaling is working on XCP-NG 8.2 with CentOS 7 and CoreOS, even without
> installing the xen-tools.
>
> Regards, Benoit Lair
>
> Le mer. 20 oct. 2021 à 12:59, Дикевич Евгений Александрович <
> evgeniy.dikev...@becloud.by> a écrit :
>
>> Hi!
>>
>> If you find a solution, please share with community :)
>> I have same problem :(
>>
>>
>>
>> -Original Message-
>> From: Florian Noel [mailto:f.n...@webetsolutions.com]
>> Sent: Wednesday, October 20, 2021 11:10 AM
>> To: 'users@cloudstack.apache.org' 
>> Subject: [!!Mass Mail]Re: ACS 4.15.1 / XCP-NG 8.2
>>
>> Hello,
>> I had the same issue as Benoit with Cloudstack 4.15.1 and XCP-NG 8.2.
>> I have read the link below and follow the recommendations but it's still
>> impossible to scale up a VM.
>> When we deploy a VM, memory-static-min and memory-static-max = memory set
>> on the compute offering.
>> Has anyone had the same issue and how did you solve it ?
>> Thanks for your help.
>> Best regards, Florian
>>
>> On 2021/10/19 14:17:22, nu...@li.nux.ro wrote:
>> > Hello,>
>> >
>> > Give this a read, see if helps.>
>> >
>> > https://users.cloudstack.apache.narkive.com/f1vzARuw/dynamic-scaling-o
>> > f-cpu-and-ram-not-working#post8>
>> >
>> > HTH>
>> >
>> > On 2021-10-19 14:42, benoit lair wrote:>
>> > > Hello,>
>> > > >
>> > > I am trying to scale up a Debian VM from 4Go to 8Go ram> I ran into
>> > > the following error :>
>> > > >
>> > > 2021-10-19 15:33:05,803 DEBUG >
>> > > [c.c.h.x.r.w.x.CitrixScaleVmCommandWrapper]>
>> > > (DirectAgent-156:ctx-62880cee) (logid:e5db9099) Catch exception>
>> > > com.cloud.utils.exception.CloudRuntimeException when scaling >
>> > > VM:i-2-83-VM> due to
>> > > com.cloud.utils.exception.CloudRuntimeException: Cannot scale up >
>> > > the>
>> > > vm because of memory constraint violation: 0 <=>
>> > > memory-static-min(4294967296) <= memory-dynamic-min(8589934592) <=>
>> > > memory-dynamic-max(8589934592) <= memory-static-max(4294967296)>
>> > > >
>> > > Any ideas ?>
>> > > >
>> > > Regards, Benoit Lair>
>> >
>>
>> Attention!
>> This email and all attachments to it are confidential and are intended
>> solely for use by the person (or persons) referred to (mentioned) as the
>> intended recipient (recipients). If you are not the intended recipient of
>> this email, do not copy or disclose its contents and delete the message and
>> any attachments to it from your e-mail system. Any unauthorized use,
>> dissemination, disclosure, printing or copying of this e-mail and files
>> attached to it, except by the intended recipient, is illegal and is
>> prohibited. Taking into account that data transmission via Internet is not
>> secure, we assume no responsibility for any potential damage caused by data
>> transmission errors or this message and the files attached to it.
>>
>


Re: [!!Mass Mail]Re: ACS 4.15.1 / XCP-NG 8.2

2021-10-20 Thread benoit lair
Hello,

Scaling up does not seem to work with Debian 10 (also tested with Debian 7).
Scaling is working on XCP-NG 8.2 with CentOS 7 and CoreOS, even without
installing the xen-tools.

Regards, Benoit Lair

Le mer. 20 oct. 2021 à 12:59, Дикевич Евгений Александрович <
evgeniy.dikev...@becloud.by> a écrit :

> Hi!
>
> If you find a solution, please share with community :)
> I have same problem :(
>
>
>
> -Original Message-
> From: Florian Noel [mailto:f.n...@webetsolutions.com]
> Sent: Wednesday, October 20, 2021 11:10 AM
> To: 'users@cloudstack.apache.org' 
> Subject: [!!Mass Mail]Re: ACS 4.15.1 / XCP-NG 8.2
>
> Hello,
> I had the same issue as Benoit with Cloudstack 4.15.1 and XCP-NG 8.2.
> I have read the link below and follow the recommendations but it's still
> impossible to scale up a VM.
> When we deploy a VM, memory-static-min and memory-static-max = memory set
> on the compute offering.
> Has anyone had the same issue and how did you solve it ?
> Thanks for your help.
> Best regards, Florian
>
> On 2021/10/19 14:17:22, nu...@li.nux.ro wrote:
> > Hello,>
> >
> > Give this a read, see if helps.>
> >
> > https://users.cloudstack.apache.narkive.com/f1vzARuw/dynamic-scaling-o
> > f-cpu-and-ram-not-working#post8>
> >
> > HTH>
> >
> > On 2021-10-19 14:42, benoit lair wrote:>
> > > Hello,>
> > > >
> > > I am trying to scale up a Debian VM from 4Go to 8Go ram> I ran into
> > > the following error :>
> > > >
> > > 2021-10-19 15:33:05,803 DEBUG >
> > > [c.c.h.x.r.w.x.CitrixScaleVmCommandWrapper]>
> > > (DirectAgent-156:ctx-62880cee) (logid:e5db9099) Catch exception>
> > > com.cloud.utils.exception.CloudRuntimeException when scaling >
> > > VM:i-2-83-VM> due to
> > > com.cloud.utils.exception.CloudRuntimeException: Cannot scale up >
> > > the>
> > > vm because of memory constraint violation: 0 <=>
> > > memory-static-min(4294967296) <= memory-dynamic-min(8589934592) <=>
> > > memory-dynamic-max(8589934592) <= memory-static-max(4294967296)>
> > > >
> > > Any ideas ?>
> > > >
> > > Regards, Benoit Lair>
> >
>
> Attention!
> This email and all attachments to it are confidential and are intended
> solely for use by the person (or persons) referred to (mentioned) as the
> intended recipient (recipients). If you are not the intended recipient of
> this email, do not copy or disclose its contents and delete the message and
> any attachments to it from your e-mail system. Any unauthorized use,
> dissemination, disclosure, printing or copying of this e-mail and files
> attached to it, except by the intended recipient, is illegal and is
> prohibited. Taking into account that data transmission via Internet is not
> secure, we assume no responsibility for any potential damage caused by data
> transmission errors or this message and the files attached to it.
>


ACS 4.15.1 Migration between NFS Secondary storage servers interrupted

2021-10-20 Thread benoit lair
Hello,

I am trying to migrate my first NFS secondary storage to a second NFS one
I asked for a migration with a migration policy "complete"
The job is working but finishes before migrating all the data

I had to relaunch the migration which continues

Any ideas ?
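
For reference, the API call behind that UI action can also be driven from cloudmonkey,
which makes it easier to watch the async job; a sketch, with parameter names as exposed
by 4.15 (worth confirming against the api listing in your install):

list imagestores
migrate secondarystoragedata srcpool=<source-image-store-id> destpools=<destination-image-store-id> migrationtype=complete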

Regards, Benoit


ACS 4.15.1 / XCP-NG 8.2

2021-10-19 Thread benoit lair
Hello,

I am trying to scale up a Debian VM from 4Go to 8Go ram
I ran into the following error :

2021-10-19 15:33:05,803 DEBUG [c.c.h.x.r.w.x.CitrixScaleVmCommandWrapper]
(DirectAgent-156:ctx-62880cee) (logid:e5db9099) Catch exception
com.cloud.utils.exception.CloudRuntimeException when scaling VM:i-2-83-VM
due to com.cloud.utils.exception.CloudRuntimeException: Cannot scale up the
vm because of memory constraint violation: 0 <=
memory-static-min(4294967296) <= memory-dynamic-min(8589934592) <=
memory-dynamic-max(8589934592) <= memory-static-max(4294967296)

Any ideas ?
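
The numbers in the error are the interesting part: memory-static-max is 4294967296
(4 GB, the current RAM) while memory-dynamic-min/max are 8589934592 (the 8 GB target),
and XenServer requires memory-static-min <= memory-dynamic-min <= memory-dynamic-max <=
memory-static-max. A quick way to see what the VM actually has, as a sketch run on the
pool master (the VM uuid is a placeholder to look up first):

xe vm-list name-label=i-2-83-VM params=uuid
xe vm-param-get uuid=<vm-uuid> param-name=memory-static-max
xe vm-param-get uuid=<vm-uuid> param-name=memory-dynamic-max

If memory-static-max equals the current RAM, scaling above it will always fail; the
usual fix is on the CloudStack side (dynamic scaling enabled globally and on the
offering, plus guest tools in the VM) so that VMs get deployed with a higher static max.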

Regards, Benoit Lair


Re: Size of the snapshots volume

2021-10-19 Thread benoit lair
Hello Yordan,

I had the same results with XCP-NG 8.2 and ACS 4.15.1.

The maximum space ever filled during the life of the disk becomes the size of
the snapshot.

That's why I am looking towards SDS, with a solution giving me the possibility
to do thin provisioning with XCP-NG.
I was thinking about an SDS which could give me block storage, or at least
file storage, and act as a proxy between my iSCSI array and my XCP-NG hosts.

Linstor could be a solution, but for the moment I don't know if the plugin
will be compatible with XCP-NG.
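
To see what actually lands on secondary storage per snapshot, a quick check (a sketch;
the snapshots/<account-id>/<volume-id> layout is the default for NFS secondary storage,
and the mount point is whatever you use):

ls -lh /mnt/secondary/snapshots/<account-id>/<volume-id>/
du -sh /mnt/secondary/snapshots/<account-id>/<volume-id>/*.vhd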

Regards, Benoit

Le mar. 19 oct. 2021 à 11:46, Yordan Kostov  a écrit :

> Hello Benoit,
>
> Here are some results - 4.15.2 + XCP-NG. I made 2 VMs from
> template - Centos 7, 46 GB hdd, 4% full
> - VM1 - root disk is as full as template.
> - VM2 - root disk is made full up to ~90%  ( cat /dev/zero >
> test_file1 )then the file was removed so the used space is again 4%.
> - scheduled backup goes through both VMs. First snapshot size is
> - VM1 -  2.3G
> - VM2 -  41G
> - Then on VM2 this script was run to fill and empty the disk again
> - cat /dev/zero > /opt/test_file1; sync; rm /opt/ test_file1.
> - scheduled backup goes through both VMs. All snapshots size is:
> - VM1 - 2.3G
> - VM2 - 88G
>
> Once the disk is filled you will get a snapshot with size no less
> than the size of the whole disk.
> Maybe there is a way to shrink it but I could not find it.
>
> Best regards,
> Jordan
>
> -Original Message-
> From: Yordan Kostov 
> Sent: Tuesday, October 12, 2021 3:58 PM
> To: users@cloudstack.apache.org
> Subject: RE: Size of the snapshots volume
>
>
> [X] This message came from outside your organization
>
>
> Hello Benoit,
>
> Unfortunately no.
> When I do it I will make sure to drop a line here.
>
> Best regards,
> Jordan
>
> -Original Message-
> From: benoit lair 
> Sent: Tuesday, October 12, 2021 3:40 PM
> To: users@cloudstack.apache.org
> Subject: Re: Size of the snapshots volume
>
>
> [X] This message came from outside your organization
>
>
> Hello Jordan,
>
> Could you proceed to your tests ? Have you got the same results ?
>
> Regards, Benoit Lair
>
> Le lun. 4 oct. 2021 à 17:59, Yordan Kostov  a écrit
> :
>
> > Here are a few considerations:
> >
> > - First snapshot of volume is always full snap.
> > - XenServer/XCP-NG backups are always thin.
> > - Thin provisioning calculations never go down. Even if you delete
> > data from disk.
> >
> > As you filled the disk of the VM to the top, thin provisioning treats
> > it as a full VM from that moment on even if data is deleted. So the full
> > snap that will be migrated to NFS will always be of max size.
> >
> > I am not 100% certain as I am yet to start running backup tests.
> >
> > Best regards,
> > Jordan
> >
> > -Original Message-
> > From: Florian Noel 
> > Sent: Monday, October 4, 2021 6:22 PM
> > To: 'users@cloudstack.apache.org' 
> > Subject: Size of the snapshots volume
> >
> >
> > [X] This message came from outside your organization
> >
> >
> > Hi,
> >
> > I've a question about the snapshots volume in Cloudstack
> >
> > When we take a snapshot of a volume, this create a VHD file on the
> > secondary storage.
> > Snapshot size doesn't match volume size used.
> >
> > Imagine a volume of 20GB, we fill the volume and empty it just after.
> > We take a snapshot of the volume from Cloudstack frontend and its size
> > is 20GB on the secondary storage while the volume is empty.
> >
> > We've made the same test with volume provisioning in thin, sparse and
> fat.
> > The results are the same.
> >
> > We use Cloudstack 4.15.1 with XCP-NG 8.1. The LUNs are connected in
> > iSCSI on the hypervisors XCP.
> >
> > Thanks for your help.
> >
> > Best regards.
> >
> >

Re: Size of the snapshots volume

2021-10-12 Thread benoit lair
Hello Jordan,

Were you able to proceed with your tests? Did you get the same results?

Regards, Benoit Lair

Le lun. 4 oct. 2021 à 17:59, Yordan Kostov  a écrit :

> Here are a few considerations:
>
> - First snapshot of volume is always full snap.
> - XenServer/XCP-NG backups are always thin.
> - Thin provisioning calculations never go down. Even if you delete data
> from disk.
>
> As you filled the disk of the VM to the top, thin provisioning treats it
> as a full VM from that moment on, even if data is deleted. So the full snap
> that will be migrated to NFS will always be of max size.
>
> I am not 100% certain as I am yet to start running backup tests.
>
> Best regards,
> Jordan
>
> -Original Message-
> From: Florian Noel 
> Sent: Monday, October 4, 2021 6:22 PM
> To: 'users@cloudstack.apache.org' 
> Subject: Size of the snapshots volume
>
>
> [X] This message came from outside your organization
>
>
> Hi,
>
> I've a question about the snapshots volume in Cloudstack
>
> When we take a snapshot of a volume, this creates a VHD file on the
> secondary storage.
> The snapshot size doesn't match the volume size used.
>
> Imagine a volume of 20GB, we fill the volume and empty it just after.
> We take a snapshot of the volume from Cloudstack frontend and its size is
> 20GB on the secondary storage while the volume is empty.
>
> We've made the same test with volume provisioning in thin, sparse and fat.
> The results are the same.
>
> We use Cloudstack 4.15.1 with XCP-NG 8.1. The LUNs are connected in iSCSI
> on the hypervisors XCP.
>
> Thanks for your help.
>
> Best regards.
>
>
> Florian Noel
>
> Administrateur Systèmes Et Réseaux
>
> 02 35 78 11 90
>
> 705 Avenue Isaac Newton
>
> 76800 Saint-Etienne-Du-Rouvray
>
>
>
>


Feature Cloudstack 4.15

2021-09-03 Thread benoit lair
Hi ,

I am trying to use the Backup and Recovery Framework with ACS 4.15.1.

I would like to implement it with XCP-NG servers.
What I see is that only Veeam with VMware is ready.

Would it be possible to have an interface in order to define a custom
External Provider (3rd-party backup solutions like Bacula, Amanda or
BackupPC), as described here:

https://cwiki.apache.org/confluence/display/CLOUDSTACK/Backup+and+Recovery+Framework

I was thinking about a form giving the commands to execute for each type of
backup API call of the framework.
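
For context, the consumer side of the framework is already a fixed set of API calls, so
a provider mostly has to map those to its own commands; roughly, from a cloudmonkey
session (API and parameter names as in 4.15, to be confirmed against your install, ids
are placeholders):

list backupproviders
import backupoffering name=<name> description=<desc> zoneid=<zone-id> externalid=<provider-offering-id> allowuserdrivenbackups=true
assign virtualmachinetobackupoffering virtualmachineid=<vm-id> backupofferingid=<offering-id>
create backup virtualmachineid=<vm-id>
list backups virtualmachineid=<vm-id>
restore backup id=<backup-id>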


Thanks for your help and ideas

Regards, Benoit


Re: XCP-ng Backup Cloudstack 4.15

2021-09-02 Thread benoit lair
Is there a way to implement a custom external provider ourselves in order to
back up VMs?

Le jeu. 2 sept. 2021 à 15:52, benoit lair  a écrit :

> Hello,
>
> I am interested too in doing Backup VM for Xcp-NG
> Would you have a solution for using Veeam like Yordan aims ?
>
> Le lun. 12 juil. 2021 à 13:30, Abishek Budhathoki  a
> écrit :
>
>> Thank you for the response. Really appreciated.
>>
>> On 2021/07/12 09:41:07, Rohit Yadav  wrote:
>> > Hi Abishek,
>> >
>> > That's right, the current Backup & Recovery framework only supports
>> Veeam provider on VMware.
>> >
>> > For XenServer/xcpng, we don't have a plugin/provider, however volume
>> snapshots can be used to backup snapshots on secondary storage.
>> >
>> > Regards.
>> >
>> > Regards,
>> > Rohit Yadav
>> >
>> > 
>> > From: Abishek Budhathoki 
>> > Sent: Saturday, July 10, 2021 7:42:12 PM
>> > To: users@cloudstack.apache.org 
>> > Subject: XCP-ng Backup Cloudstack 4.15
>> >
>> > Hello EveryOne,
>> >
>> > I am trying cloudstack with xen environment. I was trying out the
>> backup feature of the cloudstack and was not able to achieve it. Does the
>> backup work in xen environment or it strictly only works with vmware only.
>> >
>> >
>> >
>> >
>> >
>> >
>>
>


Re: instance backup designs?

2021-09-02 Thread benoit lair
Hello Rohit,

How could it be possible to implement a custom backup provider ?

Le ven. 18 juin 2021 à 09:39, Yordan Kostov  a écrit :

> Thank you Rohit,
>
> I am all over it .
>
> Regards,
> Jordan
>
> -Original Message-
> From: Rohit Yadav 
> Sent: Thursday, June 17, 2021 6:21 PM
> To: users@cloudstack.apache.org
> Subject: Re: instance backup designs?
>
>
> [X] This message came from outside your organization
>
>
> Hi Yordan,
>
> We do have a backup & recovery framework which can be extended to
> implement support for new solutions, the current provider/plugin is
> available only for Vmware/Veeam and which can be used to implement support
> for other backup solutions for other hypervisors.
>
> While there is no choice now, for XenServer/XCP-NG you can use volume
> snapshots as a way to have backups volumes on secondary storage.
>
>
> Regards.
>
> 
> From: Yordan Kostov 
> Sent: Wednesday, June 16, 2021 18:46
> To: users@cloudstack.apache.org 
> Subject: instance backup designs?
>
> Hey everyone,
>
> I was wondering what choice does one have for backup when
> underlying hypervisor is XenServer/XCP-NG?
> Any high level ideas or just sharing any doc that may
> exist will be great!
>
> Best regards,
> Jordan
>
>
>
>
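
Until a XenServer/XCP-NG provider exists, the volume snapshot route Rohit describes in
the quoted reply can at least be automated with a recurring snapshot policy; a sketch
via cloudmonkey (for a DAILY policy the schedule field is minute:hour; ids are
placeholders and parameter names are worth confirming locally):

list volumes virtualmachineid=<vm-id> type=ROOT
create snapshotpolicy volumeid=<volume-id> intervaltype=DAILY schedule=00:02 timezone=UTC maxsnaps=7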


Re: XCP-ng Backup Cloudstack 4.15

2021-09-02 Thread benoit lair
Hello,

I am interested too in doing VM backup for XCP-NG.
Would you have a solution for using Veeam, as Yordan aims to?

Le lun. 12 juil. 2021 à 13:30, Abishek Budhathoki  a
écrit :

> Thank you for the response. Really appreciated.
>
> On 2021/07/12 09:41:07, Rohit Yadav  wrote:
> > Hi Abishek,
> >
> > That's right, the current Backup & Recovery framework only supports
> Veeam provider on VMware.
> >
> > For XenServer/xcpng, we don't have a plugin/provider, however volume
> snapshots can be used to backup snapshots on secondary storage.
> >
> > Regards.
> >
> > Regards,
> > Rohit Yadav
> >
> > 
> > From: Abishek Budhathoki 
> > Sent: Saturday, July 10, 2021 7:42:12 PM
> > To: users@cloudstack.apache.org 
> > Subject: XCP-ng Backup Cloudstack 4.15
> >
> > Hello EveryOne,
> >
> > I am trying cloudstack with xen environment. I was trying out the backup
> feature of the cloudstack and was not able to achieve it. Does the backup
> work in xen environment or it strictly only works with vmware only.
> >
> >
> >
> >
> >
> >
>


Re: ACS 4.15.2 with xcp-ng 8.2

2021-08-26 Thread benoit lair
Hi Rohit,

I changed my locale from fr to us and it worked after rebooting my mgmt
server.
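
For anyone hitting the same thing, the change itself is small; a sketch on CentOS 7
(assuming the English locale is acceptable for the management server; a restart of the
service should be enough once the locale is in place):

localectl set-locale LANG=en_US.UTF-8
localectl status
systemctl restart cloudstack-management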

Thanks a lot

Regards

Le mer. 25 août 2021 à 18:57, Rohit Yadav  a
écrit :

> Hi Benoit,
>
> Do you have non-English locale set on the management server? I think we've
> some similar things fixed that'll come in 4.15.2/4.16 cc @Nicolas
> Vazquez<mailto:nicolas.vazq...@shapeblue.com> can confirm.
>
> Regards.
> ________
> From: benoit lair 
> Sent: Wednesday, August 25, 2021 6:49:30 PM
> To: users@cloudstack.apache.org 
> Subject: ACS 4.15.2 with xcp-ng 8.2
>
> Hi,
>
> I have installed an ACS 4.15.2 on Centos 7
> I created my zone, pod and first cluster
> I tried to ad my xcp-ng 8.2 cluster to ACS with Xenserver hypervisor type
>
> It fails when i try to list servers with error : "For input string: "0,01""
>
> Also it recognizes the CPUs, ram and storage but no way to have access to
> the hypervisors
>
> Any idea ?
>
> Regards, Benoit
>
>
>
>


ACS 4.15.2 with xcp-ng 8.2

2021-08-25 Thread benoit lair
Hi,

I have installed ACS 4.15.2 on CentOS 7.
I created my zone, pod and first cluster.
I tried to add my XCP-NG 8.2 cluster to ACS with the XenServer hypervisor type.

It fails when I try to list the servers, with the error: "For input string: "0,01""

Also, it recognizes the CPUs, RAM and storage, but there is no way to get
access to the hypervisors.

Any idea ?

Regards, Benoit


Re: Upgrading XenServer Clusters managed by ACS...

2021-03-31 Thread benoit lair
Hello David,

We have an ACS 4.3 install with some XS 6.2.0 clusters.
Do you think we could perform an upgrade from these XS 6.2.0 clusters to ACS
4.11 in the same way?

The end goal would be to move our ACS 4.3 to ACS 4.15 in order to eventually
convert our XS 6.2 to XCP 8.2.

@Andrija, do you think this could be achieved?
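
For the per-host part of David's procedure below, the maintenance cycle can also be
driven through the API; a sketch from a cloudmonkey session (ids are placeholders):

list hosts type=Routing clusterid=<cluster-id>
prepare hostformaintenance id=<host-id>
# ...upgrade the hypervisor, reboot it, wait for it to reconnect...
cancel hostmaintenance id=<host-id>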

Thanks a lot

Le mar. 14 juil. 2020 à 04:34, David Merrill  a
écrit :

> Reporting in on this - turns out was fairly painless (I never ended up
> goofing around with host tags at all).
>
>
>
> Here's an updated to-do list (with some observations):
>
>
>
>   1.  In XenCenter – if HA is enabled for the XenServer pool, disable it
>   2.  Stop ACS management/usage services
>   3.  Do MySQL database backups
>   4.  Start ACS management/usage services
>   5.  Start with the pool master
>   *   In ACS – Put the pool master into maintenance mode (this
> migrate all guest VMs to other hosts in the cluster)
>   *   In ACS – Un-manage the cluster (this keeps any activity
> from happening in the pool)
>   *   NOW – Upgrade the XenServer pool master to the latest
> release
>
>   i.  Do this by picking up the “correct”
> ISO from Citrix, burning it to a CD/DVD/USB-stick, booting the host with it
> & performing a manual upgrade to the version of XenServer you’re going to
>
> ii.  Before upgrading the installer should
> make a backup of the existing installation
>
>iii.  When the upgrade is complete you’ll
> be prompted to reboot the host
>
>iv.  OBSERVATION (after booting):
>
> *   The XenServer console (the ncurses interface)
> still said XenServer 6.5 in the upper-left-hand corner (which led me to
> believe that the upgrade hadn’t worked)
> *   However when I reconnected with XenCenter it
> reported XenServer 7.1 CU2 was installed
> *   So…OK, fine?
> *   There were no host tags for that newly upgraded
> host, so my to-do to remove the tag wasn’t necessary
>   *   In ACS – Re-manage the cluster
>   *   In ACS – Exit maintenance-mode for the newly upgraded
> host
>   *   In ACS – Observe that the newly upgraded host is
> “Enabled” and “Up” in the UI (Infrastructure > Hosts)
>   *   OBSERVATION:
>
>   i.  After finishing the 2 steps above on
> checking in XenCenter the host now had the host tag:
> vmops-version-com.cloud.hypervisor.xenerver.resource.XenServer650Resource-4.11.3.0
>
> ii.  Scripts in /opt/cloud/bin have a
> timestamp that coincides with the cluster being re-managed and the pool
> master coming out of maintenance mode
>
>   1.  In ACS – Testing (e.g. move an existing router/VM to the upgraded
> host, create new networks/VMs on the upgraded host)
>
>   i.  OBSERVATION:
>
> *   Moving existing router/VMs to the upgraded host
> worked
> *   I was not able to create new VMs until all pool
> members were at the same level of XenServer
>   1.  Rinse & repeat with the remaining XenServer pool members in the ACS
> cluster
>   *   Follow the same steps as the pool master EXCEPT do not
> un-manage/re-manage the cluster in ACS (no need to do so really although
> from the perspective of operators new VM creation is clearly not possible
> until were done and who knows maybe you don’t really want folks trying to
> take actions while you’re in the middle of all this?)
>   *   OBSERVATION (unexpected):
>
>   i.  I noticed that even before I had
> brought a newly upgraded pool member out of maintenance in ACS that the
> following host tag
> vmops-version-com.cloud.hypervisor.xenerver.resource.XenServer650Resource-4.11.3.0
> was already there
>
> ii.  AND that the Scripts in
> /opt/cloud/bin had a timestamp that coincides with the pool member’s recent
> reboot
>
>   1.  In XenCenter – if HA was enabled at the start, re-enable it
>
>
>
> So my lab pool is up and running upgraded from XenServer 6.5 to XenServer
> 7.1.2 CU2 LTSR and so far, CloudStack 4.11.3 seems to be happy with it.
>
>
>
> Next steps are to apply the latest XenServer hotfixes (following the same
> recipe above) and re-test activities in ACS.
>
>
>
> Thanks,
>
> David
>
>
>
> David Merrill
>
> Senior Systems Engineer,
>
> Managed and Private/Hybrid Cloud Services
>
> OTELCO
>
> 92 Oak Street, Portland ME 04101
>
> office 207.772.5678 
>
> http://www.otelco.com/cloud-and-managed-services
>
> Confidentiality Message
>
> The information contained in this e-mail transmission may be confidential
> and legally privileged. If you are not the intended recipient, you are
> notified that any dissemination, distribution, copying or other use 

Re: integrated cloudstack with XCP-ng 7.6

2019-09-12 Thread benoit lair
Hello,

So, was this OK to get XCP 7.6 working with ACS 4.12?
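
For the record, adding an XCP-NG host goes through the same call as a XenServer host, as
Dag says below; a sketch via cloudmonkey (ids and credentials are placeholders):

add host zoneid=<zone-id> podid=<pod-id> clusterid=<cluster-id> hypervisor=XenServer username=root password=<root-password> url=http://<xcp-ng-host-ip>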

Le jeu. 14 févr. 2019 à 18:46, Dag Sonstebo  a
écrit :

> Fabio - you can just use the same procedure as for XenServer.
>
> Regards,
> Dag Sonstebo
> Cloud Architect
> ShapeBlue
>
>
> On 14/02/2019, 17:18, "Fabio Cesario"  wrote:
>
> I'm using 4.12.0.0, but the integration with xcp-ng is not clear in
> the documentation.
> thanks
>
>
> -Mensagem original-
> De: Rafael Weingärtner 
> Enviada em: quinta-feira, 14 de fevereiro de 2019 15:08
> Para: users 
> Assunto: Re: integrated cloudstack with XCP-ng 7.6
>
> I am not sure if ACS is supporting this version of XCP-NG already.
>
> What ACS version are you using?
>
> On Thu, Feb 14, 2019 at 2:47 PM Fabio Cesario 
> wrote:
>
> > Hi,
> >
> > I'm trying to implement the cloudstack integrated with XCP-NG 7.6,
> but
> > in none documentation of CloudStack have the procedure. Can someone
> > that can send to me?
> >
> > Thanks
> >
> >
> >
> > [image: assinatura_fabio]
> >
> >
> >
> >
> >
> >
> >
>
>
> --
> Rafael Weingärtner
>
>
>
> dag.sonst...@shapeblue.com
> www.shapeblue.com
> Amadeus House, Floral Street, London  WC2E 9DPUK
> @shapeblue
>
>
>
>


Re: Container Service

2017-12-11 Thread benoit lair
Okay, Thanks Daan.
Going to look at it asap.

2017-12-01 16:05 GMT+01:00 Daan Hoogland <daan.hoogl...@shapeblue.com>:

> If I recall correctly there must be an older version for 4.5 somewhere in
> that repo. I’m not sure though. You can try that one but no warranty beyond
> the end of this sentence. None was given anyway ;)
> All in all you will have to code some, probably. Send us a pull request if
> you get it to work.
>
> On 01/12/2017, 15:56, "benoit lair" <kurushi4...@gmail.com> wrote:
>
> Hello Daan,
>
> Do you know if it could pass with a ACS 4.3 ? Do i have to upgrade
> towards
> ACS 4.6 ?
>
> Thanks
>
> 2017-12-01 15:34 GMT+01:00 Daan Hoogland <daan.hoogl...@shapeblue.com
> >:
>
> > benoit, it is at the shapeblue website.
> >
> > There released version is for 4.6 found at the site
> > http://www.shapeblue.com/cloudstack-container-service/
> > and there is a branch in the repo for 4.10
> https://github.com/shapeblue/
> > ccs/pull/39 but this has not been released.
> >
> > On 01/12/2017, 15:01, "benoit lair" <kurushi4...@gmail.com> wrote:
> >
> > Hello,
> >
> > I would like to know where can i find the ccs plugin.
> > Also i would like to know if it can work with ACS 4.3 ?
> >
> > Thanks
> >
> >
> > 2017-07-25 14:56 GMT+02:00 Simon Weller <swel...@ena.com.invalid
> >:
> >
> > > Grégoire,
> > >
> > > We have tested it on 4.8, but not 4.9.
> > >
> > > - Si
> > > <http://www.linkedin.com/company/15330>
> > >
> > >
> > >
> > > 
> > > From: Grégoire Lamodière <g.lamodi...@dimsi.fr>
> > > Sent: Tuesday, July 25, 2017 2:31 AM
> > > To: users@cloudstack.apache.org
> > > Subject: RE: Container Service
> > >
> > > Hi Simon,
> > >
> > > Thanks a lot, I'll have a look.
> > > Have you implement CCS on 4.9.2 ?
> > >
> > > I'll make a try before we start production on the new zone.
> > >
> > > Grégoire
> > >
> > > ---
> > > Grégoire Lamodière
> > > T/ + 33 6 76 27 03 31
> > > F/ + 33 1 75 43 89 71
> > >
> > >
> > > -Message d'origine-
> > > De : Simon Weller [mailto:swel...@ena.com.INVALID]
> > > Envoyé : lundi 24 juillet 2017 23:10
> > > À : users@cloudstack.apache.org
> > > Objet : Re: Container Service
> > >
> > > Grégoire,
> > >
> > >
> > > Take a look at the URLs below:
> > >
> > >
> > > Code and Docs: https://github.com/shapeblue/ccs
> > >
> > >
> > > Packages: http://packages.shapeblue.com/ccs/
> > >
> > > - Si
> > >
> > > 
> > > From: Grégoire Lamodière <g.lamodi...@dimsi.fr>
> > > Sent: Monday, July 24, 2017 2:36 PM
> > > To: users@cloudstack.apache.org
> > > Subject: Container Service
> > >
> > > Dear All,
> > >
> > > Does anyone know the current status of Container Server ?
> > > I remember Gilles talking about this in Berlin last year, but
> all
> > links
> > > sound down (Except the homepage of the module).
> > > I cannot find install guide / any technical docs, nor packages.
> > >
> > > I would really like making some tries on this since we are now
> almost
> > > working on 4.9.2.
> > >
> > > Cheers.
> > >
> > > ---
> > > Grégoire Lamodière
> > > T/ + 33 6 76 27 03 31
> > > F/ + 33 1 75 43 89 71
> > >
> > >
> >
> >
> >
> > daan.hoogl...@shapeblue.com
> > www.shapeblue.com
> > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> > @shapeblue
> >
> >
> >
> >
>
>
>
> daan.hoogl...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
>


Re: Container Service

2017-12-01 Thread benoit lair
Hello Daan,

Do you know if it could work with ACS 4.3? Do I have to upgrade to
ACS 4.6?

Thanks

2017-12-01 15:34 GMT+01:00 Daan Hoogland <daan.hoogl...@shapeblue.com>:

> benoit, it is at the shapeblue website.
>
> The released version is for 4.6, found at the site
> http://www.shapeblue.com/cloudstack-container-service/
> and there is a branch in the repo for 4.10 https://github.com/shapeblue/
> ccs/pull/39 but this has not been released.
>
> On 01/12/2017, 15:01, "benoit lair" <kurushi4...@gmail.com> wrote:
>
> Hello,
>
> I would like to know where can i find the ccs plugin.
> Also i would like to know if it can work with ACS 4.3 ?
>
> Thanks
>
>
> 2017-07-25 14:56 GMT+02:00 Simon Weller <swel...@ena.com.invalid>:
>
> > Grégoire,
> >
> > We have tested it on 4.8, but not 4.9.
> >
> > - Si
> > <http://www.linkedin.com/company/15330>
> >
> >
> >
> > 
> > From: Grégoire Lamodière <g.lamodi...@dimsi.fr>
> > Sent: Tuesday, July 25, 2017 2:31 AM
> > To: users@cloudstack.apache.org
> > Subject: RE: Container Service
> >
> > Hi Simon,
> >
> > Thanks a lot, I'll have a look.
> > Have you implement CCS on 4.9.2 ?
> >
> > I'll make a try before we start production on the new zone.
> >
> > Grégoire
> >
> > ---
> > Grégoire Lamodière
> > T/ + 33 6 76 27 03 31
> > F/ + 33 1 75 43 89 71
> >
> >
> > -Message d'origine-
> > De : Simon Weller [mailto:swel...@ena.com.INVALID]
> > Envoyé : lundi 24 juillet 2017 23:10
> > À : users@cloudstack.apache.org
> > Objet : Re: Container Service
> >
> > Grégoire,
> >
> >
> > Take a look at the URLs below:
> >
> >
> > Code and Docs: https://github.com/shapeblue/ccs
> >
> >
> > Packages: http://packages.shapeblue.com/ccs/
> >
> > - Si
> >
> > 
> > From: Grégoire Lamodière <g.lamodi...@dimsi.fr>
> > Sent: Monday, July 24, 2017 2:36 PM
> > To: users@cloudstack.apache.org
> > Subject: Container Service
> >
> > Dear All,
> >
> > Does anyone know the current status of Container Server ?
> > I remember Gilles talking about this in Berlin last year, but all
> links
> > sound down (Except the homepage of the module).
> > I cannot find install guide / any technical docs, nor packages.
> >
> > I would really like making some tries on this since we are now almost
> > working on 4.9.2.
> >
> > Cheers.
> >
> > ---
> > Grégoire Lamodière
> > T/ + 33 6 76 27 03 31
> > F/ + 33 1 75 43 89 71
> >
> >
>
>
>
> daan.hoogl...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
>


Re: Container Service

2017-12-01 Thread benoit lair
Hello,

I would like to know where I can find the CCS plugin.
Also, I would like to know if it can work with ACS 4.3.

Thanks


2017-07-25 14:56 GMT+02:00 Simon Weller :

> Grégoire,
>
> We have tested it on 4.8, but not 4.9.
>
> - Si
> 
>
>
>
> 
> From: Grégoire Lamodière 
> Sent: Tuesday, July 25, 2017 2:31 AM
> To: users@cloudstack.apache.org
> Subject: RE: Container Service
>
> Hi Simon,
>
> Thanks a lot, I'll have a look.
> Have you implement CCS on 4.9.2 ?
>
> I'll make a try before we start production on the new zone.
>
> Grégoire
>
> ---
> Grégoire Lamodière
> T/ + 33 6 76 27 03 31
> F/ + 33 1 75 43 89 71
>
>
> -Message d'origine-
> De : Simon Weller [mailto:swel...@ena.com.INVALID]
> Envoyé : lundi 24 juillet 2017 23:10
> À : users@cloudstack.apache.org
> Objet : Re: Container Service
>
> Grégoire,
>
>
> Take a look at the URLs below:
>
>
> Code and Docs: https://github.com/shapeblue/ccs
>
>
> Packages: http://packages.shapeblue.com/ccs/
>
> - Si
>
> 
> From: Grégoire Lamodière 
> Sent: Monday, July 24, 2017 2:36 PM
> To: users@cloudstack.apache.org
> Subject: Container Service
>
> Dear All,
>
> Does anyone know the current status of Container Server ?
> I remember Gilles talking about this in Berlin last year, but all links
> sound down (Except the homepage of the module).
> I cannot find install guide / any technical docs, nor packages.
>
> I would really like making some tries on this since we are now almost
> working on 4.9.2.
>
> Cheers.
>
> ---
> Grégoire Lamodière
> T/ + 33 6 76 27 03 31
> F/ + 33 1 75 43 89 71
>
>


Re: Quick 1 Question Survey

2017-10-10 Thread benoit lair
Hello guys,

CloudStack Management = CentOS release 6/ACS 4.3
KVM/XEN = XenServer 6.2

2017-10-02 12:56 GMT+02:00 Andrija Panic :

> Cloud1/2:
>
> Cloudstack Management = Ubuntu 14.04 (on top of Centos6 KVM :D )
> KVM= Ubuntu 14.04
>
> Best
>
> On 25 September 2017 at 13:52, Dag Sonstebo 
> wrote:
>
> > CloudStack Management = ACS 4.9 on CentOS 7.3
> > HV = VMware vSphere 6.5
> >
> > Regards,
> > Dag Sonstebo
> > Cloud Architect
> > ShapeBlue
> >
> > On 25/09/2017, 12:15, "Makrand"  wrote:
> >
> > 5 Zones
> >
> > ACS:- 4.3 to 4.4
> > XENserver:- 6.2 SP1
> >
> > --
> > Makrand
> >
> >
> >
> > dag.sonst...@shapeblue.com
> > www.shapeblue.com
> > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> > @shapeblue
> >
> >
> >
> > On Thu, Sep 21, 2017 at 6:15 PM, Marty Godsey 
> wrote:
> >
> > > Clarification:
> > >
> > > XenServer 7.0
> > >
> > > Regards,
> > > Marty Godsey
> > > Principal Engineer
> > > nSource Solutions, LLC
> > >
> > > -Original Message-
> > > From: Rene Moser [mailto:m...@renemoser.net]
> > > Sent: Tuesday, September 12, 2017 8:13 AM
> > > To: users@cloudstack.apache.org
> > > Subject: Quick 1 Question Survey
> > >
> > > What Linux OS and release are you running below your:
> > >
> > > * CloudStack/Cloudplatform Management
> > > * KVM/XEN Hypvervisor Host
> > >
> > > Possible answer example
> > >
> > > Cloudstack Management = centos6
> > > KVM/XEN = None, No KVM/XEN
> > >
> > > Thanks in advance
> > >
> > > Regards
> > > René
> > >
> > >
> >
> >
> >
>
>
> --
>
> Andrija Panić
>


Re: Export VMs

2014-12-12 Thread benoit lair
In practice, it works.

I had to do this in order to get back a template that I wanted to use with a
XenCenter-managed XenServer pool.
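
For the XenCenter side, one way to bring a downloaded template VHD back into a pool, as
a rough sketch (uuids and sizes are placeholders, and the exact vdi-import flags vary a
little between XenServer releases):

xe vdi-create sr-uuid=<sr-uuid> name-label=imported-template virtual-size=<size-in-bytes> type=user
xe vdi-import uuid=<new-vdi-uuid> filename=template.vhd format=vhd

From there the VDI can be attached to a new VM or turned into a template from XenCenter.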

2014-12-12 8:26 GMT+01:00 Vadim Kimlaychuk vadim.kimlayc...@elion.ee:

 For XenServer it will be VHD files.  Make Template from VM, download it,
 import it and re-create VM from template. Should work theoretically.

 Vadim.

 -Original Message-
 From: Billy Ramsay [mailto:bram...@dynamicquest.com]
 Sent: Thursday, December 11, 2014 10:35 PM
 To: users@cloudstack.apache.org
 Subject: Export VMs

 Greetings all!

 We currently have a XenServer 6.1 pool being managed by a CloudStack 4.1.1
 deployment. We have a client that would like to export a number of VMs so
 they can import them into their own XenServer pool. What would be the best
 way to accomplish this? I know that I can download individual volumes. Can
 CloudStack export VMs as OVFs or similar? The only reference to exporting I
 can find in the documentation is for templates. The section on exporting
 templates does not state what format they can be exported in.

 Thanks in advance,


 Billy Ramsay





Re: [ACS43][MGMT Server][Load Balancing]

2014-11-04 Thread benoit lair
Do you find any wrong configuration in this?

Now it seems the 2 mgmt servers are working fine in the cluster config
(looking at the mgmt server logs), but the problem I have is with the CPVM.

It seems it keeps a hard link to node02 when I try to access it through
the web UI (at this moment using node01).

There is no problem with the CPVM when trying to access it through the web
UI of node02 (192.168.0.11:8080/client).
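
One way to confirm the two management servers really joined the same cluster is the
mshost table; a sketch against the cloud database (this is an old 4.3-era schema, so
column names may differ slightly):

mysql -u cloud -p cloud -e "SELECT msid, name, version, service_ip, last_update FROM mshost;"

Both nodes should show up with their own service_ip and a recent last_update.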

Thanks for your responses.

2014-11-03 17:55 GMT+01:00 benoit lair kurushi4...@gmail.com:

 Hi Rohit,

 I have two managements (java) servers :

 First node called node01 : 192.168.0.10
 Second node called node02 : 192.168.0.11

 The database is 192.168.0.200 (not vip of a netscaler, but a fully
 active/active cluster with a ha-ip), the mysql server is well joining from
 the 192.168.0.10 or from the 192.168.0.11, all of two have the rights
 privileges onto the mysql-server.
 For information, my mysql server cluster 192.168.0.200 is replicated on
 two others slaves.

 I want to contact my mgmt server with 192.168.0.100 (vip on netscaler vpx)
 I have this config :

 
 192.168.0.100 port type HTTP 8080 : with LB Methode Least connection and
 persistence with sourceip, 2 mins of timeout, netmask of 255.255.255.255

 
 192.168.0.100 port type TCP 8250 : with LB Methode Roun-robin and
 persistence with sourceip, 5 mins of timeout, netmask of 255.255.255.255

 
 192.168.0.100 port type TCP 8096 : with LB Methode Least connections and
 persistence with sourceip, 5 mins of timeout, netmask of 255.255.255.255

 
 The rule with the port 8096 does not serve for the moment.

 I have deployed the node01 with :
 cloudstack-setup-databases user:password@192.168.0.200
 --deploy-as=root:password -e file -m mypassphrase -k mypassphrase -i
 192.168.0.10

 I have deployed the node02 with :
 cloudstack-setup-databases user:password@192.168.0.200 -e file -m
 mypassphrase -k mypassphrase -i 192.168.0.11

 I executed on node01 the command :
 /usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt
 -m /mnt/secondary -u
 http://cloudstack.apt-get.eu/systemvm/systemvm64template-2014-01-14-master-xen.vhd.bz2
 -h xenserver -s mypassphase -F

 I changed the global host parameter accessing with
 http://192.168.0.100:8080/client with the value : 192.168.0.100
 I changed the global parameter agent.lb to true with the same way.

 I restarted my two management servers, using
 /etc/init.d/cloudstack-management restart, so all of twos seems to well
 communicate mutually (grepping the logs entries with HeartBeat tag says me
 the two servers are seing them)

 Also i created a zone with local storage enabled, a pod, a cluster, a host
 and some public networks, without activating the zone (for local storage
 trick). Terminated the wizard of creation of the zone.
 I went to primary storage of my zone, adding a nfs primary storage server,
 activating the zone.

 I see my 2 system VMs spawning:
  the secondary storage VM is OK, state running and agent running
  the console proxy VM is not fully OK, state running but the agent has a -
 status.

 If i try to reboot the ssvm, it is ok, but if i try to reboot the cpvm,
 the webui says me it is ok, but if i look at xencenter i do not see my cpvm
 rebooting.
 Here are my mgmt logs entries about this reboot order :

 2014-11-03 17:49:54,187 DEBUG [c.c.a.m.ClusteredAgentAttache]
 (StatsCollector-2:ctx-9c4fb510) Seq 1-1490813094: Unable to forward null
 2014-11-03 17:49:54,187 DEBUG [c.c.s.StorageManagerImpl]
 (StatsCollector-2:ctx-9c4fb510) Unable to send storage pool command to
 Pool[2|NetworkFilesystem] via 1
 com.cloud.exception.AgentUnavailableException: Resource [Host:1] is
 unreachable: Host 1: Unable to reach the peer that the agent is connected
 at
 com.cloud.agent.manager.ClusteredAgentAttache.send(ClusteredAgentAttache.java:220)
 at com.cloud.agent.manager.AgentAttache.send(AgentAttache.java:398)
 at com.cloud.agent.manager.AgentManagerImpl.send(AgentManagerImpl.java:394)
 at com.cloud.agent.manager.AgentManagerImpl.send(AgentManagerImpl.java:347)
 at
 com.cloud.storage.StorageManagerImpl.sendToPool(StorageManagerImpl.java:964)

 If I connect to http://192.168.0.11:8080/client and go to
 Infrastructure > System VMs, I see the 2 system VMs with the same states, but
 here if I try to reboot the CPVM it is working.

 I think there is a problem with my 2 management nodes installation (maybe
 the agent does not communicate well between the two mgmt servers).

 How can i troubleshoot this ?

 Thanks for your

Re: [ACS43][MGMT Server][Load Balancing]

2014-11-03 Thread benoit lair
Any advice ?

192.168.0.100 is the VIP of the netscaler (so it is not configured on any
of the 2 mgmt servers).

How should I set up and deploy the database on the first node, 192.168.0.10 ?
And on the second, 192.168.0.11 ?

Thanks for your help and advice.

Regards, Benoit.

2014-10-31 16:27 GMT+01:00 benoit lair kurushi4...@gmail.com:

 Hi,

 i retried an install of LB Mgmt servers :

 On the first mgmt server 192.168.0.10 i tried a :
 cloudstack-setup-databases cloud:password@192.168.0.200
 cloud%3Apassword@192.168.0.10 --deploy-as=root:mypassroot -e file -m
 mypassphrase -k mypassphrase -i 192.168.0.100

 192.168.0.100 is the VIP of the netscaler
 192.168.0.10 is the first mgmt server node (192.168.0.11 is the second)

 When I try to start the mgmt server I have this error :

 2014-10-31 16:05:14,246 DEBUG [c.c.s.ConfigurationServerImpl] (main:null)
 Execution is successful.
 2014-10-31 16:05:14,285 INFO  [c.c.c.ClusterManagerImpl] (main:null) Start
 configuring cluster manager : ClusterManagerImpl
 2014-10-31 16:05:14,285 INFO  [c.c.c.ClusterManagerImpl] (main:null)
 Cluster node IP : 192.168.0.100
 2014-10-31 16:05:14,292 ERROR [o.a.c.s.l.CloudStackExtendedLifeCycle]
 (main:null) Failed to configure ClusterManagerImpl
 javax.naming.ConfigurationException: cluster node IP should be valid local
 address where the server is running, please check your configuration

 And the management server does not start.

 What did i miss ?

 Thanks for your responses.

 Regards, Benoit.


 2014-10-30 10:36 GMT+01:00 Rohit Yadav rohit.ya...@shapeblue.com:

 Hi,

  On 30-Oct-2014, at 2:52 pm, benoit lair kurushi4...@gmail.com wrote:
 
  So when doing a cloudstack-setup-databases with -i 192.168.0.100 i
 have
  to specify the vip of the netscaler ? As well i am on the first node or
 on
  the second node or N node ?

 The -i ip is used as the host IP by CloudStack management server, among
 other things this is especially used by systemvms to connect to mgmt
 server. So, if you’re load balancing using netscaler, use the netscaler IP.
 Make sure to configure ports 8080, 8250 appropriately. Make sure all the
 ACS mgmt servers (primary/first one and others) can connect to mysql server
 from their respective IPs.
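
 For example, a quick check (a sketch, not an official procedure; it assumes
 the cloud user and the IPs used in this thread) that can be run from each
 mgmt node:

 # from node01 (192.168.0.10), then again from node02 (192.168.0.11)
 mysql -h 192.168.0.200 -u cloud -p -e "SELECT CURRENT_USER();"
 # only meaningful if you also load balance 3306 on the NetScaler VIP
 mysql -h 192.168.0.100 -u cloud -p -e "SELECT 1;"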

  Yes the management server was crashed, for the moment i don't have the
  access to this management server pool.
  As soon i'm getting back to this mgmt server pool, i give you more
 infos.
 
  I'm going to reinstall some fresh mgmt servers and give you more
 feedback.

 Sure.

  Concerning the mgmt server log entries Resp: Routing to peer, is it
  normal that the mgmt server is producing so much log entries with this
  message (seeing my mgmt log file growing very fast).

 Since, both the management servers are loadbalancing internal calls,
 you’ll see these a lot. You may plan your log storage appropriately or
 configure log4j xml to not log INFO/DEBUG etc.

 Regards,
 Rohit Yadav
 Software Architect, ShapeBlue
 M. +91 88 262 30892 | rohit.ya...@shapeblue.com
 Blog: bhaisaab.org | Twitter: @_bhaisaab






Re: [ACS43][MGMT Server][Load Balancing]

2014-11-03 Thread benoit lair
Hi Rohit,

I have two managements (java) servers :

First node called node01 : 192.168.0.10
Second node called node02 : 192.168.0.11

The database is 192.168.0.200 (not a VIP on a netscaler, but a fully
active/active cluster with an HA IP); the mysql server can be reached from
192.168.0.10 as well as from 192.168.0.11, and both nodes have the right
privileges on the mysql server.
For information, my mysql cluster 192.168.0.200 is replicated to two
other slaves.

I want to reach my mgmt servers through 192.168.0.100 (VIP on a NetScaler VPX).
I have this config :

192.168.0.100 port type HTTP 8080 : LB method Least Connection,
persistence on source IP, 2 min timeout, netmask 255.255.255.255

192.168.0.100 port type TCP 8250 : LB method Round-robin,
persistence on source IP, 5 min timeout, netmask 255.255.255.255

192.168.0.100 port type TCP 8096 : LB method Least Connection,
persistence on source IP, 5 min timeout, netmask 255.255.255.255

The rule for port 8096 is not used for the moment.

I have deployed the node01 with :
cloudstack-setup-databases user:password@192.168.0.200
--deploy-as=root:password -e file -m mypassphrase -k mypassphrase -i
192.168.0.10

I have deployed the node02 with :
cloudstack-setup-databases user:password@192.168.0.200 -e file -m
mypassphrase -k mypassphrase -i 192.168.0.11

I executed on node01 the command :
/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt
-m /mnt/secondary -u
http://cloudstack.apt-get.eu/systemvm/systemvm64template-2014-01-14-master-xen.vhd.bz2
-h xenserver -s mypassphase -F
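
To confirm the seeding worked, a quick check (a sketch; it assumes the default
secondary storage layout, where the built-in system VM template lands under
template/tmpl/1/1/):

ls -lh /mnt/secondary/template/tmpl/1/1/
# expect the extracted .vhd plus a template.properties describing it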

I changed the global 'host' parameter to 192.168.0.100, via
http://192.168.0.100:8080/client.
I changed the global parameter agent.lb to true in the same way.
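
To double-check the values actually stored (a sketch against the stock cloud
database; depending on the version the setting may be named agent.lb.enabled,
hence the LIKE):

mysql -h 192.168.0.200 -u cloud -p -e \
  "SELECT name, value FROM cloud.configuration WHERE name = 'host' OR name LIKE 'agent.lb%';"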

I restarted my two management servers with
/etc/init.d/cloudstack-management restart, and the two seem to communicate
with each other (grepping the log entries tagged HeartBeat shows that the
two servers are seeing each other).
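
This is roughly what I grep for (a sketch, assuming the default log location
of the 4.3 packages and the stock mshost table):

grep -i heartbeat /var/log/cloudstack/management/management-server.log | tail -n 20
# each node should also show up in the management server table
mysql -h 192.168.0.200 -u cloud -p -e "SELECT msid, service_ip, last_update FROM cloud.mshost;"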

I also created a zone with local storage enabled, a pod, a cluster, a host
and some public networks, without activating the zone (for the local storage
trick), and finished the zone creation wizard.
I then went to the primary storage of my zone, added an NFS primary storage
server, and activated the zone.

I see my 2 system VMs spawning :
 the secondary storage VM is OK, state Running and agent running
 the console proxy VM is not fully OK: state Running, but the agent shows a
'-' status.

If I try to reboot the SSVM it is OK, but if I try to reboot the CPVM, the
web UI tells me it is OK, yet in XenCenter I do not see my CPVM rebooting.
Here are my mgmt logs entries about this reboot order :

2014-11-03 17:49:54,187 DEBUG [c.c.a.m.ClusteredAgentAttache]
(StatsCollector-2:ctx-9c4fb510) Seq 1-1490813094: Unable to forward null
2014-11-03 17:49:54,187 DEBUG [c.c.s.StorageManagerImpl]
(StatsCollector-2:ctx-9c4fb510) Unable to send storage pool command to
Pool[2|NetworkFilesystem] via 1
com.cloud.exception.AgentUnavailableException: Resource [Host:1] is
unreachable: Host 1: Unable to reach the peer that the agent is connected
at
com.cloud.agent.manager.ClusteredAgentAttache.send(ClusteredAgentAttache.java:220)
at com.cloud.agent.manager.AgentAttache.send(AgentAttache.java:398)
at com.cloud.agent.manager.AgentManagerImpl.send(AgentManagerImpl.java:394)
at com.cloud.agent.manager.AgentManagerImpl.send(AgentManagerImpl.java:347)
at
com.cloud.storage.StorageManagerImpl.sendToPool(StorageManagerImpl.java:964)

If I connect to http://192.168.0.11:8080/client and go to
Infrastructure > System VMs, I see the 2 system VMs with the same states,
but from there, if I try to reboot the CPVM, it works.

I think there is a problem with my 2 management node installation (maybe
the agents do not communicate well between the two mgmt servers).

How can I troubleshoot this ?

Thanks for your insights, Rohit.
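
PS: one thing I still plan to try on the CPVM itself (a sketch; it assumes the
stock XenServer setup where the system VM key sits at /root/.ssh/id_rsa.cloud
on the host and system VMs listen for SSH on port 3922 on their link-local
address):

# from the XenServer host running the CPVM, using the link-local IP shown in the UI
ssh -i /root/.ssh/id_rsa.cloud -p 3922 root@169.254.x.x
# then, inside the CPVM, watch the agent log
tail -f /var/log/cloud.log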







2014-11-03 14:58 GMT+01:00 Rohit Yadav rohit.ya...@shapeblue.com:

 Hi Benoit,

 You got the db deployment right, not sure why it’s failing for you. It
 would require some debugging of your deployment environment.

 The only advice I have for you is that - you make sure the cloud user is
 able to log in to the mysql server from all the mgmt server IPs and
 netscaler VIPs (not sure if you’re load-balancing for port 3306/mysql).

  On 03-Nov-2014, at 6:26 pm, benoit lair kurushi4...@gmail.com wrote:
 
  Any advice ?
 
  the 192.168.0.100 is the VIP

Re: [ACS43][MGMT Server][Load Balancing]

2014-10-31 Thread benoit lair
Hi,

I retried an install of the LB mgmt servers :

On the first mgmt server 192.168.0.10 i tried a :
cloudstack-setup-databases cloud:password@192.168.0.200
cloud%3Apassword@192.168.0.10 --deploy-as=root:mypassroot -e file -m
mypassphrase -k mypassphrase -i 192.168.0.100

192.168.0.100 is the VIP of the netscaler
192.168.0.10 is the first mgmt server node (192.168.0.11 is the second)

When I try to start the mgmt server I have this error :

2014-10-31 16:05:14,246 DEBUG [c.c.s.ConfigurationServerImpl] (main:null)
Execution is successful.
2014-10-31 16:05:14,285 INFO  [c.c.c.ClusterManagerImpl] (main:null) Start
configuring cluster manager : ClusterManagerImpl
2014-10-31 16:05:14,285 INFO  [c.c.c.ClusterManagerImpl] (main:null)
Cluster node IP : 192.168.0.100
2014-10-31 16:05:14,292 ERROR [o.a.c.s.l.CloudStackExtendedLifeCycle]
(main:null) Failed to configure ClusterManagerImpl
javax.naming.ConfigurationException: cluster node IP should be valid local
address where the server is running, please check your configuration

And the management server does not start.

What did I miss ?

Thanks for your responses.

Regards, Benoit.


2014-10-30 10:36 GMT+01:00 Rohit Yadav rohit.ya...@shapeblue.com:

 Hi,

  On 30-Oct-2014, at 2:52 pm, benoit lair kurushi4...@gmail.com wrote:
 
  So when doing a cloudstack-setup-databases with -i 192.168.0.100 i
 have
  to specify the vip of the netscaler ? As well i am on the first node or
 on
  the second node or N node ?

 The -i ip is used as the host IP by CloudStack management server, among
 other things this is especially used by systemvms to connect to mgmt
 server. So, if you’re load balancing using netscaler, use the netscaler IP.
 Make sure to configure ports 8080, 8250 appropriately. Make sure all the
 ACS mgmt servers (primary/first one and others) can connect to mysql server
 from their respective IPs.

  Yes the management server was crashed, for the moment i don't have the
  access to this management server pool.
  As soon i'm getting back to this mgmt server pool, i give you more infos.
 
  I'm going to reinstall some fresh mgmt servers and give you more
 feedback.

 Sure.

  Concerning the mgmt server log entries Resp: Routing to peer, is it
  normal that the mgmt server is producing so much log entries with this
  message (seeing my mgmt log file growing very fast).

 Since, both the management servers are loadbalancing internal calls,
 you’ll see these a lot. You may plan your log storage appropriately or
 configure log4j xml to not log INFO/DEBUG etc.

 Regards,
 Rohit Yadav
 Software Architect, ShapeBlue
 M. +91 88 262 30892 | rohit.ya...@shapeblue.com
 Blog: bhaisaab.org | Twitter: @_bhaisaab




Re: [ACS43][MGMT Server][Load Balancing]

2014-10-30 Thread benoit lair
Hello Rohit,

So when doing a cloudstack-setup-databases with -i 192.168.0.100, I have
to specify the VIP of the netscaler ? Regardless of whether I am on the
first node, the second node or the Nth node ?

Yes, the management server crashed; for the moment I don't have access
to this management server pool.
As soon as I get back to this mgmt server pool, I will give you more info.

I'm going to reinstall some fresh mgmt servers and give you more feedback.

Concerning the mgmt server log entries Resp: Routing to peer, is it
normal that the mgmt server produces so many log entries with this
message (I am seeing my mgmt log file grow very fast) ?

Thanks for your response.

Regards, Benoit.

2014-10-29 16:04 GMT+01:00 Rohit Yadav rohit.ya...@shapeblue.com:

 Hi Benoit,

 Please see my reply in-line;

  On 29-Oct-2014, at 7:56 pm, benoit lair kurushi4...@gmail.com wrote:
 
  I'm going to redeploy my management nodes and retest my load balacning
  confgiuration.
 
  Also concerning the commands i used in order to deploy the management
  servers :
 
  
  For the first node (192.168.0.10) i've done :
 
  cloudstack-setup-databases cloud:password@192.168.0.200
  cloud%3Apassword@192.168.0.10 --deploy-as=root:mypassroot -e file -m
  mypassphrase -k mypassphrase -i 192.168.0.100
 
  The second node (192.168.0.11) was deployed with
  cloudstack-setup-databases cloud:password@192.168.0.200
  cloud%3Apassword@192.168.0.10 -e file -m mypassphrase -k mypassphrase
 -i
  192.168.0.100
 
  Is it alright, or is this what did a split brain scenario ?
  
 
  Was i in the right way ?

 This is correct. The first management server is deployed with --deploy-as,
 which sets up your database. Additional management server nodes need to use
 the same script without the --deploy-as, which only configures the db options
 (host, port, ip, username and password) but won't redeploy your database.
 Just make sure to use the same password/ip and db as before, and make sure
 that the cloud user (or other username) can connect to the MySQL server
 from both IPs (do something like grant to user@'%' or user@'192.168.%'
 etc.).
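
 As a sketch of those grants (placeholders only; adjust the password and the
 subnet to your own, and run it on the MySQL server, 192.168.0.200 here):

 mysql -u root -p -e "GRANT ALL ON cloud.* TO 'cloud'@'192.168.0.%' IDENTIFIED BY 'password';
                      GRANT ALL ON cloud_usage.* TO 'cloud'@'192.168.0.%' IDENTIFIED BY 'password';
                      FLUSH PRIVILEGES;"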

  Concerning my split brain scenario, i had a management server whom
 detected
  (after being up for few minutes, deployed a zone, cluster, host and first
  virtual machine) there was split brain and so it killed itself, if i
 tried

 When you say it killed itself, did the management server crash? What did
 the logs say?

  to reup this management server, i got full entries in the logs like  :
 
  DEBUG [c.c.a.m.
  ClusteredAgentManagerImpl] (AgentManager-Handler-8:null) Seq 1-289341441:
  MgmtId 104425904713066: Resp: Routing to peer

 This is fine, this simply means it is load balancing requests (both
 api-mgmt server and mgmt server-systemvms).

 Regards,
 Rohit Yadav
 Software Architect, ShapeBlue
 M. +91 88 262 30892 | rohit.ya...@shapeblue.com
 Blog: bhaisaab.org | Twitter: @_bhaisaab




Re: [ACS43][MGMT Server][Load Balancing]

2014-10-29 Thread benoit lair
Hello Rohit,

I'm going to redeploy my management nodes and retest my load balancing
configuration.

Also, concerning the commands I used in order to deploy the management
servers :


For the first node (192.168.0.10) I've done :

cloudstack-setup-databases cloud:password@192.168.0.200
cloud%3Apassword@192.168.0.10 --deploy-as=root:mypassroot -e file -m
mypassphrase -k mypassphrase -i 192.168.0.100

The second node (192.168.0.11) was deployed with
cloudstack-setup-databases cloud:password@192.168.0.200
cloud%3Apassword@192.168.0.10 -e file -m mypassphrase -k mypassphrase -i
192.168.0.100

Is this alright, or is this what caused the split brain scenario ?


Was I on the right track ?

Concerning my split brain scenario, I had a management server which detected
(after being up for a few minutes, having deployed a zone, cluster, host and a
first virtual machine) that there was a split brain, and so it killed itself.
If I tried to bring this management server back up, the logs filled with
entries like :

DEBUG [c.c.a.m.
ClusteredAgentManagerImpl] (AgentManager-Handler-8:null) Seq 1-289341441:
MgmtId 104425904713066: Resp: Routing to peer

filling the log files all the time.


Thanks for your responses

2014-10-27 10:15 GMT+01:00 Rohit Yadav rohit.ya...@shapeblue.com:

 Hi Benoit,

 Can you describe the split brain issues you're seeing?

 By setting agent.lb.enable to true, all requests to the management servers
 will be internally balanced by the clustered management servers. In case of
 split-brain, you would get different API results, for example listings of
 resources such as users, hosts etc.

 Just make sure that your MySQL configuration works well with both
 management servers individually. A possible case could be that one of the
 servers can reach the mysql server but the other one fails.

  On 27-Oct-2014, at 2:30 pm, benoit lair kurushi4...@gmail.com wrote:
 
  Hi Geoff,
 
 
  Thanks a lot for your explanations.
 
  So when testing with a 2 nodes management servers, i had split brain
  scenario :/
 
  When installing the management server, did i do the rights things ?
 
  In my case 192.168.0.100 (VIP in netscaler,  load balancing the ports you
  listed above) with a service group including the 2 management server
 nodes
  (192.168.0.10 for the first and 192.168.0.11 for the secondary) and a
 mysql
  server with 192.168.0.200
 
  For the first node (192.168.0.10) i've done :
 
  cloudstack-setup-databases cloud:password@192.168.0.200
  cloud%3Apassword@192.168.0.10 --deploy-as=root:mypassroot -e file -m
  mypassphrase -k mypassphrase -i 192.168.0.100
 
  The second node (192.168.0.11) was deployed with
  cloudstack-setup-databases cloud:password@192.168.0.200
  cloud%3Apassword@192.168.0.10 -e file -m mypassphrase -k mypassphrase
 -i
  192.168.0.100
 
  Is it alright, or is this what did a split brain scenario ?
 
 
  Thanks a lot four your responses.
 
  Regards, Benoit.
 
  2014-10-23 10:43 GMT+02:00 Geoff Higginbottom 
  geoff.higginbot...@shapeblue.com:
 
  Hi Benoit
 
  When running more than one CloudStack Management Server you need to put
 a
  load balancer in front of them.  This load balancer (or ideally pair of
 HA
  load balancers) need to manage the following ports
 
  8250 - This is the port the System VMs use when communicating with the
  management servers, and they use the address specified in the 'host'
 global
  settings.
 
  8080 - Access to the Web UI
 
  8096 (or a port of your choosing) - Optional - only required if you are
  using the unauthenticated API port
 
  You can look in the 'cloud.hosts' table in the DB and check out the
  'status' and 'mgmt_server_id' columns, the latter is the ID of the
  management server which is responsible for managing the System VM or
  Hypervisor.
 
  One last point, you need to ensure you have set global settings
  agent.lb.enable to true to enable the load balancing of System VMs and
  Hypervisors across multiple Management Servers.
 
  Regards
 
  Geoff Higginbottom
 
  D: +44 20 3603 0542 | S: +44 20 3603 0540 | M: +447968161581
 
  geoff.higginbot...@shapeblue.com
 
  -Original Message-
  From: benoit lair [mailto:kurushi4...@gmail.com]
  Sent: 23 October 2014 08:49
  To: users@cloudstack.apache.org
  Subject: Re: [ACS43][MGMT Server][Load Balancing]
 
  Any advice ? Any feedback ? Does somebody already did that ?
 
  Thanks for your responses.
 
  2014-10-13 10:51 GMT+02:00 benoit lair kurushi4...@gmail.com:
 
  Hello,
 
 
  Is there somebody already accomplished some load balancing on the
  management app server ?
 
 
 
  Thanks for your responses.
 
  Regards.
 
  2014-10-09 16:24 GMT+02:00 benoit lair kurushi4...@gmail.com:
 
  Here are the logs entries i have in the other mgmt node :
 
  DEBUG [c.c.a.m.ClusteredAgentManagerImpl]
  (AgentManager-Handler-8:null) Seq 1-289341441: MgmtId
  104425904713066: Resp: Routing to peer
 
  2014-10-09 16:21 GMT+02:00 benoit lair kurushi4...@gmail.com:
 
  What i was thinking of..
 
  Here are the logs entries in one

Re: [ACS43][MGMT Server][Load Balancing]

2014-10-27 Thread benoit lair
Hi Geoff,


Thanks a lot for your explanations.

So when testing with 2 management server nodes, I had a split brain
scenario :/

When installing the management servers, did I do the right things ?

In my case: 192.168.0.100 (VIP on the netscaler, load balancing the ports you
listed above) with a service group including the 2 management server nodes
(192.168.0.10 for the first and 192.168.0.11 for the second) and a mysql
server at 192.168.0.200.

For the first node (192.168.0.10) I've done :

cloudstack-setup-databases cloud:password@192.168.0.200
cloud%3Apassword@192.168.0.10 --deploy-as=root:mypassroot -e file -m
mypassphrase -k mypassphrase -i 192.168.0.100

The second node (192.168.0.11) was deployed with
cloudstack-setup-databases cloud:password@192.168.0.200
cloud%3Apassword@192.168.0.10 -e file -m mypassphrase -k mypassphrase -i
192.168.0.100

Is this alright, or is this what caused the split brain scenario ?


Thanks a lot for your responses.

Regards, Benoit.

2014-10-23 10:43 GMT+02:00 Geoff Higginbottom 
geoff.higginbot...@shapeblue.com:

 Hi Benoit

 When running more than one CloudStack Management Server you need to put a
 load balancer in front of them.  This load balancer (or ideally pair of HA
 load balancers) need to manage the following ports

 8250 - This is the port the System VMs use when communicating with the
 management servers, and they use the address specified in the 'host' global
 settings.

 8080 - Access to the Web UI

 8096 (or a port of your choosing) - Optional - only required if you are
 using the unauthenticated API port

 You can look in the 'cloud.hosts' table in the DB and check out the
 'status' and 'mgmt_server_id' columns, the latter is the ID of the
 management server which is responsible for managing the System VM or
 Hypervisor.
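
 For example (a sketch against the stock schema, where that table is
 cloud.host):

 mysql -u cloud -p -e "SELECT id, name, type, status, mgmt_server_id FROM cloud.host;"
 # 'Up' hosts spread across both management server ids means the agent LB is doing its job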

 One last point, you need to ensure you have set global settings
 agent.lb.enable to true to enable the load balancing of System VMs and
 Hypervisors across multiple Management Servers.

 Regards

 Geoff Higginbottom

 D: +44 20 3603 0542 | S: +44 20 3603 0540 | M: +447968161581

 geoff.higginbot...@shapeblue.com

 -Original Message-
 From: benoit lair [mailto:kurushi4...@gmail.com]
 Sent: 23 October 2014 08:49
 To: users@cloudstack.apache.org
 Subject: Re: [ACS43][MGMT Server][Load Balancing]

 Any advice ? Any feedback ? Does somebody already did that ?

 Thanks for your responses.

 2014-10-13 10:51 GMT+02:00 benoit lair kurushi4...@gmail.com:

  Hello,
 
 
  Is there somebody already accomplished some load balancing on the
  management app server ?
 
 
 
  Thanks for your responses.
 
  Regards.
 
  2014-10-09 16:24 GMT+02:00 benoit lair kurushi4...@gmail.com:
 
  Here are the logs entries i have in the other mgmt node :
 
  DEBUG [c.c.a.m.ClusteredAgentManagerImpl]
  (AgentManager-Handler-8:null) Seq 1-289341441: MgmtId
  104425904713066: Resp: Routing to peer
 
  2014-10-09 16:21 GMT+02:00 benoit lair kurushi4...@gmail.com:
 
  What i was thinking of..
 
  Here are the logs entries in one of my mgmt server :
 
  2014-10-09 15:47:27,451 ERROR [c.c.c.ClusterManagerImpl]
  (Cluster-Heartbeat-1:cx-71073990) We have detected that at least one
  management server peer reports tat this management server is down,
  perform active fencing to avoid split-brain ituation
 
  So this one has been shutdown. How to recover my 2 mgmt nodes in
  normal state ?
 
  Thanks.
  Regards, Benoit
 
  2014-10-09 16:20 GMT+02:00 benoit lair kurushi4...@gmail.com:
 
  Hello,
 
 
  I would like to do some load balancing on the mgmt server, i would
  like to know if i am on the good way :
 
  I installed Two management servers.
 
  The first with the deploy-as in the command
  cloudstack-setup-databases with the netscalerlbip as the management
  server
  :
 
  cloudstack-setup-databases cloud:password@192.168.0.10
  --deploy-as=root:mypassroot -e file -m mypassphrase -k mypassphrase
  -i netscalerlbip
 
  The second node was deployed with
  cloudstack-setup-databases cloud:password@192.168.0.10 -e file -m
  mypassphrase -k mypassphrase -i netscalerlbip
 
  So my netscaler is doing load balancing on the 2 nodes availables
  accross netscalerlbip.
 
  I have confgured the host global parameter with the netscalerlbip
  as ip and restarted the 2 mgmt servers.
 
  How to be sure that my mgmts servers are working correctly ? Am i
  sure not to have split brain ?
 
  Thanks for your responses.
 
  Regards,
  Benoit
 
 
 
 
 

Re: [ACS43][MGMT Server][Load Balancing]

2014-10-27 Thread benoit lair
So is the agent.lb.enable parameter the element that avoids split brain ?

Benoit.


2014-10-27 10:00 GMT+01:00 benoit lair kurushi4...@gmail.com:

 Hi Geoff,


 Thanks a lot for your explanations.

 So when testing with a 2 nodes management servers, i had split brain
 scenario :/

 When installing the management server, did i do the rights things ?

 In my case 192.168.0.100 (VIP in netscaler,  load balancing the ports you
 listed above) with a service group including the 2 management server nodes
 (192.168.0.10 for the first and 192.168.0.11 for the secondary) and a mysql
 server with 192.168.0.200

 For the first node (192.168.0.10) i've done :

 cloudstack-setup-databases cloud:password@192.168.0.200
 cloud%3Apassword@192.168.0.10 --deploy-as=root:mypassroot -e file -m
 mypassphrase -k mypassphrase -i 192.168.0.100

 The second node (192.168.0.11) was deployed with
 cloudstack-setup-databases cloud:password@192.168.0.200
 cloud%3Apassword@192.168.0.10 -e file -m mypassphrase -k mypassphrase
 -i 192.168.0.100

 Is it alright, or is this what did a split brain scenario ?


 Thanks a lot four your responses.

 Regards, Benoit.

 2014-10-23 10:43 GMT+02:00 Geoff Higginbottom 
 geoff.higginbot...@shapeblue.com:

 Hi Benoit

 When running more than one CloudStack Management Server you need to put a
 load balancer in front of them.  This load balancer (or ideally pair of HA
 load balancers) need to manage the following ports

 8250 - This is the port the System VMs use when communicating with the
 management servers, and they use the address specified in the 'host' global
 settings.

 8080 - Access to the Web UI

 8096 (or a port of your choosing) - Optional - only required if you are
 using the unauthenticated API port

 You can look in the 'cloud.hosts' table in the DB and check out the
 'status' and 'mgmt_server_id' columns, the latter is the ID of the
 management server which is responsible for managing the System VM or
 Hypervisor.

 One last point, you need to ensure you have set global settings
 agent.lb.enable to true to enable the load balancing of System VMs and
 Hypervisors across multiple Management Servers.

 Regards

 Geoff Higginbottom

 D: +44 20 3603 0542 | S: +44 20 3603 0540 | M: +447968161581

 geoff.higginbot...@shapeblue.com

 -Original Message-
 From: benoit lair [mailto:kurushi4...@gmail.com]
 Sent: 23 October 2014 08:49
 To: users@cloudstack.apache.org
 Subject: Re: [ACS43][MGMT Server][Load Balancing]

 Any advice ? Any feedback ? Does somebody already did that ?

 Thanks for your responses.

 2014-10-13 10:51 GMT+02:00 benoit lair kurushi4...@gmail.com:

  Hello,
 
 
  Is there somebody already accomplished some load balancing on the
  management app server ?
 
 
 
  Thanks for your responses.
 
  Regards.
 
  2014-10-09 16:24 GMT+02:00 benoit lair kurushi4...@gmail.com:
 
  Here are the logs entries i have in the other mgmt node :
 
  DEBUG [c.c.a.m.ClusteredAgentManagerImpl]
  (AgentManager-Handler-8:null) Seq 1-289341441: MgmtId
  104425904713066: Resp: Routing to peer
 
  2014-10-09 16:21 GMT+02:00 benoit lair kurushi4...@gmail.com:
 
  What i was thinking of..
 
  Here are the logs entries in one of my mgmt server :
 
  2014-10-09 15:47:27,451 ERROR [c.c.c.ClusterManagerImpl]
  (Cluster-Heartbeat-1:cx-71073990) We have detected that at least one
  management server peer reports tat this management server is down,
  perform active fencing to avoid split-brain ituation
 
  So this one has been shutdown. How to recover my 2 mgmt nodes in
  normal state ?
 
  Thanks.
  Regards, Benoit
 
  2014-10-09 16:20 GMT+02:00 benoit lair kurushi4...@gmail.com:
 
  Hello,
 
 
  I would like to do some load balancing on the mgmt server, i would
  like to know if i am on the good way :
 
  I installed Two management servers.
 
  The first with the deploy-as in the command
  cloudstack-setup-databases with the netscalerlbip as the management
  server
  :
 
  cloudstack-setup-databases cloud:password@192.168.0.10
  --deploy-as=root:mypassroot -e file -m mypassphrase -k mypassphrase
  -i netscalerlbip
 
  The second node was deployed with
  cloudstack-setup-databases cloud:password@192.168.0.10 -e file -m
  mypassphrase -k mypassphrase -i netscalerlbip
 
  So my netscaler is doing load balancing on the 2 nodes availables
  accross netscalerlbip.
 
  I have confgured the host global parameter with the netscalerlbip
  as ip and restarted the 2 mgmt servers.
 
  How to be sure that my mgmts servers are working correctly ? Am i
  sure not to have split brain ?
 
  Thanks for your responses.
 
  Regards,
  Benoit
 
 
 
 
 

Re: [ACS 43] [MGMT Server][High Availability]

2014-10-13 Thread benoit lair
Ok, you are using the Xen hypervisor, not XenServer.

I would like to achieve my replication with DRBD and XenServer.

But I was not thinking about doing it at the hypervisor level, rather at the
VM level.



2014-10-10 15:40 GMT+02:00 France mailingli...@isg.si:

 For ACS we are using XS 6.0.2+hotfixes.
 For the "old cloud", where management servers reside in the form of Xen PV
 machines, we are using RHEL.
 OCFS2 is not needed when a virtual machine uses the DRBD device directly. We
 have one DRBD device per VM.

 Regards,
 F.

 On 09 Oct 2014, at 16:13, benoit lair kurushi4...@gmail.com wrote:

  Hello,
 
  Ok, bu i would like to have a separated mysql db server and mgmt servers.
  Are you using KVM or xenserver ?
  With your active/active drbd cluster, are you using ocfs2 ?
 
  2014-10-08 12:34 GMT+02:00 France mailingli...@isg.si:
 
  Put the ACS java app in the same server and it will always get working
  mysql server when it is working.
  Also I suggest you start using pacemaker, corosync or cman.
 
  My management server is actually a virtual instance on RHEL
 active/active
  drbd cluster (so live migration works).
 
  If you want to test how it behaves, stop mysql and check for yourself. I
  highly doubt anyone has tested it yet.
 
  Regards,
  F.
 
  On 08 Oct 2014, at 11:37, benoit lair kurushi4...@gmail.com wrote:
 
  Hello Folks,
 
 
  I'm trying new HA implementation of my mgmt server.
 
  I'm looking for HA for the mysql server.
 
  Could it be problematic if i install a drbd active/passive mysql
 cluster
  ?
  (drbd  heartbeat)
 
  The reason is because if i a have a fail of my primary server (and so
 the
  cluster doing a failover and transmiting its VIP (heartbeat) to the
 other
  node), the mysql server doesn't respond during few seconds (due to the
  deadtime parameter).
 
  So is this scenario problematic for the integrity of the management
  server ?
 
 
  Thanks for your responses.
 
 




Re: [ACS 43] [MGMT Server][High Availability]

2014-10-09 Thread benoit lair
Hello,

Ok, but I would like to have a separate mysql db server and mgmt servers.
Are you using KVM or XenServer ?
With your active/active DRBD cluster, are you using OCFS2 ?

2014-10-08 12:34 GMT+02:00 France mailingli...@isg.si:

 Put the ACS java app on the same server and it will always have a working
 mysql server whenever that server itself is up.
 Also I suggest you start using pacemaker, corosync or cman.

 My management server is actually a virtual instance on RHEL active/active
 drbd cluster (so live migration works).

 If you want to test how it behaves, stop mysql and check for yourself. I
 highly doubt anyone has tested it yet.

 Regards,
 F.

 On 08 Oct 2014, at 11:37, benoit lair kurushi4...@gmail.com wrote:

  Hello Folks,
 
 
  I'm trying new HA implementation of my mgmt server.
 
  I'm looking for HA for the mysql server.
 
  Could it be problematic if i install a drbd active/passive mysql cluster
 ?
  (drbd  heartbeat)
 
  The reason is because if i a have a fail of my primary server (and so the
  cluster doing a failover and transmiting its VIP (heartbeat) to the other
  node), the mysql server doesn't respond during few seconds (due to the
  deadtime parameter).
 
  So is this scenario problematic for the integrity of the management
 server ?
 
 
  Thanks for your responses.




[ACS43][MGMT Server][Load Balancing]

2014-10-09 Thread benoit lair
Hello,


I would like to do some load balancing on the mgmt servers; I would like to
know if I am on the right track :

I installed Two management servers.

The first with the --deploy-as option in the cloudstack-setup-databases
command, using the netscalerlbip as the management server IP :

cloudstack-setup-databases cloud:password@192.168.0.10
--deploy-as=root:mypassroot -e file -m mypassphrase -k mypassphrase -i
netscalerlbip

The second node was deployed with
cloudstack-setup-databases cloud:password@192.168.0.10 -e file -m
mypassphrase -k mypassphrase -i netscalerlbip

So my netscaler is load balancing across the 2 available nodes behind
netscalerlbip.

I have configured the host global parameter with the netscalerlbip as the IP
and restarted the 2 mgmt servers.

How can I be sure that my mgmt servers are working correctly ? Am I sure not
to have a split brain ?

Thanks for your responses.

Regards,
Benoit


Re: [ACS43][MGMT Server][Load Balancing]

2014-10-09 Thread benoit lair
What I was thinking of..

Here are the log entries in one of my mgmt servers :

2014-10-09 15:47:27,451 ERROR [c.c.c.ClusterManagerImpl]
(Cluster-Heartbeat-1:cx-71073990) We have detected that at least one
management server peer reports that this management server is down, perform
active fencing to avoid split-brain situation

So this one has shut itself down. How do I recover my 2 mgmt nodes to a
normal state ?
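
Here is what I plan to check before restarting (a sketch only, assuming the
stock cloud database and the init scripts; I have not verified it yet):

# what each peer last recorded about the other
mysql -u cloud -p -e "SELECT msid, state, last_update, removed FROM cloud.mshost;"
# make sure the clocks are in sync on both nodes, then bring the fenced node back
ntpq -p
/etc/init.d/cloudstack-management restart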

Thanks.
Regards, Benoit

2014-10-09 16:20 GMT+02:00 benoit lair kurushi4...@gmail.com:

 Hello,


 I would like to do some load balancing on the mgmt server, i would like to
 know if i am on the good way :

 I installed Two management servers.

 The first with the deploy-as in the command cloudstack-setup-databases
 with the netscalerlbip as the management server :

 cloudstack-setup-databases cloud:password@192.168.0.10
 --deploy-as=root:mypassroot -e file -m mypassphrase -k mypassphrase -i
 netscalerlbip

 The second node was deployed with
 cloudstack-setup-databases cloud:password@192.168.0.10 -e file -m
 mypassphrase -k mypassphrase -i netscalerlbip

 So my netscaler is doing load balancing on the 2 nodes availables accross
 netscalerlbip.

 I have confgured the host global parameter with the netscalerlbip as ip
 and restarted the 2 mgmt servers.

 How to be sure that my mgmts servers are working correctly ? Am i sure not
 to have split brain ?

 Thanks for your responses.

 Regards,
 Benoit




Re: [ACS43][MGMT Server][Load Balancing]

2014-10-09 Thread benoit lair
Here are the log entries I have on the other mgmt node :

DEBUG [c.c.a.m.ClusteredAgentManagerImpl] (AgentManager-Handler-8:null) Seq
1-289341441: MgmtId 104425904713066: Resp: Routing to peer

2014-10-09 16:21 GMT+02:00 benoit lair kurushi4...@gmail.com:

 What i was thinking of..

 Here are the logs entries in one of my mgmt server :

 2014-10-09 15:47:27,451 ERROR [c.c.c.ClusterManagerImpl]
 (Cluster-Heartbeat-1:cx-71073990) We have detected that at least one
 management server peer reports tat this management server is down, perform
 active fencing to avoid split-brain ituation

 So this one has been shutdown. How to recover my 2 mgmt nodes in normal
 state ?

 Thanks.
 Regards, Benoit

 2014-10-09 16:20 GMT+02:00 benoit lair kurushi4...@gmail.com:

 Hello,


 I would like to do some load balancing on the mgmt server, i would like
 to know if i am on the good way :

 I installed Two management servers.

 The first with the deploy-as in the command cloudstack-setup-databases
 with the netscalerlbip as the management server :

 cloudstack-setup-databases cloud:password@192.168.0.10
 --deploy-as=root:mypassroot -e file -m mypassphrase -k mypassphrase -i
 netscalerlbip

 The second node was deployed with
 cloudstack-setup-databases cloud:password@192.168.0.10 -e file -m
 mypassphrase -k mypassphrase -i netscalerlbip

 So my netscaler is doing load balancing on the 2 nodes availables accross
 netscalerlbip.

 I have confgured the host global parameter with the netscalerlbip as ip
 and restarted the 2 mgmt servers.

 How to be sure that my mgmts servers are working correctly ? Am i sure
 not to have split brain ?

 Thanks for your responses.

 Regards,
 Benoit





[ACS 43] [MGMT Server][High Availability]

2014-10-08 Thread benoit lair
Hello Folks,


I'm trying a new HA implementation of my mgmt server.

I'm looking for HA for the mysql server.

Could it be problematic if I install a DRBD active/passive mysql cluster ?
(drbd & heartbeat)

The reason is that if my primary server fails (and so the cluster does a
failover and hands its VIP (heartbeat) over to the other node), the mysql
server does not respond for a few seconds (due to the deadtime parameter).

So is this scenario problematic for the integrity of the management server ?


Thanks for your responses.


Netscaler VPC and multiple inter tiers LB

2014-09-05 Thread benoit lair
Hello Folks,


I'm testing NetScaler VPX with ACS 4.3. I have several VPCs deployed in
my cloud.

I would like to get my netscaler working with my VPCs.

So from what I have tested, it seems that :
- I can't share a VPX with more than one VPC ?
- in order to get my netscaler working with my VPC, I need to declare it
dedicated, so it can't be used both with VPC tiers and with isolated networks ?
- I can use a netscaler with a VPC only on the public tier (meaning the
external tier)

Can you confirm these limitations, or are they due to a misconfiguration of
my own network offerings ?

So another question is :

How can I achieve NetScaler LB with several tiers in a VPC ?

I have a VPC with a web-tier, an app-tier and an sql-tier :

how can I have, at the same time, NS LB between the outside and the web-tier,
NS LB between the web-tier and the app-tier, and NS LB between the app-tier
and the sql-tier, having only one VPX ?

Thanks for your insights.

Regards, Benoit.


Re: Netscaler VPC and multiple inter tiers LB

2014-09-05 Thread benoit lair
Hi Francois,

Thanks for your response. So I do not need to run deeper tests to confirm
what I thought; you confirmed everything I feared. As you said, it is a
very big problem.

I can understand the problem of subnet overlapping across several VPCs
(at least when you have several users and not just a sysops dept wanting to
manage several VPCs). Then again, it is strange, because VPC tiers are
VLAN-isolated, so you can have the same subnet present several times in
two different VPCs with 802.1Q isolation; even the netscaler could manage
this without trouble, couldn't it ?

With the options for my netscaler now reduced, I don't understand why I
can't do NS LB with all my tiers inside a VPC :

I want to host a web application according to the 3-tier model (web reverse
proxy, web app server, sql database server); how can I properly exploit
the TCP multiplexing feature of the NetScaler if, once past the web
reverse proxy tier, I have to pass my request to a simple (too simple ?)
internal LB VM in order to reach my web app server ?

However, I imagined doing some internal LB for the external tier (web
reverse proxy) and passing the http(s) requests to my web app server through
the netscaler. But here again, when using NS with a VPC, it can only be used
for external LB and not internal LB.

Would you have another solution for this ? (a tweak in the mgmt server db ?)
Or is it just impossible to achieve ? Even if I have 2 NetScalers in my VPC ?
(one for the web reverse proxy tier, another for the web app tier)


Thanks for your responses.

Benoit.




2014-09-05 13:35 GMT+02:00 Francois Gaudreault fgaudrea...@cloudops.com:

 Hi Benoit,

 The limitations that you describe are exactly what the implementation is.
 Dedicated VPX per VPC, only public LB for one tier. However, there is a
 reasoning behind this. Since users can control their tier subnets, you may
 have overlapping. That's why you can't have a shared NetScaler for the VPCs.

 You can't do inter-tier load balancing using the NetScaler if you have it
 inside CloudStack. To be honest, we also feel this is a huge problem, and
 we will likely look at our options. You need to use the Internal LB for
 that piece.

 Hope it helps/confirms your thoughts :)

 Francois


 On 2014-09-05, 6:03 AM, benoit lair wrote:

 Hello Folks,


 I'm testing Netscaler VPX with acs 4.3. I have several VPCs deployed into
 my cloud.

 I would like to get my netscaler working with my vpcs.

 So from what i have tested, it seems that :
 - i can't share a VPX with more than one VPC ?
 - in order to get my netscaler working with my vpc, i need to declare it
 dedicated. So it can't be used both with vpc tiers and isolated
 networks ?
 - i can use netscaler with a vpc only with public tier (means external
 tier)

 Can you confirm these limitations, or is it due to a misconfiguration of
 my
 own networks offerings ?

 So another question is :

 How can i achieve ns-lb with several tiers in a vpc ?

 I have a vpc with web-tier, app-tier and sql-tier :

 how can i have in the same time, nslb between outside and web-tier, nslb
 between web-tier and app-tier and nslb between app-tier and sql-tier,
 having only one VPX ?

 Thanks four your lights.

 Regards, Benoit.



 --
 Francois Gaudreault
 Gestionnaire de Produit | Product Manager - Cloud Platform  Services
 t:514-629-6775

 CloudOps Votre partenaire infonuagique | Cloud Solutions Experts
 420 rue Guy | Montreal | Quebec | H3J 1S6
 w: cloudops.com | tw: @CloudOps_




[ACS 4.3][Netscaler] Any experience ? Which version of Netscaler ?

2014-06-16 Thread benoit lair
Hello Folks,


Has anybody already tested NetScaler with ACS 4.3 (or another version) ?

Do we need a special version of the NetScaler in order to integrate it with
our CloudStack management server ? Any version, MPX, SDX, VPX preferred ?
Is it better to have a certain version, or does it not matter ?


Thanks for your feedback and advice.


Regards, Benoit Lair.


Re: [ACS 4.3][Netscaler] Any experience ? Which version of Netscaler ?

2014-06-16 Thread benoit lair
Hello Pierre,

Thanks for your response.

Okay, I've already compiled from source with the noredist modules :), so I
shouldn't have a problem with my ACS 4.3 installation.

I'm looking at the 10.1 version and at the VPX model too (I still hesitate
over the MPX 5650 model).

I've heard there are multiple versions, with different types of firmware
(SDX, MPX, VPX, Enterprise, Platinum, ...), so I was asking whether there are
some prerequisites between the mgmt server and the netscaler.


Regards, Benoit Lair.


2014-06-16 14:38 GMT+02:00 Pierre-Luc Dion pd...@cloudops.com:

 You need Cloudstack build with noredist modules, so you will have to build
 it from source.

 For the firmware version of the NetScaler, you need at least 10.1 if I'm
 correct; the exact required version is in the documentation.

 I'm currently testing Netscaler features on our side. So far I've run with
 latest 10.1 on VPX and got it working quite easily.


 Pierre-Luc Dion
 Architecte de Solution Cloud | Cloud Solutions Architect
 855-OK-CLOUD (855-652-5683) x1101
 - - -

 *CloudOps*420 rue Guy
 Montréal QC  H3J 1S6
 www.cloudops.com
 @CloudOps_


 On Mon, Jun 16, 2014 at 6:07 AM, benoit lair kurushi4...@gmail.com
 wrote:

  Hello Folks,
 
 
  Anybody has already tested Netscaler with ACS 4.3 (or other version) ?
 
  Do we need to have a special version of the netscaler in order to get it
 in
  our cloudstack management server ? Any version, MPX, SDX, VPX preferred ?
  Is it better to have a certain version or it doesn't matter ?
 
 
  Thanks for yours feedback and advices.
 
 
  Regards, Benoit Lair.
 



This is a test email

2014-05-12 Thread benoit lair
Sorry for the inconvenience, but I no longer receive the mails from the
mailing list.


Re: Swift as Secondary Storage

2014-04-18 Thread benoit lair
Hello folks,


It is working for me now! I had a wrong ACL on my WAN public network, so it
couldn't connect to my NFS staging store.

I opened HTTP and NFS access, and now Swift is mounted in my SSVM.

Thanks for your insights.


Regards, Benoit.


2014-04-18 0:09 GMT+02:00 benoit lair kurushi4...@gmail.com:

 @Ilya,

 I don't have access to my PoC (not at the office tonight); I will verify
 this tomorrow, but I remember not having seen any NFS share mounted on the
 SSVM.
 So the SSVM is supposed to mount the swift storage as an abstract NFS
 share; how is that achieved ? Is cloudfuse used for this ?

 @Sanjeev,

 You mean that I need to re-run cloud-install-sys-tmplt from the mgmt server
 against my secondary staging NFS store ?


 Thanks a lot for your responses folks.





 2014-04-17 19:03 GMT+02:00 Sanjeev Neelarapu sanjeev.neelar...@citrix.com
 :

 Benoit,

 Since your system vms are ready, try to register a new template and see
 if it works. If you still face issues in uploading the template to Swift,
 please look at the ssvm logs /var/log/cloud.log

 -Sanjeev


 -Original Message-
 From: ilya musayev [mailto:ilya.mailing.li...@gmail.com]
 Sent: Thursday, April 17, 2014 8:06 AM
 To: users@cloudstack.apache.org
 Subject: Re: Swift as Secondary Storage

 Benoit,

 A few things I would have done:

 1) Take a look in the mysql db; there are some tables with 'template' in
 their name. You can see the state and whether they are referenced.
 2) Perhaps the template is not marked as public and you can't see it?
 3) SSH to the SSVM via its private key, go to /var/log/cloud/ and review the
 log file for any abnormalities. Also run the mount and df commands on the
 SSVM to see if Swift is abstracted and mounted through NFS.
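
 As a concrete sketch of 3) (it assumes the host keeps the system VM key at
 /root/.ssh/id_rsa.cloud, as a stock XenServer or KVM host does, and the
 SSVM's link-local IP shown in the UI):

 ssh -i /root/.ssh/id_rsa.cloud -p 3922 root@169.254.x.x
 mount | grep -i sec        # the secondary storage / staging NFS mounts show up here
 df -h
 tail -n 100 /var/log/cloud.log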

 Regards
 ilya


 On 4/17/14, 11:01 AM, benoit lair wrote:
  The state of my acs 4.3 is the following :
 
  1 zone, 1 pod, 1 cluster, 1 xenserver 6.2, 1 nfs primary storage OK
  created 1 swift secondary storage pointing to my swift proxy node, OK,
  swift cli and cyberduck OK created a secondary staging store with a
  nfs server already containing the system vm template.
 
  The creation of the secondary staging store triggers the swift push
  from the acs mgmt server to the swift proxy node, the template is well
  uploaded, can see it on my object storage swift node.
 
  Now, i got on the web UI  templates, the 2 templates, system vm and
  centos
  5.6 vm template but there are both not available although i have my
  cpvm and ssvm vms created and launched...
 
  How to get my zone operationnal and being able to create a vm with
  centos
  5.6 vm template ?
 
 
  Thanks for your help.
 
  Regards, Benoit.
 
 
  2014-04-17 15:51 GMT+02:00 benoit lair kurushi4...@gmail.com:
 
  Hi Folks,
 
 
  I have already some trouble with getting swift working as secondary
  storage.
 
  It has pushed the system vm template to my swift framework, i can see
  the system vm template on my object storage nodes, bu when i go to
  the web UI, on templates, sometimes i got my template available,
  sometimes it is no more available.
 
  Sanjeev, any idea ?
 
 
  Thanks.
 
  Regards, Benoit.
 
 
  2014-04-10 17:57 GMT+02:00 benoit lair kurushi4...@gmail.com:
 
  After solving the problem with setting the perms, I could see the
  replication request pushing the data to Swift.

  Now another problem, and for information for those who want to install
  their own Swift : it is a requirement to have a Swift public URL with the
  Swift v1.0 API and not a 2.0 one.

  If your Swift endpoint is v2 you won't be able to push your data to Swift.

  Troubleshooting in progress; I have modified my gateway from v2 to v1 and am
  now waiting for the mgmt server to re-push the data.
 
 
  Regards, Benoit.
 
 
  2014-04-10 16:34 GMT+02:00 benoit lair kurushi4...@gmail.com:
 
  I have more information for my issue  :
  2014-04-10 16:26:23,071 DEBUG
  [o.a.c.s.r.NfsSecondaryStorageResource]
  (pool-10-thread-1:ctx-1a8aedc3) Successfully mounted
  10.32.0.70:/export/secondary at
  /var/cloudstack/mnt/secStorage/7b0ceb7f-ae60-3922-80e7-8836fec2dda8
  2014-04-10 16:26:23,071 DEBUG
  [o.a.c.s.r.LocalNfsSecondaryStorageResource]
  (pool-10-thread-1:ctx-1a8aedc3) Executing: sudo chmod 777
  /var/cloudstack/mnt/secStorage/7b0ceb7f-ae60-3922-80e7-8836fec2dda8
  2014-04-10 16:26:23,345 DEBUG
  [o.a.c.s.r.LocalNfsSecondaryStorageResource]
  (pool-10-thread-1:ctx-1a8aedc3) Exit value is 1
  2014-04-10 16:26:23,358 DEBUG
  [o.a.c.s.r.LocalNfsSecondaryStorageResource]
  (pool-10-thread-1:ctx-1a8aedc3) chmod: modification des permissions
  de «
 /var/cloudstack/mnt/secStorage/7b0ceb7f-ae60-3922-80e7-8836fec2dda8 »:
  Opération non permise
  2014-04-10 16:26:23,358 ERROR
  [o.a.c.s.r.LocalNfsSecondaryStorageResource]
  (pool-10-thread-1:ctx-1a8aedc3) Unable to set permissions for
  /var/cloudstack/mnt/secStorage/7b0ceb7f-ae60-3922-80e7-8836fec2dda8
  due to
  chmod: modification des permissions de «
  /var/cloudstack/mnt/secStorage/7b0ceb7f-ae60-3922-80e7-8836fec2dda8

Re: Swift as Secondary Storage

2014-04-17 Thread benoit lair
Hi Folks,


I am having some trouble getting Swift to work as secondary storage.

It has pushed the system VM template to my Swift setup, and I can see the
system VM template on my object storage nodes, but when I go to the web UI,
under Templates, sometimes my template is available and sometimes it is no
longer available.

Sanjeev, any idea ?


Thanks.

Regards, Benoit.



Re: Swift as Secondary Storage

2014-04-17 Thread benoit lair
The state of my ACS 4.3 setup is the following:

1 zone, 1 pod, 1 cluster, 1 XenServer 6.2, 1 NFS primary storage: OK
created 1 Swift secondary storage pointing to my Swift proxy node: OK,
swift CLI and Cyberduck OK
created a secondary staging store on an NFS server that already contains the
system VM template.

The creation of the secondary staging store triggers the Swift push from
the ACS management server to the Swift proxy node; the template is uploaded
correctly and I can see it on my Swift object storage node.
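For reference, a secondary staging store like this can also be added through the API; a minimal cloudmonkey sketch, assuming the createSecondaryStagingStore call introduced in 4.2 (the NFS URL and zone id below are placeholders, not values from this setup):

    create secondarystagingstore url=nfs://<staging-nfs-host>/export/staging zoneid=<zone-uuid>
    list secondarystagingstores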

Now, in the web UI under Templates, I see the two templates, the system VM and
the CentOS 5.6 VM template, but both show as not available, even though my CPVM
and SSVM are created and running...

How do I get my zone operational so that I can create a VM from the CentOS 5.6
template?


Thanks for your help.

Regards, Benoit.



Re: Swift as Secondary Storage

2014-04-17 Thread benoit lair
@Ilya,

I don't have access to my PoC (I am not at the office tonight); I will verify
this tomorrow, but I remember not having seen any NFS share mounted on the
SSVM.
So the SSVM is supposed to mount the Swift storage as an abstracted NFS
share; how is that achieved? Is cloudfuse used for this?

@Sanjeev,

You mean that I need to re-run cloud-install-sys-tmplt from the management
server against my secondary staging NFS store?


Thanks a lot for your responses, folks.





2014-04-17 19:03 GMT+02:00 Sanjeev Neelarapu sanjeev.neelar...@citrix.com:

 Benoit,

 Since your system vms are ready, try to register a new template and see if
 it works. If you still face issues in uploading the template to Swift,
 please look at the ssvm logs /var/log/cloud.log

 -Sanjeev
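 As an illustration of that test, registering a template from the command line with cloudmonkey might look like the sketch below; the name, URL, OS type id and zone id are placeholders, and the parameter list should be checked against the 4.3 API reference:

     register template name=test-tmpl displaytext=test-tmpl format=VHD hypervisor=XenServer ostypeid=<os-type-uuid> zoneid=<zone-uuid> url=http://<webserver>/test.vhd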


 -Original Message-
 From: ilya musayev [mailto:ilya.mailing.li...@gmail.com]
 Sent: Thursday, April 17, 2014 8:06 AM
 To: users@cloudstack.apache.org
 Subject: Re: Swift as Secondary Storage

 Benoit,

 A few things I would have done:

 1) Take a look in the MySQL database; there are some tables with the template
 name in them. You can see their state and whether they are referenced.
 2) Perhaps the template is not marked as public and you can't see it?
 3) SSH to the SSVM with the private key, go to /var/log/cloud/ and review the
 log file for any abnormalities. Also run mount and df on the SSVM to see
 whether Swift is abstracted and mounted through NFS (see the sketch below).
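 A rough sketch of items 1 and 3 for a setup like this one (XenServer hosts, ACS 4.3); the link-local address comes from the SSVM details in the UI, and the table and column names should be double-checked against your schema:

     # from the XenServer host that runs the SSVM
     ssh -i /root/.ssh/id_rsa.cloud -p 3922 root@<ssvm-link-local-ip>
     tail -f /var/log/cloud.log      # on the SSVM
     mount; df -h                    # check what, if anything, is mounted for secondary storage

     # on the management server: template state as recorded in the database
     mysql -u cloud -p -e "SELECT t.id, t.name, r.store_id, r.state, r.download_state FROM cloud.vm_template t JOIN cloud.template_store_ref r ON r.template_id = t.id;"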

 Regards
 ilya


 On 4/17/14, 11:01 AM, benoit lair wrote:
  The state of my acs 4.3 is the following :
 
  1 zone, 1 pod, 1 cluster, 1 xenserver 6.2, 1 nfs primary storage OK
  created 1 swift secondary storage pointing to my swift proxy node, OK,
  swift cli and cyberduck OK created a secondary staging store with a
  nfs server already containing the system vm template.
 
  The creation of the secondary staging store triggers the swift push
  from the acs mgmt server to the swift proxy node, the template is well
  uploaded, can see it on my object storage swift node.
 
  Now, i got on the web UI  templates, the 2 templates, system vm and
  centos
  5.6 vm template but there are both not available although i have my
  cpvm and ssvm vms created and launched...
 
  How to get my zone operationnal and being able to create a vm with
  centos
  5.6 vm template ?
 
 
  Thanks for your help.
 
  Regards, Benoit.
 
 
  2014-04-17 15:51 GMT+02:00 benoit lair kurushi4...@gmail.com:
 
  Hi Folks,
 
 
  I have already some trouble with getting swift working as secondary
  storage.
 
  It has pushed the system vm template to my swift framework, i can see
  the system vm template on my object storage nodes, bu when i go to
  the web UI, on templates, sometimes i got my template available,
  sometimes it is no more available.
 
  Sanjeev, any idea ?
 
 
  Thanks.
 
  Regards, Benoit.
 
 
  2014-04-10 17:57 GMT+02:00 benoit lair kurushi4...@gmail.com:
 
  After solving the problem of the set perms, i could see the
  replication
  request for pushing the datas on swift.
 
  Now another problem and for information for those who whant to
  install their own swift : it is a requirement to have a swift public
  url with swift
  v1.0 and not a 2.0 one.
 
  If your swift endpoint is in v2 you won't be able to push your datas
  on swift.
 
  Troubleshooting in progress, have modified by gateway from v2 to v1,
  now waiting for mgmt cs to repush the data.
 
 
  Regards, Benoit.
 
 
  2014-04-10 16:34 GMT+02:00 benoit lair kurushi4...@gmail.com:
 
  I have more information for my issue  :
  2014-04-10 16:26:23,071 DEBUG
  [o.a.c.s.r.NfsSecondaryStorageResource]
  (pool-10-thread-1:ctx-1a8aedc3) Successfully mounted
  10.32.0.70:/export/secondary at
  /var/cloudstack/mnt/secStorage/7b0ceb7f-ae60-3922-80e7-8836fec2dda8
  2014-04-10 16:26:23,071 DEBUG
  [o.a.c.s.r.LocalNfsSecondaryStorageResource]
  (pool-10-thread-1:ctx-1a8aedc3) Executing: sudo chmod 777
  /var/cloudstack/mnt/secStorage/7b0ceb7f-ae60-3922-80e7-8836fec2dda8
  2014-04-10 16:26:23,345 DEBUG
  [o.a.c.s.r.LocalNfsSecondaryStorageResource]
  (pool-10-thread-1:ctx-1a8aedc3) Exit value is 1
  2014-04-10 16:26:23,358 DEBUG
  [o.a.c.s.r.LocalNfsSecondaryStorageResource]
  (pool-10-thread-1:ctx-1a8aedc3) chmod: modification des permissions
  de «
 /var/cloudstack/mnt/secStorage/7b0ceb7f-ae60-3922-80e7-8836fec2dda8 »:
  Opération non permise
  2014-04-10 16:26:23,358 ERROR
  [o.a.c.s.r.LocalNfsSecondaryStorageResource]
  (pool-10-thread-1:ctx-1a8aedc3) Unable to set permissions for
  /var/cloudstack/mnt/secStorage/7b0ceb7f-ae60-3922-80e7-8836fec2dda8
  due to
  chmod: modification des permissions de «
  /var/cloudstack/mnt/secStorage/7b0ceb7f-ae60-3922-80e7-8836fec2dda8 »:
  Opération non permise
  2014-04-10 16:26:23,358 ERROR
  [o.a.c.s.r.LocalNfsSecondaryStorageResource]
  (pool-10-thread-1:ctx-1a8aedc3) GetRootDir for nfs://
  10.0.0.200/export/secondary failed due to
  com.cloud.utils.exception.CloudRuntimeException: Unable to set
  permissions for
  /var/cloudstack/mnt/secStorage/7b0ceb7f-ae60-3922

Re: Swift as Secondary Storage

2014-04-10 Thread benoit lair
-8836fec2dda8 »:
Opération non permise


But that is what I don't understand: my NFS server 10.0.0.200 is already
mounted and active as primary storage, and it has virtual machines running on
its storage.
I set the same options in /etc/exports for both the primary and the secondary
export, and both parent directories have the same permissions and owner.

Where am I wrong?
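For comparison, the export options usually recommended for a CloudStack secondary or staging NFS share look like the line below (path and network are placeholders); without no_root_squash, a chmod issued as root from the management server is remapped to an unprivileged user on the NFS server and fails with exactly this kind of "Operation not permitted" error:

    /export/secondary  10.0.0.0/24(rw,async,no_root_squash,no_subtree_check)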


Thanks for your help.

Regards, Benoit.


2014-04-09 17:01 GMT+02:00 Min Chen min.c...@citrix.com:

 Sanjeev did swift as secondary storage in 4.2, maybe he can shed some
 light on this.

 Thanks
 -min

 Sent from my iPhone




Re: Swift as Secondary Storage

2014-04-10 Thread benoit lair
After solving the permissions problem, I could see the replication requests
pushing the data to Swift.

Now another problem, for the information of those who want to install their
own Swift: the Swift public URL must expose the v1.0 auth API, not the 2.0
one.

If your Swift endpoint only offers v2 auth, you won't be able to push your
data to Swift.
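To illustrate the difference, a quick check against a v1.0-style (TempAuth/swauth) endpoint with the swift client; the proxy host, account, user and key are placeholders:

    swift -A http://<swift-proxy>:8080/auth/v1.0 -U <account>:<user> -K <key> stat

A Keystone v2.0 endpoint, by contrast, typically looks like http://<keystone-host>:5000/v2.0/, which is the form that did not work here.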

Troubleshooting is in progress; I have switched my gateway from v2 to v1 and
am now waiting for the management server to push the data again.


Regards, Benoit.


2014-04-10 16:34 GMT+02:00 benoit lair kurushi4...@gmail.com:

 I have more information for my issue  :

 2014-04-10 16:26:23,071 DEBUG [o.a.c.s.r.NfsSecondaryStorageResource]
 (pool-10-thread-1:ctx-1a8aedc3) Successfully mounted 
 10.32.0.70:/export/secondary
 at /var/cloudstack/mnt/secStorage/7b0ceb7f-ae60-3922-80e7-8836fec2dda8
 2014-04-10 16:26:23,071 DEBUG [o.a.c.s.r.LocalNfsSecondaryStorageResource]
 (pool-10-thread-1:ctx-1a8aedc3) Executing: sudo chmod 777
 /var/cloudstack/mnt/secStorage/7b0ceb7f-ae60-3922-80e7-8836fec2dda8
 2014-04-10 16:26:23,345 DEBUG [o.a.c.s.r.LocalNfsSecondaryStorageResource]
 (pool-10-thread-1:ctx-1a8aedc3) Exit value is 1
 2014-04-10 16:26:23,358 DEBUG [o.a.c.s.r.LocalNfsSecondaryStorageResource]
 (pool-10-thread-1:ctx-1a8aedc3) chmod: modification des permissions de
 « /var/cloudstack/mnt/secStorage/7b0ceb7f-ae60-3922-80e7-8836fec2dda8 »:
 Opération non permise
 2014-04-10 16:26:23,358 ERROR [o.a.c.s.r.LocalNfsSecondaryStorageResource]
 (pool-10-thread-1:ctx-1a8aedc3) Unable to set permissions for
 /var/cloudstack/mnt/secStorage/7b0ceb7f-ae60-3922-80e7-8836fec2dda8 due to
 chmod: modification des permissions de
 « /var/cloudstack/mnt/secStorage/7b0ceb7f-ae60-3922-80e7-8836fec2dda8 »:
 Opération non permise
 2014-04-10 16:26:23,358 ERROR [o.a.c.s.r.LocalNfsSecondaryStorageResource]
 (pool-10-thread-1:ctx-1a8aedc3) GetRootDir for nfs://
 10.0.0.200/export/secondary failed due to
 com.cloud.utils.exception.CloudRuntimeException: Unable to set permissions
 for /var/cloudstack/mnt/secStorage/7b0ceb7f-ae60-3922-80e7-8836fec2dda8 due
 to chmod: modification des permissions de
 « /var/cloudstack/mnt/secStorage/7b0ceb7f-ae60-3922-80e7-8836fec2dda8 »:
 Opération non permise
 com.cloud.utils.exception.CloudRuntimeException: Unable to set permissions
 for /var/cloudstack/mnt/secStorage/7b0ceb7f-ae60-3922-80e7-8836fec2dda8 due
 to chmod: modification des permissions de
 « /var/cloudstack/mnt/secStorage/7b0ceb7f-ae60-3922-80e7-8836fec2dda8 »:
 Opération non permise
 at
 org.apache.cloudstack.storage.resource.LocalNfsSecondaryStorageResource.mount(LocalNfsSecondaryStorageResource.java:111)
 at
 org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.mountUri(NfsSecondaryStorageResource.java:2310)
 at
 org.apache.cloudstack.storage.resource.LocalNfsSecondaryStorageResource.getRootDir(LocalNfsSecondaryStorageResource.java:85)
 at
 org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.downloadFromUrlToNfs(NfsSecondaryStorageResource.java:693)
 at
 org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.registerTemplateOnSwift(NfsSecondaryStorageResource.java:724)
 at
 org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.execute(NfsSecondaryStorageResource.java:772)
 at
 org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.executeRequest(NfsSecondaryStorageResource.java:208)
 at
 org.apache.cloudstack.storage.resource.LocalNfsSecondaryStorageResource.executeRequest(LocalNfsSecondaryStorageResource.java:78)
 at
 org.apache.cloudstack.storage.LocalHostEndpoint.sendMessage(LocalHostEndpoint.java:93)
 at
 org.apache.cloudstack.storage.LocalHostEndpoint$CmdRunner.runInContext(LocalHostEndpoint.java:110)
 at
 org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
 at
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
 at
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
 at
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
 at
 org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
 at
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at
 java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
 at java.util.concurrent.FutureTask.run(FutureTask.java:166)
 at
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:165)
 at
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:266)
 at
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146

Re: Swift as Secondary Storage

2014-04-09 Thread benoit lair
Hello Pierre,

That's what I am trying to do, but with no success.

I can't find anything about this in the docs.

I have already defined my zone, pod, cluster, host, primary storage and Swift
secondary storage, and defined my secondary NFS staging store, but there is no
way to get the templates onto the secondary staging store (only the system VM
template, downloaded with the management server command-line script). I kept
an eye on my Swift proxy server, but I never see any incoming connection from
CloudStack.


Does anybody have an idea?

Thanks for any help.

Regards, Benoit.


2014-04-03 16:00 GMT+02:00 Pierre-Luc Dion pd...@cloudops.com:

 Has anyone tried and successfully used Swift as secondary storage?




 Pierre-Luc Dion
 Architecte de Solution Cloud | Cloud Solutions Architect
 514-447-3456, 1101
 - - -

 *CloudOps*420 rue Guy
 Montréal QC  H3J 1S6
 www.cloudops.com
 @CloudOps_



Re: [ACS 4.3] Autoscale Button hidden ?

2014-04-09 Thread benoit lair
Hello Rajesh,

I think I have my answer: I was trying to use autoscaling without a NetScaler,
and that feature does not exist in 4.3 at this time.
With 4.2 I saw the autoscale wizard, but only because the NetScaler service
was enabled on my network.
Now, in 4.3, NetScaler has to be activated at the service provider level.
So with no NetScaler service provider activated, there is no matching network
offering, and therefore no autoscaling wizard available.
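As a sketch of that gating, the provider state can be inspected and enabled through the API (cloudmonkey syntax; the physical network and provider ids are placeholders, and an actual NetScaler device still has to be added for autoscaling to work):

    list networkserviceproviders physicalnetworkid=<physical-network-uuid> name=Netscaler
    update networkserviceprovider id=<provider-uuid> state=Enabled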

Unfortunately I do not own such an appliance.


I discussed this with Nguyen Anh Tu, who told me that this feature is not
present in 4.3 (nor in the 4.4 master branch).


I will have to wait for the next release to get this feature.

Regards,

Benoit.


2014-03-28 10:48 GMT+01:00 Rajesh Battala rajesh.batt...@citrix.com:

 When I was on the 4.3 branch, I was able to create autoscale policies.
 Can you share a screenshot and log a bug?

 Thanks
 Rajesh Battala




Re: [ACS 4.3] Autoscale Button hidden ?

2014-04-09 Thread benoit lair
@Rajesh, sorry for the late reply, I have been a bit overworked these last weeks.

@Nguyen, cool, this will be a huge feature.

Regards, Benoit


2014-04-09 22:50 GMT+02:00 Nguyen Anh Tu t...@apache.org:

 On Wed, Apr 9, 2014 at 11:50 PM, benoit lair kurushi4...@gmail.com
 wrote:

 
  I saw this problem with Nguyen Anh Tu who said me that this feature was
 not
  present in 4.3 (also in 4.4 master branch).
 

 Kurushi,

 This feature will be in the 4.4 release.

 Thanks,
 --Tuna



Re: VPN for VPC feature in 4.3

2014-04-02 Thread benoit lair
Yes, I found exceptions in the log file, but in my case the reason was that I
had run out of public IPs.

Could you paste the complete management server log file on pastebin.com?

Have you already succeeded in creating a shared network, including the
creation of a VR VM? Did it take a public IP?
In the past I had an issue with the network labels on my hypervisors
(XenServers), and that is what caused this problem for me.
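A quick way to check that on each XenServer host, assuming the traffic labels were set on the zone's physical network:

    xe network-list params=uuid,name-label,bridge

The name-label values have to match the XenServer traffic labels configured in CloudStack, otherwise the router's NICs may end up on the wrong network or fail to be plugged.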


Regards,

Benoit Lair.


2014-04-02 13:19 GMT+02:00 Praveen Buravilli praveen.buravi...@citrix.com:

 Yes Benoit, I have 40% of free public IPs available. So, that should not
 be an issue.

 I don't see any errors in the log file either. Have you noticed any exceptions
 in the log files, by any chance, when you encountered this issue?

 Thanks,
 Praveen Kumar

 -Original Message-
 From: benoit lair [mailto:kurushi4...@gmail.com]
 Sent: 02 April 2014 17:30
 To: users@cloudstack.apache.org
 Subject: Re: VPN for VPC feature in 4.3

 Hi Praveen,


 I have already had this issue with the VPC VR: have you checked whether you
 have public IP addresses available in your zone?


 Regards, Benoit.


 2014-04-02 12:24 GMT+02:00 Praveen Buravilli praveen.buravi...@citrix.com
 :

  Thanks Geoff. Actually, eth1 is missing on the VPC router.

  When I looked at the log file, surprisingly, a request was sent to
  create the router VM with two NICs (one link-local and one public),
  whereas the router was created with only one NIC.



  Any thoughts? FYI, I'm running CloudStack 4.3 with KVM nodes.



  Attached below is a log file snippet containing both the request and the
  response for the router start command (the relevant NIC entries were
  highlighted in red and green in the original mail):
 
 
  ==
  
 
  2014-04-02 06:00:47,968 DEBUG [c.c.a.t.Request]
  (Job-Executor-35:ctx-544b3513 ctx-5d9c4b47) Seq 6-1545667825: Sending
  { Cmd , MgmtId: 52237010300, via: 6(localhost.localdomain), Ver: v1,
 Flags:
  100111,
  [{com.cloud.agent.api.StartCommand:{vm:{id:43,name:r-43-VM,
  type:DomainRouter,cpus:1,minSpeed:500,maxSpeed:500,minRam:1
  34217728,maxRam:134217728,arch:x86_64,os:Debian
  GNU/Linux 7(64-bit),bootArgs:
  vpccidr=10.201.0.0/16domain=cs7cloud.internal dns1=8.8.8.8
  template=domP name=r-43-VM
  eth0ip=169.254.1.131 eth0mask=255.255.0.0 type=vpcrouter
  disable_rp_filter=true,rebootOnCrash:false,enableHA:true,limitCp
  uUse:false,enableDynamicallyScaleVm:false,vncPassword:21a870dc77
  23830,params:{},uuid:05b714cf-a511-42d9-b24a-6d077342865f,disk
  s:[{data:{org.apache.cloudstack.storage.to.VolumeObjectTO:{uuid
  :b61da4e1-121e-4e02-b345-35719deec994,volumeType:ROOT,dataStore
  :{org.apache.cloudstack.storage.to.PrimaryDataStoreTO:{uuid:90ff
  a1df-e8bd-3e46-893d-bb9b63e0b180,id:2,poolType:NetworkFilesystem
  ,host:172.20.105.2,path:/export/praveen/csprimary,port:2049
  ,url:NetworkFilesystem://
  172.20.105.2//export/praveen/csprimary/?ROLE=PrimarySTOREUUID=90ffa1d
  f-e8bd-3e46-893d-bb9b63e0b180
 
 }},name:ROOT-43,size:262144,path:b61da4e1-121e-4e02-b345-35719deec994,volumeId:46,vmName:r-43-VM,accountId:7,format:QCOW2,id:46,deviceId:0,hypervisorType:KVM}},diskSeq:0,path:b61da4e1-121e-4e02-b345-35719deec994,type:ROOT,_details:{managed:false,storagePort:2049,storageHost:172.20.105.2,volumeSize:262144}}],nics:[{deviceId:0,networkRateMbps:-1,defaultNic:false,uuid:2d4b2574-5e7d-45e7-bcbb-f64d1d9237c1,ip:169.254.1.131,netmask:255.255.0.0,gateway:169.254.0.1,mac:0e:00:a9:fe:01:83,broadcastType:LinkLocal,type:Control,isSecurityGroupEnabled:false}]},hostIp:172.20.210.7,executeInSequence:false,wait:0}},{com.cloud.agent.api.check.CheckSshCommand:{ip:169.254.1.131,port:3922,interval:6,retries:100,name:r-43-VM,wait:0}},{com.cloud.agent.api.GetDomRVersionCmd:{accessDetails:{router.ip:169.254.1.131,
  router.name
 
 :r-43-VM},wait:0}},{com.cloud.agent.api.PlugNicCommand:{nic:{deviceId:1,networkRateMbps:200,defaultNic:true,uuid:7f27078c-2123-4e53-9d4c-df2c6e4cb844,ip:172.20.211.132,netmask:255.255.255.0,gateway:172.20.211.1,mac:06:41:1a:00:00:20,broadcastType:Vlan,type:Public,broadcastUri:vlan://211,isolationUri:vlan://211,isSecurityGroupEnabled:false,name:cloudbr1},instanceName:r-43-VM,vmType:DomainRouter,wait:0}},{com.cloud.agent.api.routing.IpAssocVpcCommand:{ipAddresses:[{accountId:7,publicIp:172.20.211.132,sourceNat:true,add:true,oneToOneNat:false,firstIP:false,broadcastUri:211,vlanGateway:172.20.211.1,vlanNetmask:255.255.255.0,vifMacAddress:06:41:1a:00:00:20,networkRate:200,trafficType:Public,networkName:cloudbr1}],accessDetails:{router.guest.ip:172.20.211.132,zone.network.type:Advanced,router.ip:169.254.1.131,
  router.name
 
 :r-43-VM},wait:0}},{com.cloud.agent.api.routing.SetSourceNatCommand:{ipAddress:{accountId:7,publicIp:172.20.211.132,sourceNat:true,add:true,oneToOneNat:false,firstIP:false,broadcastUri:211,vlanGateway:172.20.211.1,vlanNetmask:255.255.255.0,vifMacAddress:06:41:1a:00:00:20,networkRate:200,trafficType:Public,networkName:cloudbr1},add:true

[ACS 4.3] ]Building non oss rpms

2014-03-28 Thread benoit lair
Hello Folks,


How can I build non-OSS RPMs from the ACS sources?

What is the procedure to accomplish this?


Regards,

Benoit Lair


[ACS 4.3] Autoscale Button hidden ?

2014-03-28 Thread benoit lair
Hello Folks,


With ACS 4.2, I had an Autoscale button in the UI to launch the autoscale
wizard on a load balancing rule.

With ACS 4.3, I no longer have this button.

Any ideas ?

Regards,

Benoit.


Re: [ACS 4.3] Autoscale Button hidden ?

2014-03-28 Thread benoit lair
A fresh install Rajesh


2014-03-28 10:42 GMT+01:00 Rajesh Battala rajesh.batt...@citrix.com:

 Is it upgrade setup or fresh install?

 -Original Message-
 From: benoit lair [mailto:kurushi4...@gmail.com]
 Sent: Friday, March 28, 2014 2:31 PM
 To: users@cloudstack.apache.org
 Subject: [ACS 4.3] Autoscale Button hidden ?

 Hello Folks,


 With acs 4.2, i had the button Autoscale in order to launch autoscale
 wizard on a lb rule set on the UI.

 With acs 4.3, i don't have this button anymore.

 Any ideas ?

 Regards,

 Benoit.



Re: [ACS 4.3] ]Building non oss rpms

2014-03-28 Thread benoit lair
Thanks a lot Rajesh.

But I would like to build my own RPMs from the 4.3 source.


2014-03-28 10:43 GMT+01:00 Rajesh Battala rajesh.batt...@citrix.com:

 This Jenkins job generates RPMs.
 Here is the link; a recent build has produced RPMs you can use:
 http://jenkins.buildacloud.org/view/4.4/job/cloudstack-4.4-package-rpm/

 Thanks
 Rajesh Battala

 -Original Message-
 From: benoit lair [mailto:kurushi4...@gmail.com]
 Sent: Friday, March 28, 2014 2:28 PM
 To: users@cloudstack.apache.org
 Subject: [ACS 4.3] ]Building non oss rpms

 Hello Folks,


 How can i build non oss rpms from acs sources ?

 What is the procedure in order to accomplish this task ?


 Regards,

 Benoit Lair



Re: [ACS 4.3] ]Building non oss rpms

2014-03-28 Thread benoit lair
Yes, exactly :)


2014-03-28 10:50 GMT+01:00 Rajesh Battala rajesh.batt...@citrix.com:

 You mean, to generate rpm's locally from your machine?


 https://cwiki.apache.org/confluence/display/CLOUDSTACK/How+to+build+CloudStack

 Thanks
 Rajesh Battala




Re: [ACS 4.3] ]Building non oss rpms

2014-03-28 Thread benoit lair
So with:

mvn clean install -P deps -Dnoredist; export ACS_BUILD_OPTS=-Dnoredist;
dpkg-buildpackage

I will be able to compile non-OSS RPMs?
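For what it is worth, dpkg-buildpackage produces Debian packages, not RPMs. A minimal local RPM build sketch for a noredist 4.3 tree on CentOS, assuming the deps/install-non-oss.sh helper and the packaging/centos63 scripts shipped in the source tree:

    # drop the proprietary jars into deps/ first (see the wiki page Rajesh linked), then:
    cd deps && ./install-non-oss.sh && cd ..
    cd packaging/centos63 && ./package.sh -p noredist

The resulting RPMs should end up under dist/rpmbuild/RPMS/.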






Re: [ACS 4.3] ]Building non oss rpms

2014-03-28 Thread benoit lair
ok, thanks Rajesh


2014-03-28 11:01 GMT+01:00 Rajesh Battala rajesh.batt...@citrix.com:

 I think so. Can you look at the Jenkins job log to figure out the command.

 -Original Message-
 From: benoit lair [mailto:kurushi4...@gmail.com]
 Sent: Friday, March 28, 2014 3:26 PM
 To: users@cloudstack.apache.org
 Subject: Re: [ACS 4.3] ]Building non oss rpms

 So with
 mvn clean install -P deps -Dnoredist; export ACS_BUILD_OPTS=-Dnoredist;
 dpkg-buildpackage

 I wil be able to compile non oss rpms ?



