RE: FC STORAGE

2019-04-17 Thread Grégoire Lamodière
Hi Dominik,

I no longer have PreSetup storage here, so I cannot be 100% sure, but I think 
you should add a / before your storage name: /STORAGE01

If it is still not working, can you share the mgmt logs while trying to add the 
primary store ?

Cheers.

Grégoire

From: Dominik Czerepiński [mailto:dominato...@gmail.com]
Sent: Wednesday, April 17, 2019 18:49
To: asen...@testlabs.com.au
Cc: users@cloudstack.apache.org
Subject: Re: FC STORAGE

Yes. OK, so I tried one more time to add the FC storage as primary storage to my 
cluster via the CloudStack manager, and no luck. In the attachment I'm sending you 
screenshots of my configuration and the log after adding the storage.

On Tue, 16 Apr 2019 at 15:22, asen...@testlabs.com.au wrote:
Are you selecting PreSetup with XCP-ng?

With XenServer, when adding the FC storage within CloudStack you need to
set PreSetup as the type:

Type: PreSetup
Path: /DELL-SC4020

-Adrian Sender
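
For anyone replaying this later, a hedged sketch of the PreSetup flow (the SR
name and SCSIid below are illustrative, not from this thread): with PreSetup,
CloudStack expects an SR that already exists on the pool.

  # Create the FC SR on the XenServer/XCP-ng pool master first
  xe sr-create name-label=STORAGE01 shared=true type=lvmohba \
     device-config:SCSIid=36000d31000ebf60000000000000004d
  # Then add the primary storage in CloudStack with:
  #   Protocol: PreSetup   Server: localhost   Path: /STORAGE01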

On 2019-04-16 21:00, Dominik Czerepiński wrote:
> Storage is presented to the hypervisor hosts via targetcli, and multipath is
> enabled in Xen manager, so on all hosts I see the storage via
> multipath. The CloudStack version is 4.11, but I tried the older version 4.10
> and an older version of xcp-ng (4.6). All combinations give me the same
> result: can't connect the storage to the hosts. If I present NFS as primary
> storage, the configuration completes successfully.
>
>> Message written by Rafael Weingärtner (rafaelweingart...@gmail.com) on
>> 16.04.2019 at 12:52:
>>
>> No need to be connected to the management server (MS). How did you
>> introduce the storage to CloudStack? What is the version of your
>> hypervisor? The version of CloudStack? How did you configure/connect
>> the
>> storage with the hypervisor hosts?
>>
>> On Tue, Apr 16, 2019 at 7:50 AM dominato...@gmail.com
>> wrote:
>>
>>> Hello,
>>>
>>> I would like to build a private cloud based on CloudStack. The
>>> infrastructure is ready, but when adding the primary storage I get an
>>> error:
>>> the cluster can't connect to the storage. The cluster is built on the
>>> latest
>>> XCP-ng, and the storage is on FC. The disk resource is visible on the
>>> clusters.
>>> My question is whether the first storage must also be connected to
>>> the
>>> management server?
>>>
>>
>>
>> --
>> Rafael Weingärtner


--
Regards, Dominik Czerepiński


RE: vSAN

2018-12-11 Thread Grégoire Lamodière
HI Fariborz, 

If you work in the XenServer world, this project might be interesting for you: 
https://xen-orchestra.com/blog/tag/xosan/

Cheers.

Grégoire

-Original Message-
From: Fariborz Navidan [mailto:mdvlinqu...@gmail.com]
Sent: Tuesday, December 11, 2018 21:37
To: users@cloudstack.apache.org
Subject: vSAN

Hello folks,
Do you know of any good vSAN software working with CloudStack?

Thanks


RE: System VM version - CS 4.11.1

2018-10-12 Thread Grégoire Lamodière
Hi Andrija, 

Yes, they both have the proper name (systemvm-kvm-4.11.1 and 
systemvm-xenserver-4.11.1).
The only thing that made it work was to change the type of the Xen template.

Cheers
Grégoire

-Original Message-
From: Andrija Panic [mailto:andrija.pa...@gmail.com]
Sent: Friday, October 12, 2018 20:48
To: users 
Subject: Re: System VM version - CS 4.11.1

Hi,

check the global variables router.template.kvm and 
router.template.xenserver; they should have the value of the exact name of 
the new systemVM templates as you registered them...

Let us know if this fixes the issue.
MGMT server will need to be restarted...

Cheers
Andrija
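
A hedged way to check those values outside the UI, assuming the default
'cloud' database:

  mysql -u cloud -p -e \
    "SELECT name, value FROM cloud.configuration \
     WHERE name LIKE 'router.template.%';"
  systemctl restart cloudstack-management   # pick up any change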

On Fri, 12 Oct 2018 at 19:32, Grégoire Lamodière 
wrote:

> Ok, I'll reply to myself.
> I think there is something to check regarding the way CS handles 
> template choice when creating systemvms.
>
> Both 4.11.1 KVM and Xen templates have been registered with UI.
> The KVM one is typed "SYSTEM", and the XEN "USER".
>
> So when systemvms were created, they were using the old template on Xen.
>
> This points me to the following questions :
>
> 1/ Is it a systemvm issue (should it not check the type when selecting 
> the template ?) 2/ Or is it a template registration issue - only setting 
> SYSTEM on the first one, the KVM one, and not the second / Xen one
>
> I think someone else already posted the same workaround on this list 
> (UPDATE DB SET type='SYSTEM')
>
> Cheers.
>
> Grégoire
>
> -Original Message-
> From: Grégoire Lamodière [mailto:g.lamodi...@dimsi.fr]
> Sent: Friday, October 12, 2018 18:31
> To: users@cloudstack.apache.org
> Subject: System VM version - CS 4.11.1
>
> Hi All,
>
> I am seeing strange behavior on a CS 4.11.1 deployment (upgraded from 
> 4.11.0).
>
> This deployment has a mixed cluster (KVM / XCP-ng 7.4).
> Both systemvm templates (KVM / XEN) have been deployed with the proper URLs.
>
> When systemvms are on a KVM host, they report the proper version.
> On XCP, they report 4.11.0.
>
> I checked a virtual router (/etc/cloudstack-release) and it reports 
> 4.11.0.
> On VR start, it shows 4.11.1.
>
> And if I try the « upgrade router » action from the UI, it breaks the VR.
>
> I will check the source to understand the init process and try to 
> understand what is happening.
>
> Anyone already got this issue ?
>
> Cheers
>
> Grégoire
>


-- 

Andrija Panić


RE: System VM version - CS 4.11.1

2018-10-12 Thread Grégoire Lamodière
Ok, I'll reply to myself.
I think there is something to check regarding the way CS handles template 
choice when creating systemvms.

Both 4.11.1 KVM and Xen templates have been registered with UI.
The KVM one is typed "SYSTEM", and the XEN "USER".

So when systemvms were created, they were using the old template on Xen.

This points me to the following questions :

1/ Is it a systemvm issue (should it not check the type when selecting the 
template ?)
2/ Or is it a template registration issue - only setting SYSTEM on the first 
one, the KVM one, and not the second / Xen one

I think someone else already posted the same workaround on this list (UPDATE DB 
SET type='SYSTEM')
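
A hedged sketch of that workaround (back up the database first; the template
name below is the one from this thread, adjust to yours):

  mysql -u cloud -p -e \
    "UPDATE cloud.vm_template SET type='SYSTEM' \
     WHERE name='systemvm-xenserver-4.11.1';"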

Cheers.

Grégoire

-Original Message-
From: Grégoire Lamodière [mailto:g.lamodi...@dimsi.fr]
Sent: Friday, October 12, 2018 18:31
To: users@cloudstack.apache.org
Subject: System VM version - CS 4.11.1

Hi All,

I am seeing strange behavior on a CS 4.11.1 deployment (upgraded from 4.11.0).

This deployment has a mixed cluster (KVM / XCP-ng 7.4).
Both systemvm templates (KVM / XEN) have been deployed with the proper URLs.

When systemvms are on a KVM host, they report the proper version.
On XCP, they report 4.11.0.

I checked a virtual router (/etc/cloudstack-release) and it reports 4.11.0.
On VR start, it shows 4.11.1.

And if I try the « upgrade router » action from the UI, it breaks the VR.

I will check the source to understand the init process and try to understand 
what is happening.

Anyone already got this issue ?

Cheers

Grégoire


System VM version - CS 4.11.1

2018-10-12 Thread Grégoire Lamodière
Hi All,

I am seeing strange behavior on a CS 4.11.1 deployment (upgraded from 4.11.0).

This deployment has a mixed cluster (KVM / XCP-ng 7.4).
Both systemvm templates (KVM / XEN) have been deployed with the proper URLs.

When systemvms are on a KVM host, they report the proper version.
On XCP, they report 4.11.0.

I checked a virtual router (/etc/cloudstack-release) and it reports 4.11.0.
On VR start, it shows 4.11.1.

And if I try the « upgrade router » action from the UI, it breaks the VR.

I will check the source to understand the init process and try to understand 
what is happening.

Anyone already got this issue ?

Cheers

Grégoire


KVM Snapshot for backup

2018-08-19 Thread Grégoire Lamodière
Hi All,

Has anyone experimented with the virsh snapshot-create-as command on a 
KVM-based instance?
I am facing some blockers, and I cannot find workarounds.

1/ Create a KVM instance from a template

→ QEMU chain is 2 nodes (1/ template, 2/ instance)

2/ Make a snapshot (live)

→ QEMU chain is 3 nodes (1/ template, 2/ base instance, 3/ snapshot)

3/ blockcommit and pivot

→ QEMU chain is 1 node

This behavior sounds OK to me (blockcommit from top to base), but we lose the 
benefit of sharing the template disk, and I am not sure there is no impact on 
the CS side.
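
For reference, a hedged sketch of the chain above (domain and disk names are
illustrative); a --shallow blockcommit may keep the template as the base
instead of collapsing the whole chain:

  # 2/ live external snapshot: adds a third node on top of the chain
  virsh snapshot-create-as i-2-42-VM bk1 --disk-only --atomic --no-metadata
  # ... back up the now-quiescent middle image here ...
  # 3/ commit the active overlay back down and pivot; --shallow commits only
  #    into the immediate backing file, preserving the template node
  virsh blockcommit i-2-42-VM vda --active --shallow --pivot --verbose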

Am I looking in the wrong direction to make efficient, live backups of my KVM 
instances ?
Any guidance would be much appreciated.

Best Regards.

Grégoire

---
Grégoire Lamodière
T/ + 33 6 76 27 03 31
F/ + 33 1 75 43 89 71



RE: QEMU dirty-bitmap

2018-04-23 Thread Grégoire Lamodière
Hi Paul, 

This is very good news.
I am available for any work / help.

Grégoire

---
Grégoire Lamodière
T/ + 33 6 76 27 03 31
F/ + 33 1 75 43 89 71


-Original Message-
From: Paul Angus 
Sent: Monday, April 23, 2018 11:37
To: users@cloudstack.apache.org
Subject: RE: QEMU dirty-bitmap

Hi All,

Yes, we will be implementing a 'generic' backup and recovery framework. We'll 
have the details in the wiki this week or next.




-Original Message-
From: Grégoire Lamodière 
Sent: 23 April 2018 09:52
To: users@cloudstack.apache.org
Subject: RE: QEMU dirty-bitmap

Hi Simon, 

Thank you for your feedback.
I'll continue working on this.

From a more general perspective, this topic is related to the backup process 
integrated into CS.
I remember an old thread (including @Paul, I think) mentioning work on a backup 
API to give third-party companies the ability to include CS in their supported 
environments.
Even if this approach sounds the easiest for us as a service provider (i.e. buy 
a licence, make two clicks, and let the proprietary tool do the job), I am not 
sure it is the best long-term strategy, and so far, none of them support KVM.

Is there any interest in the community to integrate a backup solution inside 
CS ?
For us, the key features would be :
- Add backup menu entry for admins
- Create backup tasks inside CS, including destinations (nfs) and sources 
(host, pool, zone)
- Support full, incremental, and forever incremental backups
- Ability to restore instances directly to CS
- If dirty-bitmaps work fine, include moving the bitmap when migrating an 
instance to a different host
- Support at least Xen and KVM

Right now, I think most of us use our own pieces of code.
Merging them into a proper feature would be a nice move.

We could also talk with the excellent XenOrchestra project (we use it for our 
Xen pools) and offer assistance to support KVM and other hypervisors, but I am 
not sure this is their strategy, and with xenserver-ng, they might be quite 
busy.

Any opinion ?

Cheers.

Grégoire

---
Grégoire Lamodière
T/ + 33 6 76 27 03 31
F/ + 33 1 75 43 89 71

-Original Message-
From: Simon Weller 
Sent: Sunday, April 22, 2018 19:26
To: users 
Subject: Re: QEMU dirty-bitmap

That looks pretty interesting. I'll read up on it. We've been more focused 
lately on Ceph replication, but this would be a nice way to handle it in a 
storage-agnostic way.


- Si


____
From: Grégoire Lamodière 
Sent: Sunday, April 22, 2018 7:15 AM
To: users
Subject: QEMU dirty-bitmap

Dear List,

Has anyone tried the dirty-bitmap solution to provide incremental backups on 
KVM/QEMU-based CS clusters ?
The solution sounds pretty nice, but instance migration scenarios require 
additional tasks to copy the bitmaps.

Should we dig into this subject and share, or is there any existing work on 
incremental backup for CS / KVM (using qemu bitmaps or any other feature) that 
we might join ?

Cheers.

Grégoire

---
Grégoire Lamodière
T/ + 33 6 76 27 03 31
F/ + 33 1 75 43 89 71




RE: QEMU dirty-bitmap

2018-04-23 Thread Grégoire Lamodière
Hi Simon, 

Thank you for your feedback.
I'll continue working on this.

From a more general perspective, this topic is related to the backup process 
integrated into CS.
I remember an old thread (including @Paul, I think) mentioning work on a backup 
API to give third-party companies the ability to include CS in their supported 
environments.
Even if this approach sounds the easiest for us as a service provider (i.e. buy 
a licence, make two clicks, and let the proprietary tool do the job), I am not 
sure it is the best long-term strategy, and so far, none of them support KVM.

Is there any interest in the community to integrate a backup solution inside 
CS ?
For us, the key features would be :
- Add backup menu entry for admins
- Create backup tasks inside CS, including destinations (nfs) and sources 
(host, pool, zone)
- Support full, incremental, and forever incremental backups
- Ability to restore instances directly to CS
- If dirty-bitmaps work fine, include moving the bitmap when migrating an 
instance to a different host
- Support at least Xen and KVM

Right now, I think most of us use our own pieces of code.
Merging them into a proper feature would be a nice move.

We could also talk with the excellent XenOrchestra project (we use it for our 
Xen pools) and offer assistance to support KVM and other hypervisors, but I am 
not sure this is their strategy, and with xenserver-ng, they might be quite 
busy.

Any opinion ?

Cheers.

Grégoire

---
Grégoire Lamodière
T/ + 33 6 76 27 03 31
F/ + 33 1 75 43 89 71

-Original Message-
From: Simon Weller 
Sent: Sunday, April 22, 2018 19:26
To: users 
Subject: Re: QEMU dirty-bitmap

That looks pretty interesting. I'll read up on it. We've been more focused 
lately on Ceph replication, but this would be a nice way to handle it in a 
storage-agnostic way.


- Si


____
From: Grégoire Lamodière 
Sent: Sunday, April 22, 2018 7:15 AM
To: users
Subject: QEMU dirty-bitmap

Dear List,

Has anyone tried the dirty-bitmap solution to provide incremental backups on 
KVM/QEMU-based CS clusters ?
The solution sounds pretty nice, but instance migration scenarios require 
additional tasks to copy the bitmaps.

Should we dig into this subject and share, or is there any existing work on 
incremental backup for CS / KVM (using qemu bitmaps or any other feature) that 
we might join ?

Cheers.

Grégoire

---
Grégoire Lamodière
T/ + 33 6 76 27 03 31
F/ + 33 1 75 43 89 71



QEMU dirty-bitmap

2018-04-22 Thread Grégoire Lamodière
Dear List,

Has anyone tried the dirty-bitmap solution to provide incremental backups on 
KVM/QEMU-based CS clusters ?
The solution sounds pretty nice, but instance migration scenarios require 
additional tasks to copy the bitmaps.
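
For context, a hedged sketch of how such a bitmap can be created through QMP
(domain, node, and bitmap names are illustrative; persistent bitmaps need a
reasonably recent QEMU and qcow2 disks):

  virsh qemu-monitor-command i-2-42-VM \
    '{"execute": "block-dirty-bitmap-add",
      "arguments": {"node": "drive-virtio-disk0",
                    "name": "backup-bitmap", "persistent": true}}'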

Should we dig into this subject and share, or is there any existing work on 
incremental backup for CS / KVM (using qemu bitmaps or any other feature) that 
we might join ?

Cheers.

Grégoire

---
Grégoire Lamodière
T/ + 33 6 76 27 03 31
F/ + 33 1 75 43 89 71



RE: Community opinion regarding Apache events banner in CloudStack's website

2018-04-17 Thread Grégoire Lamodière
Hi Rafael, 

The second one is very well integrated into the page, but I think Dag is 
right; the third option will be more readable / clickable for all visitors.

Grégoire

---
Grégoire Lamodière
T/ + 33 6 76 27 03 31
F/ + 33 1 75 43 89 71


-Original Message-
From: Rafael Weingärtner 
Sent: Tuesday, April 17, 2018 20:35
To: dev 
Cc: users 
Subject: Re: Community opinion regarding Apache events banner in CloudStack's 
website

Ah damn... I forgot about the file stripping on our mailing list.
Sorry guys. Here they go.

- first one:
https://drive.google.com/open?id=1vSqni_GEj3YJjuGehxe-_dnrNqQP7x8y

- second one:
https://drive.google.com/open?id=1LEmt9g5ceAUeTuc2a1Cb4uctOwyz5eQ8

On Tue, Apr 17, 2018 at 3:31 PM, Dag Sonstebo 
wrote:

> The white one is quite nice ☺
>
> Joking aside – looks like they got stripped from your email Rafael.
>
> Regards,
> Dag Sonstebo
> Cloud Architect
> ShapeBlue
>
> From: Rafael Weingärtner 
> Reply-To: "d...@cloudstack.apache.org" 
> Date: Tuesday, 17 April 2018 at 19:13
> To: users , dev 
> 
> Subject: Community opinion regarding Apache events banner in 
> CloudStack's website
>
> Hello folks,
> I am trying to work out something to put Apache events banner on our 
> website. So far I came up with two proposals. Which one of them do you 
> guys prefer?
> First one:
> [cid:ii_jg3zjco00_162d4ce7db0cd3da]
>
>
> Second:
> [cid:ii_jg3zk0e01_162d4cefaef3a1ce]
>
> --
> Rafael Weingärtner
>


-- 
Rafael Weingärtner


RE: VHD import

2018-03-06 Thread Grégoire Lamodière
Hi Dag, All, 

I spent some time working on this matter, and here are the results of my tests.
They might be useful in case anyone has to restore instances from an NFS store.

I think I was facing 2 issues.

1/ You are right: in case a VM crashed (i.e. it was running at the time of the 
network crash), you can assume around a 50% chance of restoring it easily.
2/ The vhd-util version used on the XenServer seems to be the key to the 
process.

I made the following tries:

- Take a VHD from CS 4.9.3 / XenServer 7.0
- Download vhd-util from the install guide URL
- Import it to CS 4.11 / XenServer 7.0 => not working
- Upgrade the XenServer to 7.2 => import still not working
- Import directly to XenServer 7.0 / 7.2 => working 100%

- Do exactly the same, but copy vhd-util from the original CS 4.9.3 management 
server to the new one
- Import it to CS 4.11 / XenServer 7.0 => working
- Upgrade the XenServer to 7.2 => import working
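
A hedged sketch of the commands involved in these tests (UUIDs, size, and
paths are illustrative):

  # Coalesce the chain into a single VHD first
  vhd-util coalesce -n /mnt/old-primary/<volume-uuid>.vhd
  # Round-trip through XenServer to get a header CloudStack accepts
  VDI=$(xe vdi-create sr-uuid=<sr-uuid> name-label=restore \
        virtual-size=50GiB type=user)
  xe vdi-import uuid=$VDI filename=/mnt/old-primary/<volume-uuid>.vhd format=vhd
  xe vdi-export uuid=$VDI filename=/tmp/clean.vhd format=vhd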

So from what I can see:
vdi-import always works from a XenServer perspective, but in some cases VMs 
won't start properly if they were inconsistent.

When a VHD is downloaded as a template in CS, the errors reported were mostly 
post-download errors, e.g. in the SSVM script:

# clear the 'hidden' flag on the downloaded image; roll back if it fails
vhd-util set -n ${tmpltimg2} -f "hidden" -v "0" > /dev/null
rollback_if_needed $tmpltfs $? "vhd remove $tmpltimg2 hidden failed\n"

That’s all for my feedback.
In case anyone needs info or scripts, I'll be happy to share.

Now it's time to continue my work moving to KVM, but that is another story.

Regards

Grégoire

---
Grégoire Lamodière
T/ + 33 6 76 27 03 31
F/ + 33 1 75 43 89 71

-Original Message-
From: Grégoire Lamodière [mailto:g.lamodi...@dimsi.fr]
Sent: Friday, February 23, 2018 21:10
To: users@cloudstack.apache.org
Subject: RE: VHD import

Hi Dag, 

I am still trying to understand, but some instances crashed whereas some others 
shut down properly.
So far, I can confirm that all VHDs (after coalescing) can be imported to Xen, 
and most of them fail to import into CS as VHDs.
Some failed imports succeed after the Xen import step, but some still fail 
after the xe vdi-export (the Xen VM works fine).

The strange thing is that all the VHDs look good (all steps of the CS script 
pass, but the process results in CS errors).
This is not a big deal, as we are moving to KVM, but I hope we won't have the 
same issues with KVM NFS-based volumes.
I'll work on this a bit this weekend, so I can give proper feedback.

Anyway, once again, thanks for your blog post, clear and useful, as usual.

Grégoire

---
Grégoire Lamodière
T/ + 33 6 76 27 03 31
F/ + 33 1 75 43 89 71


-Original Message-
From: Dag Sonstebo [mailto:dag.sonst...@shapeblue.com]
Sent: Friday, February 23, 2018 20:48
To: users@cloudstack.apache.org
Subject: Re: VHD import

Hi Gregoire,

One thing just struck me – what was the status of the VMs you were exporting 
from NFS? Were they cleanly shut down, or did they simply crash when the 
hardware failed? If the latter, could you have an issue where the disk you are 
initially exporting isn't consistent, hence the import fails? 

Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue

On 22/02/2018, 09:56, "Dag Sonstebo"  wrote:

Hi Gregoire,

Glad the blog post is of use. It’s a while since I wrote it so I had to 
re-read it myself.  I have not come across this problem – but as you can 
probably guess we luckily don’t have to do this recovery procedure very often.

I can only assume same as yourself that the vhd-util coalesce using the 
downloaded vhd-util binary must give a slightly different format header which 
the import in CloudStack doesn’t like but XS is quite happy about. Also keep in 
mind that the vhd-util from 
http://download.cloud.com.s3.amazonaws.com/tools/vhd-util is slightly different 
than the one you find on the XenServers, so this may also be a factor.

Please let us know how you get on with your investigation – if you find the 
root cause let me know and I'll add it as a subnote to the original article 
(don't worry, you'll get the credit :-) ).

Regards,
    Dag Sonstebo
Cloud Architect
ShapeBlue

On 22/02/2018, 00:10, "Grégoire Lamodière"  wrote:

Dear All,

I recently lost a Xen 7 pool managed by CS 4.11 (hardware issue).

I tried to apply Dag's excellent article 
(http://www.shapeblue.com/recovery-of-vms-to-new-cloudstack-instance/), and 
found a strange behavior :


-  If I directly export the VHD from old NFS primary storage 
(after coalesce vhd with vhd-util) and import to CS using import Template, they 
always fail, with 2 types of error (mainly Failed post download script: vhd 
remove 
/mnt/SecStorage/2da072e7-a6fe-3e39-b07e-77ec4f34fd49/template/tmpl/2/258/dnld5954086457185109807tmp_
 hidden failed).

-  If I use the same VHD, import it to a xen server (xe 
vdi-import), and then export it (xe vdi-export), I can successfully import it 
to CS

RE: VHD import

2018-02-23 Thread Grégoire Lamodière
Hi Dag, 

I am still trying to understand, but some instances crashed whereas some others 
shut down properly.
So far, I can confirm that all VHDs (after coalescing) can be imported to Xen, 
and most of them fail to import into CS as VHDs.
Some failed imports succeed after the Xen import step, but some still fail 
after the xe vdi-export (the Xen VM works fine).

The strange thing is that all the VHDs look good (all steps of the CS script 
pass, but the process results in CS errors).
This is not a big deal, as we are moving to KVM, but I hope we won't have the 
same issues with KVM NFS-based volumes.
I'll work on this a bit this weekend, so I can give proper feedback.

Anyway, once again, thanks for your blog post, clear and useful, as usual.

Grégoire

---
Grégoire Lamodière
T/ + 33 6 76 27 03 31
F/ + 33 1 75 43 89 71


-Original Message-
From: Dag Sonstebo [mailto:dag.sonst...@shapeblue.com]
Sent: Friday, February 23, 2018 20:48
To: users@cloudstack.apache.org
Subject: Re: VHD import

Hi Gregoire,

One thing just struck me – what was the status of the VMs you were exporting 
from NFS? Were they cleanly shut down, or did they simply crash when the 
hardware failed? If the latter, could you have an issue where the disk you are 
initially exporting isn't consistent, hence the import fails? 

Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue

On 22/02/2018, 09:56, "Dag Sonstebo"  wrote:

Hi Gregoire,

Glad the blog post is of use. It’s a while since I wrote it so I had to 
re-read it myself.  I have not come across this problem – but as you can 
probably guess we luckily don’t have to do this recovery procedure very often.

I can only assume same as yourself that the vhd-util coalesce using the 
downloaded vhd-util binary must give a slightly different format header which 
the import in CloudStack doesn’t like but XS is quite happy about. Also keep in 
mind that the vhd-util from 
http://download.cloud.com.s3.amazonaws.com/tools/vhd-util is slightly different 
than the one you find on the XenServers, so this may also be a factor.

Please let us know how you get on with your investigation – if you find the 
root cause let me know and I'll add it as a subnote to the original article 
(don't worry, you'll get the credit :-) ).

Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue

    On 22/02/2018, 00:10, "Grégoire Lamodière"  wrote:

Dear All,

I recently lost a Xen 7 pool managed by CS 4.11 (hardware issue).

I tried to apply Dag's excellent article 
(http://www.shapeblue.com/recovery-of-vms-to-new-cloudstack-instance/), and 
found a strange behavior :


-  If I directly export the VHD from old NFS primary storage 
(after coalesce vhd with vhd-util) and import to CS using import Template, they 
always fail, with 2 types of error (mainly Failed post download script: vhd 
remove 
/mnt/SecStorage/2da072e7-a6fe-3e39-b07e-77ec4f34fd49/template/tmpl/2/258/dnld5954086457185109807tmp_
 hidden failed).

-  If I use the same VHD, import-it to a xen server (xe 
vdi-import), and then export it (xe vdi-export), I can successfully import it 
to CS

I ran tests with Windows instances from 15 to 100 GB, always with the 
same result.
I also did the same on a CS 4.7, with the same results.

A quick look inside createtmplt.sh did not show anything relevant.
I guess the header / footer might have some differences, I'll check 
this tomorrow.

Is anyone aware of this ?
Is there any solution to avoid the intermediate vdi-import / export ?

I will dig a bit more to find a proper solution, as it sounds 
interesting to get a disaster migration script working fine.

    Regards.
    
---
Grégoire Lamodière
T/ + 33 6 76 27 03 31
F/ + 33 1 75 43 89 71







VHD import

2018-02-21 Thread Grégoire Lamodière
Dear All,

I recently lost a Xen 7 pool managed by CS 4.11 (hardware issue).

I tried to apply Dag's excellent article 
(http://www.shapeblue.com/recovery-of-vms-to-new-cloudstack-instance/), and 
found a strange behavior :


-  If I directly export the VHD from the old NFS primary storage (after 
coalescing the VHD with vhd-util) and import it to CS using import template, it 
always fails, with 2 types of error (mainly Failed post download script: vhd remove 
/mnt/SecStorage/2da072e7-a6fe-3e39-b07e-77ec4f34fd49/template/tmpl/2/258/dnld5954086457185109807tmp_
 hidden failed).

-  If I use the same VHD, import it to a Xen server (xe vdi-import), 
and then export it (xe vdi-export), I can successfully import it to CS.

I ran tests with Windows instances from 15 to 100 GB, always with the same 
result.
I also did the same on a CS 4.7, with the same results.

A quick look inside createtmplt.sh did not show anything relevant.
I guess the header / footer might have some differences, I'll check this 
tomorrow.

Is anyone aware of this ?
Is there any solution to avoid the intermediate vdi-import / export ?

I will dig a bit more to find a proper solution, as it would be nice to get a 
disaster migration script working fine.

Regards.

---
Grégoire Lamodière
T/ + 33 6 76 27 03 31
F/ + 33 1 75 43 89 71



RE: host KVM unable to find cloudbr0

2018-02-07 Thread Grégoire Lamodière
Hi Nicolas / Dag, 

Well done !

I think this has already been mentioned on this list, but maybe we should 
update the install doc with up-to-date information and advisories / tips, such 
as "if you use teaming, be sure to name the devices teamXXX", and probably a 
lot more.

Does anyone currently handle this work ?
If not, we will be more than happy to contribute.

Regards
---
Grégoire Lamodière
T/ + 33 6 76 27 03 31
F/ + 33 1 75 43 89 71

-Original Message-
From: Dag Sonstebo [mailto:dag.sonst...@shapeblue.com]
Sent: Tuesday, February 6, 2018 16:30
To: users@cloudstack.apache.org
Subject: Re: host KVM unable to find cloudbr0

Hi Nicolas,

Excellent, well done finding that - keep us in the loop on how you get on.

Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue

On 06/02/2018, 15:14, "Nicolas Bouige"  wrote:

Hello,


We finally found the solution.


    We checked the source code to see exactly how the network settings
    are detected by CloudStack.


##

    String[] _ifNamePatterns = {
        "^eth",
        "^bond",
        "^vlan",
        "^vx",
        "^em",
        "^ens",
        "^eno",
        "^enp",
        "^team",
        "^enx",
        "^p\\d+p\\d+"
    };

    /**
     * @param fname interface name to test
     * @return true if fname matches one of the known name patterns
     */
    boolean isInterface(final String fname) {
        StringBuffer commonPattern = new StringBuffer();
        for (final String ifNamePattern : _ifNamePatterns) {
            commonPattern.append("|(").append(ifNamePattern).append(".*)");
        }
        if (fname.matches(commonPattern.toString())) {
            return true;
        }
        return false;
    }



    As you can see, CloudStack only checks interface names against the list
    above, so if your device name doesn't match, detection fails.


    Our team device names were MGMT and TRUNK; we just added the team prefix:

MGMT --> teamMGMT

TRUNK --> teamTRUNK


    (the team prefix must be lowercase and come first)
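
    A hedged example of creating a matching team device with nmcli (connection
    and port names are illustrative):

      nmcli con add type team con-name teamMGMT ifname teamMGMT \
        config '{"runner": {"name": "activebackup"}}'
      nmcli con add type team-slave con-name teamMGMT-port1 ifname eth0 \
        master teamMGMT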


    Now the KVM host is up in the CloudStack GUI.


    So, for the moment, KVM works fine with teamed NICs configured with nmcli.


Best regards,
N.B



    From: Dag Sonstebo 
    Sent: Tuesday, February 6, 2018 13:55:28
    To: users@cloudstack.apache.org
    Subject: Re: host KVM unable to find cloudbr0

Hi Nicolas,

Yes I would do a double test with both bonding and teaming and see if the 
agent simply doesn’t like teaming at all.
You can obviously also change the agent logs to trace and see if that sheds 
more light on it.

With regards to naming convention I know this is a contested issue – we do 
the same as you and change it back to the legacy ethX naming convention to 
simplify our build scripts, but overall I would expect it to work with the new 
world naming convention.

Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue

On 06/02/2018, 12:47, "Nicolas Bouige"  wrote:

Dag,


    Okay, I got it; thanks a lot for the details and your help.
    As I'm stuck with the current configuration with nmcli, I'm going to try
    without it on another host and see if I have more success.


    Do you know if anyone has succeeded in setting up KVM networking with the
    new naming convention on CentOS 7 ? (ensX, enpX, etc.)


    I renamed the NICs to ethX but don't know if it was really necessary.


Best regards,


N.B

    From: Dag Sonstebo 
    Sent: Tuesday, February 6, 2018 12:40:19
    To: users@cloudstack.apache.org
    Subject: Re: host KVM unable to find cloudbr0

Hi Nicolas

These two settings are mutually exclusive – you are controlling your 
networking with NetworkManager (NM) through nmcli. My personal preference is to 
leave NM out of the equation and do all configuration manually (or with 
Ansible, Chef or whatever tool you choose) – hence I mark the different 
interfaces with "NM_CONTROLLED=no" to stop NM ever trying to interfere if 
someone starts the NM service up.

So – if you want to use nmcli then remove "NM_CONTROLLED=no" from your 
config files.

As I said – this is a personal preference only though – you will 
probably manage to get it to work with NM, I just find it too intrusive.

Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue

On 06/02/2018, 11:15, "Nico

RE: Not able to create the Secondary Storage and Console proxy Vms

2018-01-09 Thread Grégoire Lamodière
Can you please increase the log level to debug or trace for the relevant 
categories (org.apache.cloudstack, etc.) ?
The conf file is log4j-cloud.xml.
Then re-enable the zone to get fresh logs.
You might get more information which will help in understanding the issue.
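
A hedged sketch, assuming a package-based install where the file lives under
/etc/cloudstack/management (the sed is crude; you can also edit the category
levels by hand):

  sed -i 's/value="INFO"/value="DEBUG"/g' \
    /etc/cloudstack/management/log4j-cloud.xml
  systemctl restart cloudstack-management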

If you do not see anything, please post the updated logs.
I have no experience with CS / ESXi, but the debug process should be the same 
as with CS / Xen.

Grégoire

---
Grégoire Lamodière
T/ + 33 6 76 27 03 31
F/ + 33 1 75 43 89 71

-Original Message-
From: Dickson Lam (dilam) [mailto:di...@cisco.com]
Sent: Tuesday, January 9, 2018 23:40
To: users@cloudstack.apache.org
Subject: RE: Not able to create the Secondary Storage and Console proxy Vms

Hi Gregoire:

Thanks for reply.
The secondary storage is also located on the management server (CentOS 7), 
with NFS enabled. The primary storage is on the ESXi host.
Yes, the primary storage is available from the ESXi host.
I am not sure why the storage is not available. I can see both storages in the 
web UI's Infrastructure view.
Is there any way to debug why the storage is not available?

Thanks
Dickson

-Original Message-
From: Grégoire Lamodière [mailto:g.lamodi...@dimsi.fr] 
Sent: Tuesday, January 09, 2018 2:15 PM
To: users@cloudstack.apache.org
Subject: RE: Not able to create the Secondary Storage and Console proxy Vms

Hi Dickson, 

From what I can read in the logs, this is a problem related to storage:
"No storage pools available for shared volume allocation, returning"
This error occurs for cluster-wide and zone-wide pools.
How are your primary and secondary storage designed ?
Is your primary store available from ESX ?

Grégoire

---
Grégoire Lamodière
T/ + 33 6 76 27 03 31
F/ + 33 1 75 43 89 71


-Original Message-
From: Dickson Lam (dilam) [mailto:di...@cisco.com]
Sent: Tuesday, January 9, 2018 20:33
To: users@cloudstack.apache.org
Subject: RE: Not able to create the Secondary Storage and Console proxy Vms

Hi Glenn/anyone:

Can someone provide help on my problem?

Thanks
Dickson

-Original Message-
From: Dickson Lam (dilam)
Sent: Monday, December 18, 2017 1:57 PM
To: users@cloudstack.apache.org
Subject: RE: Not able to create the Secondary Storage and Console proxy Vms

Hi Glenn:

Do you have chance to look at the log file that I upload on pastebin?

Regards
Dickson

-Original Message-
From: Dickson Lam (dilam)
Sent: Friday, December 01, 2017 11:37 AM
To: users@cloudstack.apache.org
Subject: RE: Not able to create the Secondary Storage and Console proxy Vms

Hi Glenn:

The log file is too big. I could only paste the first 2300 lines of the log to 
pastebin. Hopefully, it will have enough information. If not, please let 
me know. The following is the link:
https://pastebin.com/twK6wnjr

The title is Create Secondary Storage Problem.

Thanks
Dickson

-Original Message-
From: Glenn Wagner [mailto:glenn.wag...@shapeblue.com]
Sent: Friday, December 01, 2017 9:04 AM
To: users@cloudstack.apache.org
Subject: RE: Not able to create the Secondary Storage and Console proxy Vms

Hi,

You can upload your logs to https://pastebin.com/ and then send us the link 
that gets generated once uploaded.

Regards
Glenn






-Original Message-
From: Dickson Lam (dilam) [mailto:di...@cisco.com]
Sent: Friday, 01 December 2017 5:00 PM
To: users@cloudstack.apache.org
Subject: RE: Not able to create the Secondary Storage and Console proxy Vms

Hi Glenn:

Thanks for your reply, but sorry, I am new; can you tell me how to upload the 
logs to pastebin? I looked at the 
http://mail-archives.apache.org/mod_mbox/cloudstack-users/ site and did not see 
anything that would allow me to upload a file.
Yes, I have run the following on the Management server to prepare for the 
system vm template:

/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt \
  -m /export/secondary \
  -u http://cloudstack.apt-get.eu/systemvm/4.6/systemvm64template-4.6.0-vmware.ova \
  -h vmware \
  -F

Yes, the reserved system IP range is on the 10.89.98.x subnet and the 
management server is on 10.89.118.x. These two subnets are routable and 
communicate with each other through layer 3.

Regards
Dickson

-Original Message-
From: Glenn Wagner [mailto:glenn.wag...@shapeblue.com]
Sent: Friday, December 01, 2017 6:14 AM
To: users@cloudstack.apache.org
Subject: RE: Not able to create the Secondary Storage and Console proxy Vms

Hi,

Could you upload your management server logs to pastebin so we can have a look?
Did you seed the system VM template before you started the cloudstack-management 
service?

To answer your question: the system VMs will use the reserved system IP 
addresses, the management server will need to communicate with the system VMs 
over SSH, and the CloudStack agent on the system VM will communicate with the 
management server on port 8250.

RE: Not able to create the Secondary Storage and Console proxy Vms

2018-01-09 Thread Grégoire Lamodière
Hi Dickson, 

From what I can read in the logs, this is a problem related to storage:
"No storage pools available for shared volume allocation, returning"
This error occurs for cluster-wide and zone-wide pools.
How are your primary and secondary storage designed ?
Is your primary store available from ESX ?

Grégoire

---
Grégoire Lamodière
T/ + 33 6 76 27 03 31
F/ + 33 1 75 43 89 71


-Original Message-
From: Dickson Lam (dilam) [mailto:di...@cisco.com]
Sent: Tuesday, January 9, 2018 20:33
To: users@cloudstack.apache.org
Subject: RE: Not able to create the Secondary Storage and Console proxy Vms

Hi Glenn/anyone:

Can someone provide help on my problem?

Thanks
Dickson

-Original Message-
From: Dickson Lam (dilam) 
Sent: Monday, December 18, 2017 1:57 PM
To: users@cloudstack.apache.org
Subject: RE: Not able to create the Secondary Storage and Console proxy Vms

Hi Glenn:

Do you have chance to look at the log file that I upload on pastebin?

Regards
Dickson

-Original Message-
From: Dickson Lam (dilam) 
Sent: Friday, December 01, 2017 11:37 AM
To: users@cloudstack.apache.org
Subject: RE: Not able to create the Secondary Storage and Console proxy Vms

Hi Glenn:

The log file is too big. I could only paste the first 2300 lines of the log to 
pastebin. Hopefully, it will have enough information. If not, please let 
me know. The following is the link:
https://pastebin.com/twK6wnjr

The title is Create Secondary Storage Problem.

Thanks
Dickson

-Original Message-
From: Glenn Wagner [mailto:glenn.wag...@shapeblue.com] 
Sent: Friday, December 01, 2017 9:04 AM
To: users@cloudstack.apache.org
Subject: RE: Not able to create the Secondary Storage and Console proxy Vms

Hi,

You can upload your logs to https://pastebin.com/ and then send us the link 
that gets generated once uploaded.

Regards
Glenn






-Original Message-
From: Dickson Lam (dilam) [mailto:di...@cisco.com]
Sent: Friday, 01 December 2017 5:00 PM
To: users@cloudstack.apache.org
Subject: RE: Not able to create the Secondary Storage and Console proxy Vms

Hi Glenn:

Thanks for your reply, but sorry, I am new; can you tell me how to upload the 
logs to pastebin? I looked at the 
http://mail-archives.apache.org/mod_mbox/cloudstack-users/ site and did not see 
anything that would allow me to upload a file.
Yes, I have run the following on the Management server to prepare for the 
system vm template:

/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt \
  -m /export/secondary \
  -u http://cloudstack.apt-get.eu/systemvm/4.6/systemvm64template-4.6.0-vmware.ova \
  -h vmware \
  -F

Yes, the reserved system IP range is on the 10.89.98.x subnet and the 
management server is on 10.89.118.x. These two subnets are routable and 
communicate with each other through layer 3.

Regards
Dickson

-Original Message-
From: Glenn Wagner [mailto:glenn.wag...@shapeblue.com]
Sent: Friday, December 01, 2017 6:14 AM
To: users@cloudstack.apache.org
Subject: RE: Not able to create the Secondary Storage and Console proxy Vms

Hi,

Could you upload your management server logs to pastebin so we can have a look?
Did you seed the system VM template before you started the cloudstack-management 
service?

To answer your question: the system VMs will use the reserved system IP 
addresses, the management server will need to communicate with the system VMs 
over SSH, and the CloudStack agent on the system VM will communicate with the 
management server on port 8250.

Regards
Glenn





-Original Message-
From: Dickson Lam (dilam) [mailto:di...@cisco.com]
Sent: Thursday, 30 November 2017 6:23 PM
To: users@cloudstack.apache.org
Subject: Not able to create the Secondary Storage and Console proxy Vms

Hi all:

I am new here and need some help setting up CloudStack 4.9 to manage VMware 5.5 
ESXi hosts on a vCenter for a demo. I got the management server up and running, 
but the Secondary Storage VM and Console Proxy VM fail to be created. The 
following are the errors:

Secondary Storage Vm creation failure. zone: Zone-Orion, error details: null 
Console proxy creation failure. zone: Zone-Orion, error details: null

I have installed the CloudStack 4.9 management server on a CentOS 7 VM. The 
setup includes one ESXi host with a VM datastore. The NFS-mounted secondary 
storage has 200 GB of disk space on the management server.
The ESXi host is on VLAN 98 with IP 10.89.98.144, and it is located at the 
OrionTest datacenter. The management server VM is on VLAN 118 with IP address 
10.89.118.109, which is in another datacenter.

I have run the following on the management server to prepare the system VM 
template:

/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt \
  -m /export/secondary \
  -u http://cloudstack.apt-get.eu/systemvm/4.6/systemvm64template-4.6.0-vmware.ova \
  -h vmware \
  -F

Re: KVM storage cluster

2018-01-07 Thread Grégoire Lamodière
Hi Vahric,

Thank you. I will have a look on it.

Grégoire



Sent from my Samsung Galaxy smartphone.


 Original Message 
From: Vahric MUHTARYAN 
Date: 07/01/2018 21:08 (GMT+01:00)
To: users@cloudstack.apache.org
Subject: Re: KVM storage cluster

Hello Grégoire,

I suggest you look at EMC ScaleIO for block-based operations. It has a free 
version too! And as block storage it works better than Ceph ;)

Regards
VM

On 7.01.2018 18:12, "Grégoire Lamodière"  wrote:

Hi Ivan,

Thank you for your quick reply.

    I'll have a look at Ceph and the related performance.
    As you mentioned, 2 DRBD NFS servers can do the job, but if I can avoid
    using 2 blades just for passing blocks to NFS, this is even better (and
    avoids maintaining them as well).

Thanks for pointing to ceph.

Grégoire




    ---
    Grégoire Lamodière
T/ + 33 6 76 27 03 31
F/ + 33 1 75 43 89 71

    -Original Message-
    From: Ivan Kudryavtsev [mailto:kudryavtsev...@bw-sw.com]
    Sent: Sunday, January 7, 2018 15:20
    To: users@cloudstack.apache.org
    Subject: Re: KVM storage cluster

Hi, Grégoire,
    You could have:
    - local storage if you like, so every compute node has its own space
    (one LUN per host)
    - Ceph deployed on the same compute nodes (distribute raw devices
    among the nodes)
    - a dedicated node as an NFS server (or two servers with DRBD)

    I don't think a shared FS is a good option; even clustered LVM is a big
    pain.

    2018-01-07 21:08 GMT+07:00 Grégoire Lamodière :

> Dear all,
>
> Since Citrix deeply changed the free version of XenServer 7.3, I am in
> the process of PoCing a move of our Xen clusters to KVM on CentOS 7. I
> decided to use HP blades connected to an HP P2000 over multipath SAS links.
>
> The network part seems fine to me, not so far from what we used to do
> with Xen.
> About the storage, I am a little bit confused about the shared
> mountpoint storage option offered by CS.
>
> What would be the good option, in terms of CS, to create a cluster FS
> using my SAS array ?
> I read somewhere (a Dag SlideShare, I think) that GFS2 is the only
> clustered FS supported by CS. Is that still correct ?
> Does it mean I have to create the GFS2 cluster, make an identical mount
> conf on all hosts, and use it in CS as NFS ?
> I do not have to add the storage to KVM prior to CS zone creation, do I ?
>
> Thanks a lot for any help / information.
>
> ---
> Grégoire Lamodière
> T/ + 33 6 76 27 03 31
> F/ + 33 1 75 43 89 71
>
>


--
With best regards, Ivan Kudryavtsev
Bitworks Software, Ltd.
Cell: +7-923-414-1515
WWW: http://bitworks.software/ <http://bw-sw.com/>





RE: KVM storage cluster

2018-01-07 Thread Grégoire Lamodière
Hi Ivan, 

Thank you for your quick reply.

I'll have a look at Ceph and the related performance.
As you mentioned, 2 DRBD NFS servers can do the job, but if I can avoid using 
2 blades just for passing blocks to NFS, this is even better (and avoids 
maintaining them as well).

Thanks for pointing me to Ceph.

Grégoire




---
Grégoire Lamodière
T/ + 33 6 76 27 03 31
F/ + 33 1 75 43 89 71

-Original Message-
From: Ivan Kudryavtsev [mailto:kudryavtsev...@bw-sw.com]
Sent: Sunday, January 7, 2018 15:20
To: users@cloudstack.apache.org
Subject: Re: KVM storage cluster

Hi, Grégoire,
You could have:
- local storage if you like, so every compute node has its own space (one 
LUN per host)
- Ceph deployed on the same compute nodes (distribute raw devices among the 
nodes)
- a dedicated node as an NFS server (or two servers with DRBD)

I don't think a shared FS is a good option; even clustered LVM is a big pain.

2018-01-07 21:08 GMT+07:00 Grégoire Lamodière :

> Dear all,
>
> Since Citrix deeply changed the free version of XenServer 7.3, I am in
> the process of PoCing a move of our Xen clusters to KVM on CentOS 7. I
> decided to use HP blades connected to an HP P2000 over multipath SAS links.
>
> The network part seems fine to me, not so far from what we used to do
> with Xen.
> About the storage, I am a little bit confused about the shared
> mountpoint storage option offered by CS.
>
> What would be the good option, in terms of CS, to create a cluster FS
> using my SAS array ?
> I read somewhere (a Dag SlideShare, I think) that GFS2 is the only
> clustered FS supported by CS. Is that still correct ?
> Does it mean I have to create the GFS2 cluster, make an identical mount
> conf on all hosts, and use it in CS as NFS ?
> I do not have to add the storage to KVM prior to CS zone creation, do I ?
>
> Thanks a lot for any help / information.
>
> ---
> Grégoire Lamodière
> T/ + 33 6 76 27 03 31
> F/ + 33 1 75 43 89 71
>
>


--
With best regards, Ivan Kudryavtsev
Bitworks Software, Ltd.
Cell: +7-923-414-1515
WWW: http://bitworks.software/ <http://bw-sw.com/>


KVM storage cluster

2018-01-07 Thread Grégoire Lamodière
Dear all,

Since Citrix deeply changed the free version of XenServer 7.3, I am in the 
process of PoCing a move of our Xen clusters to KVM on CentOS 7.
I decided to use HP blades connected to an HP P2000 over multipath SAS links.

The network part seems fine to me, not so far from what we used to do with Xen.
About the storage, I am a little bit confused about the shared mountpoint 
storage option offered by CS.

What would be the good option, in terms of CS, to create a cluster FS using my 
SAS array ?
I read somewhere (a Dag SlideShare, I think) that GFS2 is the only clustered FS 
supported by CS. Is that still correct ?
Does it mean I have to create the GFS2 cluster, make an identical mount conf on 
all hosts, and use it in CS as NFS ?
I do not have to add the storage to KVM prior to CS zone creation, do I ?
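
For what it's worth, a hedged sketch of how a cluster FS is typically consumed
by CloudStack on KVM (device and mount point are illustrative):

  # Mount the cluster FS (e.g. GFS2) at the same path on every KVM host
  mount -t gfs2 /dev/mapper/san_vg-cs_lv /mnt/primary
  # Then add primary storage in CS with:
  #   Protocol: SharedMountPoint   Path: /mnt/primary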

Thanks a lot for any help / information.

---
Grégoire Lamodière
T/ + 33 6 76 27 03 31
F/ + 33 1 75 43 89 71



RE: [ANNOUNCE][CLOUDSTACK] Apache CloudStack 4.9.3.0 (LTS)

2017-09-13 Thread Grégoire Lamodière
Hi Rohit, 

Thank you for your answer.
My tests were done without upgrading the systemvm templates, and I have a few 
more results:

1/ It fails for both old (created on 4.9.2) and new (created on 4.9.3) VPCs
2/ It fails for old (4.9.2) and new (4.9.3) instances
3/ As long as the virtual router (created on 4.9.2) is not destroyed, static 
NAT / port forwarding keeps working
4/ Once a VR is re-created (or freshly created on 4.9.3), it stops working

From what I can see, if neither static NAT nor port forwarding is set for an 
instance, the instance can access the Internet using the source-NAT interface.
Once you assign a port forward / static NAT to an instance, you can no longer 
access the instance from outside, and the instance itself can no longer access 
the Internet (but it is able to ping the gateway).

On the VR, the 3 interfaces are properly configured, and from within the VR 
there is no problem pinging the Internet.
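
A hedged way to compare a working VR with a broken one (the public IP is a
placeholder):

  # On the VPC virtual router, list the NAT rules for the affected public IP
  iptables -t nat -L -n -v | grep 203.0.113.10
  # And the filter chains the ACLs are applied to
  iptables -L -n -v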

Is anybody else able to reproduce this issue ?

---
Grégoire Lamodière

-Original Message-
From: Rohit Yadav [mailto:rohit.ya...@shapeblue.com]
Sent: Wednesday, September 13, 2017 09:53
To: users@cloudstack.apache.org
Subject: Re: [ANNOUNCE][CLOUDSTACK] Apache CloudStack 4.9.3.0 (LTS)

Hi Gregoire,


For 4.9, no systemvmtemplate upgrade is necessary. Please re-perform your tests 
without upgrading systemvmtemplate and let us know what failed and what steps 
were performed, thanks.


Regards.


From: Grégoire Lamodière 
Sent: Wednesday, September 13, 2017 3:27:29 AM
To: users@cloudstack.apache.org
Subject: RE: [ANNOUNCE][CLOUDSTACK] Apache CloudStack 4.9.3.0 (LTS)

Hi All,

Great job, thanks for this release.
I've done basic testing today upgrading 4.9.2 to 4.9.3.
Except for VPC, my tests were all successful.

Is there a systemvm template upgrade to perform ? (I only found new templates 
for 4.10).

The main bug I found is related to iptables rules that stop allowing / 
forwarding traffic for instances inside a VPC.
The use case is :

- Upgrade mgmt from 4.9.2 to 4.9.3
- Check instance connectivity (inside VPC using PAT or static NAT) => OK
- Restart the VPC with the clean option (or destroy the router and let mgmt 
create a new one)
- Check instance connectivity (still both PAT / SNAT) => KO
- Try to assign another ACL (to be sure not the same as 
https://issues.apache.org/jira/browse/CLOUDSTACK-9189) => still KO

If no systemvm template upgrade is required, I'll dig into this case tomorrow, 
as it sounds like a big blocker for a production upgrade.

Cheers.
---
Grégoire Lamodière



-Original Message-
From: Rohit Yadav [mailto:bhais...@apache.org]
Sent: Tuesday, September 12, 2017 08:40
To: d...@cloudstack.apache.org; users@cloudstack.apache.org
Subject: [ANNOUNCE][CLOUDSTACK] Apache CloudStack 4.9.3.0 (LTS)

# Apache CloudStack LTS Maintenance Release 4.9.3.0

The Apache CloudStack project is pleased to announce the release of CloudStack 
4.9.3.0 as part of its LTS 4.9.x releases. The CloudStack
4.9.3.0 release contains more than 180 fixes since the CloudStack 4.9.2.0 
release. CloudStack LTS branches are supported for 20 months: they receive 
updates for the first 14 months and only security updates in the last 6 months. 
The 4.9 LTS branch is supported until 1 June 2018.

Apache CloudStack is an integrated Infrastructure-as-a-Service (IaaS) software 
platform that allows users to build feature-rich public and private cloud 
environments. CloudStack includes an intuitive user interface and rich API for 
managing the compute, networking, software, and storage resources. The project 
became an Apache top level project in March, 2013.

More information about Apache CloudStack can be found at:
http://cloudstack.apache.org/

# Documentation

What's new in CloudStack 4.9.3.0:
http://docs.cloudstack.apache.org/projects/cloudstack-release-notes/en/4.9.3.0/about.html

The 4.9.3.0 release notes include a full list of issues fixed, as well as 
upgrade instructions from previous versions of Apache CloudStack, and can be 
found at:
http://docs.cloudstack.apache.org/projects/cloudstack-release-notes/en/4.9.3.0

The official installation, administration and API documentation for each of the 
releases are available on our documentation page:
http://docs.cloudstack.apache.org/

# Downloads

The official source code for the 4.9.3.0 release can be downloaded from our 
downloads page:
http://cloudstack.apache.org/downloads.html

In addition to the official source code release, individual contributors have 
also made convenience binaries available on the Apache CloudStack download 
page, and can be found at:

http://www.shapeblue.com/packages/
http://cloudstack.apt-get.eu/ubuntu/dists/ (packages to be published soon) 
http://cloudstack.apt-get.eu/centos/6/ (packages to be published soon) 
http://cloudstack.apt-get.eu/centos/7/ (packages to be published soon)

Regards,
Rohit Yadav


RE: [ANNOUNCE][CLOUDSTACK] Apache CloudStack 4.9.3.0 (LTS)

2017-09-12 Thread Grégoire Lamodière
Hi All, 

Great job, thanks for this release.
I've done basic testing today upgrading 4.9.2 to 4.9.3.
Except for VPC, my tests were all successful.

Is there a systemvm template upgrade to perform ? (I only found new templates 
for 4.10).

The main bug I found is related to iptables rules that stop allowing / 
forwarding traffic for instances inside a VPC.
The use case is : 

- Upgrade mgmt from 4.9.2 to 4.9.3
- Check instance connectivity (inside VPC using PAT or static NAT) => OK
- Restart the VPC with the clean option (or destroy the router and let mgmt 
create a new one)
- Check instance connectivity (still both PAT / SNAT) => KO
- Try to assign another ACL (to be sure not the same as 
https://issues.apache.org/jira/browse/CLOUDSTACK-9189) => still KO

If no systemvm template upgrade is required, I'll dig into this case tomorrow, 
as it sounds like a big blocker for a production upgrade.

Cheers.
---
Grégoire Lamodière

-Original Message-
From: Rohit Yadav [mailto:bhais...@apache.org]
Sent: Tuesday, September 12, 2017 08:40
To: d...@cloudstack.apache.org; users@cloudstack.apache.org
Subject: [ANNOUNCE][CLOUDSTACK] Apache CloudStack 4.9.3.0 (LTS)

# Apache CloudStack LTS Maintenance Release 4.9.3.0

The Apache CloudStack project is pleased to announce the release of CloudStack 
4.9.3.0 as part of its LTS 4.9.x releases. The CloudStack
4.9.3.0 release contains more than 180 fixes since the CloudStack 4.9.2.0 
release. CloudStack LTS branches are supported for 20 months: they receive 
updates for the first 14 months and only security updates in the last 6 months. 
The 4.9 LTS branch is supported until 1 June 2018.

Apache CloudStack is an integrated Infrastructure-as-a-Service (IaaS) software 
platform that allows users to build feature-rich public and private cloud 
environments. CloudStack includes an intuitive user interface and rich API for 
managing the compute, networking, software, and storage resources. The project 
became an Apache top level project in March, 2013.

More information about Apache CloudStack can be found at:
http://cloudstack.apache.org/

# Documentation

What's new in CloudStack 4.9.3.0:
http://docs.cloudstack.apache.org/projects/cloudstack-release-notes/en/4.9.3.0/about.html

The 4.9.3.0 release notes include a full list of issues fixed, as well as 
upgrade instructions from previous versions of Apache CloudStack, and can be 
found at:
http://docs.cloudstack.apache.org/projects/cloudstack-release-notes/en/4.9.3.0

The official installation, administration and API documentation for each of the 
releases are available on our documentation page:
http://docs.cloudstack.apache.org/

# Downloads

The official source code for the 4.9.3.0 release can be downloaded from our 
downloads page:
http://cloudstack.apache.org/downloads.html

In addition to the official source code release, individual contributors have 
also made convenience binaries available on the Apache CloudStack download 
page, and can be found at:

http://www.shapeblue.com/packages/
http://cloudstack.apt-get.eu/ubuntu/dists/ (packages to be published soon) 
http://cloudstack.apt-get.eu/centos/6/ (packages to be published soon) 
http://cloudstack.apt-get.eu/centos/7/ (packages to be published soon)

Regards,
Rohit Yadav


RE: Container Service

2017-07-25 Thread Grégoire Lamodière
Hi Dag, 

Ok, now I understand why my mgmt service goes down when I install the CCS package :)

Do you need any contributions on CCS ? We can help if you need documentation, 
tests, etc.

Grégoire

---
Grégoire Lamodière
T/ + 33 6 76 27 03 31
F/ + 33 1 75 43 89 71


-Original Message-
From: Dag Sonstebo [mailto:dag.sonst...@shapeblue.com]
Sent: Tuesday, July 25, 2017 11:05
To: users@cloudstack.apache.org
Subject: Re: Container Service

Hi Grégoire,

CCS is 4.6 only at this point – but we are working on a 4.10 version.

Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue

On 25/07/2017, 08:31, "Grégoire Lamodière"  wrote:

Hi Simon, 

Thanks a lot, I'll have a look.
    Have you implemented CCS on 4.9.2 ?

    I'll give it a try before we start production on the new zone.

Grégoire
    
---
Grégoire Lamodière
T/ + 33 6 76 27 03 31
F/ + 33 1 75 43 89 71


    -Original Message-
    From: Simon Weller [mailto:swel...@ena.com.INVALID]
    Sent: Monday, July 24, 2017 23:10
    To: users@cloudstack.apache.org
    Subject: Re: Container Service

Grégoire,


Take a look at the URLs below:


Code and Docs: https://github.com/shapeblue/ccs


Packages: http://packages.shapeblue.com/ccs/

- Si

____
From: Grégoire Lamodière 
Sent: Monday, July 24, 2017 2:36 PM
To: users@cloudstack.apache.org
Subject: Container Service

Dear All,

    Does anyone know the current status of the Container Service ?
    I remember Gilles talking about this in Berlin last year, but all the links
    seem down (except the homepage of the module).
    I cannot find an install guide, any technical docs, nor packages.

    I would really like to give this a try since we are now almost running
    on 4.9.2.

Cheers.

---
Grégoire Lamodière
T/ + 33 6 76 27 03 31
F/ + 33 1 75 43 89 71







RE: Container Service

2017-07-25 Thread Grégoire Lamodière
Hi Simon, 

Thanks a lot, I'll have a look.
Have you implemented CCS on 4.9.2 ? 

I'll give it a try before we start production on the new zone.

Grégoire

---
Grégoire Lamodière
T/ + 33 6 76 27 03 31
F/ + 33 1 75 43 89 71


-Original Message-
From: Simon Weller [mailto:swel...@ena.com.INVALID]
Sent: Monday, July 24, 2017 23:10
To: users@cloudstack.apache.org
Subject: Re: Container Service

Grégoire,


Take a look at the URLs below:


Code and Docs: https://github.com/shapeblue/ccs


Packages: http://packages.shapeblue.com/ccs/

- Si


From: Grégoire Lamodière 
Sent: Monday, July 24, 2017 2:36 PM
To: users@cloudstack.apache.org
Subject: Container Service

Dear All,

Does anyone know the current status of the Container Service?
I remember Gilles talking about this in Berlin last year, but all links seem 
to be down (except the homepage of the module).
I cannot find an install guide, any technical docs, or packages.

I would really like to give this a try since we are now almost running 
on 4.9.2.

Cheers.

---
Grégoire Lamodière
T/ + 33 6 76 27 03 31
F/ + 33 1 75 43 89 71



Container Service

2017-07-24 Thread Grégoire Lamodière
Dear All,

Does anyone know the current status of the Container Service?
I remember Gilles talking about this in Berlin last year, but all links seem 
to be down (except the homepage of the module).
I cannot find an install guide, any technical docs, or packages.

I would really like to give this a try since we are now almost running 
on 4.9.2.

Cheers.

---
Grégoire Lamodière
T/ + 33 6 76 27 03 31
F/ + 33 1 75 43 89 71



RE: Network architecture

2017-07-07 Thread Grégoire Lamodière
Hi Rubens, 

Thank you for your feedback.
Right now, we are not so happy with Xen in terms of stability, upgrade process 
and HA.

Moving to KVM is an important decision for us, as it means big changes to our 
daily operations, but if it improves stability and performance, then we'll do it.

Does anyone have any feedback on instance backup using KVM? 
In the Xen world, we had many options to perform live and incremental backups 
(backup solutions such as PHD, XenOrchestra, scripts using snapshots, etc.).

About the snapshots, is the freeze behavior expected? Does it mean that each 
user performing a snapshot will get his instance frozen during the snapshot? If 
so, this is a huge issue, isn't it? 

Thanks all.

---
Grégoire Lamodière
T/ + 33 6 76 27 03 31
F/ + 33 1 75 43 89 71


-Original Message-
From: Rubens Malheiro [mailto:rubens.malhe...@gmail.com] 
Sent: Thursday, 6 July 2017 17:10
To: users@cloudstack.apache.org
Subject: Re: Network architecture

I'll give you an opinion; excuse my English, I use Translate.

I recently moved a whole pod with 6 Xen machines to KVM. I'd say it has been 
much quieter and seems to be more stable for both Windows and Linux VMs.

But it is necessary to convert the machines from VHD to qcow2 before deploying.

Works well.
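
For anyone scripting that conversion, a minimal sketch, assuming qemu-img is 
installed on the KVM side (the paths below are illustrative, not from this 
thread):

    # Hedged sketch: convert an exported XenServer VHD to qcow2 for KVM.
    # "vpc" is qemu's name for the VHD format; paths are placeholders.
    import subprocess

    src = '/exports/myvm.vhd'                   # VHD exported from XenServer
    dst = '/var/lib/libvirt/images/myvm.qcow2'  # qcow2 image for KVM

    # -p prints progress; check=True raises if qemu-img fails.
    subprocess.run(['qemu-img', 'convert', '-p',
                    '-f', 'vpc', '-O', 'qcow2', src, dst], check=True)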

What is really bad are the snapshots, which can be enabled in CLOUDSTACK, but 
they take time and the VM is frozen.

I had to migrate off XEN because no version recognizes my new 10Gb cards.

Sorry for my English; this is more of an opinion.

On Wed, Jul 5, 2017 at 7:36 PM, Grégoire Lamodière 
wrote:

> Dear Paul / Remi,
>
> Thank you for your feedback and the bounding advice.
> We'll go on this direction.
>
> @Remi, you are right about KVM.
> Right now, we still use XenServer because Snapshots and backup solutions.
> If KVM does the job properly, we might make a try on this new zone.
> Do you have any feedback migrating instances from a xenserver zone to 
> a kvm zone ? (should we only un-install xentools, export vm as 
> template and download in the new zone ? Or is it a more complexe 
> process  ?)
>
> Thanks again.
>
> ---
> Grégoire Lamodière
> T/ + 33 6 76 27 03 31
> F/ + 33 1 75 43 89 71
>
> -Original Message-
> From: Paul Angus [mailto:paul.an...@shapeblue.com] Sent: Wednesday, 5 
> July 2017 21:05 To: users@cloudstack.apache.org Subject: RE: Network 
> architecture
>
> Hi Grégoire,
>
> With those NICs (and without any other background).  I'd go with 
> bonding your 1G NICs together and your 10G NICs together, put primary 
> and secondary storage over the 10G.  Mgmt traffic is minimal and 
> spread over all of your hosts, as is public traffic, so these 
> would be fine over the bonded 1Gbs links.  Finally guest traffic, this 
> would normally be fine over the 1Gb links, especially if you throttle 
> the traffic a little, unless you know that you'll have especially high guest 
> traffic.
>
>
>
> Kind regards,
>
> Paul Angus
>
> paul.an...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK @shapeblue
>
>
>
>
> -Original Message-
> From: Grégoire Lamodière [mailto:g.lamodi...@dimsi.fr]
> Sent: 04 July 2017 21:15
> To: users@cloudstack.apache.org
> Subject: Network architecture
>
> Dear All,
>
> In the process of implementing a new CS advanced zone (4.9.2), I am 
> wondering about the best network architecture to implement.
> Any idea / advice would be highly appreciated.
>
> 1/ Each host has 4 network adapters, 2 x 1 Gbe, 2 x 10 Gbe
> 2/ The PR Store is nfs based 10 Gbe
> 3/ The sec Store is nfs based 10 Gbe
> 4/ Maximum network offering is 1 Gbit to Internet
> 5/ Hypervisor Xen 7
> 6/ Hardware Hp Blade c7000
>
> Right now, my choice would be :
>
> 1/ Bond the 2 gigabit network cards and use the bond for mgmt + public
> 2/ Use 1 10Gbe for storage network (operations on sec Store)
> 3/ Use 1 10 Gbe for guest traffic (and pr store traffic by design)
>
> This architecture sounds good in terms of performance (using 10 Gbe 
> where it makes sense, redundancy on mgmt + public with the bond).
>
> Another option would be to bond the 2 10 Gbe interfaces, and use a Xen 
> label to manage storage and guest on the same physical network. This 
> choice would give us failover on storage and guest traffic, but I am 
> wondering if performance would be badly affected.
>
> Do you have any feedback on this ?
>
> Thanks all.
>
> Best Regards.
>
> ---
> Grégoire Lamodière
> T/ + 33 6 76 27 03 31
> F/ + 33 1 75 43 89 71
>
>
>


RE: Network architecture

2017-07-05 Thread Grégoire Lamodière
Dear Paul / Remi, 

Thank you for your feedback and the bonding advice.
We'll go in this direction.

@Remi, you are right about KVM.
Right now, we still use XenServer because of snapshots and backup solutions.
If KVM does the job properly, we might give it a try in this new zone.
Do you have any feedback on migrating instances from a xenserver zone to a kvm 
zone? (should we just uninstall xentools, export the VM as a template and 
download it in the new zone? Or is it a more complex process?)

Thanks again.

---
Grégoire Lamodière
T/ + 33 6 76 27 03 31
F/ + 33 1 75 43 89 71

-Original Message-
From: Paul Angus [mailto:paul.an...@shapeblue.com] 
Sent: Wednesday, 5 July 2017 21:05
To: users@cloudstack.apache.org
Subject: RE: Network architecture

Hi Grégoire,

With those NICs (and without any other background).  I'd go with bonding your 
1G NICs together and your 10G NICs together, put primary and secondary storage 
over the 10G.  Mgmt traffic is minimal and spread over all of your hosts, as 
is public traffic, so these would be fine over the bonded 1Gb links.  
Finally guest traffic, this would normally be fine over the 1Gb links, 
especially if you throttle the traffic a little, unless you know that you'll 
have especially high guest traffic.
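
If you script the host preparation, the bond itself can be created by driving 
the xe CLI; a rough sketch, assuming Python on the XenServer host, with 
placeholder names and UUIDs (look the PIF UUIDs up with xe pif-list first):

    # Hedged sketch: bond two 1G PIFs into one XenServer network.
    # The network name and PIF UUIDs below are placeholders.
    import subprocess

    def xe(*args):
        # Run an xe command and return its trimmed stdout.
        out = subprocess.run(['xe', *args], check=True,
                             capture_output=True, text=True)
        return out.stdout.strip()

    # Create an empty network to carry the bond, then bond the two PIFs.
    net_uuid = xe('network-create', 'name-label=bond-mgmt-public')
    xe('bond-create', 'network-uuid=' + net_uuid,
       'pif-uuids=UUID-OF-ETH0,UUID-OF-ETH1')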



Kind regards,

Paul Angus

paul.an...@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK @shapeblue


-Original Message-
From: Grégoire Lamodière [mailto:g.lamodi...@dimsi.fr]
Sent: 04 July 2017 21:15
To: users@cloudstack.apache.org
Subject: Network architecture

Dear All,

In the process of implementing a new CS advanced zone (4.9.2), I am wondering 
about the best network architecture to implement.
Any idea / advice would be highly appreciated.

1/ Each host has 4 network adapters, 2 x 1 Gbe, 2 x 10 Gbe
2/ The PR Store is nfs based 10 Gbe
3/ The sec Store is nfs based 10 Gbe
4/ Maximum network offering is 1 Gbit to Internet
5/ Hypervisor Xen 7
6/ Hardware Hp Blade c7000

Right now, my choice would be :

1/ Bond the 2 gigabit network cards and use the bond for mgmt + public
2/ Use 1 10Gbe for storage network (operations on sec Store)
3/ Use 1 10 Gbe for guest traffic (and pr store traffic by design)

This architecture sounds good in terms of performance (using 10 Gbe where it 
makes sense, redundancy on mgmt + public with the bond).

Another option would be to bond the 2 10 Gbe interfaces, and use a Xen label to 
manage storage and guest on the same physical network. This choice would give 
us failover on storage and guest traffic, but I am wondering if performance 
would be badly affected.

Do you have any feedback on this ?

Thanks all.

Best Regards.

---
Grégoire Lamodière
T/ + 33 6 76 27 03 31
F/ + 33 1 75 43 89 71




Network architecture

2017-07-04 Thread Grégoire Lamodière
Dear All,

In the process of implementing a new CS advanced zone (4.9.2), I am wondering 
about the best network architecture to implement.
Any idea / advice would be highly appreciated.

1/ Each host has 4 network adapters, 2 x 1 Gbe, 2 x 10 Gbe
2/ The PR Store is nfs based 10 Gbe
3/ The sec Store is nfs based 10 Gbe
4/ Maximum network offering is 1 Gbit to Internet
5/ Hypervisor Xen 7
6/ Hardware Hp Blade c7000

Right now, my choice would be :

1/ Bond the 2 gigabit network cards and use the bond for mgmt + public
2/ Use 1 10Gbe for storage network (operations on sec Store)
3/ Use 1 10 Gbe for guest traffic (and pr store traffic by design)

This architecture sounds good in terms of performance (using 10 Gbe where it 
makes sense, redundancy on mgmt + public with the bond).

Another option would be to bond the 2 10 Gbe interfaces, and use a Xen label to 
manage storage and guest on the same physical network. This choice would give 
us failover on storage and guest traffic, but I am wondering if performance 
would be badly affected.

Do you have any feedback on this ?

Thanks all.

Best Regards.

---
Grégoire Lamodière
T/ + 33 6 76 27 03 31
F/ + 33 1 75 43 89 71



RE: Billing for services

2016-11-25 Thread Grégoire Lamodière
Dear Audrey, 

Here is the man page of the usage service:
http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/4.8/usage.html

You have 2 settings that might help you (see also the sketch after this list):

1/ usage.stats.job.exec.time
2/ usage.stats.job.aggregation.range
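
If you do not want to wait for the next scheduled run while testing, there is 
also a generateUsageRecords API call that asks the usage engine to aggregate a 
date range immediately. A minimal sketch, assuming the third-party "cs" Python 
client (pip install cs); endpoint, keys and dates are placeholders:

    # Hedged sketch: force an immediate usage aggregation run via the API.
    from cs import CloudStack

    api = CloudStack(endpoint='http://mgmt.example.com:8080/client/api',
                     key='API_KEY', secret='SECRET_KEY')

    # Dates are illustrative; the expected format is yyyy-MM-dd.
    result = api.generateUsageRecords(startdate='2016-11-01',
                                      enddate='2016-11-25')
    print(result)  # a success flag once the job has been scheduled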

Best Regards.

---
Grégoire Lamodière

-Original Message-
From: Audrey Roberto B Baldin [mailto:audrey.bal...@unitelco.com.br] 
Sent: Friday, 25 November 2016 13:57
To: users 
Subject: Re: Billing for services

I'm looking into the cloud_usage database, but it appears that some information 
is missing, as if it is not synced with the production system.

I looked into the accounts table, for example, and can't find all the accounts 
that exist in the same table in the cloud database.

Is there any way to sync the data?



Regards,
Audrey

- Original Message -
From: "Grégoire Lamodière" 
To: "users" 
Sent: Thursday, 24 November 2016 19:59:29
Subject: RE: Billing for services

Dear Audrey, 

The starting point would be to install and start the CloudStack usage service 
(I think this step is well documented inside the CS installation guide).
Basically, this service will collect and aggregate usage for most CS metrics 
(running instances, storage, network, etc.).
These metrics are then stored inside a MySQL DB (cloud_usage).

You can easily export these values directly from Mysql to Excel 
(http://support.en.ctx.org.cn/ctx132030.citrix).

The second option (recommended) is to use the CloudStack API to get usage data 
and manage it with third-party apps.

I personally tested Amysta a few months ago, and it was pretty good.

Best Regards.
---
Grégoire Lamodière

-Original Message-
From: Audrey Roberto B Baldin [mailto:audrey.bal...@unitelco.com.br] 
Sent: Thursday, 24 November 2016 15:02
To: users 
Subject: Re: Billing for services

Dag, thanks for your return.

I'd looked at the usage service, but could not figure out how to work with it 
yet. I've been reading some presentations from ShapeBlue about it, but still 
couldn't understand how to use it.

But I'm not sure, for example, if the usage service can show bandwidth usage 
from each interface of the VPC. How can I bill for internet usage, for example, 
if I can't measure only the internet traffic a user is consuming from the 
public network?


Regards,
Audrey

- Original Message -
From: "Dag Sonstebo" 
To: "users" 
Sent: Thursday, 24 November 2016 11:35:22
Subject: Re: Billing for services

Hi Audrey,

This is basically what the CloudStack usage service is for - suggest you 
install this and take a look at the data this provides.

Regards, 
Dag Sonstebo
Cloud Architect
ShapeBlue








On 24/11/2016, 13:14, "Audrey Roberto B Baldin"  
wrote:

>Hi there, 
>
>How can we get billing information from cloudstack? 
>
>If a customer is using a VPC, is it possible to get information of how much 
>traffic he is consuming in the public network and the private gateway? I don't 
>intend to bill for the traffic between VMs. 
>
>Thanks for your help! 
>
>Regards, 
>Audrey 

dag.sonst...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue


RE: Billing for services

2016-11-24 Thread Grégoire Lamodière
Dear Audrey, 

The starting point would be to install and start the CloudStack usage service 
(I think this step is well documented inside the CS installation guide).
Basically, this service will collect and aggregate usage for most CS metrics 
(running instances, storage, network, etc.).
These metrics are then stored inside a MySQL DB (cloud_usage).

You can easily export these values directly from MySQL to Excel 
(http://support.en.ctx.org.cn/ctx132030.citrix).
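
For the Excel route, a small script can also dump the raw table straight to 
CSV. A hedged sketch, assuming the third-party pymysql driver and the default 
cloud_usage schema; the column names match what I see on 4.x, but check them 
against your database:

    # Hedged sketch: export raw usage rows to CSV for Excel.
    import csv
    import pymysql

    conn = pymysql.connect(host='127.0.0.1', user='cloud',
                           password='secret', database='cloud_usage')
    cur = conn.cursor()
    cur.execute("SELECT account_id, usage_type, description, raw_usage, "
                "start_date, end_date FROM cloud_usage")

    with open('usage.csv', 'w', newline='') as out:
        writer = csv.writer(out)
        writer.writerow([col[0] for col in cur.description])  # header row
        writer.writerows(cur.fetchall())
    conn.close()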

The second option (recommended) is to use the CloudStack API to get usage data 
and manage it with third-party apps.
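
As an illustration of the API route, listUsageRecords can be filtered by usage 
type, which is also one way to look at bandwidth: in the usage docs, types 4 
and 5 are network bytes sent/received. A hedged sketch, assuming the 
third-party "cs" client (pip install cs); endpoint, keys and dates are 
placeholders:

    # Hedged sketch: pull per-account network usage records via the API.
    from cs import CloudStack

    api = CloudStack(endpoint='http://mgmt.example.com:8080/client/api',
                     key='API_KEY', secret='SECRET_KEY')

    # Types 4/5 = network bytes sent/received; verify against your version.
    for usage_type in (4, 5):
        resp = api.listUsageRecords(startdate='2016-11-01',
                                    enddate='2016-11-24',
                                    type=usage_type)
        for rec in resp.get('usagerecord', []):
            print(rec['account'], rec['description'], rec['rawusage'])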

I personally tested Amysta a few months ago, and it was pretty good.

Best Regards.
---
Grégoire Lamodière

-Original Message-
From: Audrey Roberto B Baldin [mailto:audrey.bal...@unitelco.com.br] 
Sent: Thursday, 24 November 2016 15:02
To: users 
Subject: Re: Billing for services

Dag, thanks for your return.

I'd looked at the usage service, but could not figure out how to work with it 
yet. I've been reading some presentations from ShapeBlue about it, but still 
couldn't understand how to use it.

But I'm not sure, for example, if the usage service can show bandwidth usage 
from each interface of the VPC. How can I bill for internet usage, for example, 
if I can't measure only the internet traffic a user is consuming from the 
public network?


Regards,
Audrey

- Original Message -
From: "Dag Sonstebo" 
To: "users" 
Sent: Thursday, 24 November 2016 11:35:22
Subject: Re: Billing for services

Hi Audrey,

This is basically what the CloudStack usage service is for - suggest you 
install this and take a look at the data this provides.

Regards, 
Dag Sonstebo
Cloud Architect
ShapeBlue








On 24/11/2016, 13:14, "Audrey Roberto B Baldin"  
wrote:

>Hi there, 
>
>How can we get billing information from cloudstack? 
>
>If a customer is using a VPC, is it possible to get information of how much 
>traffic he is consuming in the public network and the private gateway? I don't 
>intend to bill for the traffic between VMs. 
>
>Thanks for your help! 
>
>Regards, 
>Audrey 

dag.sonst...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue


RE: Zone edit - Physical network update

2016-11-21 Thread Grégoire Lamodière
Hi Dag, 

Thank you for this feedback.

I will check for a smooth migration (the migration is not a big deal, but 
working with IP reservations, etc. could be time-consuming).
Anyway, I think my understanding was not correct regarding primary storage, and 
the situation is now very clear thanks to your last reply about the network labels.

Best Regards.

---
Grégoire Lamodière
T/ + 33 6 76 27 03 31

-Original Message-
From: Dag Sonstebo [mailto:dag.sonst...@shapeblue.com] 
Sent: Monday, 21 November 2016 18:31
To: users@cloudstack.apache.org
Subject: Re: Zone edit - Physical network update

Hi Grégoire,

Obviously hoping to work with you, but if you were a support customer we would 
advise against this approach – moving all CloudStack traffic to a new physical 
network is very much untested territory and we couldn’t guarantee success. We 
would probably advise you to consider a phased swing migration – where you 
potentially free up some of your existing hypervisors, rebuild these with the 
correct networking and then add them as a new cluster in CloudStack – before 
trying to migrate your workload. As you then free up more resources on the 
original cluster you can evacuate more hosts, rebuild them and add to the new 
cluster, until you have all your hosts in the new cluster. 

With regards to your “cs-mgmt” traffic question – keep in mind there is nothing 
stopping you just keeping this where it is on eth0. You don’t need to use 
labels for *primary storage*  – this simply requires connectivity and 
CloudStack doesn’t care how this works. The CloudStack storage network 
labelling is purely for secondary storage. So if you add the new cards as 
effectively IP based HBAs with an additional storage IP address and present new 
primary storage pools purely over this interface you can simply add new NFS 
shares as new primary storage pools. This means the XenServers will speak to 
old primary storage on the existing interfaces (cs-mgmt?) and to the new 
primary storage over the new interfaces (call this e.g. cs-primarystorage if 
you want). This would potentially allow you to storage migrate from old to new 
primary storage *without changing any existing CloudStack labelling*.

If you require more advanced traffic configuration changes than this the phased 
migration mentioned above would be your best bet.

Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue

On 21/11/2016, 16:47, "Grégoire Lamodière"  wrote:

Hi Dag, 

Thanks for your quick reply.

As you can easily understand, the goal of this lab job is to be moved to 
production later on, so unpredictable options are not really options 
(especially since ShapeBlue will probably handle the support for 
us in the coming weeks :))

The goal is to move our current SAS 6 Gbit storage to 10 Gbe, so yes, I was 
assuming the management network moves to a new network card (10 Gbe), and 
storage as well (to take advantage of 10 Gbe rather than staying on the 1 Gbe 
link).

When you say "Keep in mind the physical network is mainly cosmetic", does 
it stand for:

"Physical Network 1
Management traffic (e.g. cs-mgmt label)"

Doesn't it mean that the cs-mgmt network should be tied to eth0 on the physical 
host? 
In other words, can I plug in my new network card, rename cs-mgmt on the old 
one to cs-mgmt-old and create cs-mgmt on the new one?

If so, you are right, I will add the card and move the management label to 
the 10 Gbe.
But I assume I could do the same for the storage network label ?

Thanks again.
    
Best Regards.

---
Grégoire Lamodière

-Original Message-
From: Dag Sonstebo [mailto:dag.sonst...@shapeblue.com] 
Sent: Monday, 21 November 2016 16:59
To: users@cloudstack.apache.org
Subject: Re: Zone edit - Physical network update

Hi Gregoire,

First of all – I think you’re probably on very thin ice with this one – you 
may be able to do this in the lab but it would be very risky to carry out in a 
production environment.

That said – if you want to persevere and accept this may break your 
lab - a starting point would possibly be:

- Add the new cards to your XenServers.
- Create new networks in XenCentre – with new network labels.
- Not sure if this is required – but you would probably add the new 
physical network using CloudMonkey. Keep in mind the physical network is mainly 
cosmetic – your network configuration uses labelling only hence you may just 
use the existing physical network.
- Stop CloudStack management.
- Hack the network labels in the cloud.host_details and 
cloud.physical_network_traffic_types to reflect the new networks.
- On your XenServers / Xencentre move all existing networks (VLANs) to the 
new networks (unsure of the steps or if this is possible).
- 
- Restart management.

On

RE: Zone edit - Physical network update

2016-11-21 Thread Grégoire Lamodière
Hi Dag, 

Thanks for your quick reply.

As you can easily understand, the goal of this lab job is to be moved to 
production later on, so unpredictable options are not really options 
(especially since ShapeBlue will probably handle the support for 
us in the coming weeks :))

The goal is to move our current SAS 6 Gbit storage to 10 Gbe, so yes, I was 
assuming the management network moves to a new network card (10 Gbe), and 
storage as well (to take advantage of 10 Gbe rather than staying on the 1 Gbe 
link).

When you say "Keep in mind the physical network is mainly cosmetic", does it 
stand for:

"Physical Network 1
Management traffic (e.g. cs-mgmt label)"

Doesn't it mean that the cs-mgmt network should be tied to eth0 on the physical 
host? 
In other words, can I plug in my new network card, rename cs-mgmt on the old one 
to cs-mgmt-old and create cs-mgmt on the new one?

If so, you are right, I will add the card and move the management label to the 
10 Gbe.
But I assume I could do the same for the storage network label ?

Thanks again.

Best Regards.

---
Grégoire Lamodière

-Original Message-
From: Dag Sonstebo [mailto:dag.sonst...@shapeblue.com] 
Sent: Monday, 21 November 2016 16:59
To: users@cloudstack.apache.org
Subject: Re: Zone edit - Physical network update

Hi Gregoire,

First of all – I think you’re probably on very thin ice with this one – you may 
be able to do this in the lab but it would be very risky to carry out in a 
production environment.

That said – if you want to persevere and accept this may break your lab - 
a starting point would possibly be:

- Add the new cards to your XenServers.
- Create new networks in XenCentre – with new network labels.
- Not sure if this is required – but you would probably add the new physical 
network using CloudMonkey. Keep in mind the physical network is mainly cosmetic 
– your network configuration uses labelling only hence you may just use the 
existing physical network.
- Stop CloudStack management.
- Hack the network labels in the cloud.host_details and 
cloud.physical_network_traffic_types to reflect the new networks (see the API 
sketch after this list for a gentler alternative).
- On your XenServers / Xencentre move all existing networks (VLANs) to the new 
networks (unsure of the steps or if this is possible).
- 
- Restart management.
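
As a gentler alternative to the database hack above, the labels can be updated 
through the updateTrafficType API call. A hedged sketch, assuming the 
third-party "cs" Python client; the physical network id and the new label are 
placeholders, and the xennetworklabel parameter name should be verified 
against your version's API docs:

    # Hedged sketch: repoint the Management traffic type at a new
    # XenServer network label via the API instead of editing the DB.
    from cs import CloudStack

    api = CloudStack(endpoint='http://mgmt.example.com:8080/client/api',
                     key='API_KEY', secret='SECRET_KEY')

    traffic = api.listTrafficTypes(physicalnetworkid='PHYSNET-ID')
    for t in traffic.get('traffictype', []):
        if t['traffictype'] == 'Management':
            api.updateTrafficType(id=t['id'],
                                  xennetworklabel='cs-mgmt-10g')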

One other point – you mention this is for a *primary storage* migration – keep 
in mind the storage network in the CS configuration is for *secondary storage*. 
If all you want to do is to migrate to a new storage network and a new storage 
backend your best bet is to:

- Add the new cards + configure the new storage network on the XenServers – no 
need to label this.
- Present new primary storage pools from your NAS on the respective networks – 
making sure the new NFS shares are only available over your 10Gbe network.
- Add the new NFS shares as new primary storage pools in the ACS gui.
- Storage migrate from old NFS shares to new.

This is a much simpler way and would be a supported configuration change; a 
rough API sketch of the last two steps follows.
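
Roughly, those last two steps through the API would look like the sketch 
below, assuming the third-party "cs" client; all ids, the pool name and the 
NFS URL are placeholders, and live migration support depends on the 
hypervisor:

    # Hedged sketch: register the 10Gbe NFS share as a new primary
    # storage pool, then storage-migrate one volume onto it.
    from cs import CloudStack

    api = CloudStack(endpoint='http://mgmt.example.com:8080/client/api',
                     key='API_KEY', secret='SECRET_KEY')

    pool = api.createStoragePool(zoneid='ZONE-ID', podid='POD-ID',
                                 clusterid='CLUSTER-ID',
                                 name='pri-nfs-10g',
                                 url='nfs://10.0.10.5/export/primary')
    pool_id = pool['storagepool']['id']

    # Migrate a volume onto the new pool; livemigrate=true may work for
    # attached volumes depending on the hypervisor.
    api.migrateVolume(volumeid='VOLUME-ID', storageid=pool_id)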

Hope this helps,

Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue


On 21/11/2016, 14:50, "Grégoire Lamodière"  wrote:

Dear All,

In my lab, I have a CS 4.7 advanced zone working fine.

I am now in the process of testing a primary storage migration (currently 
SAS based to 10 Gbe nfs).
To perform this migration, I need to attach new network controllers (10 Gbe 
controllers).

I have no problem with the hardware part of my work, but I am wondering how 
CS can handle this change.

Right now, on my zone conf, I have :


-  Physical Network 1

o   Management traffic (with XenServer Label)

-  Physical Network 2

o   Public traffic

o   Guest

o   Storage

What is the best "CS way" to add my cards (each with 2 x 10 Gbit ports), 
and move management + storage to the first port of the new card?

-  Physical Network 1

o   No more traffic

-  Physical Network 2

o   Public

o   Guest

-  Physical Network 3 (first port of the new 10 Gbe card)

o   Management

o   Storage


I tried to play around with creating a physical network, but I can't do much 
with the newly created networks (no way to assign traffic types to them).

Any help would be highly appreciated.

    Best Regards.

---
Grégoire Lamodière



dag.sonst...@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK @shapeblue



Zone edit - Physical network update

2016-11-21 Thread Grégoire Lamodière
Dear All,

In my lab, I have a CS 4.7 advanced zone working fine.

I am now in the process of testing a primary storage migration (currently SAS 
based to 10 Gbe nfs).
To perform this migration, I need to attach new network controllers (10 Gbe 
controllers).

I have no problem with the hardware part of my work, but I am wondering how CS 
can handle this change.

Right now, on my zone conf, I have :


-  Physical Network 1

o   Management traffic (with XenServer Label)

-  Physical Network 2

o   Public traffic

o   Guest

o   Storage

What is the best "CS way" to add my cards (each with 2 x 10 Gbit ports), and 
move management + storage to the first port of the new card?

-  Physical Network 1

o   No more traffic

-  Physical Network 2

o   Public

o   Guest

-  Physical Network 3 (first port of the new 10 Gbe card)

o   Management

o   Storage


I tried to play around with creating a physical network, but I can't do much with 
the newly created networks (no way to assign traffic types to them); a possible 
API route is sketched below.
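
From the API docs, it looks like traffic types can be attached to a physical 
network with addTrafficType — a rough sketch of that direction, assuming the 
third-party "cs" Python client (ids and labels are placeholders):

    # Hedged sketch: attach traffic types (with XenServer labels) to an
    # existing physical network through the API.
    from cs import CloudStack

    api = CloudStack(endpoint='http://mgmt.example.com:8080/client/api',
                     key='API_KEY', secret='SECRET_KEY')

    nets = api.listPhysicalNetworks(zoneid='ZONE-ID')
    net_id = nets['physicalnetwork'][0]['id']   # pick the new network

    for traffic, label in [('Management', 'cs-mgmt-10g'),
                           ('Storage', 'cs-storage-10g')]:
        api.addTrafficType(physicalnetworkid=net_id,
                           traffictype=traffic,
                           xennetworklabel=label)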

Any help would be highly appreciated.

Best Regards.

---
Grégoire Lamodière