Re: [DISCUSS] Upgrade to Vue3 library

2021-09-08 Thread Daan Hoogland
Hello Hoang, (or should I say Nguyen?)

I see no options to discuss in your request. We will have to move our Vue
usage forward to version 3 eventually. Are there choices to make? If not,
go ahead.

regards,

On Tue, Sep 7, 2021 at 2:49 AM Nguyen Mai Hoang  wrote:

> Hi All,
>
> We are upgrading a Vue 2 application to Vue 3 on CloudStack. As far as our
> investigation goes, Vue 3 does support migration from Vue 2 to Vue 3 by
> using `@vue/compat` (aka "the migration build"). However, it is worth
> mentioning that there are some incompatible features (please refer to:
> https://v3.vuejs.org/guide/migration/migration-build.html#overview).
> The biggest differences between Vue 2 and Vue 3 on CloudStack are:
> - mount: https://v3.vuejs.org/guide/migration/mount-changes.html#overview
> - Slots: https://v3.vuejs.org/guide/component-slots.html#slots
> - Async components:
> https://v3.vuejs.org/guide/migration/async-components.html#async-components
> - Events: https://v3.vuejs.org/guide/migration/events-api.html#overview
> - Watch: https://v3.vuejs.org/guide/migration/watch.html#overview
>
> In order to make them compatible with Vue 3, it is necessary to upgrade or
> replace some libraries as well as some other components, which are listed
> below:
> - Antd: https://2x.antdv.com/components/overview
> - Router: https://next.router.vuejs.org/installation.html
> - I18n: https://vue-i18n.intlify.dev/introduction.html
> - Clipboard: https://www.npmjs.com/package/vue3-clipboard
> - Vue-ls (https://www.npmjs.com/package/vue-ls) => vue-web-storage (
> https://github.com/ankurk91/vue-web-storage)
>
> These upgrades and replacements will require changes to the source code,
> structure, and UI elements. We would like to hear your opinions on this.
>
> Thank you and best regards,
>


-- 
Daan


Re: High increase in bandwidth usage

2021-09-08 Thread Hean Seng
This should not happen. CloudStack is just a web application; it does not
consume any bandwidth itself. The VMs inside are what consume bandwidth.

On Wed, Sep 8, 2021 at 10:52 PM Alex Mattioli 
wrote:

> Hi,
>
> That would be bandwidth between which hosts?   Also, what exactly would
> you call normal and excessive bandwidth usage?
>
> Regards
> Alex
>
>
>
>
> -Original Message-
> From: Saurabh Rapatwar 
> Sent: 08 September 2021 16:46
> To: us...@cloudstack.apache.org
> Cc: dev@cloudstack.apache.org
> Subject: Re: High increase in bandwidth usage
>
> Hi
>
> I am facing the same problem. Please suggest a solution, group members.
>
> Thanks in advance
>
> On Tue, 7 Sep, 2021, 11:30 pm R R,  wrote:
>
> > I installed a CloudStack server on a bare metal server (all-in-one
> > installation). The bandwidth usage was normal. After a couple of days,
> > the bandwidth usage was very high, and I got several emails from the DC
> > as well. I tried to limit it using wondershaper. That worked for a while,
> > but then I was locked out of the machine and couldn't SSH in. I had to
> > format the machine.
> >
> > The same thing happened again. I am able to SSH into the system for now,
> > bandwidth usage is high, and the CloudStack server isn't responding. I am
> > attaching a screenshot of the CloudStack management server logs.
> >
> > Please let me know if I am doing something wrong, or point me to a
> > solution to this problem.
> >
>


-- 
Regards,
Hean Seng


RE: IPV6 in Isolated/VPC networks

2021-09-08 Thread Alex Mattioli
Hi Rohit, Kristaps,

I'd say option 1 as well. It does create a bit more overhead with static
routes, but if that's automated for a VPC it can also easily be automated for
several tiers of a VPC. We also don't constrain the number of tiers in a VPC.
It has the added advantage of being closer to the desired behaviour with
dynamic routing in the future, where a VPC VR can announce several subnets
upstream.

Cheers
Alex






-Original Message-
From: Rohit Yadav  
Sent: 08 September 2021 19:04
To: dev@cloudstack.apache.org
Subject: Re: IPV6 in Isolated/VPC networks

Hi Kristaps,

Thanks for sharing. I suppose that means individual tiers should be allocated
a /64 instead of larger IPv6 blocks for the whole VPC, which could cause wastage.

Any objection from anybody?

Regards.

From: Kristaps Cudars 
Sent: Wednesday, September 8, 2021 9:24:01 PM
To: dev@cloudstack.apache.org 
Subject: Re: IPV6 in Isolated/VPC networks

Hello,

I asked the networking team to comment on "How should the IPv6 block/allocation
work in VPCs?"
Option1: They haven't recently seen devices with limits on how many static routes
can be created.
Option2: With /60 and /62 assignments and a large number of routers, the IPv6
assignment from RIPE NCC can be drained quickly.

A /48 contains 65,536 /64s
A /60 contains 16 /64s
65,536 / 16 = 4,096 routers
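
A quick sketch of that arithmetic (Java; plain powers of two, nothing
CloudStack-specific is assumed):

    public class PrefixMath {
        public static void main(String[] args) {
            long n64in48 = 1L << (64 - 48); // a /48 holds 2^16 = 65,536 /64s
            long n64in60 = 1L << (64 - 60); // a /60 holds 2^4 = 16 /64s
            // so one /48 can supply 65,536 / 16 = 4,096 /60 assignments (routers)
            System.out.println(n64in48 / n64in60); // prints 4096
        }
    }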


On 2021/09/07 11:59:09, Rohit Yadav  wrote:
> All,
>
> After another iteration with Alex, I've updated the design doc. Kindly review:
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/IPv6+Support+in
> +Isolated+Network+and+VPC+with+Static+Routing
>
>
> ... and advise on some outstanding questions:
>
>   *   How should the IPv6 block/allocation work in VPCs?
> Option1: Should this simply be a /64 allocation on any new tier? The con of
> this option is one static route/rule per VPC tier (many upstream routers may
> have a limit on the number of static routes?).
> Option2: Let the user ask for/specify the tier size, say /60 (for 16 tiers)
> or /62 (4 tiers) for the VPC; this can be filtered based on the
> vpc.max.networks global setting (3 is the default). The pros of this option
> are fewer static routes/rules and easy programming, but potentially wasting
> multiple /64 prefix blocks on unused/uncreated tiers.
>   *   How do we manage IPv6 usage? Can anyone advise how we do IPv6 usage
> for shared networks (design, implementation and use-cases)?
> Option1: We don't do it; all user VM nics have an IPv4 address whose usage
> we don't track. For public VR/nics/networks, we can simply add the IPv6
> details for a related IPv4 address.
> Option2: Implement separate, first-class IPv6 address or /64 prefix
> tracking/management and usage for all VM and systemvm nics (this means
> account/domain-level limits and new billing/records)
>   *   Enable IPv6 on systemvms (specifically SSVM and CPVM) by default if
> the zone has an IPv6 address block allocated/assigned for use by systemvms
> (this was mainly thought of for VRs, but why not SSVMs and CPVMs too - any
> cons of this?)
>   *
>
> Regards.
>
> 
> From: Rohit Yadav 
> Sent: Thursday, August 19, 2021 15:45
> To: dev@cloudstack.apache.org 
> Subject: Re: IPV6 in Isolated/VPC networks
>
> Hi all,
>
> I've taken feedback from this thread and wrote this design doc:
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/IPv6+Support+in
> +Isolated+Network+and+VPC+with+Static+Routing
>
> Kindly review and advise if I missed anything or anything that needs to be 
> changed/updated. You may comment on the wiki directly too.
>
> Kindly suggest your views on the following (also in the design doc above):
>
> Outstanding Questions:
>
>   *   Should the admin or user be able to specify how VPC super CIDRs are
> created/needed; for example, can a user ask for a VPC with a /60 super CIDR?
> Or should CloudStack automatically find/allocate a /64 for a new VPC tier
> from the root-admin configured /64-/48 block?
>   *   Should we explore FRR and iBGP or other strategies to do dynamic
> routing, which may not require advanced/complex configuration in the VR or
> for the users/admin?
>   *   With SLAAC and no dhcpv6, is there a way to support secondary IPv6
> addresses (or floating IPv6 addresses for VR/public traffic) for guest VMs'
> nics?
>   *   Any thoughts on UI/UX for firewall/routing management?
>   *   Any other feature/support for isolated networks or the VPC feature
> that must be explored or supported, such as PF, VPN, LB, vpc static routes,
> vpc gateway etc.
>   *   For usage - should we have any consideration, or should we assume that
> IPv4 and IPv6 addresses will go together for every nic, so IPv6 usage for a
> nic is in tandem with the IPv4 address for the nic, i.e. no explicit/new
> billing/usage needed?
>   *   For smoketests and local dev-test, should we explore ULA? Unique Local
> Address - in the range fc00::/7. Typically only within the 'local' half
> fd00::/8. ULA for IPv6 is analogous to IPv4 private network addressing.

Re: IPV6 in Isolated/VPC networks

2021-09-08 Thread Rohit Yadav
Hi Kristaps,

Thanks for sharing. I suppose that means individual tiers should be allocated
a /64 instead of larger IPv6 blocks for the whole VPC, which could cause wastage.

Any objection from anybody?

Regards.

From: Kristaps Cudars 
Sent: Wednesday, September 8, 2021 9:24:01 PM
To: dev@cloudstack.apache.org 
Subject: Re: IPV6 in Isolated/VPC networks

Hello,

I asked the networking team to comment on “How should the IPv6 block/allocation
work in VPCs?”
Option1: They haven’t recently seen devices with limits on how many static routes
can be created.
Option2: With /60 and /62 assignments and a large number of routers, the IPv6
assignment from RIPE NCC can be drained quickly.

A /48 contains 65,536 /64s
A /60 contains 16 /64s
65,536 / 16 = 4,096 routers


On 2021/09/07 11:59:09, Rohit Yadav  wrote:
> All,
>
> After another iteration with Alex, I've updated the design doc. Kindly review:
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/IPv6+Support+in+Isolated+Network+and+VPC+with+Static+Routing
>
>
> ... and advise on some outstanding questions:
>
>   *   How should the IPv6 block/allocation work in VPCs?
> Option1: Should this simply be a /64 allocation on any new tier? The con of
> this option is one static route/rule per VPC tier (many upstream routers may
> have a limit on the number of static routes?).
> Option2: Let the user ask for/specify the tier size, say /60 (for 16 tiers)
> or /62 (4 tiers) for the VPC; this can be filtered based on the
> vpc.max.networks global setting (3 is the default). The pros of this option
> are fewer static routes/rules and easy programming, but potentially wasting
> multiple /64 prefix blocks on unused/uncreated tiers.
>   *   How do we manage IPv6 usage? Can anyone advise how we do IPv6 usage
> for shared networks (design, implementation and use-cases)?
> Option1: We don't do it; all user VM nics have an IPv4 address whose usage
> we don't track. For public VR/nics/networks, we can simply add the IPv6
> details for a related IPv4 address.
> Option2: Implement separate, first-class IPv6 address or /64 prefix
> tracking/management and usage for all VM and systemvm nics (this means
> account/domain-level limits and new billing/records)
>   *   Enable IPv6 on systemvms (specifically SSVM and CPVM) by default if
> the zone has an IPv6 address block allocated/assigned for use by systemvms
> (this was mainly thought of for VRs, but why not SSVMs and CPVMs too - any
> cons of this?)
>   *
>
> Regards.
>
> 
> From: Rohit Yadav 
> Sent: Thursday, August 19, 2021 15:45
> To: dev@cloudstack.apache.org 
> Subject: Re: IPV6 in Isolated/VPC networks
>
> Hi all,
>
> I've taken feedback from this thread and wrote this design doc:
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/IPv6+Support+in+Isolated+Network+and+VPC+with+Static+Routing
>
> Kindly review and advise if I missed anything or anything that needs to be 
> changed/updated. You may comment on the wiki directly too.
>
> Kindly suggest your views on the following (also in the design doc above):
>
> Outstanding Questions:
>
>   *   Should the admin or user be able to specify how VPC super CIDRs are
> created/needed; for example, can a user ask for a VPC with a /60 super CIDR?
> Or should CloudStack automatically find/allocate a /64 for a new VPC tier
> from the root-admin configured /64-/48 block?
>   *   Should we explore FRR and iBGP or other strategies to do dynamic
> routing, which may not require advanced/complex configuration in the VR or
> for the users/admin?
>   *   With SLAAC and no dhcpv6, is there a way to support secondary IPv6
> addresses (or floating IPv6 addresses for VR/public traffic) for guest VMs'
> nics?
>   *   Any thoughts on UI/UX for firewall/routing management?
>   *   Any other feature/support for isolated networks or the VPC feature
> that must be explored or supported, such as PF, VPN, LB, vpc static routes,
> vpc gateway etc.
>   *   For usage - should we have any consideration, or should we assume that
> IPv4 and IPv6 addresses will go together for every nic, so IPv6 usage for a
> nic is in tandem with the IPv4 address for the nic, i.e. no explicit/new
> billing/usage needed?
>   *   For smoketests and local dev-test, should we explore ULA? Unique Local
> Address - in the range fc00::/7. Typically only within the 'local' half
> fd00::/8. ULA for IPv6 is analogous to IPv4 private network addressing. This
> prefix can be randomly generated at first install by CloudStack in a zone
> using zoneid etc? (A sketch follows after this list.)
>   *   Should we expand the VR diagnostics feature to include support for
> ipv6, traceroute6?
>   *   Discuss - expand support/ability to allocate a /60 or /56 etc. prefix
> to an individual VM in a shared network (Wido's suggestion)
>
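
A minimal sketch of such ULA prefix generation (Java; SecureRandom stands in
for the time-plus-machine-identifier hash that RFC 4193 suggests, so the
exact derivation is an assumption for illustration only):

    import java.security.SecureRandom;

    public class UlaPrefix {
        public static void main(String[] args) {
            byte[] globalId = new byte[5];          // 40-bit random global ID
            new SecureRandom().nextBytes(globalId);
            StringBuilder hex = new StringBuilder("fd"); // fd00::/8, the 'local' half
            for (byte b : globalId) {
                hex.append(String.format("%02x", b));
            }
            // 12 hex chars -> fdxx:xxxx:xxxx::/48
            System.out.printf("%s:%s:%s::/48%n",
                    hex.substring(0, 4), hex.substring(4, 8), hex.substring(8, 12));
        }
    }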
>
> Regards.
>
> 
> From: Wei ZHOU 
> Sent: Tuesday, August 17, 2021 21:16
> To: dev@cloudstack.apache.org 
> Subject: Re: IPV6 in Isolated/VPC networks
>
> Thanks Kristaps, Wido, Rohit and Alex for your replies.

Re: IPV6 in Isolated/VPC networks

2021-09-08 Thread Kristaps Cudars
Hello,

I asked the networking team to comment on “How should the IPv6 block/allocation
work in VPCs?”
Option1: They haven’t recently seen devices with limits on how many static routes
can be created.
Option2: With /60 and /62 assignments and a large number of routers, the IPv6
assignment from RIPE NCC can be drained quickly.

A /48 contains 65,536 /64s
A /60 contains 16 /64s
65,536 / 16 = 4,096 routers


On 2021/09/07 11:59:09, Rohit Yadav  wrote: 
> All,
> 
> After another iteration with Alex, I've updated the design doc. Kindly review:
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/IPv6+Support+in+Isolated+Network+and+VPC+with+Static+Routing
> 
> 
> ... and advise on some outstanding questions:
> 
>   *   How should the IPv6 block/allocation work in VPCs?
> Option1: Should this simply be a /64 allocation on any new tier? The con of
> this option is one static route/rule per VPC tier (many upstream routers may
> have a limit on the number of static routes?).
> Option2: Let the user ask for/specify the tier size, say /60 (for 16 tiers)
> or /62 (4 tiers) for the VPC; this can be filtered based on the
> vpc.max.networks global setting (3 is the default). The pros of this option
> are fewer static routes/rules and easy programming, but potentially wasting
> multiple /64 prefix blocks on unused/uncreated tiers.
>   *   How do we manage IPv6 usage? Can anyone advise how we do IPv6 usage
> for shared networks (design, implementation and use-cases)?
> Option1: We don't do it; all user VM nics have an IPv4 address whose usage
> we don't track. For public VR/nics/networks, we can simply add the IPv6
> details for a related IPv4 address.
> Option2: Implement separate, first-class IPv6 address or /64 prefix
> tracking/management and usage for all VM and systemvm nics (this means
> account/domain-level limits and new billing/records)
>   *   Enable IPv6 on systemvms (specifically SSVM and CPVM) by default if
> the zone has an IPv6 address block allocated/assigned for use by systemvms
> (this was mainly thought of for VRs, but why not SSVMs and CPVMs too - any
> cons of this?)
>   *
> 
> Regards.
> 
> 
> From: Rohit Yadav 
> Sent: Thursday, August 19, 2021 15:45
> To: dev@cloudstack.apache.org 
> Subject: Re: IPV6 in Isolated/VPC networks
> 
> Hi all,
> 
> I've taken feedback from this thread and wrote this design doc:
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/IPv6+Support+in+Isolated+Network+and+VPC+with+Static+Routing
> 
> Kindly review and advise if I missed anything or anything that needs to be 
> changed/updated. You may comment on the wiki directly too.
> 
> Kindly suggest your views on the following (also in the design doc above):
> 
> Outstanding Questions:
> 
>   *   Should the admin or user be able to specify how VPC super CIDRs are
> created/needed; for example, can a user ask for a VPC with a /60 super CIDR?
> Or should CloudStack automatically find/allocate a /64 for a new VPC tier
> from the root-admin configured /64-/48 block?
>   *   Should we explore FRR and iBGP or other strategies to do dynamic
> routing, which may not require advanced/complex configuration in the VR or
> for the users/admin?
>   *   With SLAAC and no dhcpv6, is there a way to support secondary IPv6
> addresses (or floating IPv6 addresses for VR/public traffic) for guest VMs'
> nics?
>   *   Any thoughts on UI/UX for firewall/routing management?
>   *   Any other feature/support for isolated networks or the VPC feature
> that must be explored or supported, such as PF, VPN, LB, vpc static routes,
> vpc gateway etc.
>   *   For usage - should we have any consideration, or should we assume that
> IPv4 and IPv6 addresses will go together for every nic, so IPv6 usage for a
> nic is in tandem with the IPv4 address for the nic, i.e. no explicit/new
> billing/usage needed?
>   *   For smoketests and local dev-test, should we explore ULA? Unique Local
> Address - in the range fc00::/7. Typically only within the 'local' half
> fd00::/8. ULA for IPv6 is analogous to IPv4 private network addressing. This
> prefix can be randomly generated at first install by CloudStack in a zone
> using zoneid etc?
>   *   Should we expand the VR diagnostics feature to include support for
> ipv6, traceroute6?
>   *   Discuss - expand support/ability to allocate a /60 or /56 etc. prefix
> to an individual VM in a shared network (Wido's suggestion)
> 
> 
> Regards.
> 
> 
> From: Wei ZHOU 
> Sent: Tuesday, August 17, 2021 21:16
> To: dev@cloudstack.apache.org 
> Subject: Re: IPV6 in Isolated/VPC networks
> 
> Thanks Kristaps, Wido, Rohit and Alex for your replies.
> 
> I am fine with not having stateful dhcpv6 and privacy extension/temporary
> address in phase 1. If community decides not to do eventually , it is also
> ok to me.
> 
> We could explore how to better use secondary ipv6 addresses as Wido
> advised. It would be great if anyone share some user experience.
> 
> -Wei
> 
> 
> On Tuesday, 17 August 2021, 

Re: [VOTE] Apache CloudStack 4.15.2.0 (RC1)

2021-09-08 Thread Daan Hoogland
signing correct, archive contents sane (no further testing done)
+1 (binding)

On Tue, Sep 7, 2021 at 2:53 PM Rohit Yadav  wrote:

> All,
>
> I've created a 4.15.2.0 release, with the following artifacts up for a
> vote:
>
> Git Branch and Commit SHA:
> https://github.com/apache/cloudstack/tree/4.15.2.0-RC20210907T1815
> Commit: 5ba2867598ecf7ce16807e78d5033c342a2a52d7
>
> Source release (checksums and signatures are available at the same
> location):
> https://dist.apache.org/repos/dist/dev/cloudstack/4.15.2.0/
>
> PGP release keys (signed using 5ED1E1122DC5E8A4A45112C2484248210EE3D884):
> https://dist.apache.org/repos/dist/release/cloudstack/KEYS
>
> The vote will be open for a week until 13 September 2021.
>
> For sanity in tallying the vote, can PMC members please be sure to indicate
> "(binding)" with their vote?
>
> [ ] +1  approve
> [ ] +0  no opinion
> [ ] -1  disapprove (and reason why)
>
> For users' convenience, the packages from this release candidate (RC1)
> will be available here shortly:
> https://download.cloudstack.org/testing/4.15.2.0-RC1/
>
> There is no new systemvmtemplate for 4.15.2, the 4.15.1
> systemvmtemplate can be used from here:
> https://download.cloudstack.org/systemvm/4.15/
>
> Docs are not published yet but upgrade notes are similar to the one
> below without the requirement of registering a new systemvmtemplate:
>
> https://github.com/apache/cloudstack-documentation/tree/4.15/source/upgrading/upgrade
>
> Regards.
>


-- 
Daan


Re: [Discussion] String libs

2021-09-08 Thread Daan Hoogland
Daniel et al,
I've no preference and don't mind multiple dependencies even when they supply
overlapping features. I do want to keep 3rd-party libraries in facade
projects at all times: it keeps the maintenance surface small and makes it
easier to see conflicts happening (a good reason to reduce dependencies, btw -
me contradicting myself).
Both your and Rohit's points make sense to me.

On Wed, Sep 8, 2021 at 2:36 PM Nicolas Vazquez <
nicolas.vazq...@shapeblue.com> wrote:

> Hi Daniel,
>
> I don't have a preference either, but the work you are proposing on your
> PR makes sense to me.
>
>
> Regards,
>
> Nicolas Vazquez
>
> 
> From: Rohit Yadav 
> Sent: Wednesday, September 8, 2021 5:05 AM
> To: dev@cloudstack.apache.org 
> Subject: Re: [Discussion] String libs
>
> I don't have any specific inclination, I would use whatever becomes a
> standard.
>
> However, I prefer a utility method that is readable and easy to understand,
> such as isNullOrEmpty (which suggests it's doing a null check), versus
> isEmpty.
>
> I suppose a refactoring exercise can be done by picking whichever favourite
> dependency the community consensus settles on (if at all) and then writing a
> utility method in something like StringsUtil in cloud-utils and using it
> throughout the codebase, so in future, if we want to move to something else,
> all you do is replace your favourite dependency with something new only in
> StringsUtil of cloud-utils.
>
> ... and update the cloudstack-checkstyle to enforce the new agreed upon
> rule and also update -
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Coding+conventions
>
>
> Regards.
>
> 
> From: Daniel Augusto Veronezi Salvador 
> Sent: Tuesday, September 7, 2021 04:37
> To: dev@cloudstack.apache.org 
> Subject: [Discussion] String libs
>
> Hi all,
>
> Currently, the main String libs we are using are "commons.lang" and
> "commons.lang3" (either directly or by our facade, "com.cloud.utils"). We
> have a current discussion about using them directly or via a facade (such
> as "com.cloud.utils"); however, a third implementation has been added
> (google.common.base), which adds more to the discussion. "commons.lang"
> already implement all we need; therefore, adding a third one does not seem
> to add/improve/help with anything, but adding more moving parts and
> libraries that we need to watch out for (managing versions, checking for
> security issues, and so on).
>
> I created a PR (https://github.com/apache/cloudstack/pull/5386) to
> replace "google.common.base" with "commons.lang3". However, and as Daan
> suggested too, I'd like to go forward and revisit this discussion to
> standardize our code. To guide it, I'd like to start with what I think is
> the main topic:
>
> - Should we use a facade to "commons.lang"? Which are the pros and cons,
> according to your perspective?
>
> Best regards,
> Daniel.
>
>
>
>
>
>
>

-- 
Daan


RE: High increase in bandwidth usage

2021-09-08 Thread Alex Mattioli
Hi,

That would be bandwidth between which hosts?   Also, what exactly would you 
call normal and excessive bandwidth usage?

Regards
Alex



-Original Message-
From: Saurabh Rapatwar  
Sent: 08 September 2021 16:46
To: us...@cloudstack.apache.org
Cc: dev@cloudstack.apache.org
Subject: Re: High increase in bandwidth usage

Hi

I am facing the same problem. Please suggest a solution, group members.

Thanks in advance

On Tue, 7 Sep, 2021, 11:30 pm R R,  wrote:

> I installed a CloudStack server on a bare metal server (all-in-one
> installation). The bandwidth usage was normal. After a couple of days,
> the bandwidth usage was very high, and I got several emails from the DC
> as well. I tried to limit it using wondershaper. That worked for a while,
> but then I was locked out of the machine and couldn't SSH in. I had to
> format the machine.
>
> The same thing happened again. I am able to SSH into the system for now,
> bandwidth usage is high, and the CloudStack server isn't responding. I am
> attaching a screenshot of the CloudStack management server logs.
>
> Please let me know if I am doing something wrong, or point me to a
> solution to this problem.
>


Re: High increase in bandwidth usage

2021-09-08 Thread Saurabh Rapatwar
Hi

I am facing the same problem. Please suggest a solution, group members.

Thanks in advance

On Tue, 7 Sep, 2021, 11:30 pm R R,  wrote:

> I installed a CloudStack server on a bare metal server (all-in-one
> installation). The bandwidth usage was normal. After a couple of days,
> the bandwidth usage was very high, and I got several emails from the DC
> as well. I tried to limit it using wondershaper. That worked for a while,
> but then I was locked out of the machine and couldn't SSH in. I had to
> format the machine.
>
> The same thing happened again. I am able to SSH into the system for now,
> bandwidth usage is high, and the CloudStack server isn't responding. I am
> attaching a screenshot of the CloudStack management server logs.
>
> Please let me know if I am doing something wrong, or point me to a
> solution to this problem.
>


[DISCUSS] Upgrade to Vue3 library

2021-09-08 Thread Mai Nguyen
Hi All,

We are upgrading a Vue 2 application to Vue 3 on CloudStack. As far as our 
investigation goes, Vue 3 does support migration from Vue 2 to Vue 3 by using 
`@vue/compat` (aka "the migration build"). However, it is worth mentioning that 
there are some incompatible features (please refer to: 
https://v3.vuejs.org/guide/migration/migration-build.html#overview).
The biggest differences between Vue 2 and Vue 3 on CloudStack are:
- mount: https://v3.vuejs.org/guide/migration/mount-changes.html#overview
- Slots: https://v3.vuejs.org/guide/component-slots.html#slots
- Async components: 
https://v3.vuejs.org/guide/migration/async-components.html#async-components
- Events: https://v3.vuejs.org/guide/migration/events-api.html#overview
- Watch: https://v3.vuejs.org/guide/migration/watch.html#overview

In order to make them compatible with Vue 3, it is necessary to upgrade or 
replace some libraries as well as some other components, which are listed below:
- Antd: https://2x.antdv.com/components/overview
- Router: https://next.router.vuejs.org/installation.html
- I18n: https://vue-i18n.intlify.dev/introduction.html
- Clipboard: https://www.npmjs.com/package/vue3-clipboard
- Vue-ls (https://www.npmjs.com/package/vue-ls) => vue-web-storage 
(https://github.com/ankurk91/vue-web-storage)

These upgrades and replacements will require changes to the source code,
structure, and UI elements. We would like to hear your opinions on this.

Thank you and best regards,

__

Mai Nguyen
Frontend Developer

EWERK DIGITAL GmbH
Brühl 24, D-04109 Leipzig
F +49 341 42649 - 98
hoang.ngu...@ext.ewerk.com
www.ewerk.com

Geschäftsführer:
Dr. Erik Wende, Hendrik Schubert, Tassilo Möschke
Registergericht: Leipzig HRB 9065

Support:
+49 341 42649 555

Zertifiziert nach:
ISO/IEC 27001:2013
DIN EN ISO 9001:2015
DIN ISO/IEC 2-1:2018

ISAE 3402 Typ II Assessed



Information and offers sent by e-mail are subject to change and non-binding.

Disclaimer Privacy:

The contents of this e-mail (including any attachments) are confidential and 
may be legally privileged. If you are not the intended recipient of this 
e-mail, any disclosure, copying, distribution or use of its contents is 
strictly prohibited, and you should please notify the sender immediately and 
then delete it (including any attachments) from your system. Thank you.


Re: [Discussion] Release Cycle

2021-09-08 Thread Gabriel Bräscher
Thanks all for helping with the discussion.

Yes, Rohit. I need to answer a few pings, sorry for the delay :-)
I totally agree with you and Paul: the costs of releasing are high,
especially for the release manager(s), who dedicate a lot of energy and
time to it.
This is one of the reasons behind this discussion; when we formalize and
document the release pace, it is easier to plan a year knowing how things
will roll out, from the perspective of RMs, devs, or users.

Going in the same direction as Rohit, I also agree that we are getting
stabler each year; maybe one LTS per year is better than the current pace of
2. Therefore, I would propose 1 LTS and 1 Regular per year; I see it as a good
balance between stability and agility.
Additionally, most users do not upgrade from one LTS to another twice a
year; it takes time to plan and execute such tasks (and they always carry
some risks).
From my experience, an LTS per year would perfectly match the needs of most
users.

I do understand that many adopt ".0" as an "unstable"/"Regular" LTS.
However, I don't think this is the best approach.
Additionally, many users do not see a ".0" LTS (which is how we brand it in
the documentation, on the website, and in release announcements) as a "Regular".
I think that an LTS, regardless of being the first spin or not, should be as
stable as it can get. Having a Regular release could avoid the idea of ".0"
not being a stable release.

As an example, I've seen 4.12.0.0 (Regular) running in production with no
issues regarding stability, while also bringing features that otherwise
would be available only in 3-5 months.
It was as stable as many ".0" LTS releases, and I do believe that it also
provided crucial feedback for 4.13.0.0 (LTS).

Regards,
Gabriel.

Em qua., 8 de set. de 2021 às 04:58, Rohit Yadav 
escreveu:

> Gabriel, all,
>
> I suppose it depends, there's no right answer just trade-offs. Here's my
> lengthy brain dump;
>
> 0. our LTS definition is really to tag a set of releases and show intent
> that they are "stable" and will be supported and get maintenance releases.
> We don't really do LTS releases like larger projects whose support lasts
> multi-years (3-5yrs, sometimes 7-10yrs). Fundamentally all our major .0
> releases are just regular releases, with really the minor/maintenance
> releases making them stable or LTS-esque. I like what Pierre-Luc is
> suggesting, but then say a 2-year "LTS" release means users don't get to
> consume features as they would only use "LTS" releases and wait for 2 years
> which may not be acceptable trade-off.
>
> 1. so if we leave what makes a release regular vs LTS for a moment, the
> important question is - do our *users* really want releases in production
> that may be potentially buggy with possibly no stablised releases (i.e.
> minor releases)? Most serious users won't/don't really install/upgrade the
> .0 release in production but wait for a .1 or above release, maybe in their
> test environments first - this is true for most of IT industry, not
> specific to CloudStack.
>
> 2. a typical major release effort would allow for at least a month of dev
> before freeze, then another month or two for stabilisation with multiple
> RCs, tests/smoketest/upgrade tests, getting people to participate, votes
> and wrap up the release do post-release
> docs/packages/announcements/websites etc; so speaking from experience and
> burnt hands a major release can eat up 2-3 months of bandwidth easily
> irrespective of what we call it (regular or LTS).
>
> If the development freeze is done for at least a month, you can
> theoretically do 12 major releases in a year but you would end up having
> intersecting release cycles and overlaps - you would also need a dedicated
> release team. One major release may be too less in a year for project's
> health, two in a year is what we're currently sort of trying (usually Q1/Q2
> has a major release, and Q3/Q4 has another). Three is possible - maybe? But
> I think four would be just pushing it with people's time/bandwidth/focus
> eaten by release work than dev work.
>
> 3. the *main* issue is practicality and feasibility which Paul has
> mentioned too - do we've time, resources, and bandwidth to do multiple
> major releases, especially when we struggle to get the community to
> collaborate on issues and PRs (I'm looking at you Gabriel not responding to
> my comment for days and weeks sometimes  - we all do it don't we ) and
> then participate, test, and vote for releases when RCs are cut.
>
>
> 4. all said ^^ we do have an inclination to move fast break things and try
> things, and for this we do now have nightlies or daily snapshot builds for
> people to try out features/things without waiting for formal releases (but
> without the promise of upgrade paths) -
> http://download.cloudstack.org/testing/nightly/
>
>
> 5. finally - I would say if you or anyone wants to work on a release (call
> it whatever, regular, LTS) - just propose and do!
>
>
> Regards.
>
> 

Re: [Discussion] String libs

2021-09-08 Thread Nicolas Vazquez
Hi Daniel,

I don't have a preference either, but the work you are proposing on your PR 
makes sense to me.


Regards,

Nicolas Vazquez


From: Rohit Yadav 
Sent: Wednesday, September 8, 2021 5:05 AM
To: dev@cloudstack.apache.org 
Subject: Re: [Discussion] String libs

I don't have any specific inclination, I would use whatever becomes a standard.

However, I prefer a utility method that is readable and easy to understand,
such as isNullOrEmpty (which suggests it's doing a null check) versus isEmpty.

I suppose a refactoring exercise can be done by picking whichever favourite
dependency the community consensus settles on (if at all) and then writing a
utility method in something like StringsUtil in cloud-utils and using it
throughout the codebase, so in future, if we want to move to something else, all
you do is replace your favourite dependency with something new only in
StringsUtil of cloud-utils.

... and update the cloudstack-checkstyle to enforce the new agreed upon rule 
and also update - 
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Coding+conventions


Regards.


From: Daniel Augusto Veronezi Salvador 
Sent: Tuesday, September 7, 2021 04:37
To: dev@cloudstack.apache.org 
Subject: [Discussion] String libs

Hi all,

Currently, the main String libs we are using are "commons.lang" and 
"commons.lang3" (either directly or by our facade, "com.cloud.utils"). We have 
a current discussion about using them directly or via a facade (such as 
"com.cloud.utils"); however, a third implementation has been added 
(google.common.base), which adds more to the discussion. "commons.lang" already 
implements all we need; therefore, adding a third one does not seem to
add/improve/help with anything; it just adds more moving parts and libraries that
we need to watch out for (managing versions, checking for security issues, and 
so on).

I created a PR (https://github.com/apache/cloudstack/pull/5386) to replace 
"google.common.base" with "commons.lang3". However, and as Daan suggested too, 
I'd like to go forward and revisit this discussion to standardize our code. To 
guide it, I'd like to start with what I think is the main topic:

- Should we use a facade to "commons.lang"? Which are the pros and cons, 
according to your perspective?

Best regards,
Daniel.







Re: [cloudstack-go] sdk releasing

2021-09-08 Thread Rohit Yadav
+1

That makes sense. In the go-sdk we have a generator that consumes the listApis
output of an ACS release and generates the library -
https://github.com/apache/cloudstack-go/tree/main/generate

I suppose for every ACS release, we can update go-sdk with release-specific API 
list, test it, release and tag it. Even automate this?

I would say there is no need to vote on it unless the SDK is manually changed.
Since it is used by the k8s provider and the terraform provider, tags on go-sdk
may go in line with the tags/releases of these consumers.

Regards.


From: Pierre-Luc Dion 
Sent: Friday, August 27, 2021 17:57
To: Rohit Yadav ; dev 
Subject: [cloudstack-go] sdk releasing

I've been messing around with cloudstack-go.

I did a fix that Rohit merged yesterday for hostsservices, but this fix will
only work for ACS 4.15. I'd like to fix it for previous ACS versions too, but
it looks like the version of the SDK depends on the ACS version.

For example, for the hostservices, the host attribute managementserverid is a
UUID in 4.15, but an integer in an older version of xenserver. This breaks the
structs or maps; we must have some other similar issues.

So I'd like to help create a tag or version of the SDK that would match the
targeted ACS version,
i.e. SDK version = v4.15-0, where the last digit would define the SDK revision
and increase with fixes.
Current versioning in use: https://github.com/apache/cloudstack-go/releases

The issue I'm expecting to face: if we create a release, let's say v4.15-0,
from the main branch today, and then want to create v4.14-0, it will not be
possible from the main branch, because we would need to revert the last commit
but also fix hostservices.

Here are a bunch of questions I have:
1. Should we create branches for older ACS versions and keep main for the
latest fixes and future releases?
2. Do we need to vote on such changes?
3. Could such releases impact other Go projects that use this one, such as the
terraform and kubernetes drivers?
4. Should we provide similar versioning for our kubernetes driver?





Re: [DISCUSS] Export Virtual Router Statistics to Users

2021-09-08 Thread Rohit Yadav
+1

At work we're discussing a new RRD framework feature that would expand on the
metrics feature of various resources and would include (possibly in future)
metrics of VRs too. I suppose, just like VM stats, you would want to capture VR
stats plus something extra (like active connections, VR firewall state/metrics,
network bandwidth etc.). However, in the current implementation the "user"
roles/accounts can't see VRs; only admins can.

Some of the VR metrics are indeed already implemented but not exposed via APIs
(for example, network bandwidth from VRs is used for usage data generation for
networks).
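
For the Prometheus-format idea discussed below, a minimal sketch of what such
an exporter could look like, using the Prometheus Java simpleclient; the
metric name, the placeholder value and port 9100 are assumptions here, not an
agreed CloudStack interface:

    import io.prometheus.client.Gauge;
    import io.prometheus.client.exporter.HTTPServer;

    public class VrMetricsExporter {
        static final Gauge ACTIVE_CONNECTIONS = Gauge.build()
                .name("vr_active_connections")
                .help("Active connections tracked by the virtual router")
                .register();

        public static void main(String[] args) throws Exception {
            ACTIVE_CONNECTIONS.set(42); // placeholder; a real exporter would read conntrack
            new HTTPServer(9100);       // serves the /metrics scrape endpoint
        }
    }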


Regards.


From: Pierre-Luc Dion 
Sent: Thursday, September 2, 2021 20:25
To: dev ; Sina Kashipazha 

Subject: Re: [DISCUSS] Export Virtual Router Statistics to Users

We've been looking at something similar on our side, where we would have
installed telegraf on the VR template and telegraf would have been sending
data to a port forwarding on the hypervisor host.
I don't know how we could have the VR exposing Prometheus data securely
outside of the customer network without going through the internet
otherwise.


On Tue, Aug 31, 2021 at 9:57 AM Sina Kashipazha
 wrote:

> Hey there,
>
> We want to improve virtual router statistics visibility. One option that
> pops into my mind is to export some statistics from the router VM in
> Prometheus format and let our customers do what they like with those data.
> It would enable our customers to find packet loss, memory usage, etc. Based
> on these data, they can create their own alerts and dashboards.
>
> If we implement this feature, is it going to be merged upstream?
>
> Another option is to have these data visible in the CloudStack dashboard in
> some graphs directly. Please let me know if you have a better solution to
> address this issue.
>
>
> Kind regards,
> Sina
>




Re: Feature Cloudstack 4.15

2021-09-08 Thread Rohit Yadav
Hi Benoit,

Yes, it's possible to write a backup and recovery plugin. The framework exposes
interfaces that your plugin, or an external service managed by the plugin, can
execute/implement - for example, taking a backup, restoring a backup to primary
storage, etc. Depending on what/how you're integrating, the B&R framework
itself may need to be refactored to accommodate the new plugin.

If it helps, you can look into the current B&R framework and plugins (see the
dummy provider for example) or the original pull request for implementation
details - https://github.com/apache/cloudstack/pull/3553
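
As a rough illustration only (the interface and method names below are
hypothetical, not the actual CloudStack plugin contract - see the PR above for
the real interfaces), a provider plugin boils down to implementing callbacks
like:

    // Hypothetical shape of a backup provider plugin; names are illustrative.
    public interface ExternalBackupProvider {
        String getName();                  // e.g. "bacula"
        boolean takeBackup(String vmUuid); // trigger a backup on the external system
        boolean restoreBackup(String vmUuid, String backupId, String targetStoreUuid);
        boolean deleteBackup(String vmUuid, String backupId);
    }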


Regards.


From: benoit lair 
Sent: Friday, September 3, 2021 17:38
To: dev@cloudstack.apache.org ; 
us...@cloudstack.apache.org 
Subject: Feature Cloudstack 4.15

Hi ,

I am trying to use the Backup and Recovery framework with ACS 4.15.1.

I would like to implement it with XCP-ng servers.
What I see is that only Veeam with VMware is ready.

Would it be possible to have an interface to define a custom external
provider (3rd-party backup solutions like Bacula, Amanda or BackupPC), as
described here:

https://cwiki.apache.org/confluence/display/CLOUDSTACK/Backup+and+Recovery+Framework

I was thinking about a form giving the commands to execute for each type of
backup API call of the framework.


Thanks for your help and ideas

Regards, Benoit




Re: [Discussion] String libs

2021-09-08 Thread Rohit Yadav
I don't have any specific inclination, I would use whatever becomes a standard.

However, I prefer a utility method that is readable and easy to understand,
such as isNullOrEmpty (which suggests it's doing a null check) versus isEmpty.

I suppose a refactoring exercise can be done by picking whichever favourite
dependency the community consensus settles on (if at all) and then writing a
utility method in something like StringsUtil in cloud-utils and using it
throughout the codebase, so in future, if we want to move to something else, all
you do is replace your favourite dependency with something new only in
StringsUtil of cloud-utils.
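
As a minimal sketch, such a facade could look like this (the class and method
names are illustrative, not existing CloudStack API; it simply delegates to
commons-lang3 so that swapping the dependency later touches one class):

    import org.apache.commons.lang3.StringUtils;

    public final class StringsUtil {
        private StringsUtil() {
        }

        public static boolean isNullOrEmpty(final String s) {
            return StringUtils.isEmpty(s); // true for null or ""
        }

        public static boolean isNullOrBlank(final String s) {
            return StringUtils.isBlank(s); // also true for whitespace-only strings
        }
    }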

... and update the cloudstack-checkstyle to enforce the new agreed upon rule 
and also update - 
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Coding+conventions


Regards.


From: Daniel Augusto Veronezi Salvador 
Sent: Tuesday, September 7, 2021 04:37
To: dev@cloudstack.apache.org 
Subject: [Discussion] String libs

Hi all,

Currently, the main String libs we are using are "commons.lang" and 
"commons.lang3" (either directly or by our facade, "com.cloud.utils"). We have 
a current discussion about using them directly or via a facade (such as 
"com.cloud.utils"); however, a third implementation has been added 
(google.common.base), which adds more to the discussion. "commons.lang" already 
implements all we need; therefore, adding a third one does not seem to
add/improve/help with anything; it just adds more moving parts and libraries that
we need to watch out for (managing versions, checking for security issues, and 
so on).

I created a PR (https://github.com/apache/cloudstack/pull/5386) to replace 
"google.common.base" with "commons.lang3". However, and as Daan suggested too, 
I'd like to go forward and revisit this discussion to standardize our code. To 
guide it, I'd like to start with what I think is the main topic:

- Should we use a facade to "commons.lang"? Which are the pros and cons, 
according to your perspective?

Best regards,
Daniel.




Re: [Discussion] Release Cycle

2021-09-08 Thread Rohit Yadav
Gabriel, all,

I suppose it depends, there's no right answer just trade-offs. Here's my 
lengthy brain dump;

0. our LTS definition is really to tag a set of releases and show intent that 
they are "stable" and will be supported and get maintenance releases. We don't 
really do LTS releases like larger projects whose support lasts multi-years 
(3-5yrs, sometimes 7-10yrs). Fundamentally all our major .0 releases are just 
regular releases, with really the minor/maintenance releases making them stable 
or LTS-esque. I like what Pierre-Luc is suggesting, but then say a 2-year "LTS"
release means users don't get to consume features as they would only use "LTS" 
releases and wait for 2 years which may not be acceptable trade-off.

1. so if we leave what makes a release regular vs LTS for a moment, the 
important question is - do our *users* really want releases in production that 
may be potentially buggy with possibly no stabilised releases (i.e. minor
releases)? Most serious users won't/don't really install/upgrade the .0 release 
in production but wait for a .1 or above release, maybe in their test 
environments first - this is true for most of IT industry, not specific to 
CloudStack.

2. a typical major release effort would allow for at least a month of dev 
before freeze, then another month or two for stabilisation with multiple RCs, 
tests/smoketest/upgrade tests, getting people to participate, votes, and wrapping
up the release with post-release docs/packages/announcements/websites etc; so
speaking from experience and burnt hands a major release can eat up 2-3 months 
of bandwidth easily irrespective of what we call it (regular or LTS).

If the development freeze is done for at least a month, you can theoretically 
do 12 major releases in a year but you would end up having intersecting release 
cycles and overlaps - you would also need a dedicated release team. One major
release a year may be too few for the project's health; two a year is what
we're currently sort of trying (usually Q1/Q2 has a major release, and Q3/Q4
has another). Three is possible - maybe? But I think four would be just pushing 
it with people's time/bandwidth/focus eaten by release work than dev work.

3. the *main* issue is practicality and feasibility which Paul has mentioned 
too - do we've time, resources, and bandwidth to do multiple major releases, 
especially when we struggle to get the community to collaborate on issues and 
PRs (I'm looking at you Gabriel not responding to my comment for days and weeks 
sometimes  - we all do it don't we ) and then participate, test, and vote for 
releases when RCs are cut.


4. all said ^^ we do have an inclination to move fast break things and try 
things, and for this we do now have nightlies or daily snapshot builds for 
people to try out features/things without waiting for formal releases (but 
without the promise of upgrade paths) - 
http://download.cloudstack.org/testing/nightly/


5. finally - I would say if you or anyone wants to work on a release (call it 
whatever, regular, LTS) - just propose and do!


Regards.


From: Daniel Augusto Veronezi Salvador 
Sent: Tuesday, September 7, 2021 22:07
To: dev@cloudstack.apache.org 
Subject: Re: [Discussion] Release Cycle

Hi Gabriel, thanks for opening this discussion.

I'm +1 on it. My considerations:

- We have to put a lot of effort into supporting 3+ LTS releases
simultaneously, which doesn't make sense. Regular versions will give us some
breathing room and will reduce rework.
- Although the EOL is well defined, it seems we don't have solid
criteria for defining new versions, because they don't follow a pattern.
Users don't know when they will get a new version. Also, we don't do
much planning for the implementations.
- We've seen the Ubuntu life cycle working for a long time, and we
know it works well. It's a good reference to follow; we will not need to
reinvent the wheel.

Best regards,
Daniel.

On 31/08/2021 14:44, Gabriel Bräscher wrote:
> Hello,
>
> I would like to open a discussion regarding the project release cycle. More
> specifically on the following topics:
>
> 1. LTS and Regular releases
>
> 2. Releases period
>
> 3. Enhance roadmap and Release cycle for users
>
>  1 LTS and Regular releases
>
> It has been a while since the last regular release. Nowadays there are only
> LTS releases; maybe we should get back to having regular versions in
> between LTS.
>
> With a “Regular” release users would be able to trade stability for new
> features. Additionally, developers and users would have a “pilot” of the
> next LTS which could anticipate issues and result in a stable long-term
> release.
>
> Please, let me know what you think of this. Should we get back to releasing
> Regular releases in between LTS releases?
>
> For reference, here follow the past releases:
>
> +---------+------+--------------+-----+
> | Release | Type | Release date | EOL |
>