RE: Port Forwarding in Network

2024-09-02 Thread Alex Mattioli
I can't say for sure, but based on my experience with the VR and other 
networking devices I'd say it is (in order):

- NAT (sNAT, dNAT, 1:1 NAT), by a fair margin
- Load Balancing
- Firewall/ACL
- Routing
- DHCP, DNS, UserData (those are very low cost)

VPN is also demanding, but is not used nearly as often as the rest.
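For context on why NAT tops the list: every port-forwarding, static NAT or 1:1 NAT 
rule becomes a set of netfilter rules on the VR, and each packet traverses those 
chains plus connection tracking. A minimal sketch of the kind of rules involved 
(illustrative addresses and interface names, assuming a Linux VR using iptables):

    # DNAT a public port to a guest (port forwarding / static NAT style)
    iptables -t nat -A PREROUTING -d 203.0.113.10 -p tcp --dport 443 \
      -j DNAT --to-destination 10.1.1.10:443

    # SNAT guest traffic leaving via the public interface
    iptables -t nat -A POSTROUTING -s 10.1.1.0/24 -o eth2 \
      -j SNAT --to-source 203.0.113.10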

Cheers
Alex


 


-Original Message-
From: Bryan Tiang  
Sent: 01 September 2024 21:48
To: us...@cloudstack.apache.org; us...@cloudstack.apache.org
Cc: dev@cloudstack.apache.org
Subject: RE: Port Forwarding in Network

Hey Alex

Noted on this, will look into it.

What's the most expensive task in the VR? Load Balancing? Routing? NAT? ACL?

Regards,
Bryan
On 30 Aug 2024 at 7:27 PM +0800, Alex Mattioli , 
wrote:
> Hi Bryan,
>
> Indeed, your use case is extreme, I'd highly recommend using more networks 
> with fewer autoscale groups.
>
> On making the VRs redundant, that will take even more resources than 
> standalone routers and won't really give you much extra uptime.
>
> Regards,
> Alex
>
>
>
>
> -Original Message-
> From: Bryan Tiang 
> Sent: Thursday, August 29, 2024 9:09 PM
> To: us...@cloudstack.apache.org; us...@cloudstack.apache.org
> Cc: dev@cloudstack.apache.org
> Subject: Re: Port Forwarding in Network
>
> We updated the VR offering to 4 cores, 4 GB. It’s a single router setup atm, 
> but we’re going to make it redundant soon.
>
> Also, we have a 3rd case which I forgot to mention.
>
> Internet/Leased Line -> ASG LB (API GW) -> Private Gateway to another 
> VPC within same zone -> ASG LB (Microservice 3) -> DB
>
> This scenario is meant to route traffic from VPC A (API GW only) to many 
> other customer VPCs.
>
> Regards,
> Bryan
> On 30 Aug 2024 at 1:48 AM +0800, Wei ZHOU , wrote:
> > Thanks for sharing. Interesting
> >
> > How many CPUs and how much memory does your VR have?
> >
> >
> > -Wei
> > On Thursday, August 29, 2024, Bryan Tiang  wrote:
> >
> > > Hi Alex and Wei Zhou,
> > >
> > > Thanks for the input, so it seems this new feature is more 
> > > beneficial for those who are currently using Shared Networks.
> > >
> > > We have 50 AutoscaleGroups in a single VR because our company 
> > > mainly distributes/broadcasts stock prices from multiple exchanges 
> > > to public users, so lots of micro services that need to autoscale 
> > > instantaneously when the markets suddenly spike/rally which can 
> > > result in 1 - 10x traffic bursts.
> > >
> > > However, most of our Autoscale Groups consist of API Gateways to 
> > > route traffic to different network tiers and microservices. This 
> > > is what takes up lots of Autoscale Groups.
> > >
> > > We had to duplicate lots of API Gateways into multiple Autoscale 
> > > Groups because the current feature only allows load balancing to a single 
> > > port.
> > >
> > > So this is more of a workaround for us to overcome the current 
> > > Autoscale feature limitation.
> > >
> > > I think something worth mentioning is that our Autoscale Groups 
> > > load-balance traffic to other Autoscale Groups.
> > >
> > > For example:
> > >
> > > Internet -> ASG LB (API GW) -> ASG LB (Microservice 1) -> Database
> > >
> > > And in some cases, we have this as well:
> > >
> > > Internet -> ASG LB (API GW) -> ASG LB (Microservice 1) -> ASG LB 
> > > (Microservice 2)-> Database
> > >
> > > I guess that makes the VR very busy.
> > >
> > > Happy to share more; it sounds like our use case is a bit extreme… but it 
> > > works so far. It’s only the CPU utilisation that’s 
> > > concerning… (memory is always around 40%, so not a bottleneck 
> > > there)
> > >
> > > Regards,
> > > Bryan
> > > On 29 Aug 2024 at 11:27 PM +0800, Alex Mattioli < 
> > > alex.matti...@shapeblue.com>, wrote:
> > > > Hi Bryan,
> > > >
> > > > What's your use case for 50 autoscale groups in 1 VR? When 
> > > > designing the
> > > feature we never envisioned more than 2 or 3.
> > > >
> > > > In NAT mode you should be able to get some 3 Gbps through the VR, in 
> > > > ROUTED mode then some 6-7 Gbps. Those numbers do go down (considerably 
> > > > sometimes) with the number of firewall rules, load balancing, etc. you 
> > > > have set up in the network.
> > > >
> > > > You'll need to create new networks

RE: Port Forwarding in Network

2024-08-30 Thread Alex Mattioli
Hi Bryan,

Indeed, your use case is extreme, I'd highly recommend using more networks with 
fewer autoscale groups.

On making the VRs redundant, that will take even more resources than standalone 
routers and won't really give you much extra uptime.

Regards,
Alex

 


-Original Message-
From: Bryan Tiang  
Sent: Thursday, August 29, 2024 9:09 PM
To: us...@cloudstack.apache.org; us...@cloudstack.apache.org
Cc: dev@cloudstack.apache.org
Subject: Re: Port Forwarding in Network

We updated the VR offering to 4 cores, 4 GB. It’s a single router setup atm, but 
we’re going to make it redundant soon.

Also, we have a 3rd case which I forgot to mention.

Internet/Leased Line -> ASG LB (API GW) -> Private Gateway to another VPC 
within same zone -> ASG LB (Microservice 3) -> DB

This scenario is meant to route traffic from VPC A (API GW only) to many other 
customer VPCs.

Regards,
Bryan
On 30 Aug 2024 at 1:48 AM +0800, Wei ZHOU , wrote:
> Thanks for sharing. Interesting
>
> How many CPUs and how much memory does your VR have?
>
>
> -Wei
> On Thursday, August 29, 2024, Bryan Tiang  wrote:
>
> > Hi Alex and Wei Zhou,
> >
> > Thanks for the input, so it seems this new feature is more 
> > beneficial for those who are currently using Shared Networks.
> >
> > We have 50 AutoscaleGroups in a single VR because our company mainly 
> > distributes/broadcasts stock prices from multiple exchanges to 
> > public users, so lots of micro services that need to autoscale 
> > instantaneously when the markets suddenly spike/rally which can 
> > result in 1 - 10x traffic bursts.
> >
> > However, most of our Autoscale Groups consist of API Gateways to 
> > route traffic to different network tiers and microservices. This is 
> > what takes up lots of Autoscale Groups.
> >
> > We had to duplicate lots of API Gateways into multiple Autoscale 
> > Groups because the current feature only allows load balancing to a single 
> > port.
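For reference, each CloudStack load balancer rule maps exactly one public port to 
one private port, and each autoscale VM group is bound to a single LB rule 
(lbruleid), so exposing an API gateway on several ports means several rules and 
groups. Roughly, in CloudMonkey-style syntax with placeholder IDs:

    create loadbalancerrule publicipid=<public-ip-uuid> name=apigw-443 \
      algorithm=roundrobin publicport=443 privateport=443
    create loadbalancerrule publicipid=<public-ip-uuid> name=apigw-8443 \
      algorithm=roundrobin publicport=8443 privateport=8443
    # each createAutoScaleVmGroup call then references one lbruleid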
> >
> > So this is more of a workaround for us to overcome the current 
> > Autoscale feature limitation.
> >
> > I think something worth mentioning is that our Autoscale Groups load-balance 
> > traffic to other Autoscale Groups.
> >
> > For example:
> >
> > Internet -> ASG LB (API GW) -> ASG LB (Microservice 1) -> Database
> >
> > And in some cases, we have this as well:
> >
> > Internet -> ASG LB (API GW) -> ASG LB (Microservice 1) -> ASG LB 
> > (Microservice 2)-> Database
> >
> > I guess that makes the VR very busy.
> >
> > Happy to share more; it sounds like our use case is a bit extreme… but it 
> > works so far. It’s only the CPU utilisation that’s concerning… 
> > (memory is always around 40%, so not a bottleneck there)
> >
> > Regards,
> > Bryan
> > On 29 Aug 2024 at 11:27 PM +0800, Alex Mattioli < 
> > alex.matti...@shapeblue.com>, wrote:
> > > Hi Bryan,
> > >
> > > What's your use case for 50 autoscale groups in 1 VR? When 
> > > designing the
> > feature we never envisioned more than 2 or 3.
> > >
> > > In NAT mode you should be able to get some 3 Gbps through the VR, in 
> > > ROUTED mode then some 6-7 Gbps. Those numbers do go down (considerably 
> > > sometimes) with the number of firewall rules, load balancing, etc. you 
> > > have set up in the network.
> > >
> > > You'll need to create new networks in ROUTED mode, there's no 
> > > migration
> > path from NATTED mode to ROUTED mode.
> > >
> > > You definitely can allow all traffic in the firewall and setup 
> > > firewall
> > rules in each individual VM.
> > >
> > > In this initial implementation there's no load balancer in ROUTED 
> > > mode,
> > so no Autoscale groups. But it is definitely a possible improvement 
> > for future versions.
> > >
> > > Cheers
> > > Alex
> > >
> > >
> > >
> > >
> > > -Original Message-
> > > From: Bryan Tiang 
> > > Sent: Thursday, August 29, 2024 11:11 AM
> > > To: us...@cloudstack.apache.org; us...@cloudstack.apache.org
> > > Cc: dev@cloudstack.apache.org
> > > Subject: RE: Port Forwarding in Network
> > >
> > > Hey Alex,
> > >
> > > It’s exciting to hear these new features coming about, and that the VR 
> > > performance will be improved as a result of pure routing.
> > >
> > > We have a pain point right now where our VR is at 75% CPU when 
> > > handling
> >

RE: Port Forwarding in Network

2024-08-29 Thread Alex Mattioli
Hi Bryan,

What's your use case for 50 autoscale groups in 1 VR?  When designing the 
feature we never envisioned more than 2 or 3.

In NAT mode you should be able to get some 3 Gbps through the VR, in ROUTED mode 
then some 6-7 Gbps. Those numbers do go down (considerably sometimes) with the 
number of firewall rules, load balancing, etc. you have set up in the network.

You'll need to create new networks in ROUTED mode, there's no migration path 
from NATTED mode to ROUTED mode.

You definitely can allow all traffic in the firewall and set up firewall rules 
in each individual VM.
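For example, with the network-level firewall wide open, a guest could enforce its 
own policy with something like the following (a generic Linux sketch, not 
CloudStack-specific; the allowed ports are arbitrary):

    # inside the VM: default-deny inbound, allow loopback, replies, SSH and HTTPS
    iptables -P INPUT DROP
    iptables -A INPUT -i lo -j ACCEPT
    iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    iptables -A INPUT -p tcp --dport 22 -j ACCEPT
    iptables -A INPUT -p tcp --dport 443 -j ACCEPT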

In this initial implementation there's no load balancer in ROUTED mode, so no 
Autoscale groups.  But it is definitely a possible improvement for future 
versions.

Cheers
Alex

 


-Original Message-
From: Bryan Tiang  
Sent: Thursday, August 29, 2024 11:11 AM
To: us...@cloudstack.apache.org; us...@cloudstack.apache.org
Cc: dev@cloudstack.apache.org
Subject: RE: Port Forwarding in Network

Hey Alex,

It’s exciting to hear these new features coming about, and that the VR 
performance will be improved as a result of pure routing.

We have a pain point right now where our VR is at 75% CPU when handling 200Mbps 
Internet Traffic. Probably because we have 50 Autoscale Groups within that 1 
VR… (VR is 4Core,4GB).

We have plans to support 1-5 Gbps of Internet bandwidth within a single VR one day, 
but if it’s already at 75%… that’s kinda worrying for us. So this is exciting.

I went through the design document and have a few questions. Is this going to be 
a new network? Or can existing VPC networks upgrade to Routed Mode?

Since every VM will get to have its own Public IP, does it mean every VM can 
have its own firewall rules now?

Will this feature be available for Autoscale Groups? We are heavy users of it.

Regards,
Bryan
On 29 Aug 2024 at 4:22 AM +0800, Alex Mattioli , 
wrote:
> Hi Marty,
>
>
>
> Here's the documentation for Routed Mode and Simple Dynamic Routing, I did 
> the original design and my colleague @Wei Zhou <wei.z...@shapeblue.com> 
> refined and implemented it.
>
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=306153967
>
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=315492858
>
> Cheers,
>
> Alex
>
>
>
>
>
>
>
> -Original Message-
> From: Marty Godsey 
> Sent: Wednesday, August 28, 2024 11:07 AM
> To: us...@cloudstack.apache.org
> Subject: Re: Port Forwarding in Network
>
>
>
> Thank you, Alex. I am excited about that addition. Even having the ability to 
> not have to NAT is very useful.
>
>
>
> Regards,
>
> Marty Godsey
>
> Rudio, LLC
>
>
>
> Book Time: https://calendly.com/rudio-martyg
>
> Support: supp...@rudio.net
>
> Ph: 859-328-1100
>
>
>
>
>
>
> From: Alex Mattioli <alex.matti...@shapeblue.com>
>
> Date: Tuesday, August 27, 2024 at 11:56 AM
>
> To: us...@cloudstack.apache.org
>
> Subject: RE: Port Forwarding in Network
>
>
>
>
>
>
> Hi Marty,
>
>
>
> There are two PRs in progress, one for Routed Mode for IPv4 in Isolated 
> Networks and VPCs and another for Simple Dynamic Route with BGP.
>
>
>
> With Routed Mode you'll be able to assign public IPs directly to VMs, with 
> traffic routed via the ACS VR; this should be ready for ACS 4.20.
>
> This has been possible for IPv6 since ACS 4.17 and will work in a similar way 
> (with some differences) for IPv4. Here's a video explaining how it works for 
> IPv6: https://www.youtube.com/watch?v=UvCSmU1TjRY&t=1583s
>
>
>
> As mentioned before, if you want to skip the VR completely then you need to 
> use Shared Networks, but then end users can't deploy the networks themselves 
> without operator intervention.
>
>
>
> Cheers
>
> Alex
>
>
>
>
>
>
>
>
>
>
>
> --

RE: Port Forwarding in Network

2024-08-28 Thread Alex Mattioli
Hi Marty,



Here's the documentation for Routed Mode and Simple Dynamic Routing, I did the 
original design and my colleague @Wei Zhou <wei.z...@shapeblue.com> refined and 
implemented it.

https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=306153967

https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=315492858

Cheers,

Alex




 


-Original Message-
From: Marty Godsey 
Sent: Wednesday, August 28, 2024 11:07 AM
To: us...@cloudstack.apache.org
Subject: Re: Port Forwarding in Network



Thank you, Alex. I am excited about that addition. Even having the ability to 
not have to NAT is very useful.



Regards,

Marty Godsey

Rudio, LLC



Book Time: https://calendly.com/rudio-martyg

Support: supp...@rudio.net

Ph: 859-328-1100






From: Alex Mattioli <alex.matti...@shapeblue.com>

Date: Tuesday, August 27, 2024 at 11:56 AM

To: us...@cloudstack.apache.org

Subject: RE: Port Forwarding in Network






Hi Marty,



There are two PRs in progress, one for Routed Mode for IPv4 in Isolated 
Networks and VPCs and another for Simple Dynamic Route with BGP.



With Routed Mode you'll be able to assign public IPs directly to VMs, with traffic 
routed via the ACS VR; this should be ready for ACS 4.20.

This has been possible for IPv6 since ACS 4.17 and will work in a similar way 
(with some differences) for IPv4. Here's a video explaining how it works for 
IPv6: https://www.youtube.com/watch?v=UvCSmU1TjRY&t=1583s



As mentioned before, if you want to skip the VR completely then you need to use 
Shared Networks, but then end users can't deploy the networks themselves 
without operator intervention.



Cheers

Alex











-Original Message-

From: Jayanth Babu A <jayanth.b...@nxtgen.com.INVALID>

Sent: Tuesday, August 27, 2024 10:27 AM

To: us...@cloudstack.apache.org

Subject: Re: Port Forwarding in Network



Hi Marty,

Please use Shared Networks [1].



[1] 
https://docs.cloudstack.apache.org/en/latest/adminguide/networking.html#shared-networks



Thanks,

Jayanth





From: Marty Godsey <mar...@rudio.net>

Sent: Tuesday, August 27, 2024 6:38:12 pm

To: us...@cloudstack.apache.org

Subject: Re: Port Forwarding in Network



This is what I went ahead and used.



Has there been a feature request to create a way to directly provide a public 
IP to an instance instead of using a VR?



Regards,

Marty Godsey





From: Jithin Raju <jithin.r...@shapeblue.com>

Date: Tuesday, August 27, 2024 at 12:06 AM

To: us...@cloudstack.apache.org

Subject: Re: Port Forwarding in Network






Hi Marty,



Could you use static NAT instead?



-Jithin



From: Marty Godsey <mar...@rudio.net>

Date: Monday, 26 August 2024 at 9:26 PM

To: us...@cloudstack.apache.org

Subject: Port Forwarding in Network

Is there a way to easily forward all ports without having to put in 1 – 65535? 
I know it’s small and petty, but in other places, you can do a -1 to specify 
all. You don’t seem to be able to do that here.



Regards,

Marty Godsey
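For reference, the two approaches discussed above look roughly like this via the 
API (CloudMonkey-style syntax, placeholder UUIDs): static NAT maps every port of 
the public IP to the instance, while a port-forwarding rule can at best cover one 
contiguous range.

    # static NAT: all ports of the public IP map to the VM
    enable staticnat ipaddressid=<public-ip-uuid> virtualmachineid=<vm-uuid>

    # port forwarding: one rule per port or contiguous range
    create portforwardingrule ipaddressid=<public-ip-uuid> virtualmachineid=<vm-uuid> \
      protocol=tcp publicport=1 publicendport=65535 privateport=1 privateendport=65535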








RE: [DISCUSS] Deprecate/remove support for EOL distros and hypervisors

2024-06-20 Thread Alex Mattioli
I'd like it if we keep EL7 for at least one more version; the transition path out 
of it is clear now, but many cloud operators haven't replaced it yet.

On the rest +1 

 


-Original Message-
From: Rohit Yadav  
Sent: Thursday, June 20, 2024 11:43 AM
To: dev@cloudstack.apache.org
Subject: [DISCUSS] Deprecate/remove support for EOL distros and hypervisors

All,

Referencing 
https://docs.cloudstack.apache.org/en/4.19.0.0/releasenotes/compat.html, some 
of the distros and hypervisors we support have reached or are reaching EOL by 
the end of this month.

Please review and advise how we should deprecate/remove the following for the 
next 4.20 release (i.e. compatibility matrix for the future 4.20 release notes):

Distros:

  * EL7 (CentOS 7, RHEL7, https://endoflife.date/centos)
  * Ubuntu 18.04 (https://endoflife.date/ubuntu)


Software requirements:

  * JRE 11 (Discuss - should we transition to support JRE/JDK 17 or 21 for 4.20? 
    https://endoflife.date/oracle-jdk And do all supported distros have a JRE 17/21 
    package/dependency available?)
  * MySQL 5.6, 5.7 (https://endoflife.date/mysql)

Hypervisors:

  * KVM: Ubuntu 18.04 (https://endoflife.date/ubuntu), EL7 (https://endoflife.date/centos)
  * XenServer: All versions except 8.x (retain note that it's not tested, 
    https://www.citrix.com/support/product-lifecycle/legacy-product-matrix.html)
  * XCP-ng: All versions except 8.2/LTS (https://endoflife.date/xcp-ng)
  * VMware: 6.5, 6.7 (https://endoflife.date/vcenter)


Regards.

 




RE: [Proposal] Storage Filesystem as a First Class Feature

2024-06-19 Thread Alex Mattioli
+1 on that,  keeping it hypervisor agnostic is key.

 


-Original Message-
From: Nux  
Sent: Wednesday, June 19, 2024 10:14 AM
To: dev@cloudstack.apache.org
Cc: us...@cloudstack.apache.org
Subject: Re: [Proposal] Storage Filesystem as a First Class Feature

Thanks Piotr,

This is the second time virtio-fs has been mentioned and I just researched it a 
bit; it looks like something really nice to have in Cloudstack, definitely 
something to look at in the future.

Nice as it is though, it has a big drawback: it's KVM-only, so for now we'll 
stick to "old school" tech that can be used in an agnostic manner.

You are more than welcome to share thoughts on the other details presented, 
perhaps pros/cons on filesystems and other gotchas you may have encountered 
yourself.

On 2024-06-19 07:04, Piotr Pisz wrote:
> Hi,
> We considered a similar problem in our company.
> Shared storage is needed between VMs running on different networks.
> NFS/CephFS is ok as long as the VM can see the source.
> The best solution would be to use https://virtio-fs.gitlab.io/ Any FS 
> would be used on the host side (e.g. NFS or CephFS) and exported to 
> the VM natively (the network problem disappears).
> But you should start by introducing an appropriate mechanism on the CS 
> side (similar in operation to Manila Share from Openstack).
>  So, the initiative itself is very good.
> 
> Overall, CloudStack has been heading in the right direction lately :-)
> 
> Best regards,
> Piotr
> 
> 
> -Original Message-
> From: Nux 
> Sent: Wednesday, June 19, 2024 12:59 AM
> To: dev@cloudstack.apache.org; Users 
> Subject: Re: [Proposal] Storage Filesystem as a First Class Feature
> 
> Hi, I'd like to draw the attention to some of the more operational 
> aspects of this feature, mainly the storage appliance internals and 
> UI.
> 
> So long story short, I've discussed with Abhisar and others and we'll 
> be deploying a VM based on the Cloudstack Debian systemvm template 
> which will export NFS v3/4 for user VMs to consume.
> 
> Below are some of the finer details, please have a read if you are 
> interested in this feature and feel free to comment and make 
> suggestions.
> 
> 1 - The appliance will only have a single export, that export will be 
> a single disk (data volume). Keep it simple.
> 2 - GPT partition table and a single partition, filesystem probably 
> XFS and/or customisable - something stock Debian supports, simple and 
> boring stuff.
> 3 - NFS export should be simple, we can standardise on a path name eg 
> /nfs or /fileshare and it will be identical on all appliances.
> 4 - Starting specs: 2 cores, 4 GB RAM - should be OK for a small NFS 
> server, the appliance can be upgraded to bigger offerings.
> 5 - Disk offering should be flagged accordingly, the disk offering 
> will have a flag/checkbox for "storage appliance" use.
> 6 - This appliance will not be a system VM, it will be a "blackbox", 
> but the approach will be similar here to CKS.
> 7 - Security model: by default we export to * (all hosts) into a 
> single network - for isolated networks - in SG zones we need to play 
> with security groups & a global setting for dumb shared networks 
> (without SG) because of security implications - requires further 
> discussion.
> 8 - We export with default, best practices NFS options - anything 
> against no_root_squash?
> 9 - Explore exporting the file share via multiple protocols - sftp, 
> tftp, smb, nfs, http(s)? - The issue here is authentication becomes a 
> problem, also user permissions will get messy and possibly conflict 
> with no_root_squash, in fact might require an all_squash and 
> everything mapped to a single user that will be then also used for all 
> those other services.
> Also
> logging will become necessary. Thoughts?
> 10 - UI details, but this will probably show up in the Storage section 
> somehow.
> 11 - Display free/used space, create alerts for full disk etc for this 
> appliance.
> 12 - Formatting and setting up to be done by an internal agent, 
> specifics are sent via the kernel cmd line of the VM, similar to how 
> we configure system VMs.
> 
> What do you folks think of these points and have I missed anything 
> crucial?
> 
> 
> 
> On 2024-06-04 05:04, Abhisar Sinha wrote:
>> Hi,
>> 
>> I would like to propose supporting storage filesystem as a 
>> first-class feature in Cloudstack.
>> The File Share can be associated with one or more guest networks or 
>> vpc tiers and can be used by any VM on the network in a shared 
>> manner. It is designed to be resizable and highly available. This 
>> feature can later be used as integration endpoints with the CSI 
>> driver, go-sdk, Terraform, Ansible and others.
>> 
>> The draft functional spec is here :
>> 
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Storage+Filesys
> tem+as
> +a+First+Class+Feature
>> 
>> Looking forward to your comments and suggestions.
>> 
>> Thanks,
>> Abhisar



RE: [Proposal] Storage Filesystem as a First Class Feature

2024-06-19 Thread Alex Mattioli
Hi Piotr,

> Overall, CloudStack has been heading in the right direction lately :-)

That's great to hear :)

Cheers
Alex

 


-Original Message-
From: Piotr Pisz  
Sent: Wednesday, June 19, 2024 8:04 AM
To: us...@cloudstack.apache.org; dev@cloudstack.apache.org
Subject: RE: [Proposal] Storage Filesystem as a First Class Feature

Hi,
We considered a similar problem in our company. 
Shared storage is needed between VMs running on different networks. 
NFS/CephFS is ok as long as the VM can see the source. 
The best solution would be to use https://virtio-fs.gitlab.io/ Any FS would be 
used on the host side (e.g. NFS or CephFS) and exported to the VM natively (the 
network problem disappears).
But you should start by introducing an appropriate mechanism on the CS side 
(similar in operation to Manila Share from Openstack).
 So, the initiative itself is very good.

Overall, CloudStack has been heading in the right direction lately :-)

Best regards,
Piotr


-Original Message-
From: Nux 
Sent: Wednesday, June 19, 2024 12:59 AM
To: dev@cloudstack.apache.org; Users 
Subject: Re: [Proposal] Storage Filesystem as a First Class Feature

Hi, I'd like to draw the attention to some of the more operational aspects of 
this feature, mainly the storage appliance internals and UI.

So long story short, I've discussed with Abhisar and others and we'll be 
deploying a VM based on the Cloudstack Debian systemvm template which will 
export NFS v3/4 for user VMs to consume.

Below are some of the finer details, please have a read if you are 
interested in this feature and feel free to comment and make suggestions.

1 - The appliance will only have a single export, that export will be a single 
disk (data volume). Keep it simple.
2 - GPT partition table and a single partition, filesystem probably XFS and/or 
customisable - something stock Debian supports, simple and boring stuff.
3 - NFS export should be simple, we can standardise on a path name eg /nfs or 
/fileshare and it will be identical on all appliances.
4 - Starting specs: 2 cores, 4 GB RAM - should be OK for a small NFS server, 
the appliance can be upgraded to bigger offerings.
5 - Disk offering should be flagged accordingly, the disk offering will have a 
flag/checkbox for "storage appliance" use.
6 - This appliance will not be a system VM, it will be a "blackbox", but the 
approach will be similar here to CKS.
7 - Security model: by default we export to * (all hosts) into a single network 
- for isolated networks - in SG zones we need to play with security groups & a 
global setting for dumb shared networks (without SG) because of security 
implications - requires further discussion.
8 - We export with default, best practices NFS options - anything against 
no_root_squash?
9 - Explore exporting the file share via multiple protocols - sftp, tftp, smb, 
nfs, http(s)? - The issue here is authentication becomes a problem, also user 
permissions will get messy and possibly conflict with no_root_squash, in fact 
might require an all_squash and everything mapped to a single user that will be 
then also used for all those other services. Also logging will become 
necessary. Thoughts?
10 - UI details, but this will probably show up in the Storage section somehow.
11 - Display free/used space, create alerts for full disk etc for this 
appliance.
12 - Formatting and setting up to be done by an internal agent, specifics are 
sent via the kernel cmd line of the VM, similar to how we configure system VMs.

What do you folks think of these points and have I missed anything crucial?
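To make points 1-3 and 8 concrete, the internal agent's provisioning would 
presumably boil down to something like this (a sketch only; the device name, 
export path and export options are illustrative, not decisions from this thread):

    # single data volume, GPT label, one XFS partition (points 1-2)
    parted -s /dev/vdb mklabel gpt mkpart primary xfs 0% 100%
    mkfs.xfs /dev/vdb1
    mkdir -p /fileshare
    mount /dev/vdb1 /fileshare

    # /etc/exports - standardised path, default export to * (points 3, 7, 8)
    /fileshare *(rw,sync,no_subtree_check,no_root_squash)

    # reload the export table
    exportfs -ra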



On 2024-06-04 05:04, Abhisar Sinha wrote:
> Hi,
> 
> I would like to propose supporting storage filesystem as a first-class 
> feature in Cloudstack.
> The File Share can be associated with one or more guest networks or 
> vpc tiers and can be used by any VM on the network in a shared manner. 
> It is designed to be resizable and highly available. This feature can 
> later be used as integration endpoints with the CSI driver, go-sdk, 
> Terraform, Ansible and others.
> 
> The draft functional spec is here : 
>
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Storage+Filesystem+as
+a+First+Class+Feature
> 
> Looking forward to your comments and suggestions.
> 
> Thanks,
> Abhisar




RE: [Proposal] Storage Filesystem as a First Class Feature

2024-06-06 Thread Alex Mattioli
Fair points, thanks for the input.

Cheers
Alex

 


-Original Message-
From: Wido den Hollander  
Sent: Thursday, June 6, 2024 4:38 PM
To: dev@cloudstack.apache.org; Wei ZHOU 
Cc: Wei Zhou ; Abhisar Sinha 
; us...@cloudstack.apache.org
Subject: Re: [Proposal] Storage Filesystem as a First Class Feature



On 06/06/2024 at 11:26, Wei ZHOU wrote:
>> @Wei Zhou If the network into which the StorageVM is deployed runs IPv6 (as per 
>> your implementation of IPv6), it should automatically get an IPv6 IP, correct?
> 
> yes,it should get Ipv6 addr advertised by cloudstack VR. @Alexblue.com 
> we need to make sure IPv6 is enabled in the storagefs vm (Ipv6 is 
> disabled by default in systemvm template as far as I know), and proper 
> firewall rules are applied.

Don't forget that you also need to make sure that your NFS /etc/exports file 
contains the IPv6 addresses of VMs who want to mount it. Otherwise it still 
doesn't work.
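That is, if the export is restricted to specific clients rather than *, the 
appliance's /etc/exports would need entries for both address families, e.g. 
(prefixes purely illustrative):

    /fileshare 10.1.1.0/24(rw,sync,no_subtree_check,no_root_squash)
    /fileshare 2001:db8:100::/64(rw,sync,no_subtree_check,no_root_squash)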

> 
>> @Wido den Hollander @Wei Zhou How much effort do you guys think it would 
>> take to add support for VirtioFS? I'm not super aware of it; what would the 
>> benefits be? (I've quickly looked at Wido's links, but I'd rather get info 
>> from you guys directly).
> I see the benefits . However I do not know a strong use case of it.
> maybe @wido can advise ?

I haven't used it before because the support in Qemu + Libvirt is fairly new. 
The main benefit is that the end-user never has access to the NAS or storage 
network. The VM doesn't know if it's NFS or CephFS underneath, it simply has a 
filesystem. This takes away a lot of configuration inside the VM or needed 
software (CephFS drivers).

It adds additional security since the VM doesn't need to be able to talk to the 
storage device(s), only the hypervisors do this.

In the future Virtio-FS maybe gets support for rate limiting or other features. 
I personally think this is the way forward.

I would at least make sure it's understood that it exists and the code already 
takes this into account without making it a proper implementation from day one.

You would need to mount the FS on the hypervisor and then re-export it to the 
VM. This requires hooks to be executed for example.

Wido
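For anyone unfamiliar with it, the hypervisor-side wiring Wido describes 
corresponds roughly to these fragments of a libvirt domain XML (paths and the 
mount tag are made up for illustration; virtio-fs also requires shared memory 
backing such as memfd):

    <memoryBacking>
      <source type='memfd'/>
      <access mode='shared'/>
    </memoryBacking>
    <devices>
      <filesystem type='mount' accessmode='passthrough'>
        <driver type='virtiofs'/>
        <source dir='/mnt/cephfs/share1'/>  <!-- already mounted on the host -->
        <target dir='share1'/>              <!-- mount tag seen by the guest -->
      </filesystem>
    </devices>

Inside the guest the share is then mounted with: mount -t virtiofs share1 /mnt/share1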

> 
> On Thu, Jun 6, 2024 at 11:01 AM Alex Mattioli 
>  wrote:
>>
>>
>> @Wei Zhou If the network into which the StorageVM is deployed runs IPv6 (as per 
>> your implementation of IPv6), it should automatically get an IPv6 IP, correct?
>>
>> @Wido den Hollander @Wei Zhou How much effort do you guys think it would 
>> take to add support for VirtioFS? I'm not super aware of it; what would the 
>> benefits be? (I've quickly looked at Wido's links, but I'd rather get info 
>> from you guys directly).
>>
>> Cheers
>> Alex
>>
>>
>>
>>
>> -Original Message-
>> From: Wei ZHOU 
>> Sent: Thursday, June 6, 2024 10:50 AM
>> To: dev@cloudstack.apache.org
>> Cc: Abhisar Sinha 
>> Subject: Re: [Proposal] Storage Filesystem as a First Class Feature
>>
>> Hi Wido,
>>
>> Thanks for your feedback.
>>
>> It is a great idea to support virtio-fs. We could add VIRTIOFS as a valid 
>> value of enum ExportProtocol, and implement it in a separate plugin in the 
>> future.
>> Have you tested virtio-fs before ? Could you share more info if possible?
>> - is it supported by libvirt-java ?
>> - does it support hot plug or hot unplug ?
>>
>> I agree with you that we should consider IPv6 (ip and firewall rules) in 
>> storagefs vm.
>> cc abhisar.si...@shapeblue.com
>>
>>
>> Kind regards,
>> Wei
>>
>> On Thu, Jun 6, 2024 at 6:43 AM Wido den Hollander  
>> wrote:
>>>
>>>
>>>
>>> On 04/06/2024 at 06:04, Abhisar Sinha wrote:
>>>> Hi,
>>>>
>>>> I would like to propose supporting storage filesystem as a first-class 
>>>> feature in Cloudstack.
>>>> The File Share can be associated with one or more guest networks or vpc 
>>>> tiers and can be used by any VM on the network in a shared manner. It is 
>>>> designed to be resizable and highly available. This feature can later be 
>>>> used as integration endpoints with the CSI driver, go-sdk, Terraform, 
>>>> Ansible and others.
>>>>
>>>> The draft functional spec is here :
>>>> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Storage+File
>>>> s
>>>> ystem+as+a+First+Class+Feature
>>>>
>>>> Looking forward to your comments and suggestions.
>>>>
>>>
>>> I think this is great! Especially the Storage VM. Few things to keep 
>>> in
>>> mind:
>>>
>>> - Have we thought about passthrough of FileSystems coming from the 
>>> HV and being passed through to the VM [0]
>>> - The StorageFsVm, can we make sure it supports IPv6 from the start, 
>>> best would be if it. Make sure all the code at least supports this 
>>> for ACLs and such. The VM itself should obtain an IPv6 address when 
>>> possible and open the proper ports in it's firewall
>>>
>>> Wido
>>>
>>> [0]:
>>> - https://virtio-fs.gitlab.io/
>>> - https://chrisirwin.ca/posts/sharing-host-files-with-kvm/
>>>
>>>
>>>> Thanks,
>>>> Abhisar
>>>>
>>>>
>>>>
>>>>


RE: [Proposal] Storage Filesystem as a First Class Feature

2024-06-06 Thread Alex Mattioli

@Wei Zhou If the network into which the StorageVM is deployed runs IPv6 (as per 
your implementation of IPv6), it should automatically get an IPv6 IP, correct?

@Wido den Hollander @Wei Zhou How much effort do you guys think it would take 
to add support for VirtioFS? I'm not super aware of it; what would the benefits 
be? (I've quickly looked at Wido's links, but I'd rather get info from you guys 
directly).

Cheers
Alex 

 


-Original Message-
From: Wei ZHOU  
Sent: Thursday, June 6, 2024 10:50 AM
To: dev@cloudstack.apache.org
Cc: Abhisar Sinha 
Subject: Re: [Proposal] Storage Filesystem as a First Class Feature

Hi Wido,

Thanks for your feedback.

It is a great idea to support virtio-fs. We could add VIRTIOFS as a valid value 
of enum ExportProtocol, and implement it in a separate plugin in the future.
Have you tested virtio-fs before ? Could you share more info if possible?
- is it supported by libvirt-java ?
- does it support hot plug or hot unplug ?

I agree with you that we should consider IPv6 (ip and firewall rules) in 
storagefs vm.
cc abhisar.si...@shapeblue.com


Kind regards,
Wei

On Thu, Jun 6, 2024 at 6:43 AM Wido den Hollander  
wrote:
>
>
>
> On 04/06/2024 at 06:04, Abhisar Sinha wrote:
> > Hi,
> >
> > I would like to propose supporting storage filesystem as a first-class 
> > feature in Cloudstack.
> > The File Share can be associated with one or more guest networks or vpc 
> > tiers and can be used by any VM on the network in a shared manner. It is 
> > designed to be resizable and highly available. This feature can later be 
> > used as integration endpoints with the CSI driver, go-sdk, Terraform, 
> > Ansible and others.
> >
> > The draft functional spec is here : 
> > https://cwiki.apache.org/confluence/display/CLOUDSTACK/Storage+Files
> > ystem+as+a+First+Class+Feature
> >
> > Looking forward to your comments and suggestions.
> >
>
> I think this is great! Especially the Storage VM. Few things to keep 
> in
> mind:
>
> - Have we thought about passthrough of FileSystems coming from the HV 
> and being passed through to the VM [0]
> - The StorageFsVm, can we make sure it supports IPv6 from the start, 
> best would be if it. Make sure all the code at least supports this for 
> ACLs and such. The VM itself should obtain an IPv6 address when 
> possible and open the proper ports in it's firewall
>
> Wido
>
> [0]:
> - https://virtio-fs.gitlab.io/
> - https://chrisirwin.ca/posts/sharing-host-files-with-kvm/
>
>
> > Thanks,
> > Abhisar
> >
> >
> >
> >


RE: [Proposal] Storage Filesystem as a First Class Feature

2024-06-06 Thread Alex Mattioli
Hi Wido,

The StorageVM will receive an IP from the Isolated network (or VPC tier) it 
belongs to. If that network is setup for IPv6 (or dual-stack), then the 
StorageVM will use IPv6. 

Regarding file system pass-through, that's definitely a possible enhancement 
for a future version; we'd like to keep this one as simple as we can for now.

Cheers
Alex

 


-Original Message-
From: Wido den Hollander  
Sent: Thursday, June 6, 2024 6:43 AM
To: dev@cloudstack.apache.org; Abhisar Sinha 
Subject: Re: [Proposal] Storage Filesystem as a First Class Feature



On 04/06/2024 at 06:04, Abhisar Sinha wrote:
> Hi,
> 
> I would like to propose supporting storage filesystem as a first-class 
> feature in Cloudstack.
> The File Share can be associated with one or more guest networks or vpc tiers 
> and can be used by any VM on the network in a shared manner. It is designed 
> to be resizable and highly available. This feature can later be used as 
> integration endpoints with the CSI driver, go-sdk, Terraform, Ansible and 
> others.
> 
> The draft functional spec is here : 
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Storage+Filesys
> tem+as+a+First+Class+Feature
> 
> Looking forward to your comments and suggestions.
> 

I think this is great! Especially the Storage VM. Few things to keep in
mind:

- Have we thought about passthrough of FileSystems coming from the HV and being 
passed through to the VM [0]
- The StorageFsVm, can we make sure it supports IPv6 from the start? Best would 
be if it did. Make sure all the code at least supports this for ACLs and such. The 
VM itself should obtain an IPv6 address when possible and open the proper ports 
in its firewall

Wido

[0]:
- https://virtio-fs.gitlab.io/
- https://chrisirwin.ca/posts/sharing-host-files-with-kvm/


> Thanks,
> Abhisar
> 
>   
> 
> 


RE: [Proposal] Storage Filesystem as a First Class Feature

2024-06-04 Thread Alex Mattioli
That's a major piece of technical debt that should have been addressed years ago. 
I'm glad to see it is being addressed now.

Cheers
Alex


 


-Original Message-
From: Abhisar Sinha  
Sent: Tuesday, June 4, 2024 6:04 AM
To: dev@cloudstack.apache.org
Subject: [Proposal] Storage Filesystem as a First Class Feature

Hi,

I would like to propose supporting storage filesystem as a first-class feature 
in Cloudstack.
The File Share can be associated with one or more guest networks or vpc tiers 
and can be used by any VM on the network in a shared manner. It is designed to 
be resizable and highly available. This feature can later be used as 
integration endpoints with the CSI driver, go-sdk, Terraform, Ansible and 
others.

The draft functional spec is here : 
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Storage+Filesystem+as+a+First+Class+Feature

Looking forward to your comments and suggestions.

Thanks,
Abhisar

 




RE: Dynamic routing for routed mode IPv6 and IPv4 Isolated and VPC networks

2024-05-23 Thread Alex Mattioli
The idea is to allocate the AS number pools in the same way we allocate VLANs 
to a zone, with the possibility to enter the AS number manually per network 
(depending on offering, just like in the case of VLANs).

>This would solve most use-cases from the start:
>- BGP peer on zone level
>  - override on network level
>- AS number pool
>  - networks refer to this pool
>- BGP multihop enabled yes or no
>   - zone level
>   - network level override
>- BGP password

Definitely how we want to implement it, with a few additions, thanks for the 
input.

Cheers
Alex

 


-Original Message-
From: Wido den Hollander  
Sent: Thursday, May 23, 2024 9:16 AM
To: Alex Mattioli ; dev@cloudstack.apache.org; 
us...@cloudstack.apache.org; adietr...@ussignal.com
Subject: Re: Dynamic routing for routed mode IPv6 and IPv4 Isolated and VPC 
networks



On 22/05/2024 at 14:55, Alex Mattioli wrote:
> Thanks for the input Wido,
> 
>> That said, you could also opt that you can specify BGP peers at zone level 
>> and override them at network level if one prefers. Nothing specified at the 
>> network? The zone-level peers are used. If you do specify them at the 
>> network level those are used. Again, think about multihop.
> 

Have you thought about the AS number pool? This pool could be assigned to a 
network. All networks can point to the same AS number pool or have multiple 
pools where you might make different choices.

On the network level this allows you to create specific BGP filters based on 
the AS number. When these are in fixed 'blocks' you can create better filters.

> That's exactly what I had in mind, the same way we set DNS for the zone but can 
> specify at network level as well. This way we keep self-service intact for when 
> end-users simply want a routed network that peers with whatever the provider 
> has set up upstream but also give the ability to either peer with a user-managed 
> VNF upstream or an operator-provided router.
> 
> I hope this way we can cater for most use cases, at least with a first simple 
> implementation.
> 
> Will definitely keep multihop in mind.

+ BGP password. This would solve most use-cases from the start:

- BGP peer on zone level
   - override on network level
- AS number pool
   - networks refer to this pool
- BGP multihop enabled yes or no
   - zone level
   - network level override
- BGP password

Wido
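On the upstream side, zone-level defaults of this kind could translate into an 
FRR snippet along these lines (a sketch with made-up ASN, transit prefix and 
placeholder password, using dynamic neighbors so new VRs need no per-peer config):

    router bgp 65000
     neighbor ACS-VRS peer-group
     neighbor ACS-VRS remote-as external
     neighbor ACS-VRS password <shared-secret>
     ! only if multihop is enabled for the zone/network
     neighbor ACS-VRS ebgp-multihop 2
     ! accept sessions from any VR on the transit subnet
     bgp listen range 192.168.153.0/24 peer-group ACS-VRS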

> 
> Cheers,
> Alex
> 
> 
>   
> 
> 
> -Original Message-
> From: Wido den Hollander 
> Sent: Monday, May 20, 2024 8:22 PM
> To: dev@cloudstack.apache.org; Alex Mattioli 
> ; us...@cloudstack.apache.org; 
> adietr...@ussignal.com
> Subject: Re: Dynamic routing for routed mode IPv6 and IPv4 Isolated 
> and VPC networks
> 
> 
> 
> Op 20/05/2024 om 14:45 schreef Alex Mattioli:
>> Hi Alex,
>>
>> In this scenario:
>>
>>> I think adding the ability to add network specific peers as mentioned in 
>> one of your prior replies would still allow the level of control some 
>> operators (myself included) may desire.
>>
>> How do you propose network specific peers to be implemented?
>>
> 
> I do agree with Alex (Dietrich) that I think BGP peers should be 
> configured per network. There is no guarantee that every VLAN/VNI
> (VXLAN) ends up at the same pair of routers. Technically there is also no 
> need to do so.
> 
> Let's say I have two VNI (VXLAN):
> 
> VNI 500:
> Router 1: 192.168.153.1 / 2001:db8::153:1 Router 2: 192.168.153.2 / 
> 2001:db8::153:2
> 
> VNI 600:
> Router 1: 192.168.155.1 / 2001:db8::155:1 Router 2: 192.168.155.2 / 
> 2001:db8::155:2
> 
> In this case you would say that the upstream BGP peers are .153.1/2,
> .155.1/2 (and their IPv6 addresses). No need for BGP multihop.
> 
> Talking about multihop, I would make that optional, people might want to have 
> two central BGP routers where each VR peers with (multihop) and those routers 
> distribute the routes into the network again.
> 
> Per network you create you also provide the ASN range, but even better would 
> be to refer to a pool. You can use one pool for your zone by referencing 
> every network to the same pool or simply use multiple pools if your network 
> requires it.
> 
> That said, you could also opt that you can specify BGP peers at zone level 
> and override them at network level if one prefers. Nothing specified at the 
> network? The zone-level peers are used. If you do specify them at the network 
> level those are used. Again, think about multihop.
> 
> Wido
> 
>> Regards
>> Alex
>>
>>
>>
>>
>>
>> -Original Message-
>> From: Dietrich, Alex 
>> Sent: Monday, 

RE: Dynamic routing for routed mode IPv6 and IPv4 Isolated and VPC networks

2024-05-22 Thread Alex Mattioli
Thanks for the input Wido,

> That said, you could also opt that you can specify BGP peers at zone level 
> and override them at network level if one prefers. Nothing specified at the 
> network? The zone-level peers are used. If you do specify them at the 
> network level those are used. Again, think about multihop.

That's exactly what I had in mind, the same way we set DNS for the zone but can 
specify at network level as well. This way we keep self-service intact for when 
end-users simply want a routed network that peers with whatever the provider 
has set up upstream but also give the ability to either peer with a user-managed 
VNF upstream or an operator-provided router.

I hope this way we can cater for most use cases, at least with a first simple 
implementation.

Will definitely keep multihop in mind.

Cheers,
Alex


 


-Original Message-
From: Wido den Hollander  
Sent: Monday, May 20, 2024 8:22 PM
To: dev@cloudstack.apache.org; Alex Mattioli ; 
us...@cloudstack.apache.org; adietr...@ussignal.com
Subject: Re: Dynamic routing for routed mode IPv6 and IPv4 Isolated and VPC 
networks



On 20/05/2024 at 14:45, Alex Mattioli wrote:
> Hi Alex,
> 
> In this scenario:
> 
>> I think adding the ability to add network specific peers as mentioned in one 
>> of your prior replies would still allow the level of control some operators 
>> (myself included) may desire.
> 
> How do you propose network specific peers to be implemented?
> 

I do agree with Alex (Dietrich) that I think BGP peers should be configured per 
network. There is no guarantee that every VLAN/VNI
(VXLAN) ends up at the same pair of routers. Technically there is also no need 
to do so.

Let's say I have two VNI (VXLAN):

VNI 500:
Router 1: 192.168.153.1 / 2001:db8::153:1 Router 2: 192.168.153.2 / 
2001:db8::153:2

VNI 600:
Router 1: 192.168.155.1 / 2001:db8::155:1 Router 2: 192.168.155.2 / 
2001:db8::155:2

In this case you would say that the upstream BGP peers are .153.1/2,
.155.1/2 (and their IPv6 addresses). No need for BGP multihop.

Talking about multihop, I would make that optional, people might want to have 
two central BGP routers where each VR peers with (multihop) and those routers 
distribute the routes into the network again.

Per network you create you also provide the ASN range, but even better would be 
to refer to a pool. You can use one pool for your zone by referencing every 
network to the same pool or simply use multiple pools if your network requires 
it.

That said, you could also opt that you can specify BGP peers at zone level and 
override them at network level if one prefers. Nothing specified at the 
network? The zone-level peers are used. If you do specify them at the network 
level those are used. Again, think about multihop.

Wido

> Regards
> Alex
> 
> 
>   
> 
> 
> -Original Message-
> From: Dietrich, Alex 
> Sent: Monday, May 20, 2024 2:21 PM
> To: us...@cloudstack.apache.org; dev@cloudstack.apache.org
> Subject: Re: Dynamic routing for routed mode IPv6 and IPv4 Isolated 
> and VPC networks
> 
> Hi Alex,
> 
> This may be a difference in perspective on the implementation of BGP at the 
> tenant level. I see the ability this would provide to seamlessly establish 
> those peering relationships with minimal intervention (helping scalability).
> 
> I think adding the ability to add network specific peers as mentioned in one 
> of your prior replies would still allow the level of control some operators 
> (myself included) may desire.
> 
> Thanks,
> Alex
> 
> 
> Alex Dietrich
> Senior Network Engineer, US Signal
> 
> 616-233-5094  |  www.ussignal.com  |  adietr...@ussignal.com
>
> 201 Ionia Ave SW, Grand Rapids, MI 49503
> 
> From: Alex Mattioli 
> Date: Monday, May 20, 2024 at 7:51 AM
> To: us...@cloudstack.apache.org , 
> dev@cloudstack.apache.org 
> Subject: RE: Dynamic routing for routed mode IPv6 and IPv4 Isolated 
> and VPC networks EXTERNAL
> 
> Hi Alex,
> 
>> I am not convinced that specifying BGP peers at the zone level 

RE: Dynamic routing for routed mode IPv6 and IPv4 Isolated and VPC networks

2024-05-20 Thread Alex Mattioli
Hi Alex,

In this scenario:

>I think adding the ability to add network specific peers as mentioned in one 
>of your prior replies would still allow the level of control some operators 
>(myself included) may desire.

How do you propose network specific peers to be implemented?

Regards
Alex


 


-Original Message-
From: Dietrich, Alex  
Sent: Monday, May 20, 2024 2:21 PM
To: us...@cloudstack.apache.org; dev@cloudstack.apache.org
Subject: Re: Dynamic routing for routed mode IPv6 and IPv4 Isolated and VPC 
networks

Hi Alex,

This may be a difference in perspective on the implementation of BGP at the tenant 
level. I see the ability this would provide to seamlessly establish those 
peering relationships with minimal intervention (helping scalability).

I think adding the ability to add network specific peers as mentioned in one of 
your prior replies would still allow the level of control some operators 
(myself included) may desire.

Thanks,
Alex


Alex Dietrich
Senior Network Engineer, US Signal

616-233-5094  |  www.ussignal.com  |  adietr...@ussignal.com

201 Ionia Ave SW, Grand Rapids, MI 49503

From: Alex Mattioli 
Date: Monday, May 20, 2024 at 7:51 AM
To: us...@cloudstack.apache.org , 
dev@cloudstack.apache.org 
Subject: RE: Dynamic routing for routed mode IPv6 and IPv4 Isolated and VPC 
networks EXTERNAL

Hi Alex,

> I am not convinced that specifying BGP peers at the zone level is a good idea 
> given the impacts BGP can have on a given network. I would much rather see 
> both peer and AS specification handled at the network configuration, or 
> another more specific level.

I don't see how else end users would be able to automatically create routed 
networks without intervention from the operator.


Cheers
Alex




-Original Message-
From: Dietrich, Alex 
Sent: Thursday, May 16, 2024 2:23 PM
To: us...@cloudstack.apache.org; dev@cloudstack.apache.org
Subject: Re: Dynamic routing for routed mode IPv6 and IPv4 Isolated and VPC 
networks

Hello Alex,

I appreciate this back and forth as I am excited about the potential this 
feature would hold.


  *   This is a very valid point.  We could add network specific BGP peers as 
well, which would override the automatic AS allocation, in the same way that we 
now allocate DNS servers in the zone level but can override that by manually 
selecting different DNS servers at network creation time.  Would that address 
your point?

Why do the network-specific BGP peers need to override automatic AS 
allocation? In my mind there isn’t a dependency that needs to exist between 
those two, as they are somewhat independent of one another.

I am not convinced that specifying BGP peers at the zone level is a good idea 
given the impacts BGP can have on a given network. I would much rather see both 
peer and AS specification handled at the network configuration, or another more 
specific level.

Thanks,
Alex

From: Alex Mattioli 
Date: Wednesday, May 15, 2024 at 10:15 AM
To: us...@cloudstack.apache.org , 
dev@cloudstack.apache.org 
Subject: RE: Dynamic routing for routed mode IPv6 and IPv4 Isolated and VPC 
networks EXTERNAL

Hi Alex,

> Would zone-level BGP peers be those used by default for establishing new BGP 
> peers in networks where dynamic routing is enabled?

Correct, so far we plan to allow for up to 4 BGP peers for a zone, with the 
possibility to setup different metrics to each peer.

> This could affect a multi-tenant model where there may be different BGP peers 
> presented based on what the upstream network provides. An example of this 
> would be where the VLANs associated to a given account are associated to 
> distinct VRFs and may have different peering IP addresses.
> I would like to see the peering IP addresses specific to the networks where 
> dynamic routing is enabled instead of specifying defaults at the zone level.


This is a very valid point.  We could add network specific BGP peers as well, 
which would override the automatic AS allocation, in the same way that we now 
allocate DNS servers in the zone level but can override that by manually 
selecting different DNS servers at network creation time.  Would that address 
your point?

Cheers,
Alex




-Original Message-
From: Dietrich, Alex 
Sent

RE: Dynamic routing for routed mode IPv6 and IPv4 Isolated and VPC networks

2024-05-20 Thread Alex Mattioli
Hi Alex,

> I am not convinced that specifying BGP peers at the zone level is a good idea 
> given the impacts BGP can have on a given network. I would much rather see 
> both peer and AS specification handled at the network configuration, or 
> another more specific level.

I don't see how else end users would be able to automatically create routed 
networks without intervention from the operator.


Cheers
Alex

 


-Original Message-
From: Dietrich, Alex  
Sent: Thursday, May 16, 2024 2:23 PM
To: us...@cloudstack.apache.org; dev@cloudstack.apache.org
Subject: Re: Dynamic routing for routed mode IPv6 and IPv4 Isolated and VPC 
networks

Hello Alex,

I appreciate this back and forth as I am excited about the potential this 
feature would hold.


  *   This is a very valid point.  We could add network specific BGP peers as 
well, which would override the automatic AS allocation, in the same way that we 
now allocate DNS servers in the zone level but can override that by manually 
selecting different DNS servers at network creation time.  Would that address 
your point?

Why do the network-specific BGP peers need to override automatic AS 
allocation? In my mind there isn’t a dependency that needs to exist between 
those two, as they are somewhat independent of one another.

I am not convinced that specifying BGP peers at the zone level is a good idea 
given the impacts BGP can have on a given network. I would much rather see both 
peer and AS specification handled at the network configuration, or another more 
specific level.

Thanks,
Alex

From: Alex Mattioli 
Date: Wednesday, May 15, 2024 at 10:15 AM
To: us...@cloudstack.apache.org , 
dev@cloudstack.apache.org 
Subject: RE: Dynamic routing for routed mode IPv6 and IPv4 Isolated and VPC 
networks EXTERNAL

Hi Alex,

> Would zone-level BGP peers be those used by default for establishing new BGP 
> peers in networks where dynamic routing is enabled?

Correct, so far we plan to allow for up to 4 BGP peers for a zone, with the 
possibility to setup different metrics to each peer.

> This could affect a multi-tenant model where there may be different BGP peers 
> presented based on what the upstream network provides. An example of this 
> would be where the VLANs associated to a given account are associated to 
> distinct VRFs and may have different peering IP addresses.
> I would like to see the peering IP addresses specific to the networks where 
> dynamic routing is enabled instead of specifying defaults at the zone level.


This is a very valid point.  We could add network specific BGP peers as well, 
which would override the automatic AS allocation, in the same way that we now 
allocate DNS servers in the zone level but can override that by manually 
selecting different DNS servers at network creation time.  Would that address 
your point?

Cheers,
Alex




-Original Message-
From: Dietrich, Alex 
Sent: Wednesday, May 15, 2024 2:34 PM
To: us...@cloudstack.apache.org; dev@cloudstack.apache.org
Subject: Re: Dynamic routing for routed mode IPv6 and IPv4 Isolated and VPC 
networks

Hi Alex,

I appreciate the clarity!

Excuse my ignorance if I am misunderstanding the intention of specifying BGP 
peers at the zone level.

Would zone-level BGP peers be those used by default for establishing new BGP 
peers in networks where dynamic routing is enabled?

This could affect a multi-tenant model where there may be different BGP peers 
presented based on what the upstream network provides. An example of this would 
be where the VLANs associated to a given account are associated to distinct 
VRFs and may have different peering IP addresses.

I would like to see the peering IP addresses specific to the networks where 
dynamic routing is enabled instead of specifying defaults at the zone level.


  *   Alex

From: Alex Mattioli 
Date: Wednesday, May 15, 2024 at 9:27 AM
To: us...@cloudstack.apache.org , 
dev@cloudstack.apache.org 
Subject: RE: Dynamic routing for routed mode IPv6 and IPv4 Isolated and VPC 
networks EXTERNAL

Hi Alex,

Answers inline below with >

Cheers




-Original Message-
From: Dietrich, Alex 
Sent: Wednesday, May 15, 2024 3:12 PM
To: us...@cloudstack.apache.org; dev@cloudstack.apache.org
Subject: Re: Dynamic routing for routed mode IPv6 and IPv4 Isolated and VPC 
networks

Hello Alex,

I appreciate you taking on this initiative as I’d like to see similar 
functionality made available in CloudStack.

I do have some feedback on your implementation approach:

1 - Operator configures one or more BGP peers for a given Zone (with different 
metrics)

What is the intention behind specifying BGP peers at the zone level? I would 
think this would need to be specific to the network that you want to enable BGP 
on and does not need to concern the entire zone.

>The goal is for the process to be driven by the end user without operator 
>intervention. In the current design we

RE: Dynamic routing for routed mode IPv6 and IPv4 Isolated and VPC networks

2024-05-20 Thread Alex Mattioli
Hi Wido,

Thanks for the feedback,  comments below:

> I would suggest that the upstream router (Juniper, Frr, etc) should then use 
> Dynamic BGP neighbors.

That's the plan.

> I do suggest we add BGP passwords/encryption from the start for safety 
> reasons.

That's very likely to be there from day one.
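
On the upstream side, a rough JunOS sketch of the dynamic-neighbor plus 
password setup you describe further down could look like this (group name and 
password are placeholders, the subnet is just your example one, and per-peer AS 
handling is glossed over):

protocols {
    bgp {
        group cloudstack-vrs {
            type external;
            /* shared BGP password, as discussed above */
            authentication-key "some-shared-secret";
            /* accept dynamic peering from the zone's VR/public subnet */
            allow 192.168.1.0/24;
        }
    }
}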

> On the VR you just need to make sure you properly configure the BGP daemon 
> and it points to the right upstream routers.
Indeed, and we plan to use FRR for that.
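
Just to illustrate the direction, a minimal FRR sketch of the VR side (AS 
numbers, addresses and the password below are made up purely for illustration, 
not the final design):

! /etc/frr/frr.conf on the VR (illustrative only)
router bgp 64601
 bgp router-id 100.64.0.10
 ! upstream peer(s) come from the zone-level BGP peer configuration
 neighbor 100.64.0.1 remote-as 65000
 neighbor 100.64.0.1 password some-shared-secret
 address-family ipv4 unicast
  ! advertise the VR's connected guest networks upstream
  redistribute connected
 exit-address-family
 address-family ipv6 unicast
  neighbor 100.64.0.1 activate
  redistribute connected
 exit-address-family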


Thanks for the link to the doc, I'll review it.

Cheers
Alex

 


-Original Message-
From: Wido den Hollander  
Sent: Friday, May 17, 2024 5:24 PM
To: dev@cloudstack.apache.org; Alex Mattioli ; 
us...@cloudstack.apache.org
Subject: Re: Dynamic routing for routed mode IPv6 and IPv4 Isolated and VPC 
networks

My apologies! I totally missed this one. Comments inline.

Op 15/05/2024 om 14:55 schreef Alex Mattioli:
> Hi all,
> 
> Does anyone have an opinion on the implementation of dynamic routing in 
> Isolated networks and VPCs?
> 
> So far the design is:
> 
> 1 - Operator configures one or more BGP peers for a given Zone (with 
> different metrics)
> 2 - Operator presents a pool of Private AS numbers to the Zone (just 
> like we do for VLANs)
> 3 - When a network is created with an offering which has dynamic 
> routing enabled an AS number is allocated to the network
> 4 - ACS configures the BGP session on the VR (using FRR), advertising 
> all its connected networks
> 

I would suggest that the upstream router (Juniper, Frr, etc) should then use 
Dynamic BGP neighbors.

On JunOS this is the "allow" statement [0]. The VR would indeed get an AS 
assigned by ACS and the network should know the 1, 2 or X upstream routers it 
can peer with. I do suggest we add BGP passwords/encryption from the start for 
safety reasons.

"allow 192.168.1.0/24"

On JunOS this allows any router within that subnet to establish a BGP session 
(provided the BGP password matches).

On the VR you just need to make sure you properly configure the BGP daemon and 
it points to the right upstream routers.

[0]: 
https://www.juniper.net/documentation/us/en/software/junos/cli-reference/topics/ref/statement/allow-edit-protocols-bgp.html

> Any and all input will be very welcome.
> 
> Cheers,
> Alex
> 
> 
>   
> 
> From: Alex Mattioli
> Sent: Wednesday, April 17, 2024 3:25 AM
> To: us...@cloudstack.apache.org; dev@cloudstack.apache.org
> Subject: Dynamic routing for routed mode IPv6 and IPv4 Isolated and 
> VPC networks
> 
> Hi all,
> 
> I'd like to brainstorm dynamic routing in ACS (yes, again... for the 
> newcomers to this mailing list - this has been discussed multiple 
> times in the past 10+ years)
> 
> ACS 4.17 has introduced routed mode for IPv6 in Isolated networks and VPCs, 
> we are currently working on extending that to IPv4 as well, which will 
> support the current NAT'ed mode and also a routed mode (inspired by the NSX 
> integration https://www.youtube.com/watch?v=f7ao-vv7Ahk).
> 
> With stock ACS (i.e. without NSX or OpenSDN) this routing is purely static, 
> with the operator being responsible to add static routes to the Isolated 
> network or VPC tiers via the "public" (outside) IP of the virtual router.
> 
> The next step on this journey is to add some kind of dynamic routing. One way 
> that I have in mind is using dynamic BGP:
> 
> 1 - Operator configures one or more BGP peers for a given Zone (with 
> different metrics)
> 2 - Operator presents a pool of Private AS numbers to the Zone (just 
> like we do for VLANs)
> 3 - When a network is created with an offering which has dynamic 
> routing enabled an AS number is allocated
> 4 - ACS configures the BGP session on the VR, advertising all its 
> connected networks
> 
> This way there's no need to reconfigure the upstream router for each 
> new ACS network (it just needs to allow dynamic BGP peering from the 
> pool of AS numbers presented to the zone)
> 
> This implementation could also be used for Shared Networks, in which case the 
> destination advertised via BGP is to the gateway of the shared network.
> 
> There could also be an offering where we allow for end users to setup the BGP 
> parameters for their Isolated or VPC networks, which can then peer with 
> upstream VNF(s).
> 
> Any and all input is very welcome...
> 
> Taking the liberty to tag some of you: @Wei 
> Zhou<mailto:wei.z...@shapeblue.com> @Wido den 
> Hollander<mailto:w...@widodh.nl> @Kristaps 
> Čudars<mailto:kristaps.cud...@telia.lv>
> 
> Cheers,
> Alex
> 


RE: Dynamic routing for routed mode IPv6 and IPv4 Isolated and VPC networks

2024-05-15 Thread Alex Mattioli
Hi Alex,

> Would zone-level BGP peers be those used by default for establishing new BGP 
> peers in networks where dynamic routing is enabled?

Correct; so far we plan to allow for up to 4 BGP peers per zone, with the 
possibility to set up different metrics for each peer.

> This could affect a multi-tenant model where there may be different BGP peers 
> presented based on what the upstream network provides. An example of this 
> would be where the VLANs associated to a given account are associated to 
> distinct VRFs and may have different peering IP addresses.
> I would like to see the peering IP addresses specific to the networks where 
> dynamic routing is enabled instead of specifying defaults at the zone level.


This is a very valid point.  We could add network-specific BGP peers as well, 
which would override the automatic AS allocation, in the same way that we now 
allocate DNS servers at the zone level but can override that by manually 
selecting different DNS servers at network creation time.  Would that address 
your point?

Cheers,
Alex

 


-Original Message-
From: Dietrich, Alex  
Sent: Wednesday, May 15, 2024 2:34 PM
To: us...@cloudstack.apache.org; dev@cloudstack.apache.org
Subject: Re: Dynamic routing for routed mode IPv6 and IPv4 Isolated and VPC 
networks

Hi Alex,

I appreciate the clarity!

Excuse my ignorance if I am misunderstanding the intention of specifying BGP 
peers at the zone level.

Would zone-level BGP peers be those used by default for establishing new BGP 
peers in networks where dynamic routing is enabled?

This could affect a multi-tenant model where there may be different BGP peers 
presented based on what the upstream network provides. An example of this would 
be where the VLANs associated to a given account are associated to distinct 
VRFs and may have different peering IP addresses.

I would like to see the peering IP addresses specific to the networks where 
dynamic routing is enabled instead of specifying defaults at the zone level.


  *   Alex

From: Alex Mattioli 
Date: Wednesday, May 15, 2024 at 9:27 AM
To: us...@cloudstack.apache.org , 
dev@cloudstack.apache.org 
Subject: RE: Dynamic routing for routed mode IPv6 and IPv4 Isolated and VPC 
networks EXTERNAL

Hi Alex,

Answers inline below with >

Cheers




-Original Message-
From: Dietrich, Alex 
Sent: Wednesday, May 15, 2024 3:12 PM
To: us...@cloudstack.apache.org; dev@cloudstack.apache.org
Subject: Re: Dynamic routing for routed mode IPv6 and IPv4 Isolated and VPC 
networks

Hello Alex,

I appreciate you taking on this initiative as I’d like to see similar 
functionality made available in CloudStack.

I do have some feedback on your implementation approach:

1 - Operator configures one or more BGP peers for a given Zone (with different 
metrics)

What is the intention behind specifying BGP peers at the zone level? I would 
think this would need to be specific to the network that you want to enable BGP 
on and does not need to concern the entire zone.

>The goal is for the process to be driven by the end user without operator 
>intervention. In the current design we'd enable the VR to share routes with 
>upstream routers without any need for extra configuration on the part of the 
>operator.
>Your point is very valid and it should definitely be a future enhancement on 
>the feature.

2 - Operator presents a pool of Private AS numbers to the Zone (just like we do 
for VLANs)

As a private AS consumer, I agree that this approach would be helpful for a 
more dynamic allocation as new dynamic routing enabled networks are created.

>Glad we are on the same page there.

3 - When a network is created with an offering which has dynamic routing 
enabled an AS number is allocated to the network

4 - ACS configures the BGP session on the VR (using FRR), advertising all its 
connected networks

Given there is a lot of extensibility within BGP, I would think there would 
need to be some level of customizability to the peering configurations. Is the 
intention to consider adding additional knobs, or relegating that to the 
upstream BGP peer? I could see scenarios where you would at least want to have 
control over prefix lengths, etc.

>Absolutely, but I think this should be a future enhancement. The current goal 
>is to have a very simple and basic dynamic BGP implementation working; once 
>that's out there and in use we should definitely discuss how to enhance 
>the feature with exactly what you pointed out.


Thanks,
Alex Dietrich


From: Alex Mattioli 
Date: Wednesday, May 15, 2024 at 8:55 AM
To: us...@cloudstack.apache.org , 
dev@cloudstack.apache.org 
Subject: RE: Dynamic routing for routed mode IPv6 and IPv4 Isolated and VPC 
networks EXTERNAL

Hi all,

Does anyone have an opinion on the implementation of dynamic routing in 
Isolated networks and VPCs?

So far the design is:

1 - Operator c

RE: Dynamic routing for routed mode IPv6 and IPv4 Isolated and VPC networks

2024-05-15 Thread Alex Mattioli
Hi Alex,

Answers inline below with >

Cheers

 


-Original Message-
From: Dietrich, Alex  
Sent: Wednesday, May 15, 2024 3:12 PM
To: us...@cloudstack.apache.org; dev@cloudstack.apache.org
Subject: Re: Dynamic routing for routed mode IPv6 and IPv4 Isolated and VPC 
networks

Hello Alex,

I appreciate you taking on this initiative as I’d like to see similar 
functionality made available in CloudStack.

I do have some feedback on your implementation approach:

1 - Operator configures one or more BGP peers for a given Zone (with different 
metrics)

What is the intention behind specifying BGP peers at the zone level? I would 
think this would need to be specific to the network that you want to enable BGP 
on and does not need to concern the entire zone.

>The goal is for the process to be driven by the end user without operator 
>intervention. In the current design we'd enable the VR to share routes with 
>upstream routers without any need for extra configuration on the part of the 
>operator.
>Your point is very valid and it should definitely be a future enhancement on 
>the feature.

2 - Operator presents a pool of Private AS numbers to the Zone (just like we do 
for VLANs)

As a private AS consumer, I agree that this approach would be helpful for a 
more dynamic allocation as new dynamic routing enabled networks are created.

>Glad we are on the same page there.

3 - When a network is created with an offering which has dynamic routing 
enabled an AS number is allocated to the network

4 - ACS configures the BGP session on the VR (using FRR), advertising all its 
connected networks

Given there is a lot of extensibility within BGP, I would think there would 
need to be some level of customizability to the peering configurations. Is the 
intention to consider adding additional knobs, or relegating that to the 
upstream BGP peer? I could see scenarios where you would at least want to have 
control over prefix lengths, etc.

>Absolutely, but I think this should be a future enhancement. The current goal 
>is to have a very simple and basic dynamic BGP implementation working; once 
>that's out there and in use we should definitely discuss how to enhance 
>the feature with exactly what you pointed out.


Thanks,
Alex Dietrich


From: Alex Mattioli 
Date: Wednesday, May 15, 2024 at 8:55 AM
To: us...@cloudstack.apache.org , 
dev@cloudstack.apache.org 
Subject: RE: Dynamic routing for routed mode IPv6 and IPv4 Isolated and VPC 
networks EXTERNAL

Hi all,

Does anyone have an opinion on the implementation of dynamic routing in 
Isolated networks and VPCs?

So far the design is:

1 - Operator configures one or more BGP peers for a given Zone (with different 
metrics)
2 - Operator presents a pool of Private AS numbers to the Zone (just like we do 
for VLANs)
3 - When a network is created with an offering which has dynamic routing 
enabled an AS number is allocated to the network
4 - ACS configures the BGP session on the VR (using FRR), advertising all its 
connected networks

Any and all input will be very welcome.

Cheers,
Alex




From: Alex Mattioli
Sent: Wednesday, April 17, 2024 3:25 AM
To: us...@cloudstack.apache.org; dev@cloudstack.apache.org
Subject: Dynamic routing for routed mode IPv6 and IPv4 Isolated and VPC networks

Hi all,

I'd like to brainstorm dynamic routing in ACS (yes, again... for the newcomers 
to this mailing list - this has been discussed multiple times in the past 10+ 
years)

ACS 4.17 has introduced routed mode for IPv6 in Isolated networks and VPCs, we 
are currently working on extending that to IPv4 as well, which will support the 
current NAT'ed mode and also a routed mode (inspired by the NSX integration 
https://www.youtube.com/watch?v=f7ao-vv7Ahk).

With stock ACS (i.e. without NSX or OpenSDN) this routing is purely static, 
with the operator being responsible to add static routes to the Isolated 
network or VPC tiers via the "public" (outside) IP of the virtual router.

The next step on this journey is to add some kind of dynamic routing. One way 
that I have in mind is using dynamic BGP:

1 - Operator configures one or more BGP peers for a given Zone (with different 
metrics)
2 - Operator presents a pool of Private AS numbers to the Zone (just like we do 
for VLANs)
3 - When a network is created with an offering which has dynamic routing 
enabled an AS number is allocated
4 - ACS configures the BGP session on the VR, advertising all its connected 
networks

This way there's no need to reconfigure the upstream router for each new ACS 
network (it just needs to all

RE: Dynamic routing for routed mode IPv6 and IPv4 Isolated and VPC networks

2024-05-15 Thread Alex Mattioli
Hi all,

Does anyone have an opinion on the implementation of dynamic routing in 
Isolated networks and VPCs?

So far the design is:

1 - Operator configures one or more BGP peers for a given Zone (with different 
metrics)
2 - Operator presents a pool of Private AS numbers to the Zone (just like we do 
for VLANs)
3 - When a network is created with an offering which has dynamic routing 
enabled an AS number is allocated to the network
4 - ACS configures the BGP session on the VR (using FRR), advertising all its 
connected networks

Any and all input will be very welcome.

Cheers,
Alex


 

From: Alex Mattioli
Sent: Wednesday, April 17, 2024 3:25 AM
To: us...@cloudstack.apache.org; dev@cloudstack.apache.org
Subject: Dynamic routing for routed mode IPv6 and IPv4 Isolated and VPC networks

Hi all,

I'd like to brainstorm dynamic routing in ACS (yes, again... for the newcomers 
to this mailing list - this has been discussed multiple times in the past 10+ 
years)

ACS 4.17 has introduced routed mode for IPv6 in Isolated networks and VPCs, we 
are currently working on extending that to IPv4 as well, which will support the 
current NAT'ed mode and also a routed mode (inspired by the NSX integration 
https://www.youtube.com/watch?v=f7ao-vv7Ahk).

With stock ACS (i.e. without NSX or OpenSDN) this routing is purely static, 
with the operator being responsible to add static routes to the Isolated 
network or VPC tiers via the "public" (outside) IP of the virtual router.

The next step on this journey is to add some kind of dynamic routing. One way 
that I have in mind is using dynamic BGP:

1 - Operator configures one or more BGP peers for a given Zone (with different 
metrics)
2 - Operator presents a pool of Private AS numbers to the Zone (just like we do 
for VLANs)
3 - When a network is created with an offering which has dynamic routing 
enabled an AS number is allocated
4 - ACS configures the BGP session on the VR, advertising all its connected 
networks

This way there's no need to reconfigure the upstream router for each new ACS 
network (it just needs to allow dynamic BGP peering from the pool of AS numbers 
presented to the zone)

This implementation could also be used for Shared Networks, in which case the 
destination advertised via BGP is to the gateway of the shared network.

There could also be an offering where we allow for end users to setup the BGP 
parameters for their Isolated or VPC networks, which can then peer with 
upstream VNF(s).

Any and all input is very welcome...

Taking the liberty to tag some of you: @Wei Zhou<mailto:wei.z...@shapeblue.com> 
@Wido den Hollander<mailto:w...@widodh.nl> @Kristaps 
Čudars<mailto:kristaps.cud...@telia.lv>

Cheers,
Alex


Dynamic routing for routed mode IPv6 and IPv4 Isolated and VPC networks

2024-04-16 Thread Alex Mattioli
Hi all,

I'd like to brainstorm dynamic routing in ACS (yes, again... for the newcomers 
to this mailing list - this has been discussed multiple times in the past 10+ 
years)
ACS 4.17 has introduced routed mode for IPv6 in Isolated networks and VPCs, we 
are currently working on extending that to IPv4 as well, which will support the 
current NAT'ed mode and also a routed mode (inspired by the NSX integration 
https://www.youtube.com/watch?v=f7ao-vv7Ahk).

With stock ACS (i.e. without NSX or OpenSDN) this routing is purely static, 
with the operator being responsible to add static routes to the Isolated 
network or VPC tiers via the "public" (outside) IP of the virtual router.

The next step on this journey is to add some kind of dynamic routing. One way 
that I have in mind is using dynamic BGP:

1 - Operator configures one or more BGP peers for a given Zone (with different 
metrics)
2 - Operator presents a pool of Private AS numbers to the Zone (just like we do 
for VLANs)
3 - When a network is created with an offering which has dynamic routing 
enabled an AS number is allocated
4 - ACS configures the BGP session on the VR, advertising all its connected 
networks

This way there's no need to reconfigure the upstream router for each new ACS 
network (it just needs to allow dynamic BGP peering from the pool of AS numbers 
presented to the zone)

This implementation could also be used for Shared Networks, in which case the 
destination advertised via BGP is to the gateway of the shared network.

There could also be an offering where we allow for end users to setup the BGP 
parameters for their Isolated or VPC networks, which can then peer with 
upstream VNF(s).

Any and all input is very welcome...

Taking the liberty to tag some of you: @Wei Zhou 
@Wido den Hollander @Kristaps 
Čudars

Cheers,
Alex

 



RE: [VOTE] next version 20 instead of 4.20

2024-02-19 Thread Alex Mattioli
+1

 


-Original Message-
From: Daan Hoogland  
Sent: Monday, February 19, 2024 1:50 PM
To: dev 
Cc: users 
Subject: [VOTE] next version 20 instead of 4.20

LS,

This is a vote on dev@c.a.o with cc to users@c.a.o. If you want to be counted 
please reply to dev@.

As discussed in [1] we are deciding to drop the 4 from our versioning scheme. 
The result would be that the next major version will be 20 instead of 4.20, as 
it would be in a traditional upgrade. As 20 > 4 and the versions are processed 
numerically there are no technical impediments.

+1 agree (next major version as 20)
0 (no opinion)
-1 disagree (keep 4.20 as the next version, give a reason)

As this is a lazy consensus vote any -1 should be accompanied with a reason.

[1] https://lists.apache.org/thread/lh45w55c3jmhm7w2w0xgdvlw78pd4p87

--
Daan


RE: Future of Tungsten Fabric Integration with ACS

2024-01-09 Thread Alex Mattioli
Hi Rahul,

The project has been tentatively named "OpenSDN",  here's the new mailing list: 
https://groups.io/g/OpenSDN.  There should be a larger update there soon.

Also, here are the meeting notes for the initial discussion around the project: 
https://docs.google.com/document/d/1jQ-XbMZnfZN0EjtfuSrfGu5jFBh-_HuTe5aF9L_TUp4/edit

And a recording of the first meeting: 
https://drive.google.com/file/d/1e4C4HRPu4-LMy06TJbTxjlNAf1syopEV/view

Compatibility with the ACS plugin should remain unchanged.

Cheers,
Alex

 


-Original Message-
From: Rahul Rai  
Sent: Tuesday, January 9, 2024 12:04 AM
To: dev@cloudstack.apache.org
Subject: Future of Tungsten Fabric Integration with ACS

Dear Dev Community,

Hope this email finds you well and wishing you a great and prosperous new year 
ahead.

Just noticed that the Linux Foundation will archive the Tungsten project in the 
next few months; curious to know if the upcoming ACS version is going to offer 
integration with another open source SDN for micro-segmentation?
Thank you for your interest in Tungsten Fabric. The community has decided to 
shut down the project and will sunset this website on August 1, 2024.

Thanks,
Rahul


RE: Hello

2023-02-01 Thread Alex Mattioli
Welcome to the community Vishesh.

Alex

 


-Original Message-
From: Vishesh Jindal  
Sent: 01 February 2023 16:45
To: dev@cloudstack.apache.org
Subject: Hello

Hi All,

This is Vishesh. I have recently joined ShapeBlue. I am looking forward to 
contributing to the cloudstack project and working with the community.

Thanks!
Vishesh Jindal
vishesh.jin...@shapeblue.com

 




RE: CloudStack and Tungsten Fabric Solution Brief

2023-01-24 Thread Alex Mattioli
+1 on that, and especially useful for Edge zones.

 


-Original Message-
From: Nux  
Sent: 24 January 2023 14:43
To: dev@cloudstack.apache.org
Cc: us...@cloudstack.apache.org; Apache CloudStack Marketing 

Subject: Re: CloudStack and Tungsten Fabric Solution Brief

Thanks Ivet, this is amazing!
ACS does need a new SDN story and Tungsten looks like the real deal here, can't wait 
to try it!

Also cheers to EWERK and ENA for their involvement to make this happen!

Regards

On 2023-01-24 11:31, Ivet Petrova wrote:
> Hi all,
> 
> I am happy to share a new solution brief we have developed together 
> with the team of Tungsten Fabric:
> https://blogs.apache.org/cloudstack/entry/apache-cloudstack-and-tungst
> en-fabric If you are interested to learn more for SDN and how tungsten 
> integrates with CloudStack take a look.
> 
> Kind regards,



RE: [PROPOSAL] postpone 4.18 to the new year

2022-11-01 Thread Alex Mattioli
+1 on that Daan, definitely worth waiting a bit longer.

 


-Original Message-
From: Rohit Yadav  
Sent: 01 November 2022 06:37
To: us...@cloudstack.apache.org; dev 
Subject: Re: [PROPOSAL] postpone 4.18 to the new year

+1 Daan, your proposal to postpone to early next year makes sense.


Regards.


From: Nicolas Vazquez 
Sent: Tuesday, November 1, 2022 07:34
To: us...@cloudstack.apache.org ; dev 

Subject: Re: [PROPOSAL] postpone 4.18 to the new year

Thanks Daan, I'm +1 on postponing and getting these features and more 
contributions in.

Regards,
Nicolas Vazquez


From: Daan Hoogland 
Date: Saturday, 29 October 2022 at 05:25
To: users , dev 
Subject: [PROPOSAL] postpone 4.18 to the new year

Users and devs,

I announced to start the releasing by end of October [1] but I think we have a 
lot of promising PRs that would really make this release worthwhile.
At the same time the noteworthy features that are in now are limited:



What is in:

- custom tariffs

- Prometheus exporter enhancements

- volume encryption

- console access enhancements

- custom DNS for guest networks



pending:

- ant-design upgrade #6369

- EMC networker B&R #6550

- clone virtual machine #5216

- keyboard shortcuts #5122

- vm deploy retry #6062

- tungsten fabric #4205

- intuitive global setting #5797



I therefore want to postpone the release till January, or at least till after 
CCC. I think a release will have much more value if we wait until some more of 
these are in.

regards,

[1] https://lists.apache.org/thread/vnkd4gv43l94dsn3sn8g57mg236nostc




 




RE: [PROPOSE] CloudStack 4.17.1.0 release and RM

2022-08-02 Thread Alex Mattioli
+1 on all below, I think you'll be a great RM.

 


-Original Message-
From: Abhishek Kumar  
Sent: 02 August 2022 14:38
To: dev@cloudstack.apache.org; us...@cloudstack.apache.org
Subject: [PROPOSE] CloudStack 4.17.1.0 release and RM

Hi all,

I would like to propose and put myself forward as the release manager for the 
4.17.1.0 release. My colleague Nicolas Vazquez will support me as the co-RM for 
the PR reviews/tests/merges, and others are welcome to support as well.
We can keep the scope limited to include only bugs, critical issues and fixes 
for a stable release. I see about 89 closed issues/PRs already on the 4.17.1.0 
milestone[1] and some 24 items are remaining.

I propose the following timeline, aiming to cut the first RC around the end of 
August.
 - ~3 weeks from now till late August 2022 (Ongoing): Accept all bugs, 
issues and minor improvements allowed in LTS [2]
 - 1 week: Accept only critical/blocker fixes, stabilize 4.17 branch
 - end-August 2022 and onwards: Cut 4.17.1.0 RC1 and further RCs if 
necessary, start/conclude vote, and finish release work

Please let me know if you have any thoughts/comments.

[1] https://github.com/apache/cloudstack/milestone/25
[2] https://cwiki.apache.org/confluence/display/CLOUDSTACK/LTS

Regards,
Abhishek

 



RE: IPV6 in Isolated/VPC networks

2022-05-13 Thread Alex Mattioli
> But there can be multiple networks behind the VPC router or not? If there are 
> multiple networks you need 
>/64 as you can then allocate /64s from that larger subnet.

Yes, there will be multiple /64s in a VPC, but they are very likely to be 
non-contiguous, so in general we should treat them as completely separate networks.  

One could in theory dedicate a range to a domain/account and statically route 
to that, but I'm not sure if that will happen often.

Cheers
Alex

 


-Original Message-
From: Wido den Hollander  
Sent: 13 May 2022 11:30
To: dev@cloudstack.apache.org
Subject: Re: IPV6 in Isolated/VPC networks



Op 12-05-2022 om 16:10 schreef Alex Mattioli:
>> ipv6 route fd23:313a:2f53:3cbf::/64 
>> fd23:313a:2f53:3000:1c00:baff:fe00:4
> 
> That's correct

Ok. So in that case the subnet would be very welcome to be present in the 
message on the bus.

> 
>> Or a larger subnet:
>> ipv6 route fd23:313a:2f53:3c00::/56 
>> fd23:313a:2f53:3000:1c00:baff:fe00:4
> 
> Not really, the subnets for isolated/VPC networks are always /64, which 
> also means there's no real need to include the subnets.
> 

But there can be multiple networks behind the VPC router or not? If there are 
multiple networks you need >/64 as you can then allocate /64s from that larger 
subnet.

Wido

> Cheers
> Alex
> 
> 
>   
> 
> 
> -Original Message-
> From: Wido den Hollander 
> Sent: 12 May 2022 16:04
> To: Abhishek Kumar ; 
> dev@cloudstack.apache.org
> Subject: Re: IPV6 in Isolated/VPC networks
> 
> 
> 
> On 5/12/22 09:55, Abhishek Kumar wrote:
>> Hi Wido,
>>
>> I do not understand what you mean by WAB address but
> 
> WAB was a typo. I meant WAN.
> 
>> fd23:313a:2f53:3000:1c00:baff:fe00:4 is the public IP of the network
>> (IPv6 of the public NIC of the network VR) in the sample.
>> Yes, route for fd23:313a:2f53:3cbf::/64 need to be added to this IP.
>> fd23:313a:2f53:3cbf::/64 is guest IPv6 CIDR of the network here.
>>
> 
> So that means that I would need to run this command on my upstream router:
> 
> ipv6 route fd23:313a:2f53:3cbf::/64 
> fd23:313a:2f53:3000:1c00:baff:fe00:4
> 
> Or a larger subnet:
> 
> ipv6 route fd23:313a:2f53:3c00::/56 
> fd23:313a:2f53:3000:1c00:baff:fe00:4
> 
>> Currently, the message on event bus does not include subnet. Should 
>> that be included?
> 
> Yes, because then you can pickup those messages and inject the route via 
> ExaBGP into a routing table right away.
> 
>> In case of VPCs, there could be multiple tiers which will need 
>> multiple routes to be added. Will that be an issue if we include 
>> current network/tier subnet in the event message?
> 
> No, as long as it points to the same VR you simply have multiple subnets 
> being routed to the same VR.
> 
> I do wonder what happens if you destroy the VR and create a new one. The WAN 
> address then changes (due to SLAAC) and thus the routes need to be 
> re-programmed.
> 
> Wido
> 
>>
>> Regards,
>> Abhishek
>>
>>
>>
>>
>>
>>
>> *From:* Wido den Hollander 
>> *Sent:* 10 May 2022 19:01
>> *To:* dev@cloudstack.apache.org ; Abhishek 
>> Kumar 
>> *Subject:* Re: IPV6 in Isolated/VPC networks
>>   
>> Hi,
>>
>> Op 10-05-2022 om 11:42 schreef Abhishek Kumar:
>>> Yes. When a public IPv6 is assigned or released, CloudStack will publish 
>>> event with type NET.IP6ASSIGN, NET.IP6RELEASE.
>>> These event notifications can be tracked. And with improvements in events 
>>> framework, these event messages will have network uuid as entityuuid and 
>>> Network as entitytype. Using this network can be queried using to list IPv6 
>>> routes that need to be added.
>>>
>>> Sample event message,
>>>
>>> {"eventDateTime":"2022-05-10 09:32:12
>>> +","entityuuid":"14658b39-9d20-4783-a1bc-12fb58bcbd98","Network":
>>> "14658b39-9d20-4783-a1bc-12fb58bcbd98","description":"Assigned 
>>> public
>>> IPv6 address: fd23:313a:2f53:3000:1c00:baff:fe00:4 for network ID:
>>> 14658b39-9d20-4783-a1bc-12fb58bcbd98","event":"NET.IP6ASSIGN","user":
>>> "bde866ba-c600-11ec-af19-1e00320001f3","account":"bde712c9-c600-11ec
>>> - af19-1e00320001f3","entity":"Network","status":"Completed"}
>>>
>>>
>>> Sample API call

RE: IPV6 in Isolated/VPC networks

2022-05-12 Thread Alex Mattioli
0:19 schreef Abhishek Kumar:
>>> Hi all,
>>>
>>> IPv6 Support in Isolated Network and VPC with Static Routing based on the 
>>> design doc [1] has been implemented and is available in 4.17.0 RC2. I hope 
>>> while testing 4.17.0 RC2 you will also try to test it ?
>>> Documentation for it is available at 
>>> http://qa.cloudstack.cloud/docs/WIP-PROOFING/pr/262/plugins/ipv6.html#isolated-network-and-vpc-tier
>>> (will be available in the official docs once the 4.17.0 version of docs is built).
>>>
>> 
>> Great work!
>> 
>> I see only static routing is supported. But do we publish something 
>> on the message bus once a new VR/VPC is created?
>> 
>> This way you could pick up these messages and have the network create 
>> a
>> (static) route based on those.
>> 
>> ExaBGP for example could be used to inject such routes.
>> 
>> Wido
>> 
>>> [1] 
>>> https://cwiki.apache.org/confluence/display/CLOUDSTACK/IPv6+Support+in+Isolated+Network+and+VPC+with+Static+Routing
>>>
>>> Regards,
>>> Abhishek
>>>
>>> 
>>> From: Rohit Yadav 
>>> Sent: 13 September 2021 14:30
>>> To: dev@cloudstack.apache.org 
>>> Subject: Re: IPV6 in Isolated/VPC networks
>>>
>>> Thanks Alex, Wei. I've updated the docs here: 
>>> https://cwiki.apache.org/confluence/display/CLOUDSTACK/IPv6+Support+in+Isolated+Network+and+VPC+with+Static+Routing
>>>
>>> I'll leave the thread open for futher discussion/ideas/feedback. I 
>>> think we've completed the phase1 design doc including all feedback 
>>> comments for adding IPv6 support in CloudStack and some initial 
>>> poc/work can be started. My colleagues and I will keep everyone 
>>> posted on this thread and/or on a Github PR as and when we're able 
>>> to
> start our work on the same (after 4.16, potentially towards 4.17).
>>>
>>>
>>> Regards.
>>>
>>> 
>>> From: Wei ZHOU 
>>> Sent: Friday, September 10, 2021 20:22
>>> To: dev@cloudstack.apache.org 
>>> Subject: Re: IPV6 in Isolated/VPC networks
>>>
>>> Agree with Alex.
>>> We only need to know how many /64 are allocated. We do not care how 
>>> many
>>> ipv6 addresses are used by VMs.
>>>
>>> -Wei
>>>
>>> On Fri, 10 Sept 2021 at 16:36, Alex Mattioli 
>>> 
>>> wrote:
>>>
>>>> Hi Rohit,
>>>>
>>>> I'd go for option 2, don't see a point tracking anything smaller 
>>>> than a
>>>> /64 tbh.
>>>>
>>>> Cheers
>>>> Alex
>>>>
>>>>
>>>>
>>>>
>>>> -Original Message-
>>>> From: Rohit Yadav 
>>>> Sent: 09 September 2021 12:44
>>>> To: dev@cloudstack.apache.org
>>>> Subject: Re: IPV6 in Isolated/VPC networks
>>>>
>>>> Thanks Alex, Kristaps. I've updated the design doc to reflect two
>>>> agreements:
>>>>
>>>> *   Allocate /64 for both isolated network and VPC tiers, no 
>>>>large  allocation of prefixes to VPC (cons: more static routing 
>>>>rules for upstream
>>>> router/admins)
>>>> *   All systemvms (incl. ssvm, cpvm, VRs) get IPv6 address if 
>>>>zone has a  dedicated /64 prefix/block for systemvms
>>>>
>>>> The only outstanding question now is:
>>>>
>>>> *   How do we manage IPv6 usage? Can anyone advise how we do 
>>>>IPv6 usage  for shared network (design, implementation and 
>>>>use-cases?)
>>>> Option1: We don't do it, all user VMs nics have ipv4 address 
>>>>whose usage we don't track. For public VR/nics/networks, we can 
>>>>simply add the
>>>> IPv6 details for a related IPv4 address.
>>>> Option2: Implement a separate, first-class IPv6 address or /64 
>>>>prefix  tracking/management and usage for all VMs and syste

RE: CloudStack Collaboration Conference 2022 - November 14-16

2022-04-05 Thread Alex Mattioli
Sounds amazing @Ivet Petrova


 


-Original Message-
From: Daman Arora  
Sent: 05 April 2022 16:51
To: dev@cloudstack.apache.org
Cc: users 
Subject: Re: CloudStack Collaboration Conference 2022 - November 14-16

Sounds like a good idea to me.

Thanks,
Daman Arora.

On Tue., Apr. 5, 2022, 10:45 a.m. Ivet Petrova, 
wrote:

> Hi all,
>
> I am working on the idea for the CloudStack Collaboration Conference 2022.
> I was thinking that this time we can make it a hybrid event at the 
> end of the year - November 14-16th.
> We will choose one physical location in Europe and will also stream 
> the whole event online, as in the previous year, for the people who cannot or 
> do not want to travel.
> If nobody is against, I will start some organization plan.
>
> Kind regards,
>
>
>
>
>


RE: Zones and regions in CloudStack

2021-09-24 Thread Alex Mattioli
Hi Jonas,

A Region is one instance of the ACS database and a set of management servers, 
inside a Region you have Zones.

There are many ways to carve that but one example would be:
Regions: Europe, North America, Asia.  
Then say, inside the Europe region you can have the zones: London, Paris, 
Frankfurt, etc..etc..

Does that answer your question?

Cheers
Alex

 


-Original Message-
From: Jonas Porsche  
Sent: 24 September 2021 14:07
To: dev@cloudstack.apache.org
Subject: Zones and regions in CloudStack

Hi Rohit,

just a short question regarding the mail I sent a few days ago. What is the difference 
between zones and regions?

Kind regards
Jonas


__

Jonas Porsche
BA-Student der Informatik

EWERK DIGITAL GmbH
Br?hl 24, D-04109 Leipzig
P
F +49 341 42649 - 98
j.pors...@ewerk.com
www.ewerk.com

Gesch?ftsf?hrer:
Dr. Erik Wende, Hendrik Schubert, Tassilo M?schke
Registergericht: Leipzig HRB 9065

Support:
+49 341 42649 555

Zertifiziert nach:
ISO/IEC 27001:2013
DIN EN ISO 9001:2015
DIN ISO/IEC 2-1:2018

ISAE 3402 Typ II Assessed

EWERK-Blog | 
LinkedIn | 
Xing | 
Twitter | 
Facebook


Ausk?nfte und Angebote per Mail sind freibleibend und unverbindlich.

Disclaimer Privacy:
Der Inhalt dieser E-Mail (einschlie?lich etwaiger beigef?gter Dateien) ist 
vertraulich und nur f?r den Empf?nger bestimmt. Sollten Sie nicht der 
bestimmungsgem??e Empf?nger sein, ist Ihnen jegliche Offenlegung, 
Vervielf?ltigung, Weitergabe oder Nutzung des Inhalts untersagt. Bitte 
informieren Sie in diesem Fall unverz?glich den Absender und l?schen Sie die 
E-Mail (einschlie?lich etwaiger beigef?gter Dateien) von Ihrem System. Vielen 
Dank.

The contents of this e-mail (including any attachments) are confidential and 
may be legally privileged. If you are not the intended recipient of this 
e-mail, any disclosure, copying, distribution or use of its contents is 
strictly prohibited, and you should please notify the sender immediately and 
then delete it (including any attachments) from your system. Thank you.



RE: IPV6 in Isolated/VPC networks

2021-09-10 Thread Alex Mattioli
Hi Rohit,

I'd go for option 2, don't see a point tracking anything smaller than a /64 tbh.

Cheers
Alex

 


-Original Message-
From: Rohit Yadav  
Sent: 09 September 2021 12:44
To: dev@cloudstack.apache.org
Subject: Re: IPV6 in Isolated/VPC networks

Thanks Alex, Kristaps. I've updated the design doc to reflect two agreements:

  *   Allocate /64 for both isolated network and VPC tiers, no large allocation 
of prefixes to VPC (cons: more static routing rules for upstream router/admins)
  *   All systemvms (incl. ssvm, cpvm, VRs) get IPv6 address if zone has a 
dedicated /64 prefix/block for systemvms

The only outstanding question now is:

  *   How do we manage IPv6 usage? Can anyone advise how we do IPv6 usage for 
shared network (design, implementation and use-cases?)
Option1: We don't do it, all user VMs nics have ipv4 address whose usage 
we don't track. For public VR/nics/networks, we can simply add the IPv6 details 
for a related IPv4 address.
Option2: Implement a separate, first-class IPv6 address or /64 prefix 
tracking/management and usage for all VMs and systemvms nic (this means 
account/domain level limits and new billing/records)
Option3: other thoughts?


Regards.

____
From: Alex Mattioli 
Sent: Wednesday, September 8, 2021 23:24
To: dev@cloudstack.apache.org 
Subject: RE: IPV6 in Isolated/VPC networks

Hi Rohit, Kristaps,

I'd say option 1 as well.  It does create a bit more overhead with static 
routes, but if that's automated for a VPC it can also be easily automated for 
several tiers of a VPC.  We also don't constrain the number of tiers in a VPC.
It has the added advantage of being closer to the desired behaviour with dynamic 
routing in the future, where a VPC VR can announce several subnets upstream.

Cheers
Alex







 


-Original Message-
From: Rohit Yadav 
Sent: 08 September 2021 19:04
To: dev@cloudstack.apache.org
Subject: Re: IPV6 in Isolated/VPC networks

Hi Kristaps,

Thanks for sharing, I suppose that means individual tiers should be allocated 
/64 instead of larger ipv6 blocks to the whole VPC which could cause wastage.

Any objection from anybody?

Regards.

From: Kristaps Cudars 
Sent: Wednesday, September 8, 2021 9:24:01 PM
To: dev@cloudstack.apache.org 
Subject: Re: IPV6 in Isolated/VPC networks

Hello,

I asked the networking team to comment on "How should the IPv6 block/allocation 
work in VPCs?"
Option1: They haven't seen devices lately with limits on how many static routes 
can be created.
Option2: With /60 and /62 assignments and a big quantity of routers, the IPv6 
assignment from RIPE NCC can be drained fast.

/48 contains 65,536 /64s
/60 contains 16 /64s
65,536 / 16 = 4,096 routers


On 2021/09/07 11:59:09, Rohit Yadav  wrote:
> All,
>
> After another iteration with Alex, I've updated the design doc. Kindly review:
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/IPv6+Support+in
> +Isolated+Network+and+VPC+with+Static+Routing
>
>
> ... and advise on some outstanding questions:
>
>   *   How should the IPv6 block/allocation work in VPCs?
> Option1: Should this be simply /64 allocation on any new tier, the 
> cons of this option is one static route/rule per VPC tier. (many 
> upstream routers may have limit on no. of static routes?)
> Option2: Let user ask/specify tier size, say /60 (for 16 tiers) or /62 (4 
> tiers) for the VPC, this can be filtered based on the vpc.max.networks global 
> setting (3 is default). The pros of this option are less no. of static 
> route/rule and easy programming, but potential wastage of multiple /64 prefix 
> blocks for unused/uncreated tiers.
>   *   How do we manage IPv6 usage? Can anyone advise how we do IPv6 usage for 
> shared network (design, implementation and use-cases?)
> Option1: We don't do it, all user VMs nics have ipv4 address whose 
> usage we don't track. For public VR/nics/networks, we can simply add the IPv6 
> details for a related IPv4 address.
> Option2: Implement a separate, first-class IPv6 address or /64 prefix 
> tracking/management and usage for all VMs and systemvms nic (this means 
> account/domain level limits and new billing/records)
>   *   Enable IPv6 on systemvms (specifically SSVM and CPVM) by default if 
> zone has a IPv6 address block allocated/assigned for use for systemvms (this 
> was mainly thought for VRs, but why no ssvm and cpvms too - any cons of this?)
>   *
>
> Regards.
>
> 
> From: Rohit Yadav 
> Sent: Thursday, August 19, 2021 15:45
> To: dev@cloudstack.apache.org 
> Subject: Re: IPV6 in Isolated/VPC networks
>
> Hi all,
>
> I've taken feedback from this thread and wrote this design doc:
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/IPv6+Support+in
> +Is

RE: IPV6 in Isolated/VPC networks

2021-09-08 Thread Alex Mattioli
Hi Rohit, Kristaps,

I'd say option 1 as well.  It does create a bit more overhead with static 
routes, but if that's automated for a VPC it can also be easily automated for 
several tiers of a VPC.  We also don't constrain the number of tiers in a VPC.
It has the added advantage of being closer to the desired behaviour with dynamic 
routing in the future, where a VPC VR can announce several subnets upstream.

Cheers
Alex




 


-Original Message-
From: Rohit Yadav  
Sent: 08 September 2021 19:04
To: dev@cloudstack.apache.org
Subject: Re: IPV6 in Isolated/VPC networks

Hi Kristaps,

Thanks for sharing, I suppose that means individual tiers should be allocated 
/64 instead of larger ipv6 blocks to the whole VPC which could cause wastage.

Any objection from anybody?

Regards.

From: Kristaps Cudars 
Sent: Wednesday, September 8, 2021 9:24:01 PM
To: dev@cloudstack.apache.org 
Subject: Re: IPV6 in Isolated/VPC networks

Hello,

I asked the networking team to comment on "How should the IPv6 block/allocation 
work in VPCs?"
Option1: They haven't seen devices lately with limits on how many static routes 
can be created.
Option2: With /60 and /62 assignments and a big quantity of routers, the IPv6 
assignment from RIPE NCC can be drained fast.

/48 contains 65,536 /64s
/60 contains 16 /64s
65,536 / 16 = 4,096 routers


On 2021/09/07 11:59:09, Rohit Yadav  wrote:
> All,
>
> After another iteration with Alex, I've updated the design doc. Kindly review:
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/IPv6+Support+in
> +Isolated+Network+and+VPC+with+Static+Routing
>
>
> ... and advise on some outstanding questions:
>
>   *   How should the IPv6 block/allocation work in VPCs?
> Option1: Should this be simply /64 allocation on any new tier, the 
> cons of this option is one static route/rule per VPC tier. (many 
> upstream routers may have limit on no. of static routes?)
> Option2: Let user ask/specify tier size, say /60 (for 16 tiers) or /62 (4 
> tiers) for the VPC, this can be filtered based on the vpc.max.networks global 
> setting (3 is default). The pros of this option are less no. of static 
> route/rule and easy programming, but potential wastage of multiple /64 prefix 
> blocks for unused/uncreated tiers.
>   *   How do we manage IPv6 usage? Can anyone advise how we do IPv6 usage for 
> shared network (design, implementation and use-cases?)
> Option1: We don't do it, all user VMs nics have ipv4 address whose 
> usage we don't track. For public VR/nics/networks, we can simply add the IPv6 
> details for a related IPv4 address.
> Option2: Implement a separate, first-class IPv6 address or /64 prefix 
> tracking/management and usage for all VMs and systemvms nic (this means 
> account/domain level limits and new billing/records)
>   *   Enable IPv6 on systemvms (specifically SSVM and CPVM) by default if 
> zone has a IPv6 address block allocated/assigned for use for systemvms (this 
> was mainly thought for VRs, but why no ssvm and cpvms too - any cons of this?)
>   *
>
> Regards.
>
> 
> From: Rohit Yadav 
> Sent: Thursday, August 19, 2021 15:45
> To: dev@cloudstack.apache.org 
> Subject: Re: IPV6 in Isolated/VPC networks
>
> Hi all,
>
> I've taken feedback from this thread and wrote this design doc:
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/IPv6+Support+in
> +Isolated+Network+and+VPC+with+Static+Routing
>
> Kindly review and advise if I missed anything or anything that needs to be 
> changed/updated. You may comment on the wiki directly too.
>
> Kindly suggest your views on the following (also in the design doc above):
>
> Outstanding Questions:
>
>   *   Should admin or user be able to specify how VPC super CIDRs are 
> created/needed; for example a user can ask for VPC with /60 super CIDR? Or 
> should CloudStack automatically find/allocate a /64 for a new VPC tier from 
> the root-admin configured /64-/48 block?
>   *   Should we explore FRR and iBGP or other strategies to do dynamic 
> routing which may not require advanced/complex configuration in the VR or for 
> the users/admin?
>   *   With SLAAC and no dhcpv6, is there a way to support secondary IPv6 
> addresses (or floating IPv6 addresses for VR/public traffic) for guest VM's 
> nics?
>   *   Any thoughts on UI/UX for firewall/routing management?
>   *   Any other feature/support for isolated network or VPC feature that must 
> be explored or supported such as PF, VPN, LB, vpc static routes, vpc gateway 
> etc.
>   *   For usage - should we have any consideration, or should we assume that 
> IPv4 and IPv6 address will go together for every nic; so IPv6 usage for a nic 
>>>> is in tandem with IPv4 address for a nic, i.e. no explicit/new billing/usage 
> needed?
>   *   For smoketests, local dev-test should we explore ULA? Unique Local 
> Address - in the range fc00::/7. Typically only within the 'local' half 
> fd00::/8. ULA for IPv6 is analogous to IPv4 private network addre

RE: High increase in bandwidth usage

2021-09-08 Thread Alex Mattioli
Hi,

That would be bandwidth between which hosts?   Also, what exactly would you 
call normal and excessive bandwidth usage?

Regards
Alex

 


-Original Message-
From: Saurabh Rapatwar  
Sent: 08 September 2021 16:46
To: us...@cloudstack.apache.org
Cc: dev@cloudstack.apache.org
Subject: Re: High increase in bandwidth usage

Hi

I am facing the same problem. Please suggest any solution group members.

Thanks in advance

On Tue, 7 Sep, 2021, 11:30 pm R R,  wrote:

> I installed a cloudstack server on a bare metal server (all in one 
> installation). The bandwidth usage was normal. After a couple days, 
> the bandwidth usage was very high, got several emails as well from the 
> DC. I tried to limit it using wondershaper. Worked for a while, but 
> then I was locked out of the machine. Couldn't ssh into the machine. 
> Had to format the machine.
>
> The same thing happened again. I am able to ssh into the system for 
> now, bandwidth usage is high, cloudstack server isn't responding. 
> Attaching ss of cloudstack management server logs.
>
> Please address me if I am doing something wrong, or the solution to 
> this problem.
>


RE: IPV6 in Isolated/VPC networks

2021-08-17 Thread Alex Mattioli
>>>>>>>>> route, with peering ip both end  as one /48 can have a lot of 
>>>>>>>>> /64 on it.  And
>>>>>>>> hardware
>>>>>>>>> budgeting for new IPv6-VR will become very important, as all 
>>>>>>>>> traffic will need to pass over it .
>>>>>>>>>
>>>>>>>>
>>>>>>>> Routing or NAT is the same for the VR. You don't need a very 
>>>>>>>> beefy VR for this.
>>>>>>>>
>>>>>>>>> It will be like
>>>>>>>>>
>>>>>>>>> ISP Router  -- >  (new IPV6-VR )  > AdvanceZone-VR 
>>>>>>>>> > VM
>>>>>>>>>
>>>>>>>>> Relationship of (new IPv6 VR) and AdvanceZone-VR , may be 
>>>>>>>>> considering on OSPF instead of  BGP , otherwise few thousand 
>>>>>>>>> of AdvanceZone-VR wil have few thousand of BGP session. on 
>>>>>>>>> new-IPv6-VR
>>>>>>>>>
>>>>>>>>> Also, I suppose we cannot do ISP router. -->. Advancezone VR direct,  
>>>>>>>>>  ,
>>>>>>>>> otherwise ISP router will be full of /64 prefix route either on BGP( 
>>>>>>>>> Many
>>>>>>>>> BGP Session) , or  Many Static route .   If few thousand account, ti 
>>>>>>>>> will
>>>>>>>>> be few thousand of BGP session with ISP router or few thousand 
>>>>>>>>> static
>>>>>>>> route
>>>>>>>>> which  is not possible .
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Thu, Jul 15, 2021 at 10:47 PM Wido den Hollander 
>>>>>>>>> 
>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> But you still need routing. See the attached PNG (and draw.io XML).
>>>>>>>>>>
>>>>>>>>>> You need to route the /48 subnet TO the VR which can then 
>>>>>>>>>> route it to the Virtual Networks behind the VR.
>>>>>>>>>>
>>>>>>>>>> There is no other way then routing with either BGP or a Static route.
>>>>>>>>>>
>>>>>>>>>> Wido
>>>>>>>>>>
>>>>>>>>>> Op 15-07-2021 om 12:39 schreef Hean Seng:
>>>>>>>>>>> Or explain like this :
>>>>>>>>>>>
>>>>>>>>>>> 1) Cloudstack generate list of /64 subnet from /48 that 
>>>>>>>>>>> Network admin assigned to Cloudstack
>>>>>>>>>>> 2) Cloudsack allocated the subnet (that generated from 
>>>>>>>>>>> step1) to
>>>>>>>> Virtual
>>>>>>>>>>> Router, one Virtual Router have one subniet /64
>>>>>>>>>>> 3) Virtual Router allocate single IPv6 (within the range of 
>>>>>>>>>>> /64 allocated to VR)  to VM
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Thu, Jul 15, 2021 at 6:25 PM Hean Seng 
>>>>>>>>>>> mailto:heans...@gmail.com>> wrote:
>>>>>>>>>>>
>>>>>>>>>>>   Hi Wido,
>>>>>>>>>>>
>>>>>>>>>>>   I think the /48 is at physical router as gateway , 
>>>>>>>>>>> and subnet of
>>>>>>>> /64
>>>>>>>>>>>   at VR of Cloudstack.   Cloudstack only keep which /48 
>>>>>>>>>>> prefix and
>>>>>>>>>>>   vlan information of this /48 to be later split the  /64. 
>>>>>>>>>>> to 

RE: IPV6 in Isolated/VPC networks

2021-08-17 Thread Alex Mattioli
+1 to keeping the scope tight on phase 1 and then expanding functionality later 
on.

 


-Original Message-
From: Rohit Yadav  
Sent: 17 August 2021 11:26
To: dev@cloudstack.apache.org
Subject: Re: IPV6 in Isolated/VPC networks

Hi Wei,

Yes, root admins should add all the /64 and bigger /56, /48 blocks which 
CloudStack will use to calculate and allocate /64 blocks from for an IPv6 enabled 
isolated network or VPC tier; for every /64 allocation/assignment to such a 
network, a static route for the /64 target with the IPv6 address of the VR as 
the gateway should be added to the core-router/ISP gateway.

Based on what I read, the RFCs and general IPv6 addressing practice lean towards 
deprecating stateful DHCP(v6) and promoting the use of a /64 network for a 
direct-attached network (think of an L2 network or LAN), so I think we should 
only support SLAAC for isolated networks and VPC tiers, and IPv6 for servers 
won't need privacy extensions (as Wido advised in an earlier reply).

If someone has a non-standard need, they may always create and use a shared 
network with custom-defined IPv6 ranges. Or, at least in the phase1 of the 
implementation I would prefer the scope to be tight and limited that is 
acceptable for the community.


Regards.


From: Alex Mattioli 
Sent: Tuesday, August 17, 2021 14:59
To: dev@cloudstack.apache.org 
Subject: RE: IPV6 in Isolated/VPC networks

Hi Wei,

That's correct. The network operator would need to create static routes for the 
/64s, both for Isolated networks and VPCs.  The next-hop for those would be the 
outside interface of the Virtual Router, which will have an IP from the 
"public" /64.

Cheers
Alex




 


-Original Message-
From: Wei ZHOU 
Sent: 17 August 2021 10:20
To: dev@cloudstack.apache.org
Subject: Re: IPV6 in Isolated/VPC networks

Hi Wido,

(cc to Rohit and Alex)

It is a good suggestion to use FRR for ipv6. The configuration is quite simple 
and the VMs can get SLAAC, routes, etc.
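
For illustration, the kind of FRR config meant here can be as small as the 
following on the VR's guest interface (interface name and prefix are made up):

interface eth2
 ! announce the guest /64 via router advertisements so VMs autoconfigure (SLAAC)
 ipv6 nd prefix 2001:db8:100:10::/64
 no ipv6 nd suppress-ra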

The privacy extension does not look the same as what you mentioned, see
https://datatracker.ietf.org/doc/html/rfc4941

You are right. To use static routing, the admins need to configure the routes 
in the upstream router, and add some ipv6 ranges (eg /56 for VPCs and /64 for 
isolated networks) and their next-hop  (which will be configured in VRs) in 
CloudStack. CloudStack will pick up an IPv6 range and assign it to an isolated 
network or vpc. @Rohit, correct me if I'm wrong.

I have a question, it looks stateless dhcpv6 (SLAAC from router/VR, router/dns 
etc via RA messages) will be the only option for now (related to your pr 
https://github.com/apache/cloudstack/pull/3077) . Would it be good to provide 
stateful dhcpv6 (which can be implemented by dnsmasq) as an option in 
cloudstack ? The advantages are
(1) support other ipv6 cidr sizes than /64.
(2) we can assign a specified Ipv6 address to a vm. vm Ipv6 addresses can be 
changed
(4) an IPv6 address can be re-used by multiple vms.
The problem is, stateful dhcpv6 does not support routers, nameservers, etc.;
we need to figure it out (probably use radvd/frr and dnsmasq both).

-Wei


On Fri, 13 Aug 2021 at 12:19, Wido den Hollander  wrote:

> Hi,
>
> See my inline responses:
>
> Op 11-08-2021 om 14:26 schreef Rohit Yadav:
> > Hi all,
> >
> > Thanks for your feedback and ideas, I've gone ahead with discussing 
> > them
> with Alex and came up with a PoC/design which can be implemented in 
> the following phases:
> >
> >*   Phase1: implement ipv6 support in isolated networks and VPC with
> static routing
> >*   Phase2: discuss and implement support for dynamic routing (TBD)
> >
> > For Phase1 here's the high-level proposal:
> >
> >*   IPv6 address management:
> >   *   At the zone level root-admin specifies a /64 public range that
> will be used for VRs, then they can add a /48, or /56 IPv6 range for 
> guest networks (to be used by isolated networks and VPC tiers)
> >   *   On creation of any IPv6 enabled isolated network or VPC tier,
> from the /48 or /56 block a /64 network is allocated/used
> >   *   We assume SLAAC and autoconfiguration, no DHCPv6 in the zone
> (discuss: is privacy a concern, can privacy extensions rfc4941 of 
> slaac be
> explored?)
>
> Privacy Extensions are only a concern for client devices which roam 
> between different IPv6 networks.
>
> If you IPv6 address of a client keeps the same suffix (MAC based) and 
> switches network then only the prefix (/64) will change.
>
> This way a network like Google, Facebook, etc could track your device 
> moving from network to network if they only look at the last 64-bits 
> of the IPv6 address.
>
> For servers this is not a problem as you already know in which network 
> they are.
>
> >*   Network 

RE: IPV6 in Isolated/VPC networks

2021-08-17 Thread Alex Mattioli
Hi Wei,

That's correct. The network operator would need to create static routes for the 
/64s, both for Isolated networks and VPCs.  The next-hop for those being the 
outside interface of the Virtual Router, which will have an IP from the 
"public" /64.

Cheers
Alex

 


-Original Message-
From: Wei ZHOU  
Sent: 17 August 2021 10:20
To: dev@cloudstack.apache.org
Subject: Re: IPV6 in Isolated/VPC networks

Hi Wido,

(cc to Rohit and Alex)

It is a good suggestion to use FRR for ipv6. The configuration is quite simple 
and the VMs can get SLAAC, routes, etc.

The privacy extension does not look like the same thing you mentioned; see
https://datatracker.ietf.org/doc/html/rfc4941

You are right. To use static routing, the admins need to configure the routes 
in the upstream router, and add some IPv6 ranges (e.g. /56 for VPCs and /64 for 
isolated networks) and their next hops (which will be configured on the VRs) in 
CloudStack. CloudStack will then pick an IPv6 range and assign it to an isolated 
network or VPC. @Rohit, correct me if I'm wrong.

I have a question: it looks like stateless DHCPv6 (SLAAC from the router/VR, with 
router/DNS etc. delivered via RA messages) will be the only option for now 
(related to your PR https://github.com/apache/cloudstack/pull/3077). Would it be 
good to provide stateful DHCPv6 (which can be implemented with dnsmasq) as an 
option in CloudStack? The advantages are:
(1) support for IPv6 CIDR sizes other than /64;
(2) we can assign a specified IPv6 address to a VM;
(3) VM IPv6 addresses can be changed;
(4) an IPv6 address can be re-used by multiple VMs.
The problem is that stateful DHCPv6 does not support routers, nameservers, etc.; 
we need to figure that out (probably by using both radvd/FRR and dnsmasq).
-Wei


On Fri, 13 Aug 2021 at 12:19, Wido den Hollander  wrote:

> Hi,
>
> See my inline responses:
>
> On 11-08-2021 at 14:26, Rohit Yadav wrote:
> > Hi all,
> >
> > Thanks for your feedback and ideas, I've gone ahead with discussing 
> > them
> with Alex and came up with a PoC/design which can be implemented in 
> the following phases:
> >
> >*   Phase1: implement ipv6 support in isolated networks and VPC with
> static routing
> >*   Phase2: discuss and implement support for dynamic routing (TBD)
> >
> > For Phase1 here's the high-level proposal:
> >
> >*   IPv6 address management:
> >   *   At the zone level root-admin specifies a /64 public range that
> will be used for VRs, then they can add a /48, or /56 IPv6 range for 
> guest networks (to be used by isolated networks and VPC tiers)
> >   *   On creation of any IPv6 enabled isolated network or VPC tier,
> from the /48 or /56 block a /64 network is allocated/used
> >   *   We assume SLAAC and autoconfiguration, no DHCPv6 in the zone
> (discuss: is privacy a concern, can privacy extensions rfc4941 of 
> slaac be
> explored?)
>
> Privacy Extensions are only a concern for client devices which roam 
> between different IPv6 networks.
>
> If the IPv6 address of a client keeps the same suffix (MAC-based) and the 
> client switches networks, then only the prefix (/64) will change.
>
> This way a network like Google, Facebook, etc could track your device 
> moving from network to network if they only look at the last 64-bits 
> of the IPv6 address.
>
> For servers this is not a problem as you already know in which network 
> they are.
>
> >*   Network offerings: root-admin can create new network offerings
> (with VPC too) that specifies a network stack option:
> >   *   ipv4 only (default, for backward compatibility all
> networks/offerings post-upgrade migrate to this option)
> >   *   ipv4-and-ipv6
> >   *   ipv6-only (this can be phase 1.b)
> >   *   A new routing option: static (phase1), dynamic (phase2, with
> multiple sub-options such as ospf/bgp etc...)
>
> This means that the network admin will need to statically route the 
> IPv6 subnet to the VR's outside IPv6 address, for example, on a JunOS router:
>
> set routing-options rib inet6.0 static route 2001:db8:500::/48 
> next-hop
> 2001:db8:100::50
>
> I'm assuming that 2001:db8:100::50 is the address of the VR on the 
> outside (/64) network. In reality this will probably be a longer 
> address, but this is just for the example.
>
> >*   VR changes:
> >   *   VR gets its guest and public nics set to inet6 auto
> >   *   For each /64 allocated to guest network and VPC tiers, radvd
> is configured to do RA
>
> radvd is fine, but looking at phase 2 with dynamic routing you might 
> already want to look into FRRouting. FRR can also advertise RAs while 
> not doing any routing.
>
> interface ens4
>no ipv6 nd suppress-ra
>ipv6 nd prefix 2001:db8:500::/64
>ipv6 nd rdnss 2001:db8:400::53 2001:db8:200::53
>
> See: http://docs.frrouting.org/en/latest/ipv6.html
>
> >   *   Firewall: a new ipv6 zone/chain is created for ipv6 where ipv6
> firewall rules (ACLs, ingress, egress) are implemented; ACLs between 
> VPC tiers are managed/implemented by ipv6 firewall on VR
>
> Plea

RE: IPV6 in Isolated/VPC networks

2021-08-12 Thread Alex Mattioli
> [...]r split the /64 to VR.
>
> And the instances get a single IPv6 address out of the /64, and the VR gets
> the /64. The default gateway goes to the /48 of the physical router IP. In
> this case there is no need for any BGP router.
>
> Similar concept as IPv4:
>
> A /48 subnet of IPv6 is equivalent to the current /24 subnet of IPv4 that is
> created in a network, and a /64 of IPv6 is equivalent to a single IPv4
> address assigned to a VM.
>
> On Thu, Jul 15, 2021 at 5:31 PM Wido den Hollander <w...@widodh.nl> wrote:
>
> > On 14-07-2021 at 16:44, Hean Seng wrote:
> > > Hi
> > >
> > > I replied in another thread; I think we do not need to implement BGP
> > > or OSPF, that would be complicated.
> > >
> > > We only need to assign an IPv6 /64 prefix to the Virtual Router (VR)
> > > in the NAT zone, and the VR is responsible for delivering a single
> > > IPv6 address to each VM via DHCPv6.
> > >
> > > In the VR, you need to have a default IPv6 route to the physical
> > > router's /48 IP as the IPv6 gateway. Then it should be done.
> > >
> > > Example:
> > > Physical router interface
> > >   IPv6 IP: 2000:::1/48
> > >
> > > CloudStack virtual router: 2000::200:201::1/64 with a default IPv6
> > > route to the router IP 2000:::1, and the CloudStack virtual router's
> > > DHCP allocates an IP to the VM, and the VM will have a default route
> > > to the VR IPv6 2000::200:201::1
> > >
> > > So CloudStack needs to allow the user to enter the IPv6 gateway and
> > > the /48 IPv6 prefix; it will then allocate the /64s to the VRs itself
> > > and make sure allocations do not overlap.
> >
> > But NAT is truly not the solution with IPv6. IPv6

RE: Cloudstack GPU

2021-07-26 Thread Alex Mattioli
Thanks Rohit,
Those are the articles I'm basing my ideas on as well; it looks like the only 
available alternative is passthrough, with its limitations.

I am wondering if anyone out there is actually running this in production.

Cheers,
Alex

 


-Original Message-
From: Rohit Yadav  
Sent: 26 July 2021 15:44
To: us...@cloudstack.apache.org; dev@cloudstack.apache.org
Subject: Re: Cloudstack GPU

Hi Alex,

I've heard/seen some users using GPUs with XenServer for graphical rendering, 
and I remember somebody discussing GPUs on KVM, which is possible by using the 
extraconfig feature while deploying a VM (the only limitation is that on KVM you 
cannot share one GPU across VMs; however, if your server has multiple GPUs you 
can assign them to one or more VMs).
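
For anyone wanting to try that, a hedged sketch of what such an extraconfig 
snippet might look like for PCI passthrough of a GPU on KVM is below; the PCI 
address is a placeholder, and the exact XML wrapping plus the related global 
settings (enable.additional.vm.configuration and 
allow.additional.vm.configuration.list.kvm) should be verified against the 
release in use before relying on it:

   <devices>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <source>
         <!-- placeholder PCI address of the GPU on the host -->
         <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
       </source>
     </hostdev>
   </devices>

The snippet would then be passed, URL-encoded, as an extraconfig-1 style 
parameter on the deployVirtualMachine call, per the extraconfig feature 
mentioned above.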

I found this old wiki: 
https://cwiki.apache.org/confluence/display/CLOUDSTACK/GPU+and+vGPU+support+for+CloudStack+Guest+VMs
 (GPU models are enterprise Nvidia based)


Regards.

____
From: Alex Mattioli 
Sent: Thursday, July 22, 2021 19:07
To: us...@cloudstack.apache.org ; 
dev@cloudstack.apache.org 
Subject: Cloudstack GPU

Hi all,
Anyone out there using GPUs with Cloudstack?
If so, with which hypervisor and GPU?

Thanks,
Alex




 




Cloudstack GPU

2021-07-22 Thread Alex Mattioli
Hi all,
Anyone out there using GPUs with Cloudstack?
If so, with which hypervisor and GPU?

Thanks,
Alex

 



RE: IPV6 in Isolated/VPC networks

2021-07-14 Thread Alex Mattioli
Hi Hean,
Do you mean using NAT66?  Or did I miss something?

Regards,
Alex

 


-Original Message-
From: Hean Seng  
Sent: 14 July 2021 16:44
To: us...@cloudstack.apache.org
Cc: Wido den Hollander ; dev@cloudstack.apache.org; Wei Zhou 
; Rohit Yadav ; Gabriel 
Beims Bräscher 
Subject: Re: IPV6 in Isolated/VPC networks

Hi

I replied in another thread; I think we do not need to implement BGP or OSPF, 
that would be complicated.

We only need to assign an IPv6 /64 prefix to the Virtual Router (VR) in the NAT 
zone, and the VR is responsible for delivering a single IPv6 address to each VM 
via DHCPv6.

In the VR, you need to have a default IPv6 route to the physical router's /48 IP 
as the IPv6 gateway. Then it should be done.

Example:
Physical router interface
 IPv6 IP: 2000:::1/48

CloudStack virtual router: 2000::200:201::1/64 with a default IPv6 route to the 
router IP 2000:::1, and the CloudStack virtual router's DHCP allocates an IP to 
the VM, and the VM will have a default route to the VR IPv6 2000::200:201::1

So CloudStack needs to allow the user to enter the IPv6 gateway and the /48 IPv6 
prefix; it will then allocate the /64s to the VRs itself and make sure 
allocations do not overlap.
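
As a minimal illustration of the VR side of that idea (the address below is a 
documentation placeholder, not one of the addresses above):

   # on the VR: default IPv6 route pointing at the physical router's interface
   ip -6 route add default via 2001:db8::1 dev eth0

Address assignment to the VMs would then happen via DHCPv6 as described above.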







On Wed, Jul 14, 2021 at 8:55 PM Alex Mattioli 
wrote:

> Hi Wido,
> That's pretty much in line with our thoughts, thanks for the input.  I 
> believe we agree on the following points then:
>
> - FRR with BGP (no OSPF)
> - Route /48 (or/56) down to the VR
> - /64 per network
> - SLAAC for IP addressing
>
> I believe the next big question is then "on which level of ACS do we 
> manage AS numbers?".  I see three options:
> 1) Private AS number on a per-zone basis
> 2) Root Admin assigned AS number on a domain/account basis
> 3) End-user driven AS number on a per network basis (for bring your 
> own AS and IP scenario)
>
> Thoughts?
>
> Cheers
> Alex
>
>
>
>
> -Original Message-
> From: Wido den Hollander 
> Sent: 13 July 2021 15:08
> To: dev@cloudstack.apache.org; Alex Mattioli 
> 
> Cc: Wei Zhou ; Rohit Yadav < 
> rohit.ya...@shapeblue.com>; Gabriel Beims Bräscher 
> 
> Subject: Re: IPV6 in Isolated/VPC networks
>
>
>
> On 7/7/21 1:16 PM, Alex Mattioli wrote:
> > Hi all,
> > @Wei Zhou<mailto:wei.z...@shapeblue.com> @Rohit Yadav<mailto:rohit.ya...@shapeblue.com> and myself are investigating how to enable
> IPV6 support on Isolated and VPC networks and would like your input on it.
> > At the moment we are looking at implementing FRR with BGP (and 
> > possibly
> OSPF) on the ACS VR.
> >
> > We are looking for requirements, recommendations, ideas, rants,
> etc...etc...
> >
>
> Ok! Here we go.
>
> I think that you mean that the VR will actually route the IPv6 traffic 
> and for that you need to have a way of getting a subnet routed to the VR.
>
> BGP is probably your best bet here. Although OSPFv3 technically 
> supports this, it is very badly implemented in FRR, for example.
>
> Now FRR is a very good router and one of the fancy features it 
> supports is BGP Unnumbered. This allows for auto-configuration of BGP 
> over an L2 network when both sides are sending Router Advertisements. 
> This is very easy for flexible BGP configurations where both sides have 
> dynamic IPs.
>
> What you want to do is get a /56, /48 or something larger than a
> /64 routed to the VR.
>
> Now you can sub-segment this into separate /64 subnets. You don't want 
> to go smaller than a /64, as that prevents you from using SLAAC for 
> IPv6 address configuration. This is how it works for Shared Networks 
> now in Basic and Advanced Zones.
>
> FRR can now also send out the Router Advertisements on the downlinks 
> sending out:
>
> - DNS servers
> - DNS domain
> - Prefix (/64) to be used
>
> There is no need for DHCPv6. You can calculate the IPv6 address the VM 
> will obtain by using the MAC and the prefix.
>
> So in short:
>
> - Using BGP you routed a /48 to the VR
> - Now you split this into /64 subnets towards the isolated networks
>
> Wido
>
> > Alex Mattioli
> >
> >
> >
> >
>
>

--
Regards,
Hean Seng


RE: IPV6 in Isolated/VPC networks

2021-07-14 Thread Alex Mattioli
Hi Kristaps,
Thanks for the nice schematic, pretty much where we were going.

I just didn't understand your first statement " I would like to argue that 
implementing a dynamic routing protocol and associated security 
problems/challenges with it to have IPv6 route inserted in L3 router/s is not a 
good goal."

Would you mind clarifying/expanding on it please?

Thanks
Alex

 


-Original Message-
From: Kristaps Cudars  
Sent: 13 July 2021 20:44
To: dev@cloudstack.apache.org
Subject: Re: IPV6 in Isolated/VPC networks

Hi,

I would like to argue that implementing a dynamic routing protocol and associated 
security problems/challenges with it to have IPv6 route inserted in L3 router/s 
is not a good goal.

In my opinion, dynamic routing on the VR would be interesting for scaling the 
availability of a service across several datacenters if they participate in the 
same AS. With BGP you could advertise the same IP (an IPv6 /128 and/or an IPv4 
/32) from different VRs located in different DCs.

I would delegate the task of route creation to ACS, somewhere around the moment 
of VR creation.
It could happen over SSH/SNMP/REST API or Ansible - something that supports a 
wide variety of vendors/devices.

I have created a rough schematic of how it could look on the VR side: 
https://dice.lv/acs/ACS_router_v2.pdf


On 2021/07/13 13:08:20, Wido den Hollander  wrote: 
> 
> 
> On 7/7/21 1:16 PM, Alex Mattioli wrote:
> > Hi all,
> > @Wei Zhou<mailto:wei.z...@shapeblue.com> @Rohit 
> > Yadav<mailto:rohit.ya...@shapeblue.com> and myself are investigating how to 
> > enable IPV6 support on Isolated and VPC networks and would like your input 
> > on it.
> > At the moment we are looking at implementing FRR with BGP (and possibly 
> > OSPF) on the ACS VR.
> > 
> > We are looking for requirements, recommendations, ideas, rants, etc...etc...
> > 
> 
> Ok! Here we go.
> 
> I think that you mean that the VR will actually route the IPv6 traffic 
> and for that you need to have a way of getting a subnet routed to the VR.
> 
> BGP is probably your best bet here. Although OSPFv3 technically 
> supports this, it is very badly implemented in FRR, for example.
> 
> Now FRR is a very good router and one of the fancy features it 
> supports is BGP Unnumbered. This allows for auto-configuration of BGP 
> over an L2 network when both sides are sending Router Advertisements. 
> This is very easy for flexible BGP configurations where both sides have 
> dynamic IPs.
> 
> What you want to do is get a /56, /48 or something larger than a /64
> routed to the VR.
> 
> Now you can sub-segment this into separate /64 subnets. You don't want 
> to go smaller than a /64, as that prevents you from using SLAAC for 
> IPv6 address configuration. This is how it works for Shared Networks 
> now in Basic and Advanced Zones.
> 
> FRR can now also send out the Router Advertisements on the downlinks 
> sending out:
> 
> - DNS servers
> - DNS domain
> - Prefix (/64) to be used
> 
> There is no need for DHCPv6. You can calculate the IPv6 address the VM 
> will obtain by using the MAC and the prefix.
> 
> So in short:
> 
> - Using BGP you routed a /48 to the VR
> - Now you split this into /64 subnets towards the isolated networks
> 
> Wido
> 
> > Alex Mattioli
> > 
> >  
> > 
> > 
> 


RE: IPV6 in Isolated/VPC networks

2021-07-14 Thread Alex Mattioli
Hi Wido,
That's pretty much in line with our thoughts, thanks for the input.  I believe 
we agree on the following points then:

- FRR with BGP (no OSPF)
- Route /48 (or/56) down to the VR
- /64 per network
- SLAAC for IP addressing

I believe the next big question is then "on which level of ACS do we manage AS 
numbers?".  I see two options:
1) Private AS number on a per-zone basis
2) Root Admin assigned AS number on a domain/account basis
3) End-user driven AS number on a per network basis (for bring your own AS and 
IP scenario)

Thoughts?

Cheers
Alex

 


-Original Message-
From: Wido den Hollander  
Sent: 13 July 2021 15:08
To: dev@cloudstack.apache.org; Alex Mattioli 
Cc: Wei Zhou ; Rohit Yadav ; 
Gabriel Beims Bräscher 
Subject: Re: IPV6 in Isolated/VPC networks



On 7/7/21 1:16 PM, Alex Mattioli wrote:
> Hi all,
> @Wei Zhou<mailto:wei.z...@shapeblue.com> @Rohit 
> Yadav<mailto:rohit.ya...@shapeblue.com> and myself are investigating how to 
> enable IPV6 support on Isolated and VPC networks and would like your input on 
> it.
> At the moment we are looking at implementing FRR with BGP (and possibly OSPF) 
> on the ACS VR.
> 
> We are looking for requirements, recommendations, ideas, rants, etc...etc...
> 

Ok! Here we go.

I think that you mean that the VR will actually route the IPv6 traffic and for 
that you need to have a way of getting a subnet routed to the VR.

BGP is probably your best bet here. Although OSPFv3 technically supports this, 
it is very badly implemented in FRR, for example.

Now FRR is a very good router and one of the fancy features it supports is BGP 
Unnumbered. This allows for auto-configuration of BGP over an L2 network when 
both sides are sending Router Advertisements. This is very easy for flexible 
BGP configurations where both sides have dynamic IPs.
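
For reference, a minimal FRR sketch of such a BGP Unnumbered session on the VR 
could look like the following; the AS number, interface name and prefix are 
placeholders, not values agreed in this thread:

   router bgp 65010
    ! BGP unnumbered session over the uplink interface
    neighbor eth0 interface remote-as external
    address-family ipv6 unicast
     ! announce the block that is routed down to this VR
     network 2001:db8:500::/48
     neighbor eth0 activate
    exit-address-family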

What you want to do is get a /56, /48 or something larger than a /64 routed to 
the VR.

Now you can sub-segment this into separate /64 subnets. You don't want to go 
smaller than a /64, as that prevents you from using SLAAC for IPv6 address 
configuration. This is how it works for Shared Networks now in Basic and 
Advanced Zones.
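
To put rough numbers on that sub-segmenting (documentation prefix used as a 
placeholder): a /48 such as 2001:db8:ab00::/48 contains 2^16 = 65,536 possible 
/64 subnets (2001:db8:ab00:0::/64 through 2001:db8:ab00:ffff::/64), and a /56 
contains 2^8 = 256, so one routed block per VR leaves plenty of /64s to hand out 
to isolated networks or VPC tiers.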

FRR can now also send out the Router Advertisements on the downlinks sending 
out:

- DNS servers
- DNS domain
- Prefix (/64) to be used

There is no need for DHCPv6. You can calculate the IPv6 address the VM will 
obtain by using the MAC and the prefix.
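
As a worked (hypothetical) example of that calculation with EUI-64: take MAC 
52:54:00:12:34:56 and prefix 2001:db8:500:1::/64. Split the MAC in half, insert 
ff:fe in the middle, and flip the universal/local bit of the first octet 
(0x52 -> 0x50), giving the interface ID 5054:00ff:fe12:3456; the resulting SLAAC 
address is 2001:db8:500:1:5054:ff:fe12:3456.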

So in short:

- Using BGP you routed a /48 to the VR
- Now you split this into /64 subnets towards the isolated networks

Wido

> Alex Mattioli
> 
>  
> 
> 



IPV6 in Isolated/VPC networks

2021-07-07 Thread Alex Mattioli
Hi all,
@Wei Zhou<mailto:wei.z...@shapeblue.com> @Rohit 
Yadav<mailto:rohit.ya...@shapeblue.com> and myself are investigating how to 
enable IPV6 support on Isolated and VPC networks and would like your input on 
it.
At the moment we are looking at implementing FRR with BGP (and possibly OSPF) 
on the ACS VR.

We are looking for requirements, recommendations, ideas, rants, etc...etc...

Alex Mattioli

 



RE: [DISCUSS] Moving to OpenVPN as the remote access VPN provider

2021-06-10 Thread Alex Mattioli
+1 on OpenVPN, and then a framework later on.

 


-Original Message-
From: Rohit Yadav  
Sent: 10 June 2021 10:25
To: dev@cloudstack.apache.org; us...@cloudstack.apache.org
Subject: [DISCUSS] Moving to OpenVPN as the remote access VPN provider

All,

We've historically supported openswan, and nowadays strongswan, as the VPN 
provider in the VR for both site-to-site and remote-access modes. After 
discussing the situation with a few users and colleagues, I learnt that OpenVPN 
is generally far easier to use, has clients for most OSes and platforms (desktop, 
laptop, tablet, phones...) and allows multiple clients behind the same public IP 
(for example, multiple people in an office sharing a client-side public IP/NAT 
while trying to connect to a VPC or an isolated network). For these reasons many 
users actually deploy pfSense or set up an OpenVPN server in their isolated 
network or VPC and use that instead.

Therefore for the point-to-point VPN use-case of remote access [1] does it make 
sense to switch to OpenVPN? Or, are there users using strongswan/ipsec/l2tpd 
for remote access VPN?

A general-purpose VPN framework/provider could also be considered, where an 
account or admin (via an offering) can specify which VPN provider they want in 
the network (strongswan/IPsec, OpenVPN, WireGuard...). However, it may be more 
complex to implement and maintain. Any other thoughts in general about VPN 
implementation and support in CloudStack? Thanks.

[1] 
http://docs.cloudstack.apache.org/en/latest/adminguide/networking_and_traffic.html#remote-access-vpn



Regards.

 




RE: RE: RE: Virutal Router MTU

2021-03-25 Thread Alex Mattioli
Hi Rafael,

I've had very similar issues in the past, with SSL and TLS not playing well with 
fragmentation.
It is the same use case indeed; in that case I needed jumbo frames for a certain 
network.

I believe this should be implemented per-network, as a setting applied when the 
network is created (but editable and applied when the network is restarted with 
clean-up).

I'll consult with my colleagues what's the best way forward and get back to you.

Cheers,
Alex

From: Rafael del Valle 
Sent: 25 March 2021 09:06
To: Alex Mattioli 
Cc: dev@cloudstack.apache.org
Subject: Re: RE: RE: Virutal Router MTU

Hi Alex,

I have now found all the details of the past 1400-MTU incident that led us to 
patch OpenNebula VRs.

The problem was detected because startTLS sessions failed in our email, 
persistently and to peers such as hotmail:


2019-01-26 14:58:06 + 02 9a1d30b6d6d1 SMTP-OUT:0001: SSL error remote 
104.47.13.33:25, SSL_connect:failed in SSLv2/v3 read server hello A


We investigated the issue together with the email platform vendor, and the 
problem persisted until we patched the MTU1400 issue.

So this is a must-implement for us. A workaround exists: patch the VRs and use 
cloud-init to customize the NICs in the VMs.

I am very happy to accept your collaboration offer :)

Where should this patch be implemented?

It is actually a requirement of this VLAN (vlanIpRange) and propagates to 
Virtual Routers and NICs of the involved VMs.

Is it the same in your use-case of Jumbo frames for storage oriented networks?

Perhaps we should treat this setting just like a netmask or gateway setting.

Shall we open an issue?

Rafael





On Wed, 2021-03-24 11:08 AM, Alex Mattioli <alex.matti...@shapeblue.com> wrote:
Hi Raf,

Can you share with us which SDWAN vendor it is? I've tried 4 different ones 
with ACS and they all worked fine, in all cases what I did was to set the MTU 
in the SDWAN appliance to be a bit lower than 1500 (in between 1422 and 1460, 
depending on SDWAN solution). In most networks you'll end up with most of your 
traffic with an MTU of around 500-600 anyway, so larger MTU doesn't help that 
much, I'd highly recommend you run some traffic analysis to try to figure out 
what's the MTU distribution for your network traffic.

With that said, I also had to change the MTU in VRs for a proof of concept on 
iSCSI between datacenters, in that situation I just wrote a script that would 
login to each VR and change the MTU of the public and private interfaces, it 
worked OK. I would strongly advise you not to change the MTU of the management 
interface, when I did (by mistake) the VRs lost communication with the 
management server.

If you want to contribute by expanding cloudstack code to add a setting for VR 
MTU I'd be more than happy to collaborate with you on that.

Hope this helps.

Cheers,
Alex






-Original Message-
From: Rafael del Valle <rva...@privaz.io.INVALID>
Sent: 24 March 2021 10:33
To: us...@cloudstack.apache.org<mailto:us...@cloudstack.apache.org>
Cc: us...@cloudstack.apache.org<mailto:us...@cloudstack.apache.org>; 
dev@cloudstack.apache.org<mailto:dev@cloudstack.apache.org>
Subject: Re: RE: Virutal Router MTU

Hi Alex,

In our particular use case the Public Network is an SD WAN and we have a 
requirement of slightly smaller MTU than the standard 1500.

I have assumed that our traffic will be encapsulated into something else before 
delivery, I guess that is the reason for the requirement.

What would be the easiest way to add support for MTU tuning on VRs?

I would be happy to contribute and implement it.

Regards,





On Wed, 2021-03-24 09:39 AM, Alex Mattioli <alex.matti...@shapeblue.com> wrote:
>
Hi R,
>
> There's no ACS setting for the VR's MTU size.
> Unless you are running storage traffic in that network, jumbo frames 
> aren't of much use. I've run some tests at the request of some customers in 
> my previous job, and with some very busy VRs the performance gains for an 
> MTU of 9000 were statistically insignificant.
> If your VRs are saturated your best option is to increase the
> resources for its offering (if you need guidance with that, am happy
> to provide it)
>
> Anyway, what's your use case for jumbo frames?
>
> Regards,
> Alex
>
>
>
>
>
> -Original Message-
>

RE: RE: Virutal Router MTU

2021-03-24 Thread Alex Mattioli
Hi Raf,

Can you share with us which SDWAN vendor it is?  I've tried 4 different ones 
with ACS and they all worked fine, in all cases what I did was to set the MTU 
in the SDWAN appliance to be a bit lower than 1500 (in between 1422 and 1460, 
depending on SDWAN solution).  In most network you'll end up with most of your 
traffic with an MTU of around 500-600 anyway, so larger MTU doesn't help that 
much, I'd highly recommend you run some traffic analysis to try to figure out 
what's the MTU distribution for your network traffic.

With that said, I also had to change the MTU in VRs for a proof of concept on 
iSCSI between datacenters, in that situation I just wrote a script that would 
login to each VR and change the MTU of the public and private interfaces, it 
worked OK.  I would strongly advise you not to change the MTU of the management 
interface, when I did (by mistake) the VRs lost communication with the 
management server.
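
For anyone needing the same workaround, a rough sketch of such a script is below, 
assuming KVM, where the host can reach the VR over its link-local control address 
on TCP port 3922 using the system SSH key; the address, NIC names and MTU value 
are placeholders and should be adapted:

   #!/bin/sh
   # push a smaller MTU to a VR's guest and public interfaces
   VR=169.254.3.15
   for nic in eth0 eth2; do
       ssh -i /root/.ssh/id_rsa.cloud -p 3922 root@$VR "ip link set dev $nic mtu 1400"
   done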

If you want to contribute by expanding cloudstack code to add a setting for VR 
MTU I'd be more than happy to collaborate with you on that. 

Hope this helps.

Cheers,
Alex




-Original Message-
From: Rafael del Valle  
Sent: 24 March 2021 10:33
To: us...@cloudstack.apache.org
Cc: us...@cloudstack.apache.org; dev@cloudstack.apache.org
Subject: Re: RE: Virutal Router MTU

Hi Alex, 

In our particular use case the Public Network is an SD WAN and we have a 
requirement of slightly smaller MTU than the standard 1500.

I have assumed that our traffic will be encapsulated into something else before 
delivery, I guess that is the reason for the requirement.

What would be the easiest way to add support for MTU tuning on VRs?

I would be happy to contribute and implement it.

Regards,





On Wed, 2021-03-24 09:39 AM, Alex Mattioli  wrote:
> 
Hi R,
> 
> There's no ACS setting for the VR's MTU size. 
> Unless you are running storage traffic in that network, jumbo frames 
> aren't of much use. I've run some tests at the request of some customers in 
> my previous job, and with some very busy VRs the performance gains for an 
> MTU of 9000 were statistically insignificant. 
> If your VRs are saturated your best option is to increase the 
> resources for its offering (if you need guidance with that, am happy 
> to provide it)
> 
> Anyway, what's your use case for jumbo frames?
> 
> Regards,
> Alex
> 
>   
>  
> 
> 
> -Original Message-
> From: rva...@privaz.io.INVALID
> Sent: 24 March 2021 09:23
> To: us...@cloudstack.apache.org
> Subject: Virutal Router MTU
> 
> Hi!
> 
> I can see in the Global Parameters that it is possible to specify the MTU for 
> secondary storage VM.
> 
> Is it possible to configure the MTU for a virtual router? how?
> 
> Regards,
> R.
> 


RE: RSTP

2021-03-09 Thread Alex Mattioli
Hi Evgeniy,

Do you mean RSTP as in Rapid Spanning Tree Protocol? Or something else?

Cheers,
Alex



-Original Message-
From: Evgeniy Tochilin  
Sent: 09 March 2021 12:53
To: Rohit Yadav ; dev@cloudstack.apache.org
Subject: RSTP

 Hi,

Please confirm: does CloudStack support RSTP or not?

Thanks.

-- 

Best Regards,
Evgeniy Tochilin