Re: GlusterFS consideration KVM.

2021-09-03 Thread Ivan Kudryavtsev
Glusterfs works fine as a shared mountpoint. No NFS or other stuff is
required: just mount it everywhere and you are good to go. Performance is
acceptable (at least on a bunch of SSDs), but of course not comparable with
local storage. It is not recommended for IO-intensive VMs; we recommend it
for HA VMs, like routers, and for RAM/CPU-intensive workloads.
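
For anyone new to this setup, a minimal sketch of the idea (hedged: the volume
and host names are illustrative, and the replicated volume must already exist):

# Run on every KVM host so they all share the same mountpoint.
mkdir -p /mnt/cs-primary
mount -t glusterfs gluster1:/cs-primary /mnt/cs-primary
# Persist the mount across reboots.
echo 'gluster1:/cs-primary /mnt/cs-primary glusterfs defaults,_netdev 0 0' >> /etc/fstab

The mountpoint is then registered in CloudStack as primary storage with the
SharedMountPoint protocol and path /mnt/cs-primary, identical on every host.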

Cheers

Sat, 4 Sep 2021, 05:01 Mauro Ferraro - G2K Hosting <
mferr...@g2khosting.com>:

> Hi Abishek,
>
> We are testing CS 4.15.1 with Gluster at this moment, with a
> distributed-replicated configuration, NFS-Ganesha as the storage service
> protocol, and ZFS 2.1 (raidz1). We are trying different
> configurations and cannot get really good performance.
>
> If somebody in this group can contribute information, we'd
> appreciate your help too.
>
> On 2/9/2021 at 01:03, Abishek wrote:
> > Hello All,
> >
> > I have been testing CloudStack 4.15.1 for the past few weeks and it's going
> well. For further testing in our environment I am planning to try out
> glusterfs with KVM hosts (the servers have only local storage). Will
> glusterfs have any performance downside? Did anyone previously have a
> setup with gluster (replicated)? Anything to consider while deploying?
> >
> > I will be very grateful for any kind of recommendation.
> > Thank you.
> >
>


Re: GlusterFS consideration KVM.

2021-09-03 Thread Mauro Ferraro - G2K Hosting

Hi Abishek,

We are testing CS 4.15.1 with Gluster at this moment, with a 
distributed-replicated configuration, NFS-Ganesha as the storage service 
protocol, and ZFS 2.1 (raidz1). We are trying different configurations and 
cannot get really good performance.
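
One hedged suggestion, not something already tried in this thread: Gluster 
ships a stock "virt" option group tuned for VM image workloads, which can be 
applied in one command (replace the volume name with your own):

gluster volume set <volume-name> group virt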


If somebody in this group can contribute information, we'd 
appreciate your help too.


On 2/9/2021 at 01:03, Abishek wrote:

Hello All,

I have been testing CloudStack 4.15.1 for the past few weeks and it's going well. 
For further testing in our environment I am planning to try out glusterfs 
with KVM hosts (the servers have only local storage). Will glusterfs have any 
performance downside? Did anyone previously have a setup with 
gluster (replicated)? Anything to consider while deploying?

I will be very grateful for any kind of recommendation.
Thank you.
  


Re: Cannot add Ceph RBD storage as Primary Storage for Cloudstack 4.15.1

2021-09-03 Thread Mevludin Blazevic
Hi Wido,

thank you for the quick response! The output of $ ceph df on the admin node is:

[root@cephnode1 ~]# ceph df
--- RAW STORAGE ---
CLASS  SIZE    AVAIL   USED     RAW USED  %RAW USED
hdd    72 GiB  72 GiB  118 MiB   118 MiB       0.16
TOTAL  72 GiB  72 GiB  118 MiB   118 MiB       0.16

--- POOLS ---
POOL                   ID  PGS  STORED   OBJECTS  USED    %USED  MAX AVAIL
device_health_metrics   1    1  31 KiB        12  94 KiB      0     23 GiB
cloudstack              2   32  2.9 KiB        5  43 KiB      0     23 GiB
MeinPool                3   32  0 B            0  0 B         0     23 GiB

Before I set up round-robin DNS, I tried using the IPs of the other two 
Ceph monitors (192.168.1.5 and 192.168.1.6); still the same error.
Furthermore, I changed the log level on my KVM node to debug. Output:

2021-09-01 10:35:54,698 DEBUG [cloud.agent.Agent] (agentRequest-Handler-2:null) 
(logid:3c2b5d3a) Processing command: 
com.cloud.agent.api.ModifyStoragePoolCommand
2021-09-01 10:35:54,698 INFO  [kvm.storage.LibvirtStorageAdaptor] 
(agentRequest-Handler-2:null) (logid:3c2b5d3a) Attempting to create storage 
pool fc6d0942-21ac-3cd1-b9f3-9e158cf4d75d (RBD) in libvirt
2021-09-01 10:35:54,698 DEBUG [kvm.resource.LibvirtConnection] 
(agentRequest-Handler-2:null) (logid:3c2b5d3a) Looking for libvirtd connection 
at: qemu:///system
2021-09-01 10:35:54,699 WARN  [kvm.storage.LibvirtStorageAdaptor] 
(agentRequest-Handler-2:null) (logid:3c2b5d3a) Storage pool 
fc6d0942-21ac-3cd1-b9f3-9e158cf4d75d was not found running in libvirt. Need to 
create it.
2021-09-01 10:35:54,699 INFO  [kvm.storage.LibvirtStorageAdaptor] 
(agentRequest-Handler-2:null) (logid:3c2b5d3a) Didn't find an existing storage 
pool fc6d0942-21ac-3cd1-b9f3-9e158cf4d75d by UUID, checking for pools with 
duplicate paths
2021-09-01 10:35:54,699 DEBUG [kvm.storage.LibvirtStorageAdaptor] 
(agentRequest-Handler-2:null) (logid:3c2b5d3a) Checking path of existing pool 
84aa6a27-0413-39ad-87ca-5e08078b9b84 against pool we want to create
2021-09-01 10:35:54,701 DEBUG [kvm.storage.LibvirtStorageAdaptor] 
(agentRequest-Handler-2:null) (logid:3c2b5d3a) Checking path of existing pool 
3f5b0819-232c-45cf-b533-4780f4e0f540 against pool we want to create
2021-09-01 10:35:54,705 DEBUG [kvm.storage.LibvirtStorageAdaptor] 
(agentRequest-Handler-2:null) (logid:3c2b5d3a) Attempting to create storage 
pool fc6d0942-21ac-3cd1-b9f3-9e158cf4d75d
2021-09-01 10:35:54,705 DEBUG [kvm.storage.LibvirtStorageAdaptor] 
(agentRequest-Handler-2:null) (logid:3c2b5d3a) [pool definition XML stripped by 
the mail archive; it named pool fc6d0942-21ac-3cd1-b9f3-9e158cf4d75d with RBD 
source cloudstack@192.168.1.4:6789/cloudstack]
2021-09-01 10:35:54,709 DEBUG [kvm.storage.LibvirtStorageAdaptor] 
(agentRequest-Handler-2:null) (logid:3c2b5d3a) [pool state XML stripped by the 
mail archive; it showed pool name/UUID fc6d0942-21ac-3cd1-b9f3-9e158cf4d75d and 
source pool cloudstack]
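For readers of the archive: the two debug entries above originally contained 
libvirt pool XML that the archive stripped. Reconstructed from the values still 
visible, the definition would look roughly like this (the auth element is an 
assumption based on libvirt's usual RBD pool format; the secret UUID is not 
visible in the log):

<pool type='rbd'>
  <name>fc6d0942-21ac-3cd1-b9f3-9e158cf4d75d</name>
  <source>
    <name>cloudstack</name>
    <host name='192.168.1.4' port='6789'/>
    <auth username='cloudstack' type='ceph'>
      <secret uuid='...'/>
    </auth>
  </source>
</pool>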

2021-09-01 10:36:05,821 DEBUG [kvm.resource.LibvirtConnection] (Thread-58:null) 
(logid:) Looking for libvirtd connection at: qemu:///system
2021-09-01 10:36:05,824 DEBUG [kvm.resource.KVMHAMonitor] (Thread-58:null) 
(logid:) Found NFS storage pool 84aa6a27-0413-39ad-87ca-5e08078b9b84 in 
libvirt, continuing
2021-09-01 10:36:05,824 DEBUG [kvm.resource.KVMHAMonitor] (Thread-58:null) 
(logid:) Executing: 
/usr/share/cloudstack-common/scripts/vm/hypervisor/kvm/kvmheartbeat.sh -i 
192.168.1.149 -p /export/primary -m /mnt/84aa6a27-0413-39ad-87ca-5e08078b9b84 
-h 192.168.1.106
2021-09-01 10:36:05,825 DEBUG [kvm.resource.KVMHAMonitor] (Thread-58:null) 
(logid:) Executing while with timeout : 6
2021-09-01 10:36:05,837 DEBUG [kvm.resource.KVMHAMonitor] (Thread-58:null) 
(logid:) Execution is successful.
2021-09-01 10:36:06,115 DEBUG [kvm.resource.LibvirtComputingResource] 
(UgentTask-5:null) (logid:) Executing: 
/usr/share/cloudstack-common/scripts/vm/network/security_group.py 
get_rule_logs_for_vms
2021-09-01 10:36:06,116 DEBUG [kvm.resource.LibvirtComputingResource] 
(UgentTask-5:null) (logid:) Executing while with timeout : 180
2021-09-01 10:36:06,277 DEBUG [kvm.resource.LibvirtComputingResource] 
(UgentTask-5:null) (logid:) Execution is successful.
2021-09-01 10:36:06,278 DEBUG [kvm.resource.LibvirtConnection] 
(UgentTask-5:null) (logid:) Looking for libvirtd connection at: qemu:///system
2021-09-01 10:36:06,300 DEBUG [cloud.agent.Agent] (UgentTask-5:null) (logid:) 
Sending ping: Seq 1-57:  { Cmd , MgmtId: -1, via: 1, Ver: v1, Flags: 11, 
[{"com.cloud.agent.api.PingRoutingWithNwGroupsCommand":{"newGroupStates":{},"_hostVmStateReport":{"s-54-VM":{"state":"PowerOn","host":"virthost2"},"v-1-VM":{"state":"PowerOn","host":"virthost2"}},"_gatewayAccessible":"true","_vnetAccessible":"true","hostType":"Routing","hostId":"1","wait":"0","bypassHostMaintenance":"false"}}]
 }
2021-09-01 10:36:06,321 DEBUG [cloud.agent.Agent] (Agent-Handler-5:null) 
(logid:6b2e7694) Received response: Seq 1-57:  { Ans: , MgmtId: 8796751976908, 
via: 1, Ver: v1, Flags: 100010, 
[{"com.cloud.agent.api.PingAnswer":{"_command":{"hostType":"Routing","hostId"


GlusterFS consideration KVM.

2021-09-03 Thread Abishek
Hello All,

I have been testing CloudStack 4.15.1 for the past few weeks and it's going well. 
For further testing in our environment I am planning to try out glusterfs 
with KVM hosts (the servers have only local storage). Will glusterfs have any 
performance downside? Did anyone previously have a setup with 
gluster (replicated)? Anything to consider while deploying?

I will be very grateful for any kind of recommendation.
Thank you.
 


Remove host from MySQL?

2021-09-03 Thread James Steele
Hi all,

We have a host that was removed from the webUI, but it somehow still exists in 
the cloudstack MySQL database.

I want to remove the host, reinstall the OS and then re-add it to CS, 
keeping the same name & IP.

What is the MySQL command to remove the existing host entry? Would it be 
something like:

use cloud;
select * from host;
update host set removed=now() where id=12;
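
For anyone searching the archives later, a hedged sketch of that soft-delete 
approach with a verification step first (this assumes the stock cloud.host 
schema; take a database backup before running anything):

use cloud;
-- Confirm the stale row is really the one to retire.
select id, name, status, removed from host where id = 12;
-- CloudStack soft-deletes hosts by timestamping `removed`; renaming the row
-- keeps it distinguishable once the host is re-added with the same name.
update host
   set removed = now(),
       name = concat(name, '-removed')
 where id = 12 and removed is null;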

FYI: this is the same Host 12 mentioned here: 
https://github.com/apache/cloudstack/issues/5300

Thanks, Jim


Re: Console proxy creation failure

2021-09-03 Thread technologyrss.mail

Thank you so much! It helped me.


---
Alamin


On 9/3/2021 11:06 AM, David Jumani wrote:
If that's the case, you can remove the host, reset the iptables rules, 
reinstall the cloudstack agent, and add the required iptables rules as 
described here (some iptables rules are added to allow the console 
proxy to communicate with the VNC port on the host):
https://docs.cloudstack.apache.org/en/latest/installguide/hypervisor/kvm.html 

https://docs.cloudstack.apache.org/en/latest/installguide/hypervisor/kvm.html#open-ports-in-rhel-centos 
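
For reference, a hedged sketch of the console-related rules from the linked 
guide (the port numbers follow the documented KVM host requirements; adjust 
them to your environment before applying):

# VNC ports that qemu binds for guest consoles, used by the console proxy.
iptables -I INPUT -p tcp -m tcp --dport 5900:6100 -j ACCEPT
# Libvirt and CloudStack agent ports from the same guide.
iptables -I INPUT -p tcp -m tcp --dport 16514 -j ACCEPT
iptables -I INPUT -p tcp -m tcp --dport 1798 -j ACCEPT
service iptables save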





From: technologyrss.mail
Sent: Thursday, September 2, 2021 11:13 AM
To: David Jumani; users@cloudstack.apache.org; d...@cloudstack.apache.org
Subject: Re: Console proxy creation failure

I see an iptables issue on my KVM host: after some time the iptables 
service stops, and then I can't access any VM.



---
Alamin



On 9/2/2021 10:12 AM, David Jumani wrote:
Could you send the /var/log/cloud.log from the console proxy VM? Also 
try destroying and recreating the proxy VM.




From: technologyrss.mail
Sent: Wednesday, September 1, 2021 6:58 AM
To: users@cloudstack.apache.org; David Jumani; d...@cloudstack.apache.org
Subject: Fwd: Console proxy creation failure

Thank you so much! Yes, it was a RAM issue. I increased the RAM and that 
fixed it, but now I see a different error: I can't access the VM console 
from the browser. Please see the image below.



The ACS log file is at the link below.

https://drive.google.com/file/d/15C20k1wYlDFNReyY1iYoyfRdJodwdxiU/view?usp=sharing 





---
Alamin.


On 8/31/2021 2:38 PM, David Jumani wrote:

Hi

At just a glance, it looks like there isn't sufficient memory for 
the console proxy to come up (by default it needs 1024 MB).
Try adding more memory or increasing the memory overprovisioning 
factor in the configuration / global settings tab.
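
As a hedged illustration of the second option: the global setting is named 
mem.overprovisioning.factor, and it can be changed via CloudMonkey as well as 
the UI (the value below is just an example; some settings only take effect 
after a management-server restart):

cmk update configuration name=mem.overprovisioning.factor value=2.0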




From: technologyrss.mail
Sent: Tuesday, August 31, 2021 12:31 PM
To: users@cloudstack.apache.org; d...@cloudstack.apache.org
Subject: Console proxy creation failure

Hi,

I am able to set up ACS using CentOS 7.9. All services are working 
properly, but when I create a basic zone I see an error like the one below.

ACS server: CentOS 7.9
NFS server: CentOS 7.8
KVM server: CentOS 7.8

The Secondary Storage VM is working fine, but the console proxy VM won't 
start. What is the issue?


The system capacity is as shown below.

Please give me any ideas.

Thanks, Alamin



ApacheCon is just 3 weeks away!

2021-09-03 Thread Rich Bowen
[You are receiving this email because you are subscribed to the user 
list of one or more Apache projects.]


Dear Apache enthusiast,

ApacheCon is our annual convention, featuring content related to our 
many software projects. This year, it will be held on September 21-23.


Registration is free this year, and since it’s online, you can attend 
from the comfort of your home or office.


Details about the event, including the schedule and registration, are 
available on the event website at https://apachecon.com/acah2021/


We hope you’ll consider attending this year, where you’ll see content in 
14 tracks, including: API & Microservice; Big Data: Ozone; Big Data: 
SQL/NoSQL; Big Data: Streaming; Cassandra; Community; Content Delivery; 
Content Management; Federated Data; Fineract & Fintech; Geospatial; 
Groovy; Highlights; Incubator; Integration; Internet of Things; 
Observability; Search; Tomcat.


We will also feature keynotes from Ashley Wolf, Mark Cox, Alison Parker 
and Michael Weinberg.


Details on the schedule, and these keynotes, can be found at 
https://www.apachecon.com/acah2021/tracks/


We look forward to seeing you at this year’s ApacheCon!

– Rich Bowen, for the Apachecon Planners


Feature Cloudstack 4.15

2021-09-03 Thread benoit lair
Hi,

I am trying to use the Backup and Recovery Framework with ACS 4.15.1.

I would like to implement it with XCP-ng servers.
What I see is that only Veeam with VMware is ready.

Would it be possible to have an interface in order to define a custom
external provider (3rd-party backup solutions like Bacula, Amanda or
BackupPC), as described here:

https://cwiki.apache.org/confluence/display/CLOUDSTACK/Backup+and+Recovery+Framework

I was thinking about a form giving the commands to execute for each type of
Backup API call of the framework.


Thanks for your help and ideas

Regards, Benoit


Re: Low number of Talks for CloudStack Collaboration Conference

2021-09-03 Thread Rohit Yadav
Adding dev@ and pmc@: d...@cloudstack.apache.org, priv...@cloudstack.apache.org

Thanks Ivet for your hard work; some of us in the community have a tendency to 
put proposals in at the last minute (guilty as charged 🙂).

I'll submit my proposal/idea today and request everyone on the users, dev, and 
private lists to do the same. I hope we'll get a fair number of submissions 
before the CFP deadline of September 20th, 2021.


Regards.


From: Ivet Petrova 
Sent: Friday, September 3, 2021 16:20
To: users@cloudstack.apache.org 
Cc: Apache CloudStack Marketing 
Subject: Low number of Talks for CloudStack Collaboration Conference

Hi All,

I am writing this email to share that we are lacking enough talk submissions 
for the CloudStack Collaboration Conference in November.

With some disappointment, I need to share that we have just a few submissions. 
This makes me very sad, as many people in the community say that we need 
better visibility and to grow the community. And this will not happen if we do 
not all put in the effort.

In order to organise the conference, we really need more help from the 
community and more talks to be submitted. Otherwise, it will just not make 
sense.
Can I ask all of you to submit talk ideas here: 
https://forms.gle/sSdbyvLWAndjMFaN9

The talks can be about your use-case, something cool you've done with 
CloudStack, problems you have had, integrations done - basically anything that 
the community will find interesting.

Kind regards,






 



Re: [DISCUSS] 4.15.2.0

2021-09-03 Thread Rohit Yadav
Hi All,

Update - there are no outstanding issues or PRs for 4.15.2.0 at the time of 
writing this email:
https://github.com/apache/cloudstack/milestones/4.15.2.0

I'll run smoketests over the weekend and if there are no outstanding issues/PRs 
or objections I'll cut 4.15.2.0 RC1 early next week. Assuming 4.15.2.0 is going 
to be a relatively quick and stable minor release, I hope it won't interfere 
with the ongoing major 4.16 release effort.


Regards.


From: Rohit Yadav 
Sent: Tuesday, August 31, 2021 19:29
To: users@cloudstack.apache.org ; 
d...@cloudstack.apache.org 
Subject: Re: [DISCUSS] 4.15.2.0

All,

Since nobody has objected or volunteered, I will continue as RM for the 4.15.2.0 
release. The total number of outstanding issues and PRs is around 20, with about 
80 issues/PRs already merged/closed towards the milestone. The most recent test 
matrix smoketests on the 4.15 health PR have also passed.

To avoid a conflicting timeline with 4.16, I propose to cut RC1 by the end of 
this week or next week, as soon as any outstanding blocker/critical/major 
bug-fix PRs are reviewed/tested/merged. I request comments from the community 
on this, and invite anybody to report any blocker/critical issues/bugs found 
in 4.15.1.0.

Thanks and regards.

From: Rohit Yadav 
Sent: Saturday, August 21, 2021 4:36:38 PM
To: d...@cloudstack.apache.org ; 
users@cloudstack.apache.org 
Subject: Re: [DISCUSS] 4.15.2.0

Great, thanks all - looks like we all think doing a quick and stable 4.15.2.0 
would be a good idea. Any volunteers for RM? I'm happy to do it or to help the 
volunteer as well.

Regards.

From: Nicolas Vazquez 
Sent: Friday, August 20, 2021 6:15:47 PM
To: d...@cloudstack.apache.org ; 
users@cloudstack.apache.org 
Subject: Re: [DISCUSS] 4.15.2.0

+1


Regards,

Nicolas Vazquez


From: Rohit Yadav 
Sent: Thursday, August 19, 2021 7:54 AM
To: d...@cloudstack.apache.org ; 
users@cloudstack.apache.org 
Subject: [DISCUSS] 4.15.2.0

All,

I want to kick a thread to discuss and gather interest from the community on 
doing a 4.15.2.0 release, before Nicolas (our RM) cuts 4.16.0.0 RC1 around the 
end of Sept '21.

We can keep the scope really tight to include only important fixes, I see about 
50 closed issues/PRs already on the 4.15.2.0 milestone and some 30 remaining:
https://github.com/apache/cloudstack/milestone/20

I'm hoping the 4.15.2.0 release can be a quick, stable minor release; I can do 
it, or any other volunteer RM may do it. Whoever does it, here's my proposal:

  *   Limit the scope to very specific, doable bug fixes up to the end of the 
month - that gives us roughly 2 weeks
  *   Cut RC1 in the first week of Sept
  *   Assuming RC1 will be stable enough, as the 4.15 branch is quite stable; 
worst case we have another RC2
  *   Conclude release work by mid or end of Sept, before the 4.16.0.0 RC1 is 
cut, so the 4.15.2.0 release work doesn't delay 4.16.0.0

Thoughts?


Thanks and regards.













 



Re: GSoC 2021 Completes

2021-09-03 Thread Abhishek Kumar
Congratulations to both students and mentors, and the community in general!
Some great work started with these projects. Looking forward to seeing them in 
action in upcoming releases.
Students - we hope the ACS community will continue to interest you, and we look 
forward to seeing you around.


Regards,
Abhishek

From: Rohit Yadav 
Sent: 03 September 2021 16:10
To: d...@cloudstack.apache.org ; 
users@cloudstack.apache.org ; Bikram Biswas 
; Sangwoo Bae ; Apurv 
Gupta ; atrocityth...@gmail.com 
Cc: Apache CloudStack Marketing 
Subject: GSoC 2021 Completes

All,

I'm happy to share with the community that our project participation at Google 
Summer of Code 2021 comes to an end with four successful projects [1][2] by our 
four students passing with flying colours. Results were available earlier this 
week on 31st Aug 2021 [3].

Let's use this opportunity to congratulate our students and mentors and ask 
them to share any feedback on the project - hope we'll participate next year 
too!

Let me start;

Congratulations to Apurv, Junxuan, Bikram, and Sangwoo for your hard work and 
successful projects! We look forward to seeing you around in the community!

Thank you mentors for your hard work and mentoring the students - Pearl, David, 
Suresh, Bobby, Hari, and Nicolas!

Two of our students have blogged about their experiences, you may read them 
here:
Apurv's blog: 
https://apurv-gupta.medium.com/google-summer-of-code-apachecloudstack-final-report-bae911b0bd44
Bikram's blog: 
https://medium.com/@bickrombishsass/gsoc-2021-experience-at-apache-cloudstack-8946fe31ff5b

[1] GSoC 2021 at Apache CloudStack Project: 
https://cwiki.apache.org/confluence/display/CLOUDSTACK/GSoC+2021
[2] Student PR submissions: 
https://github.com/apache/cloudstack/pulls?q=is%3Aopen+is%3Apr+label%3Agsoc2021
[3] https://summerofcode.withgoogle.com/how-it-works/#timeline

Regards.




 



Low number of Talks for CloudStack Collaboration Conference

2021-09-03 Thread Ivet Petrova
Hi All,

I am writing this email to share that we are lacking enough talk submissions 
for the CloudStack Collaboration Conference in November.

With some disappointment, I need to share that we have just a few submissions. 
This makes me very sad, as many people in the community say that we need 
better visibility and to grow the community. And this will not happen if we do 
not all put in the effort.

In order to organise the conference, we really need more help from the 
community and more talks to be submitted. Otherwise, it will just not make 
sense.
Can I ask all of you to submit talk ideas here: 
https://forms.gle/sSdbyvLWAndjMFaN9

The talks can be about your use-case, something cool you've done with 
CloudStack, problems you have had, integrations done - basically anything that 
the community will find interesting.

Kind regards,


 



GSoC 2021 Completes

2021-09-03 Thread Rohit Yadav
All,

I'm happy to share with the community that our project participation at Google 
Summer of Code 2021 comes to an end with four successful projects [1][2] by our 
four students passing with flying colours. Results were available earlier this 
week on 31st Aug 2021 [3].

Let's use this opportunity to congratulate our students and mentors and ask 
them to share any feedback on the project - hope we'll participate next year 
too!

Let me start;

Congratulations to Apurv, Junxuan, Bikram, and Sangwoo for your hard work and 
successful projects! We look forward to seeing you around in the community!

Thank you mentors for your hard work and mentoring the students - Pearl, David, 
Suresh, Bobby, Hari, and Nicolas!

Two of our students have blogged about their experiences, you may read them 
here:
Apurv's blog: 
https://apurv-gupta.medium.com/google-summer-of-code-apachecloudstack-final-report-bae911b0bd44
Bikram's blog: 
https://medium.com/@bickrombishsass/gsoc-2021-experience-at-apache-cloudstack-8946fe31ff5b

[1] GSoC 2021 at Apache CloudStack Project: 
https://cwiki.apache.org/confluence/display/CLOUDSTACK/GSoC+2021
[2] Student PR submissions: 
https://github.com/apache/cloudstack/pulls?q=is%3Aopen+is%3Apr+label%3Agsoc2021
[3] https://summerofcode.withgoogle.com/how-it-works/#timeline

Regards.

 



RE: XCP-ng Backup Cloudstack 4.15

2021-09-03 Thread Yordan Kostov
Hey Benoit,

I am also interested in such an integration with an external vendor for the 
XCP hypervisor.
Lately my attention has been on other matters, but I have begun thinking about 
an external solution for a cold backup. Here is a design concept:

Backup:
1. The ACS framework does occasional volume backups and keeps them 
on an NFS share (hot backup)
2. Veeam collects the backup folder structure and contents once 
every X and keeps it on tape (cold backup)

In this case the ACS framework is the backup solution for hot backups 
and Veeam is for cold.

Hot restore:
1. Happens through the ACS framework and is done by the end user.

Cold restore:
1. The admin dumps a cold backup to an NFS share.
2. In an automated manner, single or all volumes are imported 
into ACS and assigned to the proper account owners of the original VMs.
3. The ACS user restores VMs from the volumes as in a hot restore.

From everything above, what is not yet solved is how the restored folder 
structure and volumes can be automatically imported and assigned to the proper 
accounts.
The least-effort solution I came up with that is also resilient is a cron 
script on the backup NFS server that executes once an hour and scans the 
/backup partition (a rough sketch appears below).

The ACS NFS backup partition has the following structure:
//snapshots/account_id/VOLUME_ID/master_and_delta_files

In each VOLUME_ID directory a YAML file will be created listing:
- owner account id
- origin volume id
- origin volume name (taken from the ACS DB)
- list of master and delta files
(an illustrative example follows)
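
An illustrative example of such a metadata file (all field names and values 
are hypothetical; no such format exists yet):

# manifest.yaml, kept inside the VOLUME_ID directory
owner_account_id: 42
origin_volume_id: 613
origin_volume_name: ROOT-613    # taken from the ACS DB
files:
  - master.vhd
  - delta-0001.vhd
  - delta-0002.vhd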

Consecutive runs of the script will check whether the VOLUME_ID folder 
contents differ from the YAML and will update it; usually that will happen 
before the Veeam backup occurs.
Then the Veeam backup will run.
There is one caveat here: hot backups can occur at any time depending on 
user settings, while the cold backup happens once per time frame. There can be 
a case where the cold backup occurs while hot backup jobs are running. This 
should be avoided.
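
A rough bash sketch of that hourly scan (hypothetical throughout: the paths 
and manifest layout simply follow the concept above):

#!/bin/bash
# Hypothetical hourly cron job: refresh each volume's manifest so cold
# restores know which files belong to which account and volume.
BACKUP_ROOT=/backup/snapshots

for vol_dir in "$BACKUP_ROOT"/*/*/; do
    manifest="${vol_dir}manifest.yaml"
    current=$(ls -1 "$vol_dir" | grep -v '^manifest.yaml$' | sort)
    recorded=$(grep '^  - ' "$manifest" 2>/dev/null | sed 's/^  - //')
    # Rewrite the manifest only when the file list has changed.
    if [ "$current" != "$recorded" ]; then
        {
            echo "owner_account_id: $(basename "$(dirname "$vol_dir")")"
            echo "origin_volume_id: $(basename "$vol_dir")"
            echo "files:"
            echo "$current" | sed 's/^/  - /'
        } > "$manifest"
    fi
done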

When a restore is required, the specific volume will be restored from Veeam, 
and a restore shell script will take the volume's YAML config and:
- converge the deltas
- set the new volume name as RESTORED_
- import the volume into ACS under the origin account ID

Moreover, a secondary script could create VMs from the volumes for a mass 
cold restore, but that will come at a later stage.
Again, this is just a concept; if anyone has an idea on how to improve or 
simplify it, that would be great!

Best regards,
Jordan


 
-Original Message-
From: benoit lair  
Sent: 02 септември 2021 г. 18:36
To: users@cloudstack.apache.org
Subject: Re: XCP-ng Backup Cloudstack 4.15


Is there a way to implement a custom external provider ourselves in order to 
back up VMs?

On Thu, 2 Sep 2021 at 15:52, benoit lair wrote:

> Hello,
>
> I too am interested in backing up VMs on XCP-ng. Would you have a 
> solution for using Veeam, as Yordan intends?
>
> On Mon, 12 Jul 2021 at 13:30, Abishek Budhathoki 
> wrote:
>
>> Thank you for the response. Really appreciated.
>>
>> On 2021/07/12 09:41:07, Rohit Yadav  wrote:
>> > Hi Abishek,
>> >
>> > That's right, the current Backup & Recovery framework only supports
>> Veeam provider on VMware.
>> >
>> > For XenServer/XCP-ng, we don't have a plugin/provider; however, volume
>> snapshots can be used to back up volumes to secondary storage.
>> >
>> > Regards.
>> >
>> > Regards,
>> > Rohit Yadav
>> >
>> > 
>> > From: Abishek Budhathoki 
>> > Sent: Saturday, July 10, 2021 7:42:12 PM
>> > To: users@cloudstack.apache.org 
>> > Subject: XCP-ng Backup Cloudstack 4.15
>> >
>> > Hello EveryOne,
>> >
>> > I am trying CloudStack in a Xen environment. I was trying out the
>> backup feature of CloudStack and was not able to get it working. Does 
>> the backup work in a Xen environment, or does it strictly work with VMware 
>> only?
>> >
>> >
>> >
>> >
>> >
>> >
>>
>


Re: [DISCUSS] SystemVM template upgrade improvements

2021-09-03 Thread Pearl d'Silva
Hi Nathan,

You are right, the inclusion of templates as part of the cloudstack-management 
package was done bearing in mind that we may have users with restricted 
internet connections.
With respect to the second point you make, while creating separate packages for 
different hypervisor templates would solve the issue of package size, we may 
still end up facing the same problem of missing systemVM templates in scenarios 
where we start off with a zone with, say, KVM hosts and in due course decide to 
add XenServer/VMware hosts, and the admin doesn't install the 
hypervisor-specific packages. Keeping that in mind, we thought that the 
all-in-one approach covers more ground.

Thanks,
Pearl


From: Nathan McGarvey 
Sent: Friday, September 3, 2021 11:53 AM
To: users@cloudstack.apache.org 
Subject: Re: [DISCUSS] SystemVM template upgrade improvements

+1

This is also helpful for restricted-from-internet installations (e.g.
places with on-site installs and strong firewall/proxy/air-gap rules),
a feature that is increasingly hard to come by for cloud-based
underpinnings, but increasingly of interest for many organizations that
like to have control of where their data resides (banks, medical
institutions, governments, etc.).


Should it be in the same packaging, or a separate package entirely?
That way the current packaging could still remain as-is but have the
option of obtaining the separately packaged systemVMs in package-manager
format. If you really wanted to, you could even break out KVM vs Xen
vs VMware into separate packages to help reduce size and increase
modularity. You would still be hooking into the turnkey method, since it
lends itself to an apt-get upgrade or yum upgrade, can update
components individually, and can enforce that certain versions of the
SystemVMs require certain CloudStack versions and vice versa.



Thanks,
-Nathan McGarvey

On 9/2/21 9:29 AM, Rohit Yadav wrote:
> Hi Hean,
>
> Yes, I think the old approach of registering the systemvm template prior to 
> upgrade, as well as the option to switch between systemvm templates, continues 
> to be supported. What this feature primarily aims at is making CloudStack 
> turnkey operationally.
>
> May I ask if anyone has any objections to the increased package size? Due to 
> the trade-off of including the systemvm templates in the management package, 
> the size increased to about 1-1.5GB, which is the only thing I didn't like. 
> However, I think this can be optimised in future releases.
>
> Regards.
> 
> From: Hean Seng 
> Sent: Thursday, September 2, 2021 7:34:32 AM
> To: users@cloudstack.apache.org 
> Cc: d...@cloudstack.apache.org 
> Subject: Re: [DISCUSS] SystemVM template upgrade improvements
>
> This is a good idea. Or else, we could allow manual upload via the GUI and
> mark it as a system template.
>
> On Wed, Sep 1, 2021 at 9:08 PM Pearl d'Silva 
> wrote:
>
>> I probably missed adding the PR link to the feature -
>> https://github.com/apache/cloudstack/pull/4329. Please do provide your
>> inputs.
>>
>>
>> Thanks,
>> Pearl
>>
>> 
>> From: Pearl d'Silva 
>> Sent: Wednesday, September 1, 2021 5:49 PM
>> To: d...@cloudstack.apache.org 
>> Subject: [DISCUSS] SystemVM template upgrade improvements
>>
>> Hi All,
>>
>> We have been working on a feature that simplifies SystemVM template
>> install and upgrades for CloudStack. Historically we've required users to
>> seed the template on secondary storage during fresh installation and
>> register the template before an upgrade - this really does not make
>> CloudStack turnkey, as we end up maintaining and managing them as a
>> separate component - for example, users can't simply do an apt-get upgrade
>> or yum upgrade to upgrade CloudStack.
>>
>> The feature works by automatically initiating registration of the SystemVM
>> templates during upgrades or when the first secondary storage is added to a
>> zone where the SystemVM template hasn't been seeded. This feature addresses
>> several operational pain points for example, when the admin user forgets to
>> register the SystemVM template prior to an upgrade and faces the issue of
>> having to roll back the database midway during the upgrade process. With
>> this feature the upgrade process is seamless, such that the end users do
>> not need to worry about having to perform template registration, but rather
>> have the upgrade process take care of everything that is required.
>>
>> In order to facilitate this feature, the SystemVM templates have to be
>> bundled with the cloudstack-management rpm/deb package which causes the
>> total noredist cloudstack-management package size to increase to about
>> 1.6GB. We currently are packaging templates of only the three widely
>> supported hypervisors - KVM, XenServer/XCP-ng and VMWare.
>> (These templates are only packaged if the build is initiated with the
>> noredist flag.)
>>
>> We'd like to get your opinion on this idea.

Re: [DISCUSS] SystemVM template upgrade improvements

2021-09-03 Thread Pearl d'Silva
Hi Hean,

Thanks for your response. As mentioned, the usual upgrade procedure of admins 
registering the template prior to upgrade continues to be supported, and if we 
find the new systemvm template already registered, the new logic for setting 
up the templates doesn't kick in.

Thanks,
Pearl



From: Hean Seng 
Sent: Thursday, September 2, 2021 7:34 AM
To: users@cloudstack.apache.org 
Cc: d...@cloudstack.apache.org 
Subject: Re: [DISCUSS] SystemVM template upgrade improvements

This is a good idea. Or else, we could allow manual upload via the GUI and
mark it as a system template.

On Wed, Sep 1, 2021 at 9:08 PM Pearl d'Silva 
wrote:

> I probably missed adding the PR link to the feature -
> https://github.com/apache/cloudstack/pull/4329. Please do provide your
> inputs.
>
>
> Thanks,
> Pearl
>
> 
> From: Pearl d'Silva 
> Sent: Wednesday, September 1, 2021 5:49 PM
> To: d...@cloudstack.apache.org 
> Subject: [DISCUSS] SystemVM template upgrade improvements
>
> Hi All,
>
> We have been working on a feature that simplifies SystemVM template
> install and upgrades for CloudStack. Historically we've required users to
> seed the template on secondary storage during fresh installation and
> register the template before an upgrade - this really does not make
> CloudStack turnkey, as we end up maintaining and managing them as a
> separate component - for example, users can't simply do an apt-get upgrade
> or yum upgrade to upgrade CloudStack.
>
> The feature works by automatically initiating registration of the SystemVM
> templates during upgrades or when the first secondary storage is added to a
> zone where the SystemVM template hasn't been seeded. This feature addresses
> several operational pain points for example, when the admin user forgets to
> register the SystemVM template prior to an upgrade and faces the issue of
> having to roll back the database midway during the upgrade process. With
> this feature the upgrade process is seamless, such that the end users do
> not need to worry about having to perform template registration, but rather
> have the upgrade process take care of everything that is required.
>
> In order to facilitate this feature, the SystemVM templates have to be
> bundled with the cloudstack-management rpm/deb package which causes the
> total noredist cloudstack-management package size to increase to about
> 1.6GB. We currently are packaging templates of only the three widely
> supported hypervisors - KVM, XenServer/XCP-ng and VMWare.
> (These templates are only packaged if the build is initiated with the
> noredist flag.)
>
> We'd like to get your opinion on this idea.
>
> Thanks & Regards,
> Pearl Dsilva
>
>
>
>
>
>
>

--
Regards,
Hean Seng