Re: Experience with clustered/shared filesystems based on SAN storage on KVM?

2021-10-29 Thread Hean Seng
Hi

For Ceph, a complete or sudden power-down is not an expected event in a
properly run data center environment.

NFS is good. However, beyond its high-availability limitations, the
filesystem is managed at the storage end, which can drive very high CPU
usage on the storage server when the VMs' I/O demand is high. Performance
issues may follow, especially if you host database and email servers,
which write a lot of small files.

iSCSI and SAN are better for block-storage requirements. However,
CloudStack can only configure them as local storage, and a clustered
filesystem is a nightmare to maintain.




On Sat, Oct 30, 2021 at 3:35 AM Mauro Ferraro - G2K Hosting <
mferr...@g2khosting.com> wrote:

> Ignazio, many thanks for your feedback.
>
> In the past we tried Ceph and it worked great, until an electrical outage
> broke it, and we don't want to continue with this technology at least
> until it gets better or we can geo-replicate it to another site. Another
> thing: when something big happens, Ceph takes a long time to recover and
> repair, which leaves you offline until the process finishes, and you
> never know whether your data is safe until it does; effectively, it is not.
> For an 80 TB cluster with replica 3 it can take a week or more. This is
> not an option for us.
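Mauro's "a week or more" figure is easy to sanity-check with back-of-envelope arithmetic. The 100 MB/s sustained recovery rate below is an assumption; real rates depend on OSD count, network, and backfill throttling:

```shell
# Rough lower bound on Ceph recovery time: re-replicating 80 TB of data
# at an assumed 100 MB/s sustained cluster-wide recovery throughput.
data_bytes=$((80 * 1000000000000))
rate_bytes_per_s=$((100 * 1000000))
days=$(( data_bytes / rate_bytes_per_s / 86400 ))
echo "~${days} days"
```

With backfill throttled (the default), real-world recovery is often slower still, which is consistent with the week-plus experience described above.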
>
> Previously we used NFS as separate primary storages, and we are staying
> with NFS until we find a replacement. NFS is great too, because you can
> get a stable solution with KVM and QCOW2; if something happens, you have
> a good chance of starting everything again with low risk of degradation,
> and you can be running again within hours. The main problems are the
> performance bottleneck and VM high availability on the storage side.
>
> That is the main reason we want to test Linstor: it promises replication
> with DRBD, HA, and performance all in one. At this point we cannot finish
> the configuration on ACS 4.16 RC2, because there is no documentation and
> we are hitting a problem with Linstor, ZFS, and ACS that we have not been
> able to pin down.
>
> What solution would you recommend for an ACS cluster deploying
> approximately 1,000 VMs?
>
> Regards,
>
> Mauro
>
> On 29/10/2021 at 15:56, Ignazio Cassano wrote:
> > Hi Mauro, what would you like to store on the clustered file system?
> > If you want to use it for virtual machine disks, I think NFS is a good
> > solution.
> > A clustered file system could be useful if your virtualization nodes
> > have a lot of disks.
> > I usually prefer a NAS or a SAN.
> > If you have a SAN, you can use iSCSI with clustered logical volumes:
> > each logical volume can host a virtual machine volume, and clustered
> > LVM handles the locks.
> > Ignazio
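A minimal sketch of the iSCSI-plus-clustered-LVM approach Ignazio describes. The portal IP, target IQN, and device names are placeholders, and it assumes lvmlockd/sanlock (or clvmd on older distributions) is already running on all hosts:

```shell
# Log in to the SAN target from each host (portal/IQN are hypothetical)
iscsiadm -m discovery -t sendtargets -p 10.0.0.10
iscsiadm -m node -T iqn.2021-10.example:san1 -p 10.0.0.10 --login

# Create a shared VG so LV activation is coordinated cluster-wide,
# then one LV per virtual machine volume
vgcreate --shared vg_san /dev/sdb
lvcreate -n vm-0001 -L 50G vg_san
lvchange --activate sy vg_san/vm-0001   # "sy": shared activation via lvmlockd
```

The lock manager is what prevents two hosts from activating and writing the same LV at once, which is the failure mode a clustered filesystem would otherwise have to solve.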
> >
> >
> >
> > On Thu, 28 Oct 2021 at 14:02, Mauro Ferraro - G2K Hosting wrote:
> >
> > Hi,
> >
> > We are trying to set up a lab with ACS 4.16 and Linstor. As soon as we
> > finish the tests we can share the results. Has anyone already tried
> > this technology?
> >
> > Regards,
> >
> > El 28/10/2021 a las 02:34, Pratik Chandrakar escribió:
> > > Since NFS alone doesn't offer HA, what do you recommend for HA NFS?
> > >
> > > On Thu, Oct 28, 2021 at 7:37 AM Hean Seng 
> > wrote:
> > >
> > >> I had similar considerations when I started exploring CloudStack,
> > >> but in reality a clustered filesystem is not easy to maintain. Your
> > >> choices are OCFS or GFS2; GFS2 is hard to maintain on Red Hat, and
> > >> OCFS is nowadays only maintained in Oracle Linux. I believe you do
> > >> not want to choose a solution that is very proprietary. So plain SAN
> > >> or iSCSI is not really a direct solution here, unless you want to
> > >> encapsulate it in NFS facing CloudStack storage.
> > >>
> > >> It works well on Ceph and NFS, but performance-wise NFS is better,
> > >> and all the documentation and features you see in CloudStack work
> > >> perfectly on NFS.
> > >>
> > >> If you choose Ceph, you may have to accept some performance
> > >> degradation.
> > >>
> > >>
> > >>
> > >> On Thu, Oct 28, 2021 at 12:44 AM Leandro Mendes
> > 
> > >> wrote:
> > >>
> > >>> I've been using Ceph in prod for volumes for some time. Note that
> > >>> although I have several CloudStack installations, this one runs on
> > >>> top of Cinder, but it basically translates to libvirt and RADOS.
> > >>>
> > >>> It is totally stable, and performance IMHO is enough for
> > >>> virtualized services.
> > >>>
> > >>> I/O might suffer some penalty due to the data replication inside
> > >>> Ceph. For Elasticsearch, for instance, the degradation would be a
> > >>> bit worse, as there is replication 



Re: Experience with clustered/shared filesystems based on SAN storage on KVM?

2021-10-29 Thread Hean Seng
Hi Vivek,

In which respects is XCP/Xen better than KVM? Performance? Is that with
NFS on XCP as well?

On Fri, Oct 29, 2021 at 4:14 PM Vivek Kumar 
wrote:

> I have been using GFS2 with a shared mount point in production KVM for a
> long time. Trust me, you need an expert to manage the whole cluster,
> otherwise it becomes very hard to manage. NFS works pretty well with KVM.
> If you are planning to use iSCSI or FC, XenServer/XCP and VMware work far
> better than KVM and are much easier to manage.
>
>
>
>
> Vivek Kumar
> Sr. Manager - Cloud & DevOps
> IndiQus Technologies
> M +91 7503460090
> www.indiqus.com
>
>
>
>
> > On 29-Oct-2021, at 1:14 PM, Hean Seng  wrote:
> >
> > For a primitive approach to NFS HA, you can consider just using DRBD.
> >
> > I don't think Linstor is supported here yet.
> >
> >
> >
> > On Fri, Oct 29, 2021 at 2:29 PM Piotr Pisz  wrote:
> >
> >> Hi
> >>
> >> So we plan to use Linstor in parallel with Ceph, as a fast resource on
> >> NVMe cards.
> >> Its advantage is that it natively supports ZFS with deduplication and
> >> compression :-)
> >> The test results were more than passable.
> >>
> >> Regards,
> >> Piotr
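For reference, the ZFS deduplication and compression Piotr mentions are per-dataset properties; a sketch (pool and dataset names are placeholders):

```shell
# Enable compression and dedup on the dataset backing the Linstor pool.
# lz4 is usually a safe default; dedup needs ample RAM for the dedup table.
zfs set compression=lz4 tank/linstor
zfs set dedup=on tank/linstor
zfs get compression,dedup tank/linstor   # verify the settings
```

Dedup in particular is a trade-off: it can save a lot of space for similar VM images, but the dedup table must largely fit in memory to keep write performance acceptable.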
> >>
> >>
> >> -Original Message-
> >> From: Mauro Ferraro - G2K Hosting 
> >> Sent: Thursday, October 28, 2021 2:02 PM
> >> To: users@cloudstack.apache.org; Pratik Chandrakar <
> >> chandrakarpra...@gmail.com>
> >> Subject: Re: Experience with clustered/shared filesystems based on SAN
> >> storage on KVM?
> >>
> >> Hi,
> >>
> >> We are trying to make a lab with ACS 4.16 and Linstor. As soon as we
> >> finish the tests we can give you some approach for the results. Are
> someone
> >> already try this technology?.
> >>
> >> Regards,
> >>
> >> El 28/10/2021 a las 02:34, Pratik Chandrakar escribió:
> >>> Since NFS alone doesn't offer HA. What do you recommend for HA NFS?
> >>>
> >>> On Thu, Oct 28, 2021 at 7:37 AM Hean Seng  wrote:
> >>>
>  I have similar consideration when start exploring  Cloudstack , but
>  in reality  Clustered Filesystem is not easy to maintain.  You seems
>  have choice of OCFS or GFS2 ,  gfs2 is hard to maintain and in redhat
>  ,  ocfs recently only maintained in oracle linux.  I believe you do
> not
> >> want to
>  choose solution that is very propriety .   Thus just SAN or ISCSI o is
> >> not
>  really a direct solution here , except you want to encapsulate it in
>  NFS and facing Cloudstack Storage.
> 
>  It work good on CEPH and NFS , but performance wise,  NFS is better .
>  And all documentation and features you saw  in Cloudstack , it work
>  perfectly on NFS.
> 
>  If you choose CEPH,  may be you have to compensate with some
>  performance degradation,
> 
> 
> 
>  On Thu, Oct 28, 2021 at 12:44 AM Leandro Mendes
>  
>  wrote:
> 
> > I've been using Ceph in prod for volumes for some time. Note that
>  although
> > I had several cloudstack installations,  this one runs on top of
> > Cinder, but it basic translates as libvirt and rados.
> >
> > It is totally stable and performance IMHO is enough for virtualized
> > services.
> >
> > IO might suffer some penalization due the data replication inside
> Ceph.
> > Elasticsearch for instance, the degradation would be a bit worse as
> > there is replication also in the application size, but IMHO, unless
> > you need extreme low latency it would be ok.
> >
> >
> > Best,
> >
> > Leandro.
> >
> > On Thu, Oct 21, 2021, 11:20 AM Brussk, Michael <
>  michael.bru...@nttdata.com
> > wrote:
> >
> >> Hello community,
> >>
> >> Today I need your experience and know-how with clustered/shared
> >> filesystems based on SAN storage, used with KVM.
> >> We need to consider a clustered/shared filesystem based on SAN storage
> >> (no NFS or iSCSI), but have no know-how or experience with this.
> >> So I would like to ask whether there are any production environments
> >> out there based on SAN storage on KVM?
> >> If so, which clustered/shared filesystem are you using, and what is
> >> your experience with it (stability, reliability, maintainability,
> >> performance, usability, ...)?
> >> Furthermore, if you have had to choose between SAN storage and Ceph in
> >> the past, I would also like to hear your considerations and results :)
> >>
> 
>  --
>  Regards,
>  Hean Seng
> 
> >>>
> >>
> >>
> >
> > --
> > Regards,
> > Hean Seng
>
>

-- 
Regards,
Hean Seng
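The DRBD approach Hean Seng mentions for a primitive NFS HA pair is typically a two-node replicated resource. A sketch of the configuration, where the hostnames, IPs, and backing LV are all placeholders:

```shell
# /etc/drbd.d/nfs.res -- replicate the NFS export's backing device between
# two storage nodes; /dev/drbd0 is mounted and exported on the primary only
cat > /etc/drbd.d/nfs.res <<'EOF'
resource nfs {
  net { protocol C; }            # synchronous replication
  on nfs-a {
    device    /dev/drbd0;
    disk      /dev/vg0/nfs;
    address   10.0.0.11:7789;
    meta-disk internal;
  }
  on nfs-b {
    device    /dev/drbd0;
    disk      /dev/vg0/nfs;
    address   10.0.0.12:7789;
    meta-disk internal;
  }
}
EOF
```

DRBD alone only replicates the block device; failover (promoting the secondary, mounting the filesystem, and re-exporting NFS with a floating IP) still needs a cluster manager such as Pacemaker.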


Re: [VOTE] Apache CloudStack 4.16.0.0 (RC2)

2021-10-29 Thread Rohit Yadav
Nicolas, all,

Please check whether we have a regression around test_nic for VMware; see
https://github.com/apache/cloudstack/pull/5201


Regards.


From: Nicolas Vazquez 
Sent: Friday, October 29, 2021 19:07
To: d...@cloudstack.apache.org ; users 

Subject: Re: [VOTE] Apache CloudStack 4.16.0.0 (RC2)

Thanks for reporting, Abhishek; this will need to be included in RC3.

Regards,
Nicolas Vazquez


From: Abhishek Kumar 
Date: Friday, 29 October 2021 at 10:25
To: d...@cloudstack.apache.org , users 

Subject: Re: [VOTE] Apache CloudStack 4.16.0.0 (RC2)
Hi all,

-1

I'm hitting an issue with deploying a CKS cluster in an upgraded env.

2021-10-29 12:27:30,544 ERROR [c.c.k.c.a.KubernetesClusterActionWorker] 
(API-Job-Executor-7:ctx-62c6bc28 job-79 ctx-f80558ff) (logid:7a78d08a) 
Provisioning the control VM failed in the Kubernetes cluster : c3
com.cloud.exception.InvalidParameterValueException: The template 203 is not 
available for use
at 
com.cloud.vm.UserVmManagerImpl.createVirtualMachine(UserVmManagerImpl.java:3935)
at 
com.cloud.vm.UserVmManagerImpl.createAdvancedVirtualMachine(UserVmManagerImpl.java:3634)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

Deployed a 4.14.1 env with 2 clusters (2 ESXi 6.7 hosts in the 1st cluster and
1 ESXi 6.7 host in the 2nd cluster); CentOS 7 MS; Advanced zone.
Upgraded the env to 4.16 RC2 using
https://download.cloudstack.org/testing/41600-RC2/ (it didn't have a centos7
directory/symlink there).
I did not pre-register the 4.16 system VM template.
After the upgrade, almost everything else worked fine except the CKS cluster
deployment error mentioned above.
I've created a GitHub issue for it here:
https://github.com/apache/cloudstack/issues/5641

Other things that worked fine for me during testing:

  *   VM lifecycle - deploy, start, stop, migrate, etc
  *   Networks
  *   Templates
  *   Offerings
  *   Infrastructure operations

Regards,
Abhishek


From: Nicolas Vazquez 
Sent: 25 October 2021 19:25
To: d...@cloudstack.apache.org ; users 

Subject: [VOTE] Apache CloudStack 4.16.0.0 (RC2)

Hi All,

I have created a 4.16.0.0 release (RC2), with the following artifacts up for 
testing and a vote:

Git Branch and Commit SHA:
https://github.com/apache/cloudstack/tree/4.16.0.0-RC20211025T0851
Commit: 1e070be4c9a87650f48707a44efff2796dfa802a

Source release (checksums and signatures are available at the same location):
https://dist.apache.org/repos/dist/dev/cloudstack/4.16.0.0/

PGP release keys (signed using 656E1BCC8CB54F84):
https://dist.apache.org/repos/dist/release/cloudstack/KEYS

The vote will be open until 28th October 2021, 16.00 CET (72h).

For sanity in tallying the vote, can PMC members please be sure to indicate 
"(binding)" with their vote?

[ ] +1  approve
[ ] +0  no opinion
[ ] -1  disapprove (and reason why)

For users' convenience, the packages from this release candidate (RC2) and
the 4.16.0 systemvmtemplates are available here:
https://download.cloudstack.org/testing/41600-RC2/
https://download.cloudstack.org/systemvm/4.16/

Regards,
Nicolas Vazquez





Configure Out-of-band Management?

2021-10-29 Thread James Steele
Hi all,


I have just configured out of band IPMI(v2) management on some Ubuntu 20.04 
Hosts, which is handy as it lets root admins power-cycle hosts and see their 
power state etc. from within the CloudStack webUI.

From my Dell iDRAC9 Settings, Configure Network Settings, IPMI Settings, I can
set IPMI encryption keys; however, there is no corresponding option in the
CloudStack web UI.

Is it possible to configure encrypted IPMI traffic in CloudStack? The only
options I have under 'Configure Out-of-band Management' are:

Address: 10.250.x.x
Port: 623
Username: 
Password: 
Driver: ipmitool

The above settings work well, but the traffic is not encrypted.
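Outside of CloudStack, you can check whether the BMC accepts encrypted IPMI v2.0 (RMCP+) sessions with ipmitool's lanplus interface: `-C` selects the cipher suite and `-y` supplies the hex Kg key configured in the iDRAC. The address, credentials, and key below are placeholders:

```shell
# Cipher suite 3 = RAKP-HMAC-SHA1 authentication with AES-CBC-128 encryption
ipmitool -I lanplus -H 10.250.x.x -U root -P '<password>' \
         -C 3 -y '<hex-kg-key>' chassis power status
```

Whether CloudStack's ipmitool driver can pass `-C`/`-y` through is exactly the open question here; Redfish runs over HTTPS and so is encrypted in transit by default, which may indeed make it the simpler path.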

I also see that both Dell and CloudStack support Redfish. Is this perhaps the
best way forward?


Anyone have any experience of this? Thanks, Jim


Re: [RESULT][VOTE] Apache CloudStack 4.16.0.0

2021-10-29 Thread Ivet Petrova
Sorry to hear this, Nicolas. Still, I can start preparing the campaign, and we
can coordinate when we are all ready to announce.

Kind regards,



On 29 Oct 2021, at 16:39, Nicolas Vazquez <nicolas.vazq...@shapeblue.com> wrote:

+1, thanks Ivet.

Unfortunately a blocker was found after the voting passed. We will need an RC3
to fix it.

Regards,
Nicolas Vazquez


From: Ivet Petrova <ivet.petr...@shapeblue.com>
Date: Friday, 29 October 2021 at 10:23
To: users@cloudstack.apache.org
Cc: d...@cloudstack.apache.org
Subject: Re: [RESULT][VOTE] Apache CloudStack 4.16.0.0
Yes, Rohit, correct.
Technical work is out of my scope and you know it best. I am talking mainly
about the official announcement and communication.

Kind regards,







On 29 Oct 2021, at 15:51, Rohit Yadav <rohit.ya...@shapeblue.com> wrote:

+1 Ivet, great initiative!

I think logistically it would take about a week for the RM/community to wrap up
the release: build/publish packages, update the website, create/publish release
notes, and draft the announcement/blog. I suppose what you're saying is not to
hold any technical work (we still build/publish packages etc.) but to hold the
marketing work: making the website/release-notes changes live and doing
blogs/announcements. We haven't attempted a broader campaign on a specific date
before, but let's try it.


Regards.


From: Ivet Petrova <ivet.petr...@shapeblue.com>
Sent: Friday, October 29, 2021 16:21
To: users@cloudstack.apache.org
Cc: d...@cloudstack.apache.org
Subject: Re: [RESULT][VOTE] Apache CloudStack 4.16.0.0

Hi all,

Firstly, congrats on the hard work; I am happy to see the new release coming
soon.

I decided to share an idea here. It is great that we have such rapid
development and frequent releases with so many great features in sight. I
believe the world (inside and outside our community) should learn more about
them when there is a release.

On this point, I was wondering if we can structure a process around announcing
new releases. It would be great if, when we announce a release, we also have
the following prepared:
- Images for the social media announcements
- A PR article for the release
- A more detailed article presenting why the new features and improvements
bring value, talking more about use cases, customer benefits, and added value

I would be happy to take these on my side, but they all need some time for
preparation.
If you agree with the proposed process and with making more noise around the
release, is it possible to fix a release date on which we can also launch the
marketing initiatives simultaneously?
As all the marketing preparations will take around a week, what about doing a
broader campaign on the 8th of November?

Kind regards,







On 29 Oct 2021, at 3:59, Nicolas Vazquez <nicolas.vazq...@shapeblue.com> wrote:

Hi all,

The vote for CloudStack 4.16.0.0 *passes* with
3 PMC + 2 non-PMC votes.

+1 (PMC / binding)
3 persons (Rohit, Gabriel, Wei)

+1 (non binding)
2 persons (Daniel, Suresh)

0
none

-1
none

Thanks to everyone participating.

I will now prepare the release announcement to go out after 24-48 hours to give 
the mirrors time to catch up.

Regards,
Nicolas Vazquez



Re: [RESULT][VOTE] Apache CloudStack 4.16.0.0

2021-10-29 Thread Nicolas Vazquez
+1, thanks Ivet.

Unfortunately a blocker was found after the voting passed. We will need to have 
an RC3 to fix this blocker

Regards,
Nicolas Vazquez


From: Ivet Petrova 
Date: Friday, 29 October 2021 at 10:23
To: users@cloudstack.apache.org 
Cc: d...@cloudstack.apache.org 
Subject: Re: [RESULT][VOTE] Apache CloudStack 4.16.0.0
Yes, Rohit, correct.
Technical work is out of my scope and you know the best. I am talking mainly 
for the official announcement and communication.

Kind regards,





 

On 29 Oct 2021, at 15:51, Rohit Yadav 
mailto:rohit.ya...@shapeblue.com>> wrote:

+1 Ivet, great initiative!

I think logistically it would take about a week for the RM/community to do wrap 
up the release with build/publish pkgs, update the website, create/publish 
release notes, draft announcement/blog. I suppose what you're saying is to not 
hold any technical work (like we still build/publish pkgs etc) but hold on 
marketing work around making changes on website/release notes live and doing 
blogs/announcements. We haven't attempted a broader campaign before on a 
specific date before but let's try it.


Regards.


From: Ivet Petrova 
mailto:ivet.petr...@shapeblue.com>>
Sent: Friday, October 29, 2021 16:21
To: users@cloudstack.apache.org 
mailto:users@cloudstack.apache.org>>
Cc: d...@cloudstack.apache.org 
mailto:d...@cloudstack.apache.org>>
Subject: Re: [RESULT][VOTE] Apache CloudStack 4.16.0.0

Hi all,

Firstly congrats on the hard work and I am happy to see the new release is 
coming soon.

I decided to share an idea here. It is great we have such a rapid development 
and new releases often with so much great features insight. I believe the world 
(inside and outside of our community) should learn more about them, when there 
is a release.

On this point, I was wondering if we can structure a process around announcing 
new releases. It will be great if when we announce the release we also have the 
following things prepared:
- Images for the social media announcements
- PR article for the release
- A more detailed article presenting why the new features and improvements are 
bringing value and talking more for use cases, customer benefits and added value

I would be happy to take these at my side, but all they need some time for 
preparation.
If you agree for the proposed process and bringing more noise around the 
release, is it possible to fix a release date, when we can also launch 
simultaneously the marketing initiatives?
As all the marketing preparations will take around a week, what about doing a 
broader campaign on 8th of November?

Kind regards,







On 29 Oct 2021, at 3:59, Nicolas Vazquez 
mailto:nicolas.vazq...@shapeblue.com>>
 wrote:

Hi all,

The vote for CloudStack 4.16.0.0 *passes* with
3 PMC + 2 non-PMC votes.

+1 (PMC / binding)
3 persons (Rohit, Gabriel, Wei)

+1 (non binding)
2 persons (Daniel, Suresh)

0
none

-1
none

Thanks to everyone participating.

I will now prepare the release announcement to go out after 24-48 hours to give 
the mirrors time to catch up.

Regards,
Nicolas Vazquez


Re: [VOTE] Apache CloudStack 4.16.0.0 (RC2)

2021-10-29 Thread Nicolas Vazquez
Thanks for reporting Abhishek, this will need to be included on RC3

Regards,
Nicolas Vazquez


From: Abhishek Kumar 
Date: Friday, 29 October 2021 at 10:25
To: d...@cloudstack.apache.org , users 

Subject: Re: [VOTE] Apache CloudStack 4.16.0.0 (RC2)
Hi all,

-1

I'm hitting an issue with deploying a CKS cluster in an upgraded env.

2021-10-29 12:27:30,544 ERROR [c.c.k.c.a.KubernetesClusterActionWorker] 
(API-Job-Executor-7:ctx-62c6bc28 job-79 ctx-f80558ff) (logid:7a78d08a) 
Provisioning the control VM failed in the Kubernetes cluster : c3
com.cloud.exception.InvalidParameterValueException: The template 203 is not 
available for use
at 
com.cloud.vm.UserVmManagerImpl.createVirtualMachine(UserVmManagerImpl.java:3935)
at 
com.cloud.vm.UserVmManagerImpl.createAdvancedVirtualMachine(UserVmManagerImpl.java:3634)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

Deployed a 4.14.1 env with 2 clusters (2 ESXi 6.7 hosts in 1st cluster and 1 
ESXi 6.7 host in 2nd cluster); centos7 MS; Advanced zone
Upgraded the env to 4.16RC2 using 
https://download.cloudstack.org/testing/41600-RC2/ (it didn't has centos7 
directory/symlink there)
I did not pre-register 4.16 system VM template.
After the upgrade, almost everything else worked fine except the error 
mentioned with CKS cluster deployment above.
I've created a Github ticket for the same here, 
https://github.com/apache/cloudstack/issues/5641

Other things that worked fine for me during testing:

  *   VM lifecycle - deploy, start, stop, migrate, etc
  *   Networks
  *   Templates
  *   Offerings
  *   Infrastructure operations

Regards,
Abhishek


From: Nicolas Vazquez 
Sent: 25 October 2021 19:25
To: d...@cloudstack.apache.org ; users 

Subject: [VOTE] Apache CloudStack 4.16.0.0 (RC2)

Hi All,

I have created a 4.16.0.0 release (RC2), with the following artifacts up for 
testing and a vote:

Git Branch and Commit SHA:
https://github.com/apache/cloudstack/tree/4.16.0.0-RC20211025T0851
Commit: 1e070be4c9a87650f48707a44efff2796dfa802a

Source release (checksums and signatures are available at the same location):
https://dist.apache.org/repos/dist/dev/cloudstack/4.16.0.0/

PGP release keys (signed using 656E1BCC8CB54F84):
https://dist.apache.org/repos/dist/release/cloudstack/KEYS

The vote will be open until 28th October 2021, 16.00 CET (72h).

For sanity in tallying the vote, can PMC members please be sure to indicate 
"(binding)" with their vote?

[ ] +1  approve
[ ] +0  no opinion
[ ] -1  disapprove (and reason why)

For users convenience, the packages from this release candidate (RC2) and
4.16.0 systemvmtemplates are available here:
https://download.cloudstack.org/testing/41600-RC2/
https://download.cloudstack.org/systemvm/4.16/

Regards,
Nicolas Vazquez







 



Re: [VOTE] Apache CloudStack 4.16.0.0 (RC2)

2021-10-29 Thread Abhishek Kumar
Hi all,

-1

I'm hitting an issue with deploying a CKS cluster in an upgraded env.

2021-10-29 12:27:30,544 ERROR [c.c.k.c.a.KubernetesClusterActionWorker] 
(API-Job-Executor-7:ctx-62c6bc28 job-79 ctx-f80558ff) (logid:7a78d08a) 
Provisioning the control VM failed in the Kubernetes cluster : c3
com.cloud.exception.InvalidParameterValueException: The template 203 is not 
available for use
at 
com.cloud.vm.UserVmManagerImpl.createVirtualMachine(UserVmManagerImpl.java:3935)
at 
com.cloud.vm.UserVmManagerImpl.createAdvancedVirtualMachine(UserVmManagerImpl.java:3634)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

Deployed a 4.14.1 env with 2 clusters (2 ESXi 6.7 hosts in the 1st cluster and 1 
ESXi 6.7 host in the 2nd cluster); CentOS 7 MS; Advanced zone
Upgraded the env to 4.16 RC2 using 
https://download.cloudstack.org/testing/41600-RC2/ (it didn't have a centos7 
directory/symlink there)
I did not pre-register the 4.16 system VM template.
After the upgrade, almost everything else worked fine except the CKS cluster 
deployment error mentioned above.
I've created a GitHub issue for this here: 
https://github.com/apache/cloudstack/issues/5641
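For anyone hitting the same "template not available" error: the system VM template can usually be seeded onto secondary storage before the upgrade with the installer script shipped in cloudstack-common. A minimal sketch only; the mount point and template filename below are hypothetical, the flag names may differ between versions (check the script's usage output), and the wrapper just prints the command instead of executing it:

```shell
# Hypothetical values - adjust to your environment
SEC_STORAGE=/mnt/secondary     # assumed secondary-storage mount point
TMPLT_URL=https://download.cloudstack.org/systemvm/4.16/systemvmtemplate-4.16.0-vmware.ova  # assumed filename

# Print the command instead of executing it; drop this wrapper to run for real
run() { echo "+ $*"; }

# cloud-install-sys-tmplt ships with the management server packages;
# verify the path and flags against your installation first
run /usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt \
    -m "$SEC_STORAGE" -u "$TMPLT_URL" -h vmware -F
```

After seeding, the upgraded management server should find the 4.16 template instead of failing CKS provisioning with template 203.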

Other things that worked fine for me during testing:

  *   VM lifecycle - deploy, start, stop, migrate, etc
  *   Networks
  *   Templates
  *   Offerings
  *   Infrastructure operations

Regards,
Abhishek


From: Nicolas Vazquez 
Sent: 25 October 2021 19:25
To: d...@cloudstack.apache.org ; users 

Subject: [VOTE] Apache CloudStack 4.16.0.0 (RC2)

Hi All,

I have created a 4.16.0.0 release (RC2), with the following artifacts up for 
testing and a vote:

Git Branch and Commit SHA:
https://github.com/apache/cloudstack/tree/4.16.0.0-RC20211025T0851
Commit: 1e070be4c9a87650f48707a44efff2796dfa802a

Source release (checksums and signatures are available at the same location):
https://dist.apache.org/repos/dist/dev/cloudstack/4.16.0.0/

PGP release keys (signed using 656E1BCC8CB54F84):
https://dist.apache.org/repos/dist/release/cloudstack/KEYS

The vote will be open until 28th October 2021, 16.00 CET (72h).

For sanity in tallying the vote, can PMC members please be sure to indicate 
"(binding)" with their vote?

[ ] +1  approve
[ ] +0  no opinion
[ ] -1  disapprove (and reason why)

For users' convenience, the packages from this release candidate (RC2) and the
4.16.0 systemvm templates are available here:
https://download.cloudstack.org/testing/41600-RC2/
https://download.cloudstack.org/systemvm/4.16/

Regards,
Nicolas Vazquez





 



Re: [RESULT][VOTE] Apache CloudStack 4.16.0.0

2021-10-29 Thread Ivet Petrova
Yes, Rohit, correct.
Technical work is out of my scope and you know best. I am talking mainly 
about the official announcement and communication.

Kind regards,


 

On 29 Oct 2021, at 15:51, Rohit Yadav <rohit.ya...@shapeblue.com> wrote:

+1 Ivet, great initiative!

I think logistically it would take about a week for the RM/community to wrap 
up the release: build/publish packages, update the website, create/publish 
release notes, and draft the announcement/blog. I suppose what you're saying is not 
to hold any technical work (we would still build/publish packages etc.) but to hold the 
marketing work, i.e. making the website changes and release notes live and doing 
blogs/announcements. We haven't attempted a broader campaign on a 
specific date before, but let's try it.


Regards.


From: Ivet Petrova <ivet.petr...@shapeblue.com>
Sent: Friday, October 29, 2021 16:21
To: users@cloudstack.apache.org
Cc: d...@cloudstack.apache.org
Subject: Re: [RESULT][VOTE] Apache CloudStack 4.16.0.0

Hi all,

Firstly, congrats on the hard work; I am happy to see the new release 
coming soon.

I would like to share an idea here. It is great that we have such rapid development, 
with frequent new releases and so many great features in sight. I believe the world 
(inside and outside of our community) should learn more about them when there 
is a release.

On this point, I was wondering if we can structure a process around announcing 
new releases. It would be great if, when we announce a release, we also had the 
following things prepared:
- Images for the social media announcements
- A PR article for the release
- A more detailed article presenting why the new features and improvements 
bring value, with more on use cases, customer benefits and added value

I would be happy to take these on from my side, but they all need some time for 
preparation.
If you agree with the proposed process and with making more noise around the 
release, is it possible to fix a release date so that we can also launch 
the marketing initiatives simultaneously?
As all the marketing preparations will take around a week, what about doing a 
broader campaign on the 8th of November?

Kind regards,







On 29 Oct 2021, at 3:59, Nicolas Vazquez <nicolas.vazq...@shapeblue.com> wrote:

Hi all,

The vote for CloudStack 4.16.0.0 *passes* with
3 PMC + 2 non-PMC votes.

+1 (PMC / binding)
3 persons (Rohit, Gabriel, Wei)

+1 (non binding)
2 persons (Daniel, Suresh)

0
none

-1
none

Thanks to everyone participating.

I will now prepare the release announcement to go out after 24-48 hours to give 
the mirrors time to catch up.

Regards,
Nicolas Vazquez



Re: [RESULT][VOTE] Apache CloudStack 4.16.0.0

2021-10-29 Thread Rohit Yadav
+1 Ivet, great initiative!

I think logistically it would take about a week for the RM/community to wrap 
up the release: build/publish packages, update the website, create/publish 
release notes, and draft the announcement/blog. I suppose what you're saying is not 
to hold any technical work (we would still build/publish packages etc.) but to hold the 
marketing work, i.e. making the website changes and release notes live and doing 
blogs/announcements. We haven't attempted a broader campaign on a 
specific date before, but let's try it.


Regards.


From: Ivet Petrova 
Sent: Friday, October 29, 2021 16:21
To: users@cloudstack.apache.org 
Cc: d...@cloudstack.apache.org 
Subject: Re: [RESULT][VOTE] Apache CloudStack 4.16.0.0

Hi all,

Firstly, congrats on the hard work; I am happy to see the new release 
coming soon.

I would like to share an idea here. It is great that we have such rapid development, 
with frequent new releases and so many great features in sight. I believe the world 
(inside and outside of our community) should learn more about them when there 
is a release.

On this point, I was wondering if we can structure a process around announcing 
new releases. It would be great if, when we announce a release, we also had the 
following things prepared:
- Images for the social media announcements
- A PR article for the release
- A more detailed article presenting why the new features and improvements 
bring value, with more on use cases, customer benefits and added value

I would be happy to take these on from my side, but they all need some time for 
preparation.
If you agree with the proposed process and with making more noise around the 
release, is it possible to fix a release date so that we can also launch 
the marketing initiatives simultaneously?
As all the marketing preparations will take around a week, what about doing a 
broader campaign on the 8th of November?

Kind regards,





 

On 29 Oct 2021, at 3:59, Nicolas Vazquez <nicolas.vazq...@shapeblue.com> wrote:

Hi all,

The vote for CloudStack 4.16.0.0 *passes* with
3 PMC + 2 non-PMC votes.

+1 (PMC / binding)
3 persons (Rohit, Gabriel, Wei)

+1 (non binding)
2 persons (Daniel, Suresh)

0
none

-1
none

Thanks to everyone participating.

I will now prepare the release announcement to go out after 24-48 hours to give 
the mirrors time to catch up.

Regards,
Nicolas Vazquez






Re: [RESULT][VOTE] Apache CloudStack 4.16.0.0

2021-10-29 Thread Ivet Petrova
Hi all,

Firstly, congrats on the hard work; I am happy to see the new release 
coming soon.

I would like to share an idea here. It is great that we have such rapid development, 
with frequent new releases and so many great features in sight. I believe the world 
(inside and outside of our community) should learn more about them when there 
is a release.

On this point, I was wondering if we can structure a process around announcing 
new releases. It would be great if, when we announce a release, we also had the 
following things prepared:
- Images for the social media announcements
- A PR article for the release
- A more detailed article presenting why the new features and improvements 
bring value, with more on use cases, customer benefits and added value

I would be happy to take these on from my side, but they all need some time for 
preparation.
If you agree with the proposed process and with making more noise around the 
release, is it possible to fix a release date so that we can also launch 
the marketing initiatives simultaneously?
As all the marketing preparations will take around a week, what about doing a 
broader campaign on the 8th of November?

Kind regards,


 

On 29 Oct 2021, at 3:59, Nicolas Vazquez <nicolas.vazq...@shapeblue.com> wrote:

Hi all,

The vote for CloudStack 4.16.0.0 *passes* with
3 PMC + 2 non-PMC votes.

+1 (PMC / binding)
3 persons (Rohit, Gabriel, Wei)

+1 (non binding)
2 persons (Daniel, Suresh)

0
none

-1
none

Thanks to everyone participating.

I will now prepare the release announcement to go out after 24-48 hours to give 
the mirrors time to catch up.

Regards,
Nicolas Vazquez






Re: Experience with clustered/shared filesystems based on SAN storage on KVM?

2021-10-29 Thread Vivek Kumar
I have been using GFS2 with a shared mount point in production KVM for a long time. 
Trust me, you need an expert to manage the whole cluster, otherwise it 
becomes very hard to manage. NFS works pretty well with KVM. If you are 
planning to use iSCSI or FC, XenServer/XCP and VMware work far better 
than KVM and are much easier to manage.




Vivek Kumar
Sr. Manager - Cloud & DevOps 
IndiQus Technologies
M +91 7503460090 
www.indiqus.com




> On 29-Oct-2021, at 1:14 PM, Hean Seng  wrote:
> 
> For primitive way for NFS HA,  you can consider is just using DRDB .
> 
> I think is not yet supported linstor here.
> 
> 
> 
> On Fri, Oct 29, 2021 at 2:29 PM Piotr Pisz  wrote:
> 
>> Hi
>> 
>> So we plan to use linstor in parallel to ceph as a fast resource on nvme
>> cards.
>> Its advantage is that it natively supports zfs with deduplication and
>> compression :-)
>> The test results were more than passable.
>> 
>> Regards,
>> Piotr
>> 
>> 
>> -Original Message-
>> From: Mauro Ferraro - G2K Hosting 
>> Sent: Thursday, October 28, 2021 2:02 PM
>> To: users@cloudstack.apache.org; Pratik Chandrakar <
>> chandrakarpra...@gmail.com>
>> Subject: Re: Experience with clustered/shared filesystems based on SAN
>> storage on KVM?
>> 
>> Hi,
>> 
>> We are trying to make a lab with ACS 4.16 and Linstor. As soon as we
>> finish the tests we can give you some feedback on the results. Has anyone
>> already tried this technology?
>> 
>> Regards,
>> 
>> On 28/10/2021 at 02:34, Pratik Chandrakar wrote:
>>> Since NFS alone doesn't offer HA, what do you recommend for HA NFS?
>>> 
>>> On Thu, Oct 28, 2021 at 7:37 AM Hean Seng  wrote:
>>> 
 I have similar consideration when start exploring  Cloudstack , but
 in reality  Clustered Filesystem is not easy to maintain.  You seems
 have choice of OCFS or GFS2 ,  gfs2 is hard to maintain and in redhat
 ,  ocfs recently only maintained in oracle linux.  I believe you do not
>> want to
 choose solution that is very propriety .   Thus just SAN or ISCSI o is
>> not
 really a direct solution here , except you want to encapsulate it in
 NFS and facing Cloudstack Storage.
 
 It work good on CEPH and NFS , but performance wise,  NFS is better .
 And all documentation and features you saw  in Cloudstack , it work
 perfectly on NFS.
 
 If you choose CEPH,  may be you have to compensate with some
 performance degradation,
 
 
 
 On Thu, Oct 28, 2021 at 12:44 AM Leandro Mendes
 
 wrote:
 
> I've been using Ceph in prod for volumes for some time. Note that
 although
> I had several cloudstack installations,  this one runs on top of
> Cinder, but it basic translates as libvirt and rados.
> 
> It is totally stable and performance IMHO is enough for virtualized
> services.
> 
> IO might suffer some penalization due the data replication inside Ceph.
> Elasticsearch for instance, the degradation would be a bit worse as
> there is replication also in the application size, but IMHO, unless
> you need extreme low latency it would be ok.
> 
> 
> Best,
> 
> Leandro.
> 
> On Thu, Oct 21, 2021, 11:20 AM Brussk, Michael <
 michael.bru...@nttdata.com
> wrote:
> 
>> Hello community,
>> 
>> today I need your experience and knowhow about clustered/shared
>> filesystems based on SAN storage to be used with KVM.
>> We need to consider about a clustered/shared filesystem based on
>> SAN storage (no NFS or iSCSI), but do not have any knowhow or
>> experience
 with
>> this.
>> Those I would like to ask if there any productive used environments
>> out there based on SAN storage on KVM?
>> If so, which clustered/shared filesystem you are using and how is
>> your experience with that (stability, reliability, maintainability,
> performance,
>> useability,...)?
>> Furthermore, if you had already to consider in the past between SAN
>> storage or CEPH, I would also like to participate on your
 considerations
>> and results :)
>> 
>> Regards,
>> Michael
>> 
 
 --
 Regards,
 Hean Seng
 
>>> 
>> 
>> 
> 
> -- 
> Regards,
> Hean Seng



Re: Experience with clustered/shared filesystems based on SAN storage on KVM?

2021-10-29 Thread Hean Seng
As a primitive approach to NFS HA, you can consider just using DRBD.

I don't think Linstor is supported here yet.
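To make the DRBD idea concrete: a minimal sketch of a two-node DRBD resource that could back an HA NFS export. The node names, addresses and device paths are hypothetical (not from this thread), and the config is written to /tmp purely for illustration; a real deployment would place it in /etc/drbd.d/r0.res on both nodes:

```shell
# Write an illustrative DRBD resource definition (hypothetical hosts/devices)
cat > /tmp/r0.res <<'EOF'
resource r0 {
    net {
        protocol C;               # synchronous replication: both nodes ack each write
    }
    device    /dev/drbd0;
    disk      /dev/vg0/nfsdata;   # assumed LVM volume backing the NFS export
    meta-disk internal;
    on nfs1 { address 10.0.0.1:7789; }
    on nfs2 { address 10.0.0.2:7789; }
}
EOF
grep -c 'address' /tmp/r0.res     # two peers -> prints 2
```

Failover itself (promoting the secondary, mounting the filesystem, restarting the NFS server, moving the service IP) is typically driven by Pacemaker or a similar cluster manager rather than by hand.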



On Fri, Oct 29, 2021 at 2:29 PM Piotr Pisz  wrote:

> Hi
>
> So we plan to use linstor in parallel to ceph as a fast resource on nvme
> cards.
> Its advantage is that it natively supports zfs with deduplication and
> compression :-)
> The test results were more than passable.
>
> Regards,
> Piotr
>
>
> -Original Message-
> From: Mauro Ferraro - G2K Hosting 
> Sent: Thursday, October 28, 2021 2:02 PM
> To: users@cloudstack.apache.org; Pratik Chandrakar <
> chandrakarpra...@gmail.com>
> Subject: Re: Experience with clustered/shared filesystems based on SAN
> storage on KVM?
>
> Hi,
>
> We are trying to make a lab with ACS 4.16 and Linstor. As soon as we
> finish the tests we can give you some feedback on the results. Has anyone
> already tried this technology?
>
> Regards,
>
> On 28/10/2021 at 02:34, Pratik Chandrakar wrote:
> > Since NFS alone doesn't offer HA, what do you recommend for HA NFS?
> >
> > On Thu, Oct 28, 2021 at 7:37 AM Hean Seng  wrote:
> >
> >> I have similar consideration when start exploring  Cloudstack , but
> >> in reality  Clustered Filesystem is not easy to maintain.  You seems
> >> have choice of OCFS or GFS2 ,  gfs2 is hard to maintain and in redhat
> >> ,  ocfs recently only maintained in oracle linux.  I believe you do not
> want to
> >> choose solution that is very propriety .   Thus just SAN or ISCSI o is
> not
> >> really a direct solution here , except you want to encapsulate it in
> >> NFS and facing Cloudstack Storage.
> >>
> >> It work good on CEPH and NFS , but performance wise,  NFS is better .
> >> And all documentation and features you saw  in Cloudstack , it work
> >> perfectly on NFS.
> >>
> >> If you choose CEPH,  may be you have to compensate with some
> >> performance degradation,
> >>
> >>
> >>
> >> On Thu, Oct 28, 2021 at 12:44 AM Leandro Mendes
> >> 
> >> wrote:
> >>
> >>> I've been using Ceph in prod for volumes for some time. Note that
> >> although
> >>> I had several cloudstack installations,  this one runs on top of
> >>> Cinder, but it basic translates as libvirt and rados.
> >>>
> >>> It is totally stable and performance IMHO is enough for virtualized
> >>> services.
> >>>
> >>> IO might suffer some penalization due the data replication inside Ceph.
> >>> Elasticsearch for instance, the degradation would be a bit worse as
> >>> there is replication also in the application size, but IMHO, unless
> >>> you need extreme low latency it would be ok.
> >>>
> >>>
> >>> Best,
> >>>
> >>> Leandro.
> >>>
> >>> On Thu, Oct 21, 2021, 11:20 AM Brussk, Michael <
> >> michael.bru...@nttdata.com
> >>> wrote:
> >>>
>  Hello community,
> 
>  today I need your experience and knowhow about clustered/shared
>  filesystems based on SAN storage to be used with KVM.
>  We need to consider about a clustered/shared filesystem based on
>  SAN storage (no NFS or iSCSI), but do not have any knowhow or
>  experience
> >> with
>  this.
>  Those I would like to ask if there any productive used environments
>  out there based on SAN storage on KVM?
>  If so, which clustered/shared filesystem you are using and how is
>  your experience with that (stability, reliability, maintainability,
> >>> performance,
>  useability,...)?
>  Furthermore, if you had already to consider in the past between SAN
>  storage or CEPH, I would also like to participate on your
> >> considerations
>  and results :)
> 
>  Regards,
>  Michael
> 
> >>
> >> --
> >> Regards,
> >> Hean Seng
> >>
> >
>
>

-- 
Regards,
Hean Seng


RE: Experience with clustered/shared filesystems based on SAN storage on KVM?

2021-10-29 Thread Piotr Pisz
Hi

So we plan to use Linstor in parallel with Ceph, as a fast resource on NVMe cards.
Its advantage is that it natively supports ZFS with deduplication and 
compression :-)
The test results were more than passable.
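For reference, wiring up a ZFS-backed LINSTOR pool of this kind looks roughly as follows. The command syntax is from memory and the node/pool names are made up, so treat it as a sketch and verify against `linstor --help` for your version; the wrapper only prints the commands instead of executing them:

```shell
# Print commands instead of executing them (no LINSTOR cluster needed here);
# drop the wrapper to run for real
run() { echo "+ $*"; }

# Assumed: a zpool 'nvme_pool' already exists on each node; dedup and
# compression are ordinary ZFS dataset properties
run zfs create -o compression=lz4 -o dedup=on nvme_pool/linstor

# Register the ZFS dataset as a LINSTOR storage pool on node 'kvm1',
# then define a resource group that places 2 replicas on that pool
run linstor storage-pool create zfs kvm1 pool_nvme nvme_pool/linstor
run linstor resource-group create cs_nvme_rg --storage-pool pool_nvme --place-count 2
```

Volumes spawned from the resource group then inherit the ZFS compression/deduplication transparently, which is the advantage mentioned above.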

Regards,
Piotr


-Original Message-
From: Mauro Ferraro - G2K Hosting  
Sent: Thursday, October 28, 2021 2:02 PM
To: users@cloudstack.apache.org; Pratik Chandrakar 
Subject: Re: Experience with clustered/shared filesystems based on SAN storage 
on KVM?

Hi,

We are trying to make a lab with ACS 4.16 and Linstor. As soon as we finish the 
tests we can give you some feedback on the results. Has anyone already tried 
this technology?

Regards,

On 28/10/2021 at 02:34, Pratik Chandrakar wrote:
> Since NFS alone doesn't offer HA, what do you recommend for HA NFS?
>
> On Thu, Oct 28, 2021 at 7:37 AM Hean Seng  wrote:
>
>> I had similar considerations when I started exploring CloudStack, but 
>> in reality a clustered filesystem is not easy to maintain. You seem to 
>> have a choice of OCFS or GFS2; GFS2 is hard to maintain on Red Hat, and 
>> OCFS is now only maintained in Oracle Linux. I believe you do not want 
>> to
>> choose a solution that is very proprietary. Thus plain SAN or iSCSI is not
>> really a direct solution here, unless you want to encapsulate it in 
>> NFS facing CloudStack storage.
>>
>> It works well on Ceph and NFS, but performance-wise NFS is better. 
>> And all the documentation and features you see in CloudStack work 
>> perfectly on NFS.
>>
>> If you choose Ceph, you may have to accept some 
>> performance degradation.
>>
>>
>>
>> On Thu, Oct 28, 2021 at 12:44 AM Leandro Mendes 
>> 
>> wrote:
>>
>>> I've been using Ceph in prod for volumes for some time. Note that
>> although
>>> I had several cloudstack installations,  this one runs on top of 
>>> Cinder, but it basic translates as libvirt and rados.
>>>
>>> It is totally stable and performance IMHO is enough for virtualized 
>>> services.
>>>
>>> IO might suffer some penalization due the data replication inside Ceph.
>>> Elasticsearch for instance, the degradation would be a bit worse as 
>>> there is replication also in the application size, but IMHO, unless 
>>> you need extreme low latency it would be ok.
>>>
>>>
>>> Best,
>>>
>>> Leandro.
>>>
>>> On Thu, Oct 21, 2021, 11:20 AM Brussk, Michael <
>> michael.bru...@nttdata.com
>>> wrote:
>>>
 Hello community,

 today I need your experience and knowhow about clustered/shared 
 filesystems based on SAN storage to be used with KVM.
 We need to consider about a clustered/shared filesystem based on 
 SAN storage (no NFS or iSCSI), but do not have any knowhow or 
 experience
>> with
 this.
 Those I would like to ask if there any productive used environments 
 out there based on SAN storage on KVM?
 If so, which clustered/shared filesystem you are using and how is 
 your experience with that (stability, reliability, maintainability,
>>> performance,
 useability,...)?
 Furthermore, if you had already to consider in the past between SAN 
 storage or CEPH, I would also like to participate on your
>> considerations
 and results :)

 Regards,
 Michael

>>
>> --
>> Regards,
>> Hean Seng
>>
>