On 08/08/2024 at 10:18, Muhammad Hanis Irfan Mohd Zaid wrote:
Gotcha! Thanks Rohit and Wido for the info. Is it okay for me to update the Best Practices section <https://github.com/apache/cloudstack-documentation/blob/main/source/conceptsandterminology/choosing_deployment_architecture.rst> in the docs to add a disclaimer for primary storage mountpoints specifically for Ceph, based on Wido's explanation?

I can create a pull request for the docs repo. This is the only thing I can contribute to the project for now.


Sounds like a good idea! Such a limitation doesn't exist for Ceph. Technically, Ceph isn't a mountpoint nor a LUN either, btw.

Wido


On Thu, 8 Aug 2024 at 16:11, Wido den Hollander <[email protected]> wrote:



    On 08/08/2024 at 09:53, Rohit Yadav wrote:
     > I think for a CloudStack Ceph storage pool you'll need to update its
     > capacity if/when you increase the storage capacity after adding it
     > initially. Wido and other Ceph gurus can advise other best practices.
     >
     >

    Libvirt will automatically detect the increased capacity of a Ceph
    cluster when you add capacity.

    There is no limit on how large a pool can be in Ceph and thus
    CloudStack. You can store multiple PB in a single Ceph pool. No need to
    split into smaller pools.

    Wido
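For anyone who wants to verify this on their own hosts, the capacity libvirt reports for an RBD-backed pool can be checked roughly like this (the pool name "ceph-primary" is a placeholder; list your actual names with `virsh pool-list`):

```shell
# Refresh and inspect the libvirt storage pool backed by Ceph RBD.
# "ceph-primary" is an example name, not a CloudStack default.
virsh pool-refresh ceph-primary
virsh pool-info ceph-primary   # Capacity/Allocation/Available reflect the cluster size

# Cross-check against what Ceph itself reports:
ceph df
rados df
```

If the numbers from `virsh pool-info` track `ceph df` after adding OSDs, libvirt picked up the new capacity as described.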

     > Regards.
     >
     >
     >
    ------------------------------------------------------------------------
     > *From:* Muhammad Hanis Irfan Mohd Zaid <[email protected]>
     > *Sent:* Thursday, August 8, 2024 11:43
     > *To:* [email protected] <[email protected]>
     > *Cc:* Rohit Yadav <[email protected]>
     > *Subject:* Re: Best steps to deploy a working KVM cluster with RHEL
     > The gist that you shared plus your blog really helped me set up CS
     > on our Rocky Linux servers. It's now running great along with Ceph
     > RBD (the other email) for primary and Ceph NFS for secondary.
     > Thanks Rohit!
     >
     > Getting back to the pool size: say I configured a Ceph RBD pool
     > with no quota, this means I can expand it indefinitely, without
     > needing to split my Ceph storage into multiple 6 TB pools as the
     > CS docs (best practices section) suggest.
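For reference, this is roughly how pool quotas are inspected and set in Ceph (the pool name "cloudstack" is an example):

```shell
# Show any quota currently set on the pool ("cloudstack" is an example name).
ceph osd pool get-quota cloudstack

# A pool reporting "max objects: N/A" and "max bytes: N/A" has no quota and
# can grow with the cluster. To cap it anyway:
ceph osd pool set-quota cloudstack max_bytes $((100 * 1024**4))  # 100 TiB

# Remove the cap again (0 means unlimited):
ceph osd pool set-quota cloudstack max_bytes 0
```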
     >
     > I'll look at both Ceph and NFS limitations before actually
     > designing the production cluster.
     >
     > On Fri, 2 Aug 2024 at 20:46, Rohit Yadav <[email protected]> wrote:
     >
     >     Except for the way you configure Linux bridges (using nmcli) on
     >     EL9, more or less all steps apply as on earlier EL distros.
     >
     >     I've my old notes here -
     >
    
https://gist.github.com/rohityadavcloud/fc401a0fe8e8ea16b4b3a4e3d149ce0c#file-el9-or-rhel9-acs
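For readers who can't open the gist, the EL9 bridge setup with nmcli looks roughly like this (a sketch, not the gist's exact steps; the interface name "eno1" and the addresses are examples to adjust for your hosts):

```shell
# Create the cloudbr0 bridge CloudStack's KVM agent typically expects.
nmcli connection add type bridge ifname cloudbr0 con-name cloudbr0 stp no
# Enslave the physical NIC ("eno1" is an example) to the bridge.
nmcli connection add type ethernet ifname eno1 con-name eno1-br0 master cloudbr0
# Move the host's IP configuration onto the bridge (example addresses).
nmcli connection modify cloudbr0 ipv4.method manual \
    ipv4.addresses 192.0.2.10/24 ipv4.gateway 192.0.2.1 ipv4.dns 192.0.2.1
nmcli connection up cloudbr0
```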
     >
     >     CloudStack-Ceph users have easily used 100-1000s of TB of storage
     >     (RBD) in production, so that's fine. For NFS you can refer to the
     >     Ceph-NFS specific limitations (if any).
     >
     >
     >     Regards.
     >
     >
     >
     >
     >     ________________________________
     >     From: Muhammad Hanis Irfan Mohd Zaid <[email protected]>
     >     Sent: Friday, August 2, 2024 07:54
     >     To: [email protected] <[email protected]>
     >     Subject: Re: Best steps to deploy a working KVM cluster with RHEL
     >
     >     Oh, say we're using Rocky Linux 9 or AlmaLinux 9, are there any
     >     workable steps that can be shared that work in production?
     >
     >     We're going to be working mostly with 25G LACP bonded
     >     interfaces. And we're planning to use Ceph RBD for primary and
     >     Ceph NFS for secondary storage. Does this mean provisioning more
     >     than 10 TB for both is okay with CloudStack?
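A 25G LACP bond under a CloudStack bridge can be sketched with nmcli like this (a sketch only; the NIC names "ens1f0"/"ens1f1" are examples, and the 802.3ad options must match the switch-side LACP configuration):

```shell
# LACP (802.3ad) bond over two 25G NICs ("ens1f0"/"ens1f1" are examples).
nmcli connection add type bond ifname bond0 con-name bond0 \
    bond.options "mode=802.3ad,miimon=100,lacp_rate=fast,xmit_hash_policy=layer3+4"
nmcli connection add type ethernet ifname ens1f0 con-name bond0-p1 master bond0
nmcli connection add type ethernet ifname ens1f1 con-name bond0-p2 master bond0

# Put the bond under the CloudStack bridge rather than addressing it directly.
nmcli connection add type bridge ifname cloudbr0 con-name cloudbr0 stp no
nmcli connection modify bond0 connection.master cloudbr0 connection.slave-type bridge
nmcli connection up bond0 && nmcli connection up cloudbr0
```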
     >
     >     On Thu, 1 Aug 2024 at 14:50, Rohit Yadav <[email protected]> wrote:
     >
     >      > Hi Hanis,
     >      >
     >      > The docs may be a bit outdated and were originally written
     >      > with XenServer in scope - thanks for sharing that. It appears
     >      > you're using KVM, so you should look at the max-limitations
     >      > and specific recommendations of your KVM distro and NFS
     >      > vendor.
     >      >
     >      > The majority of NFS datastores (both primary & secondary
     >      > storage pools) these days are in the 10s of TB in size/range,
     >      > with even 100s of TBs also seen in production usage.
     >      >
     >      > While using NFS, it's equally important to also consider
     >      > networking aspects such as switching capacity in the (KVM)
     >      > cluster and the switch & host-NIC capabilities such as 1G,
     >      > 10G, teaming/bonding, LACP, etc.
     >      >
     >      >
     >      > Regards.
     >      >
     >      >
     >      >
     >      >
     >      > ________________________________
     >      > From: Muhammad Hanis Irfan Mohd Zaid <[email protected]>
     >      > Sent: Thursday, August 1, 2024 07:01
     >      > To: [email protected] <[email protected]>
     >      > Subject: Re: Best steps to deploy a working KVM cluster
    with RHEL
     >      >
     >      > Does anyone have thoughts on this?
     >      >
     >      >
     >      >
     >
    
https://docs.cloudstack.apache.org/en/4.19.1.0/conceptsandterminology/choosing_deployment_architecture.html#best-practices
     >      >
     >      > Btw after reading that page, it looks like for primary
     >      > storage the size should be < 6 TB. What about secondary
     >      > storage? Assuming both are using NFS.
     >      >
     >      > > On Wed, 31 Jul 2024 at 16:52, Muhammad Hanis Irfan Mohd Zaid
     >      > > <[email protected]> wrote:
     >      >
     >      > > Hi CloudStack community!
     >      > >
     >      > > I'm currently testing out a POC with VLAN on our current
     >      > > vSphere cluster. As someone with a mostly VMware background,
     >      > > setting up each individual KVM host and adding it to the CS
     >      > > management server is a bit of a hard task for me. I've hit a
     >      > > few roadblocks and am hoping the community can assist me on
     >      > > my journey. You can refer to the steps that I took to
     >      > > configure a KVM node here: https://pastebin.com/MpSUq5mF
     >      > >
     >      > > One of the issues that I'm having is that after the setup I
     >      > > ran in the pastebin, an error occurred which I was sure would
     >      > > be resolved by masking the libvirtd sockets, but that proved
     >      > > not to work. I have to reboot the host while the UI is still
     >      > > adding it so that it can be successfully added.
     >      > >
     >      > > 2024-07-30 03:56:37,871 INFO [kvm.resource.LibvirtConnection]
     >      > > (main:null) (logid:) No existing libvirtd connection found.
     >      > > Opening a new one
     >      > > 2024-07-30 03:56:38,109 ERROR [cloud.agent.AgentShell]
     >      > > (main:null) (logid:) Unable to start agent:
     >      > > com.cloud.utils.exception.CloudRuntimeException: Failed to
     >      > > connect socket to '/var/run/libvirt/virtqemud-sock':
     >      > > Connection refused
     >      > >     at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.configure(LibvirtComputingResource.java:1153)
     >      > >     at com.cloud.agent.Agent.<init>(Agent.java:193)
     >      > >     at com.cloud.agent.AgentShell.launchNewAgent(AgentShell.java:452)
     >      > >     at com.cloud.agent.AgentShell.launchAgentFromClassInfo(AgentShell.java:431)
     >      > >     at com.cloud.agent.AgentShell.launchAgent(AgentShell.java:415)
     >      > >     at com.cloud.agent.AgentShell.start(AgentShell.java:511)
     >      > >     at com.cloud.agent.AgentShell.main(AgentShell.java:541)
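One thing worth checking for that error (a guess based on the "virtqemud-sock: Connection refused" message, not a confirmed fix): on EL9 libvirt ships as modular per-driver daemons, and if neither virtqemud nor the monolithic libvirtd is answering on the socket the agent wants, the connection is refused until something (like a reboot) starts it. Verifying the daemon state before adding the host may avoid the reboot:

```shell
# Check which daemon owns the socket the agent is trying to reach.
systemctl status virtqemud.socket virtqemud.service

# If the setup instead uses the monolithic daemon, mask the modular sockets
# (this mirrors the "socket masking" step) and run libvirtd directly:
systemctl mask virtqemud.socket virtqemud-ro.socket virtqemud-admin.socket
systemctl enable --now libvirtd.service

# Either way, confirm libvirt answers before adding the host in the UI:
virsh -c qemu:///system version
```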
     >      > >
     >      > > Another issue that I'm having is that VNC doesn't work the
     >      > > first time. I have to do these steps to get VNC working on
     >      > > newly added hosts:
     >      > >
     >      > >    - Need to migrate a VM to a newly added host.
     >      > >    - Try to use VNC (doesn't work).
     >      > >    - Migrate it back out.
     >      > >    - Reboot the new host.
     >      > >    - Migrate the VM back into the new host.
     >      > >    - Try to use VNC (now it works).
     >      > >
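For the VNC symptom above, one thing worth verifying on each new host (a guess, not a confirmed fix): qemu must listen for VNC on an address the console proxy can reach, and that setting only takes effect for VMs started after libvirt picks it up, which could explain why a reboot plus re-migration "fixes" it:

```shell
# qemu.conf must allow VNC connections from off-host (the console proxy);
# CloudStack KVM hosts are commonly configured with:
grep -E '^vnc_listen' /etc/libvirt/qemu.conf   # expect: vnc_listen = "0.0.0.0"

# If the setting was added after libvirt started, restart it and check a
# running domain ("<vm-name>" is a placeholder):
systemctl restart libvirtd
virsh vncdisplay <vm-name>
```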
     >      > >
     >      > > I humbly request: can anyone share steps that I can follow
     >      > > to deploy a POC, or even a production-capable KVM cluster,
     >      > > running on a RHEL-based OS or even Ubuntu? Thanks :)
     >      > >
     >      > >
     >      > >
     >      >
     >
