Google "Dell Open manage Linux".  If you're using Proxmox VE then use the Debian packages.

-Adam


On 9/28/2020 2:57 PM, dave wrote:
I have not messed with the agent for the Dells, but I'll give it a look. Is it a BIOS-installed application?


On 9/28/20 1:40 PM, Josh Baird wrote:
You can easily alert on failed disks in the PERC controllers.  You just need to install Dell's OpenManage agent, which makes these notifications possible via its integrated interfaces, SNMP, etc.
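
For the SNMP side, any poller can watch the agent once it's running.
A rough sketch with the pysnmp library; the community string, host
name, and OID here are placeholders (Dell's storage MIB sits under
enterprises.674.10893.1.20, but check the MIB for the exact
disk-state column you want):

    from pysnmp.hlapi import (
        getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
        ContextData, ObjectType, ObjectIdentity,
    )

    # Placeholder OID under Dell's storage MIB; verify it in the MIB.
    DISK_STATE_OID = "1.3.6.1.4.1.674.10893.1.20.130.4.1.4.1"

    error_indication, error_status, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData("public"),                          # assumed community
        UdpTransportTarget(("server.example.com", 161)),  # placeholder host
        ContextData(),
        ObjectType(ObjectIdentity(DISK_STATE_OID)),
    ))

    if error_indication or error_status:
        print("SNMP query failed:", error_indication or error_status)
    else:
        for name, value in var_binds:
            print(name.prettyPrint(), "=", value.prettyPrint())

OMSA can also send SNMP traps on state changes, which saves polling.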

On Mon, Sep 28, 2020 at 2:38 PM dave <dmilho...@wletc.com> wrote:

    The only thing I hate on a host is the PERCs. I wish Dell would
    alert or otherwise notify you of a failing drive.


    On 9/28/20 10:18 AM, Mike Hammett wrote:
    I'm not a CEPH expert, but that is my understanding of it at a
    high level.



    -----
    Mike Hammett
    Intelligent Computing Solutions <http://www.ics-il.com/>
    Midwest Internet Exchange <http://www.midwest-ix.com/>
    The Brothers WISP <http://www.thebrotherswisp.com/>
    ------------------------------------------------------------------------
    *From: *"Lewis Bergman" <lewis.berg...@gmail.com>
    <mailto:lewis.berg...@gmail.com>
    *To: *"AnimalFarm Microwave Users Group" <af@af.afmug.com>
    <mailto:af@af.afmug.com>
    *Sent: *Monday, September 28, 2020 8:05:35 AM
    *Subject: *Re: [AFMUG] Virtual machines

    I would assume CEPH takes the physical disks from each host and
    combines them into one logical storage pool for use by the
    entire cluster?
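
    That pooled view is easy to check from the python3-rados
    bindings, for what it's worth. A rough sketch, assuming it runs
    on a cluster member with the stock /etc/ceph/ceph.conf and
    keyring in place:

        import rados

        # Connect using the standard config/keyring on a cluster node.
        cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
        cluster.connect()

        # One logical capacity view spanning every OSD in the cluster.
        stats = cluster.get_cluster_stats()
        print("total KB:", stats["kb"])
        print("used  KB:", stats["kb_used"])
        print("avail KB:", stats["kb_avail"])

        cluster.shutdown()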

    On Mon, Sep 28, 2020 at 7:39 AM Mike Hammett <af...@ics-il.net> wrote:

        CEPH kind of fills the void where you don't need a
        dedicated, shared storage box.



        -----
        Mike Hammett
        Intelligent Computing Solutions <http://www.ics-il.com/>
        Midwest Internet Exchange <http://www.midwest-ix.com/>
        The Brothers WISP <http://www.thebrotherswisp.com/>
        ------------------------------------------------------------------------
        *From: *"Adam Moffett" <dmmoff...@gmail.com
        <mailto:dmmoff...@gmail.com>>
        *To: *af@af.afmug.com
        *Sent: *Monday, September 28, 2020 7:34:14 AM
        *Subject: *Re: [AFMUG] Virtual machines

        If you're going to have multiple physical VM hosts, then
        fast shared storage is very helpful. When you want to reboot
        a physical machine for an OS upgrade and the VMs are on
        shared storage, you can migrate them off that box in a few
        seconds. Do your maintenance, reboot, migrate the VMs back.
        No downtime.
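
        With Proxmox that migration is scriptable through its API as
        well. A rough sketch using the third-party proxmoxer
        library; the host names, credentials, and VM ID below are
        made up for illustration:

            from proxmoxer import ProxmoxAPI

            # Hypothetical cluster details; substitute your own.
            pve = ProxmoxAPI("pve1.example.com", user="root@pam",
                             password="secret", verify_ssl=False)

            # Live-migrate VM 100 off pve1 so the node can be rebooted.
            pve.nodes("pve1").qemu(100).migrate.post(target="pve2",
                                                     online=1)

        Loop that over every VM on the node and the whole
        maintenance window is scripted.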

        On 9/27/2020 11:43 AM, Lewis Bergman wrote:

            Thanks guys. Proxmox didn't even come up in my searches.
            I'll look into it. If anyone really knows the space and
            wouldn't mind spending 15 minutes discussing what we
            need, I would appreciate it.

            On Sun, Sep 27, 2020, 10:21 AM Bill Prince <part15...@gmail.com> wrote:

                VMs are a great way to go depending on the job(s)
                you need to do. As it happens, a lot of jobs (e.g.
                DNS) are not particularly compute-intensive, so it's
                a great way to stretch resources. We find we can run
                3 or 4 virtual machines on each physical machine.

                We used VMware from the get-go, but did not get many
                of the paid-for bells and whistles. VMware can
                become pretty expensive, whereas other solutions
                (e.g. Proxmox) have an advantage because they are
                open source.

                The other consideration is containers, which can be
                thought of as VM-lite. Containers provide almost all
                of the advantages of VMs with a significantly
                lighter load on the hardware. As a result, you can
                load up more applications on less hardware. The
                leading contender in the container space is
                Kubernetes, and it's also open source.

                Pick your poison with someone you know who can go
                over your requirements.


                bp
                <part15sbs{at}gmail{dot}com>

                On 9/27/2020 7:27 AM, Lewis Bergman wrote:

                    I have decided I need to get on the VM train.
                    I know, I am only 15 years behind. Honestly,
                    until now I haven't had a compelling reason.

                    I want something that will at least do some
                    monitoring of VMs, backups, snapshots, etc.
                    Managed upgrading would be great but not as big
                    a priority for me (at least I don't think so).

                    Since I don't know what I don't know, I am
                    asking the experienced crowd.

                    It seems the two real choices are VMware and
                    Xen. Are there others? Commercial support seems
                    nice; is it worth paying for? What I will run is
                    important for sure.

                    I spent a few hours last night, and I'm more
                    confused now than when I started.



    --
    Lewis Bergman
    325-439-0533 Cell




