Am 08.01.2015 um 23:54 schrieb Stephen John Smoogen:
On 8 January 2015 at 15:19, Reindl Harald wrote:


    Am 08.01.2015 um 21:34 schrieb Stephen John Smoogen:

        In most of the cases, we end up requiring someone to go to the
        system
        physically and doing some initial work if we run into any of 0-3. Of
        course that works great if you have a physical server. We virtualize
        most of our servers which ends up with even more weird problems of
        trying to get working

    then you are doing something wrong

Of course I do Harald. Very few of us are perfect. Thank you for
reminding me of my failures. It has made me a better person.

i'll ignore the cynicism :-)

    especially on virtualized systems remote management is far
    easier, because you have *one* remote console - if it is
    regularly tested and all clients have the needed access, you
    can reach 100, 1000, 10000 virtual servers without exception


Another thread, but it would be useful if you explained how this is
accomplished.

in my "serious" environments, which are all virtualized, it is simple:

 * a central VMware vCenter Server for the HA cluster
 * that machine is sadly a Windows box, but it doesn't matter,
   because its only purpose is to run an RDP session with the
   VMware client connected to the vCenter all day long
 * for sure you can achieve the same with pure open-source

well, and when it comes to connections:

* normally VPN through one of the guests
* as a fallback, SSH forwarding to port 3389 via a guest
  dedicated to that, plus, on a different host, another guest
  which is allowed to connect to the admin machine on 3389
* once you have an RDP connection, whether via VPN or the
  SSH-forwarding fallback, you have a single management
  interface to watch and maintain the cluster, including a
  virtual console to each VM with a single click
* the virtual console is, from the view of the guest, the same
  as if you were sitting in front of it - enter the root password
  and you are logged in as from a local terminal on the guest
* there is even a Linux-based vCenter Server running itself on
  top of the HA cluster, so it also has failover; i am not sure
  about the client access only because the current admin server
  has been running up-to-date and unchanged since summer 2008
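the fallback above boils down to one SSH tunnel; a minimal sketch, with made-up placeholder host names (not the real machines):

```shell
# "jumpguest" is the guest dedicated to SSH forwarding, "adminbox"
# the Windows admin machine - both names are assumptions for
# illustration only.
JUMP_GUEST="jumpguest.example.com"
ADMIN_HOST="adminbox.example.lan"
# Forward local port 3389 through the jump guest to the admin
# machine's RDP port; afterwards point any RDP client at
# localhost:3389.
TUNNEL="ssh -N -L 3389:${ADMIN_HOST}:3389 ${JUMP_GUEST}"
echo "${TUNNEL}"
```

the -N flag keeps the session open for forwarding only, without running a remote command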

finally, the only thing i need to take care of is having the same
hypervisor version on the bare-metal hosts; if a host goes down,
the guests are started on an available one, as after a power outage

if a host becomes unreachable over the network, the hosts are configured to issue a clean shutdown command to their guests, because the bare-metal hypervisor will restart them within a few minutes on a remaining host
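the unreachable-host rule is roughly a watchdog; a sketch of the decision logic only, where the probe and the threshold of three failures are my own assumptions, not VMware's actual isolation-response behaviour:

```shell
# Count consecutive failed network probes; after MAX_FAILS in a
# row, decide to shut the guests down cleanly so the surviving
# hosts can restart them. Threshold is an assumption.
FAILS=0
MAX_FAILS=3

record_probe() {   # $1 = 0 if the probe succeeded, non-zero otherwise
    if [ "$1" -eq 0 ]; then
        FAILS=0
    else
        FAILS=$((FAILS + 1))
    fi
}

should_shutdown() {   # true once MAX_FAILS probes failed in a row
    [ "$FAILS" -ge "$MAX_FAILS" ]
}

# Example: three failed probes in a row trigger the clean shutdown
record_probe 1; record_probe 1; record_probe 1
if should_shutdown; then
    echo "issuing clean shutdown to guests"
fi
```

a single successful probe resets the counter, so only a sustained outage triggers the shutdown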

well, as said, you can surely achieve the same with pure open source, but since i have to maintain all that stuff practically alone, i prefer a solution for virtualization, hardware and shared storage that is supported by a trusted local vendor

    but back to topic: yes, it is *way* too optimistic to assume
    KVM or similar everywhere - a small business typically has a
    *server* as router/firewall *because* you want to avoid the
    security problems of making crap without regular updates
    directly reachable from the internet, and that includes:

    * SOHO routers
    * KVM devices
    * any embedded device
    * VMware consoles

    so guess what is running there: an ordinary Linux setup, in
    my case Fedora, and the only way to access some of them,
    hundreds of kilometers away, is SSH

this we agree on


-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel
Fedora Code of Conduct: http://fedoraproject.org/code-of-conduct
