The setup is 4 × 1 TB drives running RAID10. I was using the GNOME Disk
Utility to verify the integrity of the array (which is a 500 MB mirrored md0
and the rest a RAID10 md1).
I believe one of the drives is bad, but prior to the system going offline it
showed that two drives were detached from the array.
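Before assuming a drive is bad, it helps to look at the kernel's own view of the arrays. A minimal sketch, assuming the arrays are /dev/md0 and /dev/md1 and the members are /dev/sd[a-d] (device names are assumptions):

```shell
# Overall md status: shows [UU] (healthy) vs [U_] (degraded) per array
cat /proc/mdstat

# Detailed per-member state of the RAID10 array (assumed name)
mdadm --detail /dev/md1

# SMART health of one member drive (assumed name); repeat for each
smartctl -H /dev/sda
```

If mdadm --detail shows members as "faulty" or "removed" while SMART still reports the drives as healthy, the problem is usually stale array metadata rather than failed hardware.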
It's been fixed. The drives were OK, but nothing would reassemble. The
drives were marked as faulty, so I followed the suggestions here:
http://anders.com/cms/411/Linux/Software.RAID/inactive/mdadm
Thanks!
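For the archive, roughly the approach from that page, sketched with assumed array and partition names (/dev/md1, /dev/sd[a-d]2):

```shell
# Stop the inactive array first so it can be reassembled cleanly
mdadm --stop /dev/md1

# Force reassembly from the member partitions; --force clears the
# stale "faulty" state if the event counts are close enough
mdadm --assemble --force /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2

# If a member was kicked out entirely, re-add it so it resyncs
mdadm /dev/md1 --re-add /dev/sdb2

# Watch the resync progress
cat /proc/mdstat
```

Note that --force is only safe when the drives themselves are healthy, as they were here; with genuinely failing hardware it can assemble an inconsistent array.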
On Tue, Sep 4, 2012 at 12:25 PM, Jacob Hydeman jhyde...@gmail.com wrote:
CentOS 6.2, using the latest KVM drivers for Windows guests. Using LVM for
storage, software RAID 10, and RAW with write cache disabled in
virt-manager.
When using virtio drivers for disk drives, Windows Server complains about
write caching being enabled. I can't disable it either. Using IDE prevents
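What virt-manager calls the cache mode ends up in the libvirt domain XML, and it is the cache= attribute that determines whether the guest sees a write cache. A sketch of the relevant disk element, assuming an LVM-backed raw volume (the path and volume names are assumptions):

```xml
<disk type='block' device='disk'>
  <!-- cache='none' bypasses the host page cache but still advertises a
       write cache to the guest; cache='writethrough' reports write
       caching as disabled, which Windows then stops complaining about -->
  <driver name='qemu' type='raw' cache='writethrough'/>
  <source dev='/dev/vg0/win-guest'/>   <!-- assumed LVM volume -->
  <target dev='vda' bus='virtio'/>
</disk>
```

This may explain the symptom above: "write cache disabled" in virt-manager typically maps to cache='none', which the virtio driver still presents to Windows as a drive with write caching enabled.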
No. But you have to tell it via config (I don't know how KVM does this)
which bridge the VM is going to use. And if you want it to be reachable via
two bridges, you have to set it up for two bridges and give it two IP
numbers. And the packet then has to come in through the correct
interface/bridge,
I'm using the same setup for multiple bridges for Xen, no problem. But I
use static IP addresses. You *have* to use IP numbers from different
subnets.
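On RHEL/CentOS, the per-bridge config files for a second bridge might look like this (interface names, addresses, and subnets are all assumptions):

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth3  (assumed second NIC)
DEVICE=eth3
ONBOOT=yes
BRIDGE=br3

# /etc/sysconfig/network-scripts/ifcfg-br3  (assumed bridge name)
DEVICE=br3
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.2.10      # a different subnet than the first bridge (assumed)
NETMASK=255.255.255.0
```

The key point, per the advice above, is that each bridge's IPADDR must sit in its own subnet, or the host's routing table cannot pick the correct bridge for return traffic.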
I've set up just the IPADDR= and NETMASK= to have different static IPs in
different subnets and changed to BOOTPROTO=static in each of the
Running CentOS 5.4 x64.
Have successfully bridged eth2 with br2 by following the instructions here:
http://wiki.libvirt.org/page/Networking (under the RHEL section)
Have been running several KVM VMs successfully via this bridge.
I am now trying to bridge additional interfaces by using the same