On 23/06/16 01:05 AM, Andrew Dent wrote:
> Hi
>
> We have a single box running oVirt 3.5.6.2-1 + CentOS 7.
> The Engine, the VDSM host and the storage are all on the single box, which
> contains 2 * RAID1 arrays.
>
> We are looking to purchase a second box, and I'm wondering if someone
> can please help me to understand how best to migrate to a HA

RHEV is a cloud solution with some HA features. It is not an actual HA
solution.

digimer
On 23/06/16 12:08 AM, Eero Volotinen wrote:
> How about trying commercial RHEV?
>
> Eero
>
> On 22.6.2016 at 8:02 AM, "Tom Robinson" wrote:
>
>> Hi,
>>
>> I have two KVM hosts (CentOS 7) and would like them to operate as High
>> Availability servers, automatically migrating guests when one of the
>> hosts goes down.
>>
>> My
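For reference, a two-node pacemaker/corosync cluster is the usual way to get this on plain CentOS 7. A rough bootstrap sketch (host names kvm1/kvm2 and the cluster name are placeholders, not from this thread):

  # on both hosts: install the cluster stack plus fence agents
  yum install -y pcs fence-agents-all
  systemctl enable pcsd
  systemctl start pcsd
  passwd hacluster                      # same password on both nodes

  # on one host: authenticate the nodes and build the cluster
  pcs cluster auth kvm1 kvm2 -u hacluster
  pcs cluster setup --name kvmcluster kvm1 kvm2
  pcs cluster start --all
  pcs cluster enable --all

Fencing (stonith) still has to be configured before any VM resources are added; see the fence_ipmilan sketch further down the thread.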
On 23.06.2016 02:52, listmail wrote:
According to the compatibility chart over here:
https://access.redhat.com/support/policy/intel
...anything later than 6.3 (6.4 and up) should work with the E3-12xx v3
family of processors. But those are not the results I am seeing.
Does anyone have
Hi All,
Hopefully someone with a broad overview of CentOS compatibility issues can
comment on this:
I am evaluating a Supermicro X10SLM motherboard with an Intel E3-1231 v3
CPU. Testing with boots from Live DVDs, the CentOS 6.x family is panicking
at boot time. I have tried 6.8, 6.5, and 6.3,
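If it helps to see where the panic happens, the usual trick on the Live media is to strip the splash options and send everything to a console you can capture (the serial/IPMI details below are only illustrative and assume the board has a BMC):

  # at the boot menu, edit the kernel line: remove "rhgb quiet" and add
  console=tty0 console=ttyS0,115200

  # then capture the serial output, e.g. over IPMI serial-over-LAN
  ipmitool -I lanplus -H <bmc-address> -U ADMIN -P <password> sol activate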
I had no real reason to doubt. I was just being lazy. I figured that,
if anyone knew the correct answer, it would be the people on this list.
Thank you for your gracious forbearance.
On 06/21/16 20:01, Boris Epstein wrote:
> I would think the same as Gordon that as long as your 64-bit VM
>
On 22/06/16 02:36 PM, Paul Heinlein wrote:
> On Wed, 22 Jun 2016, Digimer wrote:
>
>> The nodes are not important, the hosted services are.
>
> The only time this isn't true is when you're using the node to heat the
> room.
>
> Otherwise, the service is always the important thing. (The node may
On 22/06/16 02:34 PM, m.r...@5-cent.us wrote:
> Digimer wrote:
>> On 22/06/16 02:01 PM, Chris Adams wrote:
>>> Once upon a time, John R Pierce said:
On 6/22/2016 10:47 AM, Digimer wrote:
> This is called "fabric fencing" and was originally the only supported
>
On 22/06/16 02:31 PM, John R Pierce wrote:
> On 6/22/2016 11:06 AM, Digimer wrote:
>> I know this goes against the
>> grain of sysadmins to yank power, but in an HA setup, nodes should be
>> disposable and replaceable. The nodes are not important, the hosted
>> services are.
>
> of course, the
Once upon a time, John R Pierce said:
> of course, the really tricky problem is implementing an iSCSI
> storage infrastructure that's fully redundant and has no single point
> of failure. This requires the redundant storage controllers to
> have shared write-back cache,
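The controller-side cache mirroring is vendor territory, but the host side of that redundancy is usually just two portals plus dm-multipath. A minimal sketch, with made-up addresses and IQN:

  # log in to the same target over two independent portals/switches
  iscsiadm -m discovery -t sendtargets -p 10.0.1.10
  iscsiadm -m discovery -t sendtargets -p 10.0.2.10
  iscsiadm -m node -T iqn.2016-06.com.example:lun1 -p 10.0.1.10 --login
  iscsiadm -m node -T iqn.2016-06.com.example:lun1 -p 10.0.2.10 --login

  # collapse both paths into one device with dm-multipath
  yum install -y device-mapper-multipath
  mpathconf --enable --with_multipathd y
  multipath -ll        # one mpath device, two active paths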
On Wed, 22 Jun 2016, Digimer wrote:
> The nodes are not important, the hosted services are.
The only time this isn't true is when you're using the node to heat
the room.
Otherwise, the service is always the important thing. (The node may
become as synonymous with the service because there's
Digimer wrote:
> On 22/06/16 02:01 PM, Chris Adams wrote:
>> Once upon a time, John R Pierce said:
>>> On 6/22/2016 10:47 AM, Digimer wrote:
>>>> This is called "fabric fencing" and was originally the only supported
>>>> option in the very early days of HA. It has fallen out
On 6/22/2016 11:06 AM, Digimer wrote:
> I know this goes against the
> grain of sysadmins to yank power, but in an HA setup, nodes should be
> disposable and replaceable. The nodes are not important, the hosted
> services are.
of course, the really tricky problem is implementing an iSCSI storage
On 22/06/16 02:12 PM, Chris Adams wrote:
> Once upon a time, Digimer said:
>> The cluster software and any hosted services aren't running. It's not
>> that they think they're wrong, they just have no existing state so they
>> won't try to touch anything without first ensuring it
Once upon a time, Digimer said:
> The cluster software and any hosted services aren't running. It's not
> that they think they're wrong, they just have no existing state so they
> won't try to touch anything without first ensuring it is safe to do so.
Well, I was being short;
On 22/06/16 02:01 PM, Chris Adams wrote:
> Once upon a time, John R Pierce said:
>> On 6/22/2016 10:47 AM, Digimer wrote:
>>> This is called "fabric fencing" and was originally the only supported
>>> option in the very early days of HA. It has fallen out of favour for
>>>
Once upon a time, John R Pierce said:
> On 6/22/2016 10:47 AM, Digimer wrote:
> >This is called "fabric fencing" and was originally the only supported
> >option in the very early days of HA. It has fallen out of favour for
> >several reasons, but it does still work fine. The
On 6/22/2016 10:47 AM, Digimer wrote:
> This is called "fabric fencing" and was originally the only supported
> option in the very early days of HA. It has fallen out of favour for
> several reasons, but it does still work fine. The main issue is that it
> leaves the node in an unclean state. If an
On 22/06/16 01:38 PM, John R Pierce wrote:
> On 6/21/2016 10:01 PM, Tom Robinson wrote:
>> Currently when I migrate a guest, I can all too easily start it up on
>> both hosts! There must be some
>> way to fence these off but I'm just not sure how to do this.
>
> in addition to power fencing as
I have multiple VMs that are hanging on boot. Sometimes they'll boot
fine after 5 mins and other times it'll take over an hour. The problem
seems to be related to journald but I'd like to figure out how I can
get more information.
The VMs are running CentOS 7.1.1503. systemd and journald are both
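A few stock CentOS 7 knobs that usually give more detail on a boot like that (nothing here is guest-specific):

  # once the guest is finally up, see which units ate the time
  systemd-analyze blame
  systemd-analyze critical-chain

  # verbose systemd logging for the next boot: append to the kernel line
  #   systemd.log_level=debug systemd.log_target=kmsg log_buf_len=8M

  # keep journals across reboots so the slow boot can be read afterwards
  mkdir -p /var/log/journal
  systemctl restart systemd-journald
  journalctl -b -1 -o short-monotonic   # previous boot, monotonic timestamps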
Any of you guys ever seen an issue with Xen 4.4 where xm cannot create a
guest because of what looks like an issue allocating memory, even though
xm info shows like 5x the amount of free memory needed? We are
unfortunately still using xm... it's on my list, I know...
We've had this happen
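For what it's worth, the usual things to compare before digging deeper (paths and values are illustrative, and dom0 ballooning is only one possible cause):

  # what the hypervisor thinks is free vs. what the guest config asks for
  xm info | grep -E 'total_memory|free_memory'
  xm list                # per-domain allocations, including Domain-0

  # if dom0 is allowed to balloon, 'free' memory can already be promised
  # to dom0; pinning it is the common workaround:
  #   grub:  dom0_mem=2048M
  #   /etc/xen/xend-config.sxp:  (enable-dom0-ballooning no)
  grep -iE 'dom0-min-mem|ballooning' /etc/xen/xend-config.sxp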
On 6/21/2016 10:01 PM, Tom Robinson wrote:
> Currently when I migrate a guest, I can all too easily start it up on both
> hosts! There must be some
> way to fence these off but I'm just not sure how to do this.
in addition to power fencing as described by others, you can also fence
at the ethernet
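As a concrete example of the power-fencing side with pacemaker on CentOS 7 (BMC addresses and credentials are placeholders; iLO, DRAC and plain IPMI all go through fence_ipmilan the same way):

  # one stonith device per host, pointing at that host's BMC
  pcs stonith create fence_kvm1 fence_ipmilan pcmk_host_list="kvm1" \
      ipaddr="10.0.0.101" login="admin" passwd="secret" lanplus=1 \
      op monitor interval=60s
  pcs stonith create fence_kvm2 fence_ipmilan pcmk_host_list="kvm2" \
      ipaddr="10.0.0.102" login="admin" passwd="secret" lanplus=1 \
      op monitor interval=60s

  # keep each fence device off the node it is meant to kill
  pcs constraint location fence_kvm1 avoids kvm1
  pcs constraint location fence_kvm2 avoids kvm2

  pcs property set stonith-enabled=true
  stonith_admin --reboot kvm2       # test it on purpose before trusting it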
You can also restrict access to it to just your network; that way it is better protected...
On 21 June 2016 at 19:43, Eliud Cardenas wrote:
> Hello everyone,
>
> Answering my own question: it is the port that Percona MySQL opens for
> group communication.
>
> Nothing
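If the aim is just to keep that group-communication traffic inside your own network, a firewalld rich rule does it; 4567 is the usual Galera group-communication port, and the subnet below is a placeholder:

  firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.10.0/24" port port="4567" protocol="tcp" accept'
  firewall-cmd --permanent --remove-port=4567/tcp   # drop any global opening
  firewall-cmd --reload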
Hi,
I'm considering adding the virt7-docker repository as a dependency for the
openstack repo from Cloud SIG.
It seems that it's not consumable yet and some packages like
Kubernetes are outdated.
Could you update me with the current status and if there is anything
we can do to help?
Regards,
H.
I have pcifront showing as a module in the DomU, and the USB shows up in dmesg as:
[ 3.167543] usbcore: registered new interface driver usbfs
[ 3.167563] usbcore: registered new interface driver hub
[ 3.167585] usbcore: registered new device driver usb
[ 3.196056] usb usb1:
Further to my messages back in May I have at last got round to trying to get my
DomU to recognise USB devices.
I am using Xen 4.6 with CentOS kernel 3.18.34-20.el7.x86_64.
I have to manually make the port available before creating the DomU by issuing
the command:
xl pci-assignable-add
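For anyone following along, the full sequence with a made-up BDF (0000:00:1d.0, a USB controller) usually looks like this:

  # detach the controller from dom0 and mark it assignable
  xl pci-assignable-add 0000:00:1d.0
  xl pci-assignable-list            # confirm the BDF is listed

  # pass it through at DomU creation time via the guest config...
  #   pci = [ '00:1d.0' ]
  # ...or hot-plug it into a running DomU
  xl pci-attach <domU-name> 0000:00:1d.0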
On 22 June 2016 at 09:03, Indunil Jayasooriya wrote:
>
> When an UNCLEAN SHUTDOWN happens, or ifdown eth0 on node1, can oVirt
> migrate VMs from node1 to node2?
Yep.
> in that case, is power management such as iLO needed?
It needs a way to ensure the host is down to
On 2015-10-07 15:21, Johnny Hughes wrote:
> On 10/06/2015 05:30 PM, Kay Schenk wrote:
>> Well I haven't tested out the CentOS 7 for i386 yet as sent in the
>> message of 06/02--
>>
>> https://lists.centos.org/pipermail/centos-devel/2015-June/013426.html
>>
>> Nor have I seen any additional
On 22/06/16 02:03 AM, Indunil Jayasooriya wrote:
> On Wed, Jun 22, 2016 at 11:08 AM, Barak Korren wrote:
>
>>>
>>> My question is: Is this even possible? All the documentation for HA that
>> I've found appears to not
>>> do this. Am I missing something?
>>
>> You can use
On 22/06/16 02:10 AM, Tom Robinson wrote:
> Hi Digimer,
>
> Thanks for your reply.
>
> On 22/06/16 15:20, Digimer wrote:
>> On 22/06/16 01:01 AM, Tom Robinson wrote:
>>> Hi,
>>>
>>> I have two KVM hosts (CentOS 7) and would like them to operate as High
>>> Availability servers,
>>> automatically migrating guests when one of the hosts goes down.
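To tie this back to the original question: with pacemaker, the guest itself becomes a cluster resource via the stock ocf:heartbeat:VirtualDomain agent, and with allow-migrate it live-migrates instead of restarting. A minimal sketch (names and paths are placeholders):

  # dump the libvirt XML somewhere both hosts can read it
  virsh dumpxml myguest > /etc/pacemaker/myguest.xml

  pcs resource create myguest_vm VirtualDomain \
      hypervisor="qemu:///system" \
      config="/etc/pacemaker/myguest.xml" \
      migration_transport=ssh \
      meta allow-migrate=true \
      op monitor interval=30s

  # with allow-migrate=true this triggers a live migration
  pcs resource move myguest_vm kvm2

The guest must not be autostarted by libvirt on either host; only the cluster should start it, which is exactly what prevents the dual-start problem mentioned earlier in the thread.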
On Wed, Jun 22, 2016 at 11:08 AM, Barak Korren wrote:
> >
> > My question is: Is this even possible? All the documentation for HA that
> I've found appears to not
> > do this. Am I missing something?
>
> You can use oVirt for that (www.ovirt.org).
>
When an UNCLEAN SHUTDOWN