On 7/20/20 5:05 PM, Stefan Schmitz wrote:
> Hello,
>
> I have now deleted the previous stonith resource and added two new
> ones, one for each server. The commands I used for that:
>
> # pcs -f stonith_cfg stonith create stonith_id_1 external/libvirt
> hostlist="host2"
On 20.07.2020 at 13:36, Klaus Wenninger wrote:
On 7/20/20 1:10 PM, Stefan Schmitz wrote:
Hello,
thank you all very much for your help so far!
We have now managed to capture the multicast traffic originating from
one host when issuing the command "fence_xvm -o list" on the other
host. Now
On 7/20/20 1:10 PM, Stefan Schmitz wrote:
> Hello,
>
> thank you all very much for your help so far!
>
> We have now managed to capture the multicast traffic originating from
> one host when issuing the command "fence_xvm -o list" on the other
> host. Now the tcpdump at least looks exactly the same
Hello,
thank you all very much for your help so far!
We have now managed to capture the multicast traffic originating from one
host when issuing the command "fence_xvm -o list" on the other host. Now
the tcpdump at least looks exactly the same on all 4 servers, hosts and
guest. I can not tell
On 02.07.2020 18:18, stefan.schm...@farmpartner-tec.com wrote:
> Hello,
>
> I hope someone can help with this problem. We are (still) trying to get
> Stonith to achieve a running active/active HA Cluster, but sadly to no
> avail.
>
> There are 2 CentOS hosts. On each one there is a virtual Ubuntu
I'm not sure that the libvirt backend is intended to be used in this way,
with multiple hosts using the same multicast address. From the
fence_virt.conf man page:
~~~
BACKENDS
libvirt
The libvirt plugin is the simplest plugin. It is used in
environments where routing fencing
The simplest way to check if libvirt's network is NAT (or not) is to try
to ssh from the first VM to the second one.
I should admit that I was lost when I tried to create a routed network in
KVM, so I can't help with that.
Best Regards,
Strahil Nikolov
On 17 July 2020 at 16:56:44
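For anyone retrying the routed approach mentioned above: a routed libvirt network differs from the NAT one only in its forward mode. A minimal sketch, with the name, bridge, and addressing all assumptions:

<network>
  <name>routed</name>
  <forward mode='route'/>
  <bridge name='virbr1' stp='on' delay='0'/>
  <ip address='192.168.123.1' netmask='255.255.255.0'/>
</network>

It could then be loaded with 'virsh net-define routed.xml' and started with 'virsh net-start routed'.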
On 7/17/20 3:56 PM, stefan.schm...@farmpartner-tec.com wrote:
> Hello,
>
> I have now managed to get # fence_xvm -a 225.0.0.12 -o list to list at
> least its local Guest again. It seems the fence_virtd was not working
> properly anymore.
>
> Regarding the Network XML config
>
> # cat default.xml
>
Hello,
I have now managed to get # fence_xvm -a 225.0.0.12 -o list to list at
least its local Guest again. It seems the fence_virtd was not working
properly anymore.
Regarding the Network XML config
# cat default.xml
default
I have
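For comparison, the default.xml that libvirt generates out of the box is normally the following NAT definition (a reference sketch; the uuid/mac lines are omitted and the addresses may differ per installation):

<network>
  <name>default</name>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>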
If it is created by libvirt - this is NAT and you will never receive output
from the other host.
Best Regards,
Strahil Nikolov
On 15 July 2020 at 15:05:48 GMT+03:00, "stefan.schm...@farmpartner-tec.com"
wrote:
>Hello,
>
>On 15.07.2020 at 13:42, Strahil Nikolov wrote:
>> By default libvirt
By default libvirt is using NAT and not routed network - in such case, vm1
won't receive data from host2.
Can you provide the networks' XML?
Best Regards,
Strahil Nikolov
On 15 July 2020 at 13:19:59 GMT+03:00, Klaus Wenninger
wrote:
>On 7/15/20 11:42 AM, stefan.schm...@farmpartner-tec.com
On 15.07.2020 at 16:29, Klaus Wenninger wrote:
On 7/15/20 4:21 PM, stefan.schm...@farmpartner-tec.com wrote:
Hello,
On 15.07.2020 at 15:30, Klaus Wenninger wrote:
On 7/15/20 3:15 PM, Strahil Nikolov wrote:
If it is created by libvirt - this is NAT and you will never
receive output from
On 7/15/20 4:21 PM, stefan.schm...@farmpartner-tec.com wrote:
> Hello,
>
>
> On 15.07.2020 at 15:30, Klaus Wenninger wrote:
>> On 7/15/20 3:15 PM, Strahil Nikolov wrote:
>>> If it is created by libvirt - this is NAT and you will never
>>> receive output from the other host.
>> And twice the
Hello,
On 15.07.2020 at 15:30, Klaus Wenninger wrote:
On 7/15/20 3:15 PM, Strahil Nikolov wrote:
If it is created by libvirt - this is NAT and you will never receive output
from the other host.
And twice the same subnet behind NAT is probably giving
issues at other places as well.
And
On 7/15/20 3:15 PM, Strahil Nikolov wrote:
> If it is created by libvirt - this is NAT and you will never receive output
> from the other host.
And twice the same subnet behind NAT is probably giving
issues at other places as well.
And if using DHCP you have to at least enforce that both sides
Hello,
On 15.07.2020 at 13:42, Strahil Nikolov wrote:
By default libvirt is using NAT and not routed network - in such case, vm1
won't receive data from host2.
Can you provide the networks' XML?
Best Regards,
Strahil Nikolov
# cat default.xml
default
I
On 7/15/20 11:42 AM, stefan.schm...@farmpartner-tec.com wrote:
> Hello,
>
>
> On 15.07.2020 at 06:32, Strahil Nikolov wrote:
>> How did you configure the network on your Ubuntu 20.04 hosts? I
>> tried to set up a bridged connection for the test setup, but obviously
>> I'm missing something.
>>
>>
Hello,
On 15.07.2020 at 06:32, Strahil Nikolov wrote:
How did you configure the network on your Ubuntu 20.04 hosts? I tried to
set up a bridged connection for the test setup, but obviously I'm missing
something.
Best Regards,
Strahil Nikolov
on the hosts (CentOS) the bridge config looks
How did you configure the network on your Ubuntu 20.04 hosts? I tried to
set up a bridged connection for the test setup, but obviously I'm missing
something.
Best Regards,
Strahil Nikolov
On 14 July 2020 at 11:06:42 GMT+03:00, "stefan.schm...@farmpartner-tec.com"
wrote:
>Hello,
>
>
>Am
On 7/14/20 10:06 AM, stefan.schm...@farmpartner-tec.com wrote:
> Hello,
>
>
> On 09.07.2020 at 19:10, Strahil Nikolov wrote:
> >Have you run 'fence_virtd -c' ?
> Yes, I had run that on both hosts. The current config looks like this
> and is identical on both.
>
> cat fence_virt.conf
> fence_virtd
Hello,
On 09.07.2020 at 19:10, Strahil Nikolov wrote:
>Have you run 'fence_virtd -c' ?
Yes, I had run that on both hosts. The current config looks like this and
is identical on both.
cat fence_virt.conf
fence_virtd {
listener = "multicast";
backend = "libvirt";
Have you run 'fence_virtd -c' ?
I made a silly mistake last time when I deployed it and the daemon was not
listening on the right interface.
Netstat can check this out.
Also, as far as I know hosts use unicast to reply to the VMs (thus tcp/1229
and not udp/1229).
If you have a developer
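A quick sketch of that check with ss (the modern netstat replacement), assuming the default port 1229:

# ss -uapn | grep 1229   # on the host: fence_virtd's multicast listener
# ss -tapn | grep 1229   # on the guest while fence_xvm runs: the tcp reply channel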
On 7/9/20 5:17 PM, stefan.schm...@farmpartner-tec.com wrote:
> Hello,
>
> > Well, theory still holds I would say.
> >
> > I guess that the multicast-traffic from the other host
> > or the guests doesn't get to the daemon on the host.
> > Can't you just simply check if there are any firewall
> >
On 7/9/20 8:18 PM, Vladislav Bogdanov wrote:
> Hi.
>
> This thread is getting too long.
>
> First, you need to ensure that your switch (or all switches in the
> path) have igmp snooping enabled on host ports (and probably
> interconnects along the path between your hosts).
>
> Second, you need an
Hi.
This thread is getting too long.
First, you need to ensure that your switch (or all switches in the
path) have igmp snooping enabled on host ports (and probably
interconnects along the path between your hosts).
Second, you need an igmp querier to be enabled somewhere near (better
to have it
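On a plain Linux bridge, snooping and the querier can be inspected and toggled through sysfs; a sketch assuming the bridge is named br0:

# cat /sys/class/net/br0/bridge/multicast_snooping      # 1 = snooping enabled
# echo 1 > /sys/class/net/br0/bridge/multicast_querier  # run a local querier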
Hello,
> Well, theory still holds I would say.
>
> I guess that the multicast-traffic from the other host
> or the guests doesn't get to the daemon on the host.
> Can't you just simply check if there are any firewall
> rules configured on the host kernel?
I hope I did understand you correctly and
On 7/9/20 4:01 PM, stefan.schm...@farmpartner-tec.com wrote:
> Hello,
>
> thanks for the advice. I have worked through that list as follows:
>
> > - key deployed on the Hypervisors
> > - key deployed on the VMs
> I created the key file a while ago on one host and distributed it
> to every
Hello,
thanks for the advice. I have worked through that list as follows:
> - key deployed on the Hypervisors
> - key deployed on the VMs
I created the key file a while ago on one host and distributed it
to every other host and guest. Right now it resides on all 4 machines in
the same
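The usual way to generate and distribute it, sketched with the default key path and the host and guest names used in this thread:

# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=4k count=1
# scp /etc/cluster/fence_xvm.key host2:/etc/cluster/
# scp /etc/cluster/fence_xvm.key kvm101:/etc/cluster/
# scp /etc/cluster/fence_xvm.key kvm102:/etc/cluster/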
On 7/8/20 8:24 PM, Strahil Nikolov wrote:
> Erm...network/firewall is always "green". Run tcpdump on Host1 and VM2
> (not on the same host).
> Then run again 'fence_xvm -o list' and check what is captured.
>
> In summary, you need:
> - key deployed on the Hypervisors
> - key deployed
Erm...network/firewall is always "green". Run tcpdump on Host1 and VM2 (not
on the same host).
Then run again 'fence_xvm -o list' and check what is captured.
In summary, you need:
- key deployed on the Hypervisors
- key deployed on the VMs
- fence_virtd running on both Hypervisors
Hello,
>I can't find fence_virtd for Ubuntu18, but it is available for Ubuntu20.
We have now upgraded our servers to Ubuntu 20.04 LTS and installed the
packages fence-virt and fence-virtd.
The command "fence_xvm -a 225.0.0.12 -o list" on the Hosts still just
returns the single local VM.
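One way to narrow that down is to watch whether the multicast request actually reaches the other side, e.g. (the interface name is an assumption):

# tcpdump -n -i br0 host 225.0.0.12 and port 1229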
>With kvm please use the qemu-watchdog and try to
>prevent using softdog with SBD.
>Especially if you are aiming for a production-cluster ...
You can tell that to the previous company I worked for :D .
All clusters were using softdog on SLES 11/12 even though the hardware had
its own.
We had
I can't find fence_virtd for Ubuntu18, but it is available for Ubuntu20.
Your other option is to get an iSCSI disk from your quorum system and use
that for SBD.
For watchdog, you can use 'softdog' kernel module or you can use KVM to present
one to the VMs.
You can also check the '-P' flag for SBD.
On 7/7/20 11:12 AM, Strahil Nikolov wrote:
>> With kvm please use the qemu-watchdog and try to
>> prevent using softdog with SBD.
>> Especially if you are aiming for a production-cluster ...
> You can tell it to the previous company I worked for :D .
> All clusters were using softdog on SLES
On 7/7/20 10:33 AM, Strahil Nikolov wrote:
> I can't find fence_virtd for Ubuntu18, but it is available for Ubuntu20.
>
> Your other option is to get an iSCSI disk from your quorum system and use
> that for SBD.
> For watchdog, you can use 'softdog' kernel module or you can use KVM to
> present one
>What does 'virsh list'
>give you on the 2 hosts? Hopefully different names for
>the VMs ...
Yes, each host shows its own:
# virsh list
 Id   Name     Status
------------------------
 2    kvm101   running
# virsh list
 Id
As far as I know fence_xvm supports multiple hosts, but you need to open
the port on both the Hypervisor (udp) and the Guest (tcp). 'fence_xvm -o list'
should provide a list of VMs from all hosts that responded (and have the key).
Usually, the biggest problem is the multicast traffic - as in
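On this setup (CentOS hosts, Ubuntu guests) opening those ports would be roughly the following sketch, assuming firewalld on the hosts and ufw on the guests:

# firewall-cmd --permanent --add-port=1229/udp && firewall-cmd --reload   # on each host
# ufw allow 1229/tcp                                                      # on each guest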
On 7/6/20 10:10 AM, stefan.schm...@farmpartner-tec.com wrote:
> Hello,
>
> >> # fence_xvm -o list
> >> kvm102 bab3749c-15fc-40b7-8b6c-d4267b9f0eb9
> >> on
>
> >This should show both VMs, so getting to that point will likely solve
> >your problem. fence_xvm relies on
Hello,
>> # fence_xvm -o list
>> kvm102 bab3749c-15fc-40b7-8b6c-d4267b9f0eb9
>> on
>This should show both VMs, so getting to that point will likely solve
>your problem. fence_xvm relies on multicast, there could be some
>obscure network configuration to get that
On Thu, 2020-07-02 at 17:18 +0200, stefan.schm...@farmpartner-tec.com
wrote:
> Hello,
>
> I hope someone can help with this problem. We are (still) trying to
> get
> Stonith to achieve a running active/active HA Cluster, but sadly to
> no
> avail.
>
> There are 2 CentOS hosts. On each one
Hello,
I hope someone can help with this problem. We are (still) trying to get
Stonith to achieve a running active/active HA Cluster, but sadly to no
avail.
There are 2 CentOS hosts. On each one there is a virtual Ubuntu VM. The
Ubuntu VMs are the ones which should form the HA Cluster.