Re: [ClusterLabs] Clustered LVM with iptables issue

2015-09-11 Thread Vladislav Bogdanov

Hi Digimer,

Be aware that SCTP support in both the kernel and DLM _may_ have issues 
(as far as I remember, it was not recommended for use, at least in cman's 
version of DLM, because of the lack of testing).


I believe you can force use of TCP via dlm_controld parameters (or 
config options). Of course that could require some kind of bonding to be 
involved. Btw that is the main reason I prefer bonding over multi-ring 
configurations.
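
For what it's worth, a sketch of where that setting might live (assuming cman's dlm_controld; verify the attribute name against dlm_controld(8) for your version):

```xml
<!-- cluster.conf fragment: force DLM inter-node communication onto TCP
     instead of letting it pick SCTP for multi-ring (RRP) setups. -->
<cluster name="example" config_version="2">
  <dlm protocol="tcp"/>
  <!-- ... remaining cluster configuration ... -->
</cluster>
```

With TCP, DLM can only use a single interface, which is why bonding (rather than RRP) pairs better with it.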


Best,
Vladislav

11.09.2015 02:43, Digimer wrote:

For the record;

   Noel helped me on IRC. The problem was that sctp was not allowed in
the firewall. The clue was:


[root@node1 ~]# /etc/init.d/clvmd start
Starting clvmd:
Activating VG(s):  [  OK  ]


] syslog
Sep 10 23:30:47 node1 kernel: ip_tables: (C) 2000-2006 Netfilter Core Team
Sep 10 23:30:47 node1 kernel: nf_conntrack version 0.5.0 (16384 buckets,
65536 max)
*** Sep 10 23:31:02 node1 kernel: dlm: Using SCTP for communications
Sep 10 23:31:03 node1 clvmd: Cluster LVM daemon started - connected to CMAN



[root@node2 ~]# /etc/init.d/clvmd start
Starting clvmd: clvmd startup timed out


] syslog
Sep 10 23:31:03 node2 kernel: dlm: Using SCTP for communications
Sep 10 23:31:05 node2 corosync[3001]:   [TOTEM ] Incrementing problem
counter for seqid 5644 iface 10.20.10.2 to [1 of 3]
Sep 10 23:31:07 node2 corosync[3001]:   [TOTEM ] ring 0 active with no
faults


Adding;

iptables -I INPUT -p sctp -j ACCEPT

Got it working. Obviously, that needs to be tightened up.
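
One hedged sketch of a tighter rule (the subnet matches the cluster network in the logs above; the interface name is assumed; 21064 is dlm_controld's default port, and since SCTP conntrack historically could not track multi-homed associations, the match is kept stateless rather than relying on ESTABLISHED state):

```shell
# Allow DLM's SCTP traffic only from the cluster network on its
# dedicated interface, instead of accepting all SCTP from anywhere.
# Adjust the interface (assumed eth1 here) and subnet to your setup.
iptables -I INPUT -i eth1 -s 10.20.10.0/24 -p sctp \
         -m sctp --dport 21064 -j ACCEPT
```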

digimer

On 10/09/15 07:01 PM, Digimer wrote:

On 10/09/15 06:54 PM, Noel Kuntze wrote:


Hello Digimer,

I initially assumed you were familiar with ss or netstat and simply
forgot about them.
Seems I was wrong.

Check the output of this: `ss -tpn` and `ss -upn`.
Those commands give you the current open TCP and UDP connections,
as well as the program that opened the connection.
Check listening sockets with `ss -tpnl` and `ss -upnl`


I'm not so strong on the network side of things, so I am not very
familiar with ss or netstat.

I have clvmd running:


[root@node1 ~]# /etc/init.d/clvmd status
clvmd (pid  3495) is running...
Clustered Volume Groups: (none)
Active clustered Logical Volumes: (none)


Though I don't seem to see anything:


[root@node1 ~]# ss -tpnl
State  Recv-Q Send-Q   Local Address:Port
   Peer Address:Port
LISTEN 0  5   :::1
 :::*  users:(("ricci",2482,3))
LISTEN 0  128  127.0.0.1:199
  *:*  users:(("snmpd",2020,8))
LISTEN 0  128 :::111
 :::*  users:(("rpcbind",1763,11))
LISTEN 0  128  *:111
  *:*  users:(("rpcbind",1763,8))
LISTEN 0  128  *:48976
  *:*  users:(("rpc.statd",1785,8))
LISTEN 0  5   :::16851
 :::*  users:(("modclusterd",2371,5))
LISTEN 0  128 :::55476
 :::*  users:(("rpc.statd",1785,10))
LISTEN 0  128 :::22
 :::*  users:(("sshd",2037,4))
LISTEN 0  128  *:22
  *:*  users:(("sshd",2037,3))
LISTEN 0  100::1:25
 :::*  users:(("master",2142,13))
LISTEN 0  100  127.0.0.1:25
  *:*  users:(("master",2142,12))



[root@node1 ~]# ss -tpn
State  Recv-Q Send-Q   Local Address:Port
   Peer Address:Port
ESTAB  0  0   192.168.122.10:22
  192.168.122.1:53935  users:(("sshd",2636,3))
ESTAB  0  0   192.168.122.10:22
  192.168.122.1:53934  users:(("sshd",2613,3))
ESTAB  0  0   10.10.10.1:48985
 10.10.10.2:7788
ESTAB  0  0   10.10.10.1:7788
 10.10.10.2:51681
ESTAB  0  0:::10.20.10.1:16851
  :::10.20.10.2:43553  users:(("modclusterd",2371,6))



[root@node1 ~]# ss -upn
State  Recv-Q Send-Q   Local Address:Port
   Peer Address:Port


I ran all three again and routed output to a file, stopped clvmd and
re-ran the three calls to a different file. I diff'ed the resulting
files and saw nothing of interest:


[root@node1 ~]# /etc/init.d/clvmd status
clvmd (pid  

Re: [ClusterLabs] Clustered LVM with iptables issue

2015-09-11 Thread Michele Baldessari
On Thu, Sep 10, 2015 at 07:43:34PM -0400, Digimer wrote:
> iptables -I INPUT -p sctp -j ACCEPT
> 
> Got it working. Obviously, that needs to be tightened up.

One potentially time-saving caveat:
even though there is an sctp conntrack module, it does not currently
support multi-homed connections (which I assume you are using via RRP).

Initial minimal support landed very recently via:
commit d7ee3519042798be6224e97f259ed47a63da4620
Author: Michal Kubeček 
Date:   Fri Jul 17 16:17:56 2015 +0200

netfilter: nf_ct_sctp: minimal multihoming support

cheers,
Michele
-- 
Michele Baldessari
C2A5 9DA3 9961 4FFB E01B  D0BC DDD4 DCCB 7515 5C6D

___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] Clustered LVM with iptables issue

2015-09-10 Thread Digimer
On 10/09/15 06:31 PM, Noel Kuntze wrote:
> 
> Hello Digimer,
> 
> Pro tip: look at the 'multiport' module. You can substantially reduce the 
> number of rules with it.
> Right now, I'm scratching my eyes out.
> You can use `ss` or `netstat` to find out where clvmd wants to phone to. That 
> might be
> an additional lead. Or use tcpdump.
> But please, tidy up your rules.

The rules are as terse as I thought I could make them.

ss shows no difference:


[root@node1 ~]# /etc/init.d/clvmd start
Starting clvmd:
Activating VG(s):  [  OK  ]
[root@node1 ~]# ss
State  Recv-Q Send-Q Local Address:Port
Peer Address:Port
ESTAB  0  0 192.168.122.10:ssh
   192.168.122.1:53935
ESTAB  0  0 192.168.122.10:ssh
   192.168.122.1:53934
ESTAB  0  0 10.10.10.1:48985
  10.10.10.2:7788
ESTAB  0  0 10.10.10.1:7788
  10.10.10.2:51681
ESTAB  0  0  :::10.20.10.1:16851
   :::10.20.10.2:43553
[root@node1 ~]# /etc/init.d/clvmd stop
Signaling clvmd to exit[  OK  ]
clvmd terminated   [  OK  ]
[root@node1 ~]# ss
State  Recv-Q Send-Q Local Address:Port
Peer Address:Port
ESTAB  0  0 192.168.122.10:ssh
   192.168.122.1:53935
ESTAB  0  0 192.168.122.10:ssh
   192.168.122.1:53934
ESTAB  0  0 10.10.10.1:48985
  10.10.10.2:7788
ESTAB  0  0 10.10.10.1:7788
  10.10.10.2:51681
ESTAB  0  0  :::10.20.10.1:16851
   :::10.20.10.2:43553
[root@node1 ~]# netcat


netstat had a lot more output, so I pushed the output to files and
diff'ed them:


[root@node1 ~]# netstat > 1
[root@node1 ~]# /etc/init.d/clvmd start
Starting clvmd:
Activating VG(s):  [  OK  ]
[root@node1 ~]# netstat > 2
[root@node1 ~]# diff -U0 1 2
--- 1   2015-09-10 22:46:31.27503 +
+++ 2   2015-09-10 22:46:51.04411 +
@@ -7,0 +8,2 @@
+sctp   0  0 node1.bcn:21064 node2.bcn:21064
 ESTABLISHED
+node1.snnode2.sn

@@ -12 +14,6 @@
-unix  15 [ ] DGRAM12986  /dev/log
+unix  16 [ ] DGRAM12986  /dev/log
+unix  2  [ ] DGRAM23743
+unix  3  [ ] STREAM CONNECTED 23689  @corosync.ipc
+unix  3  [ ] STREAM CONNECTED 23688
+unix  3  [ ] STREAM CONNECTED 23685
/var/run/cman_client
+unix  3  [ ] STREAM CONNECTED 23684


I'm not familiar with netstat, so I'll need to read up to understand the
differences and how to translate them to iptables rules.

-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?



Re: [ClusterLabs] Clustered LVM with iptables issue

2015-09-10 Thread Digimer
On 10/09/15 06:54 PM, Noel Kuntze wrote:
> 
> Hello Digimer,
> 
> I initially assumed you were familiar with ss or netstat and simply
> forgot about them.
> Seems I was wrong.
> 
> Check the output of this: `ss -tpn` and `ss -upn`.
> Those commands give you the current open TCP and UDP connections,
> as well as the program that opened the connection.
> Check listening sockets with `ss -tpnl` and `ss -upnl`

I'm not so strong on the network side of things, so I am not very
familiar with ss or netstat.

I have clvmd running:


[root@node1 ~]# /etc/init.d/clvmd status
clvmd (pid  3495) is running...
Clustered Volume Groups: (none)
Active clustered Logical Volumes: (none)


Though I don't seem to see anything:


[root@node1 ~]# ss -tpnl
State  Recv-Q Send-Q   Local Address:Port
  Peer Address:Port
LISTEN 0  5   :::1
:::*  users:(("ricci",2482,3))
LISTEN 0  128  127.0.0.1:199
 *:*  users:(("snmpd",2020,8))
LISTEN 0  128 :::111
:::*  users:(("rpcbind",1763,11))
LISTEN 0  128  *:111
 *:*  users:(("rpcbind",1763,8))
LISTEN 0  128  *:48976
 *:*  users:(("rpc.statd",1785,8))
LISTEN 0  5   :::16851
:::*  users:(("modclusterd",2371,5))
LISTEN 0  128 :::55476
:::*  users:(("rpc.statd",1785,10))
LISTEN 0  128 :::22
:::*  users:(("sshd",2037,4))
LISTEN 0  128  *:22
 *:*  users:(("sshd",2037,3))
LISTEN 0  100::1:25
:::*  users:(("master",2142,13))
LISTEN 0  100  127.0.0.1:25
 *:*  users:(("master",2142,12))



[root@node1 ~]# ss -tpn
State  Recv-Q Send-Q   Local Address:Port
  Peer Address:Port
ESTAB  0  0   192.168.122.10:22
 192.168.122.1:53935  users:(("sshd",2636,3))
ESTAB  0  0   192.168.122.10:22
 192.168.122.1:53934  users:(("sshd",2613,3))
ESTAB  0  0   10.10.10.1:48985
10.10.10.2:7788
ESTAB  0  0   10.10.10.1:7788
10.10.10.2:51681
ESTAB  0  0:::10.20.10.1:16851
 :::10.20.10.2:43553  users:(("modclusterd",2371,6))



[root@node1 ~]# ss -upn
State  Recv-Q Send-Q   Local Address:Port
  Peer Address:Port


I ran all three again and routed output to a file, stopped clvmd and
re-ran the three calls to a different file. I diff'ed the resulting
files and saw nothing of interest:


[root@node1 ~]# /etc/init.d/clvmd status
clvmd (pid  3495) is running...
Clustered Volume Groups: (none)
Active clustered Logical Volumes: (none)



[root@node1 ~]# ss -tpnl > tpnl.on
[root@node1 ~]# ss -tpn > tpn.on
[root@node1 ~]# ss -upn > upn.on


[root@node1 ~]# /etc/init.d/clvmd stop
Signaling clvmd to exit[  OK  ]
clvmd terminated   [  OK  ]



[root@node1 ~]# ss -tpnl > tpnl.off
[root@node1 ~]# ss -tpn > tpn.off
[root@node1 ~]# ss -upn > upn.off
[root@node1 ~]# diff -U0 tpnl.on tpnl.off
[root@node1 ~]# diff -U0 tpn.on tpn.off
[root@node1 ~]# diff -U0 upn.on upn.off
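
The empty diffs are expected: `ss -t` and `ss -u` only cover TCP and UDP sockets, and DLM is speaking SCTP here, so nothing in those lists changes when clvmd starts or stops. One way to look at the SCTP side directly (assuming the kernel's sctp module is loaded, which it is once DLM starts):

```shell
# SCTP state is exposed under /proc rather than via ss -t/-u:
cat /proc/net/sctp/eps     # SCTP endpoints (listeners)
cat /proc/net/sctp/assocs  # SCTP associations (DLM's show up on port 21064)
```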


I'm reading up on 'multiport' now and will adjust my iptables. It does
look a lot cleaner.




Re: [ClusterLabs] Clustered LVM with iptables issue

2015-09-10 Thread Noel Kuntze


Hello Digimer,

Pro tip: look at the 'multiport' module. You can substantially reduce the 
number of rules with it.
Right now, I'm scratching my eyes out.
You can use `ss` or `netstat` to find out where clvmd wants to phone to. That 
might be
an additional lead. Or use tcpdump.
But please, tidy up your rules.
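
As an illustration of the multiport idea (the port list is hypothetical; substitute the TCP ports your own rules currently accept one at a time):

```shell
# One rule instead of one ACCEPT per port (multiport takes up to 15 ports):
iptables -A INPUT -s 10.20.10.0/24 -p tcp \
         -m multiport --dports 11111,16851,21064 -j ACCEPT
```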

- -- 

Mit freundlichen Grüßen/Kind Regards,
Noel Kuntze

GPG Key ID: 0x63EC6658
Fingerprint: 23CA BB60 2146 05E7 7278 6592 3839 298F 63EC 6658





Re: [ClusterLabs] Clustered LVM with iptables issue

2015-09-10 Thread Digimer
For the record;

  Noel helped me on IRC. The problem was that sctp was not allowed in
the firewall. The clue was:


[root@node1 ~]# /etc/init.d/clvmd start
Starting clvmd:
Activating VG(s):  [  OK  ]


] syslog
Sep 10 23:30:47 node1 kernel: ip_tables: (C) 2000-2006 Netfilter Core Team
Sep 10 23:30:47 node1 kernel: nf_conntrack version 0.5.0 (16384 buckets,
65536 max)
*** Sep 10 23:31:02 node1 kernel: dlm: Using SCTP for communications
Sep 10 23:31:03 node1 clvmd: Cluster LVM daemon started - connected to CMAN



[root@node2 ~]# /etc/init.d/clvmd start
Starting clvmd: clvmd startup timed out


] syslog
Sep 10 23:31:03 node2 kernel: dlm: Using SCTP for communications
Sep 10 23:31:05 node2 corosync[3001]:   [TOTEM ] Incrementing problem
counter for seqid 5644 iface 10.20.10.2 to [1 of 3]
Sep 10 23:31:07 node2 corosync[3001]:   [TOTEM ] ring 0 active with no
faults


Adding;

iptables -I INPUT -p sctp -j ACCEPT

Got it working. Obviously, that needs to be tightened up.

digimer

On 10/09/15 07:01 PM, Digimer wrote:
> On 10/09/15 06:54 PM, Noel Kuntze wrote:
>>
>> Hello Digimer,
>>
>> I initially assumed you were familiar with ss or netstat and simply
>> forgot about them.
>> Seems I was wrong.
>>
>> Check the output of this: `ss -tpn` and `ss -upn`.
>> Those commands give you the current open TCP and UDP connections,
>> as well as the program that opened the connection.
>> Check listening sockets with `ss -tpnl` and `ss -upnl`
> 
> I'm not so strong on the network side of things, so I am not very
> familiar with ss or netstat.
> 
> I have clvmd running:
> 
> 
> [root@node1 ~]# /etc/init.d/clvmd status
> clvmd (pid  3495) is running...
> Clustered Volume Groups: (none)
> Active clustered Logical Volumes: (none)
> 
> 
> Though I don't seem to see anything:
> 
> 
> [root@node1 ~]# ss -tpnl
> State  Recv-Q Send-Q   Local Address:Port
>   Peer Address:Port
> LISTEN 0  5   :::1
> :::*  users:(("ricci",2482,3))
> LISTEN 0  128  127.0.0.1:199
>  *:*  users:(("snmpd",2020,8))
> LISTEN 0  128 :::111
> :::*  users:(("rpcbind",1763,11))
> LISTEN 0  128  *:111
>  *:*  users:(("rpcbind",1763,8))
> LISTEN 0  128  *:48976
>  *:*  users:(("rpc.statd",1785,8))
> LISTEN 0  5   :::16851
> :::*  users:(("modclusterd",2371,5))
> LISTEN 0  128 :::55476
> :::*  users:(("rpc.statd",1785,10))
> LISTEN 0  128 :::22
> :::*  users:(("sshd",2037,4))
> LISTEN 0  128  *:22
>  *:*  users:(("sshd",2037,3))
> LISTEN 0  100::1:25
> :::*  users:(("master",2142,13))
> LISTEN 0  100  127.0.0.1:25
>  *:*  users:(("master",2142,12))
> 
> 
> 
> [root@node1 ~]# ss -tpn
> State  Recv-Q Send-Q   Local Address:Port
>   Peer Address:Port
> ESTAB  0  0   192.168.122.10:22
>  192.168.122.1:53935  users:(("sshd",2636,3))
> ESTAB  0  0   192.168.122.10:22
>  192.168.122.1:53934  users:(("sshd",2613,3))
> ESTAB  0  0   10.10.10.1:48985
> 10.10.10.2:7788
> ESTAB  0  0   10.10.10.1:7788
> 10.10.10.2:51681
> ESTAB  0  0:::10.20.10.1:16851
>  :::10.20.10.2:43553  users:(("modclusterd",2371,6))
> 
> 
> 
> [root@node1 ~]# ss -upn
> State  Recv-Q Send-Q   Local Address:Port
>   Peer Address:Port
> 
> 
> I ran all three again and routed output to a file, stopped clvmd and
> re-ran the three calls to a different file. I diff'ed the resulting
> files and saw nothing of interest:
> 
> 
> [root@node1 ~]# /etc/init.d/clvmd status
> clvmd (pid  3495) is running...
> Clustered Volume Groups: (none)
> Active clustered Logical Volumes: (none)
> 
> 
> 
> [root@node1 ~]# ss -tpnl > tpnl.on
> [root@node1 ~]# ss -tpn > tpn.on
> [root@node1 ~]# ss -upn > upn.on
> 
> 
> [root@node1 ~]# /etc/init.d/clvmd stop
> Signaling clvmd to exit[  OK  ]
>