Quoting [email protected]:
Quoting Andy Fletcher <[email protected]>:
On 03/24/2011 02:46 PM, [email protected] wrote:
Hello,
for some days now I have had a public NTP server in the pool. Today I
discovered that ntpd was using around 5% CPU and found a constant
packet flow of around 500..1000 packets per second from a single IP
address.
Any hints on how to deal with this, besides dropping them with iptables?
iptables is the way to go, but you don't need to hardcode their address;
use the recent module to drop any packets from offenders who exceed
a given number of packets per second, averaged over a period. After a
while they will give up and try a different server.
This has the advantage that it resets itself once they get below the
threshold. The two lines below will do this (adjust -i to match your
interface):
iptables -A INPUT -i eth0 -p udp -m udp --dport 123 \
-m recent --set --name NTPTRAFFIC --rsource
iptables -A INPUT -i eth0 -p udp -m udp --dport 123 \
-m recent --update --seconds 60 --hitcount 7 \
--name NTPTRAFFIC --rsource -j DROP
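The effect of these two rules can be modeled as a sliding-window rate
limiter. Here is a minimal Python sketch (the class name is made up, and
xt_recent's semantics are simplified: since --update also refreshes the
entry's timestamps, a steady flood stays blocked for as long as it keeps
sending):

```python
from collections import defaultdict, deque

class RecentMatch:
    """Rough model of the --set/--update rule pair above: a packet is
    dropped when its source already has `hitcount` hits inside the last
    `seconds`. Every packet, dropped or not, is recorded, mimicking the
    unconditional --set rule."""
    def __init__(self, seconds=60, hitcount=7):
        self.seconds = seconds
        self.hitcount = hitcount
        self.hits = defaultdict(deque)  # src -> recent timestamps

    def packet(self, src, now):
        q = self.hits[src]
        # expire timestamps that have fallen out of the window
        while q and now - q[0] > self.seconds:
            q.popleft()
        q.append(now)                    # record this hit (--set)
        return len(q) >= self.hitcount   # True means the DROP rule fires

limiter = RecentMatch(seconds=60, hitcount=7)
# a polite client, one query every 16 s: at most 4 hits in any window
print(any(limiter.packet("192.0.2.1", t) for t in range(0, 600, 16)))   # False
# an abuser at 10 packets/s: dropped from the 7th packet onward
print(any(limiter.packet("192.0.2.99", t / 10) for t in range(600)))    # True
```

This also shows why the limit self-resets: once a source pauses longer
than the window, its timestamps expire and it matches nothing.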
You can view the connecting hosts by looking at the conntrack table:
cat /proc/net/ip_conntrack | grep dport=123
And you can see what sort of performance you are getting by looking at
the iptables stats
iptables -n -L -v | grep 123
I've been running it for a while on a server in Amsterdam and the
abusive clients disappeared almost instantly. If I check now it shows
very few attempts:
iptables -n -L -v | grep 123
1038K   79M DROP  udp  --  eth0  *  0.0.0.0/0  0.0.0.0/0  udp dpt:123 state NEW recent: UPDATE seconds: 60 hit_count: 7 name: NTPTRAFFIC side: source
  74M 5613M       udp  --  eth0  *  0.0.0.0/0  0.0.0.0/0  udp dpt:123 state NEW recent: SET name: NTPTRAFFIC side: source
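If you want to post-process those counters, note that iptables
abbreviates them with K/M/G suffixes. A small hypothetical helper
(suffix expansion only, not a full iptables parser):

```python
def parse_count(s):
    """Expand an iptables counter like '1038K' or '74M' into an int."""
    mult = {"K": 10**3, "M": 10**6, "G": 10**9}
    if s and s[-1] in mult:
        return int(s[:-1]) * mult[s[-1]]
    return int(s)

# the first two whitespace-separated fields of a rule line are pkts, bytes
line = "1038K 79M DROP udp -- eth0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:123"
pkts, nbytes = (parse_count(f) for f in line.split()[:2])
print(pkts, nbytes)  # 1038000 79000000
```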
But I'm serving a lot of ntp clients (over 5k in the last minute):
cat /proc/net/ip_conntrack | grep dport=123 | wc -l
5615
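Counting lines counts conntrack entries, not distinct hosts: a client
that reconnects from a new source port shows up twice. A short sketch
that counts unique source IPs instead (the sample entries are made up;
real /proc/net/ip_conntrack lines carry both directions of the flow, so
we only match the first src=/dport= pair):

```python
import re

# Hypothetical sample entries in the old /proc/net/ip_conntrack format.
SAMPLE = "\n".join([
    "udp 17 25 src=192.0.2.10 dst=203.0.113.5 sport=40123 dport=123 "
    "src=203.0.113.5 dst=192.0.2.10 sport=123 dport=40123 use=1",
    "udp 17 12 src=192.0.2.11 dst=203.0.113.5 sport=50999 dport=123 "
    "src=203.0.113.5 dst=192.0.2.11 sport=123 dport=50999 use=1",
    "udp 17 3 src=192.0.2.10 dst=203.0.113.5 sport=40124 dport=123 "
    "src=203.0.113.5 dst=192.0.2.10 sport=123 dport=40124 use=1",
    "tcp 6 431999 ESTABLISHED src=192.0.2.12 dst=203.0.113.5 sport=5555 dport=22 "
    "src=203.0.113.5 dst=192.0.2.12 sport=22 dport=5555 use=1",
])

def ntp_clients(conntrack_text):
    """Return the set of source IPs with a flow to destination port 123."""
    clients = set()
    for line in conntrack_text.splitlines():
        # non-greedy match picks up the original direction's dport
        m = re.search(r"src=(\S+).*?dport=(\d+)", line)
        if m and m.group(2) == "123":
            clients.add(m.group(1))
    return clients

print(len(ntp_clients(SAMPLE)))  # 2 distinct clients across 3 UDP/123 entries
```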
There is a balance between conntrack table size and the count period. A
limit of 7 packets in one minute per client appears to work well, and
still allows clients to use iburst (which sends an initial burst of
packets) without being dropped.
I'd love to hear comments on this.
This reminds me that I have already used ipt_recent some time ago to
protect a mail server, but that module was not available inside the
OpenVZ container I'm now using. I will recheck, thanks for the hint.
The offender is now gone, after some 16M packets (~1.2 GB of traffic) dropped.
Chain INPUT (policy DROP 2213 packets, 132K bytes)
pkts bytes target prot opt in out source destination
16M 1209M DROP all -- * * 194.28.28.7 0.0.0.0/0
This clearly shows why the pool is needed ;-)
Okay, done this for now:
# The RECENT check for NTPD must come before the ESTABLISHED rule
$IPTABLES -A INPUT -p udp --dport 123 -m recent --set
$IPTABLES -A INPUT -p udp --dport 123 -m recent --rcheck --seconds 4 \
    --hitcount 16 -m limit --limit 1/m -j LOG --log-level info \
    --log-prefix "RECENT "
$IPTABLES -A INPUT -p udp --dport 123 -m recent --rcheck --seconds 4 \
    --hitcount 16 -j DROP
and it is already dropping some packets:
Chain INPUT (policy DROP 4117 packets, 270K bytes)
pkts bytes target prot opt in out source destination
5677K  431M       udp  --  *  *  0.0.0.0/0  0.0.0.0/0  udp dpt:123 recent: SET name: DEFAULT side: source
 2510  191K LOG   udp  --  *  *  0.0.0.0/0  0.0.0.0/0  udp dpt:123 recent: CHECK seconds: 4 hit_count: 16 name: DEFAULT side: source limit: avg 1/min burst 5 LOG flags 0 level 6 prefix `RECENT '
 146K   11M DROP  udp  --  *  *  0.0.0.0/0  0.0.0.0/0  udp dpt:123 recent: CHECK seconds: 4 hit_count: 16 name: DEFAULT side: source
So of 5677K requests, 146K hit the limit of more than 16 requests in 4
seconds. Is this too harsh, or are there really this many crappy clients
out there?
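For reference, that ratio works out to a small share of all requests
(using the approximate counters from the table above):

```python
dropped, total = 146_000, 5_677_000  # approximate K-suffixed counters
print(f"{dropped / total:.1%} of packets exceeded 16 requests in 4 seconds")
# 2.6% of packets exceeded 16 requests in 4 seconds
```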
Regards
Andreas
_______________________________________________
pool mailing list
[email protected]
http://lists.ntp.org/listinfo/pool