On Tue, Dec 31, 2019 at 8:45 AM Kurt H Maier <k...@sciops.net> wrote:

> If you need this kind of functionality in Kubernetes you're much better
> off using a different CNI plugin to manage your networking.  There's no
> inherent NAT requirement imposed by Kubernetes itself.


This is not about CNI networking; inside the cluster TFTP works fine.
This is about service networking (kube-proxy) and accessing services
from outside the cluster.
Example: when a node needs to be network-booted, at that early boot stage
it cannot yet be a member of the cluster.

Of course you can use hostNetwork=true, but that is less secure and offers
no redundancy.
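For context, the setup in question is roughly a UDP Service in front of the
TFTP pod, reached through kube-proxy; a hypothetical manifest (all names here
are made up, not from a real deployment) might look like:

```yaml
# Hypothetical Service exposing a TFTP pod through kube-proxy.
# External clients hit the external IP; kube-proxy DNATs the request
# to the pod, which is where the reply-source-port problem appears.
apiVersion: v1
kind: Service
metadata:
  name: tftp
spec:
  type: LoadBalancer
  selector:
    app: tftp
  ports:
    - name: tftp
      protocol: UDP
      port: 69
      targetPort: 69
```

The NAT/conntrack entry created for the client's request only matches replies
coming back from port 69, which is why a reply from an ephemeral server port
never makes it back to the external client.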

> That approach is dangerously broken.  The transfer IDs and the ports are
> supposed to match; ramming everything over a single port is going to
> break down when you have a lot of transfers happening simultaneously.
>

The packets are always sent to the client-specific port, and there are no
write (WRQ/put) requests.
What is actually broken? Example tcpdump:

This is standard mode:
IP 172.17.0.2.42447 > 172.17.0.1.69:  22 RRQ "/some_file" netascii
IP 172.17.0.1.56457 > 172.17.0.2.42447: UDP, length 15
IP 172.17.0.2.42447 > 172.17.0.1.56457: UDP, length 4

This is single port mode:
IP 172.17.0.2.56296 > 172.17.0.1.69:  22 RRQ "/some_file" netascii
IP 172.17.0.1.69 > 172.17.0.2.56296:  15 DATA block 1
IP 172.17.0.2.56296 > 172.17.0.1.69:  4 ACK block 1
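To illustrate what the second trace shows, here is a minimal single-port
sketch in plain Python (not dnsmasq's actual code): the server replies from
the listening socket itself, so the source port stays 69 and conntrack can
route the reply back through NAT, while transfers are distinguished by the
client's (IP, port) pair instead of a server-side TID:

```python
import socket
import struct


def run_single_port_tftp(sock, files):
    """Serve RRQs on an already-bound UDP socket, always replying from
    that same socket so the source port never changes (single-port mode).

    `files` maps filename -> bytes. Only the first DATA block is sent;
    a real server would track block numbers per (client IP, client port).
    """
    while True:
        pkt, addr = sock.recvfrom(1024)
        opcode = struct.unpack("!H", pkt[:2])[0]
        if opcode == 1:  # RRQ: opcode 1, then "filename\0mode\0"
            name = pkt[2:].split(b"\x00")[0].decode()
            data = files.get(name, b"")
            # DATA (opcode 3), block 1, sent from the listening socket:
            # the reply's source port is the port `sock` is bound to.
            sock.sendto(struct.pack("!HH", 3, 1) + data[:512], addr)
        elif opcode == 4:  # ACK: nothing more to do in this sketch
            pass
```

Keying each transfer on the client's address pair is how concurrent
transfers stay separated in this mode, which is the point of contention
with the TID-per-port scheme in RFC 1350.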

- kvaps


On Tue, Dec 31, 2019 at 8:45 AM Kurt H Maier <k...@sciops.net> wrote:

> On Mon, Dec 30, 2019 at 12:51:30PM +0100, kvaps wrote:
> >
> > Note that Kubernetes uses NAT for external services, so it's not possible
> > to run TFTP-server for external clients there. There is one proposed
> > solution for that, it suggests moving away from the RFC and implement
> > --single-port option for always reply from the same port which was
> > requested by the client.
>
> That approach is dangerously broken.  The transfer IDs and the ports are
> supposed to match; ramming everything over a single port is going to
> break down when you have a lot of transfers happening simultaneously.
>
> If you need this kind of functionality in Kubernetes you're much better
> off using a different CNI plugin to manage your networking.  There's no
> inherent NAT requirement imposed by Kubernetes itself.
>
> khm
>
_______________________________________________
Dnsmasq-discuss mailing list
Dnsmasq-discuss@lists.thekelleys.org.uk
http://lists.thekelleys.org.uk/mailman/listinfo/dnsmasq-discuss