Re: [Architecture discussion] IPv6 and best practices for DNS naming and the MX/SMTP problem

2013-06-06 Thread Andreas Meile

Hello Carsten and Kevin

Thanks for your answers. As a short summary, I will use (and recommend) the
following approaches:


- consider .local/.loc/.intra/.lan etc. as legacy that should be eliminated
(Microsoft officially supports an Active Directory domain rename procedure
for that).
- the preferred way is to use intra.example.com, dmz.example.com etc., so
example.com itself can stay fully public while the DNS sub-zones can be set
up with restricted access; however, the DNS delegation chains must be
complete so that every DNS resolver on an authorized system anywhere in the
world (this can also be a partner company or a branch office reached over
VPN, not only the LAN behind the firewall itself) can resolve the names and
IP(v6) addresses successfully in both directions.
- in BIND this list of authorized resolvers can be set up with the
allow-query directive, so unauthorized systems don't run into a DNS timeout;
they just get a REFUSED answer when trying to resolve internal resources
(see the first sketch after this list).
- a smart relay host with both a public IPv4 and a public IPv6 address on its
network interfaces eliminates the dual-stack MX / EHLO hostname IPv4-NAT
problem, because I fully control the path between my internal mail server and
the smart relay host (they can [and should] always communicate over IPv6, for
example, so there is no need to point the MX record at the firewall instead
of the internal mail server itself because of NAT). This even allows me to
declare the smart relay host a trusted system for my internal DNS server, so
the MTA on the smart relay host knows mailserv.intra.example.com as a valid
EHLO hostname and can forward i...@example.com to
infou...@mailserv.intra.example.com, for example (forwarding rule). See the
second sketch after this list.
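A rough named.conf sketch of the allow-query restriction mentioned above (the
ACL name "trusted" and all prefixes are placeholders, not taken from the
original posts):

acl "trusted" {
    127.0.0.1;
    10.0.0.0/8;            // internal IPv4 (RFC 1918)
    2001:db8:0:2::/64;     // internal IPv6 segment
    203.0.113.0/24;        // e.g. a partner office reachable over VPN
};

zone "intra.example.com" {
    type master;
    file "intra.example.com.zone";
    allow-query { trusted; };   // everyone else gets REFUSED, not a timeout
};

zone "2.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa" {
    type master;
    file "intra.example.com.rev";
    allow-query { trusted; };
};

And a zone-file sketch of the smart-relay setup from the last point above
(the relay's name and addresses are placeholders, not from the original
posts):

$ORIGIN example.com.
; the MX points at the dual-stacked smart relay in the public network
@       IN  MX    10  relay.example.com.
relay   IN  A     192.0.2.25
relay   IN  AAAA  2001:db8:0:1::25
; the relay then forwards accepted mail over IPv6 to
; mailserv.intra.example.com, which never needs a public A record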


In my own network I have already started to implement several of these
measures. My current goal is to implement dual-stack for every
component/network segment so I can give some feedback at a later time. When
everything works well, another goal is to implement this in my customers'
networks (I work as a freelancer for several regional customers) as part of
future IT migration projects.


Corrections and additions are welcome. :-)

Andreas

- Original Message - 
From: Carsten Strotmann c...@strotmann.de

To: Andreas Meile mailingli...@andreas-meile.ch
Cc: bind-users@lists.isc.org
Sent: Monday, May 27, 2013 8:20 AM
Subject: Re: [Architecture discussion] IPv6 and best practices for DNS 
naming and the MX/SMTP problem




Hello Andreas,

[...]
--
Test your PC's security with www.sec-check.net





Re: [Architecture discussion] IPv6 and best practices for DNS naming and the MX/SMTP problem

2013-05-28 Thread Kevin Darcy

On 5/26/2013 2:36 PM, Andreas Meile wrote:

Hello BIND users

The following post discusses some more complex questions in the context of
enabling dual-stack in corporate networks. It's fairly generic TCP/IP
material, but it also has a lot to do with DNS (and of course BIND, which I
use to implement it, so all examples are in BIND syntax), so I hope it's not
too off-topic.


Introduction: In the Internet's origins it was intended that every device
uses a public IPv4 address and has an FQHN (fully qualified hostname) inside
the worldwide public DNS hierarchy.

With the predictable IPv4 address depletion, NAT and private networks using
IPv4 addresses according to RFC 1918 became common, which resulted in a new
problem: the FQHN of my corporate server or workstation can no longer be
inserted into the public DNS hierarchy, so separate internal DNS
infrastructures based on TLDs like .local/.loc became common in many larger
network installations (the famous split-DNS scenarios).

Today, after the IPv4 address exhaustion has become reality, enabling
dual-stack on existing networks, i.e. adding IPv6, will become a topic for
virtually every Internet user. Because IPv6 offers more than enough addresses
for really everyone, networking with working end-to-end communication and a
public IP address for every device, as in the Internet's early days, is
coming back.

Let's assume a simple network situation: a segmented network inside a company
with a small routed public IPv4 range (a /29 subnet, for example) and an
internal network behind a firewall. Pure IPv4 past situation first:

Public webserver (Linux running Apache and BIND for example):
webserv.example.com  192.0.2.21

Firewall:
Internal: vpn.example.local 10.0.0.1
External: vpn.example.com 192.0.2.30

File server (running Microsoft Active Directory, for example, but it could
also be a Linux box running Samba and BIND):

fileserv.example.local   10.0.0.12

Everything extended to dual-stack:
webserv.example.com  192.0.2.21 + 2001:db8:0:1::21

Firewall:
Internal: vpn.example.local 10.0.0.1 + 2001:db8:0:2::1
External: vpn.example.com 192.0.2.30 + 2001:db8:0:1::30
(doing NAT for IPv4 but routing IPv6)

fileserv.example.local   10.0.0.12 + 2001:db8:0:2::12

First question for discussion: Is it recommended to replace example.local
with intra.example.com, for example, because it's now possible to restore
the public DNS hierarchy? See the following:

$ORIGIN example.com.
intra              IN  NS    fileserv.intra.example.com.
; Glue record
fileserv.intra     IN  AAAA  2001:db8:0:2::12
; fileserv.intra   IN  A     10.0.0.12 would violate some RFCs because it
; publishes a non-routed IPv4 address, but omitting it breaks the worldwide
; hierarchy, i.e. from the IPv4 point of view intra.example.com is flying
; free somewhere...


; assume a /56 assigned and delegated by the ISP
$ORIGIN 0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa.
1.0  IN  NS  webserv.example.com.
2.0  IN  NS  fileserv.intra.example.com.
$ORIGIN 1.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa.
1.2.0.0.0.0.0.0.0.0.0.0.0.0.0.0  IN  PTR  webserv.example.com.
0.3.0.0.0.0.0.0.0.0.0.0.0.0.0.0  IN  PTR  vpn.example.com.

; managed by Active Directory (or BIND, too)
$ORIGIN 2.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa.
1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0  IN  PTR  vpn.intra.example.com.
2.1.0.0.0.0.0.0.0.0.0.0.0.0.0.0  IN  PTR  fileserv.intra.example.com.

For confidentiality reasons: Is it wise to set up a query restriction for
intra.example.com as well as 2.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa to
allow DNS queries from trusted networks only? Is there a "not allowed" answer
in the DNS standard to avoid waiting for a timeout when an external host does
a gethostbyaddr()? (The firewall might disallow DNS from outside to
fileserv.intra.example.com, so plain blocking may be problematic.)

Another problem: e-mail/SMTP and MTAs. Assume a mail server inside the
corporate network (or even in a DMZ behind NAT!)

Before dual-stacking:
mailserv.example.local  10.0.0.14
Now after dual-stacking:
mailserv.intra.example.com  10.0.0.14 + 2001:db8:0:2::14

In the past, something like

define(`confDOMAIN_NAME', `vpn.example.com')dnl

(Sendmail) was common to present a matching visible host name to outside MTAs
and spam filters (beware of the IPv4 NAT), and for incoming mail

$ORIGIN example.com.
@    IN  MX  10  vpn.example.com.

was very common. With the removal of NAT in IPv6, we no longer need to
override the MTA's domain name; instead we can use

$ORIGIN example.com.
@    IN  MX  10  mailserv.intra.example.com.

directly in that case. But this causes the next problem: it is not dual-stack
compliant (an IPv4 MTA gets a non-routed IP address).


I don't see what the problem is here: if the IPv4-only MTA is really old
and crude, it'll just discard the AAAA records in the DNS response,
because it doesn't understand them at all. If it's more modern, then
it'll presumably implement RFC 6724 (the address selection RFC, perhaps some
might recognize it as RFC 3484bis), in which case the algorithm
dictates that it'll connect to the IPv4

Re: [Architecture discussion] IPv6 and best practices for DNS naming and the MX/SMTP problem

2013-05-27 Thread Carsten Strotmann
Hello Andreas,

Andreas Meile mailingli...@andreas-meile.ch writes:


 First question for discussion: Is it recommended to replace example.local
 with intra.example.com, for example, because it's now possible to restore
 the public DNS hierarchy? See the following:

In my view, using a namespace that you own (intra.example.com, where
example.com is your domain name that you own in the Internet) is always
preferred over a non-existing TLD (such as .local, .corp or
.intra). This is also the case when using split-DNS with IPv4
only. 

Many problems go away when using a properly delegated DNS name, and the
Internet DNS servers (the root DNS servers) are not polluted by
requests for non-existing TLDs that escape improperly configured internal
networks.

The non-public part of the owned namespace (intra.example.com) should be
delegated to internal DNS servers. This can be done with split-DNS in
a way that private IP addresses do not appear in the Internet, but are
used internally only.
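One way this can be expressed in named.conf, assuming BIND views are used (a
purely illustrative sketch; the ACL name, view names and file paths are
placeholders, not taken from this thread):

view "internal" {
    match-clients { trusted; };   // internal resolvers and VPN partners
    zone "example.com" {
        type master;
        file "internal/example.com.zone";
    };
    zone "intra.example.com" {
        type master;
        file "internal/intra.example.com.zone";  // private A/AAAA data lives here
    };
};

view "external" {
    match-clients { any; };
    zone "example.com" {
        type master;
        file "external/example.com.zone";   // public records only
    };
    // no intra.example.com data in this view, so private addresses
    // never leave the internal network
};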


 $ORIGIN example.com.
 intra              IN  NS    fileserv.intra.example.com.
 ; Glue record
 fileserv.intra     IN  AAAA  2001:db8:0:2::12
 ; fileserv.intra   IN  A     10.0.0.12 would violate some RFCs because it
 ; publishes a non-routed IPv4 address, but omitting it breaks the worldwide
 ; hierarchy, i.e. from the IPv4 point of view intra.example.com is flying
 ; free somewhere...

 ; assume a /56 assigned and delegated by the ISP
 $ORIGIN 0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa.
 1.0  IN  NS  webserv.example.com.
 2.0  IN  NS  fileserv.intra.example.com.
 $ORIGIN 1.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa.
 1.2.0.0.0.0.0.0.0.0.0.0.0.0.0.0  IN  PTR  webserv.example.com.
 0.3.0.0.0.0.0.0.0.0.0.0.0.0.0.0  IN  PTR  vpn.example.com.

 ; managed by Active Directory (or BIND, too)
 $ORIGIN 2.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa.
 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0  IN  PTR  vpn.intra.example.com.
 2.1.0.0.0.0.0.0.0.0.0.0.0.0.0.0  IN  PTR  fileserv.intra.example.com.

 For confidentiality reasons: Is it wise to set up a query restriction for
 intra.example.com as well as 2.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa to
 allow DNS queries from trusted networks only? Is there a "not allowed" answer
 in the DNS standard to avoid waiting for a timeout when an external host does
 a gethostbyaddr()? (The firewall might disallow DNS from outside to
 fileserv.intra.example.com, so plain blocking may be problematic.)

The "not allowed" answer is the DNS REFUSED return code, and that will
be sent back whenever you restrict queries using allow-query. Only if
you put IP addresses into a blackhole list
(http://ftp.isc.org/isc/bind9/cur/9.9/doc/arm/Bv9ARM.ch06.html#id2564022)
(or if you block DNS queries in the firewall) will the BIND DNS server
not send any responses back, and the client has to wait for a timeout.
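As a small named.conf illustration of that difference (the ACL name and the
prefixes are placeholders):

options {
    // clients matched here get no answer at all and must wait for a timeout
    blackhole { 192.0.2.0/24; };
};

zone "intra.example.com" {
    type master;
    file "intra.example.com.zone";
    // clients outside the ACL get an immediate REFUSED response instead
    allow-query { trusted; };
};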


 Another problem: e-mail/SMTP and MTAs. Assume a mail server inside the
 corporate network (or even in a DMZ behind NAT!)

 Before dual-stacking:
 mailserv.example.local  10.0.0.14
 Now after dual-stacking:
 mailserv.intra.example.com  10.0.0.14 + 2001:db8:0:2::14

 In the past, something like

 define(`confDOMAIN_NAME', `vpn.example.com')dnl

 (Sendmail) was common to present a matching visible host name to outside MTAs
 and spam filters (beware of the IPv4 NAT), and for incoming mail

 $ORIGIN example.com.
 @    IN  MX  10  vpn.example.com.

 was very common. With the removal of NAT in IPv6, we no longer need to
 override the MTA's domain name; instead we can use

 $ORIGIN example.com.
 @    IN  MX  10  mailserv.intra.example.com.

 directly in that case. But this causes the next problem: it is not dual-stack
 compliant (an IPv4 MTA gets a non-routed IP address). A workaround may be to
 announce both hosts:

 $ORIGIN example.com.
 ; for IPv4
 @    IN  MX  10  vpn.example.com.
 ; for IPv6
 @    IN  MX  10  mailserv.intra.example.com.

 but this may cause timeouts (an IPv6 host is trying to connect to the
 firewall instead of the mail server). Another way might be

 $ORIGIN example.com.
 @       IN  MX    10  mailmx.example.com.
 mailmx  IN  A     192.0.2.30
 mailmx  IN  AAAA  2001:db8:0:2::14

 but this violates the RFCs saying that A/AAAA entries should have a
 corresponding PTR entry.


I don't see this violating an RFC. Both address entries for mailmx can (and
should) have a proper PTR record (one in in-addr.arpa and one in ip6.arpa).
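For the addresses from the example above, the matching PTR records could look
roughly like this (a sketch; the reverse zone cuts are assumed):

$ORIGIN 2.0.192.in-addr.arpa.
30                               IN  PTR  mailmx.example.com.

$ORIGIN 2.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa.
; note: in the example this address is shared with the internal mail server,
; so you would pick which name the PTR should publish
4.1.0.0.0.0.0.0.0.0.0.0.0.0.0.0  IN  PTR  mailmx.example.com.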

 A third way might be to use smart relay hosts so the actual outgoing mail
 server always runs with a public IPv4 address, and the same for the incoming
 direction.


That is a good idea, for multiple reasons.

I haven't had time to prepare examples for my suggestions here, but I
could come up with config examples if you would like to see them.

Best regards

Carsten Strotmann


___
Please visit https://lists.isc.org/mailman/listinfo/bind-users to unsubscribe 
from this list

bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users