Re: [DNSOP] Fundamental ANAME problems

2018-11-21 Thread Tim Wicinski
Thomas

Thanks for the analysis.   It's a good data point.

And the nameserver usage is useful: from a quick view of the graph, Cloudflare
+ GoDaddy together account for approximately 60% of the nameservers.




On Wed, Nov 21, 2018 at 6:59 AM Thomas Peterson wrote:

> To hopefully awaken and further inform the discussion around the ANAME and
> HTTP draft specifications that have been put forward, I've done some
> further analysis across the Alexa top 1 million domains - my initial
> findings are available at https://thpts.github.io/a_or_cname/ .
>
> A brief summary of what I have found across the entire dataset:
>
> * 51% of www records return an A record
> * 47% of www records return a CNAME
>   * 64% of those point www back to apex (i.e. www.example.com. IN CNAME
> example.com.)
> * 17 www records are DNAME
>
> Any feedback, corrections, and suggestions would be greatly appreciated.
>
> Regards
>
> On Tue, 6 Nov 2018 at 10:22, Thomas Peterson wrote:
>
>> That may be the case from your own (presumably anecdotal) experience;
>> however, I took the Alexa top 1 million websites and queried for A* and
>> CNAME against the www records for the top 10 000 domains. What I found is
>> that approximately 44% returned CNAME records and 56% returned A records.
>>
>>
>>
>> Code is here if anyone wishes to look:
>> https://gist.github.com/thpts/eb5cec361867170a0ffd6ede136c6649
>>
>>
>>
>> Regards
>>
>>
>>
>> * I realise that I could have added AAAA. My presumption is that the top
>> 10k websites are not v6 only and at least have an A record in place.
>>
>>
>>
>> From: DNSOP  on behalf of Olli Vanhoja <o...@zeit.co>
>> Date: Tuesday, 6 November 2018 at 08:24
>> To: 
>> Subject: Re: [DNSOP] Fundamental ANAME problems
>>
>>
>>
>> In fact, if you look at the DNS records of some big Internet companies,
>> they rarely use CNAMEs for www; instead you'll see an A record that might
>> even be backed by a proprietary ANAME solution.
>>


Re: [DNSOP] Fundamental ANAME problems

2018-11-21 Thread Thomas Peterson
To hopefully awaken and further inform the discussion around the ANAME 
and HTTP draft specifications that have been put forward, I've done some 
further analysis across the Alexa top 1 million domains - my initial 
findings are available at https://thpts.github.io/a_or_cname/ .


A brief summary of what I have found across the entire dataset:

* 51% of www records return an A record
* 47% of www records return a CNAME
  * 64% of those point www back to apex (i.e. www.example.com. IN CNAME 
example.com.)

* 17 www records are DNAME
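
For anyone who wants to reproduce a rough version of this classification, the
loop below is only a minimal sketch (the real code is in the linked gist); the
input file name "domains.txt" and the final tally are assumptions on my part.

#!/bin/sh
# Classify www.<domain> for each domain in domains.txt (one apex domain per
# line, hypothetical file) by the record types dig returns for an A query.
while read -r domain; do
    types=$(dig +noall +answer "www.${domain}." A | awk '{ print $4 }' | sort -u)
    case "$types" in
        *DNAME*) kind=DNAME ;;
        *CNAME*) kind=CNAME ;;
        *A*)     kind=A     ;;
        *)       kind=NONE  ;;   # timeout, NXDOMAIN or NODATA
    esac
    printf '%s %s\n' "$domain" "$kind"
done < domains.txt |
awk '{ print $2 }' | sort | uniq -c      # tally per record type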

Any feedback, corrections, and suggestions would be greatly appreciated.

Regards

On Tue, 6 Nov 2018 at 10:22, Thomas Peterson <hidinginthe...@gmail.com> wrote:


   That may be the case from your own (presumably anecdotal)
   experience; however, I took the Alexa top 1 million websites and
   queried for A* and CNAME against the www records for the top 10 000
   domains. What I found is that approximately 44% returned CNAME
   records and 56% returned A records.

   Code is here if anyone wishes to look:
   https://gist.github.com/thpts/eb5cec361867170a0ffd6ede136c6649

   Regards

   * I realise that I could have added AAAA. My presumption is that the
   top 10k websites are not v6 only and at least have an A record in place.

   From: DNSOP <dnsop-boun...@ietf.org> on behalf of Olli Vanhoja <o...@zeit.co>
   Date: Tuesday, 6 November 2018 at 08:24
   To: <dnsop@ietf.org>
   Subject: Re: [DNSOP] Fundamental ANAME problems

   In fact, if you look at the DNS records of some big Internet companies,
   they rarely use CNAMEs for www; instead you'll see an A record that
   might even be backed by a proprietary ANAME solution.



Re: [DNSOP] Fundamental ANAME problems

2018-11-20 Thread Matthijs Mekking

Follow-up.

tldr:
- The first argument is just an issue of wording.
- Authoritative servers or provisioning scripts that do ANAME processing 
should honor the target address records' TTL.
- Sibling address records should be used as a default if ANAME 
processing fails.
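
To make those two points concrete, here is a rough provisioning-script sketch
(not text from the draft) using dig and nsupdate; the zone name, the ANAME
target and the TSIG key path are placeholders, and AAAA would be handled the
same way. It copies the target RRset's own TTL onto the sibling A records and
leaves the existing siblings alone whenever target resolution fails.

#!/bin/sh
ZONE=example.com          # placeholder: zone with an apex ANAME
TARGET=cdn.example.net.   # placeholder: the ANAME target name

# Resolve the target; on timeout, NXDOMAIN or NODATA keep the current siblings.
answer=$(dig +noall +answer "$TARGET" A) || exit 1
[ -n "$answer" ] || exit 1

# Honour the target's TTL: use the smallest TTL in the returned chain.
ttl=$(printf '%s\n' "$answer" | awk '{ print $2 }' | sort -n | head -1)
addrs=$(printf '%s\n' "$answer" | awk '$4 == "A" { print $5 }')

# Replace the sibling A records at the apex via Dynamic Update.
{
    echo "zone $ZONE"
    echo "update delete $ZONE. A"
    for ip in $addrs; do
        echo "update add $ZONE. $ttl A $ip"
    done
    echo "send"
} | nsupdate -k /path/to/tsig.key     # key file path is a placeholder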



On 11/9/18 6:54 PM, Richard Gibson wrote:

Responses inline.

On 11/9/18 11:39, Tony Finch wrote:

Richard Gibson  wrote:

First, I am troubled by the requirement that ANAME forces the zone into a
dynamic zone.

I don't see how it is possible to implement ANAME without some form of
dynamic behaviour, either by UPDATEs on the primary, or on-demand
substitution on the secondaries, or some combination of the two.


I am advocating for the special behavior of ANAME to be limited to 
processing of non-transfer queries (i.e., explicitly excluding AXFR and 
IXFR). For example: ANAME-aware authoritative servers MAY attempt 
sibling replacement in response to address or ANY queries and SHOULD add 
records to the additional section in response to address or ANAME 
queries; ANAME-aware resolvers SHOULD do both. But all authoritative 
servers should agree that the sibling records—including their original 
TTLs—are a non-special part of zone contents for the purposes of transfers.


It looks like this is a non-issue: the draft allows you to do ANAME 
resolution however you want:
1. An anamifier script that updates a zone file before loading it into 
the primary.

2. A tool that translates ANAME target lookups into Dynamic Update.
3. An authoritative server that implements ANAME resolution.
4. ...

The point is that we would like to standardize the ANAME resolution, and 
what it means on the wire.


Richard makes a good point though: ANAME should have no special logic on 
zone transfers.





Second, and relatedly, I think the TTLs of replacement records established for
non-transfer responses are too high. Respecting the TTL of every record in a
chain that starts with the ANAME requires the TTL of final replacement records
to be no higher than the minimum TTL encountered over the chain, potentially
/reduced/ nondeterministically to mitigate query bunching. I would therefore
add language encouraging resolvers synthesizing those records to engage in
best-effort determination of original TTLs (e.g., by directly querying
authoritative servers and refreshing at 50% remaining), but *requiring* them
to decrement TTLs of records for which they are not authoritative.

>>>

I'm not sure I understand which TTLs you are worried about here. What are
"non-transfer responses"? There's certainly some rewording needed to make
it more clear, but the TTLs returned by resolvers that do sibling address
record substitution are decremented in the usual way, and resolvers make
no attempt to determine the original TTLs.

>>

Non-transfer responses are responses for QTYPE != AXFR or IXFR.


I hope the above clarifies... my TTL concerns relate not to resolvers, 
but to authoritative servers. In particular, I take issue with the 
"/Sibling address records are committed to the zone/" and "/Sibling 
address records are served from authoritative servers with a fixed TTL/" 
text, which stretches the TTL of one or more RRSets along the target 
name's resolution chain.


Richard and I discussed this. In order for me to understand the issue I 
had to look at this from the point of view of the resolver that does ANAME resolution.


Suppose a resolver has an ANAME in its cache with a high TTL, say 3600, 
but not the target A and AAAA records. It can do the lookup for the 
targets. If successful, it will retrieve the A and/or AAAA records. 
Let's say they have a short TTL, 60. They time out after a minute, but 
the resolver can still use the ANAME record to do its own ANAME resolution.


In other words, if the resolver does ANAME resolution, the TTL of the 
target address records is honored. Now what does that mean for the 
authoritative side? What TTL should they use for the A and AAAA records 
that have been expanded by ANAME? It would only be logical that the 
authoritative side does the same.


This means that when adding A and AAAA records into the zone as a result 
of ANAME processing, the TTL to use is at most the TTL of the target 
address records. If you use a higher value, this will stretch the TTL, 
which is undesirable.
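
A quick way to see the resulting cap in practice (the target name below is a
placeholder): take the minimum TTL across everything dig returns while
resolving the target; an ANAME-expanded A/AAAA RRset should not be published
with a TTL above that value.

dig +noall +answer cdn.example.net. A | awk '
    NR == 1 || $2 + 0 < min { min = $2 + 0 }
    END { if (NR) print min }'       # smallest TTL along the chain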





And finally, back on the question of what ANAME sibling address records
actually represent, I think that NXDOMAIN and NODATA results should be treated
as errors for the purposes of ANAME sibling replacement. This position can be
justified on both practical and principled grounds—replacing functional
records with an empty RRSet is undesirable for DNS users (or at least the
sample of them that are Oracle+Dyn customers),

>>>

Maybe so, but that's what happens with CNAME records.

>>
CNAME does not allow for siblings, and therefore its processing is 
incapable of replacing functional records with an empty RRSet. Further, 
clients are required to understand CNAME and can therefore always identify 
at which domain name an issue lies (and in particular that it is not the 
queried name).

Re: [DNSOP] Fundamental ANAME problems

2018-11-09 Thread Richard Gibson

Responses inline.

On 11/9/18 11:39, Tony Finch wrote:

Richard Gibson  wrote:

First, I am troubled by the requirement that ANAME forces the zone into a
dynamic zone.

I don't see how it is possible to implement ANAME without some form of
dynamic behaviour, either by UPDATEs on the primary, or on-demand
substitution on the secondaries, or some combination of the two.


I am advocating for the special behavior of ANAME to be limited to 
processing of non-transfer queries (i.e., explicitly excluding AXFR and 
IXFR). For example: ANAME-aware authoritative servers MAY attempt 
sibling replacement in response to address or ANY queries and SHOULD add 
records to the additional section in response to address or ANAME 
queries; ANAME-aware resolvers SHOULD do both. But all authoritative 
servers should agree that the sibling records—including their original 
TTLs—are a non-special part of zone contents for the purposes of transfers.



Second, and relatedly, I think the TTLs of replacement records established for
non-transfer responses are too high. Respecting the TTL of every record in a
chain that starts with the ANAME requires the TTL of final replacement records
to be no higher than the minimum TTL encountered over the chain, potentially
/reduced/ nondeterministically to mitigate query bunching. I would therefore
add language encouraging resolvers synthesizing those records to engage in
best-effort determination of original TTLs (e.g., by directly querying
authoritative servers and refreshing at 50% remaining), but *requiring* them
to decrement TTLs of records for which they are not authoritative.

I'm not sure I understand which TTLs you are worried about here. What are
"non-transfer responses"? There's certainly some rewording needed to make
it more clear, but the TTLs returned by resolvers that do sibling address
record substitution are decremented in the usual way, and resolvers make
no attempt to determine the original TTLs.
I hope the above clarifies... my TTL concerns relate not to resolvers, 
but to authoritative servers. In particular, I take issue with the 
"/Sibling address records are committed to the zone/" and "/Sibling 
address records are served from authoritative servers with a fixed TTL/" 
text, which stretches the TTL of one or more RRSets along the target 
name's resolution chain.

And finally, back on the question of what ANAME sibling address records
actually represent, I think that NXDOMAIN and NODATA results should be treated
as errors for the purposes of ANAME sibling replacement. This position can be
justified on both practical and principled grounds—replacing functional
records with an empty RRSet is undesirable for DNS users (or at least the
sample of them that are Oracle+Dyn customers),

Maybe so, but that's what happens with CNAME records.
CNAME does not allow for siblings, and therefore its processing is 
incapable of replacing functional records with an empty RRSet. Further, 
clients are required to understand CNAME and can therefore always 
identify at which domain name an issue lies (and in particular that it 
is not the queried name).

Let's please just eliminate all of that by specifying that ANAME
processing can never replace something with nothing.

So when the target goes away, you would prefer to leave behind zombie
address records, and stretch their TTL indefinitely? If the zone admin is
given only a target hostname (just like a CNAME) they don't have any
alternative addresses to use when the target goes away. So the options are
to copy the target by deleting the addresses, or ignore the target and
leave the addresses to rot.
When the target goes away, there is no longer anything to replace the 
sibling records, which are not zombies but rather part of the zone 
contents and always issued by authoritative servers with their original 
TTL (just like every other record, including the ANAME itself). Only if 
there are no sibling records could an authoritative server issue a 
NODATA response, and if a /resolver/ cannot successfully resolve an 
ANAME target to non-expired records, then it should re-resolve the 
requested RRSet anyway—it will get from upstream either refreshed 
fallbacks or a NODATA, and in either case then has a response for its 
own client.

I'm inclined to say that fallback records should remain a non-standard
feature. The semantics can be that when you see the target go AWOL, delete
the ANAME and its siblings, and replace them with the fallback records
that were specified by some other means. You can apply the same logic to
CNAMEs too, if you want :-)
A system that complicated would /have/ to be non-standard, but I think 
you're reading too much into my use of the term "fallback". It's not a 
specification of special treatment, but rather the absence of such that 
gives ANAME sibling records that status... they're what to serve when 
nothing replaces them (i.e., the current semantics of every record in 
every zone that is not itself occluded by a delegation).

Re: [DNSOP] Fundamental ANAME problems

2018-11-09 Thread Bob Harold
On Fri, Nov 9, 2018 at 11:39 AM Tony Finch  wrote:

> Richard Gibson  wrote:
> >
> > First, I am troubled by the requirement that ANAME forces the zone into a
> > dynamic zone.
>
> I don't see how it is possible to implement ANAME without some form of
> dynamic behaviour, either by UPDATEs on the primary, or on-demand
> substitution on the secondaries, or some combination of the two.
>

I think we have different viewpoints here:
- ANAME replaces existing CDN tricks, and the A/AAAA records are always
dynamically generated.  If ANAME leads nowhere, then there is no answer.
Or:
- ANAME is a new feature, which can be used instead of the standard A/AAAA
records by the few servers where ANAME is implemented.
Updates to the A/AAAA records can be done at the source, the same as any
normal zone update.  No special processing required on the authoritative
servers.  Only the recursive servers use ANAME if they support that new
feature.  If ANAME leads nowhere, then ignore the new broken feature and
return the standard A/AAAA records.  This option could be implementation
dependent.  (This is the view I prefer, at least until ANAME becomes
widespread.)


> > Second, and relatedly, I think the TTLs of replacement records
> established for
> > non-transfer responses are too high. Respecting the TTL of every record
> in a
> > chain that starts with the ANAME requires the TTL of final replacement
> records
> > to be no higher than the minimum TTL encountered over the chain,
> potentially
> > /reduced/ nondeterministically to mitigate query bunching. I would
> therefore
> > add language encouraging resolvers synthesizing those records to engage
> in
> > best-effort determination of original TTLs (e.g., by directly querying
> > authoritative servers and refreshing at 50% remaining), but *requiring*
> them
> > to decrement TTLs of records for which they are not authoritative.
>
> I'm not sure I understand which TTLs you are worried about here. What are
> "non-transfer responses"? There's certainly some rewording needed to make
> it more clear, but the TTLs returned by resolvers that do sibling address
> record substitution are decremented in the usual way, and resolvers make
> no attempt to determine the original TTLs.
>
> It isn't possible to require a resolver to query authoritative servers
> directly.
>

I am inclined to use the TTL of the sibling A/AAAA records and avoid the
work and concerns of guessing the right TTL.  That gives the zone owner the
control, rather than the owner of the ANAME target.  (I am typically a zone
owner, so I prefer to have control.  Others may differ.)


> > And finally, back on the question of what ANAME sibling address records
> > actually represent, I think that NXDOMAIN and NODATA results should be
> treated
> > as errors for the purposes of ANAME sibling replacement. This position
> can be
> > justified on both practical and principled grounds—replacing functional
> > records with an empty RRSet is undesirable for DNS users (or at least the
> > sample of them that are Oracle+Dyn customers),
>
> Maybe so, but that's what happens with CNAME records.
>

If I view A/AAAA as standard, and ANAME as optional (a shiny new feature),
then I prefer the A/AAAA if ANAME fails.  CNAME is standard, which is very
different from ANAME in this viewpoint.


> > and could inappropriately stretch the maximum specified ANAME sibling
> > TTL (on the ANAME record itself) to the SOA MINIMUM value (which is
> > doubly bad, because it results in extended caching of the /least/
> > valuable state).
>
> That's a very good point, thank you.
>
> > Let's please just eliminate all of that by specifying that ANAME
> > processing can never replace something with nothing.
>
> So when the target goes away, you would prefer to leave behind zombie
> address records, and stretch their TTL indefinitely? If the zone admin is
> given only a target hostname (just like a CNAME) they don't have any
> alternative addresses to use when the target goes away. So the options are
> to copy the target by deleting the addresses, or ignore the target and
> leave the addresses to rot.
>
> I'm inclined to say that fallback records should remain a non-standard
> feature. The semantics can be that when you see the target go AWOL, delete
> the ANAME and its siblings, and replace them with the fallback records
> that were specified by some other means. You can apply the same logic to
> CNAMEs too, if you want :-)
>
> > P.S. There is a typographical error in Appendix D; "RRGIG" should be
> "RRSIG".
>
> Thanks.
>
> > P.P.S. I think it has been discussed before, but this document should
> also
> > introduce and use a new "Address RTYPE" registry or subregistry, rather
> than
> > forever constraining ANAME exclusively to A and AAAA.
>
> The -01 draft specified a registry but I dropped that from -02 because I
> was not sure if it should include X25, ISDN, NSAP, ATMA, the ILNP types,
> the Nimrod types, etc. And now I realise that it needs a lot more thought
> about what will happen to interoperability when the registry changes.

Re: [DNSOP] Fundamental ANAME problems

2018-11-09 Thread Tony Finch
Richard Gibson  wrote:
>
> First, I am troubled by the requirement that ANAME forces the zone into a
> dynamic zone.

I don't see how it is possible to implement ANAME without some form of
dynamic behaviour, either by UPDATEs on the primary, or on-demand
substitution on the secondaries, or some combination of the two.

> Second, and relatedly, I think the TTLs of replacement records established for
> non-transfer responses are too high. Respecting the TTL of every record in a
> chain that starts with the ANAME requires the TTL of final replacement records
> to be no higher than the minimum TTL encountered over the chain, potentially
> /reduced/ nondeterministically to mitigate query bunching. I would therefore
> add language encouraging resolvers synthesizing those records to engage in
> best-effort determination of original TTLs (e.g., by directly querying
> authoritative servers and refreshing at 50% remaining), but *requiring* them
> to decrement TTLs of records for which they are not authoritative.

I'm not sure I understand which TTLs you are worried about here. What are
"non-transfer responses"? There's certainly some rewording needed to make
it more clear, but the TTLs returned by resolvers that do sibling address
record substitution are decremented in the usual way, and resolvers make
no attempt to determine the original TTLs.

It isn't possible to require a resolver to query authoritative servers
directly.

> And finally, back on the question of what ANAME sibling address records
> actually represent, I think that NXDOMAIN and NODATA results should be treated
> as errors for the purposes of ANAME sibling replacement. This position can be
> justified on both practical and principled grounds—replacing functional
> records with an empty RRSet is undesirable for DNS users (or at least the
> sample of them that are Oracle+Dyn customers),

Maybe so, but that's what happens with CNAME records.

> and could inappropriately stretch the maximum specified ANAME sibling
> TTL (on the ANAME record itself) to the SOA MINIMUM value (which is
> doubly bad, because it results in extended caching of the /least/
> valuable state).

That's a very good point, thank you.

> Let's please just eliminate all of that by specifying that ANAME
> processing can never replace something with nothing.

So when the target goes away, you would prefer to leave behind zombie
address records, and stretch their TTL indefinitely? If the zone admin is
given only a target hostname (just like a CNAME) they don't have any
alternative addresses to use when the target goes away. So the options are
to copy the target by deleting the addresses, or ignore the target and
leave the addresses to rot.

I'm inclined to say that fallback records should remain a non-standard
feature. The semantics can be that when you see the target go AWOL, delete
the ANAME and its siblings, and replace them with the fallback records
that were specified by some other means. You can apply the same logic to
CNAMEs too, if you want :-)

> P.S. There is a typographical error in Appendix D; "RRGIG" should be "RRSIG".

Thanks.

> P.P.S. I think it has been discussed before, but this document should also
> introduce and use a new "Address RTYPE" registry or subregistry, rather than
> forever constraining ANAME exclusively to A and AAAA.

The -01 draft specified a registry but I dropped that from -02 because I
was not sure if it should include X25, ISDN, NSAP, ATMA, the ILNP types,
the Nimrod types, etc. And now I realise that it needs a lot more thought
about what will happen to interoperability when the registry changes.

Tony.
-- 
f.anthony.n.finch    http://dotat.at/
a fair, free and open society


Re: [DNSOP] Fundamental ANAME problems

2018-11-09 Thread Tim Wicinski



On 11/9/18 05:03, Matthijs Mekking wrote:


It seems that everyone thinks that the latest ANAME draft requires DNS 
UPDATE. This is just a use case that Tony provides and would help him 
in his daily operations. However, it is not required to do so: ANAME 
resolution can also happen by updating the zone file before loading it 
into the primary server. Or it may happen in the authority server if 
people desire to implement it there.


I think the draft should be updated to make that absolutely clear. The 
draft should standardize how ANAME resolution is done, and what it 
means to have ANAME and sibling address records in the zone for 
address rtype (A, AAAA, ...) and ANAME query lookup.




Agreed.



Second, and relatedly, I think the TTLs of replacement records 
established for non-transfer responses are too high. Respecting the 
TTL of every record in a chain that starts with the ANAME requires 
the TTL of final replacement records to be no higher than the minimum 
TTL encountered over the chain, potentially /reduced/ 
nondeterministically to mitigate query bunching. I would therefore 
add language encouraging resolvers synthesizing those records to 
engage in best-effort determination of original TTLs (e.g., by 
directly querying authoritative servers and refreshing at 50% 
remaining), but *requiring* them to decrement TTLs of records for 
which they are not authoritative.


I agree, the TTL language in this document is not ready and needs more 
discussion.


If folks have some suggested text, please send it along.




And finally, back on the question of what ANAME sibling address 
records actually represent, I think that NXDOMAIN and NODATA results 
should be treated as errors for the purposes of ANAME sibling 
replacement. This position can be justified on both practical and 
principled grounds—replacing functional records with an empty RRSet 
is undesirable for DNS users (or at least the sample of them that are 
Oracle+Dyn customers), and could inappropriately stretch the maximum 
specified ANAME sibling TTL (on the ANAME record itself) to the SOA 
MINIMUM value (which is doubly bad, because it results in extended 
caching of the /least/ valuable state). And adding insult to injury, 
resolvers in general will not even have the SOA, and will need to 
perform more lookups in order to issue a proper negative response of 
their own. Let's please just eliminate all of that by specifying that 
ANAME processing can never replace something with nothing.


+1


"ANAME processing can never replace something with nothing"  should also 
be mentioned in the document.


Mr Gibson also mentioned this:
P.P.S. I think it has been discussed before, but this document should 
also introduce and use a new "Address RTYPE" registry or subregistry, 
rather than forever constraining ANAME exclusively to A and AAAA.




I think that should be discussed.

thanks Matthijs,

tim



Re: [DNSOP] Fundamental ANAME problems

2018-11-09 Thread Matthijs Mekking


On 11/9/18 4:27 AM, Richard Gibson wrote:
I have finally reviewed the latest draft directly, and like the overall 
direction but have a small number of issues (however, the issues 
themselves are somewhat fundamental). They broadly break down into 
concerns about zone transfers and TTL stretching, and ultimately seem to 
stem from a disagreement with my position that the proper conception of 
ANAME sibling address records is as fallback data to be used in cases 
where ANAME target resolution fails or is never attempted.


First, I am troubled by the requirement that ANAME forces the zone into 
a dynamic zone. That a primary master would dynamically replace sibling 
records on its own, update the zone serial, and then propagate that 
dynamic data via zone transfer overrides user conception about the state 
of a zone, induces undesirable churn between authoritative nameservers, 
/and/ stretches the TTLs of ANAME targets on downstream servers by the 
amount of time between successive updates. These consequences are just 
too much for what is supposed to be a low-impact feature. Anyone willing 
to opt-in to them should be updating the ANAME sibling address records 
on their own, not forcing authoritative server implementations to choose 
between taking on that dirty work or being labeled noncompliant.


It seems that everyone thinks that the latest ANAME draft requires DNS 
UPDATE. This is just a use case that Tony provides and would help him in 
his daily operations. However, it is not required to do so: ANAME 
resolution can also happen by updating the zone file before loading it 
into the primary server. Or it may happen in the authority server if 
people desire to implement it there.


I think the draft should be updated to make that absolutely clear. The 
draft should standardize how ANAME resolution is done, and what it means 
to have ANAME and sibling address records in the zone for address rtype 
(A, AAAA, ...) and ANAME query lookup.


The customer does not care about the address records, other than it may 
want to provide a default address. So in their provisioning dashboard 
they will only add a domain name that represents their CDN or whatever.


The DNS provider will perform ANAME resolution somewhere between where 
the customer provides the ANAME and hands out the addresses to the DNS 
client.



Second, and relatedly, I think the TTLs of replacement records 
established for non-transfer responses are too high. Respecting the TTL 
of every record in a chain that starts with the ANAME requires the TTL 
of final replacement records to be no higher than the minimum TTL 
encountered over the chain, potentially /reduced/ nondeterministically 
to mitigate query bunching. I would therefore add language encouraging 
resolvers synthesizing those records to engage in best-effort 
determination of original TTLs (e.g., by directly querying 
authoritative servers and refreshing at 50% remaining), but *requiring* 
them to decrement TTLs of records for which they are not authoritative.


I agree, the TTL language in this document is not ready and needs more 
discussion.



And finally, back on the question of what ANAME sibling address records 
actually represent, I think that NXDOMAIN and NODATA results should be 
treated as errors for the purposes of ANAME sibling replacement. This 
position can be justified on both practical and principled 
grounds—replacing functional records with an empty RRSet is undesirable 
for DNS users (or at least the sample of them that are Oracle+Dyn 
customers), and could inappropriately stretch the maximum specified 
ANAME sibling TTL (on the ANAME record itself) to the SOA MINIMUM value 
(which is doubly bad, because it results in extended caching of the 
/least/ valuable state). And adding insult to injury, resolvers in 
general will not even have the SOA, and will need to perform more 
lookups in order to issue a proper negative response of their own. Let's 
please just eliminate all of that by specifying that ANAME processing 
can never replace something with nothing.


+1


Best regards,

Matthijs


P.S. There is a typographical error in Appendix D; "RRGIG" should be 
"RRSIG".


P.P.S. I think it has been discussed before, but this document should 
also introduce and use a new "Address RTYPE" registry or subregistry, 
rather than forever constraining ANAME exclusively to A and AAAA.


On 11/2/18 17:00, Richard Gibson wrote:


I haven't reviewed the full draft yet, but am happy to see some people 
echoing my sentiments from earlier versions [1]. I particularly wanted 
to agree with some statements from Bob Harold.


On 11/2/18 15:20, Bob Harold wrote:
Another option to give users is a non-updating fallback A record, 
that could point to a web redirect.  That saves all the hassle of 
updates.


YES! This means a slightly worse fallback-only experience for users 
behind ANAME-ignorant resolvers that query against ANAME-ignorant 
authoritatives (the introduction of ANAME awareness to /either/ 
component allowing an opportunity to provide better address records by 
chasing the ANAME target), but provides a dramatic reduction in the 
amount of necessary XFR traffic.

Re: [DNSOP] Fundamental ANAME problems

2018-11-08 Thread Richard Gibson
I have finally reviewed the latest draft directly, and like the overall 
direction but have a small number of issues (however, the issues 
themselves are somewhat fundamental). They broadly break down into 
concerns about zone transfers and TTL stretching, and ultimately seem to 
stem from a disagreement with my position that the proper conception of 
ANAME sibling address records is as fallback data to be used in cases 
where ANAME target resolution fails or is never attempted.


First, I am troubled by the requirement that ANAME forces the zone into 
a dynamic zone. That a primary master would dynamically replace sibling 
records on its own, update the zone serial, and then propagate that 
dynamic data via zone transfer overrides user conception about the state 
of a zone, induces undesirable churn between authoritative nameservers, 
/and/ stretches the TTLs of ANAME targets on downstream servers by the 
amount of time between successive updates. These consequences are just 
too much for what is supposed to be a low-impact feature. Anyone willing 
to opt-in to them should be updating the ANAME sibling address records 
on their own, not forcing authoritative server implementations to choose 
between taking on that dirty work or being labeled noncompliant.


Second, and relatedly, I think the TTLs of replacement records 
established for non-transfer responses are too high. Respecting the TTL 
of every record in a chain that starts with the ANAME requires the TTL 
of final replacement records to be no higher than the minimum TTL 
encountered over the chain, potentially /reduced/ nondeterministically 
to mitigate query bunching. I would therefore add language encouraging 
resolvers synthesizing those records to engage in best-effort 
determination of original TTLs (e.g., by directly querying 
authoritative servers and refreshing at 50% remaining), but *requiring* 
them to decrement TTLs of records for which they are not authoritative.


And finally, back on the question of what ANAME sibling address records 
actually represent, I think that NXDOMAIN and NODATA results should be 
treated as errors for the purposes of ANAME sibling replacement. This 
position can be justified on both practical and principled 
grounds—replacing functional records with an empty RRSet is undesirable 
for DNS users (or at least the sample of them that are Oracle+Dyn 
customers), and could inappropriately stretch the maximum specified 
ANAME sibling TTL (on the ANAME record itself) to the SOA MINIMUM value 
(which is doubly bad, because it results in extended caching of the 
/least/ valuable state). And adding insult to injury, resolvers in 
general will not even have the SOA, and will need to perform more 
lookups in order to issue a proper negative response of their own. Let's 
please just eliminate all of that by specifying that ANAME processing 
can never replace something with nothing.


P.S. There is a typographical error in Appendix D; "RRGIG" should be 
"RRSIG".


P.P.S. I think it has been discussed before, but this document should 
also introduce and use a new "Address RTYPE" registry or subregistry, 
rather than forever constraining ANAME exclusively to A and AAAA.


On 11/2/18 17:00, Richard Gibson wrote:


I haven't reviewed the full draft yet, but am happy to see some people 
echoing my sentiments from earlier versions [1]. I particularly wanted 
to agree with some statements from Bob Harold.


On 11/2/18 15:20, Bob Harold wrote:
Another option to give users is a non-updating fallback A record, 
that could point to a web redirect.  That saves all the hassle of 
updates.


YES! This means a slightly worse fallback-only experience for users 
behind ANAME-ignorant resolvers that query against ANAME-ignorant 
authoritatives (the introduction of ANAME awareness to /either/ 
component allowing an opportunity to provide better address records by 
chasing the ANAME target), but provides a dramatic reduction in the 
amount of necessary XFR traffic. And even more importantly, it forces 
TTL stretching to be an explicit decision on the part of those 
administrators who choose to perform manual target resolution and 
update their zones to use them as fallback records (as they would do 
now to approximate ANAME anyway), rather than an inherent and enduring 
aspect of the functionality.


Treating ANAME-sibling address records as fallback data also supports 
better behavior for dealing with negative results from resolving ANAME 
targets (NODATA, NXDOMAIN, signature verification failure, response 
timeout, etc.)—serve the fallbacks.


My preference would be a *NAME record that specifies which record 
types it applies to.  So one could delegate A and AAAA at apex to a 
web provider, MX to a mail provider, etc.  That would also be 
valuable at non-apex names.  But I am happy to support ANAME as part 
of the solution.
I agree on both counts (arbitrary type-specificity and deferment to a 
later date).



[1]: https://www.ietf.org/ma

Re: [DNSOP] Fundamental ANAME problems

2018-11-06 Thread Patrik Fältström
On 6 Nov 2018, at 22:30, Ray Bellis wrote:

> You can have wildcard support, or you can have prefixes (hence
> delegation), but you can't have both.

That's exactly my point. URI solves "the other problem".

   Patrik




Re: [DNSOP] Fundamental ANAME problems

2018-11-06 Thread Tony Finch
Ray Bellis  wrote:
> On 06/11/2018 20:44, Tony Finch wrote:
>
> > If you are using an _prefix without any meaning of its own but only to
> > move a record away from the apex (so that it can be delegated or CNAMEd)
> > and also using a specific RR type or an RDATA prefix, then wildcards do
> > not conflict.
>
> I believe they still do, e.g.
>
> _domainkey.*.example.com IN TXT ...

You obviously can't do that, but you can do:

*.example.com TXT ...

and it'll match queries for tag._domainkey.whatever.example.com.

Except that it won't work very well for the specific example of _domainkey
records, because of the tag selector in the qname.

It will probably work OK for DMARC records which do not have any selectors
in the qname, and which have a nice prefix in the TXT RDATA.
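
As a concrete illustration (example.com and the policy text are placeholders,
and this assumes nothing more specific than the wildcard exists at the
queried name):

# If the zone publishes    *.example.com. 3600 IN TXT "v=DMARC1; p=none"
# then a DMARC lookup for any covered subdomain is answered from the wildcard:
dig +noall +answer _dmarc.whatever.example.com. TXT
# expected, synthesized from the wildcard:
# _dmarc.whatever.example.com. 3600 IN TXT "v=DMARC1; p=none"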

For the running LJ example, if you want to match

_http.fanf.livejournal.com HTTP ...

the zone admin can publish

*.livejournal.com HTTP ...

But for the HTTP case, the record itself provides enough indirection so
there isn't any need for a _prefix to allow delegation as you might want
to for DMARC.

Tony.
-- 
f.anthony.n.finch    http://dotat.at/
Fair Isle, Faeroes, Southeast Southeast Iceland: Southeasterly 5 to 7,
occasionally gale 8 in Fair Isle. Rough, occasionally very rough. Occasional
rain. Moderate or good, occasionally poor.



Re: [DNSOP] Fundamental ANAME problems

2018-11-06 Thread Ray Bellis




On 06/11/2018 20:51, Joe Abley wrote:


Ray has wider aspirations than just the apex. This may well be
sensible, but I think it's worth calling out the scope creep.


It's in the intro text:

"This document specifies an "HTTP" resource record type for the DNS to
 facilitate the lookup of the server hostname of HTTP(s) URIs.  It is
 intended to replace the use of CNAME records for this purpose, and in
 the process provides a solution for the inability of the DNS to allow a
 CNAME to be placed at the apex of a domain name"

Ray



Re: [DNSOP] Fundamental ANAME problems

2018-11-06 Thread Ray Bellis




On 06/11/2018 20:44, Tony Finch wrote:


My understanding is that wildcards don't work for SRV because the
_prefixes are used to disambiguate which service you are asking for,
effectively to extend the RR TYPE number space. So if you wildcard a SRV
record then the target port has to support every possible protocol :-)


No, it's because you can't do:

_http._tcp.*.example.com IN SRV ...


If you are using an _prefix without any meaning of its own but only to
move a record away from the apex (so that it can be delegated or CNAMEd)
and also using a specific RR type or an RDATA prefix, then wildcards do
not conflict.


I believe they still do, e.g.

_domainkey.*.example.com IN TXT ...

Ray



Re: [DNSOP] Fundamental ANAME problems

2018-11-06 Thread Ray Bellis



On 06/11/2018 20:58, Patrik Fältström wrote:


We should also remember that there is a different goal as well, and
that is to be able to delegate the zone within which "the records
dealing with web" is managed so that the administrative
responsibility is separated between the one which runs the zone for
example.com and the ones that run for _http._tcp.example.com (or
_tcp.example.com).


That's an implicit non-goal of my draft.  I may have to make it more 
explicit.


You can have wildcard support, or you can have prefixes (hence
delegation), but you can't have both.

Ray



Re: [DNSOP] Fundamental ANAME problems

2018-11-06 Thread Dan York
Olli,

> On Nov 6, 2018, at 3:23 PM, Olli Vanhoja  wrote:
> 
> In fact, if you look at the DNS records of some big Internet companies,
> they rarely use CNAMEs for www; instead you'll see an A record that might
> even be backed by a proprietary ANAME solution.

One detail about this is that if the CDN being used by the large Internet 
company is *also* providing the DNS hosting for the Internet company, then the 
CDN will do its resolution internally and return A / AAAA records directly. 

I did not do the kind of large-scale measurement that Thomas Peterson did, but 
in anecdotally looking at the www records for a number of sites returning 
A/AAAA records, I often saw that the ones returning A / AAAA records also had 
NS records pointing to name servers run by CDNs I could recognize.  (I 
mentioned this in a note, currently section 2.1 of 
https://datatracker.ietf.org/doc/draft-york-dnsop-cname-at-apex-publisher-view/ .)

So yes, in those cases the A record is being dynamically created by whatever 
(potentially proprietary) ANAME/CNAME-like solution the CDN vendor is using 
internally in their DNS hosting operations. 
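
A rough way to repeat that spot check for any given site (the domain below is
a placeholder):

# Does www return a bare A RRset, or a CNAME into a CDN-operated name?
dig +noall +answer www.example.com. A
# And who runs the zone's nameservers?
dig +noall +answer example.com. NS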

Dan

--
Dan York
Director, Content & Web Strategy, Internet Society
y...@isoc.org   +1-802-735-1624 
Jabber: y...@jabber.isoc.org  Skype: danyork   http://twitter.com/danyork

http://www.internetsociety.org/





Re: [DNSOP] Fundamental ANAME problems

2018-11-06 Thread Patrik Fältström
On 6 Nov 2018, at 17:51, Joe Abley wrote:

>> On Nov 6, 2018, at 20:44, Tony Finch  wrote:
>>
>> Joe Abley  wrote:
>>>
>>> Specifically, is the wildcard owner name a real problem in the grand
>>> scheme of things?
>>
>> My understanding is that wildcards don't work for SRV because the
>> _prefixes are used to disambiguate which service you are asking for,
>> effectively to extend the RR TYPE number space. So if you wildcard a SRV
>> record then the target port has to support every possible protocol :-)
>
> Right, but my point was that wildcard owner names aren't seen at the apex, so 
> a solution to the problem of what to do at the apex doesn't need to worry 
> about them.
>
> Ray has wider aspirations than just the apex. This may well be sensible, but 
> I think it's worth calling out the scope creep.

We should also remember that there is a different goal as well, and that is to 
be able to delegate the zone within which "the records dealing with web" is 
managed so that the administrative responsibility is separated between the one 
which runs the zone for example.com and the ones that run for 
_http._tcp.example.com (or _tcp.example.com).

   Patrik




Re: [DNSOP] Fundamental ANAME problems

2018-11-06 Thread Joe Abley
Hi Tony.

> On Nov 6, 2018, at 20:44, Tony Finch  wrote:
>
> Joe Abley  wrote:
>>
>> Specifically, is the wildcard owner name a real problem in the grand
>> scheme of things?
>
> My understanding is that wildcards don't work for SRV because the
> _prefixes are used to disambiguate which service you are asking for,
> effectively to extend the RR TYPE number space. So if you wildcard a SRV
> record then the target port has to support every possible protocol :-)

Right, but my point was that wildcard owner names aren't seen at the
apex, so a solution to the problem of what to do at the apex doesn't
need to worry about them.

Ray has wider aspirations than just the apex. This may well be
sensible, but I think it's worth calling out the scope creep.


Joe



Re: [DNSOP] Fundamental ANAME problems

2018-11-06 Thread Tony Finch
Joe Abley  wrote:
>
> Specifically, is the wildcard owner name a real problem in the grand
> scheme of things?

My understanding is that wildcards don't work for SRV because the
_prefixes are used to disambiguate which service you are asking for,
effectively to extend the RR TYPE number space. So if you wildcard a SRV
record then the target port has to support every possible protocol :-)

If you are using an _prefix without any meaning of its own but only to
move a record away from the apex (so that it can be delegated or CNAMEd)
and also using a specific RR type or an RDATA prefix, then wildcards do
not conflict.

Tony.
-- 
f.anthony.n.finch    http://dotat.at/
Viking, North Utsire, South Utsire, Northeast Forties: Southeasterly 6 to gale
8. Moderate or rough, occasionally very rough later. Occasional drizzle. Good,
occasionally poor.



Re: [DNSOP] Fundamental ANAME problems

2018-11-06 Thread Thomas Peterson
That may be the case from your own (presumably anecdotal) experience; however, I 
took the Alexa top 1 million websites and queried for A* and CNAME against the 
www records for the top 10 000 domains. What I found is that approximately 44% 
returned CNAME records and 56% returned A records.

 

Code is here if anyone wishes to look:
https://gist.github.com/thpts/eb5cec361867170a0ffd6ede136c6649

 

Regards

 

* I realise that I could have added AAAA. My presumption is that the top 10k 
websites are not v6 only and at least have an A record in place.

 

From: DNSOP  on behalf of Olli Vanhoja 
Date: Tuesday, 6 November 2018 at 08:24
To: 
Subject: Re: [DNSOP] Fundamental ANAME problems

 

In fact, if you look at the DNS records of some big Internet companies, they 
rarely use CNAMEs for www; instead you'll see an A record that might even be 
backed by a proprietary ANAME solution.



Re: [DNSOP] Fundamental ANAME problems

2018-11-06 Thread Olli Vanhoja
> > The semantics is exactly like a CNAME + HTTP Redirect.
>
> The latter part is what I expected, and why I think it's a non-starter.
>
> HTTP Redirects cause the URI in the address bar to be changed.  A lot of
> the whole "CNAME at the Apex" issue arises because lots of marketing
> people don't want end users to have to type *or see* the www prefix.
>
> Those folks aren't going to stand for their nice clean "example.com" URL
> getting replaced with the real CDN address in the address bar.

It's not only about what is shown in the address bar but how fast the
website will start rendering something on the screen. Even resolving a
CNAME may add a proportionally big delay to the TTFB; it could take about
the same time as TLS negotiation. In fact, if you look at the DNS records
of some big Internet companies, they rarely use CNAMEs for www; instead
you'll see an A record that might even be backed by a proprietary ANAME
solution.
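
For what it's worth, that DNS share of the TTFB is easy to observe with curl's
timing variables (the URL below is a placeholder); on a cold cache a long
CNAME chain shows up directly in time_namelookup:

# Rough per-request timing breakdown: time_namelookup is the DNS part and
# time_starttransfer approximates the TTFB.
curl -o /dev/null -s \
  -w 'dns=%{time_namelookup}s connect=%{time_connect}s tls=%{time_appconnect}s ttfb=%{time_starttransfer}s\n' \
  https://www.example.com/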


Re: [DNSOP] Fundamental ANAME problems

2018-11-05 Thread Ray Bellis




On 06/11/2018 04:07, Joe Abley wrote:


Specifically, is the wildcard owner name a real problem in the grand
scheme of things? I understand that wildcards are used by some people
for names that feature in HTTP URIs, but I'm struggling to imagine
using a wildcard at a zone cut; [...]


You're not wrong, because most often the wildcard is indeed a label 
below that cut.


However, the intent is that this record would eventually replace *all* 
use of CNAME for web redirection regardless of whether at the zone cut 
or not.


This isn't a wildcard example, but here's a re-post of a currently 
impossible zone configuration from one of my emails Sunday:


$ORIGIN example.com
@                 IN SOA   ...
                  IN NS    ...
company-division IN MX
company-division IN CNAME 

Replacing that CNAME with HTTP makes this configuration possible.


To be clear, the rules are clear and you should feel as empowered as
anybody to apply for an early assignment of an RRTYPE and start
writing code. If I sounded like I was arguing against that I
definitely apologise!


No worries! :)


However I think that a more coordinated approach that involves people
from both web and DNS communities to understand the problem space is
more sensible, though, and more likely to be productive for this
working group. It's not clear to me that either community has a great
track record just guessing at what the other one wants.


I've been actively socialising this with web people since Saturday even 
before the draft was submitted.  I'm going to be talking about it 
briefly at HTTP-bis this morning.


This draft is IMHO not so much a "guess", but a "starting point" based 
on what web folks said at the side meeting in Montreal.


Yes, it'll require browser implementors to update their code, but the 
alternative is breaking the camel's back.


cheers,

Ray



Re: [DNSOP] Fundamental ANAME problems

2018-11-05 Thread Joe Abley
Hi Ray,

> On Nov 5, 2018, at 22:38, Ray Bellis  wrote:
>
> There *is* a big failing of SRV that's independent of the CNAME apex use 
> case, and that is its lack of support for wildcards.  Since my proposal 
> doesn't use underscore prefix labels, wildcards will work, and this is an 
> important requirement for some large website operators.

I realise it's 4am and I shouldn't even be awake, never mind replying
to dnsop mail, but it's not clear to me what the use case is here.

Specifically, is the wildcard owner name a real problem in the grand
scheme of things? I understand that wildcards are used by some people
for names that feature in HTTP URIs, but I'm struggling to imagine
using a wildcard at a zone cut; if a wildcard label doesn't correspond
to a zone apex, why is it a problem that needs fixing? Didn't Ed
already clarify the use of CNAME with wildcards in RFC 4592 twelve
years ago?

> The cost to the DNS community of *trying* my proposed HTTP record is pretty 
> negligible.

To be clear, the rules are clear and you should feel as empowered as
anybody to apply for an early assignment of an RRTYPE and start
writing code. If I sounded like I was arguing against that I
definitely apologise!

However I think that a more coordinated approach that involves people
from both web and DNS communities to understand the problem space is
more sensible, though, and more likely to be productive for this
working group. It's not clear to me that either community has a great
track record just guessing at what the other one wants.


Joe



Re: [DNSOP] Fundamental ANAME problems

2018-11-05 Thread Patrik Fältström
On 3 Nov 2018, at 23:32, Måns Nilsson wrote:

> _http._tcp.example.org. IN URI 10 20 "https://example-lb-frontend.hosting.namn.se:8090/path/down/in/filestructure/"

Btw, this is sort of what I am thinking of for URI, cooked up directly after 
dinner. Could be a wrapper around curl that fetches stuff. Probably hundreds of 
bad stuff in what I have below, but still...you get the point.

You can try with http://www.frobbit.se/ or http://frobbit.se/ for example, as I 
have URI records for them.

#!/bin/sh
# Toy URI-record wrapper: pull the host and path out of the URL in $1, look up
# a URI record at _http._tcp.<host>, and if one exists rewrite the URL to the
# record's target (otherwise fall back to the original URL).

PREFIX=_http._tcp
DOMAIN=`echo "$1" | sed 's/^.*\/\/\([^\/]*\)\/.*$/\1/'`   # host part of the URL
QUERY=`echo "$1" | sed 's/^.*\/\/[^\/]*\/\(.*\)$/\1/'`    # path part of the URL
URI=`dig "$PREFIX.$DOMAIN." URI +short`                   # e.g. 10 20 "https://..."
if [ "x$URI" = x ]; then
   NEWURI="$1"                                            # no URI record: keep the URL
else
   NEWURI="`echo $URI | awk '-F"' '{ print $2 }'`${QUERY}" # quoted target + original path
fi

echo curl -H "HOST:$DOMAIN" "$NEWURI"                      # print the curl command to run




Re: [DNSOP] Fundamental ANAME problems

2018-11-05 Thread Mark Andrews
You can measure when to stop publishing A/AAAA records with HTTP by pointing 
the HTTP record at a different address.  Clients that are HTTP record aware 
will use one address and legacy clients will use the other address.

Mark
-- 
Mark Andrews

> On 6 Nov 2018, at 05:16, Tony Finch  wrote:
> 
> Joe Abley  wrote:
>> On Nov 5, 2018, at 15:35, Måns Nilsson  wrote:
>> 
 I think a resolver-side or client-side alternative (like the various
 web-specific record types we have been discussing) is defintely soemthing
 we should aim for in the long term, but that isn't what this work is
 about.
>>> 
>>> IMNSHO _any_ work on "fixing CNAMES at apex" that gets traction is
>>> a spanner in the works for what we seem to agree is a better solution.
>>> An interim fix will be deployed and stall every attempt at DTRT.
>> 
>> I think you are both right.
>> 
>> First, pragmatically speaking, there is clearly demand for something
>> that can do "CNAME at apex". DNS companies sell it, people buy it. It
>> already exists, but in as many flavours as there are providers that
>> support it, so interop is difficult. Having multiple providers is good;
>> interop makes that easier. Maybe there's work that the IETF could do
>> here, but I would suggest that a solution that nobody implements is not
>> much use.
> 
> Exactly.
> 
>> A reasonable starting point would be to survey the existing
>> implementations and ideally get the enterprise DNS providers responsible
>> to join in.
> 
> I've had informal chats with people from a number of big DNS providers.
> 
> * One or two big DNS providers see better interop as beneficial to
>  themselves as well as their customers. (I guess if two DNS providers
>  can get a customer to pay both of them then everyone is happy!)
> 
> * Standardization is a tempting opportunity to rationalize services and
>  reduce technical debt.
> 
> * Dynamic on-demand ANAME substitution is difficult to do with reliably
>  good performance at scale.
> 
> * Frequent UPDATEs are not something you want at large scale.
> 
> * One CDN said they wouldn't do general ANAME, only their existing
>  vertically-integrated setup where the CDN controls the (notional)
>  target.
> 
> So it's a mixture of great desire to have a solution to the problem, but
> the auth-side solutions are unappealing to many key players. My plan was
> basically to write a draft that waves its hands vigorously and says, yes!
> whatever you are doing can be made to fit ANAME! But that won't work if
> the response is, We aren't doing anything like ANAME and we don't want to
> do anything like THAT. So I'm feeling less confident that it's going to
> get consensus.
> 
>> Second, what is the longer-term solution that seems least likely to
>> cause painful intestinal cramping and bleeding eyes? I agree that if we
>> want a clean answer we should be looking at the clients, not the
>> authority servers.
> 
> Yes. I kind of feel in a weird superposition of states, believing both
> Evan's skepticism, and Ray's optimism.
> 
> I'm not sure what Web browser feature deployment timescales are like now.
> Picking a random example, from https://caniuse.com/#feat=flexbox it looks
> like full flexbox support started appearing in 2012 and it's now at about
> 96% availability (most of the gap being due to IE).
> 
> But for ANAME we're also concerned about other HTTP clients, especially
> dozens of programming-language-specific libraries and long-term-support
> enterpriseware with much slower deployment timescales. And ANAME is also
> useful for ssh clients, though they don't have such a large userbase, but
> ssh is very relevant to Tim's comments in his presentation about on-demand
> compute services.
> 
> Which implies that even if HTTP records become a thing, there will be a
> period of years during which zone admins will also have to provision
> address records for backwards compatibility. What we have seen again and
> again in this kind of situation is that operators will react with, What
> benefit do I get from the effort to learn this new thing and the extra
> work to support it? For HTTP records I fear the practical benefits in the
> short term will be zero.
> 
> Or maybe there are ways to get positive benefits. If ANAME stalls I'm
> inclined to implement auto-provisioning of address records driven by HTTP
> records instead, as a non-standard short-term backwards compatibility
> stop-gap.
> 
> Tony.
> -- 
> f.anthony.n.finch    http://dotat.at/
> Biscay: Cyclonic, becoming westerly later, 5 to 7, increasing gale 8 or severe
> gale 9 for a time in south. Moderate or rough, occasionally very rough in
> south. Rain or thundery showers. Good, occasionally poor.
> ___
> DNSOP mailing list
> DNSOP@ietf.org
> https://www.ietf.org/mailman/listinfo/dnsop

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Fundamental ANAME problems

2018-11-05 Thread Tony Finch
Joe Abley  wrote:
> On Nov 5, 2018, at 15:35, Måns Nilsson  wrote:
>
> >> I think a resolver-side or client-side alternative (like the various
> >> web-specific record types we have been discussing) is definitely something
> >> we should aim for in the long term, but that isn't what this work is
> >> about.
> >
> > IMNSHO _any_ work on "fixing CNAMES at apex" that gets traction is
> > a spanner in the works for what we seem to agree is a better solution.
> > An interim fix will be deployed and stall every attempt at DTRT.
>
> I think you are both right.
>
> First, pragmatically speaking, there is clearly demand for something
> that can do "CNAME at apex". DNS companies sell it, people buy it. It
> already exists, but in as many flavours as there are providers that
> support it, so interop is difficult. Having multiple providers is good;
> interop makes that easier. Maybe there's work that the IETF could do
> here, but I would suggest that a solution that nobody implements is not
> much use.

Exactly.

> A reasonable starting point would be to survey the existing
> implementations and ideally get the enterprise DNS providers responsible
> to join in.

I've had informal chats with people from a number of big DNS providers.

* One or two big DNS providers see better interop as beneficial to
  themselves as well as their customers. (I guess if two DNS providers
  can get a customer to pay both of them then everyone is happy!)

* Standardization is a tempting opportunity to rationalize services and
  reduce technical debt.

* Dynamic on-demand ANAME substitution is difficult to do with reliably
  good performance at scale.

* Frequent UPDATEs are not something you want at large scale.

* One CDN said they wouldn't do general ANAME, only their existing
  vertically-integrated setup where the CDN controls the (notional)
  target.

So it's a mixture of great desire to have a solution to the problem, but
the auth-side solutions are unappealing to many key players. My plan was
basically to write a draft that waves its hands vigorously and says, yes!
whatever you are doing can be made to fit ANAME! But that won't work if
the response is, We aren't doing anything like ANAME and we don't want to
do anything like THAT. So I'm feeling less confident that it's going to
get consensus.

> Second, what is the longer-term solution that seems least likely to
> cause painful intestinal cramping and bleeding eyes? I agree that if we
> want a clean answer we should be looking at the clients, not the
> authority servers.

Yes. I kind of feel in a weird superposition of states, believing both
Evan's skepticism, and Ray's optimism.

I'm not sure what Web browser feature deployment timescales are like now.
Picking a random example, from https://caniuse.com/#feat=flexbox it looks
like full flexbox support started appearing in 2012 and it's now at about
96% availability (most of the gap being due to IE).

But for ANAME we're also concerned about other HTTP clients, especially
dozens of programming-language-specific libraries and long-term-support
enterpriseware with much slower deployment timescales. And ANAME is also
useful for ssh clients, though they don't have such a large userbase, but
ssh is very relevant to Tim's comments in his presentation about on-demand
compute services.

Which implies that even if HTTP records become a thing, there will be a
period of years during which zone admins will also have to provision
address records for backwards compatibility. What we have seen again and
again in this kind of situation is that operators will react with, What
benefit do I get from the effort to learn this new thing and the extra
work to support it? For HTTP records I fear the practical benefits in the
short term will be zero.

Or maybe there are ways to get positive benefits. If ANAME stalls I'm
inclined to implement auto-provisioning of address records driven by HTTP
records instead, as a non-standard short-term backwards compatibility
stop-gap.
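
A minimal sketch of what that stop-gap could look like, assuming a
hypothetical HTTP type and made-up names (the provisioning system, not the
name server, resolves the target and writes the addresses into the zone):

example.com.   IN HTTP  cdn.provider.example.
; generated by the provisioning system from cdn.provider.example. at
; provisioning time, then signed and served as ordinary static data:
example.com.   IN A     192.0.2.10
example.com.   IN AAAA  2001:db8::10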

Tony.
-- 
f.anthony.n.finch    http://dotat.at/
Biscay: Cyclonic, becoming westerly later, 5 to 7, increasing gale 8 or severe
gale 9 for a time in south. Moderate or rough, occasionally very rough in
south. Rain or thundery showers. Good, occasionally poor.
___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Fundamental ANAME problems

2018-11-05 Thread Mark Andrews
People don’t want CNAME at the apex. They want a pointer to a server for a 
service at the apex.
CNAME provided a pointer to a server when the prefix was www. 

This can be done a number of ways.

1) Prefix  + name in rdata.
2) Service specific type + name in rdata. 
3) Generic type + service and name in rdata. 

This is DNS choices from the IAB. 
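
Rough sketches of the three shapes, with hypothetical type mnemonics and
made-up names (only "HTTP" corresponds to anything actually drafted, and
none of these are assigned code points):

; 1) prefix + name in rdata
_http.example.com.  IN SVCPTR  webhost.provider.example.
; 2) service-specific type + name in rdata
example.com.        IN HTTP    webhost.provider.example.
; 3) generic type + service and name in rdata
example.com.        IN SVC     "http" webhost.provider.example.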

ANAME is an extremely complicated version of 2 for HTTP, which may be morphed
into 1 with a well-known service name for other services, but will always have
the risk of being misinterpreted.

There are much simpler ways to achieve 2 than ANAME for HTTP.
 
Mark
-- 
Mark Andrews

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Fundamental ANAME problems

2018-11-05 Thread Paul Vixie




Ray Bellis wrote:



On 06/11/2018 00:36, Paul Vixie wrote:

second reply, on a more general topic:

the "HTTP URI" ...


The additional data is not mandatory.


then, according to marka, the web people won't use it.

--
P Vixie

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Fundamental ANAME problems

2018-11-05 Thread Ray Bellis




On 06/11/2018 00:36, Paul Vixie wrote:

second reply, on a more general topic:

the "HTTP URI" will require a change to bert's teaching resolver (tres), 
which correctly handles unrecognized code points and thus would need no 
changes at all if the additional data weren't mandatory. i think in 
modern terminology, if your proposed addition to the DNS protocol 
requires a change to "tres", it's (a) not "cheap", and (b) part of "the 
camel". we are adding state, logic, and signal. (ouch.)


The additional data is not mandatory.

more broadly: most ideas are bad, including mine, and especially when 
DNS is the subject area. self-deception about how cheap they will be 
looks wretched on us. let's not be that. if a change is to be made, let 
it be because there is _no_ existing way within the standard to 
accomplish some vital task. SRV's lack of wildcard support is adequate 
cause. two RTT's on a cache miss is not. apparent cheapness is not.


Ack, except on that very last point (see previous message) where I think 
we need to consider the relative cost-benefit-analysis of the alternatives.


Ray

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Fundamental ANAME problems

2018-11-05 Thread Ray Bellis

On 06/11/2018 00:32, Paul Vixie wrote:

please don't think this way, and don't do the right thing for the wrong 
reasons. the paragraph above is how the camel came to be -- one draft at 
a time, all well-meaning.


The front running alternative (ANAME) shifts the entire and far more 
considerable complexity entirely into the DNS, and affects both 
authoritative and recursive servers.  Even then, I don't think it'll 
work properly with geo-locating CDNs nor with DNSSEC.


ANAME is less complex for the browsers (zero cost, even), but it's close 
to another whole hump's worth of complexity for The Camel.


I accept that the cost of the HTTP RR is not zero, but if it does 
succeed, that cost will be far far lower than any of the alternatives.


cheers,

Ray

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Fundamental ANAME problems

2018-11-05 Thread Paul Vixie

second reply, on a more general topic:

the "HTTP URI" will require a change to bert's teaching resolver (tres), 
which correctly handles unrecognized code points and thus would need no 
changes at all if the additional data weren't mandatory. i think in 
modern terminology, if your proposed addition to the DNS protocol 
requires a change to "tres", it's (a) not "cheap", and (b) part of "the 
camel". we are adding state, logic, and signal. (ouch.)


more broadly: most ideas are bad, including mine, and especially when 
DNS is the subject area. self-deception about how cheap they will be 
looks wretched on us. let's not be that. if a change is to be made, let 
it be because there is _no_ existing way within the standard to 
accomplish some vital task. SRV's lack of wildcard support is adequate 
cause. two RTT's on a cache miss is not. apparent cheapness is not.


--
P Vixie

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Fundamental ANAME problems

2018-11-05 Thread Paul Vixie
there were several pro-HTTP-URI comments in this thread; i picked the 
one with the most technical meat. brian, jim, paul: thank you for your 
comments.


Ray Bellis wrote:

On 05/11/2018 18:55, Joe Abley wrote:


2. Find a client-side solution to this, and try really hard not to
invent something new that is really just SRV with a hat and a false
moustache.


There *is* a big failing of SRV that's independent of the CNAME apex use
case, and that is its lack of support for wildcards. Since my proposal
doesn't use underscore prefix labels, wildcards will work, and this is
an important requirement for some large website operators.


on that basis, i would ordinarily withdraw my objection, since the 
needed functionality is just not present in the existing code point.



The cost to the DNS community of *trying* my proposed HTTP record is
pretty negligible. Worst case, as Brian put it, is we burn a code point,
add a trivial amount of code to DNS servers, but the browsers don't
adopt it. It wouldn't be the first time, it won't be the last.


please don't think this way, and don't do the right thing for the wrong 
reasons. the paragraph above is how the camel came to be -- one draft at 
a time, all well-meaning.


the fully loaded long term cost to the economy of an "if" statement in a 
widely deployed C program is about USD 10.000. we should never add to 
this externalized cost unless we have a compelling reason to do so. "the 
web people don't want even one extra RTT, ever" is not compelling. "the 
web people need wildcards to work which they don't in SRV" however, is 
compelling.



However, it only takes one of the big browser vendors to decide they'll
support it and I think the rest will shortly thereafter follow suit.

NB: this proposal currently satisfies the criteria for assignment via
expert review per RFC 6895.


as may be, this looks like an argument over who has to fetch the data. 
the browser people don't want it to be them. i can't blame them, but the 
reason SRV specifies opportunistic rather than mandatory additional data 
is to keep the recursive DNS server from both fetching and caching (and 
now, validating) the AAAA/A, from having to keep the SRV transaction 
open longer (while its mandatory AAAA/A additionals are fetched and 
before the very first answer, a possibly soft cache-miss, is sent), and 
to avoid loading the network with a query whose answer might not be used 
(something else about the SRV response might have caused a failure in 
the www transaction such that the AAAA/A is never needed.)
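
for reference, the opportunistic shape described above looks roughly like
this (made-up names; the additional section is filled in only when the
server already has the data, it is never required to go fetch it):

;; ANSWER
_http._tcp.example.com.  IN SRV  0 0 443 lb.provider.example.
;; ADDITIONAL (opportunistic, per RFC 2782)
lb.provider.example.     IN A    192.0.2.10
lb.provider.example.     IN AAAA 2001:db8::10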


those were solid and still-valid engineering-technology and 
engineering-economics considerations. when we add "HTTP URI" we do so at 
costs greater than a code point. and even the code point has to add some 
"if" statements to the globally deployed system. that won't be cheap, 
merely externalized. a cheeseburger that externalizes USD 10 of 
environmental cleanup costs onto the economy is still a USD 10 
cheeseburger, even if it only costs USD 0,25 to make and has a consumer 
price of USD 1.


--
P Vixie

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Fundamental ANAME problems

2018-11-05 Thread Jim Reid



> On 5 Nov 2018, at 15:38, Ray Bellis  wrote:
> 
> The cost to the DNS community of *trying* my proposed HTTP record is pretty 
> negligible.  Worst case, as Brian put it, is we burn a code point, add a 
> trivial amount of code to DNS servers, but the browsers don't adopt it.  It 
> wouldn't be the first time, it won't be the last.

I think this is worth a punt. The risks/costs are low and the benefits are more 
than worth it.

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Fundamental ANAME problems

2018-11-05 Thread Paul Ebersman
mansaxel> IMNSHO _any_ work on "fixing CNAMES at apex" that gets
mansaxel> traction is a spanner in the works for what we seem to agree
mansaxel> is a better solution. An interim fix will be deployed and stall
mansaxel> every attempt at DTRT.

While I agree with this approach in principle, the reality is we've had
a couple of decades and never come up with anything enough better to get
used.

There are times when an 80% solution is better with 0%, even if it might
slow down perfect.

jabley> So for what it's worth, this is what I think we should be doing:

jabley> 1. Make the existing, proprietary, non-interoperable dumpster
jabley>fire better if we can (maybe we can't; the way to tell is
jabley>whether the enterprise DNS people are interested);

Yes. And get buyoff from the browser and large auth folks so it actually
gets used.

jabley> 2. Find a client-side solution to this, and try really hard not
jabley>to invent something new that is really just SRV with a hat
jabley>and a false moustache.

Also yes. Folks saying that SRV won't work for them aren't stupid. They
have their own agendas that don't consider DNS to be the most important
thing to them; to them it's a handy tool. We should respect that
attitude and come up with a legit new solution both sides can live with.

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Fundamental ANAME problems

2018-11-05 Thread Ray Bellis




On 05/11/2018 18:55, Joe Abley wrote:

2. Find a client-side solution to this, and try really hard not to 
invent something new that is really just SRV with a hat and a false 
moustache.


There *is* a big failing of SRV that's independent of the CNAME apex use 
case, and that is its lack of support for wildcards.  Since my proposal 
doesn't use underscore prefix labels, wildcards will work, and this is 
an important requirement for some large website operators.
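
A minimal sketch of the wildcard problem, with made-up names ("HTTP" here
stands for the proposed record type):

*.example.com.            IN A    192.0.2.10   ; matches any otherwise-nonexistent hostname
*.example.com.            IN HTTP lb.provider.example.   ; works: no prefix labels needed
_http._tcp.*.example.com. IN SRV  0 0 443 lb.provider.example.
; not a wildcard at all: per RFC 4592 the "*" label is only special when
; it is the leftmost label, so SRV cannot be wildcarded per hostname.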


The cost to the DNS community of *trying* my proposed HTTP record is 
pretty negligible.  Worst case, as Brian put it, is we burn a code 
point, add a trivial amount of code to DNS servers, but the browsers 
don't adopt it.  It wouldn't be the first time, it won't be the last.


However, it only takes one of the big browser vendors to decide they'll 
support it and I think the rest will shortly thereafter follow suit.


NB: this proposal currently satisfies the criteria for assignment via
expert review per RFC 6895.

Ray

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Fundamental ANAME problems

2018-11-05 Thread manu tman
I like the ANAME idea and find it overall simple if what we are trying to
solve is CNAME at apex. If what is being solved is per service then it is
another story.
As much as I like it, I find the resolution at the auth nameserver a bad
thing for a couple of reasons.

As has been mentioned before:
1) it will add workload on the authoritative nameserver which so far was
mostly doing key/value lookups and now may need to either recurse or
forward to a recursor.
2) the resolution from such lookup will be wrong for resolver/ecs based
answers as you will now get an answer for the recursor at the authoritative
site instead of the client (recursor talking to the auth or ECS). While
doing per site ANAME resolution may make the answers a bit more accurate,
it will definitely not help with operations.

if someone want to do the chaining, I guess they could already do it with
some tooling on their side which will perform regular lookup and update
their zones so essentially making the ANAME resolution an out of band task.

If all that was required was to return an ANAME in the additional section,
it would be pretty straightforward to implement on the authoritative side
and would add little or no complexity or extra workload there.
On the recursor side, this will most likely heavily reuse the CNAME logic
and may not be that complex to implement (implementors may tell otherwise).

Recursors that understand ANAMEs will be able to treat it as a CNAME and
follow the name chain just like for CNAME. If they don't, well nothing has
changed for them.
It may take time before it gets widely deployed, but it would be a simple
solution that could be easily implemented by the auth that are interested
in it, gets picked up as the recursors get upgraded and be backward
compatible during the transition phase.
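
A sketch of that response shape, with made-up names (this is not what the
current ANAME draft specifies, just the lighter variant suggested here):

;; QUESTION
example.com.           IN A
;; ANSWER
example.com.   300     IN A      192.0.2.10     ; static, pre-provisioned answer
;; ADDITIONAL
example.com.   300     IN ANAME  cdn.provider.example.
; an ANAME-aware recursor can chase cdn.provider.example. much as it would
; a CNAME target; an unaware recursor simply uses the A record as-is.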

Manu

On Mon, Nov 5, 2018 at 3:35 PM Måns Nilsson 
wrote:

> Subject: Re: [DNSOP] Fundamental ANAME problems Date: Fri, Nov 02, 2018 at
> 04:39:09PM + Quoting Tony Finch (d...@dotat.at):
> > It's really good to see more discussion about ANAME.
>
> I agree.
>
> > I think a resolver-side or client-side alternative (like the various
> > web-specific record types we have been discussing) is definitely something
> > we should aim for in the long term, but that isn't what this work is
> > about.
>
> IMNSHO _any_ work on "fixing CNAMES at apex" that gets traction is
> a spanner in the works for what we seem to agree is a better solution.
> An interim fix will be deployed and stall every attempt at DTRT.
>
> I am well aware of "perfect being the enemy of good enough" but I'm not
> certain DNAME is "good enough".
>
> --
> Måns Nilsson primary/secondary/besserwisser/machina
> MN-1334-RIPE   SA0XLR+46 705 989668
> Now KEN and BARBIE are PERMANENTLY ADDICTED to MIND-ALTERING DRUGS ...
> ___
> DNSOP mailing list
> DNSOP@ietf.org
> https://www.ietf.org/mailman/listinfo/dnsop
>
___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Fundamental ANAME problems

2018-11-05 Thread Joe Abley
On Nov 5, 2018, at 15:35, Måns Nilsson  wrote:

>> I think a resolver-side or client-side alternative (like the various
>> web-specific record types we have been discussing) is definitely something
>> we should aim for in the long term, but that isn't what this work is
>> about.
> 
> IMNSHO _any_ work on "fixing CNAMES at apex" that gets traction is 
> a spanner in the works for what we seem to agree is a better solution.
> An interim fix will be deployed and stall every attempt at DTRT.

I think you are both right.

First, pragmatically speaking, there is clearly demand for something that can 
do "CNAME at apex". DNS companies sell it, people buy it. It already exists, 
but in as many flavours as there are providers that support it, so interop is 
difficult. Having multiple providers is good; interop makes that easier. Maybe 
there's work that the IETF could do here, but I would suggest that a solution 
that nobody implements is not much use. A reasonable starting point would be to 
survey the existing implementations and ideally get the enterprise DNS 
providers responsible to join in.

Second, what is the longer-term solution that seems least likely to cause 
painful intestinal cramping and bleeding eyes? I agree that if we want a clean 
answer we should be looking at the clients, not the authority servers. We have 
application-specific records like this for mail; I think we can confidently 
call MX a good solution for that problem. We decided that creating an unbounded 
set of application-specific RRTYPEs for this (each with their own semantics, 
each implemented separately) was idiotic, and hence SRV. Let's not abandon 
that thinking unless we really have to.
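
For reference, the two existing patterns being pointed to, with made-up
targets:

example.com.            IN MX   10 mail.provider.example.       ; application-specific (mail)
_sip._tcp.example.com.  IN SRV  0 5 5060 sip.provider.example.  ; generic service location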

Various people have expressed dubious arguments against the use of SRV for 
this. I don't think the answer to that is to create something functionally 
identical with a different name and somehow expect to be able to trick people 
with sleight of hand. I think the answer is to document the use-cases and 
dispassionately assess each of the arguments and work out whether they are 
real. My suspicion is that for a significant proportion of the problem space 
SRV is quite sufficient, and that in pragmatic terms we're really only talking 
about something like three client-side codebases that would need to implement 
it before we could call it universally-deployed.

But I could be wrong and maybe there really is a convincing reason to design 
something HTTP-specific. Either way, I think we need to show our working here, 
and by "we" I mean web people and DNS people who are prepared to work together.

So for what it's worth, this is what I think we should be doing:

1. Make the existing, proprietary, non-interoperable dumpster fire better if we 
can (maybe we can't; the way to tell is whether the enterprise DNS people are 
interested);

2. Find a client-side solution to this, and try really hard not to invent 
something new that is really just SRV with a hat and a false moustache.


Joe
___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Fundamental ANAME problems

2018-11-05 Thread Måns Nilsson
Subject: Re: [DNSOP] Fundamental ANAME problems Date: Fri, Nov 02, 2018 at 
04:39:09PM + Quoting Tony Finch (d...@dotat.at):
> It's really good to see more discussion about ANAME.

I agree. 
 
> I think a resolver-side or client-side alternative (like the various
> web-specific record types we have been discussing) is definitely something
> we should aim for in the long term, but that isn't what this work is
> about.

IMNSHO _any_ work on "fixing CNAMES at apex" that gets traction is 
a spanner in the works for what we seem to agree is a better solution.
An interim fix will be deployed and stall every attempt at DTRT.

I am well aware of "perfect being the enemy of good enough" but I'm not
certain DNAME is "good enough". 

-- 
Måns Nilsson primary/secondary/besserwisser/machina
MN-1334-RIPE   SA0XLR+46 705 989668
Now KEN and BARBIE are PERMANENTLY ADDICTED to MIND-ALTERING DRUGS ...


signature.asc
Description: PGP signature
___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Fundamental ANAME problems

2018-11-04 Thread Ray Bellis




On 04/11/2018 23:02, Paul Ebersman wrote:


Have you confirmed with the large CDNs doing geo-ip, load-balancing, etc
that this is what they want, since they are largely driving all of this?

I'd guess that they would prefer this in the auth layer, where they own
or have contractual relationship with the zone owner.

Yes, as DNS software folks, we'd like to keep auth doing auth and have
only recursive doing lookups but I'm not sure that solves the problem in
a way that will be accepted.


My expectation is that this would work for them exactly the way a CNAME 
does (i.e. via EDNS Client Subnet or similar) but without the restrictions.


Ray

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Fundamental ANAME problems

2018-11-04 Thread Paul Ebersman
ray> Architecturally, the important part of my proposal is that
ray> resolution of the A and  records is done *at the recursive
ray> layer* of the DNS, with no interference with how authoritative
ray> resolution works.

Have you confirmed with the large CDNs doing geo-ip, load-balancing, etc
that this is what they want, since they are largely driving all of this?

I'd guess that they would prefer this in the auth layer, where they own
or have contractual relationship with the zone owner.

Yes, as DNS software folks, we'd like to keep auth doing auth and have
only recursive doing lookups but I'm not sure that solves the problem in
a way that will be accepted.

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Fundamental ANAME problems

2018-11-04 Thread Paul Ebersman
ray> HTTP Redirects cause the URI in the address bar to be changed.  A
ray> lot of the whole "CNAME at the Apex" issue arises because lots of
ray> marketing people don't want end users to have to type *or see* the
ray> www prefix.

ray> Those folks aren't going to stand for their nice clean
ray> "example.com" URL getting replaced with the real CDN address in the
ray> address bar.

Last I heard, they're taking care of this by taking away the address bar
completely. You and I will have to set some kind of debug mode to ever
see this. So that in and of itself isn't a deal breaker. But let's get
comment from the firefox/chrome folks. I agree with you that we're
having some productive back and forth. I think that we've learned some but
not all of each other's spaces.

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Fundamental ANAME problems

2018-11-04 Thread Ray Bellis

On 04/11/2018 18:16, Brian Dickson wrote:


Is the apex thing an optimization only (i.e. is it acceptable that
the mechanism for apex detection not be 100% effective)? I think
that's the input needed before it makes sense to go down any
particular branch of design work, by either the http folks or the dns
folks.


It's not a question of apex *detection*, it's that DNS simply doesn't
allow for the *provisioning* of a CNAME record at the apex.

Nor can you put a CNAME alongside any other "useful" DNS records, so you 
can't, for example, have a zone that looks like this:


$ORIGIN example.com
@                IN SOA   ...
                 IN NS    ...
company-division IN MX    ...
company-division IN CNAME ...

[I should perhaps put that as an example in the draft]
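
The apex case itself is the same rule in action, since the SOA and NS
records necessarily occupy the apex owner name (a sketch, with a made-up
target):

$ORIGIN example.com
@  IN SOA    ...
   IN NS     ...
   IN CNAME  cdn.provider.example.   ; illegal: CNAME cannot coexist with other data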


Is knowing when something is (or is at least expected to be) the
apex, one of the fundamental drivers on this issue?


No, the mechanism is general purpose and could be used for any
domain name that requires redirection (at the DNS / hostname level) to a
hostname that does not match the domain name in the URI.

[snipping irrelevant stuff about effective TLD lists]


Related, follow-on question: If that new record type were pointing to
the owner name (i.e. itself), or otherwise signaled that an A/AAAA at
the owner name should be used, would having the authority server
return the A/AAAA records as well fix the multiple-lookups issue,
i.e. not require the lookup of the A/AAAA records if the new record
type was not present?


Although it's not documented as such yet (and I should, because it's an
important clarification) an HTTP record that points to itself would be an
error, in the same way that a CNAME loop would be.

Architecturally, the important part of my proposal is that resolution of
the A and AAAA records is done *at the recursive layer* of the DNS, with
no interference with how authoritative resolution works.

[the only exception is if EDNS Client Subnet is in use, but that's a
case where the authoritatives already know how to generate the right
answer for any particular subnet]


[snippage]



I anticipate both the new record type and additional processing,
would be less problematic on authority operators than ANAME.


The new record type has *no* implications at all for authority operators
other than in their provision systems, and since it uses the same RDATA
format as a PTR or CNAME record the implications there should be minimal.

It adds more additional processing, but does not change the general 
model of mostly-static zone data, which plays nice with DNSSEC.


There's *no* additional processing done in authoritatives.  I suppose
theoretically if the target happens to be on the same server as the
owner name then an authority might also include the A and AAAA records,
but it's not specified that way at the moment.
For the recursives, the incremental change is the same additional 
processing as authority servers (additional data if empty/self-ref, 
possibly with extra queries, or CNAME-type processing.)


Roughly, except per above, this is the *only* incremental change in the
DNS infrastructure.   The other necessary change is in the HTTP clients
themselves, which IMHO is how it should be.


Also: would this new record type (and query/response logic) make
sense to use everywhere, not just at a zone apex?


Yes, per above.


I think there would be nothing implicitly difficult in making it
universal, on both the authority and recursive servers. For the
recursive servers, I don't think they even have the ability to
distinguish whether a name is apex or not (!!). For authorities, I
don't think there's anything intrinsically apex-ish about what is
required, so it would probably be less work to support the new record
type anywhere.


It's not apex specific at all, but its design is specifically intended 
to address the CNAME at the apex issue.


kind regards,

Ray


___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Fundamental ANAME problems

2018-11-04 Thread Ray Bellis



On 04/11/2018 18:31, Patrik Fältström wrote:


The semantics is exactly like a CNAME + HTTP Redirect.


The latter part is what I expected, and why I think it's a non-starter.

HTTP Redirects cause the URI in the address bar to be changed.  A lot of 
the whole "CNAME at the Apex" issue arises because lots of marketing 
people don't want end users to have to type *or see* the www prefix.


Those folks aren't going to stand for their nice clean "example.com" URL 
getting replaced with the real CDN address in the address bar.


Ray

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Fundamental ANAME problems

2018-11-04 Thread Patrik Fältström
On 4 Nov 2018, at 11:10, Ray Bellis wrote:

> -1

:-)

> What are the semantics of this?

The semantics is exactly like a CNAME + HTTP Redirect.

Provisioning is like any provisioning in the DNS, with the advantage that you
can delegate the prefixed domain, just as you can with any _tcp and similar
prefix domain, to whatever administrative entity manages the web. That way you
can separate the DNS and web administration between two different entities.
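
A sketch of that delegation, with made-up operator names (the registrant's
zone delegates just the prefix to the web provider, which then publishes
the URI record in its own zone):

; in example.org, operated by the registrant's DNS provider:
_http._tcp.example.org.  IN NS   ns1.webhost.example.
_http._tcp.example.org.  IN NS   ns2.webhost.example.
; in the delegated _http._tcp.example.org zone, operated by the web provider:
_http._tcp.example.org.  IN URI  10 1 "https://lb.webhost.example/"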

That some people want a record at the apex is a big mistake, as it forces the
administration of that name to be explicitly mixed between two entities that
do different things.

See how well the AD delegation works where you can split the AD functionality 
from the DNS functionality by doing "the right delegations", which makes 
enterprise DNS much easier to set up than if (more) stuff is to be entered at 
the apex.

We have apex overload, and that must be taken care of.

   Patrik

> - What appears in the user's UI when the URI record completely replaces the 
> site name entered by the user?
>
> - Which domain name is the SSL cert validated against?
>
> - Which domain name appears in the HTTP Host: header?
>
> - What is the HTTP "Origin" of the resulting content,
>   and which domain's cookies are accepted / sent?
>
> - What if there's also a URI record for 'example-lb-frontend.hosting.namn.se' 
> ?
>
> - How do I provision a wildcard record for this?
>
> I see absolutely zero chance of the web community embracing this.
>
> Ray
>
> ___
> DNSOP mailing list
> DNSOP@ietf.org
> https://www.ietf.org/mailman/listinfo/dnsop


signature.asc
Description: OpenPGP digital signature
___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Fundamental ANAME problems

2018-11-04 Thread Brian Dickson
Ray Bellis wrote:
>
> I don't think that either SRV or URI are usable for the primary use
> case, i.e. allowing a domain owner to put a record at the apex of
> their zone that points at the hostname of the web provider they want to
> use.
>
>

Is the apex thing an optimization only (i.e. is it acceptable that the
mechanism for apex detection not be 100% effective)? I think that's the
input needed before it makes sense to go down any particular branch of
design work, by either the http folks or the dns folks.

Is knowing when something is (or is at least expected to be) the apex, one
of the fundamental drivers on this issue?

Is the question of how the "effective TLD list" is maintained, published,
integrated, etc., something we could/should look closer at, or is that a
big can of worms we should stay away from? E.g. get it published by IANA,
through multiple mechanisms, and formalize the update to it through the
TLD/registry creation process at ICANN? Is it necessary to change/improve
the process, publication, ownership, etc. in order for browsers to rely on
it at a protocol level?

I.e. rather than relying on hunting for SOA records in DNS and/or looking for
NSes for zone cuts, could a first-pass determination be made by looking at
whether the "authority" of the URI, aka the FQDN, is one level below an
"effective TLD"? If I have "example.com" or "example.co.jp", or "
example.com.au", and I have the current, comprehensive list of places
someone can register domains, I know that in those cases, "example" is an
apex name. Or does the solution to this problem need to handle apex names
for internal-to-a-registrant zone cuts?

Use of ETLDs could drive the initial query to be some new record type that
exists at a zone apex.
(The ETLD list gives an answer to "is it the apex" which is either "yes" or
"maybe", because zone cuts below the ETLD+1 level can exist, obviously.)

Related, follow-on question:
If that new record type were pointing to the owner name (i.e. itself), or
otherwise signaled that an A/AAAA at the owner name should be used, would
having the authority server return the A/AAAA records as well fix the
multiple-lookups issue, i.e. not require the lookup of the A/AAAA records
if the new record type was not present?
(The distinction between a self-referential or empty RDATA answer, and a
NOERROR/NODATA lack of RR, would be the indicator on whether the zone owner
wanted to play nice with the optimizations. The client would probably also
need to do separate A/AAAA queries in the NOERROR/NODATA case, since the
triggering URI would still need its authority name to be resolved to an
address.)

I.e. :
@ NEWRRTYPE @ ; (or empty RDATA, signaling that the response needs to
include A/AAAA, and that the client should expect A/AAAA in the response)
@ A    <IPv4 address>
@ AAAA <IPv6 address>

or
@ NEWRRTYPE www.myproviders.fqdn ; points out of zone, so no additional
data -- but client/recursive needs to chase RDATA down like it was a CNAME

I realize in this hypothetical model, both authority and recursive servers
need updates.
Or is the returning A/AAAA on the empty/self-ref a strictly optional
optimization, and thus not a barrier to adoption?
(The client can't do a parallel query for the A/AAAA records, because it
needs the first answer to get the name for the second query. But it can do
the queries sequentially without the assistance of the recursive.)

I anticipate both the new record type and additional processing, would be
less problematic on authority operators than ANAME.
It adds more additional processing, but does not change the general model
of mostly-static zone data, which plays nice with DNSSEC.

For the recursives, the incremental change is the same additional
processing as authority servers (additional data if empty/self-ref,
possibly with extra queries, or CNAME-type processing.)

Also: would this new record type (and query/response logic) make sense to
use everywhere, not just at a zone apex? I think there would be nothing
implicitly difficult in making it universal, on both the authority and
recursive servers. For the recursive servers, I don't think they even have
the ability to distinguish whether a name is apex or not (!!). For
authorities, I don't think there's anything intrinsically apex-ish about
what is required, so it would probably be less work to support the new
record type anywhere.

Brian
___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Fundamental ANAME problems

2018-11-04 Thread Paul Vixie




Ray Bellis wrote:

> ... "URI RR" ...

I see absolutely zero chance of the web community embracing this.


as evidenced by RFC 8484, the web community seems to regret basing their 
work on the Internet System, and is now moving independently. this may 
mean that offering them something like "HTTP RR" which can't work better 
than SRV or URI already works, because they speciously refuse to embrace 
these working technologies, will buy you nothing.


--
P Vixie

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Fundamental ANAME problems

2018-11-04 Thread Ray Bellis




On 04/11/2018 15:05, Paul Vixie wrote:


as evidenced by RFC 8484, the web community seems to regret basing
their work on the Internet System, and is now moving independently.
this may mean that offering them something like "HTTP RR" which can't
work better than SRV or URI already works, because they speciously
refuse to embrace these working technologies, will buy you nothing.


Members of both communities had what I felt was a very productive side
meeting during the Montreal IETF, at which I also believe there was an
acceptance that both "sides" need to come together for a mutually
agreeable solution.

I don't think that either SRV or URI are usable for the primary use
case, i.e. allowing a domain owner to put a record at the apex of
their zone that points at the hostname of the web provider they want to
use.   I personally don't think that ANAME is a good solution either.

Hence my draft which I hope is a move towards that middle ground that we
can all work with.  I have already had positive feedback from some HTTP 
people, but antagonising them won't help.


Ray

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Fundamental ANAME problems

2018-11-04 Thread Ray Bellis



On 04/11/2018 12:53, Patrik Fältström wrote:

On 3 Nov 2018, at 23:32, Måns Nilsson wrote:


_http._tcp.example.org. IN URI  10 20   
"https://example-lb-frontend.hosting.namn.se:8090/path/down/in/filestructure/";

We already have this. We need not build a new mechanism.


+1


-1

What are the semantics of this?

- What appears in the user's UI when the URI record completely replaces 
the site name entered by the user?


- Which domain name is the SSL cert validated against?

- Which domain name appears in the HTTP Host: header?

- What is the HTTP "Origin" of the resulting content,
  and which domain's cookies are accepted / sent?

- What if there's also a URI record for 
'example-lb-frontend.hosting.namn.se' ?


- How do I provision a wildcard record for this?

I see absolutely zero chance of the web community embracing this.

Ray

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Fundamental ANAME problems

2018-11-03 Thread Patrik Fältström
On 3 Nov 2018, at 23:32, Måns Nilsson wrote:

> _http._tcp.example.org. IN URI10 20   
> "https://example-lb-frontend.hosting.namn.se:8090/path/down/in/filestructure/";
>
> We already have this. We need not build a new mechanism.

+1

   Patrik


signature.asc
Description: OpenPGP digital signature
___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Fundamental ANAME problems

2018-11-03 Thread Måns Nilsson
Subject: Re: [DNSOP] Fundamental ANAME problems Date: Sat, Nov 03, 2018 at 
12:04:18PM -0700 Quoting Joe Abley (jab...@hopcount.ca):
> On Nov 3, 2018, at 03:20, Bob Harold  wrote:
> 
> > My preference would be a *NAME record that specifies which record types it 
> > applies to.  So one could delegate A and AAAA at apex to a web provider, MX 
> > to a mail provider, etc.  That would also be valuable at non-apex names.  
> > But I am happy to support ANAME as part of the solution.
> 
> I don't understand this suggestion.
> 
> Some use-cases (or even hypothetical examples) might help.

example.org. IN SOA foo bar 2018102802 300 3600 360 300
example.org. IN NS  primary.se.
example.org. IN NS  secondary.se.
example.org. IN MX  10 some.host.gmail.com
_http._tcp.example.org. IN URI  10 20   
"https://example-lb-frontend.hosting.namn.se:8090/path/down/in/filestructure/";
example.org. IN TXT "v=spf0x41 not valid. because SPF records belong in 
RRtype 99."
example.org. IN AFSDB   1 db0.example.org.
example.org. IN AFSDB   1 db1.example.org.
example.org. IN AFSDB   1 db2.example.org.
example.org. IN SPF "v=spf1 +all"


We already have this. We need not build a new mechanism. 

/Måns, sounding like a broken record. 
-- 
Måns Nilsson primary/secondary/besserwisser/machina
MN-1334-RIPE   SA0XLR+46 705 989668
Is this an out-take from the "BRADY BUNCH"?


signature.asc
Description: PGP signature
___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Fundamental ANAME problems

2018-11-03 Thread Joe Abley
On Nov 3, 2018, at 03:20, Bob Harold  wrote:

> My preference would be a *NAME record that specifies which record types it 
> applies to.  So one could delegate A and AAAA at apex to a web provider, MX 
> to a mail provider, etc.  That would also be valuable at non-apex names.  But 
> I am happy to support ANAME as part of the solution.

I don't understand this suggestion.

Some use-cases (or even hypothetical examples) might help.


Joe

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Fundamental ANAME problems

2018-11-03 Thread Lanlan Pan
Brian Dickson wrote on Fri, Nov 2, 2018 at 9:38 AM:

> On Thu, Nov 1, 2018 at 5:14 PM John Levine  wrote:
>
>> I can't help but note that people all over the Internet do various
>> flavors of ANAME now, and the DNS hasn't fallen over.  Let us not make
>> the same mistake we did with NAT, and pretend that since we can't find
>> an elegant way to do it, we can put our fingers in our ears and it
>> will go away.
>>
>>
> Did you not read my full message?
> I didn't say don't do that, I said let's do it in an elegant way.
> Then I provided a few examples of how to do that.
>
> What is being done now is not ANAME by any stretch; it is
> vertically-integrated apex CNAME flattening.
> Yes, there are several providers doing it.
> Their customers are locked in to a single provider, precisely because of
> that vertical integration.
> None of their customers can have multi-vendor redundancy with feature
> parity.
> While not a prime motivation for ANAME or its alternatives, it is
> certainly (or should be) one of its goals.
>
> The fact that each existing vendor's solution is, and requires, vertical
> integration, means each is fundamentally a closed system, with no interop
> possible.
>
> What ANAME, and the other suggested things, are doing is figuring out how
> to do interoperable stuff that allows something kind of like a CNAME, to
> co-exist at an apex.
>
> Can you point me to a non-closed, non-vertically-integrated ANAME-like
> thing that offers interoperable multi-vendor support?
>
> I think you are confusing "dynamic update of A based on
> meta-data-configured FQDN" with actual ANAME.
>
> So, DNS not having fallen over yet, has nothing at all to do with ANAME.
>
>
>> In article <
>> cah1icirxysyb3sao8f1jy-q4melmqapsfo-7x5iddufdt_u...@mail.gmail.com> you
>> write:
>> >The requirement on update rate is imposed externally by whichever entity
>> >operates the ANAME target. In other words, this is not under the direct
>> >control of the zone operator, and is a potentially (and very
>> >likely) UNBOUNDED operational impact/cost.
>>
>> "Something very bad will happen if I do that."  "OK, so don't do
>> that."  My aname-ish code has a maximum update rate, and I expect
>> everyone else's does too.  Yeah, the ANAMEs won't be in sync with
>> the hostile remote server, but I can't get too upset about that.
>>
>
> How many zones do you operate this way?
> What is the maximum update rate?
> Are those zones you operate on behalf of paying customers?
> If those were paying customers, and the records got out of sync, don't you
> think the customers would get upset?
>
> That's the primary point; when non-toy situations with paying customers
> are considered, it isn't up to you to decide what the update rate is, and
> you don't have the luxury of not caring.
>
> It isn't whether it works for you; it's whether it works for EVERYBODY.
> If it doesn't, then we need to work harder on the problem.
>
>
>>
>> >Third, there is an issue with the impact to anycast operation of zones
>> with
>> >ANAMEs, with respect to differentiated answers, based on topological
>> >locations of anycast instances.
>>
>> How is this different from CNAMEs via 8.8.8.8 and other anycast
>> caches?  The cache has no relation to the location of the client unless
>> you use one of the client location hint hacks.
>>
>
> Because authority servers for the same zone, when not doing stupid DNS
> tricks, are in sync.
> This is by design, and is the expectation of clients, resolvers, and
> registrants.
>
> Anycast caches do not have any expectation or requirement to be sync'd,
> and in particular, due to stupid DNS tricks, are typically topologically
> sync'd to regional answers.
>
> Anycast caches with smaller footprint or odd customer bases, might do
> those hacks, but even without them, there will be significant differences
> in the contents of those caches, in different locations.
>
> The problem is the ANAME *target* -- that will typically also be
> topologically diverse, e.g. answers supplied will involve stupid DNS tricks.
>
> You can't have your ANAME use only a single view and push that SAME answer
> to all anycast nodes.
> Doing so would break the client->resolver->(anycast auth)->ANAME-target
> model of diversified answers.
> If client/resolver are supposed to hit ANAME-targets (which are themselves
> anycast, but which do stupid DNS tricks to give different answers) and get
> DIFFERENT answers, then having only one instance of the ANAME-target
> returned by the anycast auth (regardless of location) will be an
> "#EpicFail".
>
> Example:
>
>- client in Los Angeles -> resolver somewhere in California -> ??? ->
>AWS obfuscated-name -> California IP address (based on resolver IP, or
>maybe client-subnet)
>- client in Boston -> resolver somewhere in New England -> ??? -> AWS
>obfuscated-name -> New York IP address (based on resolver IP, or maybe
>client-subnet)
>- If ??? is an ANAME, which does a tracking query FROM ONE LOCATION,
> 

Re: [DNSOP] Fundamental ANAME problems

2018-11-02 Thread John R Levine

Subject: Re: [DNSOP] Fundamental ANAME problems Date: Fri, Nov 02, 2018 at 
04:03:50PM +0800 Quoting John R Levine (jo...@taugh.com):


I'll defer to other people, but it seems to me that anything that depends on
recursive DNS servers being updated isn't a realistic solution.  We're still
waiting for DNSSEC, after all.


Be as pessimistic as you like, but in Sweden, more than 80% of the ISP
resolvers validate. The DNS can change, at a sometimes glacial speed,
but it does change.


Sure, but DNSSEC addresses a huge security problem, and it's taken a 
decade to get fairly wide adoption.  ANAME basically works around a 
configuration mistake.  If we can't solve it in the servers with the 
configuration problems, it's not worth solving.



Regards,
John Levine, jo...@taugh.com, Taughannock Networks, Trumansburg NY
Please consider the environment before reading this e-mail. https://jl.ly

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Fundamental ANAME problems

2018-11-02 Thread Christian Huitema

On Nov 3, 2018, at 12:28 AM, Måns Nilsson  wrote:

>> I'll defer to other people, but it seems to me that anything that depends on
>> recursive DNS servers being updated isn't a realistic solution.  We're still
>> waiting for DNSSEC, after all.
> 
> Be as pessimistic as you like, but in Sweden, more than 80% of the ISP
> resolvers validate. The DNS can change, at a sometimes glacial speed,
> but it does change.

According to https://ithi.research.icann.org/graph-m5.html, the worldwide 
fraction of public DNS that performs DNSSEC validation is about 25%.

--Christian Huitema
___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Fundamental ANAME problems

2018-11-02 Thread Paul Vixie
for the love of god, please, do not add more complexity, logic, 
computation, or network fetching to recursive name servers. if it's your 
belief that a static solution can't work, push for SRV.


___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Fundamental ANAME problems

2018-11-02 Thread Richard Gibson
I haven't reviewed the full draft yet, but am happy to see some people 
echoing my sentiments from earlier versions [1]. I particularly wanted 
to agree with some statements from Bob Harold.


On 11/2/18 15:20, Bob Harold wrote:
Another option to give users is a non-updating fallback A record, that 
could point to a web redirect.  That saves all the hassle of updates.


YES! This means a slightly worse fallback-only experience for users 
behind ANAME-ignorant resolvers that query against ANAME-ignorant 
authoritatives (the introduction of ANAME awareness to /either/ 
component allowing an opportunity to provide better address records by 
chasing the ANAME target), but provides a dramatic reduction in the 
amount of necessary XFR traffic. And even more importantly, it forces 
TTL stretching to be an explicit decision on the part of those 
administrators who choose to perform manual target resolution and update 
their zones to use them as fallback records (as they would do now to 
approximate ANAME anyway), rather than an inherent and enduring aspect 
of the functionality.


Treating ANAME-sibling address records as fallback data also supports 
better behavior for dealing with negative results from resolving ANAME 
targets (NODATA, NXDOMAIN, signature verification failure, response 
timeout, etc.)—serve the fallbacks.
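
A sketch of the fallback shape being argued for, assuming the draft's ANAME
type and made-up names (the sibling address records are provisioned once as
static data rather than continuously rewritten from the target):

example.com.   IN ANAME  cdn.provider.example.
example.com.   IN A      192.0.2.10      ; static fallback, e.g. a web redirector
example.com.   IN AAAA   2001:db8::10    ; static fallback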


My preference would be a *NAME record that specifies which record 
types it applies to.  So one could delegate A and AAAA at apex to a 
web provider, MX to a mail provider, etc.  That would also be valuable 
at non-apex names.  But I am happy to support ANAME as part of the 
solution.
I agree on both counts (arbitrary type-specificity and deferment to a 
later date).



[1]: https://www.ietf.org/mail-archive/web/dnsop/current/msg21722.html

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Fundamental ANAME problems

2018-11-02 Thread Bob Harold
My thoughts at the bottom...

On Thu, Nov 1, 2018 at 6:34 PM Brian Dickson 
wrote:

> Greetings, DNSOP folks.
>
> First, a disclaimer and perspective statement:
> These opinions are mine alone, and do not represent any official position
> of my employer.
> However, I will note that it is important to have the perspective of one
> segment of the DNS ecosystem, that of the authority operators (are also
> known as DNS hosting operators.)
> IMNSHO, authority operators provide a critical element of the DNS
> ecosystem, operating DNS zones for the vast majority of DNS registrants.
>
> The important element of this perspective is, that changes to how DNS
> operates and scales, if they have an adverse impact on authority operators
> (at scale), have potential knock-on impact to everyone.
> If (and I realize this is a big "if") changes are made which adversely
> affect the operation cost (regardless of whether it is direct or
> indirect/consequential) of operating authority services, this puts at risk
> the ability of registrants to operate their own zones (e.g. if there are
> fewer authority operators, or if prices skyrocket).
> This further puts at risk, the ongoing volume of DNS registrations, which
> impacts the viability of everyone else whose business relies on
> registration fees, directly or indirectly, including TLDs, CDNs, (non-DNS)
> hosting, and even ICANN itself.
> Caveat dnsops.
>
> Given the above, I feel it is important to point out several problems that
> are rooted in the requirement to dynamically update the sibling records
> that is present in the current design of ANAME. (This is the only real
> problem I see, but it's a doozie.)
>
> (The introduction text mentions some of these, but IMHO doesn't adequately
> address their impact.)
>
> First, there is the issue of imposed update frequency.
>
> The requirement on update rate is imposed externally by whichever entity
> operates the ANAME target. In other words, this is not under the direct
> control of the zone operator, and is a potentially (and very
> likely) UNBOUNDED operational impact/cost.
>

> Second, this issue is compounded by scale.
>
> The issue here is, that the larger the entity is that operates zones with
> ANAMEs is, the larger the resulting impact. This is a new, unanticipated,
> asymmetric cost. It has the definite potential to make operating authority
> servers prohibitively costly.
>

> Third, there is an issue with the impact to anycast operation of zones
> with ANAMEs, with respect to differentiated answers, based on topological
> locations of anycast instances.
>

> There is currently an expectation on resolving a given name, that where
> the name is ultimately served (at the end of a *NAME chain) by an entity
> doing "stupid DNS tricks" (e.g. CDNs), that the answer provided is
> topologically appropriate, i.e. gives the "best" answer based on resolver
> (or in the case of client-subnet, client) location.
> When done using CNAMEs, the resolver is the entity following the chain,
> and does so in a topologically consistent manner. Each resolver instance
> querying a sequence of anycast authorities which return respective CNAMEs,
> gets its unique, topologically-appropriate answers, and there is no
> requirement or expectation that resolvers in topologically distinct
> locations have any mutual consistency.
> ANAME places the authority servers in an anycast cloud, in a "Hobson's
> choice" scenario. Either a single, globally identical sibling value is
> replicated to the anycast instances (which violates the expectation of
> resolvers regarding "best" answer), or each anycast instance needs to do
> its own sibling maintenance (with all that implies, including on-the-fly
> DNSSEC signing), or the anycast cloud now has to maintain its own set of
> divergent, signed answers at the master, and add all the complexity of
> distributing and answering based on resolver topological placement. (The
> last two have significant risk and operational complexity, multiplied by
> the volume of zones served, and impacted by the size of the anycast cloud.)
>
> To summarize:
> The requirement to maintain sibling records (A/AAAA) itself is absolutely
> a "camel back breaking" requirement. The issues are: frequency of updates
> required is externally imposed; either the correctness required by ANAME
> targets is broken (using a single A/AAAA value regardless of anycast
> location), or the complexity of performing A/AAAA updates is compounded by
> at least NxM (N anycast locations of authority operator, M disparate
> values provided in response to A/AAAA queries to the ANAME target); plus
> the added requirement of on-the-fly DNSSEC signing is a non-scalable and
> security-challenging non-starter.
>
> Side-note: we, as a community, have been pushing for wide-scale adoption
> of DNSSEC; this definitely places a significant hurdle to adoption,
> precisely in a wide-scale manner, i.e. to the vast majority of DNS
> registrants. It is a big roa

Re: [DNSOP] Fundamental ANAME problems

2018-11-02 Thread Erik Nygren
On Thu, Nov 1, 2018 at 6:34 PM Brian Dickson 
wrote:

> ...
>
> Third, there is an issue with the impact to anycast operation of zones
> with ANAMEs, with respect to differentiated answers, based on topological
> locations of anycast instances.
>
> There is currently an expectation on resolving a given name, that where
> the name is ultimately served (at the end of a *NAME chain) by an entity
> doing "stupid DNS tricks" (e.g. CDNs), that the answer provided is
> topologically appropriate, i.e. gives the "best" answer based on resolver
> (or in the case of client-subnet, client) location.
> When done using CNAMEs, the resolver is the entity following the chain,
> and does so in a topologically consistent manner. Each resolver instance
> querying a sequence of anycast authorities which return respective CNAMEs,
> gets its unique, topologically-appropriate answers, and there is no
> requirement or expectation that resolvers in topologically distinct
> locations have any mutual consistency.
> ANAME places the authority servers in an anycast cloud, in a "Hobsons
> choice" scenario. Either a single, globally identical sibling value is
> replicated to the anycast instances (which violates the expectation of
> resolvers regarding "best" answer), or each anycast instance needs to do
> its own sibling maintenance (with all that implies, including on-the-fly
> DNSSEC signing), or the anycast cloud now has to maintain its own set of
> divergent, signed answers at the master, and add all the complexity of
> distributing and answering based on resolver topological placement. (The
> last two have significant risk and operational complexity, multiplied by
> the volume of zones served, and impacted by the size of the anycast cloud.)
>

I share your concerns on how useful ANAME will be or whether it will
actually solve problems.
To make matters worse, for an authority to use ANAME in conjunction with a
CDN that returns dynamic responses for mapping and load balancing, it's
likely that the *authority* would also need to use ECS with client subnet
information for the A/AAAA lookups (as is done by some of the largest
anycast open resolver services) but then dynamically re-sign the results.
This means online ECS lookups with caching (often of names/records with
very short TTLs) and online signing, which open up quite a few perf,
scaling, and security challenges such as DDoS-attack-resilience.
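
For illustration only, a minimal sketch (assuming dnspython; the target name,
client subnet, and resolver address are placeholders, not anything from this
thread) of the kind of per-client-subnet lookup an ANAME-processing authority
would have to make: query the target's A RRset while attaching an EDNS Client
Subnet (RFC 7871) option so the CDN can tailor the answer.

    import dns.edns
    import dns.message
    import dns.query
    import dns.rdatatype

    def ecs_lookup(target, subnet, prefix_len, where="8.8.8.8"):
        """Query `target` for A records, forwarding an ECS option for `subnet`."""
        ecs = dns.edns.ECSOption(subnet, prefix_len)       # e.g. "192.0.2.0", 24
        query = dns.message.make_query(target, "A", use_edns=0, options=[ecs])
        response = dns.query.udp(query, where, timeout=3)
        return [rr.address
                for rrset in response.answer
                if rrset.rdtype == dns.rdatatype.A
                for rr in rrset]

The sketch only shows the shape of the problem: one such lookup per distinct
client subnet, with caching and re-signing layered on top of it.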

One of the reasons that at least some CDNs have built integrated-stack
solutions to DNS authorities that can do CNAME collapsing internally is
that this allows them to do proprietary optimizations to resolve some of
the issues described here.

If a stated goal of ANAME is to allow authorities distinct from a CDN to
follow a CNAME chain to a given CDN and then incorporate the results in
their answer for the zone apex (to improve vendor agnosticism), it may
also be worth really validating whether the design will scale and be
performant and interoperate with such CDNs.  Since this is being
configured by the owner of a zone, they are likely incentivized to make
sure that whatever is configured actually works well and provides good
end-user overall performance (not just DNS lookup performance).  Not
incorporating client information (eg, via ECS) into responses and/or
substantially increasing TTLs are likely to cause significant overall
problems.  The result may then be "who will actually use ANAME as
described here and for what?".

Exploring some of the alternatives as you suggest may result in
slower-to-deploy but overall more widely adoptable approaches.

Erik
___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Fundamental ANAME problems

2018-11-02 Thread Måns Nilsson
Subject: Re: [DNSOP] Fundamental ANAME problems Date: Fri, Nov 02, 2018 at 
04:03:50PM +0800 Quoting John R Levine (jo...@taugh.com):

> I'll defer to other people, but it seems to me that anything that depends on
> recursive DNS servers being updated isn't a realistic solution.  We're still
> waiting for DNSSEC, after all.

Be as pessimistic as you like, but in Sweden, more than 80% of the ISP
resolvers validate. The DNS can change, at a sometimes glacial speed,
but it does change.

"E pur si muove"
-- 
Måns Nilsson primary/secondary/besserwisser/machina
MN-1334-RIPE   SA0XLR+46 705 989668
Why are these athletic shoe salesmen following me??


signature.asc
Description: PGP signature
___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Fundamental ANAME problems

2018-11-02 Thread Tony Finch
It's really good to see more discussion about ANAME.

The current draft doesn't discuss scaling issues because my main concern
was to get the rewrite done, so there are a number of gaps, e.g. I deleted
the "operational considerations" section. But that doesn't mean I was
unaware of the problem :-)


Brian Dickson  wrote:

> ANAME places the authority servers in an anycast cloud, in a "Hobsons
> choice" scenario. Either a single, globally identical sibling value is
> replicated to the anycast instances (which violates the expectation of
> resolvers regarding "best" answer), or each anycast instance needs to do
> its own sibling maintenance (with all that implies, including on-the-fly
> DNSSEC signing), or the anycast cloud now has to maintain its own set of
> divergent, signed answers at the master, and add all the complexity of
> distributing and answering based on resolver topological placement. (The
> last two have significant risk and operational complexity, multiplied by
> the volume of zones served, and impacted by the size of the anycast cloud.)
>
> To summarize:
>
> The requirement to maintain sibling records (A/AAAA) itself is absolutely a
> "camel back breaking" requirement.

I think this is the meat of your objection.

I'm aware that most existing ANAME-like implementations are tailored
for target addresses that are controlled by the DNS hosting provider,
which makes it a lot easier :-) I think that's what you were referring to
on your other message about vertical integration.

Next most common is dynamic lookups of arbitrary targets. This is probably
easier to scale to a very large number of zones with ANAMEs than an
UPDATE-style implementation, but I gather from talking to various people
that it's still fiendish. (And that's why the WG consensus is not to
require a dynamic implementation style.)


(BTW, I live in the same city as Hobson did, so as a pedant I must point
out that Hobson's choice was one option, take it or go without. At least
for ANAME there are multiple implementation strategies, however
unpalatable they all are!)


> What are the alternatives?
>
> Fundamentally, the behavior that is desired, and that we are collectively trying
> to preserve, is that of resolver-based *NAME chain resolution, just with
> the ability to do so at the apex of a zone.

I'm not sure why you say "preserve" here, because none of the existing
ANAME-alikes work that way.

A key aim of this draft is to provide something that works similar enough
to existing ANAME-like features, to give zone owners portability across
providers. I spoke to a number of DNS providers in Amsterdam who have
ANAME-like features and who are keen to improve interoperability.


> Ultimately, this means any solution that has this characteristic, can only
> provide backwards compatibility to clients, if resolvers are updated, or
> alternatively, if clients are updated to do whatever is required that
> resolvers which aren't updated won't do.

It's really important that ANAME can be deployed on authoritative servers
without co-operation from anyone else, especially not resolvers. (After
all, that's how the existing implementations work.)

I think a resolver-side or client-side alternative (like the various
web-specific record types we have been discussing) is definitely something
we should aim for in the long term, but that isn't what this work is
about.


Tony.
-- 
f.anthony.n.finch  http://dotat.at/
Malin, Hebrides, Bailey: South or southeast 6 to gale 8, increasing severe
gale 9 at times. Moderate or rough, becoming rough or very rough, occasionally
high later. Rain. Good, occasionally poor.

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Fundamental ANAME problems

2018-11-02 Thread Matthijs Mekking

Hi Brian,

Thanks for your feedback on ANAME. Comments inline.

On 01-11-18 23:34, Brian Dickson wrote:

Greetings, DNSOP folks.


[...]


First, there is the issue of imposed update frequency.

The requirement on update rate is imposed externally by whichever
entity operates the ANAME target. In other words, this is not under the
direct control of the zone operator, and is a potentially (and very
likely) UNBOUNDED operational impact/cost.


As John already pointed out: You are in control of the update rate. It 
may be that the target address records are changing frequently, but it 
is up to the one that processes the ANAME to set an appropriate update 
rate. So I don't see this as a particular worry.
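
For illustration, a minimal sketch (assuming dnspython; the publish() hook and
the numbers are placeholders, this is not from the draft or any shipping
implementation) of keeping the refresh rate under the ANAME processor's own
control, no matter how short a TTL the target operator publishes:

    import time
    import dns.resolver

    REFRESH_FLOOR = 300       # never re-query more often than every 5 minutes

    def refresh_siblings(aname_target, publish, stop=lambda: False):
        current = set()
        while not stop():
            answer = dns.resolver.resolve(aname_target, "A")
            fetched = {rr.address for rr in answer}
            if fetched != current:          # only touch the zone on real change
                publish(sorted(fetched))    # e.g. rewrite + re-sign + notify
                current = fetched
            # honour the target's TTL, but never go below our own floor
            time.sleep(max(answer.rrset.ttl, REFRESH_FLOOR))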


The draft uses DNS UPDATE as an example and that may be a useful 
scenario for one, but a red flag for another. However, I will use ANAME
without DNS UPDATE, and I think my implementation of ANAME is still
compatible with the specification.


What I would like to see changed in the draft is that where ANAME
processing occurs is more relaxed: Currently it focuses on this
happening at zone provisioning time (before the zone is loaded by the
primary), and mentions an optimization algorithm for resolvers that is
optional. However, there is some text in Section 5.2 (Alternatives)
saying that ANAME processing can occur on secondary servers, which I
think may fit your DNS infrastructure better.


The point is that this draft should standardize the way ANAME is 
processed, while giving flexibility on where processing can occur.




Second, this issue is compounded by scale.

The issue here is that the larger the entity that operates zones
with ANAMEs is, the larger the resulting impact. This is a new, 
unanticipated, asymmetric cost. It has the definite potential to make 
operating authority servers prohibitively costly.


I don't see any difference with existing proprietary implementations: 
This statement is true for any CNAME-at-the-apex solution.



Third, there is an issue with the impact to anycast operation of zones 
with ANAMEs, with respect to differentiated answers, based on 
topological locations of anycast instances.


There is currently an expectation on resolving a given name, that where 
the name is ultimately served (at the end of a *NAME chain) by an entity 
doing "stupid DNS tricks" (e.g. CDNs), that the answer provided is 
topologically appropriate, i.e. gives the "best" answer based on 
resolver (or in the case of client-subnet, client) location.


[...]

As I said above: I agree that the draft should relax where ANAME 
processing can occur. I don't see any reason why this processing cannot
be done at the entity that does the stupid DNS trick.


[...]

ANAME places the authority servers in an anycast cloud, in a "Hobsons 
choice" scenario. Either a single, globally identical sibling value is 
replicated to the anycast instances (which violates the expectation of 
resolvers regarding "best" answer), or each anycast instance needs to do 
its own sibling maintenance (with all that implies, including on-the-fly 
DNSSEC signing), or the anycast cloud now has to maintain its own set of 
divergent, signed answers at the master, and add all the complexity of 
distributing and answering based on resolver topological placement. (The 
last two have significant risk and operational complexity, multiplied by 
the volume of zones served, and impacted by the size of the anycast cloud.)


If you are going to do stupid tricks (aka tailored responses), you will 
have to do on-the-fly DNSSEC signing anyway.



Side-note: we, as a community, have been pushing for wide-scale adoption 
of DNSSEC; this definitely places a significant hurdle to adoption, 
precisely in a wide-scale manner, i.e. to the vast majority of DNS 
registrants. It is a big roadblock to DNSSEC adoption, and a move in the 
wrong direction.


I disagree, I think it works pretty well with DNSSEC given the 
requirement that ANAME processing should happen before DNSSEC signing. 
This requirement is reasonable since ANAME processing is in the business 
of tailoring RRsets and any changes made to the zone contents must 
happen before signing.
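
A minimal sketch of that ordering (assuming dnspython; sign_and_publish() is a
hypothetical stand-in for whatever signing pipeline the operator already runs):
the sibling A RRset is substituted while the zone is still unsigned, and only
then is the modified zone handed to the signer.

    import dns.rdataset
    import dns.zone

    def substitute_then_sign(zone_text, origin, owner, addresses, ttl,
                             sign_and_publish):
        zone = dns.zone.from_text(zone_text, origin=origin)
        # 1. ANAME processing: replace the sibling A RRset with tailored answers.
        replacement = dns.rdataset.from_text("IN", "A", ttl, *addresses)
        zone.replace_rdataset(owner, replacement)
        # 2. Only now sign and publish the modified, still-unsigned zone.
        sign_and_publish(zone)
        return zone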




What are the alternatives?

Fundamentally, the behavior that is desired, and that we are collectively
trying to preserve, is that of resolver-based *NAME chain resolution, 
just with the ability to do so at the apex of a zone.


This points to the only logical places that MUST be part of any 
apex-based chaining of resolution: resolvers, or clients.


John is right: It is very hard to rely on a solution that depends on 
recursive name servers to be updated. Hence we look at a solution that 
can be implemented at authorities.



Kind regards,

Matthijs

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Fundamental ANAME problems

2018-11-02 Thread Paul Vixie
i think an ANAME whose only purpose was to inform some outer process that AAAA
and A RRsets should be copied from somewhere periodically using RFC 2136 
UPDATE messages, could be useful if some name server implementors decided to 
offer the feature of doing this internally, "as if UPDATE had been done" but 
not actually requiring a cron job and perl script and so on. it would likely 
be seen as a reasonable compromise by the CNAME flattening DNS hosters of the 
era, and may lead to interoperability.

for better dynamism we should be pushing SRV. and there's no reason why an
FYI document couldn't explain how to use an apex SRV to inform an UPDATE of
apex AAAA and A RRsets. this, too, could be added as a nonstandard enhancement
to authority servers who could do it internally, without cron or perl.
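
for illustration, a sketch of that cron-and-script shape (assuming dnspython,
an illustrative _http._tcp SRV under the apex that names the CDN target, and a
primary at 192.0.2.53 that accepts RFC 2136 UPDATE for the zone -- none of
these names or addresses come from this thread):

    import dns.query
    import dns.resolver
    import dns.update

    ZONE = "example.com."
    PRIMARY = "192.0.2.53"     # illustrative primary address

    def push_apex_addresses():
        srv = dns.resolver.resolve("_http._tcp." + ZONE, "SRV")
        target = srv[0].target                    # the CDN-controlled name
        addresses = dns.resolver.resolve(target, "A")
        update = dns.update.Update(ZONE)
        # replace whatever A RRset sits at the apex with the freshly fetched one
        update.replace(ZONE, addresses.rrset.ttl, "A",
                       *[rr.address for rr in addresses])
        dns.query.tcp(update, PRIMARY)

a name server offering the feature internally would do the same thing "as if
UPDATE had been done", without the cron job or perl.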

it's very likely that the high volume sites of the world will just go on using 
CDN as before, where every apex AAAA or A query is crafted by load estimating 
or triangulation software, no matter what we do here. however, something like 
ANAME, or something like ANAME but based on apex SRV, could be used to inform 
that estimation/triangulation, and could lead to greater interoperability. i'm 
not sure CDN's currently want interoperability, due to competition concerns, 
but i expect that their customers would like a multi-CDN option to avoid lock-
in.

in other words i don't think there's a glaring need for elegance on this one.

vixie


___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Fundamental ANAME problems

2018-11-02 Thread John R Levine

Did you not read my full message?
I didn't say don't do that, I said let's do it in an elegant way.
Then I provided a few examples of how to do that.


I'll defer to other people, but it seems to me that anything that depends 
on recursive DNS servers being updated isn't a realistic solution.  We're 
still waiting for DNSSEC, after all.



What is being done now is not ANAME by any stretch; it is
vertically-integrated apex CNAME flattening.


My version periodically fetches the remote A and AAAA records, invents 
local A and AAAA records, and signs them.  It's a kludge, but it gets the 
job done.


With respect to the whole anycast and CDN thing, it is not my impression 
that ANAME hacks are widely used for big sophisticated sites.  Mine are 
used for small biz sites where my user wants to use my mail but someone 
else's web service.



Can you point me to a non-closed, non-vertically-integrated ANAME-like
thing that offers interoperable multi-vendor support?


Of course not.  That's why we're talking about ANAME.

Regards,
John Levine, jo...@taugh.com, Taughannock Networks, Trumansburg NY
Please consider the environment before reading this e-mail. https://jl.ly

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Fundamental ANAME problems

2018-11-01 Thread Brian Dickson
On Thu, Nov 1, 2018 at 5:14 PM John Levine  wrote:

> I can't help but note that people all over the Internet do various
> flavors of ANAME now, and the DNS hasn't fallen over.  Let us not make
> the same mistake we did with NAT, and pretend that since we can't find
> an elegant way to do it, we can put our fingers in our ears and it
> will go away.
>
>
Did you not read my full message?
I didn't say don't do that, I said let's do it in an elegant way.
Then I provided a few examples of how to do that.

What is being done now is not ANAME by any stretch; it is
vertically-integrated apex CNAME flattening.
Yes, there are several providers doing it.
Their customers are locked in to a single provider, precisely because of
that vertical integration.
None of their customers can have multi-vendor redundancy with feature
parity.
While not a prime motivation for ANAME or its alternatives, it is certainly
(or should be) one of its goals.

The fact that each existing vendor's solution is, and requires, vertical
integration, means each is fundamentally a closed system, with no interop
possible.

What ANAME, and the other suggested things, are doing is figuring out how
to do interoperable stuff that allows something kind of like a CNAME, to
co-exist at an apex.

Can you point me to a non-closed, non-vertically-integrated ANAME-like
thing that offers interoperable multi-vendor support?

I think you are confusing "dynamic update of A based on
meta-data-configured FQDN" with actual ANAME.

So, DNS not having fallen over yet, has nothing at all to do with ANAME.


> In article <
> cah1icirxysyb3sao8f1jy-q4melmqapsfo-7x5iddufdt_u...@mail.gmail.com> you
> write:
> >The requirement on update rate is imposed externally by whichever entity
> >operates the ANAME target. In other words, this is not under the direct
> >control of the zone operator, and is a potentially (and very likely)
> >UNBOUNDED operational impact/cost.
>
> "Something very bad will happen if I do that."  "OK, so don't do
> that."  My aname-ish code has a maximum update rate, and I expect
> everyone else's does too.  Yeah, the ANAMEs won't be in sync with
> the hostile remote server, but I can't get too upset about that.
>

How many zones do you operate this way?
What is the maximum update rate?
Are those zones you operate on behalf of paying customers?
If those were paying customers, and the records got out of sync, don't you
think the customers would get upset?

That's the primary point; when non-toy situations with paying customers are
considered, it isn't up to you to decide what the update rate is, and you
don't have the luxury of not caring.

It isn't whether it works for you; it's whether it works for EVERYBODY.
If it doesn't, then we need to work harder on the problem.


>
> >Third, there is an issue with the impact to anycast operation of zones
> with
> >ANAMEs, with respect to differentiated answers, based on topological
> >locations of anycast instances.
>
> How is this different from CNAMEs via 8.8.8.8 and other anycast
> caches?  The cache has no relation to the location of the client unless
> you use one of the client location hint hacks.
>

Because authority servers for the same zone, when not doing stupid DNS
tricks, are in sync.
This is by design, and is the expectation of clients, resolvers, and
registrants.

Anycast caches do not have any expectation or requirement to be sync'd, and
in particular, due to stupid DNS tricks, are typically topologically sync'd
to regional answers.

Anycast caches with smaller footprint or odd customer bases, might do those
hacks, but even without them, there will be significant differences in the
contents of those caches, in different locations.

The problem is the ANAME *target* -- that will typically also be
topologically diverse, e.g. answers supplied will involve stupid DNS tricks.

You can't have your ANAME use only a single view and push that SAME answer
to all anycast nodes.
Doing so would break the client->resolver->(anycast auth)->ANAME-target
model of diversified answers.
If client/resolver are supposed to hit ANAME-targets (which are themselves
anycast, but which do stupid DNS tricks to give different answers) and get
DIFFERENT answers, then having only one instance of the ANAME-target
returned by the anycast auth (regardless of location) will be an
"#EpicFail".

Example:

   - client in Los Angeles -> resolver somewhere in California -> ??? ->
   AWS obfuscated-name -> California IP address (based on resolver IP, or
   maybe client-subnet)
   - client in Boston -> resolver somewhere in New England -> ??? -> AWS
   obfuscated-name -> New York IP address (based on resolver IP, or maybe
   client-subnet)
   - If ??? is an ANAME, which does a tracking query FROM ONE LOCATION, and
   mirrors that out to many anycast instances, then one of two results will be
   seen in the mini-example case:
  - The client in Los Angeles will receive the New York IP address, or
  - The client in Boston will receive the California IP address.

Re: [DNSOP] Fundamental ANAME problems

2018-11-01 Thread John Levine
I can't help but note that people all over the Internet do various
flavors of ANAME now, and the DNS hasn't fallen over.  Let us not make
the same mistake we did with NAT, and pretend that since we can't find
an elegant way to do it, we can put our fingers in our ears and it
will go away.

In article  
you write:
>The requirement on update rate is imposed externally by whichever entity
>operates the ANAME target. In other words, this is not under the direct
>control of the zone operator, and is a potentially (and very likely)
>UNBOUNDED operational impact/cost.

"Something very bad will happen if I do that."  "OK, so don't do
that."  My aname-ish code has a maximum update rate, and I expect
everyone else's does too.  Yeah, the ANAMEs won't be in sync with
the hostile remote server, but I can't get too upset about that.

>Third, there is an issue with the impact to anycast operation of zones with
>ANAMEs, with respect to differentiated answers, based on topological
>locations of anycast instances.

How is this different from CNAMEs via 8.8.8.8 and other anycast
caches?  The cache has no relation to the location of the client unless
you use one of the client location hint hacks.

I'm not wedded to the current ANAME spec but we have plenty of experience
showing that it's possible to implement without causing disasters?

R's,
John

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


[DNSOP] Fundamental ANAME problems

2018-11-01 Thread Brian Dickson
Greetings, DNSOP folks.

First, a disclaimer and perspective statement:
These opinions are mine alone, and do not represent any official position
of my employer.
However, I will note that it is important to have the perspective of one
segment of the DNS ecosystem, that of the authority operators (also known
as DNS hosting operators).
IMNSHO, authority operators provide a critical element of the DNS
ecosystem, operating DNS zones for the vast majority of DNS registrants.

The important element of this perspective is, that changes to how DNS
operates and scales, if they have an adverse impact on authority operators
(at scale), have potential knock-on impact to everyone.
If (and I realize this is a big "if") changes are made which adversely
affect the operation cost (regardless of whether it is direct or
indirect/consequential) of operating authority services, this puts at risk
the ability of registrants to operate their own zones (e.g. if there are
fewer authority operators, or if prices skyrocket).
This further puts at risk, the ongoing volume of DNS registrations, which
impacts the viability of everyone else whose business relies on
registration fees, directly or indirectly, including TLDs, CDNs, (non-DNS)
hosting, and even ICANN itself.
Caveat dnsops.

Given the above, I feel it is important to point out several problems that
are rooted in the requirement to dynamically update the sibling records
that is present in the current design of ANAME. (This is the only real
problem I see, but it's a doozie.)

(The introduction text mentions some of these, but IMHO doesn't adequately
address their impact.)

First, there is the issue of imposed update frequency.

The requirement on update rate is imposed externally by whichever entity
operates the ANAME target. In other words, this is not under the direct
control of the zone operator, and is a potentially (and very likely)
UNBOUNDED operational impact/cost.

Second, this issue is compounded by scale.

The issue here is that the larger the entity that operates zones with
ANAMEs is, the larger the resulting impact. This is a new, unanticipated,
asymmetric cost. It has the definite potential to make operating authority
servers prohibitively costly.

Third, there is an issue with the impact to anycast operation of zones with
ANAMEs, with respect to differentiated answers, based on topological
locations of anycast instances.

There is currently an expectation on resolving a given name, that where the
name is ultimately served (at the end of a *NAME chain) by an entity doing
"stupid DNS tricks" (e.g. CDNs), that the answer provided is topologically
appropriate, i.e. gives the "best" answer based on resolver (or in the case
of client-subnet, client) location.
When done using CNAMEs, the resolver is the entity following the chain, and
does so in a topologically consistent manner. Each resolver instance
querying a sequence of anycast authorities which return respective CNAMEs,
gets its unique, topologically-appropriate answers, and there is no
requirement or expectation that resolvers in topologically distinct
locations have any mutual consistency.
ANAME places the authority servers in an anycast cloud, in a "Hobsons
choice" scenario. Either a single, globally identical sibling value is
replicated to the anycast instances (which violates the expectation of
resolvers regarding "best" answer), or each anycast instance needs to do
its own sibling maintenance (with all that implies, including on-the-fly
DNSSEC signing), or the anycast cloud now has to maintain its own set of
divergent, signed answers at the master, and add all the complexity of
distributing and answering based on resolver topological placement. (The
last two have significant risk and operational complexity, multiplied by
the volume of zones served, and impacted by the size of the anycast cloud.)
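
For illustration of the resolver-side chain following described above, a
minimal sketch (assuming dnspython; the query name and resolver addresses are
placeholders): the answer section returned by a resolver carries the CNAME
chain plus the final A RRset, and running it against resolvers in different
regions shows the divergent, per-location answers.

    import dns.message
    import dns.query
    import dns.rdatatype

    def show_chain(qname, resolver_ip):
        query = dns.message.make_query(qname, "A")
        response = dns.query.udp(query, resolver_ip, timeout=3)
        for rrset in response.answer:
            if rrset.rdtype == dns.rdatatype.CNAME:
                print(rrset.name, "CNAME", rrset[0].target)
            elif rrset.rdtype == dns.rdatatype.A:
                print(rrset.name, "A", ", ".join(rr.address for rr in rrset))

    # e.g. show_chain("www.example.com.", "8.8.8.8") vs. a resolver elsewhere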

To summarize:
The requirement to maintain sibling records (A/AAAA) itself is absolutely a
"camel back breaking" requirement. The issues are: frequency of updates
required is externally imposed; either the correctness required by ANAME
targets is broken (using single A/AAAA value regardless of anycast
location), or the complexity of performing A/AAAA updates is compounded by
at least NxM (N anycast locations of authority operator, M disparate
values provided in response to A/AAAA queries to the ANAME target); plus
the added requirement of on-the-fly DNSSEC signing is a non-scalable and
security-challenging non-starter.

Side-note: we, as a community, have been pushing for wide-scale adoption of
DNSSEC; this definitely places a significant hurdle to adoption, precisely
in a wide-scale manner, i.e. to the vast majority of DNS registrants. It is
a big roadblock to DNSSEC adoption, and a move in the wrong direction.

What are the alternatives?

Fundamentally, the behavior that is desired, and that we are collectively
trying to preserve, is that of resolver-based *NAME chain resolution, just
with the ability to do so at the apex of a zone.