On 11/7/25 4:39 PM, Pawel Kowalik wrote:
> I made a small test on the performance of both approaches (yes, it's AI 
> generated code, I am not that fast), with inconclusive result (you may 
> name it a draw) so likely from this perspective I have to pull back my 
> argument that there is a major difference.

But let's not have that stop us from bikeshedding this inconsequential design 
choice... :)

> 
> On the usage side, I'm sure some people are and will be scripting RDAP 
> with curl and jq.

Section 1 of RFC 7480:

   In designing these common usage patterns, this document introduces
   considerations for a simple use of HTTP.  Where complexity may
   reside, it is the goal of this document to place it upon the server
   and to keep the client as simple as possible.  A client
   implementation should be possible using common operating system
   scripting tools (e.g., bash and wget).

Something we have abandoned in past RDAP extensions, for sure.

 
> This is how it would look like with object approach
> 
> curl -s 'https://api.example.com/foo.example/map' | jq '.NS.TTL'
> 
> This is the array variant
> 
> curl -s 'https://api.example.com/foo.example/array' | jq '.[] | 
> select(.type[] == "NS") | .TTL'
> 
> Both will work, one is a bit more complex than the other. The latter may 
> (theoretically) also deliver more than one result which might be 
> surprising.

This is a good point. For very simple clients, the object approach is better.
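For concreteness, the two shapes can be mirrored in plain Python. This is only a sketch: the field names ("NS", "TTL", "type") are assumptions lifted from the quoted jq queries, not from any draft text.

```python
import json

# Hypothetical payloads illustrating the two shapes discussed above.
object_form = json.loads('{"NS": {"TTL": 3600}, "A": {"TTL": 300}}')
array_form = json.loads(
    '[{"type": ["NS"], "TTL": 3600}, {"type": ["A", "AAAA"], "TTL": 300}]'
)

# Object approach: one direct lookup, mirroring `jq '.NS.TTL'`.
ns_ttl_object = object_form["NS"]["TTL"]

# Array approach: scan and filter, mirroring `select(.type[] == "NS")`.
# Note this naturally yields a list, which could hold more than one entry.
ns_ttls_array = [e["TTL"] for e in array_form if "NS" in e["type"]]
```

The asymmetry shows up the same way here as in the jq one-liners: the object form is a single key path, while the array form is a filter whose result is inherently plural.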

Also, there is a counter to my argument that the map approach is more 
difficult for software using object mappers. For this particular scenario, the 
number of DNS record types will be very limited... A, AAAA, DS, DNSKEY, etc. 
A client using an object mapper can easily code for that limited set.
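To illustrate that point, here is a rough sketch of what an object-mapped client for the map shape could look like, assuming a small fixed set of record types (class and field names are hypothetical):

```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class RecordTtl:
    TTL: int

# Because the set of DNS record types is small and known in advance,
# each type can simply be a named field on the mapped object.
@dataclass
class TtlMap:
    A: Optional[RecordTtl] = None
    AAAA: Optional[RecordTtl] = None
    NS: Optional[RecordTtl] = None
    DS: Optional[RecordTtl] = None
    DNSKEY: Optional[RecordTtl] = None

    @classmethod
    def from_json(cls, data: dict) -> "TtlMap":
        known = {f.name for f in fields(cls)}
        return cls(**{k: RecordTtl(**v) for k, v in data.items() if k in known})

ttls = TtlMap.from_json({"NS": {"TTL": 3600}, "A": {"TTL": 300}})
```

With the array shape, by contrast, the mapper would bind to a generic list of entries and the client would still filter by type at access time.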

> 
>  From conversations I had it turns out, that operators have very 
> distinct (default) TTLs per record type, so in the end the benefits of 
> array approach may never appear.

Why would they differ, especially between A and AAAA? Maybe someone with 
operational experience can enlighten us.

> Also, from the operator perspective it can be appealing to generate 
> array entries one by one and never "compress" them, because finding out 
> which records carry exactly the same data will be more code.

Agreed, some operators will likely do this.
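The "compression" the quoted text refers to, merging record types that carry identical data into one array entry, might look roughly like this (a sketch only; field names are assumed, and the real draft may define the entries differently):

```python
from collections import defaultdict

# Uncompressed: one entry per record type, trivial for a server to emit.
uncompressed = [
    {"type": ["A"], "TTL": 300},
    {"type": ["AAAA"], "TTL": 300},
    {"type": ["NS"], "TTL": 3600},
]

def compress(entries):
    """Group entries whose non-type data is identical."""
    groups = defaultdict(list)
    for e in entries:
        # Key on everything except "type"; here that is just the TTL.
        key = tuple(sorted((k, v) for k, v in e.items() if k != "type"))
        groups[key].extend(e["type"])
    return [dict(key, type=types) for key, types in groups.items()]

compressed = compress(uncompressed)
```

It is not much code, but it is extra code, so the quoted prediction that some servers will skip it and emit one entry per type seems plausible.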

> 
> Just few $0.01s. And no, I won't block this draft to proceed out of this 
> reason, but as I said I would like such decisions to be taken from the 
> perspective of running code and operational experience on both client 
> and server.

I recall a discussion at the last Vancouver IETF where this was proposed.
I still support it. Is the working group changing its mind on this? :)

-andy (no hats)

_______________________________________________
regext mailing list -- [email protected]
To unsubscribe send an email to [email protected]
