Re: [atlas] Easy way to view DNS results?

2023-07-24 Thread Petr Špaček

On 25. 07. 23 7:31, Seth David Schoen wrote:

I don't know why there isn't a parsed version of the reply included in
the JSON, but perhaps the idea is that the literal details are of
interest to some researchers.  One example that I happened to notice
in trying to answer your question: in parsing a sample DNS measurement
this way, I notice the use of DNS case randomization (also called "0x20
randomization") in some replies but not in others.  Having the literal
DNS query reply could help with analyzing the prevalence of this
mechanism, whereas it might be obscured by a parser that was written by
someone who believed that DNS replies are not case insensitive (which
is true from one point of view, but not from another point of view!).


For people interested in DNS case (in)sensitivity, have a look at
https://datatracker.ietf.org/doc/html/rfc4343
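To illustrate: RIPE Atlas DNS results carry the raw reply base64-encoded (the `abuf` field), so the 0x20 fingerprint Seth noticed can be checked mechanically by looking for a mixed-case QNAME. A minimal, stdlib-only sketch (the parsing is deliberately naive and assumes an uncompressed question name right after the header):

```python
def extract_qname(msg: bytes) -> str:
    # Walk the labels of the first question name, which starts right
    # after the 12-byte DNS header; assumes no compression pointers.
    pos, labels = 12, []
    while msg[pos] != 0:
        length = msg[pos]
        labels.append(msg[pos + 1:pos + 1 + length].decode("ascii"))
        pos += 1 + length
    return ".".join(labels)

def looks_0x20_randomized(msg: bytes) -> bool:
    # Mixed upper- and lower-case letters in the QNAME are the
    # fingerprint of 0x20 case randomization.
    letters = [c for c in extract_qname(msg) if c.isalpha()]
    return any(c.isupper() for c in letters) and any(c.islower() for c in letters)
```

Of course a reply with an all-lowercase name proves nothing by itself; only comparing many replies gives a prevalence estimate.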

--
Petr Špaček


--
ripe-atlas mailing list
ripe-atlas@ripe.net
https://lists.ripe.net/mailman/listinfo/ripe-atlas


Re: [atlas] Proposal: Remove support for non-public measurements [ONLY-PUBLIC]

2023-01-02 Thread Petr Špaček

On 31. 12. 22 14:30, Róbert Kisteleki wrote:

Hello,

I seem to remember that a typical non-public measurement is a one-off,
and as such the data volume collected and stored is likely to be
relatively low.
I don't have the numbers readily available to me at this moment, but
we can certainly dig up recent statistics about this.


Stats would be interesting. Another thing to consider is that lots of 
one-off measurements might mean "not much data" and, at the same time, 
"inflated database indices" (or not, depending on the schema).


Petr Špaček




Cheers,
Robert

On Fri, Dec 30, 2022 at 5:42 PM Randy Bush  wrote:


i asked this quietly before, but let me be more direct.

how much private data is there actually?  what percentage of the stored
data is private?

randy




Re: [atlas] Proposal: Remove support for non-public measurements [ONLY-PUBLIC]

2022-12-16 Thread Petr Špaček

On 15. 12. 22 19:41, Steve Gibbard wrote:

I worry that that would have a “chilling effect” on use of the service.


I hear your concerns, and I have a proposal for how to quantify them:

- First, amend the page to say that all the data are public. (Possibly 
also switch the flag, but that can be a separate step.)


- Second, observe what has changed in the usage pattern.

- Third, evaluate.

That way we don't need to stay in limbo over hypothetical situations; 
we get real data.



Side note about the usefulness of one-off measurement history:
I think it _might_ be interesting for anyone studying a given outage, 
or even studying optimization practices over time.


For example, the DNS community has a service called DNSViz which does just 
one-off measurements, and yet researchers come and write papers based 
on data from DNSViz.


HTH.

--
Petr Špaček



Re: [atlas] Proposal: Remove support for non-public measurements [ONLY-PUBLIC]

2022-12-15 Thread Petr Špaček

On 15. 12. 22 6:57, Alexander Burke via ripe-atlas wrote:

Hello,

 From the linked page:


A total of 173 users scheduled at least one, 81 users scheduled at least two, 
and one specific user scheduled 91.5% of all of these.


That is surprising. What do those numbers look like if you zoom out to the past 
6/12/24 months?

If you can count on one hand the number of users using >90% of the private 
measurements over a longer timeframe than two weeks, then I submit that the choice 
is clear.


I concur.

Ad:

Cons

Users who were specifically using RIPE Atlas because of this feature will 
stop using the service.
Other users may reduce / change their use of the service and perhaps 
ultimately disengage completely.
As a result we could lose some connected probes, as their hosts no longer 
see value in keeping them connected.


I suppose that with such a small set of users of this feature, it should be 
possible to intersect the set of users with the set of probe owners and see 
how many probes would be affected in the worst case, i.e. if all of those 
users withdrew all of their probes.



If it is not a _significant_ part of the network, I would say ... "Nuke it 
from orbit."


--
Petr Špaček



Re: [atlas] Overuse of software probes

2022-03-31 Thread Petr Špaček

On 31. 03. 22 20:15, Ponikierski, Grzegorz via ripe-atlas wrote:
Software probes are a great improvement to the Atlas environment and have made 
deployments much easier and more scalable. One of my probes is also a 
software probe on a Raspberry Pi. However, my team and I noticed that 
software probes are sometimes overused and make measurements more 
cumbersome to interpret. For example, we found 4 software 
probes in AWS Johannesburg [1] and 15 software probes in Hostinger Sao 
Paulo [2]. That's total overkill. Those 15 probes in Hostinger Sao Paulo 
are 21% of all connected probes in Brazil. All of them deliver 
practically the same results, so without additional data filtering (which 
is not easy) they can dramatically skew final results for Brazil.


With this I would like to open a discussion on how to handle this situation. 
Please share your thoughts on these questions:


  * How can we standardize data filtering for such cases?


I'm not sure what data filtering you have in mind, but I know for sure 
that richer _probe selection_ filters would help with my use cases.


First, the current probe selection options I'm aware of are:

(web wizard)
- Geo name
- ASN # filter
- IP filter
- Probe # filter

(web "manual" selection)
- Type (mandatory)
- Area (mandatory)
- Number of probes (mandatory)
- Include tags
- Exclude tags

Proposal for new filter options:
- Spread selection evenly across geo locations, max. N probes per location
- Spread selection evenly across ASNs, max. N probes per ASN
- Spread selection across IP subnets, max. N probes per IP subnet

I imagine that these three should work as an intersection with the other 
filters (and with each other), i.e. it should be possible to specify:

- location = BR
- max 2 probes per ASN
- max 1 probe per subnet

Right now I'm trying to do that by manually selecting probe IDs when I 
need to, but obviously that does not scale.
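Until such filters exist server-side, a rough client-side approximation is possible. The sketch below assumes probe records shaped like the Atlas probe API output (`id`, `asn_v4`, `address_v4`); treat those field names and the fixed /24 grouping as assumptions:

```python
from collections import defaultdict
import ipaddress

def spread_probes(probes, max_per_asn=2, max_per_subnet=1, subnet_prefix=24):
    # Greedy selection: keep a probe only while its ASN and its subnet
    # are still under their respective caps.
    per_asn = defaultdict(int)
    per_subnet = defaultdict(int)
    selected = []
    for p in probes:
        asn = p["asn_v4"]
        subnet = ipaddress.ip_network(f'{p["address_v4"]}/{subnet_prefix}',
                                      strict=False)
        if per_asn[asn] >= max_per_asn or per_subnet[subnet] >= max_per_subnet:
            continue
        per_asn[asn] += 1
        per_subnet[subnet] += 1
        selected.append(p["id"])
    return selected
```

Feeding the selected IDs back as an explicit probe list is exactly the manual step that does not scale, which is why server-side support would be preferable.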


Thank you for considering this.

--
Petr Špaček  @  Internet Systems Consortium



Re: [atlas] SSL Certificates for ripe anchors

2019-08-30 Thread Petr Špaček
On 30. 08. 19 15:14, Jóhann B. Guðmundsson wrote:
> On 8/30/19 10:07 AM, Robert Kisteleki wrote:
>> On 2019-08-22 10:30, Jóhann B. Guðmundsson wrote:
>>> Hi
>>>
>>>
>>> Has there been any dialog about moving the anchors away from using self
>>> signed certificates to Let's Encrypt?
>>>
>>>
>>> Regards
>>>
>>>  Jóhann B.
>> Hello,
>>
>> I believe there was no elaborate discussion about this so far. We do
>> have TLSA records for all anchors which could be of help depending on
>> what you want to achieve.
> 
> 
> What I'm trying to achieve is that ripe's anchors in data centers follow
> the latest security practices and standards, which require among other
> things a valid certificate issuer and associated CAA record for
> *.anchors.atlas.ripe.net anchors be it from Let's encrypt or Digicert,
> ripe's current certificate issuer
> 
> Using a self-signed certificate in this day and age acts as an indicator
> that the security of the device or server in use might be in question
> (if you can't even have a valid certificate issuer on the device or
> server when it's free, what else are you skipping: underlying
> OS and library updates, coding practices, etc.) and thus can negatively
> impact the anchor hosting provider's security grade, which may lead to
> anchors having to be removed from data centers to prevent them from
> negatively affecting corporations' security ratings.
> 
> If money was the reason the anchors were deployed with self-signed
> certificates to begin with, that's not an issue anymore, and the
> community can probably just drop Digicert and save the money, or spend
> it on a lottery or beer at RIPE event(s).  ;)

Hold your horses: a self-signed cert with proper TLSA records in a
DNSSEC-signed domain is even better, see
https://tools.ietf.org/html/rfc6698 .

Among other things, a correctly configured TLSA record plus client-side
validation prevents rogue or compromised CAs from issuing "fake but
accepted as valid" certs.
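For reference, the DANE-EE flavor of TLSA record from RFC 6698 (usage 3, selector 1, matching type 1) is just a SHA-256 digest of the certificate's SubjectPublicKeyInfo. A sketch of producing the record data; obtaining the SPKI DER bytes from the anchor's certificate is left out:

```python
import hashlib

def tlsa_3_1_1(spki_der: bytes) -> str:
    # TLSA RDATA fields: usage 3 (DANE-EE), selector 1
    # (SubjectPublicKeyInfo), matching type 1 (SHA-256 of the
    # selected content).
    return f"3 1 1 {hashlib.sha256(spki_der).hexdigest()}"
```

A validating client then compares this digest against the key in the presented certificate, independently of any CA.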

So I would say RIPE NCC is attempting to do security in the most
modern way available.

-- 
Petr Špaček  @  CZ.NIC



Re: [atlas] DNS RTT over TCP: twice as long than UDP?

2019-07-10 Thread Petr Špaček
On 09. 07. 19 13:51, Ponikierski, Grzegorz wrote:
> From this traffic it looks like dig measures the time between packets 4 (DNS
> query) and 6 (DNS response), which is precisely 8.5 ms and matches what
> dig shows. Including the TCP handshake it takes 23.7 ms, 2.8x longer, which
> is expected.
> 
>  
> 
> RTT can be measured at different layers of the same communication
> stream. In the case of DNS over UDP we just ignore the UDP overhead because it
> doesn't add any packets. With TCP, additional packets are added, which
> significantly increases the time the end user has to wait from the first packet
> to getting the information that he/she needs. IMO RTT should always be measured
> from the 1st packet to the packet which tells you that you have the actual
> data. If we want to measure raw DNS performance without overhead, then it
> must be explicitly marked in the measurement description.

If I could get a pony, I would like to get both numbers:

a) Time measured from moment of sending the very first packet (TCP SYN
or UDP query) to arrival of DNS answer (not counting TCP FIN etc.).

b) Time measured from moment of sending the DNS query (also think of TCP
fast open!) to arrival of DNS answer (not counting TCP FIN etc.).

Having both numbers would allow calculating the latency of the connection
vs. the DNS query separately, which becomes even more important when we
consider DNS-over-TLS etc.
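Given per-packet timestamps from a capture, the two numbers are a simple subtraction; the event names below are illustrative (taken from a packet trace), not an Atlas field set:

```python
def connection_vs_query_latency(ts: dict) -> tuple:
    # ts maps event name -> capture timestamp in seconds.
    total = ts["dns_answer"] - ts["first_packet"]  # (a) incl. TCP handshake
    query = ts["dns_answer"] - ts["dns_query"]     # (b) DNS exchange only
    return total, query
```

With TCP fast open, (a) and (b) converge, which is precisely why reporting both would be informative.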

-- 
Petr Špaček  @  CZ.NIC



Re: [atlas] Feature Request to consider: DNS response IP header

2019-05-22 Thread Petr Špaček
Hello,

On 20. 05. 19 17:08, Jen Linkova wrote:
> Hello,
> 
> I have a number of use cases when it would be very useful to have
> access to the IP header of the measurement response packets. In my
> scenario I'd like to see TTL/Hop Limit of packets received in DNS
> measurements.
> 
> So I'm curious if anyone else thinks it would be a useful feature and
> whether there is some demand for it - maybe the Atlas team
> could consider implementing it?

Yes, I think it is occasionally useful.

-- 
Petr Špaček  @  CZ.NIC



Re: [atlas] DNS-over-TLS and DNS-over-HTTPS measurement

2019-04-08 Thread Petr Špaček
Thank you, I will have a look. I must have missed DoT in the UI and API
docs.

Anyway, are there plans for supporting DNS-over-HTTPS?

Petr Špaček  @  CZ.NIC

On 08. 04. 19 16:47, Stephane Bortzmeyer wrote:
> On Mon, Apr 08, 2019 at 04:36:37PM +0200,
>  Petr Špaček  wrote 
>  a message of 11 lines which said:
> 
>> could you share plans for DNS-over-TLS and DNS-over-HTTPS measurements?
>>
>> I had the impression that DNS-over-TLS was already supported, but now I
>> cannot find it in the UI, so I'm probably wrong.
> 
> DNS-over-TLS works for me:
> 
> % blaeu-resolve --verbose --nameserver 9.9.9.9 --tls nic.cz
> Blaeu version 1.1.4
> {'is_oneoff': True, 'definitions': [{'description': 'DNS resolution of 
> nic.cz/ via nameserver 9.9.9.9', 'af': 4, 'type': 'dns', 
> 'query_argument': 'nic.cz', 'query_class': 'IN', 'query_type': '', 
> 'set_rd_bit': True, 'tls': True, 'protocol': 'TCP', 'use_probe_resolver': 
> False, 'target': '9.9.9.9'}], 'probes': [{'requested': 5, 'type': 'area', 
> 'value': 'WW', 'tags': {'include': ['system-ipv4-works']}}]}
> Measurement #20617896 for nic.cz/ uses 5 probes
> Nameserver 9.9.9.9
> [2001:1488:0:3::2] : 5 occurrences 
> Test #20617896 done at 2019-04-08T14:45:31Z
> 
> (Note the 'tls': True in the JSON)
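For anyone building the same measurement without the blaeu wrapper, the definition above can be assembled directly. The field names below are copied from the JSON Stephane posted; whether the API accepts exactly this set, and the `"A"` query type (shown blank in the quoted output), are assumptions to verify:

```python
def dot_measurement_definition(target: str, qname: str) -> dict:
    # One entry for the 'definitions' list of an Atlas measurement
    # request; mirrors the DoT definition blaeu-resolve produced.
    return {
        "type": "dns",
        "af": 4,
        "target": target,
        "query_argument": qname,
        "query_class": "IN",
        "query_type": "A",
        "set_rd_bit": True,
        "tls": True,           # the flag that turns this into DoT
        "protocol": "TCP",     # DoT runs over TCP
        "use_probe_resolver": False,
        "description": f"DNS resolution of {qname} via nameserver {target}",
    }
```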




[atlas] DNS-over-TLS and DNS-over-HTTPS measurement

2019-04-08 Thread Petr Špaček
Hello,

could you share plans for DNS-over-TLS and DNS-over-HTTPS measurements?

I had the impression that DNS-over-TLS was already supported, but now I
cannot find it in the UI, so I'm probably wrong.

Thank you for the information!

-- 
Petr Špaček  @  CZ.NIC



Re: [atlas] testing DNS flag day compatibility

2018-12-19 Thread Petr Špaček
, i.e. a year ahead).
6. Naturally, if we found out that a different type of query is also
needed (which always happens once you start experimenting), it is either
too late to repeat the full cycle, or we have to do experiments years
before the DNS flag day itself.

Such a big delay does not reflect the pace of DNS ecosystem development,
i.e. it is good only for measurement after the fact instead of being usable
for precaution/data gathering before the event. In other words, we have to
hope for the best and let operators find out what the problem is, because
there is no way to measure it beforehand (again, in client networks).

I hope this illustrates these limitations and the problems stemming from them.


Proposal

The proposal is to allow Atlas users to input a wider variety of DNS
messages in some form, and to validate them before sending the
user-provided DNS message out.

This can be done in multiple ways, and it is up to discussion which way
gives reasonable assurance that the client query will not cause problems.


Assessing impact

While assessing the impact of this proposal, we should take into account
the current state of things. Even the current ability to send out a simple A
query for a user-provided name can trigger a wide variety of bugs, including
security/denial-of-service bugs in the DNS resolvers used by client networks.

One example for all is
https://doc.powerdns.com/recursor/security-advisories/powerdns-advisory-2017-08.html
(not picking on this particular implementation!)

An attacker who controls a single authoritative server can trigger this
bug by sending a plain A query from the current Atlas to the DNS resolver
"under attack". Effectively all resolvers have had similar bugs in the past;
this is certainly not a one-off.

From this example I conclude that anyone who can buy their own domain (for
~ 6 USD/year) can mount this attack using the current Atlas API, today.

In my opinion, an implementation which takes a user-provided DNS message
and checks it using 3 independent parsers compiled with Valgrind/ASAN
(e.g. BIND, Unbound/ldns, Knot DNS, or any other) provides roughly the
same level of (in)security as the current limited set of options.
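As a sketch of the validation idea (a stand-in for the real BIND/ldns/Knot parsers, not a replacement for them), even a minimal structural check rejects blobs that are not shaped like a DNS query:

```python
def is_plausible_dns_query(blob: bytes) -> bool:
    # Minimal structural check: header present, question names walkable.
    if len(blob) < 12:
        return False
    qdcount = int.from_bytes(blob[4:6], "big")
    pos = 12
    try:
        for _ in range(qdcount):
            while True:
                length = blob[pos]
                pos += 1
                if length == 0:
                    break
                if length & 0xC0:      # no compression pointers in a query name
                    return False
                pos += length
            pos += 4                   # skip QTYPE + QCLASS
    except IndexError:
        return False
    return pos <= len(blob)
```

A production gate would of course run the full independent parsers; the point is only that validation before sending is cheap.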


I hope this clarifies the case. Where do we go from here?


[1] https://dnsflagday.net/
[2]
https://gitlab.isc.org/isc-projects/DNS-Compliance-Testing/blob/master/genreport.c#L216
[3] https://gitlab.labs.nic.cz/knot/edns-zone-scanner/

Petr Špaček  @  CZ.NIC


On 19. 12. 18 11:29, Daniel Suchy wrote:
> Hello,
> 
> On 12/19/18 10:33 AM, Petr Špaček wrote:
>> On 18. 12. 18 14:09, Daniel Suchy wrote:
>> I remember from the RIPE 77 meeting that there are strong opinions on
>> limiting what can be done, and that there are reasons for that. The purpose
>> of my e-mail is to find out whether there is a middle ground.
>>
>> Does your answer mean "it is not going to happen, go away",
>> or is there room for negotiation?
> 
> In my previous email I tried to ask you to specify more precisely what
> tests are really *necessary* (important) for DNS flag day compatibility
> testing. I'm missing this information from you :-)
> 
> I think if you reduce (and explain) your needs, there's space for
> discussion. In general, the proposed test is useful in my opinion - but
> you're asking for more than you really need for that purpose, I think.
> 
> With regards,
> Daniel



Re: [atlas] testing DNS flag day compatibility

2018-12-19 Thread Petr Špaček
Hello Daniel and others,

On 18. 12. 18 14:09, Daniel Suchy wrote:
> Hello,
> I think it should be specified which tests/options are really
> *necessary* for this compatibility testing related to the DNS flag day.
> From an operator perspective, you just need to know whether your
> implementation will have a problem or whether it's OK... and I think many
> details reported by [2] will not even be understood by normal users.
> 
> From a quick look, you're missing the ability to set some bits (flags) and
> other options in the query packet. The majority of tests in the linked
> source code use SOA and some other common query types, which are already
> included in the available options; some aren't - but those are quite exotic
> query types and probably not widely used - so are these really needed?
> 
> I don't think allowing "simply" anything (as you're proposing in [a] or
> [b] below) is a good approach. Some options (ignoretc, for example) will
> not even be understood by current `dig` implementations; that's another
> problem. And there's always some risk of malicious use, and the "open"
> Atlas network may be misused. So I prefer to stay restrictive in terms of
> queries allowed over the Atlas network.

I remember from the RIPE 77 meeting that there are strong opinions on
limiting what can be done, and that there are reasons for that. The purpose
of my e-mail is to find out whether there is a middle ground.

Does your answer mean "it is not going to happen, go away",
or is there room for negotiation?

I can provide detailed argumentation if you are willing to negotiate.

Petr Špaček  @  CZ.NIC

> 
> Daniel
> 
> On 12/17/18 6:40 PM, Petr Špaček wrote:
>> Hello everyone,
>>
>> this is follow-up from RIPE 77 hallway discussion, sorry for delay.
>>
>> We are looking for ways to test DNS flag day [1] compatibility from
>> client networks. The objective is to test the hypothesis that most breakage
>> happens on the authoritative side of DNS. In other words, we would like to
>> test that the DNS recursive infrastructure and client networks do not
>> significantly influence compatibility.
>>
>> That would help to provide precise information for network operators who
>> will have to deal with DNS flag day.
>>
>>
>> The problem here is that RIPE Atlas does not allow sending all the types of
>> queries [2] required for a full test. It was discussed at length that the
>> Atlas team has its reasons for not sending random blobs to random IP
>> addresses, which is understood.
>>
>> Question here is:
>> Can we find a middle ground to allow greater variety of valid DNS
>> queries without forcing Atlas team to reimplement everything?
>>
>>
>> My notes from the meeting mention two approaches for further discussion:
>>
>> a) The user provides command line arguments for the well-known tool dig,
>> which gets executed in a controlled environment ("as part of the RIPE Atlas
>> infrastructure") and generates the query packet/blob. This blob generated by
>> dig is then used as the payload, so the user cannot ship anything but a
>> syntactically valid DNS packet.
>>
>>
>> b) The user provides a blob for the payload, which is then analyzed by a
>> packet parser of choice (BIND/ldns/Knot DNS/all of them). The payload can be
>> sent out only if the packet parsers do not find any problem, i.e. the blob
>> is syntactically valid.
>>
>> These two approaches can also be combined to guard against quirks in
>> either component.
>>
>>
>> c) 
>>
>>
>> What do you think? Is there a way to allow greater flexibility to Atlas DNS?
>>
>>
>> [1] https://dnsflagday.net/
>> [2]
>> https://gitlab.isc.org/isc-projects/DNS-Compliance-Testing/blob/master/genreport.c#L216




[atlas] testing DNS flag day compatibility

2018-12-17 Thread Petr Špaček
Hello everyone,

this is follow-up from RIPE 77 hallway discussion, sorry for delay.

We are looking for ways to test DNS flag day [1] compatibility from
client networks. The objective is to test the hypothesis that most breakage
happens on the authoritative side of DNS. In other words, we would like to
test that the DNS recursive infrastructure and client networks do not
significantly influence compatibility.

That would help to provide precise information for network operators who
will have to deal with DNS flag day.


The problem here is that RIPE Atlas does not allow sending all the types of
queries [2] required for a full test. It was discussed at length that the
Atlas team has its reasons for not sending random blobs to random IP
addresses, which is understood.

The question here is:
Can we find a middle ground to allow greater variety of valid DNS
queries without forcing Atlas team to reimplement everything?


My notes from the meeting mention two approaches for further discussion:

a) The user provides command line arguments for the well-known tool dig,
which gets executed in a controlled environment ("as part of the RIPE Atlas
infrastructure") and generates the query packet/blob. This blob generated by
dig is then used as the payload, so the user cannot ship anything but a
syntactically valid DNS packet.


b) The user provides a blob for the payload, which is then analyzed by a
packet parser of choice (BIND/ldns/Knot DNS/all of them). The payload can be
sent out only if the packet parsers do not find any problem, i.e. the blob
is syntactically valid.

These two approaches can also be combined to guard against quirks in
either component.


c) 


What do you think? Is there a way to allow greater flexibility to Atlas DNS?


[1] https://dnsflagday.net/
[2]
https://gitlab.isc.org/isc-projects/DNS-Compliance-Testing/blob/master/genreport.c#L216

-- 
Petr Špaček  @  CZ.NIC