> On 29. Mar 2019, at 10:07, Sven Van Caekenberghe <s...@stfx.eu> wrote:
> 
> Holger,

Sven, All!

Thanks for moving it to GitHub!

Pharo Days:

I am in APAC right now and I am not sure if I can make it. I am hesitating. 
Maybe we can have a Google Hangout to discuss this (if that is not too 
inconvenient for those present)?


Unix system resolver config discovery:

The FreeBSD manpages are quite good. I think we need to parse resolv.conf, 
hosts and nsswitch (Linux, FreeBSD). It's probably okay not to support 
everything initially (e.g. I have never seen sortlist being used in my Unix 
career). The timeouts for re-reading these files are interesting as well 
(inotify, stat polling, or lazily rereading might be preferable).


https://www.freebsd.org/cgi/man.cgi?resolv.conf
https://www.freebsd.org/cgi/man.cgi?hosts
https://www.freebsd.org/cgi/man.cgi?query=nsswitch.conf
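
As a rough sketch (workspace code; the variable names are illustrative, and a 
real version would also need to handle options, domain, etc.), the resolv.conf 
part plus a stat-based lazy reread could start out like this:

    "Parse the two resolv.conf keywords we need first. Compare the file's
    modification time against a cached value to decide when to reread."
    | file mtime nameservers searchDomains |
    file := '/etc/resolv.conf' asFileReference.
    mtime := file modificationTime.  "cache this; reread only when it changes"
    nameservers := OrderedCollection new.
    searchDomains := OrderedCollection new.
    file contents lines do: [ :line |
        | tokens |
        tokens := line trimmed substrings.
        (tokens isEmpty or: [ '#;' includes: tokens first first ]) ifFalse: [
            tokens first = 'nameserver' ifTrue: [ nameservers add: tokens second ].
            tokens first = 'search' ifTrue: [ searchDomains addAll: tokens allButFirst ] ] ].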



Windows resolver config discovery:

It seems GetNetworkParams
(https://docs.microsoft.com/en-us/windows/desktop/api/iphlpapi/nf-iphlpapi-getnetworkparams)
populates a FIXED_INFO structure that includes the list of resolver addresses.
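
If that turns out to be the right API, the binding side should be small. A 
hypothetical UFFI sketch (the class and method names are made up; walking the 
returned struct is the real work):

    "GetNetworkParams is called twice: first with a too-small buffer, so it
    fails with ERROR_BUFFER_OVERFLOW and writes the required size into
    outBufLen, then with a FIXED_INFO buffer of that size, whose DnsServerList
    member is a linked list of resolver address strings."
    WinDnsConfig class >> getNetworkParams: fixedInfo size: outBufLen
        ^ self ffiCall: #( ulong GetNetworkParams ( void *fixedInfo, ulong *outBufLen ) )
            module: 'iphlpapi.dll'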


MacOS config discovery:

Starting with the Unix implementation might not be terrible.


My interest:

I would like Pharo to improve on the networking side, and I worked with 
recursive resolvers and authoritative servers in my last job. Combining the two 
seemed obvious when Norbert tried NetNameResolver, only got a single IPv4 
address back, and I looked at the C implementation.

My other interest is that I am following IETF DNS development (the 
dnsop/dprive/doh working groups cover interesting topics). I think having a 
manageable DNS toolkit will help me to play with new specs and standards in the 
future.


More responses inline.



>> What is internet access and how would this be used? Is this about captive 
>> portals? With local network policy the big anycast services might be blocked 
>> but the user can still reach services. Or with deployed microservices they 
>> might reach each other but not the outside?
> 
> For years there has been this issue in Pharo that if we build features that require 
> internet access (say for example automatic loading of the Catalog when you 
> start using Spotter, but there are many more places where this could add lots 
> of value), that people say "don't do this, because it won't work when I have 
> slow or no internet (like on a train)".

This sounds like "bearer management"? It seems that consulting the OS for the 
network status might be better and more consistent?



> The core cause of the problems is that the current NameResolver is totally 
> blocking, especially in failure cases, which gives a terrible experience.

Yes. That's horrible. The MacOS implementation is actually asynchronous but has 
a level of concurrency of one. :(



> One way to fix this would be with the concept of NetworkState, a cheap, 
> reliable, totally non-blocking way to test if the image has a working 
> internet connection. Related is the option of 'Airplane Mode', so that you 
> can manually say: "consider the internet unreachable".

Makes sense, but it is difficult as well. Just because we can't resolve one name 
doesn't mean that NetNameResolver won't lock up soon after. :(

I think we have to come up with ways to deal with this. Just because all I/O is 
blocking within a single Pharo Process doesn't mean that there is no concurrency 
in the image. Is this whole-image blocking only true for files and DNS?

In the bigger context I would like to have something like CSP (communicating 
sequential processes) in Pharo.
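
To give a taste of what I mean: even today two Pharo Processes can talk over a 
SharedQueue in a CSP-ish style, so only a worker blocks on the resolver 
(sketch only, with an endless worker loop):

    "A pair of channels between the caller and a worker process. The blocking
    NetNameResolver call only blocks the forked worker, not the caller."
    | requests replies |
    requests := SharedQueue new.
    replies := SharedQueue new.
    [ [ replies nextPut: (NetNameResolver addressForName: requests next) ] repeat ] fork.
    requests nextPut: 'pharo.org'.
    Transcript show: replies next printString; cr.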


> I would *very* much prefer not to depend on any obscure, hard to maintain VM 
> code (FFI would just be acceptable).

ack.



> What I tried/implemented in NeoDNSClient (which inherits from the one-shot 
> NeoSimplifiedDNSClient) is a requestQueue and a cache (respecting ttl), where 
> clients put a request on the requestQueue and wait on a semaphore inside the 
> request (respecting their call's timeout). A single process (that 
> starts/stops as needed) handles the sending & receiving of the actual 
> protocol, signalling the matching request's semaphore. (The #beThreadSafe 
> option needs a bit more work though).

In my implementation I have separated the transports into their own classes. 
For UDP we always want a fresh socket so that a new source port gets assigned; 
for TCP, TLS and DoH it might make sense to keep the connection open for a 
while.

In some ways, if I open 15 database connections with Voyage, I'm not concerned 
about 15 DNS queries either. The implementation would be a lot simpler (no 
synchronization, no need to reason about concurrency), but on the other hand 
coordination is what we have today.

I think we can achieve coordination in an easier way, e.g. by registering 
pending requests and allowing other clients to subscribe to the result.
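
Something along these lines (hypothetical names, no error handling; pending is 
a Dictionary, mutex a Mutex, sendAndReceive: is the actual transport call, and 
DnsPendingEntry wraps a result plus a Semaphore created empty):

    "Coalesce identical in-flight queries: the first client sends the query,
    later clients park on the same pending entry and share the answer."
    DnsClient >> resolve: query
        | entry owner |
        mutex critical: [
            owner := (pending includesKey: query) not.
            entry := pending at: query ifAbsentPut: [ DnsPendingEntry new ] ].
        owner ifTrue: [
            entry complete: (self sendAndReceive: query).
            mutex critical: [ pending removeKey: query ] ].
        ^ entry result

    DnsPendingEntry >> complete: aResponse
        result := aResponse.
        semaphore signal

    DnsPendingEntry >> result
        semaphore wait.
        semaphore signal.  "pass the wakeup on to the next waiter"
        ^ result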



> I am curious though, what was your initial motivation for starting PaleoDNS ? 
> Which concrete issues did you encounter that you wanted to fix ?


What I like about my implementation and what I found:

* It would be nice if ZdcAbstractSocketStream understood uintX/uintX:
* My record classes can be parsed and serialized. In general I want to be both 
"sender" and "receiver".
* Separating the transports by class, as in my implementation, might make 
sense. I have TCP, UDP and TLS.
* My name decompression is subject to infinite loops, but I think the other 
code is too?
* We should aim to enable EDNS(0) support by default.
* We should have 0x20 randomization, random transaction IDs, and compare query 
and response in a final stub resolver (see the sketch after this list).
* Timeout and RTT handling should be adaptive. Chromium's stub resolver might 
be a good example.
* We should ship a minimal implementation in the image and keep it extensible.
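
For illustration, the 0x20 part is cheap (sketch; a real stub resolver must 
remember the exact casing it sent, to compare against the response's question 
section):

    "Randomize the case of a query name (draft-vixie-dnsext-dns0x20)."
    | random randomizedName |
    random := Random new.
    randomizedName := 'www.example.org' collect: [ :char |
        (char isLetter and: [ random next < 0.5 ])
            ifTrue: [ char asUppercase ]
            ifFalse: [ char ] ].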


have a great weekend!

        holger

