Re: RPZ for Unbound?

2015-12-29 Thread Benno Overeinder via Unbound-users
Hi JP,

On 10-12-2015 10:53, Jan-Piet Mens via Unbound-users wrote:
> 
> I seem to recall having read/heard/seen somewhere (but can't find that
> tidbit anywhere) that Unbound was to have RPZ support added to it,
> slated for sometime next year.
> 
> Did I dream that? :-)

No, you didn't dream.  :-)

RPZ has been on a number of wishlists for some time now, and we have
put it on the roadmap for next year.

Best,

-- Benno




Can DNSSEC resolvers pass through all mangling CPEs?

2015-12-29 Thread Rick van Rein via Unbound-users
Hello,

We are seeing more DNSSEC all the way to the desktop, thanks to NLnet
Labs products like libunbound and GetDNS.  Hooray!

What I am wondering is whether this also resolves all the issues
relating to NAT/firewall traversal of DNS.  Quite a few CPE boxes are
known to mangle DNS traffic under their default settings, and I am not
sure whether this happens only when passing through their built-in DNS
proxy service, or also when someone addresses port 53 directly (UDP,
TCP, or both).

This matter of CPE mangling also comes up in relation to new RRtypes
that might be added to the DNS; I wonder whether running a recursive
resolver on the local machine would sidestep that problem as well.

What is the experience of users, and of NLnet Labs, with CPE traversal
by recursive resolvers?

Thanks,
 -Rick


Re: unbound returns SERVFAIL although forwarder works just fine

2015-12-29 Thread martin f krafft via Unbound-users
also sprach Ralph Dolmans via Unbound-users  
[2015-12-24 00:33 +1300]:
> Are you sure the tcpdump output corresponds to this part of the log?
> The log indicates that no query is sent out because the only suitable
> delegation point (the forwarding server) is marked as unusable, due to
> earlier timeouts. Flushing this entry from the infra cache should make
> it work again (unbound-control flush_infra 192.168.1.1).

Thanks Ralph, this makes a lot more sense now, and it's exactly
what's going on: the infrastructure cache causes unbound to skip the
previously unreachable forwarder (this is on a laptop which
sometimes loses signal…).  I have now set infra-host-ttl to a low
value and hope that this fixes things.
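
For the archives, this is roughly what that looks like in practice
(the TTL is just the value I picked, not a recommendation):

  # one-off: drop the "unusable" marking for the forwarder right away
  unbound-control flush_infra 192.168.1.1

  # unbound.conf: expire infra cache entries sooner than the default 900s
  server:
      infra-host-ttl: 60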

Take care, and good luck heading into the new year, everyone!

-- 
@martinkrafft | http://madduck.net/ | http://two.sentenc.es/
 
"it isn't pollution that's harming the environment.
 it's the impurities in our air and water that are doing it."
  - dan quayle
 
spamtraps: madduck.bo...@madduck.net




unbound-control dump_cache / load_cache

2015-12-29 Thread Havard Eidnes via Unbound-users
Hi,

a while back I needed/wanted to reconfigure my unbound recursor
to have more memory available for the "rrset cache", in what
seems to be a futile attempt at increasing the cache hit rate.

This would cause unbound to discard its cache in its entirety.  I
thought that in order to soften the blow for the users, I would
use the "dump_cache / load_cache" operations of unbound-control
so that I could (more or less) restore the state of the cache
across the reconfiguration.
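
In case it is useful context, the procedure I had in mind looks
roughly like this (the file name and the cache sizes are only
examples):

  # save what is currently cached
  unbound-control dump_cache > /var/tmp/unbound.cache

  # edit unbound.conf to enlarge the caches, e.g.
  #   server:
  #       rrset-cache-size: 2g
  #       msg-cache-size: 1g

  # pick up the new configuration (this empties the cache), then restore
  unbound-control reload
  unbound-control load_cache < /var/tmp/unbound.cache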

However, despite my unbound-control "stats" output saying via
"mem.cache.rrset" that a lot of memory was consumed for caching --
around 2GB at the time; the current values are

mem.cache.rrset=1357501872
mem.cache.message=4587299
mem.mod.iterator=16540
mem.mod.validator=32503056

i.e. about 1.3GB of data in the "RRSET" cache -- the result of
dump_cache came to only around 43MB (a re-dump just now gave 36MB).
The mem.cache.rrset value dropped nearly to zero after the
reconfigure and did not climb back by much after load_cache was
performed.

So ... is dump_cache doing a rather incomplete job of dumping the
cache?

What got me started on this path was that I'm observing that the cache
hit rate of my unbound name server (collected via a collectd plugin,
graphed with grafana) is ... rather pitiful, typically hovering
somewhere around 60-65%.  I've configured it to do "prefetch", but I'm
seeing a rather low rate of prefetches -- below 5%, and my other
recursor, running BIND, appears to see 75-80% cache hit rate (which
isn't all that great either, but appears to do marginally better...).
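
For completeness, the hit rate I plot is simply total.num.cachehits
divided by total.num.queries; a sketch of how I sample it (using
stats_noreset so the collectd plugin's own counters are left alone):

  unbound-control stats_noreset | egrep 'total\.num\.(queries|cachehits|prefetch)='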

It is admittedly the "quiet season" now, so the daily query rate is
rather low, but I'm still left wondering whether something could be
done about the cache eviction policies to increase the cache hit
rate.  And I'm left wondering what all the memory which doesn't show
up in "dump_cache" is actually used for...

Best regards,

- Håvard