>> So ... I can't get the glibc behaviour to mesh with the standard
>> on this particular point.
>
> It's set in RFC 6840:
I stand corrected, thanks.
- Håvard
___
Please visit https://lists.isc.org/mailman/listinfo/bind-users to unsubscribe
from this list
> The internet isn’t always on and it isn’t only composed of big tech
> companies with lots of resources.

like Google's gmail, which has had hours-long service outages from time
to time? ;-)

On Thu 11/Feb/2021 17:44:20 +0100 Havard Eidnes wrote:

> Yeah, by the time it lands on Debian's glibc we'll have grown a long
> long beard. I'm still missing RES_TRUSTAD...

Oh, this set me off on a tangent. I hadn't heard of RES_TRUSTAD
before, so I found

https://man7.org/linux/man-pages/man5/resolv.conf.5.html

which under "trust-ad" contains
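The man-page excerpt itself is elided in this snippet; in rough paraphrase (check the page above for the authoritative wording), trust-ad, available since glibc 2.31, makes the stub resolver set the AD bit in its queries and pass a set AD bit in answers from the configured nameservers through to the application, instead of clearing it. A minimal sketch of such a configuration:

```
# Sketch of an /etc/resolv.conf using trust-ad (glibc 2.31+).
# Only sensible when the listed nameserver is a validating resolver
# reached over a path you trust, e.g. one running on localhost.
nameserver 127.0.0.1
options trust-ad
```

Without the option, an application calling res_query() cannot rely on the AD bit at all, which is presumably why RES_TRUSTAD matters for DKIM verifiers.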
On Thu 11/Feb/2021 14:47:13 +0100 Ondřej Surý wrote:
Mark is right. The internet isn’t always on and it isn’t only composed
of big tech companies with lots of resources.

The internet consists of a lot of small systems made by people like you
and me, and we don’t have infinite resources to keep everything always
on.

And honestly I find your quote about
Machines still fall over. They take the same amount of time to fix now as they
did 30 years ago.
You still have to diagnose the fault. You still have to get the replacement
part. You still have to potentially restore from backups. Sometimes you can
switch to a standby machine which makes
On Wed 10/Feb/2021 22:38:05 +0100 J Doe wrote:
> Out of curiosity, what servers have you encountered that no longer use
> the five day cutoff?

I didn't take note, but I read discussions on the topic. Users expect
mail to be delivered almost instantly. The "warning, still trying"
messages
On Thu 11/Feb/2021 10:44:58 +0100 Havard Eidnes wrote:
> Still, being able to differentiate a local network congestion from a
> remote bad configuration would help.

That's true. There's

https://tools.ietf.org/html/draft-ietf-dnsop-extended-error-16

which looks promising, trying to make it possible to distinguish
between the various reasons a
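That draft was later published as RFC 8914 (Extended DNS Errors). The wire format is simple: an EDNS0 option (code 15) whose data is a two-byte INFO-CODE in network order followed by optional UTF-8 EXTRA-TEXT. A stand-alone sketch of a parser (the function and the small code table are illustrative, not from any resolver library):

```python
import struct

EDE_OPTION_CODE = 15  # EDNS0 option code for Extended DNS Errors (RFC 8914)

# A few INFO-CODEs from the RFC 8914 registry that are relevant to
# telling a validation failure apart from a transport problem.
EDE_CODES = {
    6: "DNSSEC Bogus",
    22: "No Reachable Authority",
    23: "Network Error",
}

def parse_ede(option_data):
    """Split RFC 8914 option data into (info_code, extra_text)."""
    if len(option_data) < 2:
        raise ValueError("EDE option data is at least 2 bytes")
    (info_code,) = struct.unpack("!H", option_data[:2])
    extra_text = option_data[2:].decode("utf-8", errors="replace")
    return info_code, extra_text

# Example: an EDE signalling "DNSSEC Bogus" -- a failure a client could
# reasonably treat differently from a timeout.
code, text = parse_ede(b"\x00\x06" + b"signature expired")
print(EDE_CODES.get(code, "unknown"), "-", text)
# prints: DNSSEC Bogus - signature expired
```

A resolver that returns SERVFAIL with such an option attached lets the client distinguish "the data is bogus" from "I couldn't reach anyone", which is exactly the differentiation asked for above.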
On 2021-02-10 3:05 a.m., Alessandro Vesely wrote:
Hi Havard,
That's what I've been doing. For an incoming message, a temporary
failure means replying with a 4xx code. The sender keeps the message in
its queue, and eventually gives up. Once upon a time, MTAs used to retry
sending for five days.
Hi Havard,
thanks for your reply.
On Tue 09/Feb/2021 18:15:43 +0100 Havard Eidnes wrote:
> is there a way to know that a query has already been tried a few
> minutes ago, and failed?

From whose perspective?

A well-behaved application could remember it asked the same query
a short while ago, of
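The "well-behaved application" idea above can be sketched as a small negative cache keyed on (name, type). This is a hypothetical illustration, not part of any resolver library; the 300-second hold time is an arbitrary choice:

```python
import time

class RecentFailureCache:
    """Remember queries that recently ended in SERVFAIL, so the caller
    can skip an immediate, almost certainly futile, retry."""

    def __init__(self, hold_seconds=300.0):
        self.hold_seconds = hold_seconds   # how long to distrust a failed query
        self._failures = {}                # (qname, qtype) -> time of last failure

    def record_failure(self, qname, qtype, now=None):
        now = time.monotonic() if now is None else now
        self._failures[(qname.lower(), qtype)] = now

    def failed_recently(self, qname, qtype, now=None):
        now = time.monotonic() if now is None else now
        when = self._failures.get((qname.lower(), qtype))
        return when is not None and (now - when) < self.hold_seconds
```

A caller would record_failure() on SERVFAIL and consult failed_recently() before re-querying; a validating recursive resolver does something similar internally via servfail caching.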
I.e. it's not *just* DNSSEC failure
which can trigger SERVFAIL.
> Trying again is useless, it has to be treated as a permanent
> error.
Well, now... Basically nothing in the DNS is permanent, because
it is not completely static; hence most information in the DNS
has a TTL attached
Hi,
is there a way to know that a query has already been tried a few minutes ago,
and failed?
It happens seldom, but sometimes the DKIM mail filter gets a SERVFAIL
when it tries to authenticate an incoming message. SERVFAIL occurs when
a DNSSEC check fails. Trying again is useless, it has to be treated as
a permanent error.
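As the replies earlier in this thread point out, SERVFAIL is not safely permanent: it can mean bogus DNSSEC data, but equally a resolver or network problem. A mail filter can therefore answer it with a temporary SMTP code and let the sender's queue do the retrying. The function below is an illustrative sketch, not any real milter's API, and its reply texts are made up for the example:

```python
# Illustrative sketch: map the rcode seen while looking up a DKIM key
# to an SMTP reply for the incoming message.
def smtp_reply_for_dns(rcode):
    if rcode == "NOERROR":
        return "250 2.0.0 OK"   # lookup worked; verify the signature normally
    if rcode == "SERVFAIL":
        # Could be DNSSEC bogus data, could be a transient resolver or
        # network fault -- answer 4xx so the sender retries later.
        return "451 4.7.5 Temporary DNS failure during DKIM verification"
    # Default to deferring as well; a real policy would be finer-grained.
    return "451 4.4.3 DNS lookup deferred"
```

Combined with RFC 8914 extended errors, the SERVFAIL branch could eventually distinguish "bogus" from "unreachable", but with a bare SERVFAIL the temporary reply is the conservative choice.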