At Wed, 18 Jan 2017 16:08:24 -0800, Stephen Farrell wrote:
...
> ----------------------------------------------------------------------
> DISCUSS:
> ----------------------------------------------------------------------
> 
> Why is sha-256 hardcoded?

Real answer: because it's hard-coded in RFC 6486 and we were trying to
use the same hashing algorithm for manifests, this, and RRDP
(draft-ietf-sidr-delta-protocol).

>  You could easily include a hash alg-id even as an option and in
> that way get algorithm agility, as called for by BCP201.  (Or you
> could use something like ni URIs but that's a bit of a self-serving
> suggestion;-) Anyway, what's the plan for replacing sha-256 here?
> (This is a bit of a subset of Alissa's discuss with which I agree.)
> 
> One possible way to handle this here is to identify sha-256 as
> the default hash algorithm but to re-define the ABNF for hash
> to allow an alg-id of some sort to be included there. Or have
> some generic versioning text somewhere that calls for a
> version bump if sha-256 is not to be used.

I had been assuming that an algorithm change would be a protocol
version bump.  Given that the server is probably storing these hashes
in a database, changing the algorithm is probably a bit more involved
than just changing the bits on the wire.
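To make the agility option concrete, here is a minimal sketch of what an alg-id-prefixed hash attribute could look like. The `sha-256:` prefix, the function names, and the backward-compatibility rule are all my invention for illustration, not anything in the draft; the hex-encoded SHA-256 itself matches what RFC 6486 uses for manifests.

```python
import hashlib

# Hypothetical scheme: prefix the hex digest with an algorithm label so a
# future algorithm change need not redefine the attribute syntax.  The
# "sha-256:" label is illustrative only, not from the draft.
HASH_ALGS = {"sha-256": hashlib.sha256}

def object_hash(der_bytes: bytes, alg: str = "sha-256") -> str:
    digest = HASH_ALGS[alg](der_bytes).hexdigest()
    return f"{alg}:{digest}"

def parse_hash(value: str) -> tuple[str, str]:
    # Tolerate a bare hex string, for compatibility with the current
    # sha-256-only form.
    if ":" in value:
        alg, _, digest = value.partition(":")
        return alg, digest
    return "sha-256", value
```

A server storing the prefixed form in its database could then migrate algorithms gradually, which is the part a bare version bump does not help with.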

> ----------------------------------------------------------------------
> COMMENT:
> ----------------------------------------------------------------------
> 
> - general: I think a design that uses https with mutual auth
> would have been better and easier. But given that this is
> implemented and deployed, I guess it's too late for this one.

The design goals included offline authentication for audit purposes,
possibly years after the fact.  That's hard to do with any sort of
channel security mechanism, hence this approach.

> - As with the oob spec, the xmlns values get me a 404.

Don't think this is critical, but I can put up a vhost for this at
some point if it will make people happier.

> - section 6: I don't agree that CMS signed data means that
> https is not needed. The latter provides confidentiality and
> integrity and server auth which the former does not.  And even
> ignoring the security reasons, https is arguably much easier
> to deploy and requires less development. And http is
> vulnerable to middlebox messing (e.g. a client using http is
> more likely to be forced to support cleartext proxy-auth
> passwords).  I would encourage you to encourage use of https
> with server auth in addition to CMS signed data payloads.

Er, the CMS signatures do provide integrity, I think.

Having implemented application code using both HTTPS and CMS as part of
this project, I will have to respectfully disagree on the relative
difficulty of implementation.  CMS is a format hairball, true, but it
consists entirely of signed objects which can be created and verified
calmly at the implementation's convenience.  TLS authentication tends
to involve callbacks at awkward times, requiring one to make
authentication decisions at a moment chosen by somebody at the other
end of a network connection, which can get pretty nasty.
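The structural difference can be sketched without dragging in CMS itself. In this sketch an Ed25519 signature from the pyca/cryptography package stands in for a CMS SignedData object (an assumed simplification, not the protocol's actual format): the signed blob can sit in a database and be verified whenever the auditor chooses, with no network peer driving the timing.

```python
# Sketch only: Ed25519 stands in for CMS SignedData.  The point is
# structural -- verification runs whenever we choose, not inside a TLS
# handshake callback paced by the peer.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

key = Ed25519PrivateKey.generate()
message = b"<publish uri='rsync://example.net/foo.cer'/>"
signature = key.sign(message)

# ... arbitrarily later, possibly offline, possibly years after receipt ...
public = key.public_key()
try:
    public.verify(signature, message)   # raises InvalidSignature on failure
    ok = True
except InvalidSignature:
    ok = False
```

Nothing in the verification step depends on a live connection, which is what makes the years-later audit case workable.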

That said, I don't really object to HTTPS as a transport protocol, so
long as we don't have to change the authentication mechanism.

_______________________________________________
sidr mailing list
sidr@ietf.org
https://www.ietf.org/mailman/listinfo/sidr
