Well, I discovered an issue with 'repeated enum' feature flags in protobuf
(KUDU-1850), but went with an appropriate workaround.
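
For context, a minimal sketch of what such a workaround can look like, assuming
the problem is the usual proto2 behavior where unknown values of a repeated enum
field get shunted into the unknown-field set (so an old reader never actually
sees a new flag). The enum, field name, and field number below are illustrative
assumptions, not the contents of the patch:

// Hypothetical sketch only -- not the actual KUDU-1850 change.
// Registry of known feature-flag values; later versions add new values here.
enum TokenFeature {
  UNKNOWN_FEATURE = 0;
}

message TokenPB {
  // ... other fields ...

  // Declared as uint32 rather than 'repeated TokenFeature' so that values an
  // older binary doesn't recognize still arrive in this field (instead of
  // disappearing into the unknown-field set) and can be explicitly rejected.
  repeated uint32 incompatible_features = 5;
}

The verifier can then refuse any token carrying a value it doesn't recognize,
which is the behavior an incompatible-feature flag needs.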

You can find the complete patch here: https://gerrit.cloudera.org/#/c/5796/

Please take a look.

-Todd

On Wed, Jan 25, 2017 at 3:35 PM, Dan Burkert <danburk...@apache.org> wrote:

> On Wed, Jan 25, 2017 at 3:33 PM, Alexey Serbin <aser...@cloudera.com>
> wrote:
> >
> > Or is the proposal that if new fields are found in a token, the token
> > should not be accepted at all?  In that case it might be an issue during a
> > rolling upgrade, I think.
> >
>
> That's what I was thinking, but that's a good reason not to do it that
> way.  Our normal required feature flags mechanism should work in that case,
> where the field is not critical/required.
>
>
> >
> >
> > Best regards,
> >
> > Alexey
> >
> > On Wed, Jan 25, 2017 at 3:14 PM, Dan Burkert <danburk...@apache.org>
> > wrote:
> >
> > > That's an interesting idea - say if the format evolved to have an
> > > additional field which restricts use of the token?  Could we use the
> > > protobuf unknown fields API to recognize if this happened?
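> > >
> > > (As a rough illustration of that idea, with the full C++ protobuf runtime
> > > the check could look something like the sketch below; this is an
> > > assumption about how it might be wired up, not code from any patch:)
> > >
> > > #include <google/protobuf/message.h>
> > > #include <google/protobuf/unknown_field_set.h>
> > >
> > > // Returns true if 'msg' carried any fields that this binary's compiled-in
> > > // schema does not know about (e.g. a token produced by a newer version).
> > > // Only works for full (non-lite) protobuf messages, which have reflection.
> > > bool HasUnknownFields(const google::protobuf::Message& msg) {
> > >   const google::protobuf::UnknownFieldSet& unknown =
> > >       msg.GetReflection()->GetUnknownFields(msg);
> > >   return !unknown.empty();
> > > }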
> > >
> > > - Dan
> > >
> > > On Wed, Jan 25, 2017 at 3:03 PM, Alexey Serbin <aser...@cloudera.com>
> > > wrote:
> > >
> > >> I like this idea.
> > >>
> > >> Probably it's too early at this point, but consider adding a notion of
> > >> compat/non-compat feature flags to tokens.  Imagine a new version of the
> > >> token has some additional restriction field which older code does not
> > >> understand.  In that case, even if the token signature is correct, the
> > >> older code should not accept the new token, since an unsupported
> > >> non-compat feature flag is present.
> > >>
> > >> But in essence this looks great, IMO.
> > >>
> > >>
> > >> Best regards,
> > >>
> > >> Alexey
> > >>
> > >> On Wed, Jan 25, 2017 at 12:52 PM, Todd Lipcon <t...@cloudera.com>
> > wrote:
> > >>
> > >>> Actually had one more idea... how about:
> > >>> message AuthenticationToken {
> > >>> };
> > >>>
> > >>> message AuthorizationToken {
> > >>> };
> > >>>
> > >>> message TokenPB {
> > >>>   // The time at which this token expires, in seconds since the
> > >>>   // unix epoch.
> > >>>   optional int64 expire_unix_epoch_seconds = 1;
> > >>>
> > >>>   oneof token {
> > >>>     AuthenticationToken authn = 2;
> > >>>     AuthorizationToken authz = 3;
> > >>>   }
> > >>> };
> > >>>
> > >>> message SignedTokenPB {
> > >>>   // The actual token contents. This is a serialized TokenPB protobuf.
> > >>>   // However, we use a 'bytes' field, since protobuf doesn't guarantee
> > >>>   // that if two implementations serialize a protobuf, they'll
> > >>>   // necessarily get bytewise identical results, particularly in the
> > >>>   // presence of unknown fields.
> > >>>   optional bytes token_contents = 2;
> > >>>
> > >>>   // The cryptographic signature of 'token_contents'.
> > >>>   optional bytes signature = 3;
> > >>>
> > >>>   // The sequence number of the key which produced 'signature'.
> > >>>   optional int64 signing_key_seq_num = 4;
> > >>> };
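> > >>>
> > >>> (To make the intended flow concrete, a minimal sign/verify sketch
> > >>> against this schema; SignData() and VerifyData() are placeholders for
> > >>> whatever key-backed signing primitive gets used, so treat all of this as
> > >>> hypothetical glue rather than the real implementation:)
> > >>>
> > >>> #include <cstdint>
> > >>> #include <ctime>
> > >>> #include <string>
> > >>> // plus the generated header for TokenPB / SignedTokenPB (path omitted)
> > >>>
> > >>> // Hypothetical signing primitives -- stand-ins for the real code.
> > >>> std::string SignData(const std::string& data, int64_t key_seq_num);
> > >>> bool VerifyData(const std::string& data, const std::string& signature,
> > >>>                 int64_t key_seq_num);
> > >>>
> > >>> bool SignToken(const TokenPB& token, int64_t key_seq_num,
> > >>>                SignedTokenPB* signed_token) {
> > >>>   std::string contents;
> > >>>   if (!token.SerializeToString(&contents)) return false;
> > >>>   // Sign the serialized bytes (not the message object) so signer and
> > >>>   // verifier agree on exactly which bytes the signature covers.
> > >>>   signed_token->set_token_contents(contents);
> > >>>   signed_token->set_signature(SignData(contents, key_seq_num));
> > >>>   signed_token->set_signing_key_seq_num(key_seq_num);
> > >>>   return true;
> > >>> }
> > >>>
> > >>> bool VerifyToken(const SignedTokenPB& signed_token, TokenPB* token) {
> > >>>   // Check the signature over the raw bytes first...
> > >>>   if (!VerifyData(signed_token.token_contents(), signed_token.signature(),
> > >>>                   signed_token.signing_key_seq_num())) {
> > >>>     return false;
> > >>>   }
> > >>>   // ...then parse the contents and enforce expiration.
> > >>>   if (!token->ParseFromString(signed_token.token_contents())) return false;
> > >>>   return token->expire_unix_epoch_seconds() > std::time(nullptr);
> > >>> }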
> > >>>
> > >>>
> > >>>
> > >>> On Wed, Jan 25, 2017 at 12:44 PM, Todd Lipcon <t...@cloudera.com>
> > wrote:
> > >>>
> > >>>> On Wed, Jan 25, 2017 at 12:40 PM, Dan Burkert <d...@cloudera.com>
> > wrote:
> > >>>>
> > >>>>> I think it must go in the 'token_contents' itself, otherwise it can be
> > >>>>> modified by a malicious client.  Other than that, looks good.
> > >>>>>
> > >>>>>
> > >>>> Well, if it went into a separate field, then we'd have something like:
> > >>>>
> > >>>> optional bytes token_contents = 1;
> > >>>> optional int64 expiration_timestamp = 2;
> > >>>>
> > >>>> // Signature of the string: '<32-bit big-endian length of
> > >>>> // token_contents> token_contents <64-bit big-endian expiration>'
> > >>>> optional bytes signature = 3;
> > >>>>
> > >>>> so they could try to modify it, but the signature would fail.
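> > >>>>
> > >>>> (For concreteness, building that exact byte string might look like the
> > >>>> sketch below; the helpers are hand-rolled for illustration, not existing
> > >>>> Kudu utilities. The length prefix is what keeps an attacker from
> > >>>> shifting bytes between the contents and the expiration.)
> > >>>>
> > >>>> #include <cstdint>
> > >>>> #include <string>
> > >>>>
> > >>>> // Append a value in big-endian byte order.
> > >>>> static void AppendBE32(std::string* out, uint32_t v) {
> > >>>>   for (int shift = 24; shift >= 0; shift -= 8) {
> > >>>>     out->push_back(static_cast<char>((v >> shift) & 0xff));
> > >>>>   }
> > >>>> }
> > >>>> static void AppendBE64(std::string* out, uint64_t v) {
> > >>>>   for (int shift = 56; shift >= 0; shift -= 8) {
> > >>>>     out->push_back(static_cast<char>((v >> shift) & 0xff));
> > >>>>   }
> > >>>> }
> > >>>>
> > >>>> // '<32-bit BE length of token_contents> token_contents <64-bit BE expiration>'
> > >>>> std::string BuildSignedPayload(const std::string& token_contents,
> > >>>>                                int64_t expiration_timestamp) {
> > >>>>   std::string payload;
> > >>>>   AppendBE32(&payload, static_cast<uint32_t>(token_contents.size()));
> > >>>>   payload.append(token_contents);
> > >>>>   AppendBE64(&payload, static_cast<uint64_t>(expiration_timestamp));
> > >>>>   return payload;  // this is the byte string that gets signed/verified
> > >>>> }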
> > >>>>
> > >>>>
> > >>>>
> > >>>>> On Wed, Jan 25, 2017 at 12:37 PM, Todd Lipcon <t...@cloudera.com>
> > >>>>> wrote:
> > >>>>>
> > >>>>>> Hey folks
> > >>>>>>
> > >>>>>> I'm working on the token signing/verification stuff at the moment.
> > >>>>>> Curious to solicit some opinions on this:
> > >>>>>>
> > >>>>>>
> > >>>>>> message TokenPB {
> > >>>>>>   // The actual token contents. This is typically a serialized
> > >>>>>>   // protobuf of its own. However, we use a 'bytes' field, since
> > >>>>>>   // protobuf doesn't guarantee that if two implementations serialize
> > >>>>>>   // a protobuf, they'll necessarily get bytewise identical results,
> > >>>>>>   // particularly in the presence of unknown fields.
> > >>>>>>   optional bytes token_contents = 1;
> > >>>>>>
> > >>>>>>   // The cryptographic signature of 'token_contents'.
> > >>>>>>   optional bytes signature = 2;
> > >>>>>>
> > >>>>>>   // The sequence number of the key which produced 'signature'.
> > >>>>>>   optional int64 signing_key_seq_num = 3;
> > >>>>>> };
> > >>>>>>
> > >>>>>> The thing that's currently missing is an expiration timestamp of the
> > >>>>>> signature. I have two options here:
> > >>>>>>
> > >>>>>> *Option A*) say that the TokenPB itself doesn't capture expiration,
> > >>>>>> and if a particular type of token needs expiration, it would have to
> > >>>>>> put an 'expiration time' in its token contents itself.
> > >>>>>>
> > >>>>>> *pros:*
> > >>>>>> - token signing/verification is just a simple operation on the
> > >>>>>> 'token_contents' string
> > >>>>>>
> > >>>>>> *Cons:*
> > >>>>>> - would likely end up with redundant code between AuthN and AuthZ
> > >>>>>> tokens, both of which need expiration. However, that code isn't very
> > >>>>>> complicated (just a timestamp comparison), so maybe not a big deal?
> > >>>>>>
> > >>>>>> *Option B)* add an expiration timestamp field to TokenPB
> > >>>>>> *pros:*
> > >>>>>> - consolidate the expiration checking code into TokenVerifier
> > >>>>>> *cons:*
> > >>>>>> - now in order to sign/verify a token, we actually need to be signing
> > >>>>>> something like a concatenation of 'token_contents + expiration'. Not
> > >>>>>> too hard to construct this concatenation, but it does add some
> > >>>>>> complexity.
> > >>>>>>
> > >>>>>> Any strong opinions either way?
> > >>>>>>
> > >>>>>> -Todd
> > >>>>>> --
> > >>>>>> Todd Lipcon
> > >>>>>> Software Engineer, Cloudera
> > >>>>>>
> > >>>>>
> > >>>>>
> > >>>>
> > >>>>
> > >>>> --
> > >>>> Todd Lipcon
> > >>>> Software Engineer, Cloudera
> > >>>>
> > >>>
> > >>>
> > >>>
> > >>> --
> > >>> Todd Lipcon
> > >>> Software Engineer, Cloudera
> > >>>
> > >>
> > >>
> > >
> >
>



-- 
Todd Lipcon
Software Engineer, Cloudera
