Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-08-05 Thread Morgan Fainberg
Its been said before and I'll reiterate, the fernet keys are not meant to be 
managed by keystone itself directly. This is viewed as a DevOps concern as 
keystone itself doesnt dictate the HA method to be used (many deployers use 
different methodologies). Most CMS systems handle synchronization of files on 
disk very very well. Much the same as ssl certs, pki infrastructure,
Etc. fernet keys are more in the realm of your CMS tool than something that 
belongs in keystone's DB. 

If you look through the other messages in both this thread and the others, 
you'll find this has been the stance from the beginning. 

--Morgan

Sent via mobile

> On Aug 6, 2015, at 15:25, joehuang  wrote:
> 
> Hi,
>  
> Even if Barbican can store the key, but it will add overhead for restful API 
> interaction between KeyStone and Barbican. May we store the key in the 
> KeyStone DB backend (or another separate DB backend), for example MySQL?
>  
> Best Regards
> Chaoyi Huang ( Joe Huang )
>  
> From: Lance Bragstad [mailto:lbrags...@gmail.com] 
> Sent: Wednesday, August 05, 2015 9:06 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys
>  
>  
>  
> On Wed, Aug 5, 2015 at 2:38 AM, Adam Heczko  wrote:
> Hi, I believe that Barbican keystore for signing keys was discussed earlier.
> I'm not sure if that's best idea since Barbican relies on Keystone 
> authN/authZ.
>  
> Correct. Once we find a solution for that problem it would be interesting to 
> work towards a solution for storing keys in Barbican. I've talked to several 
> people about this already and it seems to be the natural progression. Once we 
> can do that, I think we can revisit the tooling for rotation.
>  
> That's why this mechanism should be considered rather as "out of band" to 
> Keystone/OS API and is rather devops task.
>  
> regards,
>  
> Adam
>  
>  
>  
>  
> On Wed, Aug 5, 2015 at 8:11 AM, joehuang  wrote:
> Hi, Lance,
>  
> May we store the keys in Barbican, can the  key rotation be done upon 
> Barbican? And if we use Barican as the repository, then it’s easier for Key 
> distribution and rotation in multiple KeyStone deployment scenario, the 
> database replication (sync. or async.) capability could be leveraged.
>  
> Best Regards
> Chaoyi Huang ( Joe Huang )
>  
> From: Lance Bragstad [mailto:lbrags...@gmail.com] 
> Sent: Tuesday, August 04, 2015 10:56 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys
>  
>  
> On Tue, Aug 4, 2015 at 9:28 AM, Boris Bobrov  wrote:
> On Tuesday 04 August 2015 08:06:21 Lance Bragstad wrote:
> > On Tue, Aug 4, 2015 at 1:37 AM, Boris Bobrov  wrote:
> > > On Monday 03 August 2015 21:05:00 David Stanek wrote:
> > > > On Sat, Aug 1, 2015 at 8:03 PM, Boris Bobrov 
> > >
> > > wrote:
> > > > > Also, come on, does http://paste.openstack.org/show/406674/ look
> > > > > overly
> > > > > complex? (it should be launched from Fuel master node).
> > > >
> > > > I'm reading this on a small phone, so I may have it wrong, but the
> > > > script
> > > >
> > > > appears to be broken.
> > > >
> > > >
> > > >
> > > > It will ssh to node-1 and rotate. In the simplest case this takes key
> > > > 0
> > >
> > > and
> > >
> > > > moves it to the next highest key number. Then a new key 0 is
> > > > generated.
> > > >
> > > >
> > > >
> > > > Later there is a loop that will again ssh into node-1 and run the
> > >
> > > rotation
> > >
> > > > script. If there is a limit set on the number of keys and you are at
> > > > that
> > > >
> > > > limit a key will be deleted. This extra rotation on node-1 means that
> > >
> > > it's
> > >
> > > > possible that it has a different set of keys than are on node-2 and
> > >
> > > node-3.
> > >
> > >
> > >
> > > You are absolutely right. Node-1 should be excluded from the loop.
> > >
> > >
> > >
> > > pinc also lacks "-c 1".
> > >
> > >
> > >
> > > I am sure that other issues can be found.
> > >
> > >
> > >
> > > In my excuse I want to say that I never ran the script and wrote it just
> > > to sho

Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-08-05 Thread joehuang
Hi,

Even if Barbican can store the key, but it will add overhead for restful API 
interaction between KeyStone and Barbican. May we store the key in the KeyStone 
DB backend (or another separate DB backend), for example MySQL?

Best Regards
Chaoyi Huang ( Joe Huang )

From: Lance Bragstad [mailto:lbrags...@gmail.com]
Sent: Wednesday, August 05, 2015 9:06 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys



On Wed, Aug 5, 2015 at 2:38 AM, Adam Heczko 
mailto:ahec...@mirantis.com>> wrote:
Hi, I believe that Barbican keystore for signing keys was discussed earlier.
I'm not sure if that's best idea since Barbican relies on Keystone authN/authZ.

Correct. Once we find a solution for that problem it would be interesting to 
work towards a solution for storing keys in Barbican. I've talked to several 
people about this already and it seems to be the natural progression. Once we 
can do that, I think we can revisit the tooling for rotation.

That's why this mechanism should be considered rather as "out of band" to 
Keystone/OS API and is rather devops task.

regards,

Adam




On Wed, Aug 5, 2015 at 8:11 AM, joehuang 
mailto:joehu...@huawei.com>> wrote:
Hi, Lance,

May we store the keys in Barbican, can the  key rotation be done upon Barbican? 
And if we use Barican as the repository, then it’s easier for Key distribution 
and rotation in multiple KeyStone deployment scenario, the database replication 
(sync. or async.) capability could be leveraged.

Best Regards
Chaoyi Huang ( Joe Huang )

From: Lance Bragstad [mailto:lbrags...@gmail.com<mailto:lbrags...@gmail.com>]
Sent: Tuesday, August 04, 2015 10:56 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys


On Tue, Aug 4, 2015 at 9:28 AM, Boris Bobrov 
mailto:bbob...@mirantis.com>> wrote:
On Tuesday 04 August 2015 08:06:21 Lance Bragstad wrote:
> On Tue, Aug 4, 2015 at 1:37 AM, Boris Bobrov 
> mailto:bbob...@mirantis.com>> wrote:
> > On Monday 03 August 2015 21:05:00 David Stanek wrote:
> > > On Sat, Aug 1, 2015 at 8:03 PM, Boris Bobrov 
> > > mailto:bbob...@mirantis.com>>
> >
> > wrote:
> > > > Also, come on, does http://paste.openstack.org/show/406674/ look
> > > > overly
> > > > complex? (it should be launched from Fuel master node).
> > >
> > > I'm reading this on a small phone, so I may have it wrong, but the
> > > script
> > >
> > > appears to be broken.
> > >
> > >
> > >
> > > It will ssh to node-1 and rotate. In the simplest case this takes key
> > > 0
> >
> > and
> >
> > > moves it to the next highest key number. Then a new key 0 is
> > > generated.
> > >
> > >
> > >
> > > Later there is a loop that will again ssh into node-1 and run the
> >
> > rotation
> >
> > > script. If there is a limit set on the number of keys and you are at
> > > that
> > >
> > > limit a key will be deleted. This extra rotation on node-1 means that
> >
> > it's
> >
> > > possible that it has a different set of keys than are on node-2 and
> >
> > node-3.
> >
> >
> >
> > You are absolutely right. Node-1 should be excluded from the loop.
> >
> >
> >
> > pinc also lacks "-c 1".
> >
> >
> >
> > I am sure that other issues can be found.
> >
> >
> >
> > In my excuse I want to say that I never ran the script and wrote it just
> > to show how simple it should be. Thank for review though!
> >
> >
> >
> > I also hope that no one is going to use a script from a mailing list.
> >
> > > What's the issue with just a simple rsync of the directory?
> >
> > None I think. I just want to reuse the interface provided by
> > keystone-manage.
>
> You wanted to use the interface from keystone-manage to handle the actual
> promotion of the staged key, right? This is why there were two
> fernet_rotate commands issued?
Right. Here is the fixed version (please don't use it anyway):
http://paste.openstack.org/show/406862/

Note, this doesn't take into account the initial key repository creation, does 
it?

Here is a similar version that relies on rsync for the distribution after the 
initial key rotation [0].

[0] http://cdn.pasteraw.com/d6odnvtt1u9zsw5mg4xetzgufy1mjua



--
Best regards,
Boris Bobrov

__
OpenStack Development Mailing List (not for 

Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-08-05 Thread Lance Bragstad
On Wed, Aug 5, 2015 at 2:38 AM, Adam Heczko  wrote:

> Hi, I believe that Barbican keystore for signing keys was discussed
> earlier.
> I'm not sure if that's best idea since Barbican relies on Keystone
> authN/authZ.
>

Correct. Once we find a solution for that problem it would be interesting
to work towards a solution for storing keys in Barbican. I've talked to
several people about this already and it seems to be the natural
progression. Once we can do that, I think we can revisit the tooling for
rotation.


> That's why this mechanism should be considered rather as "out of band" to
> Keystone/OS API and is rather devops task.
>
> regards,
>
> Adam
>
>
>
>
> On Wed, Aug 5, 2015 at 8:11 AM, joehuang  wrote:
>
>> Hi, Lance,
>>
>>
>>
>> May we store the keys in Barbican, can the  key rotation be done upon
>> Barbican? And if we use Barican as the repository, then it’s easier for Key
>> distribution and rotation in multiple KeyStone deployment scenario, the
>> database replication (sync. or async.) capability could be leveraged.
>>
>>
>>
>> Best Regards
>>
>> Chaoyi Huang ( Joe Huang )
>>
>>
>>
>> *From:* Lance Bragstad [mailto:lbrags...@gmail.com]
>> *Sent:* Tuesday, August 04, 2015 10:56 PM
>> *To:* OpenStack Development Mailing List (not for usage questions)
>> *Subject:* Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for
>> Fernet keys
>>
>>
>>
>>
>>
>> On Tue, Aug 4, 2015 at 9:28 AM, Boris Bobrov 
>> wrote:
>>
>> On Tuesday 04 August 2015 08:06:21 Lance Bragstad wrote:
>> > On Tue, Aug 4, 2015 at 1:37 AM, Boris Bobrov 
>> wrote:
>> > > On Monday 03 August 2015 21:05:00 David Stanek wrote:
>> > > > On Sat, Aug 1, 2015 at 8:03 PM, Boris Bobrov 
>> > >
>> > > wrote:
>>
>> > > > > Also, come on, does http://paste.openstack.org/show/406674/ look
>> > > > > overly
>> > > > > complex? (it should be launched from Fuel master node).
>> > > >
>> > > > I'm reading this on a small phone, so I may have it wrong, but the
>> > > > script
>> > > >
>> > > > appears to be broken.
>> > > >
>> > > >
>> > > >
>> > > > It will ssh to node-1 and rotate. In the simplest case this takes
>> key
>> > > > 0
>> > >
>> > > and
>> > >
>> > > > moves it to the next highest key number. Then a new key 0 is
>> > > > generated.
>> > > >
>> > > >
>> > > >
>> > > > Later there is a loop that will again ssh into node-1 and run the
>> > >
>> > > rotation
>> > >
>> > > > script. If there is a limit set on the number of keys and you are at
>> > > > that
>> > > >
>> > > > limit a key will be deleted. This extra rotation on node-1 means
>> that
>> > >
>> > > it's
>> > >
>> > > > possible that it has a different set of keys than are on node-2 and
>> > >
>> > > node-3.
>> > >
>> > >
>> > >
>> > > You are absolutely right. Node-1 should be excluded from the loop.
>> > >
>> > >
>> > >
>> > > pinc also lacks "-c 1".
>> > >
>> > >
>> > >
>> > > I am sure that other issues can be found.
>> > >
>> > >
>> > >
>> > > In my excuse I want to say that I never ran the script and wrote it
>> just
>> > > to show how simple it should be. Thank for review though!
>> > >
>> > >
>> > >
>> > > I also hope that no one is going to use a script from a mailing list.
>> > >
>> > > > What's the issue with just a simple rsync of the directory?
>> > >
>> > > None I think. I just want to reuse the interface provided by
>> > > keystone-manage.
>> >
>> > You wanted to use the interface from keystone-manage to handle the
>> actual
>> > promotion of the staged key, right? This is why there were two
>> > fernet_rotate commands issued?
>>
>> Right. Here is the fixed version (please don't use it anyway):
>> http://paste.openstack.org/show/406862/
>>
>>
>>
>

Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-08-05 Thread Adam Heczko
Hi, I believe that Barbican keystore for signing keys was discussed earlier.
I'm not sure if that's best idea since Barbican relies on Keystone
authN/authZ.
That's why this mechanism should be considered rather as "out of band" to
Keystone/OS API and is rather devops task.

regards,

Adam




On Wed, Aug 5, 2015 at 8:11 AM, joehuang  wrote:

> Hi, Lance,
>
>
>
> May we store the keys in Barbican, can the  key rotation be done upon
> Barbican? And if we use Barican as the repository, then it’s easier for Key
> distribution and rotation in multiple KeyStone deployment scenario, the
> database replication (sync. or async.) capability could be leveraged.
>
>
>
> Best Regards
>
> Chaoyi Huang ( Joe Huang )
>
>
>
> *From:* Lance Bragstad [mailto:lbrags...@gmail.com]
> *Sent:* Tuesday, August 04, 2015 10:56 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for
> Fernet keys
>
>
>
>
>
> On Tue, Aug 4, 2015 at 9:28 AM, Boris Bobrov  wrote:
>
> On Tuesday 04 August 2015 08:06:21 Lance Bragstad wrote:
> > On Tue, Aug 4, 2015 at 1:37 AM, Boris Bobrov 
> wrote:
> > > On Monday 03 August 2015 21:05:00 David Stanek wrote:
> > > > On Sat, Aug 1, 2015 at 8:03 PM, Boris Bobrov 
> > >
> > > wrote:
>
> > > > > Also, come on, does http://paste.openstack.org/show/406674/ look
> > > > > overly
> > > > > complex? (it should be launched from Fuel master node).
> > > >
> > > > I'm reading this on a small phone, so I may have it wrong, but the
> > > > script
> > > >
> > > > appears to be broken.
> > > >
> > > >
> > > >
> > > > It will ssh to node-1 and rotate. In the simplest case this takes key
> > > > 0
> > >
> > > and
> > >
> > > > moves it to the next highest key number. Then a new key 0 is
> > > > generated.
> > > >
> > > >
> > > >
> > > > Later there is a loop that will again ssh into node-1 and run the
> > >
> > > rotation
> > >
> > > > script. If there is a limit set on the number of keys and you are at
> > > > that
> > > >
> > > > limit a key will be deleted. This extra rotation on node-1 means that
> > >
> > > it's
> > >
> > > > possible that it has a different set of keys than are on node-2 and
> > >
> > > node-3.
> > >
> > >
> > >
> > > You are absolutely right. Node-1 should be excluded from the loop.
> > >
> > >
> > >
> > > pinc also lacks "-c 1".
> > >
> > >
> > >
> > > I am sure that other issues can be found.
> > >
> > >
> > >
> > > In my excuse I want to say that I never ran the script and wrote it
> just
> > > to show how simple it should be. Thank for review though!
> > >
> > >
> > >
> > > I also hope that no one is going to use a script from a mailing list.
> > >
> > > > What's the issue with just a simple rsync of the directory?
> > >
> > > None I think. I just want to reuse the interface provided by
> > > keystone-manage.
> >
> > You wanted to use the interface from keystone-manage to handle the actual
> > promotion of the staged key, right? This is why there were two
> > fernet_rotate commands issued?
>
> Right. Here is the fixed version (please don't use it anyway):
> http://paste.openstack.org/show/406862/
>
>
>
> Note, this doesn't take into account the initial key repository creation,
> does it?
>
>
>
> Here is a similar version that relies on rsync for the distribution after
> the initial key rotation [0].
>
>
>
> [0] http://cdn.pasteraw.com/d6odnvtt1u9zsw5mg4xetzgufy1mjua
>
>
>
>
>
> --
> Best regards,
> Boris Bobrov
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Adam Heczko
Security Engineer @ Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-08-04 Thread joehuang
Hi, Lance,

May we store the keys in Barbican, can the  key rotation be done upon Barbican? 
And if we use Barican as the repository, then it’s easier for Key distribution 
and rotation in multiple KeyStone deployment scenario, the database replication 
(sync. or async.) capability could be leveraged.

Best Regards
Chaoyi Huang ( Joe Huang )

From: Lance Bragstad [mailto:lbrags...@gmail.com]
Sent: Tuesday, August 04, 2015 10:56 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys


On Tue, Aug 4, 2015 at 9:28 AM, Boris Bobrov 
mailto:bbob...@mirantis.com>> wrote:
On Tuesday 04 August 2015 08:06:21 Lance Bragstad wrote:
> On Tue, Aug 4, 2015 at 1:37 AM, Boris Bobrov 
> mailto:bbob...@mirantis.com>> wrote:
> > On Monday 03 August 2015 21:05:00 David Stanek wrote:
> > > On Sat, Aug 1, 2015 at 8:03 PM, Boris Bobrov 
> > > mailto:bbob...@mirantis.com>>
> >
> > wrote:
> > > > Also, come on, does http://paste.openstack.org/show/406674/ look
> > > > overly
> > > > complex? (it should be launched from Fuel master node).
> > >
> > > I'm reading this on a small phone, so I may have it wrong, but the
> > > script
> > >
> > > appears to be broken.
> > >
> > >
> > >
> > > It will ssh to node-1 and rotate. In the simplest case this takes key
> > > 0
> >
> > and
> >
> > > moves it to the next highest key number. Then a new key 0 is
> > > generated.
> > >
> > >
> > >
> > > Later there is a loop that will again ssh into node-1 and run the
> >
> > rotation
> >
> > > script. If there is a limit set on the number of keys and you are at
> > > that
> > >
> > > limit a key will be deleted. This extra rotation on node-1 means that
> >
> > it's
> >
> > > possible that it has a different set of keys than are on node-2 and
> >
> > node-3.
> >
> >
> >
> > You are absolutely right. Node-1 should be excluded from the loop.
> >
> >
> >
> > pinc also lacks "-c 1".
> >
> >
> >
> > I am sure that other issues can be found.
> >
> >
> >
> > In my excuse I want to say that I never ran the script and wrote it just
> > to show how simple it should be. Thank for review though!
> >
> >
> >
> > I also hope that no one is going to use a script from a mailing list.
> >
> > > What's the issue with just a simple rsync of the directory?
> >
> > None I think. I just want to reuse the interface provided by
> > keystone-manage.
>
> You wanted to use the interface from keystone-manage to handle the actual
> promotion of the staged key, right? This is why there were two
> fernet_rotate commands issued?
Right. Here is the fixed version (please don't use it anyway):
http://paste.openstack.org/show/406862/

Note, this doesn't take into account the initial key repository creation, does 
it?

Here is a similar version that relies on rsync for the distribution after the 
initial key rotation [0].

[0] http://cdn.pasteraw.com/d6odnvtt1u9zsw5mg4xetzgufy1mjua



--
Best regards,
Boris Bobrov

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-08-04 Thread Lance Bragstad
On Tue, Aug 4, 2015 at 9:28 AM, Boris Bobrov  wrote:

> On Tuesday 04 August 2015 08:06:21 Lance Bragstad wrote:
> > On Tue, Aug 4, 2015 at 1:37 AM, Boris Bobrov 
> wrote:
> > > On Monday 03 August 2015 21:05:00 David Stanek wrote:
> > > > On Sat, Aug 1, 2015 at 8:03 PM, Boris Bobrov 
> > >
> > > wrote:
> > > > > Also, come on, does http://paste.openstack.org/show/406674/ look
> > > > > overly
> > > > > complex? (it should be launched from Fuel master node).
> > > >
> > > > I'm reading this on a small phone, so I may have it wrong, but the
> > > > script
> > > >
> > > > appears to be broken.
> > > >
> > > >
> > > >
> > > > It will ssh to node-1 and rotate. In the simplest case this takes key
> > > > 0
> > >
> > > and
> > >
> > > > moves it to the next highest key number. Then a new key 0 is
> > > > generated.
> > > >
> > > >
> > > >
> > > > Later there is a loop that will again ssh into node-1 and run the
> > >
> > > rotation
> > >
> > > > script. If there is a limit set on the number of keys and you are at
> > > > that
> > > >
> > > > limit a key will be deleted. This extra rotation on node-1 means that
> > >
> > > it's
> > >
> > > > possible that it has a different set of keys than are on node-2 and
> > >
> > > node-3.
> > >
> > >
> > >
> > > You are absolutely right. Node-1 should be excluded from the loop.
> > >
> > >
> > >
> > > pinc also lacks "-c 1".
> > >
> > >
> > >
> > > I am sure that other issues can be found.
> > >
> > >
> > >
> > > In my excuse I want to say that I never ran the script and wrote it
> just
> > > to show how simple it should be. Thank for review though!
> > >
> > >
> > >
> > > I also hope that no one is going to use a script from a mailing list.
> > >
> > > > What's the issue with just a simple rsync of the directory?
> > >
> > > None I think. I just want to reuse the interface provided by
> > > keystone-manage.
> >
> > You wanted to use the interface from keystone-manage to handle the actual
> > promotion of the staged key, right? This is why there were two
> > fernet_rotate commands issued?
>
> Right. Here is the fixed version (please don't use it anyway):
> http://paste.openstack.org/show/406862/


Note, this doesn't take into account the initial key repository creation,
does it?

Here is a similar version that relies on rsync for the distribution after
the initial key rotation [0].

[0] http://cdn.pasteraw.com/d6odnvtt1u9zsw5mg4xetzgufy1mjua


>
>
> --
> Best regards,
> Boris Bobrov
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-08-04 Thread Boris Bobrov
On Tuesday 04 August 2015 08:06:21 Lance Bragstad wrote:
> On Tue, Aug 4, 2015 at 1:37 AM, Boris Bobrov  wrote:
> > On Monday 03 August 2015 21:05:00 David Stanek wrote:
> > > On Sat, Aug 1, 2015 at 8:03 PM, Boris Bobrov 
> > 
> > wrote:
> > > > Also, come on, does http://paste.openstack.org/show/406674/ look
> > > > overly
> > > > complex? (it should be launched from Fuel master node).
> > > 
> > > I'm reading this on a small phone, so I may have it wrong, but the
> > > script
> > > 
> > > appears to be broken.
> > > 
> > > 
> > > 
> > > It will ssh to node-1 and rotate. In the simplest case this takes key
> > > 0
> > 
> > and
> > 
> > > moves it to the next highest key number. Then a new key 0 is
> > > generated.
> > > 
> > > 
> > > 
> > > Later there is a loop that will again ssh into node-1 and run the
> > 
> > rotation
> > 
> > > script. If there is a limit set on the number of keys and you are at
> > > that
> > > 
> > > limit a key will be deleted. This extra rotation on node-1 means that
> > 
> > it's
> > 
> > > possible that it has a different set of keys than are on node-2 and
> > 
> > node-3.
> > 
> > 
> > 
> > You are absolutely right. Node-1 should be excluded from the loop.
> > 
> > 
> > 
> > pinc also lacks "-c 1".
> > 
> > 
> > 
> > I am sure that other issues can be found.
> > 
> > 
> > 
> > In my excuse I want to say that I never ran the script and wrote it just
> > to show how simple it should be. Thank for review though!
> > 
> > 
> > 
> > I also hope that no one is going to use a script from a mailing list.
> > 
> > > What's the issue with just a simple rsync of the directory?
> > 
> > None I think. I just want to reuse the interface provided by
> > keystone-manage.
> 
> You wanted to use the interface from keystone-manage to handle the actual
> promotion of the staged key, right? This is why there were two
> fernet_rotate commands issued?

Right. Here is the fixed version (please don't use it anyway): 
http://paste.openstack.org/show/406862/

-- 
Best regards,
Boris Bobrov

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-08-04 Thread Lance Bragstad
On Tue, Aug 4, 2015 at 1:37 AM, Boris Bobrov  wrote:

> On Monday 03 August 2015 21:05:00 David Stanek wrote:
>
> > On Sat, Aug 1, 2015 at 8:03 PM, Boris Bobrov 
> wrote:
>
> > > On Sat, Aug 1, 2015 at 3:41 PM, Clint Byrum  wrote:
>
> > > > This too is overly complex and will cause failures. If you replace
> key
>
> > > > 0,
>
> > > >
>
> > > > you will stop validating tokens that were encrypted with the old key
> 0.
>
> > >
>
> > > No. Key 0 is replaced after rotation.
>
> > >
>
> > >
>
> > >
>
> > > Also, come on, does http://paste.openstack.org/show/406674/ look
> overly
>
> > > complex? (it should be launched from Fuel master node).
>
> >
>
> > I'm reading this on a small phone, so I may have it wrong, but the script
>
> > appears to be broken.
>
> >
>
> > It will ssh to node-1 and rotate. In the simplest case this takes key 0
> and
>
> > moves it to the next highest key number. Then a new key 0 is generated.
>
> >
>
> > Later there is a loop that will again ssh into node-1 and run the
> rotation
>
> > script. If there is a limit set on the number of keys and you are at that
>
> > limit a key will be deleted. This extra rotation on node-1 means that
> it's
>
> > possible that it has a different set of keys than are on node-2 and
> node-3.
>
>
>
> You are absolutely right. Node-1 should be excluded from the loop.
>
>
>
> pinc also lacks "-c 1".
>
>
>
> I am sure that other issues can be found.
>
>
>
> In my excuse I want to say that I never ran the script and wrote it just
> to show how simple it should be. Thank for review though!
>
>
>
> I also hope that no one is going to use a script from a mailing list.
>
>
>
> > What's the issue with just a simple rsync of the directory?
>
>
>
> None I think. I just want to reuse the interface provided by
> keystone-manage.
>

You wanted to use the interface from keystone-manage to handle the actual
promotion of the staged key, right? This is why there were two
fernet_rotate commands issued?


>
>
> --
>
> С наилучшими пожеланиями,
>
> Boris
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-08-03 Thread Boris Bobrov
On Monday 03 August 2015 21:05:00 David Stanek wrote:
> On Sat, Aug 1, 2015 at 8:03 PM, Boris Bobrov  
wrote:
> > On Sat, Aug 1, 2015 at 3:41 PM, Clint Byrum  
wrote:
> > > This too is overly complex and will cause failures. If you replace key
> > > 0,
> > > 
> > > you will stop validating tokens that were encrypted with the old key 
0.
> > 
> > No. Key 0 is replaced after rotation.
> > 
> > 
> > 
> > Also, come on, does http://paste.openstack.org/show/406674/ look 
overly
> > complex? (it should be launched from Fuel master node).
> 
> I'm reading this on a small phone, so I may have it wrong, but the script
> appears to be broken.
> 
> It will ssh to node-1 and rotate. In the simplest case this takes key 0 and
> moves it to the next highest key number. Then a new key 0 is generated.
> 
> Later there is a loop that will again ssh into node-1 and run the rotation
> script. If there is a limit set on the number of keys and you are at that
> limit a key will be deleted. This extra rotation on node-1 means that it's
> possible that it has a different set of keys than are on node-2 and 
node-3.

You are absolutely right. Node-1 should be excluded from the loop.

pinc also lacks "-c 1".

I am sure that other issues can be found.

In my excuse I want to say that I never ran the script and wrote it just to 
show how simple it should be. Thank for review though!

I also hope that no one is going to use a script from a mailing list.

> What's the issue with just a simple rsync of the directory?

None I think. I just want to reuse the interface provided by keystone-
manage.

-- 
С наилучшими пожеланиями,
Boris
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-08-03 Thread Andrew Beekhof

> On 3 Aug 2015, at 8:02 pm, Sergii Golovatiuk  wrote:
> 
> Hi,
> 
> I agree with Bogdan that key rotation procedure should be part of HA solution.

These things don’t usually have to be an either/or situation.
Why not create one script that does the work and can be called manually if 
desired but also create an agent that pacemaker can use to call it at the 
appropriate time.

> If you make a simple script then this script will be a single point of 
> failure. It requires operator's attention so it may lead to human errors 
> also. Adding monitoring around it or expiration time is not a solution either.
> 
> There are couple of approaches how to make 'key rotation' HA ready.
> 
> 1. Make it as part of pacemaker OCF script. In this case pacemaker will 
> select the node which will be Master. It will be responsible for key 
> generations. In this case OCF script should have logic how to distribute 
> keys. It may be puppet or some rsync wrappers like lsyncd or special function 
> in OCF script itself. In this case when master is dead, pacemaker will elect 
> a new master while old one is down.
> 
> 2. Make keystone HA ready by itself. In this case, all logic of distributed 
> system should be covered in keystone. keystone should be able to detect 
> peers, it should have some consensus algorithms (PAXOS, RAFT, ZAB). Using 
> this algorithm master should be elected. Master should generate keys and 
> distribute them somehow to all other peers. Key distribution may be done via 
> rsync or using memcache/db as centralized storage for keys. Master may send a 
> event to all peers or peers may check memcache/db periodically.
> 
> 
> 
> 
> 
> --
> Best regards,
> Sergii Golovatiuk,
> Skype #golserge
> IRC #holser
> 
> On Mon, Aug 3, 2015 at 2:37 AM, David Medberry  wrote:
> Glad to see you weighed in on this. -d
> 
> On Sat, Aug 1, 2015 at 3:50 PM, Matt Fischer  wrote:
> Agree that you guys are way over thinking this. You don't need to rotate keys 
> at exactly the same time, we do it in within a one or two hours typically 
> based on how our regions are setup. We do it with puppet, puppet runs on one 
> keystone node at a time and drops the keys into place. The actual rotation 
> and generation we handle with a script that then proposes the new key 
> structure as a review which is then approved and deployed via the normal 
> process. For this process I always drop keys 0, 1, 2 into place, I'm not 
> bumping the numbers like the normal rotations do.
> 
> We had also considered ansible which would be perfect for this, but that 
> makes our ability to setup throw away environments with a single click a bit 
> more complicated. If you don't have that requirement, a simple ansible script 
> is what you should do. 
> 
> 
> On Sat, Aug 1, 2015 at 3:41 PM, Clint Byrum  wrote:
> Excerpts from Boris Bobrov's message of 2015-08-01 14:18:21 -0700:
> > On Saturday 01 August 2015 16:27:17 bdobre...@mirantis.com wrote:
> > > I suggest to use pacemaker multistate clone resource to rotate and
> > rsync
> > > fernet tokens from local directories across cluster nodes. The resource
> > > prototype is described here
> > > https://etherpad.openstack.org/p/fernet_tokens_pacemaker> Pros:
> > Pacemaker
> > > will care about CAP/split-brain stuff for us, we just design rotate and
> > > rsync logic. Also no shared FS/DB involved but only Corosync CIB - to
> > store
> > > few internal resource state related params, not tokens. Cons: Keystone
> > > nodes hosting fernet tokens directories must be members of pacemaker
> > > cluster. Also custom OCF script should be created to implement this. __
> > > Regards,
> > > Bogdan Dobrelya.
> > > IRC: bogdando
> >
> > Looks complex.
> >
> > I suggest this kind of bash or python script, running on Fuel master node:
> >
> > 0. Check that all controllers are online;
> > 1. Go to one of the controllers, rotate keys there;
> > 2. Fetch key 0 from there;
> > 3. For each other controller rotate keys there and put the 0-key instead of
> > their new 0-key.
> > 4. If any of the nodes fail to get new keys (because they went offline or 
> > for
> > some other reason) revert the rotate (move the key with the biggest index
> > back to 0).
> >
> > The script can be launched by cron or by button in Fuel.
> >
> > I don't see anything critically bad if one rotation/sync event fails.
> >
> 
> This too is overly complex and will cause failures. If you replace key 0,
> you will stop validating tokens that were encrypted with the old key 0.
> 
> You simply need to run rotate on one, and then rsync that key repository
> to all of the others. You _must not_ run rotate again until you rsync to
> all of the others, since the key 0 from one rotation becomes the primary
> token encrypting key going forward, so you need it to get pushed out to
> all nodes as 0 first.
> 
> Don't over think it. Just read http://lbragstad.com/?p=133 and it will
> remain simple.
> 
> _

Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-08-03 Thread David Stanek
On Sat, Aug 1, 2015 at 8:03 PM, Boris Bobrov  wrote:

> On Sat, Aug 1, 2015 at 3:41 PM, Clint Byrum  wrote:
>
> > This too is overly complex and will cause failures. If you replace key 0,
>
> > you will stop validating tokens that were encrypted with the old key 0.
>
>
>
> No. Key 0 is replaced after rotation.
>
>
>
> Also, come on, does http://paste.openstack.org/show/406674/ look overly
> complex? (it should be launched from Fuel master node).
>

I'm reading this on a small phone, so I may have it wrong, but the script
appears to be broken.

It will ssh to node-1 and rotate. In the simplest case this takes key 0 and
moves it to the next highest key number. Then a new key 0 is generated.

Later there is a loop that will again ssh into node-1 and run the rotation
script. If there is a limit set on the number of keys and you are at that
limit a key will be deleted. This extra rotation on node-1 means that it's
possible that it has a different set of keys than are on node-2 and node-3.

What's the issue with just a simple rsync of the directory?

-- 
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
www: http://dstanek.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-08-03 Thread Vladimir Kuklin
Folks

As Sergii G. already pointed out if you want this solution to work in
production, you should provide common ways of synchronization between
different processing entities. Otherwise your "very simple one-script
solution" will be prone to errors such as race conditions and others. You
need to have a consensus and leader election algorithms here embedded into
keystone. Moreover, this process should be as realtime as possible and this
means, sticking to eventlet library is not the best option here.

So I am +1 to Bogdan and Pacemaker-based solution as it is just a simple
master/slave resource which will run on top of already implemented and
well-tested cluster stack. It is not so hard to debug such pacemaker
scripts, actually, so I cannot  agree with those who say, that this scheme
is over-complicated. Either write your own implementation of cluster
algorithms or go with existing cluster stack. Having "simple script" will
just make your production fail eventually.

On Mon, Aug 3, 2015 at 4:10 PM, Lance Bragstad  wrote:

>
>
> On Mon, Aug 3, 2015 at 7:03 AM, David Stanek  wrote:
>
>>
>> On Mon, Aug 3, 2015 at 7:14 AM, Davanum Srinivas 
>> wrote:
>>
>>> agree. "Native HA solution" was already ruled out in several email
>>> threads by keystone cores already (if i remember right). This is a
>>> devops issue and should be handled as such was the feedback.
>>>
>>
>> I'm sure you are right. I'm not sure why we would want to add that much
>> complexity into Keystone.
>>
>
> ++, I think the more complicated the tool to distribute the keys, the more
> complex it is to troubleshoot issues when things go south. If you have an
> issue with a single Keystone node, you have to understand whatever
> mechanism that keeps keys in sync, as well as what could go wrong and how
> to fix it. This is in comparison to something, or some ansible script, that
> is idempotent and can be applied against the whole cluster, or a single
> node. The ability of having a staged key buys you time in the key
> distribution process.
>
>>
>>
>>
>> --
>> David
>> blog: http://www.traceback.org
>> twitter: http://twitter.com/dstanek
>> www: http://dstanek.com
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
35bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com 
www.mirantis.ru
vkuk...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-08-03 Thread Lance Bragstad
On Mon, Aug 3, 2015 at 7:03 AM, David Stanek  wrote:

>
> On Mon, Aug 3, 2015 at 7:14 AM, Davanum Srinivas 
> wrote:
>
>> agree. "Native HA solution" was already ruled out in several email
>> threads by keystone cores already (if i remember right). This is a
>> devops issue and should be handled as such was the feedback.
>>
>
> I'm sure you are right. I'm not sure why we would want to add that much
> complexity into Keystone.
>

++, I think the more complicated the tool to distribute the keys, the more
complex it is to troubleshoot issues when things go south. If you have an
issue with a single Keystone node, you have to understand whatever
mechanism that keeps keys in sync, as well as what could go wrong and how
to fix it. This is in comparison to something, or some ansible script, that
is idempotent and can be applied against the whole cluster, or a single
node. The ability of having a staged key buys you time in the key
distribution process.

>
>
>
> --
> David
> blog: http://www.traceback.org
> twitter: http://twitter.com/dstanek
> www: http://dstanek.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-08-03 Thread Adam Heczko
Fine, then this simple bash based solution proposed by Boris [1] LGTM and
is not over thinked.
Maybe add kind of md5 or sha1 checksum functionality to confirm if keys
were rotated correctly and are in sync.

[1] http://paste.openstack.org/show/406674/

Regards,

Adam

On Mon, Aug 3, 2015 at 2:03 PM, David Stanek  wrote:

>
> On Mon, Aug 3, 2015 at 7:14 AM, Davanum Srinivas 
> wrote:
>
>> agree. "Native HA solution" was already ruled out in several email
>> threads by keystone cores already (if i remember right). This is a
>> devops issue and should be handled as such was the feedback.
>>
>
> I'm sure you are right. I'm not sure why we would want to add that much
> complexity into Keystone.
>
>
> --
> David
> blog: http://www.traceback.org
> twitter: http://twitter.com/dstanek
> www: http://dstanek.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Adam Heczko
Security Engineer @ Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-08-03 Thread David Stanek
On Mon, Aug 3, 2015 at 7:14 AM, Davanum Srinivas  wrote:

> agree. "Native HA solution" was already ruled out in several email
> threads by keystone cores already (if i remember right). This is a
> devops issue and should be handled as such was the feedback.
>

I'm sure you are right. I'm not sure why we would want to add that much
complexity into Keystone.


-- 
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
www: http://dstanek.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-08-03 Thread Morgan Fainberg


> On Aug 3, 2015, at 21:14, Davanum Srinivas  wrote:
> 
> agree. "Native HA solution" was already ruled out in several email
> threads by keystone cores already (if i remember right). This is a
> devops issue and should be handled as such was the feedback.
> 

Correct. This is generally considered a devops issue. CMS handles this type of 
configuration today extremely well compared to most in-keystone solutions. 
Enhancements to keystone are welcome to be proposed as long as they are keeping 
the devops direction as the core way to manage these keys. Just like certs and 
PKI, placing devops focus here allows the organization to adhere to their 
requirements for sensitive/cryptographic keys/data. 

--Morgan


> Thanks,
> -- dims
> 
> On Mon, Aug 3, 2015 at 7:03 AM, Sergii Golovatiuk
>  wrote:
>> Hi,
>> 
>> --
>> Best regards,
>> Sergii Golovatiuk,
>> Skype #golserge
>> IRC #holser
>> 
>>> On Mon, Aug 3, 2015 at 12:44 PM, Adam Heczko  wrote:
>>> 
>>> Hi folks, we are discussing operations on sensitive data.
>>> May I ask you what security controls Pacemaker provides?
>> 
>> 
>> Pacemaker doesn't exchange any security information.
>> 
>>> 
>>> How we could audit its operations and data it is accessing?
>> 
>> 
>> Just audit all OCF scripts as they may contain some bits for storing
>> security data on CIB. If they store any data, then this data is exchanged
>> across all pacemaker nodes.
>> 
>>> 
>>> The same question arises when discussing native Keystone solution.
>>> From the security perspective, reduction of attack surface would be
>>> beneficial.
>>> IMO Keystone native solution would be the best possible, unless even today
>>> Pacemaker is accessing Keystone sensitive data (not sure about it).
>>> Bogdan, could you clarify this a bit?
>> 
>> 
>> Native HA solution is very costy which may require a lot of engineering
>> resource to make keystone ready with HA patterns (consensus algorithms,
>> network issues, split brain)
>> 
>>> 
>>> 
>>> Regards,
>>> 
>>> Adam
>>> 
>>> 
>>> On Mon, Aug 3, 2015 at 12:02 PM, Sergii Golovatiuk
>>>  wrote:
 
 Hi,
 
 I agree with Bogdan that key rotation procedure should be part of HA
 solution. If you make a simple script then this script will be a single
 point of failure. It requires operator's attention so it may lead to human
 errors also. Adding monitoring around it or expiration time is not a
 solution either.
 
 There are couple of approaches how to make 'key rotation' HA ready.
 
 1. Make it as part of pacemaker OCF script. In this case pacemaker will
 select the node which will be Master. It will be responsible for key
 generations. In this case OCF script should have logic how to distribute
 keys. It may be puppet or some rsync wrappers like lsyncd or special
 function in OCF script itself. In this case when master is dead, pacemaker
 will elect a new master while old one is down.
 
 2. Make keystone HA ready by itself. In this case, all logic of
 distributed system should be covered in keystone. keystone should be able 
 to
 detect peers, it should have some consensus algorithms (PAXOS, RAFT, ZAB).
 Using this algorithm master should be elected. Master should generate keys
 and distribute them somehow to all other peers. Key distribution may be 
 done
 via rsync or using memcache/db as centralized storage for keys. Master may
 send a event to all peers or peers may check memcache/db periodically.
 
 
 
 
 
 --
 Best regards,
 Sergii Golovatiuk,
 Skype #golserge
 IRC #holser
 
 On Mon, Aug 3, 2015 at 2:37 AM, David Medberry 
 wrote:
> 
> Glad to see you weighed in on this. -d
> 
> On Sat, Aug 1, 2015 at 3:50 PM, Matt Fischer 
> wrote:
>> 
>> Agree that you guys are way over thinking this. You don't need to
>> rotate keys at exactly the same time, we do it in within a one or two 
>> hours
>> typically based on how our regions are setup. We do it with puppet, 
>> puppet
>> runs on one keystone node at a time and drops the keys into place. The
>> actual rotation and generation we handle with a script that then proposes
>> the new key structure as a review which is then approved and deployed via
>> the normal process. For this process I always drop keys 0, 1, 2 into 
>> place,
>> I'm not bumping the numbers like the normal rotations do.
>> 
>> We had also considered ansible which would be perfect for this, but
>> that makes our ability to setup throw away environments with a single 
>> click
>> a bit more complicated. If you don't have that requirement, a simple 
>> ansible
>> script is what you should do.
>> 
>> 
>>> On Sat, Aug 1, 2015 at 3:41 PM, Clint Byrum  wrote:
>>> 
>>> Excerpts from Boris Bobrov's message of 2015-08-01 14:18:21 -0700:
> On Saturday 01 August 2015 16:27:17 bdobr

Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-08-03 Thread Davanum Srinivas
agree. "Native HA solution" was already ruled out in several email
threads by keystone cores already (if i remember right). This is a
devops issue and should be handled as such was the feedback.

Thanks,
-- dims

On Mon, Aug 3, 2015 at 7:03 AM, Sergii Golovatiuk
 wrote:
> Hi,
>
> --
> Best regards,
> Sergii Golovatiuk,
> Skype #golserge
> IRC #holser
>
> On Mon, Aug 3, 2015 at 12:44 PM, Adam Heczko  wrote:
>>
>> Hi folks, we are discussing operations on sensitive data.
>> May I ask you what security controls Pacemaker provides?
>
>
> Pacemaker doesn't exchange any security information.
>
>>
>> How we could audit its operations and data it is accessing?
>
>
> Just audit all OCF scripts as they may contain some bits for storing
> security data on CIB. If they store any data, then this data is exchanged
> across all pacemaker nodes.
>
>>
>> The same question arises when discussing native Keystone solution.
>> From the security perspective, reduction of attack surface would be
>> beneficial.
>> IMO Keystone native solution would be the best possible, unless even today
>> Pacemaker is accessing Keystone sensitive data (not sure about it).
>> Bogdan, could you clarify this a bit?
>
>
> Native HA solution is very costy which may require a lot of engineering
> resource to make keystone ready with HA patterns (consensus algorithms,
> network issues, split brain)
>
>>
>>
>> Regards,
>>
>> Adam
>>
>>
>> On Mon, Aug 3, 2015 at 12:02 PM, Sergii Golovatiuk
>>  wrote:
>>>
>>> Hi,
>>>
>>> I agree with Bogdan that key rotation procedure should be part of HA
>>> solution. If you make a simple script then this script will be a single
>>> point of failure. It requires operator's attention so it may lead to human
>>> errors also. Adding monitoring around it or expiration time is not a
>>> solution either.
>>>
>>> There are couple of approaches how to make 'key rotation' HA ready.
>>>
>>> 1. Make it as part of pacemaker OCF script. In this case pacemaker will
>>> select the node which will be Master. It will be responsible for key
>>> generations. In this case OCF script should have logic how to distribute
>>> keys. It may be puppet or some rsync wrappers like lsyncd or special
>>> function in OCF script itself. In this case when master is dead, pacemaker
>>> will elect a new master while old one is down.
>>>
>>> 2. Make keystone HA ready by itself. In this case, all logic of
>>> distributed system should be covered in keystone. keystone should be able to
>>> detect peers, it should have some consensus algorithms (PAXOS, RAFT, ZAB).
>>> Using this algorithm master should be elected. Master should generate keys
>>> and distribute them somehow to all other peers. Key distribution may be done
>>> via rsync or using memcache/db as centralized storage for keys. Master may
>>> send a event to all peers or peers may check memcache/db periodically.
>>>
>>>
>>>
>>>
>>>
>>> --
>>> Best regards,
>>> Sergii Golovatiuk,
>>> Skype #golserge
>>> IRC #holser
>>>
>>> On Mon, Aug 3, 2015 at 2:37 AM, David Medberry 
>>> wrote:

 Glad to see you weighed in on this. -d

 On Sat, Aug 1, 2015 at 3:50 PM, Matt Fischer 
 wrote:
>
> Agree that you guys are way over thinking this. You don't need to
> rotate keys at exactly the same time, we do it in within a one or two 
> hours
> typically based on how our regions are setup. We do it with puppet, puppet
> runs on one keystone node at a time and drops the keys into place. The
> actual rotation and generation we handle with a script that then proposes
> the new key structure as a review which is then approved and deployed via
> the normal process. For this process I always drop keys 0, 1, 2 into 
> place,
> I'm not bumping the numbers like the normal rotations do.
>
> We had also considered ansible which would be perfect for this, but
> that makes our ability to setup throw away environments with a single 
> click
> a bit more complicated. If you don't have that requirement, a simple 
> ansible
> script is what you should do.
>
>
> On Sat, Aug 1, 2015 at 3:41 PM, Clint Byrum  wrote:
>>
>> Excerpts from Boris Bobrov's message of 2015-08-01 14:18:21 -0700:
>> > On Saturday 01 August 2015 16:27:17 bdobre...@mirantis.com wrote:
>> > > I suggest to use pacemaker multistate clone resource to rotate and
>> > rsync
>> > > fernet tokens from local directories across cluster nodes. The
>> > > resource
>> > > prototype is described here
>> > > https://etherpad.openstack.org/p/fernet_tokens_pacemaker> Pros:
>> > Pacemaker
>> > > will care about CAP/split-brain stuff for us, we just design
>> > > rotate and
>> > > rsync logic. Also no shared FS/DB involved but only Corosync CIB -
>> > > to
>> > store
>> > > few internal resource state related params, not tokens. Cons:
>> > > Keystone
>> > > nodes hosting fernet tokens directories must be members 

Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-08-03 Thread Sergii Golovatiuk
Hi,

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Mon, Aug 3, 2015 at 12:44 PM, Adam Heczko  wrote:

> Hi folks, we are discussing operations on sensitive data.
> May I ask you what security controls Pacemaker provides?
>

Pacemaker doesn't exchange any security information.


> How we could audit its operations and data it is accessing?
>

Just audit all OCF scripts as they may contain some bits for storing
security data on CIB. If they store any data, then this data is exchanged
across all pacemaker nodes.


> The same question arises when discussing native Keystone solution.
> From the security perspective, reduction of attack surface would be
> beneficial.
> IMO Keystone native solution would be the best possible, unless even today
> Pacemaker is accessing Keystone sensitive data (not sure about it).
> Bogdan, could you clarify this a bit?
>

Native HA solution is very costy which may require a lot of engineering
resource to make keystone ready with HA patterns (consensus algorithms,
network issues, split brain)


>
> Regards,
>
> Adam
>
>
> On Mon, Aug 3, 2015 at 12:02 PM, Sergii Golovatiuk <
> sgolovat...@mirantis.com> wrote:
>
>> Hi,
>>
>> I agree with Bogdan that key rotation procedure should be part of HA
>> solution. If you make a simple script then this script will be a single
>> point of failure. It requires operator's attention so it may lead to human
>> errors also. Adding monitoring around it or expiration time is not a
>> solution either.
>>
>> There are couple of approaches how to make 'key rotation' HA ready.
>>
>> 1. Make it as part of pacemaker OCF script. In this case pacemaker will
>> select the node which will be Master. It will be responsible for key
>> generations. In this case OCF script should have logic how to distribute
>> keys. It may be puppet or some rsync wrappers like lsyncd or special
>> function in OCF script itself. In this case when master is dead, pacemaker
>> will elect a new master while old one is down.
>>
>> 2. Make keystone HA ready by itself. In this case, all logic of
>> distributed system should be covered in keystone. keystone should be able
>> to detect peers, it should have some consensus algorithms (PAXOS, RAFT,
>> ZAB). Using this algorithm master should be elected. Master should generate
>> keys and distribute them somehow to all other peers. Key distribution may
>> be done via rsync or using memcache/db as centralized storage for keys.
>> Master may send a event to all peers or peers may check memcache/db
>> periodically.
>>
>>
>>
>>
>>
>> --
>> Best regards,
>> Sergii Golovatiuk,
>> Skype #golserge
>> IRC #holser
>>
>> On Mon, Aug 3, 2015 at 2:37 AM, David Medberry 
>> wrote:
>>
>>> Glad to see you weighed in on this. -d
>>>
>>> On Sat, Aug 1, 2015 at 3:50 PM, Matt Fischer 
>>> wrote:
>>>
 Agree that you guys are way over thinking this. You don't need to
 rotate keys at exactly the same time, we do it in within a one or two hours
 typically based on how our regions are setup. We do it with puppet, puppet
 runs on one keystone node at a time and drops the keys into place. The
 actual rotation and generation we handle with a script that then proposes
 the new key structure as a review which is then approved and deployed via
 the normal process. For this process I always drop keys 0, 1, 2 into place,
 I'm not bumping the numbers like the normal rotations do.

 We had also considered ansible which would be perfect for this, but
 that makes our ability to setup throw away environments with a single click
 a bit more complicated. If you don't have that requirement, a simple
 ansible script is what you should do.


 On Sat, Aug 1, 2015 at 3:41 PM, Clint Byrum  wrote:

> Excerpts from Boris Bobrov's message of 2015-08-01 14:18:21 -0700:
> > On Saturday 01 August 2015 16:27:17 bdobre...@mirantis.com wrote:
> > > I suggest to use pacemaker multistate clone resource to rotate and
> > rsync
> > > fernet tokens from local directories across cluster nodes. The
> resource
> > > prototype is described here
> > > https://etherpad.openstack.org/p/fernet_tokens_pacemaker> Pros:
> > Pacemaker
> > > will care about CAP/split-brain stuff for us, we just design
> rotate and
> > > rsync logic. Also no shared FS/DB involved but only Corosync CIB -
> to
> > store
> > > few internal resource state related params, not tokens. Cons:
> Keystone
> > > nodes hosting fernet tokens directories must be members of
> pacemaker
> > > cluster. Also custom OCF script should be created to implement
> this. __
> > > Regards,
> > > Bogdan Dobrelya.
> > > IRC: bogdando
> >
> > Looks complex.
> >
> > I suggest this kind of bash or python script, running on Fuel master
> node:
> >
> > 0. Check that all controllers are online;
> > 1. Go to one of the controllers, rotate keys there;
>

Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-08-03 Thread Adam Heczko
Hi folks, we are discussing operations on sensitive data.
May I ask you what security controls Pacemaker provides?
How we could audit its operations and data it is accessing?
The same question arises when discussing native Keystone solution.
>From the security perspective, reduction of attack surface would be
beneficial.
IMO Keystone native solution would be the best possible, unless even today
Pacemaker is accessing Keystone sensitive data (not sure about it).
Bogdan, could you clarify this a bit?

Regards,

Adam


On Mon, Aug 3, 2015 at 12:02 PM, Sergii Golovatiuk  wrote:

> Hi,
>
> I agree with Bogdan that key rotation procedure should be part of HA
> solution. If you make a simple script then this script will be a single
> point of failure. It requires operator's attention so it may lead to human
> errors also. Adding monitoring around it or expiration time is not a
> solution either.
>
> There are couple of approaches how to make 'key rotation' HA ready.
>
> 1. Make it as part of pacemaker OCF script. In this case pacemaker will
> select the node which will be Master. It will be responsible for key
> generations. In this case OCF script should have logic how to distribute
> keys. It may be puppet or some rsync wrappers like lsyncd or special
> function in OCF script itself. In this case when master is dead, pacemaker
> will elect a new master while old one is down.
>
> 2. Make keystone HA ready by itself. In this case, all logic of
> distributed system should be covered in keystone. keystone should be able
> to detect peers, it should have some consensus algorithms (PAXOS, RAFT,
> ZAB). Using this algorithm master should be elected. Master should generate
> keys and distribute them somehow to all other peers. Key distribution may
> be done via rsync or using memcache/db as centralized storage for keys.
> Master may send a event to all peers or peers may check memcache/db
> periodically.
>
>
>
>
>
> --
> Best regards,
> Sergii Golovatiuk,
> Skype #golserge
> IRC #holser
>
> On Mon, Aug 3, 2015 at 2:37 AM, David Medberry 
> wrote:
>
>> Glad to see you weighed in on this. -d
>>
>> On Sat, Aug 1, 2015 at 3:50 PM, Matt Fischer 
>> wrote:
>>
>>> Agree that you guys are way over thinking this. You don't need to rotate
>>> keys at exactly the same time, we do it in within a one or two hours
>>> typically based on how our regions are setup. We do it with puppet, puppet
>>> runs on one keystone node at a time and drops the keys into place. The
>>> actual rotation and generation we handle with a script that then proposes
>>> the new key structure as a review which is then approved and deployed via
>>> the normal process. For this process I always drop keys 0, 1, 2 into place,
>>> I'm not bumping the numbers like the normal rotations do.
>>>
>>> We had also considered ansible which would be perfect for this, but that
>>> makes our ability to setup throw away environments with a single click a
>>> bit more complicated. If you don't have that requirement, a simple ansible
>>> script is what you should do.
>>>
>>>
>>> On Sat, Aug 1, 2015 at 3:41 PM, Clint Byrum  wrote:
>>>
 Excerpts from Boris Bobrov's message of 2015-08-01 14:18:21 -0700:
 > On Saturday 01 August 2015 16:27:17 bdobre...@mirantis.com wrote:
 > > I suggest to use pacemaker multistate clone resource to rotate and
 > rsync
 > > fernet tokens from local directories across cluster nodes. The
 resource
 > > prototype is described here
 > > https://etherpad.openstack.org/p/fernet_tokens_pacemaker> Pros:
 > Pacemaker
 > > will care about CAP/split-brain stuff for us, we just design rotate
 and
 > > rsync logic. Also no shared FS/DB involved but only Corosync CIB -
 to
 > store
 > > few internal resource state related params, not tokens. Cons:
 Keystone
 > > nodes hosting fernet tokens directories must be members of pacemaker
 > > cluster. Also custom OCF script should be created to implement
 this. __
 > > Regards,
 > > Bogdan Dobrelya.
 > > IRC: bogdando
 >
 > Looks complex.
 >
 > I suggest this kind of bash or python script, running on Fuel master
 node:
 >
 > 0. Check that all controllers are online;
 > 1. Go to one of the controllers, rotate keys there;
 > 2. Fetch key 0 from there;
 > 3. For each other controller rotate keys there and put the 0-key
 instead of
 > their new 0-key.
 > 4. If any of the nodes fail to get new keys (because they went
 offline or for
 > some other reason) revert the rotate (move the key with the biggest
 index
 > back to 0).
 >
 > The script can be launched by cron or by button in Fuel.
 >
 > I don't see anything critically bad if one rotation/sync event fails.
 >

 This too is overly complex and will cause failures. If you replace key
 0,
 you will stop validating tokens that were encrypted with the old key 0.

 You simply need 

Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-08-03 Thread Sergii Golovatiuk
Hi,

I agree with Bogdan that key rotation procedure should be part of HA
solution. If you make a simple script then this script will be a single
point of failure. It requires operator's attention so it may lead to human
errors also. Adding monitoring around it or expiration time is not a
solution either.

There are couple of approaches how to make 'key rotation' HA ready.

1. Make it as part of pacemaker OCF script. In this case pacemaker will
select the node which will be Master. It will be responsible for key
generations. In this case OCF script should have logic how to distribute
keys. It may be puppet or some rsync wrappers like lsyncd or special
function in OCF script itself. In this case when master is dead, pacemaker
will elect a new master while old one is down.

2. Make keystone HA ready by itself. In this case, all logic of distributed
system should be covered in keystone. keystone should be able to detect
peers, it should have some consensus algorithms (PAXOS, RAFT, ZAB). Using
this algorithm master should be elected. Master should generate keys and
distribute them somehow to all other peers. Key distribution may be done
via rsync or using memcache/db as centralized storage for keys. Master may
send a event to all peers or peers may check memcache/db periodically.





--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Mon, Aug 3, 2015 at 2:37 AM, David Medberry 
wrote:

> Glad to see you weighed in on this. -d
>
> On Sat, Aug 1, 2015 at 3:50 PM, Matt Fischer  wrote:
>
>> Agree that you guys are way over thinking this. You don't need to rotate
>> keys at exactly the same time, we do it in within a one or two hours
>> typically based on how our regions are setup. We do it with puppet, puppet
>> runs on one keystone node at a time and drops the keys into place. The
>> actual rotation and generation we handle with a script that then proposes
>> the new key structure as a review which is then approved and deployed via
>> the normal process. For this process I always drop keys 0, 1, 2 into place,
>> I'm not bumping the numbers like the normal rotations do.
>>
>> We had also considered ansible which would be perfect for this, but that
>> makes our ability to setup throw away environments with a single click a
>> bit more complicated. If you don't have that requirement, a simple ansible
>> script is what you should do.
>>
>>
>> On Sat, Aug 1, 2015 at 3:41 PM, Clint Byrum  wrote:
>>
>>> Excerpts from Boris Bobrov's message of 2015-08-01 14:18:21 -0700:
>>> > On Saturday 01 August 2015 16:27:17 bdobre...@mirantis.com wrote:
>>> > > I suggest to use pacemaker multistate clone resource to rotate and
>>> > rsync
>>> > > fernet tokens from local directories across cluster nodes. The
>>> resource
>>> > > prototype is described here
>>> > > https://etherpad.openstack.org/p/fernet_tokens_pacemaker> Pros:
>>> > Pacemaker
>>> > > will care about CAP/split-brain stuff for us, we just design rotate
>>> and
>>> > > rsync logic. Also no shared FS/DB involved but only Corosync CIB - to
>>> > store
>>> > > few internal resource state related params, not tokens. Cons:
>>> Keystone
>>> > > nodes hosting fernet tokens directories must be members of pacemaker
>>> > > cluster. Also custom OCF script should be created to implement this.
>>> __
>>> > > Regards,
>>> > > Bogdan Dobrelya.
>>> > > IRC: bogdando
>>> >
>>> > Looks complex.
>>> >
>>> > I suggest this kind of bash or python script, running on Fuel master
>>> node:
>>> >
>>> > 0. Check that all controllers are online;
>>> > 1. Go to one of the controllers, rotate keys there;
>>> > 2. Fetch key 0 from there;
>>> > 3. For each other controller rotate keys there and put the 0-key
>>> instead of
>>> > their new 0-key.
>>> > 4. If any of the nodes fail to get new keys (because they went offline
>>> or for
>>> > some other reason) revert the rotate (move the key with the biggest
>>> index
>>> > back to 0).
>>> >
>>> > The script can be launched by cron or by button in Fuel.
>>> >
>>> > I don't see anything critically bad if one rotation/sync event fails.
>>> >
>>>
>>> This too is overly complex and will cause failures. If you replace key 0,
>>> you will stop validating tokens that were encrypted with the old key 0.
>>>
>>> You simply need to run rotate on one, and then rsync that key repository
>>> to all of the others. You _must not_ run rotate again until you rsync to
>>> all of the others, since the key 0 from one rotation becomes the primary
>>> token encrypting key going forward, so you need it to get pushed out to
>>> all nodes as 0 first.
>>>
>>> Don't over think it. Just read http://lbragstad.com/?p=133 and it will
>>> remain simple.
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> __

Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-08-02 Thread David Medberry
Glad to see you weighed in on this. -d

On Sat, Aug 1, 2015 at 3:50 PM, Matt Fischer  wrote:

> Agree that you guys are way over thinking this. You don't need to rotate
> keys at exactly the same time, we do it in within a one or two hours
> typically based on how our regions are setup. We do it with puppet, puppet
> runs on one keystone node at a time and drops the keys into place. The
> actual rotation and generation we handle with a script that then proposes
> the new key structure as a review which is then approved and deployed via
> the normal process. For this process I always drop keys 0, 1, 2 into place,
> I'm not bumping the numbers like the normal rotations do.
>
> We had also considered ansible which would be perfect for this, but that
> makes our ability to setup throw away environments with a single click a
> bit more complicated. If you don't have that requirement, a simple ansible
> script is what you should do.
>
>
> On Sat, Aug 1, 2015 at 3:41 PM, Clint Byrum  wrote:
>
>> Excerpts from Boris Bobrov's message of 2015-08-01 14:18:21 -0700:
>> > On Saturday 01 August 2015 16:27:17 bdobre...@mirantis.com wrote:
>> > > I suggest to use pacemaker multistate clone resource to rotate and
>> > rsync
>> > > fernet tokens from local directories across cluster nodes. The
>> resource
>> > > prototype is described here
>> > > https://etherpad.openstack.org/p/fernet_tokens_pacemaker> Pros:
>> > Pacemaker
>> > > will care about CAP/split-brain stuff for us, we just design rotate
>> and
>> > > rsync logic. Also no shared FS/DB involved but only Corosync CIB - to
>> > store
>> > > few internal resource state related params, not tokens. Cons: Keystone
>> > > nodes hosting fernet tokens directories must be members of pacemaker
>> > > cluster. Also custom OCF script should be created to implement this.
>> __
>> > > Regards,
>> > > Bogdan Dobrelya.
>> > > IRC: bogdando
>> >
>> > Looks complex.
>> >
>> > I suggest this kind of bash or python script, running on Fuel master
>> node:
>> >
>> > 0. Check that all controllers are online;
>> > 1. Go to one of the controllers, rotate keys there;
>> > 2. Fetch key 0 from there;
>> > 3. For each other controller rotate keys there and put the 0-key
>> instead of
>> > their new 0-key.
>> > 4. If any of the nodes fail to get new keys (because they went offline
>> or for
>> > some other reason) revert the rotate (move the key with the biggest
>> index
>> > back to 0).
>> >
>> > The script can be launched by cron or by button in Fuel.
>> >
>> > I don't see anything critically bad if one rotation/sync event fails.
>> >
>>
>> This too is overly complex and will cause failures. If you replace key 0,
>> you will stop validating tokens that were encrypted with the old key 0.
>>
>> You simply need to run rotate on one, and then rsync that key repository
>> to all of the others. You _must not_ run rotate again until you rsync to
>> all of the others, since the key 0 from one rotation becomes the primary
>> token encrypting key going forward, so you need it to get pushed out to
>> all nodes as 0 first.
>>
>> Don't over think it. Just read http://lbragstad.com/?p=133 and it will
>> remain simple.
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-08-01 Thread Boris Bobrov
On Sat, Aug 1, 2015 at 3:41 PM, Clint Byrum  wrote:
> This too is overly complex and will cause failures. If you replace key 0,
> you will stop validating tokens that were encrypted with the old key 0.

No. Key 0 is replaced after rotation.

Also, come on, does http://paste.openstack.org/show/406674/ look overly 
complex? (it should be launched from Fuel master node).

> You simply need to run rotate on one, and then rsync that key repository
> to all of the others. You _must not_ run rotate again until you rsync to
> all of the others, since the key 0 from one rotation becomes the primary
> token encrypting key going forward, so you need it to get pushed out to
> all nodes as 0 first.

I agree. What step in my logic misses that part?

On Saturday 01 August 2015 15:50:13 Matt Fischer wrote:
> Agree that you guys are way over thinking this. You don't need to rotate
> keys at exactly the same time, we do it in within a one or two hours
> typically based on how our regions are setup. We do it with puppet, 
puppet
> runs on one keystone node at a time and drops the keys into place.

There is a constraint: sometimes you cannot connect from one keystone 
node to another. For example, in a cloud deployed by Fuel you cannot ssh 
from one controller to another afaik.

> The
> actual rotation and generation we handle with a script that then 
proposes
> the new key structure as a review which is then approved and deployed 
via
> the normal process. For this process I always drop keys 0, 1, 2 into 
place,
> I'm not bumping the numbers like the normal rotations do.

I dislike this solution because there is more than 1 point of configiration. If 
your cloud administrator decides to use not 3 keys, but 5, he will have to 
change not only the option in keystone.conf, but also in your script. Yes, 
keystone will still work, but there will be some inconsistency.

I also dislike it because keys should be generated only be a single tool. If it 
would turn out that keys used for fernet tokens are too weak and 
developers decide to change key length from 32 bytes to 64, it will have to 
be fixed outside of that tool too. Which is not good. Now this tool is 
keystone-manage

> We had also considered ansible which would be perfect for this, but that
> makes our ability to setup throw away environments with a single click a
> bit more complicated. If you don't have that requirement, a simple 
ansible
> script is what you should do.
> 
> On Sat, Aug 1, 2015 at 3:41 PM, Clint Byrum  wrote:
> > Excerpts from Boris Bobrov's message of 2015-08-01 14:18:21 -0700:
> > > On Saturday 01 August 2015 16:27:17 bdobre...@mirantis.com 
wrote:
> > > > I suggest to use pacemaker multistate clone resource to rotate 
and
> > > 
> > > rsync
> > > 
> > > > fernet tokens from local directories across cluster nodes. The
> > > > resource
> > > > prototype is described here
> > > 
> > > > https://etherpad.openstack.org/p/fernet_tokens_pacemaker> 
Pros:
> > > Pacemaker
> > > 
> > > > will care about CAP/split-brain stuff for us, we just design rotate
> > > > and
> > > > rsync logic. Also no shared FS/DB involved but only Corosync CIB - 
to
> > > 
> > > store
> > > 
> > > > few internal resource state related params, not tokens. Cons: 
Keystone
> > > > nodes hosting fernet tokens directories must be members of 
pacemaker
> > > > cluster. Also custom OCF script should be created to implement 
this.
> > > > __
> > > > Regards,
> > > > Bogdan Dobrelya.
> > > > IRC: bogdando
> > > 
> > > Looks complex.
> > > 
> > > I suggest this kind of bash or python script, running on Fuel master
> > 
> > node:
> > > 0. Check that all controllers are online;
> > > 1. Go to one of the controllers, rotate keys there;
> > > 2. Fetch key 0 from there;
> > > 3. For each other controller rotate keys there and put the 0-key 
instead
> > 
> > of
> > 
> > > their new 0-key.
> > > 4. If any of the nodes fail to get new keys (because they went offline
> > 
> > or for
> > 
> > > some other reason) revert the rotate (move the key with the 
biggest
> > > index
> > > back to 0).
> > > 
> > > The script can be launched by cron or by button in Fuel.
> > > 
> > > I don't see anything critically bad if one rotation/sync event fails.
> > 
> > This too is overly complex and will cause failures. If you replace key 0,
> > you will stop validating tokens that were encrypted with the old key 0.
> > 
> > You simply need to run rotate on one, and then rsync that key 
repository
> > to all of the others. You _must not_ run rotate again until you rsync to
> > all of the others, since the key 0 from one rotation becomes the 
primary
> > token encrypting key going forward, so you need it to get pushed out 
to
> > all nodes as 0 first.
> > 
> > Don't over think it. Just read http://lbragstad.com/?p=133 and it will
> > remain simple.
> > 
> > 
__
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStac

Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-08-01 Thread Matt Fischer
Agree that you guys are way over thinking this. You don't need to rotate
keys at exactly the same time, we do it in within a one or two hours
typically based on how our regions are setup. We do it with puppet, puppet
runs on one keystone node at a time and drops the keys into place. The
actual rotation and generation we handle with a script that then proposes
the new key structure as a review which is then approved and deployed via
the normal process. For this process I always drop keys 0, 1, 2 into place,
I'm not bumping the numbers like the normal rotations do.

We had also considered ansible which would be perfect for this, but that
makes our ability to setup throw away environments with a single click a
bit more complicated. If you don't have that requirement, a simple ansible
script is what you should do.


On Sat, Aug 1, 2015 at 3:41 PM, Clint Byrum  wrote:

> Excerpts from Boris Bobrov's message of 2015-08-01 14:18:21 -0700:
> > On Saturday 01 August 2015 16:27:17 bdobre...@mirantis.com wrote:
> > > I suggest to use pacemaker multistate clone resource to rotate and
> > rsync
> > > fernet tokens from local directories across cluster nodes. The resource
> > > prototype is described here
> > > https://etherpad.openstack.org/p/fernet_tokens_pacemaker> Pros:
> > Pacemaker
> > > will care about CAP/split-brain stuff for us, we just design rotate and
> > > rsync logic. Also no shared FS/DB involved but only Corosync CIB - to
> > store
> > > few internal resource state related params, not tokens. Cons: Keystone
> > > nodes hosting fernet tokens directories must be members of pacemaker
> > > cluster. Also custom OCF script should be created to implement this. __
> > > Regards,
> > > Bogdan Dobrelya.
> > > IRC: bogdando
> >
> > Looks complex.
> >
> > I suggest this kind of bash or python script, running on Fuel master
> node:
> >
> > 0. Check that all controllers are online;
> > 1. Go to one of the controllers, rotate keys there;
> > 2. Fetch key 0 from there;
> > 3. For each other controller rotate keys there and put the 0-key instead
> of
> > their new 0-key.
> > 4. If any of the nodes fail to get new keys (because they went offline
> or for
> > some other reason) revert the rotate (move the key with the biggest index
> > back to 0).
> >
> > The script can be launched by cron or by button in Fuel.
> >
> > I don't see anything critically bad if one rotation/sync event fails.
> >
>
> This too is overly complex and will cause failures. If you replace key 0,
> you will stop validating tokens that were encrypted with the old key 0.
>
> You simply need to run rotate on one, and then rsync that key repository
> to all of the others. You _must not_ run rotate again until you rsync to
> all of the others, since the key 0 from one rotation becomes the primary
> token encrypting key going forward, so you need it to get pushed out to
> all nodes as 0 first.
>
> Don't over think it. Just read http://lbragstad.com/?p=133 and it will
> remain simple.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-08-01 Thread Clint Byrum
Excerpts from Boris Bobrov's message of 2015-08-01 14:18:21 -0700:
> On Saturday 01 August 2015 16:27:17 bdobre...@mirantis.com wrote:
> > I suggest to use pacemaker multistate clone resource to rotate and 
> rsync
> > fernet tokens from local directories across cluster nodes. The resource
> > prototype is described here
> > https://etherpad.openstack.org/p/fernet_tokens_pacemaker> Pros: 
> Pacemaker
> > will care about CAP/split-brain stuff for us, we just design rotate and
> > rsync logic. Also no shared FS/DB involved but only Corosync CIB - to 
> store
> > few internal resource state related params, not tokens. Cons: Keystone
> > nodes hosting fernet tokens directories must be members of pacemaker
> > cluster. Also custom OCF script should be created to implement this. __
> > Regards,
> > Bogdan Dobrelya.
> > IRC: bogdando
> 
> Looks complex.
> 
> I suggest this kind of bash or python script, running on Fuel master node:
> 
> 0. Check that all controllers are online;
> 1. Go to one of the controllers, rotate keys there;
> 2. Fetch key 0 from there;
> 3. For each other controller rotate keys there and put the 0-key instead of 
> their new 0-key.
> 4. If any of the nodes fail to get new keys (because they went offline or for 
> some other reason) revert the rotate (move the key with the biggest index 
> back to 0).
> 
> The script can be launched by cron or by button in Fuel.
> 
> I don't see anything critically bad if one rotation/sync event fails.
> 

This too is overly complex and will cause failures. If you replace key 0,
you will stop validating tokens that were encrypted with the old key 0.

You simply need to run rotate on one, and then rsync that key repository
to all of the others. You _must not_ run rotate again until you rsync to
all of the others, since the key 0 from one rotation becomes the primary
token encrypting key going forward, so you need it to get pushed out to
all nodes as 0 first.

Don't over think it. Just read http://lbragstad.com/?p=133 and it will
remain simple.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-08-01 Thread Boris Bobrov
On Saturday 01 August 2015 16:27:17 bdobre...@mirantis.com wrote:
> I suggest to use pacemaker multistate clone resource to rotate and 
rsync
> fernet tokens from local directories across cluster nodes. The resource
> prototype is described here
> https://etherpad.openstack.org/p/fernet_tokens_pacemaker> Pros: 
Pacemaker
> will care about CAP/split-brain stuff for us, we just design rotate and
> rsync logic. Also no shared FS/DB involved but only Corosync CIB - to 
store
> few internal resource state related params, not tokens. Cons: Keystone
> nodes hosting fernet tokens directories must be members of pacemaker
> cluster. Also custom OCF script should be created to implement this. __
> Regards,
> Bogdan Dobrelya.
> IRC: bogdando

Looks complex.

I suggest this kind of bash or python script, running on Fuel master node:

0. Check that all controllers are online;
1. Go to one of the controllers, rotate keys there;
2. Fetch key 0 from there;
3. For each other controller rotate keys there and put the 0-key instead of 
their new 0-key.
4. If any of the nodes fail to get new keys (because they went offline or for 
some other reason) revert the rotate (move the key with the biggest index 
back to 0).

The script can be launched by cron or by button in Fuel.

I don't see anything critically bad if one rotation/sync event fails.

> Matt Fischer also discusses key rotation here:
> 
>   http://www.mattfischer.com/blog/?p=648
> 
> And here:
> 
>   http://www.mattfischer.com/blog/?p=665
> 
> On Mon, Jul 27, 2015 at 2:30 PM, Dolph Mathews 
> wrote:
> …

-- 
С наилучшими пожеланиями,
Boris
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-08-01 Thread Clint Byrum
Meta: Bogdan, please do try to get your email client to reply with references
to the thread, so it doesn't create a new thread.

Excerpts from bdobrelia's message of 2015-08-01 09:27:17 -0700:
> I suggest to use pacemaker multistate clone resource to rotate and rsync 
> fernet tokens from local directories across cluster nodes. The resource 
> prototype is described here 
> https://etherpad.openstack.org/p/fernet_tokens_pacemaker
> Pros: Pacemaker will care about CAP/split-brain stuff for us, we just design 
> rotate and rsync logic. Also no shared FS/DB involved but only Corosync CIB - 
> to store few internal resource state related params, not tokens.
> Cons: Keystone nodes hosting fernet tokens directories must be members of 
> pacemaker cluster. Also custom OCF script should be created to implement this.

This is a massive con. And there is no need for this level of complexity.

Just making sure you only ever run key rotation in one place concurrently,
followed by an rsync push to all other nodes, is a lot simpler to enact
than pacemaker.

That said, both of those solutions benefit from a feature of the keys
being in the local filesystem: it decouples the way you do HA from the way
you provide a performant service entirely.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-08-01 Thread bdobrelia
I suggest to use pacemaker multistate clone resource to rotate and rsync fernet 
tokens from local directories across cluster nodes. The resource prototype is 
described here https://etherpad.openstack.org/p/fernet_tokens_pacemaker
Pros: Pacemaker will care about CAP/split-brain stuff for us, we just design 
rotate and rsync logic. Also no shared FS/DB involved but only Corosync CIB - 
to store few internal resource state related params, not tokens.
Cons: Keystone nodes hosting fernet tokens directories must be members of 
pacemaker cluster. Also custom OCF script should be created to implement this.
__
Regards,
Bogdan Dobrelya.
IRC: bogdando



Matt Fischer also discusses key rotation here:

  http://www.mattfischer.com/blog/?p=648

And here:

  http://www.mattfischer.com/blog/?p=665

On Mon, Jul 27, 2015 at 2:30 PM, Dolph Mathews 
wrote:
…__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-07-27 Thread Dolph Mathews
Matt Fischer also discusses key rotation here:

  http://www.mattfischer.com/blog/?p=648

And here:

  http://www.mattfischer.com/blog/?p=665

On Mon, Jul 27, 2015 at 2:30 PM, Dolph Mathews 
wrote:

>
>
> On Mon, Jul 27, 2015 at 2:03 PM, Clint Byrum  wrote:
>
>> Excerpts from Dolph Mathews's message of 2015-07-27 11:48:12 -0700:
>> > On Mon, Jul 27, 2015 at 1:31 PM, Clint Byrum  wrote:
>> >
>> > > Excerpts from Alexander Makarov's message of 2015-07-27 10:01:34
>> -0700:
>> > > > Greetings!
>> > > >
>> > > > I'd like to discuss pro's and contra's of having Fernet encryption
>> keys
>> > > > stored in a database backend.
>> > > > The idea itself emerged during discussion about synchronizing
>> rotated
>> > > keys
>> > > > in HA environment.
>> > > > Now Fernet keys are stored in the filesystem that has some
>> availability
>> > > > issues in unstable cluster.
>> > > > OTOH, making SQL highly available is considered easier than that
>> for a
>> > > > filesystem.
>> > > >
>> > >
>> > > I don't think HA is the root of the problem here. The problem is
>> > > synchronization. If I have 3 keystone servers (n+1), and I rotate
>> keys on
>> > > them, I must very carefully restart them all at the exact right time
>> to
>> > > make sure one of them doesn't issue a token which will not be
>> validated
>> > > on another. This is quite a real possibility because the validation
>> > > will not come from the user, but from the service, so it's not like we
>> > > can use simple persistence rules. One would need a layer 7 capable
>> load
>> > > balancer that can find the token ID and make sure it goes back to the
>> > > server that issued it.
>> > >
>> >
>> > This is not true (or if it is, I'd love see a bug report).
>> keystone-manage
>> > fernet_rotate uses a three phase rotation strategy (staged -> primary ->
>> > secondary) that allows you to distribute a staged key (used only for
>> token
>> > validation) throughout your cluster before it becomes a primary key
>> (used
>> > for token creation and validation) anywhere. Secondary keys are only
>> used
>> > for token validation.
>> >
>> > All you have to do is atomically replace the fernet key directory with a
>> > new key set.
>> >
>> > You also don't have to restart keystone for it to pickup new keys
>> dropped
>> > onto the filesystem beneath it.
>> >
>>
>> That's great news! Is this documented anywhere? I dug through the
>> operators guides, security guide, install guide, etc. Nothing described
>> this dance, which is impressive and should be written down!
>>
>
> (BTW, your original assumption would normally have been an accurate one!)
>
> I don't believe it's documented in any of those places, yet. The best
> explanation of the three phases in tree I'm aware of is probably this
> (which isn't particularly accessible..):
>
>
> https://github.com/openstack/keystone/blob/6a6fcc2/keystone/cmd/cli.py#L208-L223
>
> Lance Bragstad and I also gave a small presentation at the Vancouver
> summit on the behavior and he mentions the same on one of his blog posts:
>
>   https://www.youtube.com/watch?v=duRBlm9RtCw&feature=youtu.be
>   http://lbragstad.com/?p=133
>
>
>> I even tried to discern how it worked from the code but it actually
>> looks like it does not work the way you describe on casual investigation.
>>
>
> I don't blame you! I'll work to improve the user-facing docs on the topic.
>
>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-07-27 Thread Dolph Mathews
On Mon, Jul 27, 2015 at 2:03 PM, Clint Byrum  wrote:

> Excerpts from Dolph Mathews's message of 2015-07-27 11:48:12 -0700:
> > On Mon, Jul 27, 2015 at 1:31 PM, Clint Byrum  wrote:
> >
> > > Excerpts from Alexander Makarov's message of 2015-07-27 10:01:34 -0700:
> > > > Greetings!
> > > >
> > > > I'd like to discuss pro's and contra's of having Fernet encryption
> keys
> > > > stored in a database backend.
> > > > The idea itself emerged during discussion about synchronizing rotated
> > > keys
> > > > in HA environment.
> > > > Now Fernet keys are stored in the filesystem that has some
> availability
> > > > issues in unstable cluster.
> > > > OTOH, making SQL highly available is considered easier than that for
> a
> > > > filesystem.
> > > >
> > >
> > > I don't think HA is the root of the problem here. The problem is
> > > synchronization. If I have 3 keystone servers (n+1), and I rotate keys
> on
> > > them, I must very carefully restart them all at the exact right time to
> > > make sure one of them doesn't issue a token which will not be validated
> > > on another. This is quite a real possibility because the validation
> > > will not come from the user, but from the service, so it's not like we
> > > can use simple persistence rules. One would need a layer 7 capable load
> > > balancer that can find the token ID and make sure it goes back to the
> > > server that issued it.
> > >
> >
> > This is not true (or if it is, I'd love see a bug report).
> keystone-manage
> > fernet_rotate uses a three phase rotation strategy (staged -> primary ->
> > secondary) that allows you to distribute a staged key (used only for
> token
> > validation) throughout your cluster before it becomes a primary key (used
> > for token creation and validation) anywhere. Secondary keys are only used
> > for token validation.
> >
> > All you have to do is atomically replace the fernet key directory with a
> > new key set.
> >
> > You also don't have to restart keystone for it to pickup new keys dropped
> > onto the filesystem beneath it.
> >
>
> That's great news! Is this documented anywhere? I dug through the
> operators guides, security guide, install guide, etc. Nothing described
> this dance, which is impressive and should be written down!
>

(BTW, your original assumption would normally have been an accurate one!)

I don't believe it's documented in any of those places, yet. The best
explanation of the three phases in tree I'm aware of is probably this
(which isn't particularly accessible..):


https://github.com/openstack/keystone/blob/6a6fcc2/keystone/cmd/cli.py#L208-L223

Lance Bragstad and I also gave a small presentation at the Vancouver summit
on the behavior and he mentions the same on one of his blog posts:

  https://www.youtube.com/watch?v=duRBlm9RtCw&feature=youtu.be
  http://lbragstad.com/?p=133


> I even tried to discern how it worked from the code but it actually
> looks like it does not work the way you describe on casual investigation.
>

I don't blame you! I'll work to improve the user-facing docs on the topic.


>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-07-27 Thread Clint Byrum
Excerpts from Dolph Mathews's message of 2015-07-27 11:48:12 -0700:
> On Mon, Jul 27, 2015 at 1:31 PM, Clint Byrum  wrote:
> 
> > Excerpts from Alexander Makarov's message of 2015-07-27 10:01:34 -0700:
> > > Greetings!
> > >
> > > I'd like to discuss pro's and contra's of having Fernet encryption keys
> > > stored in a database backend.
> > > The idea itself emerged during discussion about synchronizing rotated
> > keys
> > > in HA environment.
> > > Now Fernet keys are stored in the filesystem that has some availability
> > > issues in unstable cluster.
> > > OTOH, making SQL highly available is considered easier than that for a
> > > filesystem.
> > >
> >
> > I don't think HA is the root of the problem here. The problem is
> > synchronization. If I have 3 keystone servers (n+1), and I rotate keys on
> > them, I must very carefully restart them all at the exact right time to
> > make sure one of them doesn't issue a token which will not be validated
> > on another. This is quite a real possibility because the validation
> > will not come from the user, but from the service, so it's not like we
> > can use simple persistence rules. One would need a layer 7 capable load
> > balancer that can find the token ID and make sure it goes back to the
> > server that issued it.
> >
> 
> This is not true (or if it is, I'd love see a bug report). keystone-manage
> fernet_rotate uses a three phase rotation strategy (staged -> primary ->
> secondary) that allows you to distribute a staged key (used only for token
> validation) throughout your cluster before it becomes a primary key (used
> for token creation and validation) anywhere. Secondary keys are only used
> for token validation.
> 
> All you have to do is atomically replace the fernet key directory with a
> new key set.
> 
> You also don't have to restart keystone for it to pickup new keys dropped
> onto the filesystem beneath it.
> 

That's great news! Is this documented anywhere? I dug through the
operators guides, security guide, install guide, etc. Nothing described
this dance, which is impressive and should be written down!

I even tried to discern how it worked from the code but it actually
looks like it does not work the way you describe on casual investigation.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-07-27 Thread Dolph Mathews
On Mon, Jul 27, 2015 at 1:31 PM, Clint Byrum  wrote:

> Excerpts from Alexander Makarov's message of 2015-07-27 10:01:34 -0700:
> > Greetings!
> >
> > I'd like to discuss pro's and contra's of having Fernet encryption keys
> > stored in a database backend.
> > The idea itself emerged during discussion about synchronizing rotated
> keys
> > in HA environment.
> > Now Fernet keys are stored in the filesystem that has some availability
> > issues in unstable cluster.
> > OTOH, making SQL highly available is considered easier than that for a
> > filesystem.
> >
>
> I don't think HA is the root of the problem here. The problem is
> synchronization. If I have 3 keystone servers (n+1), and I rotate keys on
> them, I must very carefully restart them all at the exact right time to
> make sure one of them doesn't issue a token which will not be validated
> on another. This is quite a real possibility because the validation
> will not come from the user, but from the service, so it's not like we
> can use simple persistence rules. One would need a layer 7 capable load
> balancer that can find the token ID and make sure it goes back to the
> server that issued it.
>

This is not true (or if it is, I'd love see a bug report). keystone-manage
fernet_rotate uses a three phase rotation strategy (staged -> primary ->
secondary) that allows you to distribute a staged key (used only for token
validation) throughout your cluster before it becomes a primary key (used
for token creation and validation) anywhere. Secondary keys are only used
for token validation.

All you have to do is atomically replace the fernet key directory with a
new key set.

You also don't have to restart keystone for it to pickup new keys dropped
onto the filesystem beneath it.


>
> A database will at least ensure that it is updated in one place,
> atomically, assuming each server issues a query to find the latest
> key at every key validation request. That would be a very cheap query,
> but not free. A cache would be fine, with the cache being invalidated
> on any failed validation, but then that opens the service up to DoS'ing
> the database simply by throwing tons of invalid tokens at it.
>
> So an alternative approach is to try to reload the filesystem based key
> repository whenever a validation fails. This is quite a bit cheaper than a
> SQL query, so the DoS would have to be a full-capacity DoS (overwhelming
> all the nodes, not just the database) which you can never prevent. And
> with that, you can simply sync out new keys at will, and restart just
> one of the keystones, whenever you are confident the whole repository is
> synchronized. This is also quite a bit simpler, as one basically needs
> only to add a single piece of code that issues load_keys and retries
> inside validation.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-07-27 Thread Fox, Kevin M
Barbican depends on Keystone though for authentication. Its not a silver bullet 
here.

Kevin

From: Dolph Mathews [dolph.math...@gmail.com]
Sent: Monday, July 27, 2015 10:53 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

Although using a node's *local* filesystem requires external configuration 
management to manage the distribution of rotated keys, it's always available, 
easy to secure, and can be updated atomically per node. Note that Fernet's 
rotation strategy uses a staged key that can be distributed to all nodes in 
advance of it being used to create new tokens.

Also be aware that you wouldn't want to store encryption keys in plaintext in a 
shared database, so you must introduce an additional layer of complexity to 
solve that problem.

Barbican seems like much more logical next-step beyond the local filesystem, as 
it shifts the burden onto a system explicitly designed to handle this issue 
(albeit in a multitenant environment).

On Mon, Jul 27, 2015 at 12:01 PM, Alexander Makarov 
mailto:amaka...@mirantis.com>> wrote:
Greetings!

I'd like to discuss pro's and contra's of having Fernet encryption keys stored 
in a database backend.
The idea itself emerged during discussion about synchronizing rotated keys in 
HA environment.
Now Fernet keys are stored in the filesystem that has some availability issues 
in unstable cluster.
OTOH, making SQL highly available is considered easier than that for a 
filesystem.

--
Kind Regards,
Alexander Makarov,
Senior Software Developer,

Mirantis, Inc.
35b/3, Vorontsovskaya St., 109147, Moscow, Russia

Tel.: +7 (495) 640-49-04
Tel.: +7 (926) 204-50-60

Skype: MAKAPOB.AJIEKCAHDP

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-07-27 Thread Clint Byrum
Excerpts from Alexander Makarov's message of 2015-07-27 10:01:34 -0700:
> Greetings!
> 
> I'd like to discuss pro's and contra's of having Fernet encryption keys
> stored in a database backend.
> The idea itself emerged during discussion about synchronizing rotated keys
> in HA environment.
> Now Fernet keys are stored in the filesystem that has some availability
> issues in unstable cluster.
> OTOH, making SQL highly available is considered easier than that for a
> filesystem.
> 

I don't think HA is the root of the problem here. The problem is
synchronization. If I have 3 keystone servers (n+1), and I rotate keys on
them, I must very carefully restart them all at the exact right time to
make sure one of them doesn't issue a token which will not be validated
on another. This is quite a real possibility because the validation
will not come from the user, but from the service, so it's not like we
can use simple persistence rules. One would need a layer 7 capable load
balancer that can find the token ID and make sure it goes back to the
server that issued it.

A database will at least ensure that it is updated in one place,
atomically, assuming each server issues a query to find the latest
key at every key validation request. That would be a very cheap query,
but not free. A cache would be fine, with the cache being invalidated
on any failed validation, but then that opens the service up to DoS'ing
the database simply by throwing tons of invalid tokens at it.

So an alternative approach is to try to reload the filesystem based key
repository whenever a validation fails. This is quite a bit cheaper than a
SQL query, so the DoS would have to be a full-capacity DoS (overwhelming
all the nodes, not just the database) which you can never prevent. And
with that, you can simply sync out new keys at will, and restart just
one of the keystones, whenever you are confident the whole repository is
synchronized. This is also quite a bit simpler, as one basically needs
only to add a single piece of code that issues load_keys and retries
inside validation.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-07-27 Thread Dolph Mathews
Although using a node's *local* filesystem requires external configuration
management to manage the distribution of rotated keys, it's always
available, easy to secure, and can be updated atomically per node. Note
that Fernet's rotation strategy uses a staged key that can be distributed
to all nodes in advance of it being used to create new tokens.

Also be aware that you wouldn't want to store encryption keys in plaintext
in a shared database, so you must introduce an additional layer of
complexity to solve that problem.

Barbican seems like much more logical next-step beyond the local
filesystem, as it shifts the burden onto a system explicitly designed to
handle this issue (albeit in a multitenant environment).

On Mon, Jul 27, 2015 at 12:01 PM, Alexander Makarov 
wrote:

> Greetings!
>
> I'd like to discuss pro's and contra's of having Fernet encryption keys
> stored in a database backend.
> The idea itself emerged during discussion about synchronizing rotated keys
> in HA environment.
> Now Fernet keys are stored in the filesystem that has some availability
> issues in unstable cluster.
> OTOH, making SQL highly available is considered easier than that for a
> filesystem.
>
> --
> Kind Regards,
> Alexander Makarov,
> Senior Software Developer,
>
> Mirantis, Inc.
> 35b/3, Vorontsovskaya St., 109147, Moscow, Russia
>
> Tel.: +7 (495) 640-49-04
> Tel.: +7 (926) 204-50-60
>
> Skype: MAKAPOB.AJIEKCAHDP
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-07-27 Thread Alexander Makarov
Greetings!

I'd like to discuss pro's and contra's of having Fernet encryption keys
stored in a database backend.
The idea itself emerged during discussion about synchronizing rotated keys
in HA environment.
Now Fernet keys are stored in the filesystem that has some availability
issues in unstable cluster.
OTOH, making SQL highly available is considered easier than that for a
filesystem.

-- 
Kind Regards,
Alexander Makarov,
Senior Software Developer,

Mirantis, Inc.
35b/3, Vorontsovskaya St., 109147, Moscow, Russia

Tel.: +7 (495) 640-49-04
Tel.: +7 (926) 204-50-60

Skype: MAKAPOB.AJIEKCAHDP
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev