Re: [openstack-dev] [Fuel] We lost some commits during upstream puppet manifests merge

2014-12-11 Thread Aleksandr Didenko
 Also I’m just wondering how do we keep upstream modules in our repo? They
are not submodules, so how is it organized?

Currently, we don't have any automatic tracking system for the changes we apply
to the community/upstream modules that could help us re-apply them during a
sync; only a git or diff comparison between the original module and our copy.
But that should not be a problem once we finish the current sync and switch to
the new contribution workflow described in the doc Vladimir mentioned in the
initial email [1].

Also, in the near future we're planning to add unit tests (rake spec) and
puppet noop tests to our CI. I think we should combine noop tests with
regression testing by using 'rake spec', but this time I mean RSpec tests for
a puppet host, not for specific classes as I suggested in the previous email.
Such tests compile a complete catalog using our 'site.pp' for specific
astute.yaml settings and check that the needed puppet resources are present in
the catalog and have the needed attributes. Here's a draft - [2]. It checks
catalog compilation for a controller node and runs a few checks for the
'keystone' class and the keystone cache driver settings.

Since all the test logic lives outside of our puppet modules directory, it
won't be affected by any further upstream syncs or by changes we apply in our
modules. So if some commit removes anything critical that is covered by the
regression/noop tests, it will get '-1' from CI and attract our attention :)

[1]
http://docs.mirantis.com/fuel-dev/develop/module_structure.html#contributing-to-existing-fuel-library-modules
[2] https://review.openstack.org/141022

Regards,
Aleksandr


On Fri, Nov 21, 2014 at 8:07 PM, Tomasz Napierala tnapier...@mirantis.com
wrote:


  On 21 Nov 2014, at 17:15, Aleksandr Didenko adide...@mirantis.com
 wrote:
 
  Hi,
 
  Following our docs/workflow plus writing rspec tests for every new
 option we add/modify in our manifests could help with regressions. For
 example:
 • we add a new keystone config option in the openstack::keystone class -
 keystone_config {'cache/backend': value => 'keystone.cache.memcache_pool';}
 • we create a new test for the openstack::keystone class, something like
 this:
 • should
 contain_keystone_config('cache/backend').with_value('keystone.cache.memcache_pool')
  So with such a test, if for some reason we lose the
 keystone_config('cache/backend') option, 'rake spec' would alert us about
 it right away and we'll get -1 from CI. Of course we should also
 implement a 'rake spec' CI gate for this.
 
  But on the other hand, if someone changes an option in the manifests and
 updates the rspec tests accordingly, such a commit will pass the 'rake spec'
 test and we can still lose some specific option.
 
   We should speed up development of some modular testing framework that
 will check that corresponding change affects only particular pieces.
 
  Such a test would not catch this particular regression with
 keystone_config {'cache/backend': value =>
 'keystone.cache.memcache_pool';}, because even with the regression (i.e. the
 dogpile backend) keystone was working OK. It passed several BVTs and
 custom system tests, because the 'dogpile' cache backend works just fine
 while all memcached servers are up and running. So it looks like we need
 some kind of tests that ensure that particular config options (or
 particular puppet resources) have particular values (like backend =
 keystone.cache.memcache_pool in the [cache] block of keystone.conf).
 
  So I would go with rspec testing for specific resources, but I would
 write the tests in the 'openstack' module. Those tests should check that the
 needed (nova/cinder/keystone/glance)_config resources have the needed values
 in the puppet catalog. Since we're not going to sync the 'openstack' module
 with upstream, such tests will remain intact until we change them, and they
 won't be affected by syncs/merges of other modules (keystone, cinder, nova, etc).

 I totally agree, but we need to remember to introduce tests in separate
 commits; otherwise, by losing a commit ID we would also lose the tests ;)

 Also I’m just wondering how do we keep upstream modules in our repo? They
 are not submodules, so how is it organized?

 Regards,
 --
 Tomasz 'Zen' Napierala
 Sr. OpenStack Engineer
 tnapier...@mirantis.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] We lost some commits during upstream puppet manifests merge

2014-11-21 Thread Aleksandr Didenko
Hi,

Following our docs/workflow plus writing rspec tests for every new option
we add/modify in our manifests could help with regressions. For example:

   - we add a new keystone config option in the openstack::keystone class -
     keystone_config {'cache/backend': value => 'keystone.cache.memcache_pool';}
   - we create a new test for the openstack::keystone class, something like this:
     - should contain_keystone_config('cache/backend').with_value('keystone.cache.memcache_pool')

So with such a test, if for some reason we lose the
keystone_config('cache/backend') option, 'rake spec' would alert us about
it right away and we'll get -1 from CI. Of course we should also
implement a 'rake spec' CI gate for this.
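
A minimal entry point for such a gate could be a Rakefile along these lines
(the spec path pattern is just an assumption about where we would keep the
tests):

    # Rakefile - sketch of the task the 'rake spec' CI gate would invoke.
    require 'rspec/core/rake_task'

    RSpec::Core::RakeTask.new(:spec) do |t|
      # assumed layout: class specs and host/catalog specs side by side
      t.pattern = 'spec/{classes,hosts}/**/*_spec.rb'
    end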

But on the other hand, if someone changes an option in the manifests and updates
the rspec tests accordingly, such a commit will pass the 'rake spec' test and we
can still lose some specific option.

 We should speed up development of some modular testing framework that
will check that corresponding change affects only particular pieces.

Such a test would not catch this particular regression with keystone_config
{'cache/backend': value => 'keystone.cache.memcache_pool';}, because even
with the regression (i.e. the dogpile backend) keystone was working OK. It
passed several BVTs and custom system tests, because the 'dogpile' cache
backend works just fine while all memcached servers are up and running. So it
looks like we need some kind of tests that ensure that particular config
options (or particular puppet resources) have particular values (like backend =
keystone.cache.memcache_pool in the [cache] block of keystone.conf).

So I would go with rspec testing for specific resources, but I would write
the tests in the 'openstack' module. Those tests should check that the needed
(nova/cinder/keystone/glance)_config resources have the needed values in the
puppet catalog. Since we're not going to sync the 'openstack' module with
upstream, such tests will remain intact until we change them, and they
won't be affected by syncs/merges of other modules (keystone, cinder, nova, etc).
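
As a rough illustration, a test like this in the 'openstack' module could pin
the values we must not lose (the parameter set below is made up for the
example and does not match the real openstack::keystone signature):

    # spec/classes/openstack_keystone_spec.rb - sketch only; parameters are
    # hypothetical, the real class takes many more arguments.
    require 'spec_helper'

    describe 'openstack::keystone' do
      let(:facts) do
        { :osfamily => 'Debian', :operatingsystem => 'Ubuntu' }
      end

      # Minimal made-up parameters, just enough for the example.
      let(:params) do
        {
          :db_password      => 'secret',
          :admin_token      => 'token',
          :memcache_servers => ['192.168.0.2:11211', '192.168.0.3:11211'],
        }
      end

      # Pin the exact config value the upstream sync must never drop.
      it 'configures keystone to cache via the memcache_pool backend' do
        should contain_keystone_config('cache/backend').with_value('keystone.cache.memcache_pool')
      end

      # Similar assertions can pin nova_config/cinder_config/glance_api_config values.
    end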

--
Regards,
Aleksandr Didenko


On Wed, Nov 19, 2014 at 5:45 PM, Vladimir Kuklin vkuk...@mirantis.com
wrote:

 Fuelers

 I am writing because we had a really sad incident - we noticed that after we
 merged the upstream keystone module we lost the modifications (Change-Id:
 Idfe4b54caa0d96a93e93bfff12d8b6216f83e2f1
 https://review.openstack.org/#/q/Idfe4b54caa0d96a93e93bfff12d8b6216f83e2f1,n,z)
 for the memcached dogpile driver, which are crucial for us. And here I can see 2
 problems:

 1) how can we ensure that we did not lose anything else?
 2) how can we ensure that this will never happen again?

 Sadly, it seems that the first question implies that we need to recheck all the
 upstream merge/adaptation commits by hand and verify that we did not lose
 anything.

 Regarding question number 2, we already have an established process for
 upstream code merges:
 http://docs.mirantis.com/fuel-dev/develop/module_structure.html#contributing-to-existing-fuel-library-modules.
 It seems that this process had not yet been established when the keystone code
 was reviewed. I see two ways here:

 1) We should enforce the code review workflow and specifically say that
 upstream merges can be accepted only after we have two '+2's from core
 reviewers who have rechecked that the corresponding change does not introduce
 any regressions.
 2) We should speed up development of some modular testing framework that
 will check that corresponding change affects only particular pieces. It
 seems much easier if we split deployment into stages (oh my, I am again
 talking about the granular deployment feature) and each particular commit
 affects only one of the stages, so that we can see the difference and catch
 regressions earlier.





 --
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 45bk3, Vorontsovskaya Str.
 Moscow, Russia,
 www.mirantis.com
 www.mirantis.ru
 vkuk...@mirantis.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] We lost some commits during upstream puppet manifests merge

2014-11-19 Thread Vladimir Kuklin
Fuelers

I am writing because we had a really sad incident - we noticed that after we
merged the upstream keystone module we lost the modifications (Change-Id:
Idfe4b54caa0d96a93e93bfff12d8b6216f83e2f1
https://review.openstack.org/#/q/Idfe4b54caa0d96a93e93bfff12d8b6216f83e2f1,n,z)
for the memcached dogpile driver, which are crucial for us. And here I can see 2
problems:

1) how can we ensure that we did not lose anything else?
2) how can we ensure that this will never happen again?

Sadly, it seems that the first question implies that we need to recheck all the
upstream merge/adaptation commits by hand and verify that we did not lose
anything.

Regarding question number 2, we already have an established process for
upstream code merges:
http://docs.mirantis.com/fuel-dev/develop/module_structure.html#contributing-to-existing-fuel-library-modules.
It seems that this process had not yet been established when the keystone code
was reviewed. I see two ways here:

1) We should enforce the code review workflow and specifically say that
upstream merges can be accepted only after we have two '+2's from core
reviewers who have rechecked that the corresponding change does not introduce
any regressions.
2) We should speed up development of some modular testing framework that
will check that corresponding change affects only particular pieces. It
seems much easier if we split deployment into stages (oh my, I am again
talking about the granular deployment feature) and each particular commit
affects only one of the stages, so that we can see the difference and catch
regressions earlier.





-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
45bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com
www.mirantis.ru
vkuk...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev