So how frequently would this be required, and when would the results need to be provided? I generally think this is a good idea, but as we all know, the project tends to be in flux around the milestones. Around G3 in particular, when we changed the way the CONF object was accessed, almost all drivers stopped working whenever cinder.conf was configured with a multi-backend setup (roughly the layout shown below). It took a week or so after G3 before everyone was able to go through their drivers, clean them up, and get them working in that scenario.

I'm not trying to be a wet blanket on this idea, but I think we just have to be careful. I have no problem running my drivers through this and providing the results. But what happens when something changes in Cinder that causes drivers to fail and the test runs fail with them? How long does a maintainer get to fix the issue before X happens as a result? Does this happen every milestone: I-1, I-2, I-3? What happens if a maintainer can't do it for a particular milestone for whatever reason?
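For reference, this is roughly the Grizzly-era multi-backend layout that tripped drivers up; enabled_backends, volume_driver, and volume_backend_name are the real option names, but the section names and the LVM driver here are just an example:

    [DEFAULT]
    # Each name listed here points at its own config section below.
    enabled_backends = lvm-1, lvm-2

    [lvm-1]
    volume_group = stack-volumes-1
    volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
    volume_backend_name = LVM_iSCSI

    [lvm-2]
    volume_group = stack-volumes-2
    volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
    volume_backend_name = LVM_iSCSI

As I recall, the drivers that read their options off the global CONF object instead of their own backend section were exactly the ones that broke.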

Just playing a bit of devil's advocate here. I do like the idea, though; it just depends on how the "rules" get set up and how they apply when things don't go well for a particular driver.

Cheers,
Walt


Hey Everyone,

Something I've been kicking around for quite a while now, but never really been able to get around to, is the idea of requiring that drivers run a qualification test and submit results prior to their introduction into Cinder.

To elaborate a bit, the idea could start as something really simple like the following (there's a rough sketch of what the script might do right after the list):
1. We'd add a functional_qual option/script to devstack

2. The driver maintainer runs this script on their own system to set up devstack and configure it to use their backend device.

3. Script does the usual devstack install/configure and runs the volume pieces of the Tempest gate tests.

4. Grabs the test output plus checksums of the devstack and /opt/stack directories, and bundles up the results for submission

5. Maintainer submits results
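
To make it concrete, here's a rough Python sketch of what steps 3 through 5 might boil down to once devstack itself is up and configured. The script name, output paths, and bundling format below are purely my assumptions for discussion, not anything that exists in devstack today:

    #!/usr/bin/env python
    # Rough sketch only -- the paths, file names, and bundling format here
    # are assumptions for discussion, not an existing devstack feature.

    import hashlib
    import os
    import subprocess
    import tarfile

    DIRS_TO_CHECKSUM = ['/opt/stack/devstack', '/opt/stack/cinder']  # assumed layout
    RESULTS_DIR = '/tmp/functional_qual'


    def run_volume_tests(log_path):
        """Run the volume piece of the Tempest suite, capturing all output."""
        with open(log_path, 'w') as log:
            # The volume API tests live under tempest.api.volume; this is
            # one way to invoke them -- the exact runner (nose vs. testr)
            # doesn't really matter for the sketch.
            return subprocess.call(['nosetests', '-v', 'tempest.api.volume'],
                                   stdout=log, stderr=subprocess.STDOUT)


    def checksum_tree(root):
        """Return {relative_path: sha256 hex digest} for files under root."""
        sums = {}
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                digest = hashlib.sha256()
                with open(path, 'rb') as f:
                    for chunk in iter(lambda: f.read(65536), b''):
                        digest.update(chunk)
                sums[os.path.relpath(path, root)] = digest.hexdigest()
        return sums


    def main():
        if not os.path.isdir(RESULTS_DIR):
            os.makedirs(RESULTS_DIR)

        log_path = os.path.join(RESULTS_DIR, 'tempest_volume.log')
        exit_code = run_volume_tests(log_path)

        # Record the test exit code plus checksums of the trees we care
        # about, so reviewers can tell the code under test wasn't modified.
        with open(os.path.join(RESULTS_DIR, 'MANIFEST'), 'w') as manifest:
            manifest.write('tempest_exit_code: %d\n' % exit_code)
            for root in DIRS_TO_CHECKSUM:
                for rel_path, digest in sorted(checksum_tree(root).items()):
                    manifest.write('%s  %s/%s\n' % (digest, root, rel_path))

        # Bundle everything up for the maintainer to submit.
        with tarfile.open('/tmp/functional_qual_results.tar.gz', 'w:gz') as tar:
            tar.add(RESULTS_DIR, arcname='functional_qual')


    if __name__ == '__main__':
        main()

How the resulting tarball actually gets submitted (gerrit, a wiki page, something else entirely) is one of the open questions mentioned below.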

So why would we do this, you ask? Cinder leans pretty heavily on the third-party driver plugin model, which is fantastic. On the other hand, while there are a lot of folks who do great reviews that catch things like syntax or logic errors in the code, and unit tests do a reasonable job of exercising the code, it's difficult for reviewers to truly verify that all of these devices actually work.

I think it would be a very useful tool for the initial introduction of a new driver, and perhaps also as some sort of check that's run and submitted again prior to milestone releases.

This would also drive more activity and contribution into Tempest, by motivating folks like myself to contribute more tests (particularly for new functionality).

I'd be interested to hear if folks have any interest or strong opinions on this (positive or otherwise). I know that some vendors like Red Hat have this sort of thing in place for certifications, and honestly that observation is what got me thinking about this again.

There are a lot of gaps here regarding what the submission process would look like, but we could start relatively simple and grow from there if it proves valuable, or just abandon the idea if it turns out to be unpopular and a waste of time.

Anyway, I'd love to get feedback from folks and see what they think.

Thanks,
John



_______________________________________________
OpenStack-dev mailing list
[email protected]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
