My wish list for our proposed framework:

- Create JUnit-style XML run reports

- Run tests in parallel

- Should run out of the box with little configuration (a single configuration file, everything in one place)

- Run through a standard runner like nose (e.g. nosetests /Kong or nosetests /YourSuite). This will allow the suites to integrate easily into each company's framework.

- The test framework should support drop-and-run, using reflection to identify the tests to run (a minimal sketch follows this list)
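Most of these items map onto stock nose features, so here is a minimal sketch, assuming a nose-based runner, of how the XML reports, parallel runs and reflection-based discovery could be covered (the kong directory and file names below are only examples):

    # JUnit-style XML reports plus parallel execution, using stock nose plugins:
    #   nosetests kong --with-xunit --xunit-file=results.xml --processes=4
    #
    # Drop-and-run sketch: rely on reflection (unittest's loader) to pick up any
    # TestCase subclasses dropped into the suite directory, with no registration.
    import unittest

    def discover_tests(start_dir="kong"):
        """Collect every TestCase found under start_dir."""
        return unittest.TestLoader().discover(start_dir, pattern="test_*.py")

    if __name__ == "__main__":
        unittest.TextTestRunner(verbosity=2).run(discover_tests())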

Thanks,

Donald

From: openstack-bounces+donald.ngo=hp....@lists.launchpad.net 
[mailto:openstack-bounces+donald.ngo=hp....@lists.launchpad.net] On Behalf Of 
Brebner, Gavin
Sent: Wednesday, October 19, 2011 10:39 AM
To: Daryl Walleck; Rohit Karajgi
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] [QA] openstack-integration-tests


My 2c

To me, the end-customer-facing part of the OpenStack solution is in many ways the set of libraries and tools customers are likely to use, so testing with them is essential. If there's a bug that can only be exposed through some obscure API call that isn't readily available through one of the usual libraries, it will mostly be of minor importance, as it will rarely if ever be used; whereas, say, a bug in a library that causes data corruption will not be good for OpenStack no matter how correct things are from the endpoint in. The whole solution needs to work. This is complex, as we don't necessarily control all the libraries and can't test everything with every possible library, so we have to do the best we can to catch errors as early as possible, e.g. via direct API testing in unit tests and low-level functional tests. Testing at multiple levels is required, and the art is in selecting how much effort to put at each level.

Regarding the framework, we need a wide range of capabilities, hence keep it simple and flexible. One thing I'd really like to see is a means to express parallelism, e.g. for chaos-monkey-style tests, race conditions, realistic stress runs and so on. Support for tests written in any arbitrary language is also required. I can write all this myself, but I would love a framework to handle it for me and leave me to concentrate on mimicking what I think our end customers are likely to do.
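As a rough illustration of the parallelism hooks described above, here is a minimal sketch that fires the same operation from several workers at once to surface races; boot_server and the worker count are placeholders, not part of any existing suite:

    # Hammer one operation from many workers at once; real chaos/stress tests
    # would vary the operations and inject failures between them.
    # concurrent.futures is standard in Python 3 and available as the
    # "futures" backport on Python 2.
    from concurrent.futures import ThreadPoolExecutor, as_completed

    def boot_server(index):
        """Placeholder for a real API call, e.g. creating a server."""
        return "server-%d" % index

    def run_in_parallel(action, workers=10):
        with ThreadPoolExecutor(max_workers=workers) as pool:
            futures = [pool.submit(action, i) for i in range(workers)]
            return [f.result() for f in as_completed(futures)]

    if __name__ == "__main__":
        print(run_in_parallel(boot_server))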

Gavin

From: openstack-bounces+gavin.brebner=hp....@lists.launchpad.net 
[mailto:openstack-bounces+gavin.brebner=hp....@lists.launchpad.net] On Behalf 
Of Daryl Walleck
Sent: Wednesday, October 19, 2011 6:27 PM
To: Rohit Karajgi
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] [QA] openstack-integration-tests

Hi Rohit,

I'm glad to see so much interest in getting testing done right. So here are my thoughts. As far as the novaclient/euca-tools portion goes, I think we absolutely need a series of tests that validate that these bindings work correctly. As a nice side effect they also test their respective APIs, which is good. I think some duplication of testing between these two bindings and even what I'm envisioning as the "main" test suite is necessary, as we have to verify at least at a high level that they work correctly.

My thought for our core testing is that those would be the tests that do not use language bindings. I think this is where the interesting architectural work can be done. "Test framework" is a very loose term that gets used a lot, but to me a framework includes:


 *   The test runner and its capabilities
 *   How the test code is structured to ensure maintainability, flexibility, and ease of code reuse
 *   Any utilities provided to extend or ease the ability to test (a rough sketch of this layering follows)
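To illustrate that last layer, here is a minimal, assumed sketch of a shared base class that individual tests could inherit from; BaseComputeTest, get_client and wait_for_status are hypothetical names, not taken from any existing suite:

    # Hypothetical utility layer: a base class owns client setup and common
    # helpers so the individual tests stay short and readable.
    import time
    import unittest

    class BaseComputeTest(unittest.TestCase):

        @classmethod
        def setUpClass(cls):
            cls.client = cls.get_client()

        @classmethod
        def get_client(cls):
            # Subclasses wire this to novaclient, raw HTTP, etc.
            raise NotImplementedError

        def wait_for_status(self, server_id, wanted, timeout=300, interval=5):
            """Poll a server until it reaches the wanted status or time runs out."""
            deadline = time.time() + timeout
            while time.time() < deadline:
                if self.client.servers.get(server_id).status == wanted:
                    return
                time.sleep(interval)
            self.fail("server %s never reached %s" % (server_id, wanted))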

I think we all have a lot of good ideas about this; it's just a matter of consolidating them and choosing one direction to go forward with.

Daryl

On Oct 19, 2011, at 9:58 AM, Rohit Karajgi wrote:

Hello Stackers,

I was at the design summit sessions that were 'all about QA' and expressed my interest in supporting this effort. Sorry I could not be present at the first QA IRC meeting due to a vacation. I had a chance to read through the meeting log, and Nachi-san also shared his account of the outcome with me. Thanks Nachi!

Just a heads up to put some of my thoughts on the ML before today's meeting. I had a look at the various (7 and counting??) test frameworks out there to test the OpenStack API. Jay, Gabe and Tim put up a neat wiki (http://wiki.openstack.org/openstack-integration-test-suites) to compare many of these.

I looked at Lettuce (https://github.com/gabrielfalcao/lettuce) and felt it was quite effective. It's incredibly easy to write tests once the wrappers over the application are set up. Easy as in "Given a ttylinux image create a Server" would be how a test scenario is written in a typical .feature file (which is basically a list of test scenarios for a particular feature) in natural language. It has nose support, and there's some neat documentation (http://lettuce.it/index.html) too. I was just curious whether anyone has already tried out Lettuce with OpenStack? From the ODS, I think the Grid Dynamics guys already have their own implementation. It would be great if one of you could join the meeting and shed some light on how you've got it to work.
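To make the Lettuce flow concrete, here is a minimal, assumed sketch of how a "Given a ttylinux image create a Server" scenario might look as a .feature entry plus Python step definitions; the step wording and the find_image/create_server/wait_for_status stubs are illustrative, not taken from any existing OpenStack suite:

    # servers.feature (Gherkin, shown here as a comment):
    #   Feature: Server lifecycle
    #     Scenario: Boot a server from a ttylinux image
    #       Given a ttylinux image
    #       When I create a server from that image
    #       Then the server reaches ACTIVE status

    # steps.py -- Lettuce step definitions
    from lettuce import step, world

    def find_image(name):
        """Stub: look up an image by name via the API."""
        return "image-id-for-%s" % name

    def create_server(image_id):
        """Stub: boot a server from the given image via the API."""
        return {"id": "server-1", "image": image_id}

    def wait_for_status(server, wanted):
        """Stub: poll the server until it reaches the wanted status."""
        return True

    @step(u'a ttylinux image')
    def given_a_ttylinux_image(step):
        world.image_id = find_image("ttylinux")

    @step(u'I create a server from that image')
    def create_server_from_image(step):
        world.server = create_server(world.image_id)

    @step(u'the server reaches ACTIVE status')
    def server_reaches_active(step):
        assert wait_for_status(world.server, "ACTIVE")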
Just for those who may be unaware, Soren's branch openstack-integration-tests (https://github.com/openstack/openstack-integration-tests) is actually a merge of Kong and Stacktester.

The other point I wanted more clarity on was using both novaclient AND httplib2 to make the API requests. Though <wwkeyboard> did mention issues regarding spec bugs proliferating into the client, how can we best utilize this dual approach and avoid another round of duplicate test cases? Maybe we target novaclient first and then use httplib2 to fill in the gaps? After all, novaclient does call httplib2 internally.
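For anyone unfamiliar with the two approaches, here is a rough sketch of the same "list servers" call done both ways; the credentials, endpoints and tenant path are placeholders, and the module path reflects the v1.1 novaclient bindings of that era:

    # 1. Through the python-novaclient bindings:
    from novaclient.v1_1 import client

    nova = client.Client("user", "password", "tenant",
                         "http://keystone:5000/v2.0/")   # placeholder endpoint
    servers = nova.servers.list()

    # 2. Directly over HTTP with httplib2, bypassing the bindings:
    import json
    import httplib2

    http = httplib2.Http()
    resp, body = http.request(
        "http://nova-api:8774/v1.1/tenant/servers",       # placeholder URL
        "GET",
        headers={"X-Auth-Token": "placeholder-token",
                 "Accept": "application/json"})
    servers_raw = json.loads(body)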

I would like to team up with Gabe and others on the unified test runner task. Please count me in if you're doing some division of labor there.

Thanks!
Rohit

(NTT)
From: openstack-bounces+rohit.karajgi=vertex.co...@lists.launchpad.net 
[mailto:openstack-bounces+rohit.karajgi=vertex.co...@lists.launchpad.net] On Behalf Of Gabe Westmaas
Sent: Monday, October 10, 2011 9:22 PM
To: openstack@lists.launchpad.net
Subject: [Openstack] [QA] openstack-integration-tests

I'd like to try to summarize and propose at least one next step for the content 
of the openstack-integration-tests git repository.  Note that this is only 
about the actual tests themselves, and says absolutely nothing about any gating 
decisions made in other sessions.

First, there was widespread agreement that in order for an integration suite to 
be run in the openstack jenkins, it should be included in the community github 
repository.

Second, it was agreed that there is value in having tests in multiple languages, especially where those tests add value beyond the base language.  Examples of this may include testing using another set of bindings (and therefore testing the API), or using a testing framework that just takes a different approach to testing.  Invalid examples include implementing the exact same test in another language simply because you don't like Python.

Third, it was agreed that there is value in testing using novaclient as well as httplib2.  Similarly, there is value in testing both XML and JSON.

Fourth, for black box tests, any fixture setup that a suite of tests requires should be done via a script that is close to, but not within, that suite - we want tests to be as agnostic to a particular OpenStack implementation as possible, and anything you cannot do from the API should not be inside the tests.
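As a purely illustrative example of keeping fixtures next to, but outside of, the suite, a black box suite could ship a sibling setup script along these lines; the client call, file names and values are assumptions about one possible deployment, not requirements:

    # setup_fixtures.py -- runs before the suite, provisions what the tests need
    # via the public API, and records the results in a config file the tests read.
    import ConfigParser

    from novaclient.v1_1 import client

    def main():
        nova = client.Client("user", "password", "tenant",
                             "http://keystone:5000/v2.0/")   # placeholders
        keypair = nova.keypairs.create("integration-tests")  # example fixture

        config = ConfigParser.ConfigParser()
        config.add_section("fixtures")
        config.set("fixtures", "keypair_name", keypair.name)
        with open("fixtures.conf", "w") as handle:
            config.write(handle)

    if __name__ == "__main__":
        main()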

Fifth, there are suites of white box tests - we understand there can be value here, but we aren't sure how to approach that in this project; definitely more discussion is needed.  Maybe we have separate directories for holding white and black box tests?

Sixth, no matter what else changes, we must maintain the ability to run a 
subset of tests through a common runner.  This can be done via command line or 
configuration, whichever makes the most sense.  I'd personally lean towards 
configuration with the ability to override on the command line.
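A minimal sketch of that "configuration with a command-line override" idea, assuming a simple INI file read by the runner; tests.conf and the option names are hypothetical:

    # run_tests.py -- defaults come from a config file, and any value can be
    # overridden on the command line before handing off to the real runner.
    import optparse
    import ConfigParser

    def load_defaults(path="tests.conf"):
        config = ConfigParser.ConfigParser()
        config.read(path)
        if config.has_section("runner"):
            return dict(config.items("runner"))
        return {}

    def main():
        defaults = load_defaults()
        parser = optparse.OptionParser()
        parser.add_option("--suite", default=defaults.get("suite", "all"),
                          help="subset of tests to run")
        parser.add_option("--processes", default=defaults.get("processes", "1"),
                          help="number of parallel workers")
        options, _ = parser.parse_args()
        print("running suite=%s with %s workers" % (options.suite, options.processes))
        # e.g. hand off to nosetests <suite> --processes=<processes> here

    if __name__ == "__main__":
        main()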

If you feel I mischaracterized any of the agreements, please feel free to say 
so.

Next, we want to start moving away from the multiple entry points for writing additional tests.  That means taking inventory of the tests that are there now, figuring out what they are testing and how we run them, and then working to combine what makes sense into a directory structure that makes sense.  As often as possible, we should make sure the tests can be run in the same way.  I started a little wiki to start collecting information.  I think a short description of the general strategy of each suite and then details about the specific tests in that suite would be useful.

http://wiki.openstack.org/openstack-integration-test-suites

Hopefully this can make things a little easier to start contributing.

Gabe