Re: [Openstack] [QA] openstack-integration-tests

2011-10-20 Thread Anne Gentle
Hi Aaron and all -

Just wanted to make you aware of testing wiki pages that exist already:

http://wiki.openstack.org/SmallTestingGuide

http://wiki.openstack.org/TestGuide

If you look at the info for these pages, you can see who has worked on them -
for example, see http://wiki.openstack.org/TestGuide?action=info for all the
writers on the page. I'm just listed at the bottom as the last editor.

Looks like there's work toward standardizing vocabulary there.

Anne


On Thu, Oct 20, 2011 at 6:53 AM, Aaron Lee aaron@rackspace.com wrote:

  Adding dependencies between tests is going to create a maintenance and
 communication headache later. If a test requires a specific state to run,
 that state should be set up, or asserted, within that test.
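 Sketched in unittest terms (the fixture here is a hypothetical stand-in for a real API call, not actual suite code), a self-contained test looks like:

```python
import unittest

# A sketch of the idea above: the test creates (or asserts) the state it
# needs in setUp, rather than depending on an earlier test having left a
# server behind. The fixture is a hypothetical stand-in for a real API call.
class ResizeServerTest(unittest.TestCase):
    def setUp(self):
        # Stand-in for "create a server via the API and wait for ACTIVE".
        self.server = {"id": "fake-1", "status": "ACTIVE"}

    def test_resize_requires_active_server(self):
        # Assert the precondition inside the test instead of assuming it.
        self.assertEqual(self.server["status"], "ACTIVE")
```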

  We need to set up a wiki page that gives us a glossary of testing terms.
 It seems like everyone here has a different definition of "framework",
 "test", "suite", "functional", and "integration". I think we need to make
 sure we are expecting the same things from this project, and that we don't
 all just agree to run in different directions. I'll set this up when I get
 in to the office.

  I strongly agree with your point about not making the official
 integration suite dependent on a single implementation. Yesterday Daryl was
 talking about making sure the tests were extensible enough that anyone could
 keep a separate set (or sets) of tests around for extensions or their custom
 configuration. This should not be a problem if we make the configuration
 and client easy enough to use, and the inclusion of additional tests into
 the project could be handled with git submodules. If we keep the tests
 separated in this way, it would not be hard to keep the main suite down to a
 minimal "does it work, can it be run on a live system" set of tests, and have
 separate submodules for twisting, torquing, breaking, or whatever other
 mayhem we want to cause on a test system.

  Aaron

  On Oct 19, 2011, at 10:48 PM, Joseph Heck wrote:

Re: [Openstack] [QA] openstack-integration-tests

2011-10-19 Thread Rohit Karajgi
Hello Stackers,

I was at the design summit and the sessions that were 'all about QA', and had 
shown my interest in supporting this effort. Sorry I could not be present at 
the first QA IRC meeting due to a vacation.
I had a chance to eavesdrop on the meeting log, and Nachi-san also shared his 
account of the outcome with me. Thanks Nachi!

Just a heads up to put some of my thoughts on the ML before today's meeting.
I had a look at the various (7 and counting??) test frameworks out there to 
test the OpenStack APIs.
Jay, Gabe and Tim put up a neat wiki 
(http://wiki.openstack.org/openstack-integration-test-suites) to compare many 
of these.

I looked at Lettuce (https://github.com/gabrielfalcao/lettuce) and felt it was 
quite effective. It's incredibly easy to write tests once the wrappers over the 
application are set up. Easy as in "Given a ttylinux image, create a Server" 
is how a test scenario would be written, in natural language, in a typical 
.feature file (which is basically a list of test scenarios for a particular 
feature). It has nose support, and there's some neat documentation 
(http://lettuce.it/index.html) too. I was just curious if anyone has already 
tried out Lettuce with OpenStack? From the ODS, I think the Grid Dynamics guys 
already have their own implementation. It would be great if one of you could 
join the meeting and throw some light on how you've got it to work.
Just for those who may be unaware, Soren's branch openstack-integration-tests 
(https://github.com/openstack/openstack-integration-tests) is actually a merge 
of Kong and Stacktester.
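For anyone who hasn't seen Lettuce, a scenario in a .feature file looks roughly 
like this (a hypothetical sketch following the ttylinux example above, not an 
actual OpenStack test):

```gherkin
Feature: Server creation
  Scenario: Boot a server from a ttylinux image
    Given a ttylinux image
    When I create a server from that image
    Then the server eventually reaches ACTIVE status
```

Each Given/When/Then line is matched to a Python step definition via Lettuce's 
@step decorator.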

The other point I wanted more clarity on was using both novaclient AND httplib2 
to make the API requests. Though wwkeyboard did mention issues regarding spec 
bug proliferation into the client, how can we best utilize this dual approach 
and avoid another round of duplicate test cases? Maybe we target novaclient 
first and then use httplib2 to fill in the gaps? After all, novaclient does 
call httplib2 internally.
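To make the overlap concrete: both paths ultimately exercise the same wire 
format. A novaclient test drives it through the binding, while an 
httplib2-based test builds and POSTs the body by hand, which also lets it send 
deliberately malformed requests that a well-behaved binding won't produce. A 
minimal sketch (the body shape follows the Compute API 1.1 create-server 
request; helper names are hypothetical):

```python
import json

# The JSON body an httplib2-based test would POST to /servers by hand;
# novaclient builds the equivalent body internally.
def create_server_body(name, image_ref, flavor_ref):
    return json.dumps(
        {"server": {"name": name, "imageRef": image_ref, "flavorRef": flavor_ref}}
    )

# A raw-HTTP test can also cover gaps the binding can't: e.g. omitting a
# required field to check the API's error handling.
def malformed_server_body(name):
    return json.dumps({"server": {"name": name}})
```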

I would like to team up with Gabe and others for the unified test runner task. 
Please chip me in if you're doing some division of labor there.

Thanks!
Rohit

(NTT)
From: openstack-bounces+rohit.karajgi=vertex.co...@lists.launchpad.net On 
Behalf Of Gabe Westmaas
Sent: Monday, October 10, 2011 9:22 PM
To: openstack@lists.launchpad.net
Subject: [Openstack] [QA] openstack-integration-tests

I'd like to try to summarize and propose at least one next step for the content 
of the openstack-integration-tests git repository.  Note that this is only 
about the actual tests themselves, and says absolutely nothing about any gating 
decisions made in other sessions.

First, there was widespread agreement that in order for an integration suite to 
be run in the openstack jenkins, it should be included in the community github 
repository.

Second, it was agreed that there is value in having tests in multiple 
languages, especially where those tests add value beyond the base language.  
Examples of this may include testing using another set of bindings, and 
therefore testing the API, or using a testing framework that just takes a 
different approach to testing.  Invalid examples include implementing the exact 
same test in another language simply because you don't like python.

Third, it was agreed that there is value in testing using novaclient as well as 
httplib2.  Similarly that there is value in testing both XML and JSON.

Fourth, for black box tests, any fixture setup that a suite of tests requires 
should be done via a script that is close to, but not within, that suite - we 
want tests to be as agnostic to an implementation of openstack as possible, and 
anything you cannot do from the API should not be inside the tests.

Fifth, there are suites of white box tests - we understand there can be value 
here, but we aren't sure how to approach that in this project; definitely more 
discussion is needed here.  Maybe we have separate directories for holding 
white and black box tests?

Sixth, no matter what else changes, we must maintain the ability to run a 
subset of tests through a common runner.  This can be done via command line or 
configuration, whichever makes the most sense.  I'd personally lean towards 
configuration with the ability to override on the command line.
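That last point can be sketched with the stdlib (section and option names here 
are hypothetical, not the project's actual configuration): defaults come from a 
configuration file, and command-line flags win when given.

```python
import argparse
import configparser

# Suite selection defaults live in a config file; a command-line flag
# overrides them. Section/option names are hypothetical.
DEFAULT_CONFIG = "[suite]\ntests = servers,images,flavors\n"

def select_tests(argv, config_text=DEFAULT_CONFIG):
    config = configparser.ConfigParser()
    config.read_string(config_text)
    parser = argparse.ArgumentParser()
    parser.add_argument("--tests", default=config["suite"]["tests"],
                        help="comma-separated subset of tests to run")
    args = parser.parse_args(argv)
    return args.tests.split(",")
```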

If you feel I mischaracterized any of the agreements, please feel free to say 
so.

Next, we want to start moving away from the multiple entry points for writing 
additional tests.  That means taking inventory of the tests that are there now, 
figuring out what they are testing and how we run them, and then working to 
combine what makes sense into a directory structure that makes sense.  As often 
as possible, we should make sure the tests can be run in the same way.  I 
started a little wiki to start collecting information.  I think a short 
description of the general strategy of each

Re: [Openstack] [QA] openstack-integration-tests

2011-10-19 Thread Daryl Walleck
Hi Rohit,

I'm glad to see so much interest in getting testing done right. So here are my 
thoughts. As far as the nova client/euca-tools portion goes, I think we 
absolutely need a series of tests that validate that these bindings work 
correctly. As a nice side effect they do test their respective APIs, which is 
good. I think duplication of testing between these two bindings and even what 
I'm envisioning as the main test suite is necessary, as we have to verify at 
least at a high level that they work correctly.

My thought for our core tests is that those would be the ones that do not use 
language bindings. I think this is where the interesting architectural work can 
be done. "Test framework" is a very loose term that gets used a lot, but to me 
a framework includes:

  *   The test runner and its capabilities
  *   How the test code is structured to ensure 
maintainability/flexibility/ease of code re-use
  *   Any utilities provided to extend or ease the ability to test

I think we all have a lot of good ideas about this; it's just a matter of 
consolidating them and choosing one direction to go forward with.

Daryl

On Oct 19, 2011, at 9:58 AM, Rohit Karajgi wrote:


Re: [Openstack] [QA] openstack-integration-tests

2011-10-19 Thread Brebner, Gavin
My 2c

To me, the end-customer facing part of the Openstack solution is in many ways 
the set of libraries and tools customers are likely to use - as such, testing 
with them is essential. If there's a bug that can only be exposed through some 
obscure API call that isn't readily available through one of the usual 
libraries, it will mostly be of only minor importance, as it will rarely if 
ever be used, whereas e.g. a bug in a library that causes data corruption will 
not be good for Openstack no matter how correct things are from the endpoint 
in. The whole solution needs to work; this is complex, as we don't necessarily 
control all the libraries and can't test everything with every possible 
library, so we have to do the best we can to ensure we catch errors as early 
as possible, e.g. via direct API testing for unit tests / low level functional 
tests. Testing at multiple levels is required, and the art is in selecting how 
much effort to put at each level.

Re. the framework: we need a wide range of capabilities, hence keep it simple 
and flexible. One thing I'd really like to see would be a means to express 
parallelism - e.g. for chaos monkey type tests, race conditions, realistic 
stress runs and so on. Support for tests written in any arbitrary language is 
also required. I can write all this myself, but would love a framework to 
handle it for me and leave me to concentrate on mimicking what I think our end 
customers are likely to do.
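The parallelism described here can be sketched with nothing more than a thread 
pool; hit_endpoint is a hypothetical stand-in for a real API call, not part of 
any existing suite:

```python
from concurrent.futures import ThreadPoolExecutor

def hit_endpoint(i):
    # Stand-in for one API request; a real stress test would issue e.g.
    # concurrent server creates to shake out race conditions.
    return ("GET /servers", i)

def stress_run(workers=8, requests=32):
    # Fire the same action from many workers at once.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(hit_endpoint, range(requests)))
```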

Gavin


Re: [Openstack] [QA] openstack-integration-tests

2011-10-19 Thread Joseph Heck
What you've described is a great unit testing framework, but with integration 
testing you need to recognize that some tests are dependent on specific system 
state - and therefore cannot be run blindly in parallel. 

Some can, just not all - and often the most expedient way to get a system into 
a known state is to walk it there through sequences of tests. 

I believe we essentially already have this framework in place with the 
openstack-integration-tests repo and the proposed (but not yet implemented) set 
of tests using proboscis to enable dependencies and grouping in those tests. 

-joe

On Oct 19, 2011, at 5:00 PM, Ngo, Donald (HP Cloud Services : QA) 
donald@hp.com wrote:


Re: [Openstack] [QA] openstack-integration-tests

2011-10-19 Thread Ngo, Donald (HP Cloud Services : QA)
My wish list for our proposed framework:


- Create JUnit-style XML run reports
- Run tests in parallel
- Be able to run out of the box with little configuration (a single 
configuration file, everything in one place)
- Run through a standard runner like nose (i.e. nosetests /Kong or 
nosetests /YourSuite). This will allow the suites to integrate easily into 
each company's framework.
- The test framework should support "drop and run", using reflection as a way 
to identify tests to run
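The "drop and run" point is essentially how nose and unittest already discover 
tests: by reflecting over names. A minimal stdlib sketch (the test class and 
method names are hypothetical):

```python
import unittest

# unittest (like nose) finds tests by reflection: any method whose name
# starts with "test" is picked up, so dropping a new test module into the
# suite directory is enough to have it run.
class ServersSmokeTest(unittest.TestCase):
    def test_list_servers(self):
        self.assertTrue(True)

    def helper_not_collected(self):
        pass  # no "test" prefix, so the loader ignores it

names = unittest.TestLoader().getTestCaseNames(ServersSmokeTest)
```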

Thanks,

Donald


Re: [Openstack] [QA] openstack-integration-tests

2011-10-19 Thread Joseph Heck
Fair enough; let me outline the goals I'm pursuing:

In the integration tests, I'm looking for a suite of tests that are relatively 
agnostic to the implementation of the components of OpenStack, and which 
interact with OpenStack through the APIs (and potentially the nova-* and 
swift-* command lines).

I have a hardware environment that I want to burn down and rebuild on a 
regular basis (i.e. PXE boot and clean-slate the stuff), installing and 
configuring OpenStack from the trunk branches (or a release branch if we're at 
milestone time), and run tests that use these systems together:
 - integrating authorization and authentication (keystone) with nova and swift
 - using glance in conjunction with swift
 - using glance in conjunction with nova
 - working the dashboard and its UI mechanisms that interact with everything 
else behind it through the keystone endpoints service catalog

I'm hoping that this framework would allow for supporting any number of 
installs - whether it's Fedora + GridDynamics RPMs, Ubuntu+KVM, or interesting 
and random variations on the themes. It's especially important to me that we 
can have this framework run, and expect to pass, against any OpenStack setup as 
we get into Quantum and Melange improvements over the Essex timeframe, which 
will be heavily reliant on deployment-specific choices (Cisco vs. OpenVSwitch 
vs. whatever else might become available).

I'd prefer the test suite not choose or show preference to a specific install, 
or at the very minimum support identifying tests specific to an install (i.e. 
to highlight the interesting deltas that have grown between Xen and KVM 
installations) - and be something that anyone can set up and run (since I don't 
want to expect any one entity to be responsible for having all the various 
OpenStack flavors).

My own personal aim is to have a stable back-end environment that I can run 
Selenium or equivalent tests against the dashboard, which will in turn exercise 
everything else. This means having the environment in a known state to start 
with, and I'm expecting that a whole suite of these tests might well be very 
time-consuming to run - up to a couple of hours.

My intention is to actually have multiple swim lanes of these hardware 
environments (I do already) and to automate the burn-down, install, 
configuration, and testing of those environments. I would like the testing 
portion of this to be consistent - and I'm looking to the OpenStack Integration 
Tests to be the basis for that - and to which we'll be submitting back tests 
exercising the Dashboard as it goes forward.

Something to note (kind of buried in 
http://wiki.openstack.org/openstack-integration-test-suites): the proboscis 
library (http://pypi.python.org/pypi/proboscis) is a Nose test runner 
that allows the annotation of tests with dependencies and groups, inspired by 
TestNG and maintained by Tim Simpson at Rackspace.

I haven't written much with it yet, but would like to suggest that tests 
be organized with it, using dependencies and grouping to allow 
parallel execution where appropriate.
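
Setting proboscis's actual decorator API aside, the behavior being proposed - 
skip dependents when a dependency fails, rather than reporting a cascade of 
false failures - can be sketched in plain Python. This is an illustration of 
the idea only, not proboscis itself; the test names are made up:

```python
# Minimal sketch of dependency-aware test execution: when a test fails,
# everything that declares a dependency on it is skipped, not failed.
def run_with_dependencies(tests, depends_on):
    """tests: {name: callable}; depends_on: {name: [dependency names]}.
    Returns {name: 'passed' | 'failed' | 'skipped'}."""
    results = {}

    def run(name):
        if name in results:
            return results[name]
        # Run dependencies first; skip this test if any did not pass.
        for dep in depends_on.get(name, []):
            if run(dep) != "passed":
                results[name] = "skipped"
                return results[name]
        try:
            tests[name]()
            results[name] = "passed"
        except Exception:
            results[name] = "failed"
        return results[name]

    for name in tests:
        run(name)
    return results


def create_server():
    pass  # pretend this provisions a server successfully

def resize_server():
    raise RuntimeError("resize broke")  # simulated failure

def delete_server():
    pass  # depends on resize in this contrived chain, so it is skipped

results = run_with_dependencies(
    {"create": create_server, "resize": resize_server, "delete": delete_server},
    {"resize": ["create"], "delete": ["resize"]},
)
print(results)  # create passes, resize fails, delete is skipped
```

The downstream-skip behavior is exactly the trade-off discussed later in the 
thread: it avoids misleading failures but can hide issues until the first 
failure is fixed.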

I'm currently generating functional tests using Lettuce (with an 
httplib2-based web driver), and while I like it from a BDD perspective, I'm 
finding it more time-consuming to generate the tests (as I end up messing with 
the DSL more than writing tests) today.

-joe

On Oct 19, 2011, at 8:25 PM, Daryl Walleck wrote:
 Interesting analysis. I see a few issues, though. I don't think that running 
 tests serially is a realistic option with a functional/integration test 
 suite of significant size, given the time needed to create the resources 
 needed for testing. We could, but the time needed to execute the suite and 
 get feedback in a useful period of time would be prohibitive. If tests are 
 self-sufficient and create or intelligently share resources, parallelization 
 should be doable. 
 
 I've heard the idea of forced test dependencies several times, which adds its 
 own set of problems. Independent and flexible grouping/execution of tests is 
 something I've relied on quite a bit in the past when troubleshooting. I'd 
 also be concerned about the stability and reliability of the tests if each 
 relies on the state generated by the tests before it. If test 4 out of 100 
 fails, that means the results of the remaining 96 tests would either be false 
 positives, or, if all dependent execution ends at that point, numerous 
 test cases would not be executed, possibly hiding other issues until the 
 first issue is resolved.
 
 I also get the impression that we all may be fairly far apart about 
 what it is that we each want from a testing framework (and even, to some 
 degree, about what components make up a framework). It might be useful to take a 
 step back and discuss what it is that we want from this test suite and what 
 it should accomplish. For example, my understanding is that this suite will 
 likely grow to hundreds

Re: [Openstack] [QA] openstack-integration-tests

2011-10-12 Thread James E. Blair
Gabe Westmaas gabe.westm...@rackspace.com writes:

 First, there was widespread agreement that in order for an integration
 suite to be run in the openstack jenkins, it should be included in the
 community github repository.

Hi,

During the meeting today, folks asked about what the core group is for
the openstack-integration-tests project.

The project is currently on github and gerrit:
  https://github.com/openstack/openstack-integration-tests

I think there should be a new group, perhaps openstack-qa-core, that is
the group of core committers to the project that can approve changes in
gerrit.  Unless there are any objections, I'll create such a group and
add it to gerrit.

-Jim

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [QA] openstack-integration-tests

2011-10-12 Thread Jay Pipes
Sorry for top posting... At a conference and typing from my phone...

The openstack-qa-core group should be for core code committers to the
openstack-integrated-tests project. The fixes team is the one responsible
for the stable branches..

-jay
On Oct 12, 2011 4:13 PM, Nati Ueno nati.u...@gmail.com wrote:

 Jim++

 #I suppose the scope of openstack-qa-core and fixes are very similar.

 2011/10/12 James E. Blair cor...@inaugust.com:
  Gabe Westmaas gabe.westm...@rackspace.com writes:
 
  First, there was widespread agreement that in order for an integration
  suite to be run in the openstack jenkins, it should be included in the
  community github repository.
 
  Hi,
 
  During the meeting today, folks asked about what the core group is for
  the openstack-integration-tests project.
 
  The project is currently on github and gerrit:
   https://github.com/openstack/openstack-integration-tests
 
  I think there should be a new group, perhaps openstack-qa-core, that is
  the group of core committers to the project that can approve changes in
  gerrit.  Unless there are any objections, I'll create such a group and
  add it to gerrit.
 
  -Jim
 
 



 --
 Nachi Ueno
 email:nati.u...@gmail.com
 twitter:http://twitter.com/nati




Re: [Openstack] [QA] openstack-integration-tests

2011-10-12 Thread Nati Ueno
Jim++

#I suppose the scope of openstack-qa-core and fixes are very similar.

2011/10/12 James E. Blair cor...@inaugust.com:
 Gabe Westmaas gabe.westm...@rackspace.com writes:

 First, there was widespread agreement that in order for an integration
 suite to be run in the openstack jenkins, it should be included in the
 community github repository.

 Hi,

 During the meeting today, folks asked about what the core group is for
 the openstack-integration-tests project.

 The project is currently on github and gerrit:
  https://github.com/openstack/openstack-integration-tests

 I think there should be a new group, perhaps openstack-qa-core, that is
 the group of core committers to the project that can approve changes in
 gerrit.  Unless there are any objections, I'll create such a group and
 add it to gerrit.

 -Jim





-- 
Nachi Ueno
email:nati.u...@gmail.com
twitter:http://twitter.com/nati



[Openstack] [QA] openstack-integration-tests

2011-10-10 Thread Gabe Westmaas
I'd like to try to summarize and propose at least one next step for the content 
of the openstack-integration-tests git repository.  Note that this is only 
about the actual tests themselves, and says absolutely nothing about any gating 
decisions made in other sessions.

First, there was widespread agreement that in order for an integration suite to 
be run in the openstack jenkins, it should be included in the community github 
repository.

Second, it was agreed that there is value in having tests in multiple 
languages, especially where those tests add value beyond the base language. 
Examples of this include testing through another set of bindings (and therefore 
exercising the API), or using a testing framework that takes a different 
approach to testing. Invalid examples include implementing the exact same test 
in another language simply because you don't like Python.

Third, it was agreed that there is value in testing using novaclient as well as 
httplib2.  Similarly that there is value in testing both XML and JSON.

Fourth, for black box tests, any fixture setup that a suite of tests requires 
should be done via a script that is close to, but not within, that suite – we 
want tests to be as agnostic to an implementation of openstack as possible, and 
anything you cannot do from the API should not be inside the tests.

Fifth, there are suites of white box tests – we understand there can be value 
here, but we aren't sure how to approach that in this project, definitely more 
discussion needed here.  Maybe we have a separate directory for holding white 
and black box tests?

Sixth, no matter what else changes, we must maintain the ability to run a 
subset of tests through a common runner.  This can be done via command line or 
configuration, whichever makes the most sense.  I'd personally lean towards 
configuration with the ability to override on the command line.
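
That configuration-with-command-line-override idea can be sketched with the 
stdlib; the `--suites` option and the default suite names here are hypothetical, 
just to show the precedence (config supplies the default, the command line wins):

```python
import argparse

# Hypothetical defaults that would normally be read from a config file
# (e.g. a tests.conf parsed with configparser).
CONFIG_DEFAULTS = {"suites": "servers,images,flavors"}

def select_suites(argv, config=CONFIG_DEFAULTS):
    """Return the list of test suites to run: configuration supplies the
    default, and the command line overrides it."""
    parser = argparse.ArgumentParser(description="integration test runner")
    parser.add_argument("--suites",
                        default=config["suites"],
                        help="comma-separated subset of test suites to run")
    args = parser.parse_args(argv)
    return [s.strip() for s in args.suites.split(",") if s.strip()]

# No override: fall back to the configured default.
print(select_suites([]))                        # ['servers', 'images', 'flavors']
# Command-line override wins over configuration.
print(select_suites(["--suites", "servers"]))   # ['servers']
```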

If you feel I mischaracterized any of the agreements, please feel free to say 
so.
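
On the third point, one way to keep a single client from masking API errors is 
to assert that the raw HTTP view and the binding's view agree. Here is a 
pure-Python sketch of that cross-check; the field names and the fake flavor 
object are illustrative stand-ins (in practice the raw body would come from an 
httplib2 request and the object from novaclient):

```python
import json

def check_flavor_consistency(raw_body, client_flavor):
    """Compare a raw JSON API response against the attributes exposed by a
    client binding; any mismatch means one side is masking a bug.
    Returns the list of fields that disagree (empty means consistent)."""
    raw = json.loads(raw_body)["flavor"]
    return [field for field in ("id", "name", "ram")
            if raw[field] != getattr(client_flavor, field)]

# Stand-in for an object returned by a binding such as novaclient.
class FakeFlavor(object):
    id, name, ram = 1, "m1.tiny", 512

raw_body = json.dumps({"flavor": {"id": 1, "name": "m1.tiny", "ram": 512}})
print(check_flavor_consistency(raw_body, FakeFlavor()))  # [] -> views agree
```

The same shape works for the XML-vs-JSON agreement mentioned above: parse both 
representations and diff the normalized fields.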

Next, we want to start moving away from the multiple entry points to write 
additional tests.  That means taking inventory of the tests that are there now, 
figuring out what they are testing and how we run them, and then working to 
combine what makes sense into a directory structure that makes sense.  As 
often as possible, we should make sure the tests can be run in the same way.  I 
started a little wiki to start collecting information.  I think a short 
description of the general strategy of each suite, and then details about the 
specific tests in that suite, would be useful.

http://wiki.openstack.org/openstack-integration-test-suites

Hopefully this can make things a little easier to start contributing.

Gabe
This email may include confidential information. If you received it in error, 
please delete it.


Re: [Openstack] Integration tests

2011-09-13 Thread Thierry Carrez
Matt Dietz wrote:
 Ditto
 
 On 9/12/11 2:10 PM, Sandy Walsh sandy.wa...@rackspace.com wrote:
 
 From: Jay Pipes [jaypi...@gmail.com]

 Can we discuss pulling novaclient into Nova's source tree at the design
 summit?

 +1 

Someone should file it at summit.openstack.org, then :)

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack



Re: [Openstack] Integration tests

2011-09-13 Thread Sandy Walsh
done


From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
Thierry Carrez [thie...@openstack.org]
Sent: Tuesday, September 13, 2011 6:57 AM
To: openstack@lists.launchpad.net
Subject: Re: [Openstack] Integration tests

Matt Dietz wrote:
 Ditto

 On 9/12/11 2:10 PM, Sandy Walsh sandy.wa...@rackspace.com wrote:

 From: Jay Pipes [jaypi...@gmail.com]

 Can we discuss pulling novaclient into Nova's source tree at the design
 summit?

 +1

Someone should file it at summit.openstack.org, then :)

--
Thierry Carrez (ttx)
Release Manager, OpenStack



Re: [Openstack] Integration tests

2011-09-12 Thread Jay Pipes
Wow, another integration test framework emerges out of the woodwork :)

Looking forward to getting all of these into a single place... and
clearing up the lack of communication between teams on this subject!
There's Kong, Stacktester, and Backfire. Any others out there we
should know about?

Cheers,
jay

On Sun, Sep 11, 2011 at 9:09 PM, Matt Dietz matt.di...@rackspace.com wrote:
 Suite looks great, Soren!
 Wanted to mention that we actually developed our own suite of tests, which
 you can find at https://github.com/ohthree/backfire I'm planning to
 reconcile the difference so we can stop the independent efforts, but it's
 going to take time. Something else I'd like to see in the suite, that we
 currently have in backfire via a custom test module, is the ability to
 parallelize the tests for execution.
 From: Soren Hansen so...@linux2go.dk
 Date: Sat, 10 Sep 2011 00:21:10 +0200
 To: openstack@lists.launchpad.net
 Subject: Re: [Openstack] Integration tests

 That link shouldn't have included the +bug bit. Copy/paste fail. :(

 Sent from my phone. Pardon my brevity.






Re: [Openstack] Integration tests

2011-09-12 Thread Tim Simpson
I'm with the Reddwarf (Database as a Service) team at Rackspace. We employ a 
large set of integration / functional tests which run in a Vagrant controlled 
VM.

Our tests use python-novaclient as well as a similar client for Reddwarf we've 
written ourselves. The motivation is to eat our own dog food: if novaclient is 
the official rich client for Python, why shouldn't the tests make use of it?

We also use a library called Proboscis so we can order our tests without 
prefixing numbers to them, automatically skip related tests when something in a 
dependency fails, and avoid global variables even as we pass state between 
related tests. I think the openstack-integration-tests would definitely benefit 
from it.

I'd love to have a conversation about getting the traits outlined above adopted 
into this standardized testing solution. :)

Output of our tests running: http://pastie.org/2521835
Proboscis documentation: http://packages.python.org/proboscis/

From: openstack-bounces+tim.simpson=rackspace@lists.launchpad.net 
[openstack-bounces+tim.simpson=rackspace@lists.launchpad.net] on behalf of 
Jay Pipes [jaypi...@gmail.com]
Sent: Monday, September 12, 2011 8:20 AM
To: Matt Dietz
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] Integration tests

Wow, another integration test framework emerges out of the woodwork :)

Looking forward to getting all of these into a single place... and
clearing up the lack of communication between teams on this subject!
There's Kong, Stacktester, and Backfire. Any others out there we
should know about?

Cheers,
jay

On Sun, Sep 11, 2011 at 9:09 PM, Matt Dietz matt.di...@rackspace.com wrote:
 Suite looks great, Soren!
 Wanted to mention that we actually developed our own suite of tests, which
 you can find at https://github.com/ohthree/backfire I'm planning to
 reconcile the difference so we can stop the independent efforts, but it's
 going to take time. Something else I'd like to see in the suite, that we
 currently have in backfire via a custom test module, is the ability to
 parallelize the tests for execution.
 From: Soren Hansen so...@linux2go.dk
 Date: Sat, 10 Sep 2011 00:21:10 +0200
 To: openstack@lists.launchpad.net
 Subject: Re: [Openstack] Integration tests

 That link shouldn't have included the +bug bit. Copy/paste fail. :(

 Sent from my phone. Pardon my brevity.





Re: [Openstack] Integration tests

2011-09-12 Thread Matt Dietz
Regarding novaclient, I was debating this myself last night. Our backfire
suite currently uses it. However, I can see scenarios where nova has
advanced but novaclient has yet to, and because all three projects are
independently developed, we're stuck temporarily. I can see utility in
tests specifically for Novaclient, and ones that use a built in client (as
the suite that Soren provided currently does.)

Also, Backfire uses a module Kevin Mitchell wrote named Dtest that
actually provides a lot of that same functionality you mention, Tim. It
also gives you the ability to spawn the tests individually in greenthreads
so they execute in parallel. We got pushback when we initially tried to
merge our tests into Nova, partially because it uses a third-party library
for executing the tests. We can try merging these things together, if
that's what everyone wants.
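
Dtest's greenthread model aside, the shape of parallel test execution with
collected results can be sketched with the stdlib. This uses OS threads rather
than greenthreads, and the test names and failure are made up; it illustrates
the pattern, not Dtest's actual API:

```python
from concurrent.futures import ThreadPoolExecutor

def run_parallel(tests, workers=4):
    """Run independent test callables concurrently; return {name: status}."""
    def run_one(item):
        name, fn = item
        try:
            fn()
            return name, "passed"
        except Exception as exc:
            return name, "failed: %s" % exc

    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(run_one, tests.items()))

def failing_test():
    raise RuntimeError("boom")  # simulated failure

# Three independent "tests"; only the middle one fails.
results = run_parallel({
    "test_list_servers": lambda: None,
    "test_resize": failing_test,
    "test_list_images": lambda: None,
})
print(results)
```

The key precondition, as noted throughout this thread, is that the tests be
independent or share resources intelligently; parallelism and inter-test
dependencies pull in opposite directions.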

On 9/12/11 11:39 AM, Tim Simpson tim.simp...@rackspace.com wrote:

I'm with the Reddwarf (Database as a Service) team at Rackspace. We
employ a large set of integration / functional tests which run in a
Vagrant controlled VM.

Our tests use python-novaclient as well as a similar client for Reddwarf
we've written ourselves. The motivation is to eat our own dog food- if
the novaclient is the official rich client for Python, why shouldn't the
test make use of it?

We also use a library called Proboscis so we can order our tests without
prefixing numbers to them, automatically skip related tests when
something in a dependency fails, and avoid global variables even as we
pass state between related tests. I think the openstack-integration-tests
would definitely benefit from it.

I'd love to have a conversation about getting the traits outlined above
adopted into this standardized testing solution. :)

Output of our tests running: http://pastie.org/2521835
Proboscis documentation: http://packages.python.org/proboscis/

From: openstack-bounces+tim.simpson=rackspace@lists.launchpad.net
[openstack-bounces+tim.simpson=rackspace@lists.launchpad.net] on
behalf of Jay Pipes [jaypi...@gmail.com]
Sent: Monday, September 12, 2011 8:20 AM
To: Matt Dietz
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] Integration tests

Wow, another integration test framework emerges out of the woodwork :)

Looking forward to getting all of these into a single place... and
clearing up the lack of communication between teams on this subject!
There's Kong, Stacktester, and Backfire. Any others out there we
should know about?

Cheers,
jay

On Sun, Sep 11, 2011 at 9:09 PM, Matt Dietz matt.di...@rackspace.com
wrote:
 Suite looks great, Soren!
 Wanted to mention that we actually developed our own suite of tests,
which
 you can find at https://github.com/ohthree/backfire I'm planning to
 reconcile the difference so we can stop the independent efforts, but
it's
 going to take time. Something else I'd like to see in the suite, that we
 currently have in backfire via a custom test module, is the ability to
 parallelize the tests for execution.
 From: Soren Hansen so...@linux2go.dk
 Date: Sat, 10 Sep 2011 00:21:10 +0200
 To: openstack@lists.launchpad.net
 Subject: Re: [Openstack] Integration tests

 That link shouldn't have included the +bug bit. Copy/paste fail. :(

 Sent from my phone. Pardon my brevity.





Re: [Openstack] Integration tests

2011-09-12 Thread Jay Pipes
On Mon, Sep 12, 2011 at 1:22 PM, Matt Dietz matt.di...@rackspace.com wrote:
 Regarding novaclient, I was debating this myself last night. Our backfire
 suite currently uses it. However, I can see scenarios where nova has
 advanced but novaclient has yet to, and because all three projects are
 independently developed, we're stuck temporarily.

This, I believe, is a problem. novaclient is currently the only client
that isn't in the core project itself. Can we discuss pulling
novaclient into Nova's source tree at the design summit?

 I can see utility in
 tests specifically for Novaclient, and ones that use a built in client (as
 the suite that Soren provided currently does.)

Sure. This is actually something that Glance's functional test suite
does. It uses httplib2 as a direct HTTP client and uses bin/glance to
test the supplied Glance client.

 Also, Backfire uses a module Kevin Mitchell wrote named Dtest that
 actually provides a lot of that same functionality you mention, Tim. It
 also gives you the ability to spawn the tests individually in greenthreads
 so they execute in parallel. We got push back when we initially tried to
 merge our tests into Nova, partially because it uses a 3rd party library
 for executing the tests. We can try merging these things together, if
 that's what everyone wants.

I'd be totally cool with bringing DTest goodness into
https://github.com/sorenh/openstack-integration-tests. :)

Less duplication of effort, the better IMHO.
openstack-integration-tests should be in the business of writing
integration tests, not multi-processing testing libraries.

-jay



Re: [Openstack] Integration tests

2011-09-12 Thread Sandy Walsh


From: Jay Pipes [jaypi...@gmail.com]

 Can we discuss pulling novaclient into Nova's source tree at the design 
 summit?

+1 


Re: [Openstack] Integration tests

2011-09-12 Thread Tim Simpson
It would be advantageous for the tests to only use one client, whatever it 
turns out to be. I think it would be most practical to use a bleeding edge 
version of Nova client and add any features necessary to it in its own repo 
directly or by somehow mixing it in with the test code.

As for Dtest - I guess great minds think alike? It looks like both projects 
started at the same time (Proboscis began back in February in an internal 
Rackspace repo), so it's unfortunate we're only just now discovering each 
other's work. Dtest looks very interesting, and the ability to run tests in 
parallel is something I've thought about doing in Proboscis, but I've been 
unsure how to proceed since Proboscis orders things before sending the list to 
Nose (for the purposes of backwards compatibility). It may be possible, 
however, for Proboscis to feed Dtest's execution routines.

At first glance the ordering functionality in Dtest is similar to Proboscis, 
but after reviewing some of the code in backfire I believe there are features 
of Proboscis not present there. Proboscis is, in general, TestNG in Python, and 
anytime I've broken away from how TestNG handles these things (usually by 
accident) I've found it to be a mistake; TestNG really did think this through 
pretty well.

We should talk sometime, I think each project could gain a lot from it.


From: Matt Dietz
Sent: Monday, September 12, 2011 12:22 PM
To: Tim Simpson; Jay Pipes
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] Integration tests

Regarding novaclient, I was debating this myself last night. Our backfire
suite currently uses it. However, I can see scenarios where nova has
advanced but novaclient has yet to, and because all three projects are
independently developed, we're stuck temporarily. I can see utility in
tests specifically for Novaclient, and ones that use a built in client (as
the suite that Soren provided currently does.)

Also, Backfire uses a module Kevin Mitchell wrote named Dtest that
actually provides a lot of that same functionality you mention, Tim. It
also gives you the ability to spawn the tests individually in greenthreads
so they execute in parallel. We got push back when we initially tried to
merge our tests into Nova, partially because it uses a 3rd party library
for executing the tests. We can try merging these things together, if
that's what everyone wants.

On 9/12/11 11:39 AM, Tim Simpson tim.simp...@rackspace.com wrote:

I'm with the Reddwarf (Database as a Service) team at Rackspace. We
employ a large set of integration / functional tests which run in a
Vagrant controlled VM.

Our tests use python-novaclient as well as a similar client for Reddwarf
we've written ourselves. The motivation is to eat our own dog food- if
the novaclient is the official rich client for Python, why shouldn't the
test make use of it?

We also use a library called Proboscis so we can order our tests without
prefixing numbers to them, automatically skip related tests when
something in a dependency fails, and avoid global variables even as we
pass state between related tests. I think the openstack-integration-tests
would definitely benefit from it.

I'd love to have a conversation about getting the traits outlined above
adopted into this standardized testing solution. :)

Output of our tests running: http://pastie.org/2521835
Proboscis documentation: http://packages.python.org/proboscis/

From: openstack-bounces+tim.simpson=rackspace@lists.launchpad.net
[openstack-bounces+tim.simpson=rackspace@lists.launchpad.net] on
behalf of Jay Pipes [jaypi...@gmail.com]
Sent: Monday, September 12, 2011 8:20 AM
To: Matt Dietz
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] Integration tests

Wow, another integration test framework emerges out of the woodwork :)

Looking forward to getting all of these into a single place... and
clearing up the lack of communication between teams on this subject!
There's Kong, Stacktester, and Backfire. Any others out there we
should know about?

Cheers,
jay

On Sun, Sep 11, 2011 at 9:09 PM, Matt Dietz matt.di...@rackspace.com
wrote:
 Suite looks great, Soren!
 Wanted to mention that we actually developed our own suite of tests,
which
 you can find at https://github.com/ohthree/backfire I'm planning to
 reconcile the difference so we can stop the independent efforts, but
it's
 going to take time. Something else I'd like to see in the suite, that we
 currently have in backfire via a custom test module, is the ability to
 parallelize the tests for execution.
 From: Soren Hansen so...@linux2go.dk
 Date: Sat, 10 Sep 2011 00:21:10 +0200
 To: openstack@lists.launchpad.net
 Subject: Re: [Openstack] Integration tests

 That link shouldn't have included the +bug bit. Copy/paste fail. :(

 Sent from my phone. Pardon my brevity.


Re: [Openstack] Integration tests

2011-09-12 Thread Matt Dietz
Ditto

On 9/12/11 2:10 PM, Sandy Walsh sandy.wa...@rackspace.com wrote:



From: Jay Pipes [jaypi...@gmail.com]

 Can we discuss pulling novaclient into Nova's source tree at the design
summit?

+1 



Re: [Openstack] Integration tests

2011-09-12 Thread Gabe Westmaas

On 9/12/11 2:54 PM, Tim Simpson tim.simp...@rackspace.com wrote:

It would be advantageous for the tests to only use one client, whatever
it turns out to be. I think it would be most practical to use a bleeding
edge version of Nova client and add any features necessary to it in its
own repo directly or by somehow mixing it in with the test code.

My biggest concern in using novaclient exclusively is masking errors in
both the API and novaclient.  If this is resolved another way, then I'm
ok.  There are a few different ways to do this, including running through
a separate python client that hits the API directly (as kong and
stacktester do, I believe), or using language bindings from other
languages as a part of the tests to make sure everyone is in sync.

Of course it is important that we maintain novaclient as a part of the end
to end tests as well.

Gabe


As for Dtest - I guess great minds think alike? It looks like both
projects started at the same time (Proboscis began back in February in an
internal Rackspace repo), so it's unfortunate we're only just now
discovering each other's work. Dtest looks very interesting, and the
ability to run tests in parallel is something I've thought about
doing in Proboscis, but I've been unsure how to proceed since Proboscis
orders things before sending the list to Nose (for the purposes of backwards
compatibility). It may be possible, however, for Proboscis to feed Dtest's
execution routines.

At first glance the ordering functionality in Dtest is similar to
Proboscis, but after reviewing some of the code in backfire I believe
there are features of Proboscis not present there. Proboscis is, in general,
TestNG in Python, and anytime I've broken away from how TestNG handles
these things (usually by accident) I've found it to be a mistake; TestNG
really did think this through pretty well.

We should talk sometime, I think each project could gain a lot from it.


From: Matt Dietz
Sent: Monday, September 12, 2011 12:22 PM
To: Tim Simpson; Jay Pipes
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] Integration tests

Regarding novaclient, I was debating this myself last night. Our Backfire
suite currently uses it. However, I can see scenarios where Nova has
advanced but novaclient has yet to, and because all three projects are
independently developed, we're stuck temporarily. I can see utility in
tests specifically for novaclient, and in ones that use a built-in client
(as the suite that Soren provided currently does).

Also, Backfire uses a module Kevin Mitchell wrote named Dtest that
actually provides a lot of that same functionality you mention, Tim. It
also gives you the ability to spawn the tests individually in greenthreads
so they execute in parallel. We got pushback when we initially tried to
merge our tests into Nova, partially because it uses a third-party library
for executing the tests. We can try merging these things together, if
that's what everyone wants.

On 9/12/11 11:39 AM, Tim Simpson tim.simp...@rackspace.com wrote:

I'm with the Reddwarf (Database as a Service) team at Rackspace. We
employ a large set of integration / functional tests which run in a
Vagrant-controlled VM.

Our tests use python-novaclient as well as a similar client for Reddwarf
we've written ourselves. The motivation is to eat our own dog food: if
novaclient is the official rich client for Python, why shouldn't the
tests make use of it?

We also use a library called Proboscis so we can order our tests without
prefixing numbers to them, automatically skip related tests when
something in a dependency fails, and avoid global variables even as we
pass state between related tests. I think the openstack-integration-tests
would definitely benefit from it.
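The behavior described above (ordering by declared dependencies, skipping dependents when a prerequisite fails, and threading state between tests without globals) can be sketched in a few lines of plain Python. This is an illustrative toy only, not Proboscis's actual API or implementation:

```python
# Toy runner: dependency-ordered tests, skip-on-failure, shared state.
# Illustrative sketch only -- Proboscis's real API and internals differ.

class Runner:
    def __init__(self):
        self.tests = []      # (name, func, depends_on)
        self.results = {}    # name -> "pass" | "fail" | "skip"
        self.state = {}      # state handed from test to test

    def test(self, depends_on=()):
        def decorator(func):
            self.tests.append((func.__name__, func, tuple(depends_on)))
            return func
        return decorator

    def run(self):
        pending = list(self.tests)
        while pending:
            name, func, deps = pending.pop(0)
            if not all(d in self.results for d in deps):
                pending.append((name, func, deps))   # prerequisite not run yet
                continue
            if any(self.results[d] != "pass" for d in deps):
                self.results[name] = "skip"          # prerequisite failed
                continue
            try:
                func(self.state)
                self.results[name] = "pass"
            except Exception:
                self.results[name] = "fail"
        return self.results

runner = Runner()

@runner.test()
def create_server(state):
    state["server_id"] = 42          # pretend we provisioned something

@runner.test(depends_on=["create_server"])
def check_server(state):
    assert state["server_id"] == 42  # state flows without globals

@runner.test()
def broken(state):
    raise RuntimeError("boom")

@runner.test(depends_on=["broken"])
def needs_broken(state):
    pass                             # never runs; marked "skip"

results = runner.run()
```

When run, `broken` fails and `needs_broken` is skipped rather than reported as a second spurious failure, which is the point of the dependency model.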

I'd love to have a conversation about getting the traits outlined above
adopted into this standardized testing solution. :)

Output of our tests running: http://pastie.org/2521835
Proboscis documentation: http://packages.python.org/proboscis/

From: openstack-bounces+tim.simpson=rackspace@lists.launchpad.net
[openstack-bounces+tim.simpson=rackspace@lists.launchpad.net] on
behalf of Jay Pipes [jaypi...@gmail.com]
Sent: Monday, September 12, 2011 8:20 AM
To: Matt Dietz
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] Integration tests

Wow, another integration test framework emerges out of the woodwork :)

Looking forward to getting all of these into a single place... and
clearing up the lack of communication between teams on this subject!
There's Kong, Stacktester, and Backfire. Any others out there we
should know about?

Cheers,
jay

On Sun, Sep 11, 2011 at 9:09 PM, Matt Dietz matt.di...@rackspace.com
wrote:
 Suite looks great, Soren!
 Wanted to mention that we actually developed our own suite of tests,
which
 you can find at https://github.com/ohthree/backfire. I'm planning to
 reconcile the differences so we can stop the independent efforts

Re: [Openstack] Integration tests

2011-09-12 Thread Matt Dietz
Actually, to extend this: novaclient is used for zone communication inside
of Nova. Not including it is silly at this point.

On 9/12/11 1:37 PM, Jay Pipes jaypi...@gmail.com wrote:

On Mon, Sep 12, 2011 at 1:22 PM, Matt Dietz matt.di...@rackspace.com
wrote:
 Regarding novaclient, I was debating this myself last night. Our
backfire
 suite currently uses it. However, I can see scenarios where nova has
 advanced but novaclient has yet to, and because all three projects are
 independently developed, we're stuck temporarily.

This, I believe, is a problem. novaclient is currently the only client
that isn't in the core project itself. Can we discuss pulling
novaclient into Nova's source tree at the design summit?

 I can see utility in
 tests specifically for Novaclient, and ones that use a built in client
(as
 the suite that Soren provided currently does.)

Sure. This is actually something that Glance's functional test suite
does. It uses httplib2 as a direct HTTP client and uses bin/glance to
test the supplied Glance client.
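That "direct HTTP client" approach can be sketched with nothing but the standard library: stand up a tiny in-process server and assert on the raw HTTP response, so no client library sits between the test and the API. (Glance uses httplib2 for this; the stdlib version below works the same way, and the `/v1/images` endpoint here is a made-up stand-in.)

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class FakeImagesApi(BaseHTTPRequestHandler):
    # Made-up stand-in for a service under test; the endpoint is illustrative.
    def do_GET(self):
        if self.path == "/v1/images":
            body = json.dumps({"images": []}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

def get_images(port):
    # The test speaks raw HTTP, so a client-library bug can't mask an API bug.
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/v1/images") as resp:
        assert resp.status == 200
        assert resp.headers["Content-Type"] == "application/json"
        return json.loads(resp.read())

server = HTTPServer(("127.0.0.1", 0), FakeImagesApi)
threading.Thread(target=server.serve_forever, daemon=True).start()
data = get_images(server.server_address[1])
server.shutdown()
```

The test asserts on status code, headers, and body directly, which is exactly the error-masking concern raised earlier in the thread.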

 Also, Backfire uses a module Kevin Mitchell wrote named Dtest that
 actually provides a lot of that same functionality you mention, Tim. It
 also gives you the ability to spawn the tests individually in
greenthreads
 so they execute in parallel. We got push back when we initially tried to
 merge our tests into Nova, partially because it uses a 3rd party library
 for executing the tests. We can try merging these things together, if
 that's what everyone wants.

I'd be totally cool with bringing DTest goodness into
https://github.com/sorenh/openstack-integration-tests. :)

Less duplication of effort, the better IMHO.
openstack-integration-tests should be in the business of writing
integration tests, not multi-processing testing libraries.

-jay

This email may include confidential information. If you received it in error, 
please delete it.


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Integration tests

2011-09-12 Thread Tim Simpson
One solution is to make a second version of the Nova client with the same
interface, so that the test config could use one or the other on different
test runs. We could make this client communicate via XML to ensure we hit
the XML serialization code.

To me the biggest benefit of using the client directly is that it would
make everyone buy into how it behaves and feels. As for testing the client
itself, I agree that's a secondary goal, and it is arguably negated by the
risk that API bugs novaclient happens to be immune to would go uncaught.
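A minimal sketch of that idea: two clients sharing one interface, with the wire format chosen by the test config so the same tests exercise both serialization paths. All names here are illustrative, not the real novaclient API:

```python
import json
import xml.etree.ElementTree as ET

class JsonClient:
    # Illustrative client; method and field names are made up for the sketch.
    content_type = "application/json"

    def serialize_server(self, name, flavor):
        return json.dumps({"server": {"name": name, "flavorRef": flavor}})

class XmlClient:
    # Same interface, XML on the wire, so it can hit the XML serialization code.
    content_type = "application/xml"

    def serialize_server(self, name, flavor):
        elem = ET.Element("server", name=name, flavorRef=flavor)
        return ET.tostring(elem, encoding="unicode")

def make_client(config):
    # The test config decides which wire format a given run exercises.
    return XmlClient() if config.get("format") == "xml" else JsonClient()

client = make_client({"format": "xml"})
payload = client.serialize_server("test-1", "m1.tiny")
```

Because both classes expose the same interface, the test bodies never change; only the config value flips between runs.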

From: Gabe Westmaas
Sent: Monday, September 12, 2011 3:16 PM
To: Tim Simpson; Matt Dietz; Jay Pipes
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] Integration tests

On 9/12/11 2:54 PM, Tim Simpson tim.simp...@rackspace.com wrote:

It would be advantageous for the tests to only use one client, whatever
it turns out to be. I think it would be most practical to use a bleeding
edge version of Nova client and add any features necessary to it in its
own repo directly or by somehow mixing it in with the test code.

My biggest concern in using novaclient exclusively is masking errors in
both the API and novaclient.  If this is resolved another way, then I'm
ok.  There are a few different ways to do this, including running through
a separate Python client that hits the API directly (as Kong and
Stacktester do, I believe), or using language bindings from other
languages as part of the tests to make sure everyone is in sync.

Of course it is important that we maintain novaclient as part of the
end-to-end tests as well.

Gabe


As for Dtest - I guess great minds think alike? It looks like both
projects started at the same time (Proboscis began back in February in an
internal Rackspace repo), so it's unfortunate we're only just now
discovering each other's work. Dtest looks very interesting, and the
ability to run tests in parallel is something I've thought about doing
in Proboscis, but I've been unsure how to proceed since Proboscis orders
things before sending the list to Nose (for the purposes of backwards
compatibility). It may be possible, however, for Proboscis to feed
Dtest's execution routines.

At first glance the ordering functionality in Dtest is similar to
Proboscis, but after reviewing some of the code in Backfire I believe
there are features of Proboscis not present there. Proboscis in general
is TestNG in Python, and any time I've broken away from how TestNG
handles these things (usually by accident) I've found it to be a
mistake; TestNG really did think through how to do this pretty well.

We should talk sometime, I think each project could gain a lot from it.


From: Matt Dietz
Sent: Monday, September 12, 2011 12:22 PM
To: Tim Simpson; Jay Pipes
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] Integration tests

Regarding novaclient, I was debating this myself last night. Our Backfire
suite currently uses it. However, I can see scenarios where Nova has
advanced but novaclient has yet to, and because all three projects are
independently developed, we're stuck temporarily. I can see utility in
tests specifically for novaclient, and in ones that use a built-in client
(as the suite that Soren provided currently does).

Also, Backfire uses a module Kevin Mitchell wrote named Dtest that
actually provides a lot of that same functionality you mention, Tim. It
also gives you the ability to spawn the tests individually in greenthreads
so they execute in parallel. We got pushback when we initially tried to
merge our tests into Nova, partially because it uses a third-party library
for executing the tests. We can try merging these things together, if
that's what everyone wants.

On 9/12/11 11:39 AM, Tim Simpson tim.simp...@rackspace.com wrote:

I'm with the Reddwarf (Database as a Service) team at Rackspace. We
employ a large set of integration / functional tests which run in a
Vagrant-controlled VM.

Our tests use python-novaclient as well as a similar client for Reddwarf
we've written ourselves. The motivation is to eat our own dog food: if
novaclient is the official rich client for Python, why shouldn't the
tests make use of it?

We also use a library called Proboscis so we can order our tests without
prefixing numbers to them, automatically skip related tests when
something in a dependency fails, and avoid global variables even as we
pass state between related tests. I think the openstack-integration-tests
would definitely benefit from it.

I'd love to have a conversation about getting the traits outlined above
adopted into this standardized testing solution. :)

Output of our tests running: http://pastie.org/2521835
Proboscis documentation: http://packages.python.org/proboscis/

From: openstack-bounces+tim.simpson=rackspace@lists.launchpad.net
[openstack-bounces+tim.simpson=rackspace@lists.launchpad.net] on
behalf of Jay Pipes [jaypi...@gmail.com

Re: [Openstack] Integration tests

2011-09-12 Thread Ed Leafe
On Sep 12, 2011, at 3:16 PM, Gabe Westmaas wrote:

 It would be advantageous for the tests to only use one client, whatever
 it turns out to be. I think it would be most practical to use a bleeding
 edge version of Nova client and add any features necessary to it in its
 own repo directly or by somehow mixing it in with the test code.
 
 My biggest concern in using novaclient exclusively is masking errors in
 both the API and novaclient.  If this is resolved another way, then I'm
 ok.

There were many who wanted to separate the two functions: inter-zone
communication would use a highly specialized, internally-sourced client,
while 'novaclient' would be an independent project managed outside of
OpenStack and would serve to keep the API honest. While this would cause
some duplication, the trade-off in cleanliness might be worth it.


-- Ed Leafe



Re: [Openstack] Integration tests

2011-09-11 Thread Matt Dietz
Suite looks great, Soren!

Wanted to mention that we actually developed our own suite of tests, which you
can find at https://github.com/ohthree/backfire. I'm planning to reconcile the
differences so we can stop the independent efforts, but it's going to take time.
Something else I'd like to see in the suite, that we currently have in Backfire
via a custom test module, is the ability to parallelize test execution.

From: Soren Hansen so...@linux2go.dk
Date: Sat, 10 Sep 2011 00:21:10 +0200
To: openstack@lists.launchpad.net
Subject: Re: [Openstack] Integration tests


That link shouldn't have included the +bug bit. Copy/paste fail. :(

Sent from my phone. Pardon my brevity.



[Openstack] Integration tests

2011-09-09 Thread Soren Hansen
Hi, guys.

As you may know, we have a small test cluster that runs whatever is in
the nova-core/trunk PPA. Whenever there's an update in the PPA, the
cluster gets blown away, and a fresh Nova gets installed, all driven by
a deploy job run by Jenkins.

In addition to that, there's now an integration test job that is run
whenever the deploy job finishes. It runs a set of tests against the
freshly installed cluster. At the moment, a lot of it is failing, some
of it due to reasons outlined below and some of it due to the deploy
script apparently being a tad fragile at times.

It lives at https://github.com/openstack/openstack-integration-tests and
is managed by Gerrit like everything else (see
http://wiki.openstack.org/GerritWorkflow for details).

A bit of history:

A number of different integration test projects existed. The most
extensive ones I knew of, Kong and Stacktester, were used to bootstrap
this project. I (arbitrarily) picked Kong as the starting point, and
then pulled Stacktester into the same tree and tried to get all the ends
to meet. Somewhat ironically, testing this is awkward for me, since I
don't have direct access to a test cluster (it's just little old me
with my laptop here). Anyway, this is why it has somewhat of a
Frankenstein feel to it, but I expect that to get sorted out shortly,
and I'd like to invite anyone interested in this effort to pitch in.

Going forward, the idea is to use this project as the canonical home for
integration tests for OpenStack. If someone has integration tests they
want to run, they can submit a patch to this project, and we can include
it. The more problems we can expose using this system, the better, so if
you have any tests you want to run, please submit them here.

Some of the short term goals for this include getting all the existing
tests to use the same method of issuing API calls[1], and getting them
all to pass as well.

[1]: https://bugs.launchpad.net/bugs/+bug/845993
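One common way to get "the same method of issuing API calls" is a single request helper that every suite goes through. A hedged sketch with an injectable transport, so it can run without a live cluster (all names are illustrative, not the project's actual code):

```python
# Sketch of a shared API-call helper with an injectable transport.
# Illustrative only; names and URLs are made up for the example.
import json

class ApiClient:
    def __init__(self, base_url, transport):
        self.base_url = base_url.rstrip("/")
        self.transport = transport  # callable: (method, url, body) -> (status, text)

    def request(self, method, path, body=None):
        # Every test issues calls through this one method.
        payload = json.dumps(body) if body is not None else None
        status, text = self.transport(method, self.base_url + path, payload)
        return status, (json.loads(text) if text else None)

def fake_transport(method, url, body):
    # Stand-in transport so the sketch runs without a real cluster.
    if method == "GET" and url.endswith("/servers"):
        return 200, json.dumps({"servers": []})
    return 404, ""

client = ApiClient("http://cluster.example/v1.1/", fake_transport)
status, data = client.request("GET", "/servers")
```

With one choke point like this, swapping the transport (raw HTTP, novaclient, an XML client) changes nothing in the test bodies, which is what makes the "same method of issuing API calls" goal tractable.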

-- 
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/



Re: [Openstack] Integration tests

2011-09-09 Thread Soren Hansen
That link shouldn't have included the +bug bit. Copy/paste fail. :(

Sent from my phone. Pardon my brevity.