Re: [Openstack] [QA] Aligning smoke / acceptance / promotion test efforts

2012-05-09 Thread Jay Pipes

On 05/03/2012 03:54 PM, Daryl Walleck wrote:

So my first question is around this: is the claim that the client
tools are the default interface for the applications?


Sorry, perhaps a better term would have been "the most common interface 
to OpenStack Compute"...


  While that works for coders in python, what about people using other
languages? Even then, there's no guarantee that the clients in different
languages are implemented in the same way. Tempest was designed originally
because, while it does use an abstraction between the API and the tests,
there is nothing that "assists" the user by retrying and the like. While I
think there's a place for writing tests using the command line clients, to
me that would be a smoke test of a client and not as much a smoke test of
the API.


I understand your point, and I want to make it clear that I'm not 
advocating for getting rid of the Tempest REST client at all. I'm only 
advocating that we have a set of tests -- smoke tests -- that use the 
Python novaclient library, run a small subset of the total tests, and 
exercise only the most common and simple use cases for OpenStack.


Here is what I think a sensible breakdown would be (a rough sketch in code 
follows the list):

* Smoke tests

 - Test ONLY *simple*, *fast* code paths for common use cases
 - Use a Manager class that has a novaclient Client object

* Negative tests

 - Test all code paths that should produce errors or negative results 
(NotFound, etc)

 - Use a Manager class that has the Tempest REST client

* Fuzz tests

 - Test all code paths with randomized input data
 - Use a Manager class that has the Tempest REST client

* Advanced tests

 - Test extensions, interaction between multiple components, advanced 
features of the API, etc
 - Use a Manager class that has a novaclient Client object OR the 
Tempest REST client, depending on whether:

  - novaclient actually has the ability to perform the necessary actions
  - the test needs to poke at the API in a way that novaclient would 
interfere with -- in which case the Tempest REST client would be ideal
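
Here is that sketch -- a minimal illustration only, with hypothetical class 
names and config attributes, not a final design:

from novaclient.v1_1 import client as nova_client

class SmokeManager(object):
    """Hands smoke tests a real novaclient Client object."""
    def __init__(self, config):
        self.config = config
        # essex-era novaclient constructor: user, password,
        # tenant/project, auth URL
        self.client = nova_client.Client(config.username,
                                         config.password,
                                         config.tenant_name,
                                         config.auth_url)

class RestManager(object):
    """Hands negative/fuzz tests the raw Tempest REST client, which
    will happily send deliberately malformed requests."""
    def __init__(self, config, rest_client):
        self.config = config
        self.client = rest_client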


Best,
-jay



Re: [Openstack] [QA] Aligning smoke / acceptance / promotion test efforts

2012-05-09 Thread Jay Pipes

On 05/03/2012 10:54 PM, Maru Newby wrote:

The REST API is the default interface, and the client tools target that
interface. Since the clients are CLIs more than a Python API, they can be
used by any language that can use a shell. What exactly does
reimplementing the clients for the sake of testing accomplish? Double
the maintenance effort for the same result, imho.


There is a very specific reason that the novaclient library wasn't used. 
We needed a client that would enable us to throw invalid and random bad 
input data at the APIs. novaclient is designed to eliminate the ability 
to get an API call wrong, since it successfully constructs the proper 
HTTP API calls. We need a client that allows Tempest's negative tests to 
throw bad stuff at the API and verify the server's robustness in 
responding to that invalid data.
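
To sketch what I mean (the URL layout, token handling, and helper name here 
are made up for illustration), a raw client lets a negative test send 
something novaclient simply won't construct:

import json
import httplib2

def create_server_with_bad_flavor(compute_url, token):
    # Deliberately malformed body: flavorRef should reference a real
    # flavor. A robust API should answer 400 BadRequest, not 500.
    body = json.dumps({'server': {'name': 'fuzz-me',
                                  'imageRef': 'not-a-uuid',
                                  'flavorRef': -42}})
    headers = {'X-Auth-Token': token,
               'Content-Type': 'application/json'}
    resp, content = httplib2.Http().request(compute_url + '/servers',
                                            'POST', headers=headers,
                                            body=body)
    return resp.status, content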


Best,
-jay



Re: [Openstack] [QA] Aligning smoke / acceptance / promotion test efforts

2012-05-08 Thread Tim Simpson
Hi Daryl,

I understand what you're trying to accomplish by creating a new client for the 
tests, but I'm not sure why the interface has to be completely different from 
the official set of Python client bindings. If it used the same interface, 
everyone in the community could then benefit from the extra features you're 
adding to the Tempest client by swapping it in should the need arise. It would 
also be easier for Tempest newbs to create and contribute new tests.

I understand that in some cases the current interface is too "nice", but 
wouldn't the existing one work fine for the majority of the tests? If so, why 
not just write a few extra methods to send HTTP requests directly for the 
cases that need it?

Additionally, I've heard from some in Rackspace QA that the client allows them 
to see the HTTP codes, which is more illustrative. I feel like this problem 
could be solved with helper methods like this:

def assert_response(http_code, func, *args, **kwargs):
    try:
        result = func(*args, **kwargs)
        assert_equal(http_code, 200)
        return result
    except ClientException as ce:
        assert_equal(http_code, ce.code)

Then you'd write tests like this:

server = assert_response(200, servers.get, some_id)

You could of course have additional methods if the success case indicated a 
different HTTP code (a sketch of one follows). If more than one HTTP code 
could possibly lead to the same return value, then maybe that indicates the 
official bindings should be changed. In this case it would be another win, as 
Tempest writers would be pushing to ensure the Python client interface was as 
useful as possible.
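
For instance, a generalized variant might look like this (the name and the 
202 example are mine, just to illustrate; assert_equal and ClientException 
are as in the snippet above):

def assert_response_code(success_code, expected_code, func, *args, **kwargs):
    # success_code is the status the binding treats as success for this
    # particular call (200, or 202 for asynchronous actions, etc.)
    try:
        result = func(*args, **kwargs)
        assert_equal(expected_code, success_code)
        return result
    except ClientException as ce:
        assert_equal(expected_code, ce.code)

# e.g., rebooting a server returns 202 Accepted on success:
assert_response_code(202, 202, servers.reboot, some_id)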

Tim


Re: [Openstack] [QA] Aligning smoke / acceptance / promotion test efforts

2012-05-08 Thread Thompson Lee
We're playing with Cucumber through fog (https://github.com/MorphGlobal/foghorn). 
We were going to write a similar email, since the smoketests use Boto, which is 
the Amazon API, not OpenStack's. We added the OpenStack Essex APIs to fog, and 
while foghorn comes in through Ruby, it hits the native APIs and is super easy 
to set up and run. We won't get around to playing with it further until next 
week, so it's an idea in the rough at the moment.

We love Cucumber...

Lee


Re: [Openstack] [QA] Aligning smoke / acceptance / promotion test efforts

2012-05-08 Thread Daryl Walleck
Hi Tim,

I understand where you're coming from as well. I'm all for anything that gets 
the job done right and keeps folks engaged. It's fair to say that most 
positive/happy-path tests could be developed using novaclient, and that there 
may be workarounds for some of the data that is not readily available from 
novaclient responses. That said, the truly nasty negative tests that are so 
critical would be nearly impossible to write that way. I'm also not 
comfortable with having hard-coded HTTP requests inside of tests, as that can 
easily become a maintainability nightmare.

That being said, I would much rather work this out than have Tempest splinter 
into separate efforts. I think the one thing we can all agree on is that, to 
some degree, tests using novaclient are a necessity. I'm the team captain for 
the QA meeting this week, so I'll set the agenda around this topic and we can 
have a more in-depth discussion.

Daryl

Sent from my iPad




Re: [Openstack] [QA] Aligning smoke / acceptance / promotion test efforts

2012-05-04 Thread John Garbutt
+1 to this plan

  From the above, I would surmise that smoke tests should have all of
 the following characteristics:
 
 * Test basic operations of an API, usually in a specific order that
 makes sense as a bare-bones use case of the API
 * Test only the correct action paths -- in other words, smoke tests do
 not throw random or invalid input at an API
 * Use the default client tools to complete the actions
 * Take a short amount of time to complete

[JG] That sounds good. I wonder if we should also restrict them to user 
actions only? Maybe we should also try to concentrate on actions that are 
likely to be available on all OpenStack installations? I am thinking things 
like resize are less important than VM start. However, I like the idea of 
testing every user-facing compute CLI call using Tempest, but that scope would 
seem too large for a smoke test.
 
 It would be super-awesome if Tempest could be used to remove some of
 this duplication of efforts.

[JG] +1

 * Create a base test class that uses the default CLI tools instead of raw HTTP
 calls

[JG] We need to test the default tools, so this sounds like a good plan.

 * Create a set of Manager classes that would provide a config object and one
 or more client objects to test cases -- this is similar to the
 tempest.openstack.Manager class currently in the source code, but would
 allow using the default project client libraries instead of the raw HTTP 
 client
 currently in Tempest
 * Allow test cases that derive from the base SmokeTest class to use an
 ordered test method system -- e.g.  test_001_foo, test_002_bar

[JG] I don't like order-dependent tests, but I think it makes sense in this case.

 * Have the base smoke test case class automatically annotate all of its test
 methods with type=smoke so that we can run all smoke tests with:
  nosetests --attr=type=smoke tempest
 
 I have put together some code that shows the general idea as a draft
 patchset [5]
 
[JG] I would like to take this and get it running on several XenServer setups.




Re: [Openstack] [QA] Aligning smoke / acceptance / promotion test efforts

2012-05-04 Thread Karajgi, Rohit
If the approach is to use the client tools only to smoke test the OpenStack 
API, I would somewhat agree. I think the smoke tests should be shallow but 
wide, covering every API in OpenStack but not every 'scenario'. So to me, a 
smoke suite would be a subset of test cases drawn from every OpenStack API.

If the default client tools are ahead of the Tempest REST client library with 
regard to API coverage, then it would make sense to use them to quickly build 
a smoke test suite, IMHO. The client managers should be used only to 
contribute to smoke tests, and not used randomly for more detailed scenario 
tests.

I'd agree with Daryl in saying that we should avoid using Tempest to smoke 
test (or otherwise test) the client tools; rather, we should use them to smoke 
test the API. The client tools should have their own tests.

-Rohit



[Openstack] [QA] Aligning smoke / acceptance / promotion test efforts

2012-05-03 Thread Jay Pipes

Hi all,

I'd like to get some alignment on the following:

1) The definition of what a smoke test is
2) How we envision the Tempest project's role in running such tests

First, a discussion on #1.

There seem to be very loose semantics used to describe what a "smoke 
test" is. Some groups use the term "smoke test". Others use "promotion 
test". And still others use "acceptance test".


I will only use the term "smoke test" here, and I'm hoping we can 
standardize on this term to reduce confusion.


Now, what exactly *is* a smoke test? According to Wikipedia [1], a smoke 
test is "preliminary to further testing, intended to reveal simple 
failures severe enough to reject a prospective software release". 
Further, it states that in smoke testing, "a subset of test cases that 
cover the most important functionality of a component or system are 
selected and run", to ascertain if the most crucial functions of a 
program work correctly.


From the above, I would surmise that smoke tests should have all 
of the following characteristics:


* Test basic operations of an API, usually in a specific order that 
makes sense as a bare-bones use case of the API
* Test only the correct action paths -- in other words, smoke tests do 
not throw random or invalid input at an API

* Use the default client tools to complete the actions
* Take a short amount of time to complete

Currently in the OpenStack community, there are a number of things that 
act as smoke tests, or something like them:


* The devstack exercises [2]
* The Torpedo tests that SmokeStack runs against proposed commits to 
core projects [3]

* The next incarnation of the Kong tests -- now called exerstack [4]
* Tests within Tempest that are annotated (using the nosetest @attr 
decorator) with type=smoke
* Various companies have promotion tests that are essentially the same 
as exercises from devstack -- for instance, at HPCS we have a set of 
promotion tests that do virtually the exact same thing as the devstack 
exercises, except that they are a mixture of Python and bash code and 
have additional stuff like validating the HPCS billing setup


It would be super-awesome if Tempest could be used to remove some of 
this duplication of efforts.


However, before this can happen, a number of improvements need to be 
made to Tempest. The issue with the smoke tests in Tempest is that 
they aren't really smoke tests. They do not use the default client tools 
(like novaclient, keystoneclient, etc) and are not annotated consistently.


I would like Tempest to have a full set of real smoke tests that can be 
used as the upstream core projects' gates.


Steps I think we need to take:

* Create a base test class that uses the default CLI tools instead of 
raw HTTP calls
* Create a set of Manager classes that would provide a config object and 
one or more client objects to test cases -- this is similar to the 
tempest.openstack.Manager class currently in the source code, but would 
allow using the default project client libraries instead of the raw HTTP 
client currently in Tempest
* Allow test cases that derive from the base SmokeTest class to use an 
ordered test method system -- e.g.  test_001_foo, test_002_bar
* Have the base smoke test case class automatically annotate all of its 
test methods with type=smoke so that we can run all smoke tests with:


nosetests --attr=type=smoke tempest
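
As a sketch of the annotation idea (the base class and metaclass here are 
hypothetical stand-ins; the real code is in the draft patchset [5] below):

import unittest

class SmokeMeta(type):
    """Tags every test_* method the way nose's @attr(type='smoke')
    decorator would, so --attr=type=smoke picks them all up."""
    def __new__(mcs, name, bases, attrs):
        for attr_name, value in attrs.items():
            if attr_name.startswith('test_') and callable(value):
                value.type = 'smoke'
        return super(SmokeMeta, mcs).__new__(mcs, name, bases, attrs)

class SmokeTest(unittest.TestCase):
    __metaclass__ = SmokeMeta

class ServerSmokeTest(SmokeTest):
    # ordered naming keeps the bare-bones use case in sequence
    def test_001_create_server(self):
        pass

    def test_002_delete_server(self):
        pass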

I have put together some code that shows the general idea as a draft 
patchset [5].


I've asked David to include this in the QA meeting agenda for this week. 
Hopefully interested folks can take a look at the example code in the 
patchset above, give it some thought, and we can chat about this on 
Thursday.


Best,
-jay

[1] http://en.wikipedia.org/wiki/Smoke_testing#Software_development
[2] https://github.com/openstack-dev/devstack/tree/master/exercises
[3] https://github.com/dprince/torpedo
[4] https://github.com/rcbops/exerstack/
[5] https://review.openstack.org/#/c/7069/



Re: [Openstack] [QA] Aligning smoke / acceptance / promotion test efforts

2012-05-03 Thread Daryl Walleck
So my first question is around this: is the claim that the client tools are 
the default interface for the applications? While that works for coders in 
python, what about people using other languages? Even then, there's no 
guarantee that the clients in different languages are implemented in the same 
way. Tempest was designed originally because, while it does use an abstraction 
between the API and the tests, there is nothing that "assists" the user by 
retrying and the like. While I think there's a place for writing tests using 
the command line clients, to me that would be a smoke test of a client and not 
as much a smoke test of the API.

Daryl



Re: [Openstack] [QA] Aligning smoke / acceptance / promotion test efforts

2012-05-03 Thread Maru Newby
The REST API is the default interface, and the client tools target that 
interface. Since the clients are CLIs more than a Python API, they can be used 
by any language that can use a shell. What exactly does reimplementing the 
clients for the sake of testing accomplish? Double the maintenance effort for 
the same result, imho.
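
(For instance -- just a sketch -- any language that can shell out can drive 
the CLI; in Python that is simply:)

import subprocess

# the same "nova list" any shell user would run
output = subprocess.check_output(['nova', 'list'])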

Cheers,


Maru
 


Re: [Openstack] [QA] Aligning smoke / acceptance / promotion test efforts

2012-05-03 Thread Daryl Walleck
Perhaps it's just me, but if I were developing in a different language, I 
would not want to use a command line tool to interact with my application. 
What is the point, then, of developing RESTful APIs if the primary client is 
not the API itself but these command line tools?

While it may appear that the approach I'm advocating is extra work, let me try 
to explain its purpose. If the reasons aren't clear, I'd be more than glad to 
give some concrete examples where these techniques have been useful. A few 
benefits that come to mind are:


  *   Testing of XML implementations of the API. While this could be built 
into the clients, I don't see many folks who would rush to that cause.
  *   Direct access to the response. The clients hide any header/response code 
information from the recipient, which can be very important. For example, the 
response headers for Nova contain a token that can be captured and used when 
troubleshooting issues.
  *   Avoiding the user-friendliness of the clients. While retries and 
user-friendly exception handling are great for clients, they are not what I 
want as a tester. As a tester, while I do want a minimal layer of abstraction 
between my application under test (AUT) and my code, so that I'm not making 
HTTP requests directly from my tests, from what I've seen the clients can try 
harder than I'd prefer to be helpful.
  *   The ability to inject other metrics gathering into my clients for the 
purpose of troubleshooting/logging/information handling (a rough sketch of 
what I mean follows this list)
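
Purely illustrative (every name here is hypothetical):

import time
import httplib2

class InstrumentedClient(object):
    """Thin wrapper that keeps the raw response (status, headers)
    available and records per-request timing for troubleshooting."""
    def __init__(self, base_url, token):
        self.http = httplib2.Http()
        self.base_url = base_url
        self.token = token
        self.last_response = None

    def request(self, method, path, body=None):
        headers = {'X-Auth-Token': self.token,
                   'Content-Type': 'application/json'}
        start = time.time()
        resp, content = self.http.request(self.base_url + path, method,
                                          headers=headers, body=body)
        self.last_response = resp  # headers/status stay accessible
        print('%s %s -> %d in %.2fs' % (method, path, resp.status,
                                        time.time() - start))
        return resp, content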

While perhaps the idea is that only the smoke tests would use this approach, 
I'm hesitant about the idea of developing tests using multiple approaches 
simply for the sake of using the clients for certain tests. I'm assuming these 
were things that were talked about during the CLI portions of the OpenStack 
summit, which I wasn't able to attend. I wasn't aware of this, or even of some 
of the new parallel testing efforts, which somehow did not come up during the 
QA track. The purpose of Tempest in the first place was to unify the 
functional and integration testing efforts for OpenStack projects, and I'm 
dedicated to doing everything I can to make that happen. If everyone else is 
in agreement, I certainly don't want to stand in the way of the majority. 
However, I just wanted to state my concerns before we take any further 
actions.

Daryl


