Re: [Openstack-qa-team] [Openstack] [QA] Aligning smoke / acceptance / promotion test efforts

2012-05-08 Thread Thompson Lee
We're playing with Cucumber through fog: https://github.com/MorphGlobal/foghorn. 
We were going to write a similar email, since the smoke tests use Boto, which 
targets the Amazon API, not OpenStack. We added the OpenStack Essex APIs to fog, 
and while foghorn comes in through Ruby, it hits the native APIs and is very 
easy to set up and run. We won't get back to it until next week, so it's an 
idea in the rough at the moment.

We love Cucumber...

Lee

On May 8, 2012, at 4:00 PM, Tim Simpson wrote:

 Hi Daryl, 
 
 I understand what you're trying to accomplish by creating a new client for 
 the tests, but I'm not sure why the interface has to be completely different 
 from the official set of Python client bindings. If it used the same 
 interface everyone in the community could then benefit from the extra 
 features you're adding to the Tempest client by swapping it in should the 
 need arise. It would also be easier for Tempest newbs to create and 
 contribute new tests.
 
 I understand that in some cases the current interface is too nice, but 
 wouldn't the existing one work fine for the majority of the tests? If so, why 
 not just write a few extra helper methods, and for the remaining cases send 
 HTTP requests directly?
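 For the minority of cases that do need the wire level, a small helper that 
 builds raw requests would go a long way. A minimal sketch (the header names 
 mirror common OpenStack conventions, but this helper and the sample body are 
 illustrative, not Tempest code):

```python
import xml.etree.ElementTree as ET

def build_raw_request(path, token, accept="application/xml"):
    """Hand-build a request description so a test can pin the exact
    representation it wants, bypassing the client bindings."""
    return {
        "method": "GET",
        "path": path,
        "headers": {"X-Auth-Token": token, "Accept": accept},
    }

req = build_raw_request("/v2/servers/abc123", "my-token")

# A canned Nova-style XML body standing in for a real response,
# showing how an XML test would assert on the payload itself.
sample = '<server id="abc123" name="test-vm" status="ACTIVE"/>'
root = ET.fromstring(sample)
assert root.attrib["status"] == "ACTIVE"
```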
 
 Additionally, I've heard from some in Rackspace QA that the client allows them 
 to see the HTTP codes, which is more illustrative. I feel like this problem 
 could be solved with helper methods like this:
 
 def assert_response(http_code, func, *args, **kwargs):
     try:
         result = func(*args, **kwargs)
         assert_equal(http_code, 200)
         return result
     except ClientException as ce:
         assert_equal(http_code, ce.code)
 
 Then you'd write tests like this:
 
 server = assert_response(200, servers.get, some_id)
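 To make that snippet concrete, here is a self-contained sketch with stand-ins 
 for assert_equal, ClientException, and a client call (all hypothetical names, 
 not the real bindings), showing the same helper covering both the happy path 
 and a negative test:

```python
class ClientException(Exception):
    """Stand-in for the exception the official bindings raise."""
    def __init__(self, code):
        super().__init__("HTTP %d" % code)
        self.code = code

def assert_equal(expected, actual):
    assert expected == actual, "expected %r, got %r" % (expected, actual)

def assert_response(http_code, func, *args, **kwargs):
    """Call func and assert on the resulting HTTP status, returning
    the result so the success case stays usable."""
    try:
        result = func(*args, **kwargs)
        assert_equal(http_code, 200)
        return result
    except ClientException as ce:
        assert_equal(http_code, ce.code)

# A fake client call that raises 404 for unknown ids.
def get_server(server_id):
    if server_id != "known-id":
        raise ClientException(404)
    return {"id": server_id}

server = assert_response(200, get_server, "known-id")  # happy path
assert_response(404, get_server, "missing-id")         # negative test
```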
 
 You could of course have additional methods if the success case indicated a 
 different HTTP code. If more than one HTTP code could possibly lead to the 
 same return value then maybe that indicates the official bindings should be 
 changed. In this case it would be another win, as Tempest writers would be 
 pushing to ensure the Python client interface was as useful as possible.
 
 Tim
 From: openstack-bounces+tim.simpson=rackspace@lists.launchpad.net 
 [openstack-bounces+tim.simpson=rackspace@lists.launchpad.net] on behalf 
 of Daryl Walleck [daryl.wall...@rackspace.com]
 Sent: Friday, May 04, 2012 12:03 AM
 To: Maru Newby
 Cc: Rick Lopez; openstack-qa-team@lists.launchpad.net; 
 openst...@lists.launchpad.net
 Subject: Re: [Openstack] [QA] Aligning smoke / acceptance / promotion 
 test efforts
 
 Perhaps it's just me, but if I were developing in a different language, 
 I would not want to use a command-line tool to interact with my application. 
 What is the point of developing RESTful APIs if the primary client is not the 
 API itself but these command-line tools instead?
 
 While it may appear that the approach I'm advocating is extra work, let me 
 try to explain its purpose. If the benefits aren't clear, I'd be more 
 than glad to give some concrete examples where these techniques have been 
 useful. A few that come to mind are:
 
   *   Testing of XML implementations of the API. While this could be built into 
 the clients, I don't see many folks who would rush to that cause.
   *   Direct access to the response. The clients hide any header/response code 
 information from the recipient, which can be very important. For example, the 
 response headers for Nova contain a token that can be captured and used when 
 troubleshooting issues.
   *   Avoiding the user friendliness of the clients. While retries and 
 user-friendly exception handling are great for clients, they are not what I 
 want as a tester. I do want a minimal layer of abstraction between my AUT and 
 my code, so that I'm not making HTTP requests directly from my tests, but from 
 what I've seen the clients try harder to be helpful than I'd prefer.
   *   The ability to inject metrics gathering into my clients for 
 troubleshooting/logging/information handling.
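 The metrics-injection point can be sketched as a thin wrapper that records 
 status, headers, and latency for every call. In this sketch the transport is 
 injected so the example stays self-contained; all names here are illustrative, 
 not Tempest's actual client:

```python
import time

class LoggingClient:
    """Wraps a transport callable and records per-request metrics
    for troubleshooting/logging, as described above."""
    def __init__(self, transport):
        # transport: callable (method, url) -> (status, headers, body)
        self.transport = transport
        self.log = []

    def request(self, method, url):
        start = time.monotonic()
        status, headers, body = self.transport(method, url)
        # Capture status, headers, and latency for later inspection.
        self.log.append({
            "method": method,
            "url": url,
            "status": status,
            "headers": headers,
            "seconds": time.monotonic() - start,
        })
        return status, body

# Fake transport standing in for a real HTTP call; the header name
# mimics the kind of request-id token mentioned above.
def fake_transport(method, url):
    return 200, {"X-Compute-Request-Id": "req-123"}, b"{}"

client = LoggingClient(fake_transport)
status, body = client.request("GET", "http://nova.example/servers")
```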
 
 While perhaps the idea is that only the smoke tests would use this approach, 
 I'm hesitant about the idea of developing tests using multiple approaches 
 simply for the sake of using the clients for certain tests. I'm assuming these 
 were things that were talked about during the CLI portions of the OpenStack 
 summit, which I wasn't able to attend. I wasn't aware of this or even some of 
 the new parallel testing efforts, which somehow did not come up during the QA 
 track. The purpose of Tempest in the first place was to unify the functional 
 and integration testing efforts for OpenStack projects, and I'm dedicated to 
 doing everything I can to make that happen. If everyone on the other side is in 
 agreement, I certainly don't want to stand in the way of the majority. However, 
 I just wanted to state my concerns before we take any further actions.
 
 Daryl
 

Re: [Openstack-qa-team] [Openstack] [QA] Aligning smoke / acceptance / promotion test efforts

2012-05-08 Thread Daryl Walleck
Hi Tim,

I understand where you're coming from as well. I'm all for anything that gets 
the job done right and keeps folks engaged. It's fair to say that most 
positive/happy path tests could be developed using novaclient, and that there 
may be workarounds for some of the data that is not readily available from 
novaclient responses. That said, the truly nasty negative tests that are so 
critical would be nearly impossible to write that way. I'm not comfortable with 
having hard-coded HTTP requests inside of tests, as that can easily become a 
maintainability nightmare.

That being said, I would much rather work this out than have Tempest splinter 
into separate efforts. I think the one thing we can all agree on is that to 
some degree, tests using novaclient are a necessity. I'm the team captain for 
the QA meeting this week, so I'll set the agenda around this topic so we can 
have a more in depth discussion.

Daryl

Sent from my iPad


[Openstack-qa-team] Agenda for QA Status Meeting for 5/10

2012-05-08 Thread Daryl Walleck
Weekly team meeting

The OpenStack QA Team holds public weekly meetings in #openstack-meeting, 
Thursdays at 13:00 EDT (17:00 UTC). Everyone interested in testing, quality 
assurance, performance engineering, etc. should attend!


Agenda for next meeting


* Review of last week's action items

(jaypipes) Get example smoke tests into Gerrit for review

* Blockers preventing work from moving forward

None

* Outstanding code reviews

Review all outstanding code reviews in the queue:

https://review.openstack.org/#/q/status:open+project:openstack/tempest,n,z

* New items for discussion (contact me with additional topics if needed)

1. (All) Feedback on Jay's Smoke Test Branch 
(https://review.openstack.org/#/c/7069/2)

2. (Jose) Update on Swift test development for Tempest

3. (Daryl) Thoughts on how to better document and convey functional test 
coverage

* Open discussion


Previous meetings

Previous meetings, with their notes and logs, can be found under 
Meetings/QAMeetingLogs (http://wiki.openstack.org/Meetings/QAMeetingLogs).
-- 
Mailing list: https://launchpad.net/~openstack-qa-team
Post to : openstack-qa-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-qa-team
More help   : https://help.launchpad.net/ListHelp