Hi Tim,

I understand where you're coming from as well. I'm all for anything that gets 
the job done right and keeps folks engaged. It's fair to say that most 
positive/happy-path tests could be developed using novaclient, and that there 
may be workarounds for some of the data that is not readily available from 
novaclient responses. That said, the truly nasty negative tests that are so 
critical would be nearly impossible to write that way. I'm not comfortable with 
having hard-coded HTTP requests inside of tests, as that can easily become a 
maintainability nightmare.

That being said, I would much rather work this out than have Tempest splinter 
into separate efforts. I think the one thing we can all agree on is that to 
some degree, tests using novaclient are a necessity. I'm the team captain for 
the QA meeting this week, so I'll set the agenda around this topic so we can 
have a more in-depth discussion.

Daryl

Sent from my iPad

On May 8, 2012, at 4:00 PM, "Tim Simpson" <tim.simp...@rackspace.com> wrote:

Hi Daryl,

I understand what you're trying to accomplish by creating a new client for the 
tests, but I'm not sure why the interface has to be completely different from 
the official set of Python client bindings. If it used the same interface, 
everyone in the community could benefit from the extra features you're adding 
to the Tempest client by swapping it in should the need arise. It would also 
be easier for Tempest newbs to create and contribute new tests.

I understand that in some cases the current interface is too nice, but wouldn't 
the existing one work fine for the majority of the tests? If so, why not just 
write a few extra methods that send the HTTP requests directly for those cases?

Additionally, I've heard from some in Rackspace QA that the client allows them 
to see the HTTP codes, which is more illustrative. I feel like this problem 
could be solved with helper methods like this:

def assert_response(http_code, func, *args, **kwargs):
    try:
        result = func(*args, **kwargs)
        # No exception from the client means the call succeeded (HTTP 200).
        assert_equal(http_code, 200)
        return result
    except ClientException as ce:
        # The client raises on error codes; check it was the one we expected.
        assert_equal(http_code, ce.code)

Then you'd write tests like this:

server = assert_response(200, servers.get, "some_id")

You could of course have additional methods if the success case indicated a 
different HTTP code. If more than one HTTP code could possibly lead to the same 
return value then maybe that indicates the official bindings should be changed. 
In this case it would be another win, as Tempest writers would be pushing to 
ensure the Python client interface was as useful as possible.
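
For instance, one such additional helper (just a sketch; the name and argument 
order are arbitrary) could take the expected success code as a parameter 
instead of assuming 200:

def assert_response_code(success_code, expected_code, func, *args, **kwargs):
    # Like assert_response, but the "success" HTTP code is configurable
    # (e.g. 202 for asynchronous server actions) rather than hard-coded to 200.
    try:
        result = func(*args, **kwargs)
        assert_equal(expected_code, success_code)
        return result
    except ClientException as ce:
        assert_equal(expected_code, ce.code)

A call whose success status is 202 would then be checked with 
assert_response_code(202, 202, ...), and a negative case for the same call with 
something like assert_response_code(202, 404, ...).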

Tim
________________________________
From: openstack-bounces+tim.simpson=rackspace....@lists.launchpad.net on behalf 
of Daryl Walleck [daryl.wall...@rackspace.com]
Sent: Friday, May 04, 2012 12:03 AM
To: Maru Newby
Cc: Rick Lopez; openstack-qa-t...@lists.launchpad.net; 
openstack@lists.launchpad.net
Subject: Re: [Openstack] [QA] Aligning "smoke" / "acceptance" / "promotion" 
test efforts

Perhaps it's just me, but if I were developing in a different language, I would 
not want to use a command-line tool to interact with my application. What is 
the point of developing RESTful APIs if the primary client is not the API 
itself but these command-line tools instead?

While it may appear that the approach I'm advocating is extra work, let me try 
to explain its purpose. If the benefits aren't clear, I'd be more than glad to 
give some concrete examples of where these techniques have been useful. A few 
benefits that come to mind are:


  *   Testing of XML implementations of the API. While this could be built into 
the clients, I don't see many folks who would rush to that cause.
  *   Direct access to the response. The clients hide header and response code 
information from the caller, which can be very important. For example, the 
response headers for Nova contain a token that can be captured and used when 
troubleshooting issues.
  *   Avoiding the user friendliness of the clients. While retries and 
user-friendly exception handling are great features for a client, they are not 
what I want as a tester. I do want a minimal layer of abstraction between my 
application under test and my code so that I'm not making HTTP requests 
directly from my tests, but from what I've seen the clients try harder to be 
helpful than I'd prefer.
  *   The ability to inject metrics gathering into my clients for 
troubleshooting, logging, and information handling. (A rough sketch of the kind 
of client wrapper I mean follows this list.)
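
Here is a rough sketch of the kind of client wrapper I mean (hypothetical 
names, using the requests library; the real implementation would differ in the 
details). It logs every request for troubleshooting and hands the raw response 
back to the test:

import json
import logging

import requests

LOG = logging.getLogger(__name__)


class RawRestClient(object):
    """Minimal client that exposes the raw HTTP response to the test."""

    def __init__(self, base_url, auth_token):
        self.base_url = base_url
        self.auth_token = auth_token

    def request(self, method, path, body=None):
        headers = {"X-Auth-Token": self.auth_token,
                   "Content-Type": "application/json"}
        resp = requests.request(method, self.base_url + path,
                                headers=headers,
                                data=json.dumps(body) if body else None)
        # Metrics/logging hook: record the status code and headers of every
        # call so failures can be diagnosed after the fact.
        LOG.debug("%s %s -> %s %s", method, path, resp.status_code,
                  dict(resp.headers))
        # No retries and no "friendly" exception translation: the test gets
        # the raw response and decides for itself what counts as a failure.
        return resp, resp.text

A negative test could then call resp, body = client.request("GET", 
"/servers/does-not-exist") and assert that resp.status_code is 404, without any 
retry logic or exception wrapping getting in the way.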

While perhaps the idea is that only the smoke tests would use this approach, 
I'm hesitant about the idea of developing tests using multiple approaches 
simply for the sake of using the clients for certain tests. I'm assuming these 
were things that were talked about during the CLI portions of the OpenStack 
summit, which I wasn't able to attend. I wasn't aware of this, or even of some 
of the new parallel testing efforts, which somehow did not come up during the 
QA track. The purpose of Tempest in the first place was to unify the functional 
and integration testing efforts for the OpenStack projects, and I'm dedicated 
to doing everything I can to make that happen. If everyone else is in 
agreement, I certainly don't want to be the one standing in the way of the 
majority. However, I just wanted to state my concerns before we take any 
further action.

Daryl

On May 3, 2012, at 9:54 PM, Maru Newby wrote:

The REST API is the default interface, and the client tools target that 
interface.  Since the clients are CLIs more than Python APIs, they can be used 
by any language that can use a shell.  What exactly does reimplementing the 
clients for the sake of testing accomplish?  Double the maintenance effort for 
the same result, imho.
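
For example (a quick sketch, assuming the nova CLI is installed and configured 
through the usual OS_* environment variables), any language that can shell out 
can drive it:

import subprocess

# List servers by shelling out to the nova CLI; any language with an
# exec/shell facility can do the equivalent.
output = subprocess.check_output(["nova", "list"])
print(output)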

Cheers,


Maru

On 2012-05-03, at 12:54 PM, Daryl Walleck wrote:

So my first question is around this: is the claim that the client tools are the 
default interface for the applications? While that works for coders in Python, 
what about people using other languages? Even then, there's no guarantee that 
the clients in different languages are implemented in the same way. Tempest was 
designed from the start so that, while it does use an abstraction between the 
API and the tests, there is nothing that "assists" the user by retrying and the 
like. While I think there's a place for writing tests using the command-line 
clients, to me that would be a smoke test of a client and not so much a smoke 
test of the API.

Daryl

On May 3, 2012, at 12:01 PM, Jay Pipes wrote:

However, before this can happen, a number of improvements need to be made to 
Tempest. The issue with the "smoke tests" in Tempest is that they aren't really 
smoke tests. They do not use the default client tools (like novaclient, 
keystoneclient, etc) and are not annotated consistently.

_______________________________________________
Mailing list: https://launchpad.net/~openstack
Post to     : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp

