I re-ran these tests against my instance. Some of them could be marked as 
expected failures pending a decision on whether the behavior in question is a 
bug or is actually reasonable:


  *   The test_list_images class fails with a 409 response (the exact error is 
much easier to see with a branch we're trying to get merge-proposed). The 
failure occurs when taking two back-to-back snapshots of the same server. 
Given that the test waits for the first image to become active before 
requesting the second, it seems reasonable to expect this test to pass. I've 
been trying to find a dev to bounce this off of, with no luck, but to me this 
is a bug.


  *   test_create_image_from_deleted_server was initially failing because the 
test did not wait for the image to become active before trying to delete it. I 
added the necessary code and re-ran the test. Even with that change, the test 
still fails due to a race condition within the test itself. The scenario 
creates a server, issues a delete request, and then immediately attempts to 
make an image of that server. From what I've seen there is always a delay 
between issuing a delete request and the server changing status, and the test 
slips into that window. To make this test correct, there should be a step that 
polls the GET for that server id until it returns a 404 (see the first sketch 
after this list).


  *   test_resize_server_revert is a debatable race condition, either in the 
test or in Nova itself. I've noted that after a revert-resize request is made 
and the server returns to active status, a GET on the server still reports the 
proposed flavor rather than the flavor the instance reverted to, for several 
(1-5) seconds after the instance becomes active again. I haven't been able to 
get a decisive answer on this issue. As a tester, I'd like to assume that once 
an instance switches to a new state it should be "done", but given that the 
update catches up very quickly, it may or may not be a priority issue (the 
second sketch after this list shows how a test could wait for the flavor to 
settle).


  *   test_server_create_metadata_key_too_long is failing because of a bug. I 
don't see one open for this, but the correct response to this request should be 
a 413, not a 500. If I can't find an open issue, I'll report this.
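
For reference, here is a minimal sketch of the polling step I mean for the 
deleted-server case. The get_server callable and its status_code attribute are 
assumptions for illustration, not the actual Tempest client API:

    import time

    def wait_for_server_deletion(get_server, server_id, timeout=60, interval=3):
        """Poll GET /servers/<id> until it returns a 404 or the timeout expires.

        get_server is assumed to return an object with a status_code
        attribute (e.g. an HTTP response); adjust for the real client.
        """
        start = time.time()
        while time.time() - start < timeout:
            if get_server(server_id).status_code == 404:
                return
            time.sleep(interval)
        raise AssertionError("Server %s was not deleted within %ss"
                             % (server_id, timeout))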

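For the revert-resize case, a similar wait could be used until the reported 
flavor settles. Again, get_server_details is just a stand-in for whatever 
client call returns the server body as a dict in the Nova API response format:

    import time

    def wait_for_flavor(get_server_details, server_id, expected_flavor_id,
                        timeout=30, interval=2):
        """Poll the server details until the reported flavor matches the
        flavor expected after the revert, or give up after timeout seconds."""
        start = time.time()
        while time.time() - start < timeout:
            server = get_server_details(server_id)
            if server['flavor']['id'] == expected_flavor_id:
                return
            time.sleep(interval)
        raise AssertionError("Flavor did not revert to %s within %ss"
                             % (expected_flavor_id, timeout))
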
We haven't discussed as a group yet the use of the expectedFailure annotation 
on tests that are known or undecided bugs, but I think it would be a good 
conversation for this week's QA meeting. I've seen both sides of the "let them 
fail" and "only show me new failures" debate, and I didn't want to make that 
decision for anyone. Still, a way to denote that a test is failing due to a 
known issue really is a must. What I've been doing on my development branch is 
adding a "bug=lpXXXXX" attribute to any test failing due to a known issue and 
then deciding whether to add the expectedFailure annotation based on the 
situation (a rough sketch of the approach is below). I'd really like to hear 
others' input on this, as we certainly don't want to create confusion going 
forward.
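
For clarity, here is a rough sketch of what I mean, assuming a plain 
unittest-style suite; the known_bug decorator and its attribute name are 
hypothetical, not anything that exists in Tempest today:

    import unittest

    def known_bug(bug_id, expect_failure=False):
        """Tag a test with the Launchpad bug it is blocked on and optionally
        mark it as an expected failure so it doesn't read as a new regression."""
        def decorator(test):
            test.bug = bug_id  # attribute convention from my branch; value is the Launchpad bug id
            if expect_failure:
                return unittest.expectedFailure(test)
            return test
        return decorator

    class ImagesTest(unittest.TestCase):

        @known_bug("lpXXXXX", expect_failure=True)
        def test_list_images(self):
            self.fail("fails today due to the 409 on back-to-back snapshots")

A runner that supports attribute-based selection could then filter on the bug 
attribute to list everything currently blocked on a known issue.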

Daryl

On Jan 16, 2012, at 5:25 PM, Mark McLoughlin wrote:

Hi David,

Could you put the errors in http://paste.openstack.org/ for us to look
at?

Since we're about to release 2011.3.1, I'm curious whether these are
likely to be recent regressions on stable/diablo.

If they look like a recent regression, you could try using 'git bisect'
to identify which commit introduced them.

Thanks,
Mark.

On Mon, 2012-01-16 at 15:01 -0500, David Kranz wrote:
Just some test failures. I will track them down, but first wanted to
make sure Tempest was not intended to be Essex-only.



On 1/16/2012 2:52 PM, Daryl Walleck wrote:
Hi David,

Yes, it should be working (except for any tests failing due to bugs of course). 
What types of problems are you having?

Daryl

On Jan 16, 2012, at 1:44 PM, David Kranz wrote:

I tried running Tempest against an existing diablo-stable cluster and had 
problems. Is this expected to work?

-David

-- 
Mailing list: https://launchpad.net/~openstack-qa-team
Post to     : openstack-qa-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-qa-team
More help   : https://help.launchpad.net/ListHelp
