Hi Ronelle, Marios, all

Last week we modified the api tests slightly so they would pass on
fgcp. I would like to do the same with the cimi tests.
Before I complete my patch, I'd like to consult with you to make sure
the direction I'd like to take is acceptable to you.

I appreciate that, unlike the api tests, the cimi tests (at least the
part* ones) are based on documented scenarios that we need to follow.
So I'm trying to come up with a solution that complies with the
scenarios.

To get the tests to pass on fgcp I have to work around the same
restriction in the fgcp endpoint API that affected the api tests: when
you create or delete a resource (machine/volume), the endpoint does not
accept any other resource creation/deletion request in that system
until the current creation/deletion has completed.

Currently, I'm considering making the following changes:

1) For tests that create resources and where deletion is not part of
the test scenario, perform the deletion in a teardown operation (as is
already done in most cases). The teardown method would loop through the
resources to stop and destroy them. When a destroy operation returns a
405 (Method Not Allowed) or 409 (Conflict), it is retried every few
seconds until it succeeds (a rough sketch follows below).

As the teardown is not part of the scenario, I hope this non-ideal
method is acceptable.
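
To make this concrete, here is a rough sketch of the retry loop I have
in mind, in Python-style pseudocode. The client helpers, the status
codes returned as plain integers and the 5-second delay are my
placeholders, not our actual test API:

  import time

  RETRYABLE = (405, 409)  # Method Not Allowed / Conflict while fgcp is busy
  RETRY_DELAY = 5         # seconds to wait before retrying a destroy

  def teardown(client, resources):
      """Stop and destroy each resource created by the test, retrying a
      destroy while the fgcp endpoint is busy with another
      create/delete in the same system."""
      for res in resources:
          client.stop(res)  # make sure the resource is stopped first
          # client.destroy is assumed to return the HTTP status code
          while client.destroy(res) in RETRYABLE:
              time.sleep(RETRY_DELAY)  # endpoint busy; wait and retry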

2) For tests that create resources, I'd like to add a checkpoint that
the resource has actually been created, by sleeping and performing a
GET on it until its state becomes AVAILABLE/STARTED/STOPPED (a sketch
follows after the next paragraph). The test then continues as per the
scenario.

I would say this is actually a better implementation of the scenario:
where e.g. the scenario says the success criterion is "A new Machine
resource is created.", our current test just checks that the response
to the creation request is 201. There is no check that the resource has
actually been created; if it failed during the creation process, our
test would not catch that. With my proposal it would, because we'd
actually be checking that the machine left the CREATING state and
transitioned into a stable success state.
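
Again as a rough Python-style sketch with hypothetical helper names;
the ERROR state and the timeout are my own defensive assumptions, not
part of the documented scenarios:

  import time

  STABLE_STATES = ("AVAILABLE", "STARTED", "STOPPED")
  POLL_INTERVAL = 5  # seconds between GETs
  TIMEOUT = 600      # give up after ten minutes (assumed limit)

  def wait_until_created(client, resource):
      """Poll the resource with GET until it leaves CREATING and
      reaches a stable success state; fail the test otherwise."""
      deadline = time.time() + TIMEOUT
      while time.time() < deadline:
          state = client.get(resource).state  # assumed accessor
          if state in STABLE_STATES:
              return state  # creation really succeeded
          if state == "ERROR":  # assumed failure state
              raise AssertionError("resource went to ERROR during creation")
          time.sleep(POLL_INTERVAL)
      raise AssertionError("timed out waiting to leave CREATING")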

Again, I expect the added sleeps will not affect performance with the
mock driver. I can imagine the extra check introduced above does incur
a performance cost, depending on the performance of the backend cloud
provider under test.

What do you think?

Cheers,
Dies Koper
