Hi Nikesh,

> -----Original Message-----
> From: Nikesh Kumar Mahalka [mailto:nikeshmaha...@vedams.com]
> Sent: Saturday, September 20, 2014 9:49 PM
> To: openst...@lists.openstack.org; OpenStack Development Mailing List (not
> for usage questions)
> Subject: Re: [Openstack] No one replying on tempest issue? Please share your
> experience
>
> Still i didnot get any reply.
Jay has already replied to this mail; please check the nova-compute and cinder-volume logs as he said[1].

[1]: http://lists.openstack.org/pipermail/openstack-dev/2014-September/046147.html

> Now i ran below command:
> ./run_tempest.sh
> tempest.api.volume.test_volumes_snapshots.VolumesSnapshotTest.test_volume_from_snapshot
>
> and i am getting test failed.
>
> Actually,after analyzing tempest.log,i found that:
> during creation of a volume from snapshot,tearDownClass is called and it is
> deleting snapshot bfore creation of volume
> and my test is getting failed.

I guess the failure you mentioned above is:

2014-09-20 00:42:12.519 10684 INFO tempest.common.rest_client [req-d4dccdcd-bbfa-4ddf-acd8-5a7dcd5b15db None] Request (VolumesSnapshotTest:tearDownClass): 404 GET http://192.168.2.153:8776/v1/ff110b66c98d455092c6f2a2577b4c80/snapshots/71d3cad4-440d-4fbb-8758-76da17b6ace6 0.029s

and

2014-09-20 00:42:22.511 10684 INFO tempest.common.rest_client [req-520a54ad-7e0a-44ba-95c0-17f4657bc3b0 None] Request (VolumesSnapshotTest:tearDownClass): 404 GET http://192.168.2.153:8776/v1/ff110b66c98d455092c6f2a2577b4c80/volumes/7469271a-d2a7-4ee6-b54a-cd0bf767be6b 0.034s

right?

If so, that is not a problem. VolumesSnapshotTest creates two volumes, and its tearDownClass confirms that these volumes have been deleted by polling each volume's status until it receives a 404 (NotFound)[2]. So the 404 lines above are expected.

[2]: https://github.com/openstack/tempest/blob/master/tempest/api/volume/base.py#L128

> I deployed a juno devstack setup for a cinder volume driver.
> I changed cinder.conf file and tempest.conf file for single backend and
> restarted cinder services.
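(As an aside, the deletion check I described above can be sketched in Python. This is a simplified illustration, not the actual Tempest code: the NotFound exception, the volume store, and the get_volume_status helper are hypothetical stand-ins for Tempest's REST client and the Cinder API.)

```python
# Sketch of the tearDownClass deletion check: keep fetching the volume's
# status until the API answers 404 (NotFound) -- which is why those 404
# log lines appear on a *successful* teardown.
import time


class NotFound(Exception):
    """Stand-in for the 404 (NotFound) response from the volume API."""


def get_volume_status(volume_id, volumes):
    # Hypothetical stand-in for "GET /volumes/<id>"; `volumes` is a dict
    # mapping volume id -> status.
    if volume_id not in volumes:
        raise NotFound(volume_id)
    return volumes[volume_id]


def wait_for_volume_deletion(volume_id, volumes, timeout=5.0, interval=0.1):
    """Poll until the volume is gone; NotFound (404) means success."""
    start = time.time()
    while time.time() - start < timeout:
        try:
            get_volume_status(volume_id, volumes)
        except NotFound:
            return True  # deletion confirmed
        time.sleep(interval)
    return False  # volume still present after timeout


# Example: the volume has already been deleted, so the check succeeds.
print(wait_for_volume_deletion("7469271a", {}))  # → True
```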
> Now i ran tempest test as below:
> /opt/stack/tempest/run_tempest.sh tempest.api.volume.test_volumes_snapshots
>
> I am getting below output:
> Traceback (most recent call last):
>   File "/opt/stack/tempest/tempest/api/volume/test_volumes_snapshots.py",
>     line 176, in test_volume_from_snapshot
>     snapshot = self.create_snapshot(self.volume_origin['id'])
>   File "/opt/stack/tempest/tempest/api/volume/base.py",
>     line 112, in create_snapshot
>     'available')
>   File "/opt/stack/tempest/tempest/services/volume/json/snapshots_client.py",
>     line 126, in wait_for_snapshot_status
>     value = self._get_snapshot_status(snapshot_id)
>   File "/opt/stack/tempest/tempest/services/volume/json/snapshots_client.py",
>     line 99, in _get_snapshot_status
>     snapshot_id=snapshot_id)
> SnapshotBuildErrorException: Snapshot 6b1eb319-33ef-4357-987a-58eb15549520
> failed to build and is in ERROR status

What happens if you run the same operations as Tempest by hand on your environment, like the following?

[1] $ cinder create 1
[2] $ cinder snapshot-create <id of the created volume at [1]>
[3] $ cinder create --snapshot-id <id of the created snapshot at [2]> 1
[4] $ cinder show <id of the created volume at [3]>

Please check whether the status of the volume created at [3] is "available" or not.

Thanks
Ken'ichi Ohmichi

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
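P.S. If you want to check step [4] in a script rather than by eye, here is a small sketch that pulls the "status" field out of the ASCII table that `cinder show` prints. The sample table below is illustrative only, not taken from your environment.

```python
# Hypothetical sample of `cinder show <id>` table output; real output has
# many more rows, but the row format is the same.
SAMPLE = """\
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|        status       |               available              |
|     snapshot_id     | 6b1eb319-33ef-4357-987a-58eb15549520 |
+---------------------+--------------------------------------+
"""


def parse_status(show_output):
    """Return the value of the 'status' row, or None if not found."""
    for line in show_output.splitlines():
        # Split "| status | available |" into cells and strip padding.
        cells = [c.strip() for c in line.strip("|").split("|")]
        if len(cells) == 2 and cells[0] == "status":
            return cells[1]
    return None


print(parse_status(SAMPLE))  # → available
```

A one-liner shell pipeline (e.g. grep/awk on the same output) would do the job as well; the point is just to verify that the volume created from the snapshot reaches "available" rather than "error".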