On Wed, May 8, 2019 at 1:51 PM <francisco.gar...@wbsgo.com> wrote:

> Description of the problem:
> I am not able to restore (upload) a VM when its disk (either thin
> provisioned or thick provisioned) is located on iSCSI storage and the VM
> has at least one snapshot larger than 1 GB.


You need to set the disk initial_size to the size of the file you upload.

For example, see how the upload_disk.py example does it:
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload_disk.py

# Use the size of the file being uploaded as the disk's initial allocation.
image_size = os.path.getsize(args.filename)

# image_info and new_disk_format are computed earlier in the example from
# "qemu-img info" output.
disks_service = connection.system_service().disks_service()
disk = disks_service.add(
    disk=types.Disk(
        name=os.path.basename(args.filename),
        content_type=image_info["content_type"],
        description='Uploaded disk',
        format=new_disk_format,
        # On block storage (e.g. iSCSI) the engine allocates initial_size
        # bytes up front; it must cover the data you are going to upload.
        initial_size=image_size,
        provisioned_size=image_info["virtual-size"],
        sparse=new_disk_format == types.DiskFormat.COW,
        storage_domains=[
            types.StorageDomain(
                name=args.sd_name
            )
        ]
    )
)
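
The file size is a safe value for initial_size. If you want a tighter
estimate, a sketch like the one below (my own suggestion, not part of
upload_disk.py; variable names are illustrative) asks qemu-img how much
allocation a qcow2 destination needs:

# Sketch (assumption, not from upload_disk.py): estimate the allocation
# needed for a qcow2 destination with "qemu-img measure".
import json
import subprocess

out = subprocess.check_output(
    ["qemu-img", "measure", "--output", "json", "-O", "qcow2", args.filename])
measured = json.loads(out)

# "required" is the minimum allocation qemu-img reports; it can be used
# instead of the plain file size when creating the disk.
initial_size = measured["required"]

Either way, the key point is that initial_size must cover everything you
are going to write during the upload.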


> I am using REST API through Java SDK.
> I am using the same upload procedure described here:
> https://ovirt.org/develop/release-management/features/storage/backup-restore-disk-snapshots.html,
> of course making small adjustments to the disk format (COW, sparse, etc.,
> depending on the type of the VM to be restored).
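
For reference, the restore flow in that document boils down to creating an
image transfer bound to each disk snapshot and sending the image bytes to
the transfer URL. Here is a minimal Python SDK sketch of the transfer setup
(the Java SDK mirrors these calls; the snapshot id is a placeholder and
connection is an authenticated ovirtsdk4.Connection, as in the example
above):

import time

from ovirtsdk4 import types

# Sketch: create an upload transfer for one disk snapshot.
transfers_service = connection.system_service().image_transfers_service()
transfer = transfers_service.add(
    types.ImageTransfer(
        snapshot=types.DiskSnapshot(id='DISK-SNAPSHOT-UUID'),  # placeholder
        direction=types.ImageTransferDirection.UPLOAD,
    )
)
transfer_service = transfers_service.image_transfer_service(transfer.id)

# Wait until the transfer leaves the INITIALIZING phase, then send the
# image bytes with HTTP PUT requests to transfer.proxy_url (or
# transfer.transfer_url for a direct connection to the host).
while transfer.phase == types.ImageTransferPhase.INITIALIZING:
    time.sleep(1)
    transfer = transfer_service.get()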
> When I try to restore any VM under these conditions, all snapshot disks
> smaller than 1 GB are uploaded correctly. But when I upload a snapshot disk
> bigger than 1 GB, the following error occurs:
>
> The error raised by the Java SDK is: "The server response was 403 in the
> range request {bytes 1073741824-1207959551/4831838208}"
> In the host's imageio log: "2019-05-07 14:26:30,253 WARNING (Thread-71960)
> [web] ERROR [172.19.33.146] PUT
> /images/8d353735-0a29-463c-b772-ec37f451e2e9 [403] Requested range out of
> allowed range [request=0.000488]"
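
Note that the rejected range starts exactly at the 1 GiB boundary, matching
the 1g allocation of the snapshot volume shown further down, while the total
in the range header is the full size of the upload (about 4.5 GiB). A quick
arithmetic check:

# The failing PUT begins exactly where a 1 GiB initial allocation ends.
print(1073741824 == 1024 ** 3)    # True: range starts at the 1 GiB mark
print(4831838208 / 1024 ** 3)     # 4.5: total size of the upload in GiB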
>
>
> For example (real scenario), I have these snapshot disks from a VM:
> 2.1G
> DiskSnap_47c4e157-edd8-4bfc-b838-4d57ac8396bd_33b3e74a-cc6b-474a-97cc-960891716110_0.img
> -> First snap chain -> OK upload
> 1.1G
> DiskSnap_47c4e157-edd8-4bfc-b838-4d57ac8396bd_9349e4dd-85b3-4145-8674-bb3f39546020_1.img
> -> Ok upload
> 1.1G
> DiskSnap_47c4e157-edd8-4bfc-b838-4d57ac8396bd_4d10cb58-9f52-4d64-b6e9-ead4bed4c6a6_2.img
> -> Ok upload
> 4.6G
> DiskSnap_47c4e157-edd8-4bfc-b838-4d57ac8396bd_429cbe6d-1148-4a84-861a-56ad6902859e_3.img
> -> Last snap chain -> Fail upload with previous error
> [Qemu-img info in attached file: qemu-imgs-iSCSI.txt]
>
> In the restore process, when I create the disk (in this case), I use the
> following values:
> Disk: {
>         Name:T_iSCSI_Thin_Disk1_restore,
>         Id:null,
>         Interface:VIRTIO_SCSI,
>         Format:COW,
>         WipeAfterDelete: false,
>         Shareable: false,
>         Sparse: true,
>         Boot:true,
>         Active:true,
>         Sizes:{
>                 Initial:4294967296,
>                 Actual:null,
>                 Total:null,
>                 Provisioned: 4294967296
>         }
> }
>
> And the disk created is:
> Disk: {
>         Name:T_iSCSI_Thin_Disk1_restore,
>         Id:aac28ff3-abdb-4bf9-beed-c08e7e19b0ba,
>         Image:c8654d05-a18c-47bd-ab0b-f4d746e23efb,
>         Format:COW,
>         WipeAfterDelete: false,
>         Shareable: false,
>         Sparse: true,
>         Sizes:{
>                 Initial:null,
>                 Actual:0,
>                 Total:0,
>                 Provisioned: 4294967296
>         }
> }
>
> Therefore, the first upload can finish correctly. However, when I create
> snapshots with this disk, the snapshot disks have these parameters:
> DiskSnapshot: {
>         Id: 29dd6a18-c17c-4938-be67-d7af6de713ec,
>         Disk:aac28ff3-abdb-4bf9-beed-c08e7e19b0ba,
>         Snapshot:ceb2f29e-1ae8-441c-a5a7-836308cfeb8d,
>         Sizes:{
>                 Actual:1073741824
>                 Provisioned:4294967296
>                 Total: 0
>                 Initial: null


This snapshot volume is allocated with only 1 GiB, which may not be enough
for the upload.

>         }

> }
> DiskSnapshot: {
>         Id: 69620911-d490-4872-b353-ba29a762ea3e,
>         Disk:aac28ff3-abdb-4bf9-beed-c08e7e19b0ba,
>         Snapshot:c35423d5-ddd6-46f2-a5d7-0080941c3f30,
>         Sizes:{
>                 Actual:1073741824
>                 Provisioned:4294967296
>                 Total: 0
>                 Initial: null
>         }
> }
> DiskSnapshot: {
>         Id: c8654d05-a18c-47bd-ab0b-f4d746e23efb,
>         Disk:aac28ff3-abdb-4bf9-beed-c08e7e19b0ba,
>         Snapshot:fc135efd-a67b-4eb3-bfdb-9090aa3b267e,
>         Sizes:{
>                 Actual:4831838208
>                 Provisioned:4294967296
>                 Total: 0
>                 Initial: null
>         }
> }
> DiskSnapshot: {
>         Id: e3389916-4e80-4593-a116-3484f295ff7f,
>         Disk:aac28ff3-abdb-4bf9-beed-c08e7e19b0ba,
>         Snapshot:74aff5e7-e1c1-4f51-ae31-ea44caa180ef,
>         Sizes:{
>                 Actual:1073741824
>                 Provisioned:4294967296
>                 Total: 0
>                 Initial: null
>         }
> }
>
> As we can see from these values, the actual size of the snapshot disks is
> 1 GB, and when I try to upload a snapshot disk bigger than 1.1 GB, the
> system does not allow the upload to complete, raising the error mentioned
> above.
>
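
One way to catch this before the upload fails is to compare each disk
snapshot's actual_size with the size of the image you are about to push. A
rough Python SDK sketch (my assumption, not from the document above; the
storage domain id and backup paths are placeholders):

import os

# Sketch: list the disk snapshots of a storage domain and flag the ones
# whose current allocation is smaller than the image to be uploaded.
sd_service = connection.system_service().storage_domains_service() \
    .storage_domain_service('STORAGE-DOMAIN-UUID')    # placeholder id
for snap in sd_service.disk_snapshots_service().list():
    image = '/backup/%s.img' % snap.id                # hypothetical layout
    if os.path.exists(image) and os.path.getsize(image) > snap.actual_size:
        print('snapshot %s: allocated %d < upload size %d'
              % (snap.id, snap.actual_size, os.path.getsize(image)))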
> However, using NFS storage, the previous error does not occur. The system
> allows me to upload disks bigger than 1 GB, and when the upload finishes,
> the system refreshes the actualSize to the real value.
>
> For example (real scenario), I have these snapshot disks from a VM:
> 3.1G
> DiskSnap_985a02a0-0752-420a-a2fb-a3e89d1ac3bc_3dffb5f4-3f9c-410c-8b97-133922721f9a_0.img
> -> First snap chain -> OK upload
> 22M
> DiskSnap_985a02a0-0752-420a-a2fb-a3e89d1ac3bc_d3307b3c-f930-4faa-90cb-6aedd38bea93_1.img
> -> Ok upload
> 31M
> DiskSnap_985a02a0-0752-420a-a2fb-a3e89d1ac3bc_aa8648dd-d6c0-4d9d-8373-6764e806359f_2.img
> -> Ok upload
> 1.3G
> DiskSnap_985a02a0-0752-420a-a2fb-a3e89d1ac3bc_2cf31d95-3b86-4d97-b39b-672d0066f503_3.img
> -> Last snap chain -> OK upload
> [Qemu-img info in attached file: qemu-imgs-NFS.txt]
>
> The restore with NFS storage follows the same process, but I change the
> disk format and sparse flag, with the following values:
>
> Disk: {
>         Name:985a02a0-0752-420a-a2fb-a3e89d1ac3bc_restore,
>         Id:null,
>         Interface:VIRTIO_SCSI,
>         Format:RAW,
>         WipeAfterDelete: false,
>         Shareable: false,
>         Sparse: true,
>         Boot:true,
>         Active:true,
>         Sizes:{
>                 Initial:3221225472,
>                 Actual:null,
>                 Total:null,
>                 Provisioned: 3221225472
>         }
> }
>
> And the disk created was:
> Disk: {
>         Name:985a02a0-0752-420a-a2fb-a3e89d1ac3bc_restore,
>         Id:f2a89f5a-d8ac-4ea4-b17c-591cd86855e9,
>         Image:9c80aead-88de-48f9-be7e-0fde917e7f55,
>         Format:RAW,
>         WipeAfterDelete: false,
>         Shareable: false,
>         Sparse: true,
>         Sizes:{
>                 Initial:null,
>                 Actual:0,
>                 Total:0,
>                 Provisioned: 3221225472
>         }
> }
>
>
> When I create snapshots with this disk, the snapshot disks have these
> parameters:
>
> DiskSnapshot: {
>         Id: 87c4cb08-502a-4937-bbba-6d455395e990,
>         Disk:f2a89f5a-d8ac-4ea4-b17c-591cd86855e9,
>         Snapshot:dd257190-ef50-4990-a61f-40d84e2b08e9,
>         Sizes:{
>                 Actual:200704
>                 Provisioned:3221225472
>                 Total: 0
>                 Initial: null
>         }
> }
> DiskSnapshot: {
>         Id: e7d6f1fb-ab37-45f8-8e23-5964ed194581,
>         Disk:f2a89f5a-d8ac-4ea4-b17c-591cd86855e9,
>         Snapshot:b9f3f3d3-710e-4ce2-b05f-563227f5ec04,
>         Sizes:{
>                 Actual:200704
>                 Provisioned:3221225472
>                 Total: 0
>                 Initial: null
>         }
> }
> DiskSnapshot: {
>         Id: 8ef6f502-d00f-4fa5-b407-2cdc0e876045,
>         Disk:f2a89f5a-d8ac-4ea4-b17c-591cd86855e9,
>         Snapshot:274994de-5c2b-4b38-965f-4cadea3e0db3,
>         Sizes:{
>                 Actual:200704
>                 Provisioned:3221225472
>                 Total: 0
>                 Initial: null
>         }
> }
> DiskSnapshot: {
>         Id: 9c80aead-88de-48f9-be7e-0fde917e7f55,
>         Disk:f2a89f5a-d8ac-4ea4-b17c-591cd86855e9,
>         Snapshot:f1d7382d-4427-4e21-9544-1b3cd85f23ae,
>         Sizes:{
>                 Actual:0
>                 Provisioned:3221225472
>                 Total: 0
>                 Initial: null
>         }
> }
>
>
> As we can see from these values, the actual size of the snapshot disks is
> 196 KB, and when I try to upload any snapshot disk (even one larger than
> 1.1 GB), the system allows all of the disk content to be uploaded and, when
> the upload finishes, updates the actual and total sizes. When the restore
> ends, I obtain these values:
>
> Disk: {
>         Name:985a02a0-0752-420a-a2fb-a3e89d1ac3bc_restoreBacula,
>         Id:f2a89f5a-d8ac-4ea4-b17c-591cd86855e9,
>         Image:8ef6f502-d00f-4fa5-b407-2cdc0e876045,
>         Format:COW,
>         WipeAfterDelete: false,
>         Shareable: false,
>         Status: OK,
>         Sparse: true,
>         Sizes:{
>                 Initial:null,
>                 Actual:1318850560,
>                 Total:4593303552,
>                 Provisioned: 3221225472
>         }
> }
>
> DiskSnapshot: {
>         Id: 87c4cb08-502a-4937-bbba-6d455395e990,
>         Disk:f2a89f5a-d8ac-4ea4-b17c-591cd86855e9,
>         Snapshot:dd257190-ef50-4990-a61f-40d84e2b08e9,
>         Sizes:{
>                 Actual:22282240
>                 Provisioned:3221225472
>                 Total: 0
>                 Initial: null
>         }
> }
> DiskSnapshot: {
>         Id: e7d6f1fb-ab37-45f8-8e23-5964ed194581,
>         Disk:f2a89f5a-d8ac-4ea4-b17c-591cd86855e9,
>         Snapshot:b9f3f3d3-710e-4ce2-b05f-563227f5ec04,
>         Sizes:{
>                 Actual:32374784
>                 Provisioned:3221225472
>                 Total: 0
>                 Initial: null
>         }
> }
> DiskSnapshot: {
>         Id: 9c80aead-88de-48f9-be7e-0fde917e7f55,
>         Disk:f2a89f5a-d8ac-4ea4-b17c-591cd86855e9,
>         Snapshot:f1d7382d-4427-4e21-9544-1b3cd85f23ae,
>         Sizes:{
>                 Actual:3219795968
>                 Provisioned:3221225472
>                 Total: 0
>                 Initial: null
>         }
> }
>
> There is one disk snapshot fewer because at the end of the restoration I
> merge the last snapshot. But the values shown reflect the updated actual
> size of each DiskSnapshot.
>
> Could you help me, please? Is there a different procedure from the one
> described in
> https://ovirt.org/develop/release-management/features/storage/backup-restore-disk-snapshots.html
> for working with iSCSI storage, or is this simply a bug?
>
>
>
> How reproducible:
> Back up and restore a VM with snapshots on iSCSI storage, using the Java SDK.
>
> Steps to Reproduce:
> 1.- Follow the steps described in
> https://ovirt.org/develop/release-management/features/storage/backup-restore-disk-snapshots.html
> to back up a VM allocated on iSCSI storage and containing snapshots bigger
> than 1.1 GB.
> 2.- Try to restore it using the instructions in that document.
>
> Actual results:
> Uploads of disk snapshot images bigger than 1.1 GB do not succeed and the
> restore process fails.
>
> Expected results:
> Restored VM.
_______________________________________________
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/QHN52A6ZOM6542VWZ4VNRPD7MGZ3UDK6/
