[ https://issues.apache.org/jira/browse/LIBCLOUD-490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13862717#comment-13862717 ]

Tomaz Muraus commented on LIBCLOUD-490:
---------------------------------------

I just checked and I think we could make it work for objects which are <= 
MIN_PART_SIZE.

The way the S3 multipart upload code currently works, we already need to buffer 
5 MB in memory to fill a whole chunk, so this change wouldn't require us to 
buffer any additional data in memory.

We could do something along those lines:

- Read up to 5 MB from the iterator, then:
 - if a full 5 MB was returned and the iterator is not exhausted yet - perform 
a multipart upload
 - if the returned data is <= 5 MB and the iterator is exhausted - perform a 
regular upload
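A minimal sketch of that decision logic (not Libcloud's actual API; the helper and function names here are hypothetical, and the real driver would hand the buffered chunk plus the remaining iterator to its upload paths):

```python
# Sketch: choose between a regular and a multipart S3 upload based on
# the first chunk read from the data iterator.

MIN_PART_SIZE = 5 * 1024 * 1024  # S3's minimum multipart part size (5 MB)


def read_first_chunk(iterator, chunk_size=MIN_PART_SIZE):
    """Read up to chunk_size bytes and report whether the iterator ran out."""
    buf = b""
    exhausted = False
    while len(buf) < chunk_size:
        try:
            buf += next(iterator)
        except StopIteration:
            exhausted = True
            break
    return buf, exhausted


def choose_upload_strategy(iterator):
    data, exhausted = read_first_chunk(iterator)
    if exhausted and len(data) <= MIN_PART_SIZE:
        # The whole object fits in the chunk we already buffer (including
        # the zero-byte case), so a single regular PUT avoids the failing
        # multipart commit.
        return "regular", data
    # More data is still coming: stream the rest as multipart parts.
    return "multipart", data


# The zero-byte stream from the bug report falls through to a regular upload.
strategy, data = choose_upload_strategy(iter((b"",)))
```

Since we only ever hold the same 5 MB chunk the multipart path already buffers, the memory footprint is unchanged; only the dispatch decision moves after the first read.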

> Zero-byte uploads to S3 fail
> ----------------------------
>
>                 Key: LIBCLOUD-490
>                 URL: https://issues.apache.org/jira/browse/LIBCLOUD-490
>             Project: Libcloud
>          Issue Type: Bug
>          Components: Storage
>    Affects Versions: 0.13.3
>            Reporter: Noah Kantrowitz
>
> Calling storage.upload_object_via_stream(iter(('',)), path) fails with:
> {{libcloud.common.types.LibcloudError: <LibcloudError in 
> <libcloud.storage.drivers.s3.S3StorageDriver object at 0x10b786610> 'Error in 
> multipart commit'>}}
> A workaround is to temporarily monkeypatch 
> {{S3StorageDriver.supports_s3_multipart_upload = False}}. It would be nice if 
> I could just call put_object directly in some useful way, for data that is 
> small enough to fit in RAM (which in the case of an empty file is a bit of a 
> tautology).



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
