I am curious whether the object can be uploaded successfully without multipart
upload, so we can determine which part of the process is at fault.
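
If it helps, a plain single-request PUT of one of the affected objects could
be tested with something like the sketch below. This is only a minimal sketch,
assuming the AWS SDK for Java v1 client pointed at the radosgw endpoint; the
endpoint, credentials, bucket, key and file path are placeholders.

import java.io.File;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.S3ClientOptions;

public class SinglePutTest {
    public static void main(String[] args) {
        // Placeholder credentials/endpoint -- point this at your radosgw.
        AmazonS3Client s3 = new AmazonS3Client(
                new BasicAWSCredentials("ACCESS", "SECRET"));
        s3.setEndpoint("http://radosgw.example.com");
        s3.setS3ClientOptions(new S3ClientOptions().withPathStyleAccess(true));

        // Single putObject call, no multipart involved.
        s3.putObject("my-bucket", "my-key", new File("/tmp/testfile"));

        // Read back the size that radosgw reports for the object.
        System.out.println(s3.getObjectMetadata("my-bucket", "my-key")
                .getContentLength());
    }
}

If that reports the full size, the problem is likely specific to the multipart
path rather than to object writes in general.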

On 21 January 2015 at 09:15, Gleb Borisov <borisov.g...@gmail.com> wrote:
> Hi,
>
> We're experiencing some issues with our radosgw setup. Today we tried to
> copy a bunch of objects between two separate clusters (using our own tool
> built on top of the Java S3 API).
>
> All went smoothly until we started copying large objects (200 GB+). We can
> see that our code handles this case correctly: it started a multipart upload
> (s3.initiateMultipartUpload), then uploaded all the parts serially
> (s3.uploadPart), and finally completed the upload (s3.completeMultipartUpload).
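
For reference, that sequence corresponds roughly to the sketch below (a
minimal sketch only, assuming the AWS SDK for Java v1; endpoint, credentials,
bucket, key, file path and part size are placeholders, not your actual tool).
The part ETags returned by uploadPart have to be collected and passed to
completeMultipartUpload, so that is one thing worth comparing against your
code.

import java.io.File;
import java.util.ArrayList;
import java.util.List;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.*;

public class MultipartUploadSketch {
    public static void main(String[] args) {
        AmazonS3Client s3 = new AmazonS3Client(
                new BasicAWSCredentials("ACCESS", "SECRET"));
        s3.setEndpoint("http://radosgw.example.com");  // placeholder endpoint

        String bucket = "my-bucket", key = "my-key";   // placeholders
        File file = new File("/tmp/large-object");     // placeholder source

        // 1. Initiate the multipart upload and keep the upload id.
        String uploadId = s3.initiateMultipartUpload(
                new InitiateMultipartUploadRequest(bucket, key)).getUploadId();

        // 2. Upload the parts serially, collecting the returned ETags.
        List<PartETag> etags = new ArrayList<PartETag>();
        long partSize = 64L * 1024 * 1024;             // arbitrary 64 MB parts
        long offset = 0, total = file.length();
        for (int part = 1; offset < total; part++) {
            long size = Math.min(partSize, total - offset);
            UploadPartResult res = s3.uploadPart(new UploadPartRequest()
                    .withBucketName(bucket).withKey(key)
                    .withUploadId(uploadId).withPartNumber(part)
                    .withFile(file).withFileOffset(offset).withPartSize(size));
            etags.add(res.getPartETag());
            offset += size;
        }

        // 3. Complete the upload with the collected part ETags.
        s3.completeMultipartUpload(
                new CompleteMultipartUploadRequest(bucket, key, uploadId, etags));
    }
}
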
>
> When we checked the consistency of the two clusters, we found a lot of
> zero-sized objects (which turn out to be our large objects).
>
> I've captured a more verbose log from radosgw:
>
> Two requests (put_obj, complete_multipart) -
> https://gist.github.com/anonymous/840e0aee5a7ce0326368 (both finished with
> 200)
>
> radosgw-admin object stat output:
> https://gist.github.com/anonymous/2b6771bbbad3021364e2
>
> We've tried to upload these objects several times without any luck.
>
> # radosgw --version
> ceph version 0.87 (c51c8f9d80fa4e0168aa52685b8de40e42758578)
>
> Thanks in advance.
>
> --
> Best regards,
> Gleb M Borisov
>



-- 
Dong Yuan
Email:yuandong1...@gmail.com
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
