Hi,

We recently upgraded our Ceph cluster to Jewel, including RGW. Everything seems
to be in order except for RGW, which no longer lets us create buckets or upload
new objects.

# s3cmd --version
s3cmd version 1.6.1

# s3cmd mb s3://test
WARNING: Retrying failed request: /
WARNING: 500 (UnknownError)
WARNING: Waiting 3 sec...

# s3cmd put test s3://nginx-proxy/test
upload: 'test' -> 's3://nginx-proxy/test'  [1 of 1]
7 of 7   100% in    0s   224.55 B/s  done
WARNING: Upload failed: /test (500 (UnknownError))
WARNING: Waiting 3 sec...

I am able to read and even remove objects; I just can't add anything new.
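
To illustrate (the object name "existing" below just stands for any object that
was already in the bucket; the exact names don't matter):

# s3cmd ls s3://nginx-proxy                 <- works
# s3cmd get s3://nginx-proxy/existing       <- works
# s3cmd rm s3://nginx-proxy/existing        <- works
# s3cmd put test s3://nginx-proxy/test      <- fails with 500 (UnknownError)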

I enabled RGW logging to see what went wrong (the settings I used are shown
after the trace) and got the following while trying to upload a file:

2016-07-18 12:09:22.301512 7fdcc57fa700  1 -- 10.251.97.13:0/563287553 --> 10.251.97.1:6800/4104 -- osd_op(client.199724.0:927 11.1f0a02a1 default.194977.1_test [getxattrs,stat] snapc 0=[] ack+read+known_if_redirected e479) v7 -- ?+0 0x7fdd64020220 con 0x7fde100487c0
2016-07-18 12:09:22.303323 7fddef3f3700  1 -- 10.251.97.13:0/563287553 <== osd.27 10.251.97.1:6800/4104 10 ==== osd_op_reply(927 default.194977.1_test [getxattrs,stat] v0'0 uv0 ack = -2 ((2) No such file or directory)) v6 ==== 230+0+0 (2591304629 0 0) 0x7fda70000d00 con 0x7fde100487c0
2016-07-18 12:09:22.303629 7fdcc57fa700  1 -- 10.251.97.13:0/563287553 --> 10.251.97.1:6818/6493 -- osd_op(client.199724.0:928 10.cecde97a .dir.default.194977.1 [call rgw.bucket_prepare_op] snapc 0=[] ondisk+write+known_if_redirected e479) v7 -- ?+0 0x7fdd6402af60 con 0x7fde10032110
2016-07-18 12:09:22.308437 7fddee9e9700  1 -- 10.251.97.13:0/563287553 <== osd.6 10.251.97.1:6818/6493 13 ==== osd_op_reply(928 .dir.default.194977.1 [call] v479'126 uv126 ondisk = 0) v6 ==== 188+0+0 (1238951509 0 0) 0x7fda6c000cc0 con 0x7fde10032110
2016-07-18 12:09:22.308528 7fdcc57fa700  1 -- 10.251.97.13:0/563287553 --> 10.251.97.1:6800/4104 -- osd_op(client.199724.0:929 11.1f0a02a1 default.194977.1_test [create 0~0 [excl],setxattr user.rgw.idtag (17),writefull 0~7,setxattr user.rgw.manifest (413),setxattr user.rgw.acl (127),setxattr user.rgw.content_type (11),setxattr user.rgw.etag (33),setxattr user.rgw.x-amz-content-sha256 (65),setxattr user.rgw.x-amz-date (17),setxattr user.rgw.x-amz-meta-s3cmd-attrs (133),setxattr user.rgw.x-amz-storage-class (9),call rgw.obj_store_pg_ver,setxattr user.rgw.source_zone (4)] snapc 0=[] ondisk+write+known_if_redirected e479) v7 -- ?+0 0x7fdd64024ae0 con 0x7fde100487c0
2016-07-18 12:09:22.309371 7fddef3f3700  1 -- 10.251.97.13:0/563287553 <== osd.27 10.251.97.1:6800/4104 11 ==== osd_op_reply(929 default.194977.1_test [create 0~0 [excl],setxattr (17),writefull 0~7,setxattr (413),setxattr (127),setxattr (11),setxattr (33),setxattr (65),setxattr (17),setxattr (133),setxattr (9),call,setxattr (4)] v0'0 uv0 ondisk = -95 ((95) Operation not supported)) v6 ==== 692+0+0 (982388421 0 0) 0x7fda70000d00 con 0x7fde100487c0
2016-07-18 12:09:22.309471 7fdcc57fa700  1 -- 10.251.97.13:0/563287553 --> 10.251.97.1:6818/6493 -- osd_op(client.199724.0:930 10.cecde97a .dir.default.194977.1 [call rgw.bucket_complete_op] snapc 0=[] ack+ondisk+write+known_if_redirected e479) v7 -- ?+0 0x7fdd64024ae0 con 0x7fde10032110
2016-07-18 12:09:22.309504 7fdcc57fa700  2 req 3:0.047834:s3:PUT /nginx-proxy/test:put_obj:completing
2016-07-18 12:09:22.309509 7fdcc57fa700  0 WARNING: set_req_state_err err_no=95 resorting to 500
2016-07-18 12:09:22.309580 7fdcc57fa700  2 req 3:0.047910:s3:PUT /nginx-proxy/test:put_obj:op status=-95
2016-07-18 12:09:22.309585 7fdcc57fa700  2 req 3:0.047915:s3:PUT /nginx-proxy/test:put_obj:http status=500
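
For reference, I raised the log levels with roughly the following in ceph.conf
(the section name depends on how your RGW instance is registered; "debug ms = 1"
is what produces the osd_op/osd_op_reply messenger lines above):

[client.radosgw.gateway]
    debug rgw = 20
    debug ms = 1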

From the trace, the op that comes back with -95 (Operation not supported) is
the big write that carries the "call rgw.obj_store_pg_ver"; the preceding
getxattrs/stat and rgw.bucket_prepare_op calls complete fine (the -2 on the
initial stat only means the object doesn't exist yet). I tried to find
information about this error, but only came across one similar, unanswered
thread.

The issue disappears if I run the Infernalis RGW instead: the create does not
fail and everything goes smoothly. It also doesn't seem to depend on the other
daemons' versions; the situation is the same on our second, Infernalis-based
cluster, where only RGW was upgraded for testing.
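
For what it's worth, the per-daemon versions can be double-checked directly,
e.g. (osd.27 is one of the OSDs from the trace above), so the mix of a Jewel
RGW with Infernalis OSDs is easy to confirm:

# radosgw --version
# ceph tell osd.27 version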

Could anyone suggest what might be wrong here?

Thanks,
MN