I am not sure why this is happening. Someone used s3cmd to upload around
130,000 7 MB objects to a single bucket. Now we are tearing down the
cluster to rebuild it better, stronger, and hopefully faster, but before
we destroy it we need to download all of the data. I am iterating over
all of the keys in this bucket with boto, and some of them return a 404
when issuing key.get_contents_to_filename(d), even though fetching the
same key's xml_acl succeeds without issue.
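The download loop is roughly the sketch below (host, credentials, and bucket name are placeholders, not our real ones); it records keys that 404 on GET and keeps going instead of aborting:

```python
import os


def local_path(key_name, dest="dump"):
    """Map an S3 key name to a local file path under dest/."""
    path = os.path.join(dest, key_name.lstrip("/"))
    os.makedirs(os.path.dirname(path), exist_ok=True)
    return path


def download_bucket(host, access_key, secret_key, bucket_name, dest="dump"):
    """Download every key in the bucket; return the key names that 404 on GET."""
    # boto (classic boto2, which we are using) is imported here so the
    # path helper above stays importable without boto installed.
    import boto
    import boto.s3.connection
    from boto.exception import S3ResponseError

    conn = boto.connect_s3(
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
        host=host,  # the radosgw endpoint
        calling_format=boto.s3.connection.OrdinaryCallingFormat(),
    )
    missing = []
    for key in conn.get_bucket(bucket_name).list():
        try:
            key.get_contents_to_filename(local_path(key.name, dest))
        except S3ResponseError as e:
            if e.status == 404:
                # ACL still readable, but the GET itself 404s:
                # record the key and move on so the rest still downloads.
                missing.append(key.name)
            else:
                raise
    return missing
```

Called as e.g. `download_bucket("rgw.example.com", "ACCESS", "SECRET", "mybucket")`.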
Here is a paste of the radosgw client log::
https://paste.fedoraproject.org/310769/78608314/
Here is a paste of the output from boto in ipython::
https://paste.fedoraproject.org/310763/45278568/
The cluster is health_ok::
lacadmin@kh28-10:~$ ceph -s
cluster bd284b72-085a-4524-a059-29b0179c057c
health HEALTH_OK
monmap e1: 3 mons at
{kh28-10=10.24.64.20:6789/0,kh28-11=10.24.64.21:6789/0,kh28-12=10.24.64.22:6789/0}
election epoch 258, quorum 0,1,2 kh28-10,kh28-11,kh28-12
osdmap e23500: 100 osds: 100 up, 100 in
pgmap v5173017: 3720 pgs, 20 pools, 54038 GB data, 15942 kobjects
163 TB used, 199 TB / 363 TB avail
3711 active+clean
6 active+clean+scrubbing
3 active+clean+scrubbing+deep
client io 45480 kB/s rd, 13058 B/s wr, 41 op/s
lacadmin@kh28-10:~$ ceph health detail
HEALTH_OK
The radosgw config from /etc/ceph/ceph.conf on kh28-10 (the radosgw host)::
https://paste.fedoraproject.org/311064/28224001/
I tried googling the error from the radosgw log::

    ERROR: got unexpected error when trying to read object: -2

(-2 looks like -ENOENT, i.e. "no such file or directory") but I have yet
to find anything.
I am generating a list of all of the objects in .rgw.buckets now to try
to trace the missing objects down, but I was hoping someone might have
some other insight, or tips on the right direction to head in to
troubleshoot this.
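To generate and search that list I am doing something along these lines (the pool name .rgw.buckets is the default RGW data pool; the exact marker-prefix format of the RADOS object names is an assumption, so I match on substring):

```python
import subprocess


def rados_ls(pool=".rgw.buckets"):
    """List every RADOS object name in the pool via the rados CLI."""
    out = subprocess.check_output(["rados", "-p", pool, "ls"])
    return out.decode("utf-8").splitlines()


def find_candidates(names, s3_key):
    """RGW object names embed the S3 key after a bucket marker prefix
    (e.g. 'default.4181.1_my/key' -- assumed format), so a substring
    match is enough to pull out the candidates for one missing key."""
    return [n for n in names if s3_key in n]
```

Then `find_candidates(rados_ls(), "path/to/missing-object")` should show whether the backing RADOS object still exists at all.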
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com