So, an update for anyone else having this issue: it looks like radosgw either has a memory leak or it spools the whole object into RAM or something.
root@kh11-9:/etc/apt/sources.list.d# free -m
             total       used       free     shared    buffers     cached
Mem:         64397      63775
I have a cluster of around 630 OSDs with 3 dedicated monitors and 2
dedicated gateways. The entire cluster is running hammer (0.94.5
(9764da52395923e0b32908d83a9f7304401fee43)).
Both of my gateways have stopped responding to curl right now.
root@host:~# timeout 5 curl localhost ; echo $?
124
I am trying to deploy ceph 0.94.5 (hammer) across a few nodes using
ceph-deploy and passing the --dmcrypt flag. The first osd:journal pair
seems to succeed, but all remaining OSDs that have a journal on the same
SSD seem to silently fail::
http://pastebin.com/2TGG4tq4
In the end I end up with 5
If you set an RGW user to have a bucket quota of 0 buckets, you can still
create buckets. The only way I have found to prevent a user from being
able to create buckets is to set the op_mask to read. 1.) It looks like
bucket_policy is not enforced when you have it set to anything below 1.
It looks
So when I create a new user with the admin API and the user already
exists, it just generates a new keypair for that user. Shouldn't the
admin API report that the user already exists? I ask because I can end
up with multiple keypairs for the same user unintentionally, which could
be an issue. I
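For anyone scripting against the admin API, here is a minimal sketch of the guard I wish it enforced. The signing helper follows the S3-style (v2) HMAC-SHA1 scheme the admin ops API uses; the function names and the existing-uid check are my own illustration, not anything from radosgw itself:

```python
import base64
import hmac
from hashlib import sha1

def sign_admin_request(secret_key, method, date, resource):
    # S3-style (v2) signature: VERB, Content-MD5, Content-Type, Date,
    # canonicalized resource. MD5 and Content-Type left empty here.
    string_to_sign = "\n".join([method, "", "", date, resource])
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(), sha1).digest()
    return base64.b64encode(digest).decode()

def create_user_if_absent(existing_uids, uid):
    # Hypothetical guard: refuse to "create" a uid that already exists
    # instead of silently minting another keypair for it.
    if uid in existing_uids:
        raise ValueError("user %r already exists" % uid)
    return {"method": "PUT", "resource": "/admin/user?uid=%s" % uid}
```

Today you have to do this check client-side (e.g. call user info first and only create on a 404), which is racy but better than nothing.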
I haven't been able to reproduce the issue on my end but I do not fully
understand how the bug exists or why it is happening. I was finally
given the code they are using to upload the files::
http://pastebin.com/N0j86NQJ
I don't know if this helps at all :-(. The other thing is that I have
On 1/19/16 4:00 PM, Yehuda Sadeh-Weinraub wrote:
On Fri, Jan 15, 2016 at 5:04 PM, seapasu...@uchicago.edu
<seapasu...@uchicago.edu> wrote:
I have looked all over and I do not see any explicit mention of
"NWS_NEXRAD_NXL2DP_PAKC_2015010111_20150101115959" in the l
On Wed, Jan 20, 2016 at 10:43 AM, seapasu...@uchicago.edu
<seapasu...@uchicago.edu> wrote:
On 1/19/16 4:00 PM, Yehuda Sadeh-Weinraub wrote:
On Fri, Jan 15, 2016 at 5:04 PM, seapasu...@uchicago.edu
<seapasu...@uchicago.edu> wrote:
I have looked all over and I do not see
be nice to have some kind of a unit test that reproduces
it.
Yehuda
On Wed, Jan 20, 2016 at 1:34 PM, seapasu...@uchicago.edu
<seapasu...@uchicago.edu> wrote:
So is there any way to prevent this from happening going forward? I mean
ideally this should never be possible, right? Even with a co
assume you have any
logs from when the object was uploaded?
Yehuda
On Fri, Jan 15, 2016 at 2:12 PM, seapasu...@uchicago.edu
<seapasu...@uchicago.edu> wrote:
Sorry for the confusion::
When I grepped for the prefix of the missing object::
"2015\/0
Hello Yehuda,
Here it is::
radosgw-admin object stat --bucket="noaa-nexrad-l2" --object="2015/01/01/PAKC/NWS_NEXRAD_NXL2DP_PAKC_2015010111_20150101115959.tar"
{
    "name": "2015\/01\/01\/PAKC\/NWS_NEXRAD_NXL2DP_PAKC_2015010111_20150101115959.tar",
    "size": 7147520,
    "policy":
default.384153.1__shadow_2015/01/01/KABR/NWS_NEXRAD_NXL2DP_KABR_2015010113_20150101135959.tar.2~${dest_upload_id}.1_1
Yehuda
On Fri, Jan 15, 2016 at 1:02 PM, seapasu...@uchicago.edu
<seapasu...@uchicago.edu> wrote:
lacadmin@kh28-10:~$ rados -p .rgw.buckets ls | grep 'pcu5Hz6'
lacadmin@k
me and
see if there are pieces of it lying around under a different upload
id.
Yehuda
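To make the "see if there are pieces of it lying around" step concrete, here is a small sketch (the helper is mine, not from the thread) that filters `rados -p .rgw.buckets ls` output for `_shadow_` pieces of an object under any upload id, since the upload id sits after the `2~` marker in the shadow name:

```python
def shadow_pieces(rados_ls_lines, object_name):
    # _shadow_ entries embed the multipart upload id after a "2~" marker,
    # so matching on the object name alone catches every upload id.
    return [line for line in rados_ls_lines
            if "_shadow_" in line and object_name in line]

# Sample listing; the first entry mirrors the shadow-object naming shown
# earlier in the thread (the upload id here is made up).
listing = [
    "default.384153.1__shadow_2015/01/01/KABR/NWS_NEXRAD_NXL2DP_KABR_2015010113_20150101135959.tar.2~pcu5Hz6abc.1_1",
    "default.384153.1_2015/01/01/KABR/other_object.tar",
]
shadow_pieces(listing, "NWS_NEXRAD_NXL2DP_KABR_2015010113_20150101135959.tar")
```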
On Fri, Jan 15, 2016 at 1:44 PM, seapasu...@uchicago.edu
<seapasu...@uchicago.edu> wrote:
Sorry I am a bit confused. The successful list that I provided is from a
different object of the same size to show that I
    "bytes_sent": 19,
    "bytes_received": 0,
    "object_size": 0,
    "total_time": 0,
    "user_agent": "Boto\/2.38.0 Python\/2.7.7 Linux\/2.6.32-573.7.1.el6.x8
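If it helps anyone triaging their own ops log, this is the kind of filter I apply to entries like the one above (the field names are the ops-log JSON fields; the function itself is my own):

```python
import json

def suspicious_upload(entry):
    # An upload that received no bytes and left a zero-size object is the
    # signature of the failed uploads described in this thread.
    return entry.get("bytes_received", 0) == 0 and entry.get("object_size", 0) == 0

entry = json.loads('{"bytes_sent": 19, "bytes_received": 0,'
                   ' "object_size": 0, "total_time": 0}')
suspicious_upload(entry)  # True
```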
I am not sure why this is happening. Someone used s3cmd to upload around
130,000 7 MB objects to a single bucket. Now we are tearing down the
cluster to rebuild it better, stronger, and hopefully faster. Before we
destroy it we need to download all of the data. I am running through all
of the
It looks like the gateway is experiencing a similar race condition to
what we reported before.
The rados object has a size of 0 bytes but the bucket index shows the
object listed and the object metadata shows a size of
7147520 bytes.
I have a lot of logs but I don't think any of them have
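A sketch of the cross-check that exposes the mismatch, comparing the size in `radosgw-admin object stat` output against the size `rados stat` reports for the backing object (the helper name is mine):

```python
import json

def index_vs_rados_mismatch(object_stat_json, rados_size):
    # The object metadata / bucket index claims one size; the rados
    # object on disk reports another. Any disagreement means trouble.
    meta_size = json.loads(object_stat_json)["size"]
    return meta_size != rados_size

# The PAKC object from this thread: metadata says 7147520 bytes,
# but the rados object is 0 bytes.
stat_output = '{"name": "NWS_NEXRAD_NXL2DP_PAKC_2015010111_20150101115959.tar", "size": 7147520}'
index_vs_rados_mismatch(stat_output, 0)  # True
```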