[ceph-users] Broken bucket problems

2018-08-23 Thread DHD.KOHA

I am really having a hard time trying to delete a bucket, and it is driving me crazy!
All S3 clients claim that the bucket is empty, except for two multipart uploads that I am not able to get rid of.

radosgw-admin bucket check --bucket=whatever

[
    "_multipart_DISK_P/collection_1/anonymous/GRLIX/GRLIX_001069.pdf.VOeGNgr-gvhXCrf6dlnhAqhjaFHIF7t.1",
    "_multipart_DISK_P/collection_1/anonymous/GRLIX/GRLIX_001069.pdf.a34nYCsUrr9KeozSYBiUEW4QmITvXZP.2"
]

Trying to abort them from any client (s3cmd, radula, etc.) fails with

ERROR: S3 error: 404 (NoSuchUpload) ... etc

However,

radosgw-admin bi list --bucket=whatever

shows the entries below ...

[
{
"type": "plain",
"idx": 
"_multipart_DISK_P/collection_1/anonymous/GRLIX/GRLIX_001069.pdf.2~alvAZmF5tAlSeiJrUjOwXV7Io22uH0H.meta",
"entry": {
"name": 
"_multipart_DISK_P/collection_1/anonymous/GRLIX/GRLIX_001069.pdf.2~alvAZmF5tAlSeiJrUjOwXV7Io22uH0H.meta",
"instance": "",
"ver": {
"pool": 13,
"epoch": 21488
},
"locator": "",
"exists": "true",
"meta": {
"category": 3,
"size": 0,
"mtime": "2018-08-03 17:59:01.269527Z",
"etag": "",
"owner": "johnmaj",
"owner_display_name": "johnmaj",
"content_type": "",
"accounted_size": 0,
"user_data": ""
},
"tag": "_e73mjn27u0-49btzh8-1bhLDFndmqgu",
"flags": 0,
"pending_map": [],
"versioned_epoch": 0
}
},
{
"type": "plain",
"idx": 
"_multipart_./DISK_P/collection_1/anonymous/argnew/GRARG_000140.pdf.2~CDIJMxZvy8aQejBGBPeNyQK-AJ1lmO4.meta",
"entry": {
"name": 
"_multipart_./DISK_P/collection_1/anonymous/argnew/GRARG_000140.pdf.2~CDIJMxZvy8aQejBGBPeNyQK-AJ1lmO4.meta",
"instance": "",
"ver": {
"pool": 13,
"epoch": 7090
},
"locator": "",
"exists": "true",
"meta": {
"category": 3,
"size": 0,
"mtime": "2018-07-26 09:08:42.391779Z",
"etag": "",
"owner": "johnmaj",
"owner_display_name": "johnmaj",
"content_type": "",
"accounted_size": 0,
"user_data": ""
},
"tag": "_DBsumSl0iuPbZxmB9-flVx0IcSiMbOc",
"flags": 0,
"pending_map": [],
"versioned_epoch": 0
}
},
{
"type": "plain",
"idx": 
"_multipart_DISK_P/collection_1/anonymous/GRLIX/GRLIX_001069.pdf.VOeGNgr-gvhXCrf6dlnhAqhjaFHIF7t.1",
"entry": {
"name": 
"_multipart_DISK_P/collection_1/anonymous/GRLIX/GRLIX_001069.pdf.VOeGNgr-gvhXCrf6dlnhAqhjaFHIF7t.1",
"instance": "",
"ver": {
"pool": 12,
"epoch": 28528
},
"locator": "",
"exists": "true",
"meta": {
"category": 1,
"size": 1,
"mtime": "2018-08-03 17:59:36.654803Z",
"etag": "6d371641381bc2179e66aa05318d4dae",
"owner": "johnmaj",
"owner_display_name": "johnmaj",
"content_type": "application/octet-stream",
"accounted_size": 1,
"user_data": ""
},
"tag": "_JQ2sRGM_koBLM1UdFDUS7kKhOvsAkIQ",
"flags": 0,
"pending_map": [],
"versioned_epoch": 0
}
},
{
"type": "plain",
"idx": 
"_multipart_DISK_P/collection_1/anonymous/GRLIX/GRLIX_001069.pdf.a34nYCsUrr9KeozSYBiUEW4QmITvXZP.2",
"entry": {
"name": 
"_multipart_DISK_P/collection_1/anonymous/GRLIX/GRLIX_001069.pdf.a34nYCsUrr9KeozSYBiUEW4QmITvXZP.2",
"instance": "",
"ver": {
"pool": 12,
"epoch": 28618
},
"locator": "",
"exists": "true",
"meta": {
"category": 1,
"size": 93811669,
"mtime": "2018-08-03 17:59:31.602642Z",
"etag": "4c527cfbcd146400654cdef54b12b2c3",
"owner": "johnmaj",
"owner_display_name": "johnmaj",
"content_type": "application/octet-stream",
"accounted_size": 93811669,
"user_data": ""
},
"tag": "_cia7WfJ700Jgs7LnoYRzFbEkg6jb7Ym",
"flags": 0,
"pending_map": [],
"versioned_epoch": 0
}
}
]

I suspect these entries are causing my problems, but I cannot get rid of them.
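
One thing I am considering (untested, and I am not sure it is safe; just a sketch of what I have in mind) is removing the leftover index entries directly via radosgw-admin, for example:

radosgw-admin object rm --bucket=whatever --object='_multipart_DISK_P/collection_1/anonymous/GRLIX/GRLIX_001069.pdf.2~alvAZmF5tAlSeiJrUjOwXV7Io22uH0H.meta'

Is that a reasonable approach?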

Re: [ceph-users] Broken multipart uploads

2018-08-07 Thread DHD.KOHA

But still,

I get NoSuchKey!

s3cmd abortmp s3://weird_bucket 2~CDIJMxZvy8aQejBGBPeNyQK-AJ1lmO4
ERROR: S3 error: 404 (NoSuchKey)

s3cmd abortmp s3://weird_bucket 2~alvAZmF5tAlSeiJrUjOwXV7Io22uH0H
ERROR: S3 error: 404 (NoSuchKey)

Regards,

Harry.



On 06/08/2018 04:29 AM, Konstantin Shalygin wrote:



After emptying the bucket, it cannot be deleted since there are some aborted
multipart uploads:

radosgw-admin bucket check --bucket=weird_bucket
[
"_multipart_DISK_P/collection_1/anonymous/GRLIX/GRLIX_001069.pdf.VOeGNgr-gvhXCrf6dlnhAqhjaFHIF7t.1",
"_multipart_DISK_P/collection_1/anonymous/GRLIX/GRLIX_001069.pdf.a34nYCsUrr9KeozSYBiUEW4QmITvXZP.2"
]


radosgw-admin bucket check --bucket=weird_bucket --fix

does not work, and neither does --check-objects, etc.


Currently radosgw-admin can't find multipart leaks.

Here [1] is a PR for that, but it is in a stale state.



grharry@GDell:~$ s3cmd abortmp s3://weird_bucket/DISK_P/collection_1/anonymous/GRLIX/GRLIX_001069.pdf 2~alvAZmF5tAlSeiJrUjOwXV7Io22uH0H
ERROR: S3 error: 404 (NoSuchUpload)

returns error 404 (NoSuchUpload)


s3cmd needs the mp_id (the last column of the 's3cmd multipart' output); you gave the mp_path instead.
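
For reference, the expected usage is 's3cmd abortmp s3://BUCKET/OBJECT Id', so (as a sketch only, reusing the path and upload id already shown in this thread) something like:

s3cmd multipart s3://weird_bucket
s3cmd abortmp s3://weird_bucket/DISK_P/collection_1/anonymous/GRLIX/GRLIX_001069.pdf 2~alvAZmF5tAlSeiJrUjOwXV7Io22uH0H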



[1] https://github.com/ceph/ceph/pull/17349




k





[ceph-users] Broken multipart uploads

2018-08-05 Thread DHD.KOHA

Hello!!!

After emptying the bucket, it cannot be deleted since there are some aborted
multipart uploads:


radosgw-admin bucket check --bucket=weird_bucket
[
"_multipart_DISK_P/collection_1/anonymous/GRLIX/GRLIX_001069.pdf.VOeGNgr-gvhXCrf6dlnhAqhjaFHIF7t.1",
"_multipart_DISK_P/collection_1/anonymous/GRLIX/GRLIX_001069.pdf.a34nYCsUrr9KeozSYBiUEW4QmITvXZP.2"
]


radosgw-admin bucket check --bucket=weird_bucket --fix

does not work, and neither does --check-objects, etc.

while

s3cmd multipart s3://weird_bucket

s3://weird_bucket/
Initiated                 Path                                                                        Id
2018-07-26T09:08:42.391Z  s3://weird_bucket/./DISK_P/collection_1/anonymous/argnew/GRARG_000140.pdf  2~CDIJMxZvy8aQejBGBPeNyQK-AJ1lmO4
2018-08-03T17:59:01.269Z  s3://weird_bucket/DISK_P/collection_1/anonymous/GRLIX/GRLIX_001069.pdf     2~alvAZmF5tAlSeiJrUjOwXV7Io22uH0H


but

s3cmd abortmp s3://weird_bucket/./DISK_P/collection_1/anonymous/argnew/GRARG_000140.pdf 2~CDIJMxZvy8aQejBGBPeNyQK-AJ1lmO4

ERROR: S3 error: 404 (NoSuchUpload)
grharry@GDell:~$ s3cmd abortmp s3://weird_bucket/DISK_P/collection_1/anonymous/GRLIX/GRLIX_001069.pdf 2~alvAZmF5tAlSeiJrUjOwXV7Io22uH0H

ERROR: S3 error: 404 (NoSuchUpload)

Both return error 404 (NoSuchUpload).

rados -p default.rgw.buckets.data ls | grep DISK_P/collection_1/anonymous/GRLIX/GRLIX_001069.pdf


gives these listings

7143960f-d8ff-4d32-8de8-867ddf878c16.38712.66__shadow_DISK_P/collection_1/anonymous/GRLIX/GRLIX_001069.pdf.a34nYCsUrr9KeozSYBiUEW4QmITvXZP.2_12
7143960f-d8ff-4d32-8de8-867ddf878c16.38712.66__shadow_DISK_P/collection_1/anonymous/GRLIX/GRLIX_001069.pdf.a34nYCsUrr9KeozSYBiUEW4QmITvXZP.2_10
7143960f-d8ff-4d32-8de8-867ddf878c16.38712.66__shadow_DISK_P/collection_1/anonymous/GRLIX/GRLIX_001069.pdf.VOeGNgr-gvhXCrf6dlnhAqhjaFHIF7t.1_11
7143960f-d8ff-4d32-8de8-867ddf878c16.38712.66__shadow_DISK_P/collection_1/anonymous/GRLIX/GRLIX_001069.pdf.a34nYCsUrr9KeozSYBiUEW4QmITvXZP.2_4
7143960f-d8ff-4d32-8de8-867ddf878c16.38712.66__multipart_DISK_P/collection_1/anonymous/GRLIX/GRLIX_001069.pdf.2~alvAZmF5tAlSeiJrUjOwXV7Io22uH0H.1
7143960f-d8ff-4d32-8de8-867ddf878c16.38712.66__shadow_DISK_P/collection_1/anonymous/GRLIX/GRLIX_001069.pdf.VOeGNgr-gvhXCrf6dlnhAqhjaFHIF7t.1_1
7143960f-d8ff-4d32-8de8-867ddf878c16.38712.66__shadow_DISK_P/collection_1/anonymous/GRLIX/GRLIX_001069.pdf.a34nYCsUrr9KeozSYBiUEW4QmITvXZP.2_19
7143960f-d8ff-4d32-8de8-867ddf878c16.38712.66__shadow_DISK_P/collection_1/anonymous/GRLIX/GRLIX_001069.pdf.VOeGNgr-gvhXCrf6dlnhAqhjaFHIF7t.1_21
7143960f-d8ff-4d32-8de8-867ddf878c16.38712.66__shadow_DISK_P/collection_1/anonymous/GRLIX/GRLIX_001069.pdf.a34nYCsUrr9KeozSYBiUEW4QmITvXZP.2_15
7143960f-d8ff-4d32-8de8-867ddf878c16.38712.66__shadow_DISK_P/collection_1/anonymous/GRLIX/GRLIX_001069.pdf.a34nYCsUrr9KeozSYBiUEW4QmITvXZP.2_5
7143960f-d8ff-4d32-8de8-867ddf878c16.38712.66__shadow_DISK_P/collection_1/anonymous/GRLIX/GRLIX_001069.pdf.VOeGNgr-gvhXCrf6dlnhAqhjaFHIF7t.1_22
7143960f-d8ff-4d32-8de8-867ddf878c16.38712.66__multipart_DISK_P/collection_1/anonymous/GRLIX/GRLIX_001069.pdf.2~alvAZmF5tAlSeiJrUjOwXV7Io22uH0H.2

..

How am I supposed to delete the aborted uploads so that I can remove the bucket?

I cannot find an answer to that by searching.
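
The only other idea I have (untested, and it bypasses the bucket index, so the index entries would presumably still need cleaning afterwards) is to remove the raw rados objects directly, something like:

rados -p default.rgw.buckets.data rm '7143960f-d8ff-4d32-8de8-867ddf878c16.38712.66__multipart_DISK_P/collection_1/anonymous/GRLIX/GRLIX_001069.pdf.2~alvAZmF5tAlSeiJrUjOwXV7Io22uH0H.1'

but that feels like a last resort.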

Please Help!

Regards,

Harry



[ceph-users] blocked buckets in pool

2018-08-05 Thread DHD.KOHA

Hello,

I am having a problem with the default.rgw.buckets.data pool.
There are about 10+ buckets within the pool.

The buckets are filled with around 100K+ objects each, and resharding blocks them.

However, the resharding process itself also gets blocked by problems within the buckets.


# ceph version
ceph version 12.2.5 (cad919881333ac92274171586c827e01f554a70a) luminous (stable)


NAME                       ID   USED  %USED  MAX AVAIL   OBJECTS
default.rgw.buckets.index  11      0      0       220T      1020
default.rgw.buckets.data   12 36238G  13.84       220T  16450199
default.rgw.buckets.non-ec 13      0      0       220T        23


Along with that, I see that the default.rgw.buckets.data pool has been active+clean+scrubbing+deep for 3 days now.


I've tried this procedure

http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-August/020012.html

but with no success.

After "deleting" the dead objects and running

radosgw-admin bucket check --bucket=big_bucket --check-objects

the dead objects still appear, and I cannot proceed with manual resharding.

Trying to delete the zombie file entries from an S3 client also fails with a File Not Found error!



Is there a way to solve this?

Also, when resharding a healthy bucket I get the message:
*** NOTICE: operation will not remove old bucket index objects ***
*** these will need to be removed manually ***

How am I supposed to remove the old indexes?
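
My best guess, pieced together from other list threads and untested, is to find the stale bucket instance ids and then delete their .dir.* index objects from the index pool, roughly:

radosgw-admin bucket stats --bucket=big_bucket                  # note the current bucket instance id
radosgw-admin metadata list bucket.instance | grep big_bucket   # any other instance ids should be the stale ones
rados -p default.rgw.buckets.index ls | grep <old_instance_id>
rados -p default.rgw.buckets.index rm .dir.<old_instance_id>.<shard>

Is that the intended procedure?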


Thanks in advance,
Regards,
Harry.



[ceph-users] 3 monitor servers to monitor 2 different OSD set of servers

2018-04-26 Thread DHD.KOHA

Hello,

I am wondering if this is possible.

I am currently running a Ceph cluster consisting of 3 servers as monitors and 6 OSD servers that host the disk drives, that is, 10x8T OSDs on each server.


Since we did some cleaning up of the old servers, I am able to create another OSD cluster with 3 servers having 16 drives of 10T each.


Since adding the above servers to expand the current set of OSDs doesn't seem to be a good idea according to the documentation, because the disk drives are of different sizes (8T and 10T),

I wonder if it is possible to create another cluster using the same 3 monitors, i.e. have them monitor a second cluster as well, so that

--cluster ceph

--cluster ceph2

processes are running on the same set of monitor servers.
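
My (possibly wrong) understanding is that the client side is then just a matter of the --cluster name selecting the matching conf file, e.g.:

ceph --cluster ceph status       # reads /etc/ceph/ceph.conf
ceph --cluster ceph2 status      # reads /etc/ceph/ceph2.conf

assuming a separate /etc/ceph/ceph2.conf (and keyrings) for the second cluster, with each mon daemon also started under its own cluster name.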

Thanks in advance for your info.
Regards.
Harry.





[ceph-users] Calamari ( what a nightmare !!! )

2017-12-11 Thread DHD.KOHA

Hello list,

Newbie here,

After managing to install Ceph in every way I could, on 4 nodes with 4 OSDs and 3 monitors, with ceph-deploy and later with ceph-ansible, I thought I would give installing Calamari a try on Ubuntu 14.04 (on a separate server that is not a node or anything else in the cluster).


After all the mess of salt 2014.7.5 and different Ubuntu releases (I am installing the nodes on xenial but Calamari on trusty, while the calamari packages on the nodes come from download.ceph.com for trusty), I ended up with a server that refuses to gather anything from anywhere at all.



# salt '*' ceph.get_heartbeats
c1.zz.prv:
The minion function caused an exception: Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/salt/minion.py", line 1020, in _thread_return
    return_data = func(*args, **kwargs)
  File "/var/cache/salt/minion/extmods/modules/ceph.py", line 467, in get_heartbeats
    service_data = service_status(filename)
  File "/var/cache/salt/minion/extmods/modules/ceph.py", line 526, in service_status
    fsid = json.loads(admin_socket(socket_path, ['status'], 'json'))['cluster_fsid']
KeyError: 'cluster_fsid'
c2.zz.prv:
The minion function caused an exception: Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/salt/minion.py", line 1020, in _thread_return
    return_data = func(*args, **kwargs)
  File "/var/cache/salt/minion/extmods/modules/ceph.py", line 467, in get_heartbeats
    service_data = service_status(filename)
  File "/var/cache/salt/minion/extmods/modules/ceph.py", line 526, in service_status
    fsid = json.loads(admin_socket(socket_path, ['status'], 'json'))['cluster_fsid']
KeyError: 'cluster_fsid'
c3.zz.prv:
The minion function caused an exception: Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/salt/minion.py", line 1020, in _thread_return
    return_data = func(*args, **kwargs)
  File "/var/cache/salt/minion/extmods/modules/ceph.py", line 467, in get_heartbeats
    service_data = service_status(filename)
  File "/var/cache/salt/minion/extmods/modules/ceph.py", line 526, in service_status
    fsid = json.loads(admin_socket(socket_path, ['status'], 'json'))['cluster_fsid']
KeyError: 'cluster_fsid'

which obviously means that I am doing something WRONG, and I have no IDEA what it is.
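
The next thing I plan to try (assuming the default socket path, and assuming the mon actually supports the 'status' admin-socket command that the Calamari module calls) is to run the same call by hand on a mon and see whether the output contains a cluster_fsid field at all:

ceph --admin-daemon /var/run/ceph/ceph-mon.$(hostname -s).asok status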


Given that documentation on the matter is very poor or limited,

is there anybody out there with clues or hints who is willing to share?


Regards,

Harry.


