Re: [ceph-users] RGW 10.2.5->10.2.7 authentication fail?

2017-05-22 Thread Ingo Reimann
Hi Radek,

is there any news about this issue? We are also stuck on 10.2.5 and can't
update to 10.2.7.
We use a couple of radosgws that are load-balanced behind a Keepalived/LVS
setup. Removing rgw_dns_name only helps if I address a gateway directly,
not in general.

Best regards,

Ingo

-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Radoslaw Zarzynski
Sent: Wednesday, 3 May 2017 11:59
To: Łukasz Jagiełło
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] RGW 10.2.5->10.2.7 authentication fail?

Hello Łukasz,

Thanks for your testing, and sorry for my mistake. It looks like two commits
need to be reverted to get the previous behaviour:

The already mentioned one:
  https://github.com/ceph/ceph/commit/c9445faf7fac2ccb8a05b53152c0ca16d7f4c6d0
Its dependency:
  https://github.com/ceph/ceph/commit/b72fc1b820ede3cd186d887d9d30f7f91fe3764b

They have been merged in the same pull request:
  https://github.com/ceph/ceph/pull/11760
and together they form the visible difference between v10.2.5 and v10.2.6 in
the handling of "in_hosted_domain":
  https://github.com/ceph/ceph/blame/v10.2.5/src/rgw/rgw_rest.cc#L1773
  https://github.com/ceph/ceph/blame/v10.2.6/src/rgw/rgw_rest.cc#L1781-L1782

I'm really not sure we want to revert them. Still, it may be that they merely
unhid a misconfiguration issue while fixing the problems we had with handling
virtual-hosted buckets.

Regards,
Radek

On Wed, May 3, 2017 at 3:12 AM, Łukasz Jagiełło  
wrote:
> Hi,
>
> I tried reverting [1] from 10.2.7 today, but the problem is still there
> even without the change. Reverting to 10.2.5 fixes the issue instantly.
>
> https://github.com/ceph/ceph/commit/c9445faf7fac2ccb8a05b53152c0ca16d7
> f4c6d0
>
> On Thu, Apr 27, 2017 at 4:53 AM, Radoslaw Zarzynski
>  wrote:
>>



Ingo Reimann

Team Lead Technology
Dunkel GmbH <https://www.dunkel.de/>
Philipp-Reis-Straße 2
65795 Hattersheim
Phone: +49 6190 889-100
Fax: +49 6190 889-399
Email: supp...@dunkel.de
http://www.Dunkel.de/
Registered at Amtsgericht Frankfurt/Main, HRB 37971
Managing Director: Axel Dunkel
VAT ID: DE 811622001
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RGW 10.2.5->10.2.7 authentication fail?

2017-05-23 Thread Ingo Reimann
Hi Ben!



Thanks for your advice. I had included the names of our gateways but omitted
the external name of the service itself. Now everything is working again.



And yes, this change is worth a note :-)



Best regards,



Ingo



From: Ben Hines [mailto:bhi...@gmail.com]
Sent: Tuesday, 23 May 2017 02:16
To: Ingo Reimann
Cc: Radoslaw Zarzynski; ceph-users
Subject: Re: [ceph-users] RGW 10.2.5->10.2.7 authentication fail?



We used this workaround when upgrading to Kraken (which had a similar issue):



> modify the zonegroup and populate the 'hostnames' array with all backend
> server hostnames as well as the hostname terminated by haproxy



Which I'm fine with. It's definitely a change that should be noted more
prominently in the release notes. Without the hostname in there, Ceph
interpreted the hostname as a bucket name whenever the hostname RGW was being
hit with differed from the hostname of the actual server. Pre-Kraken, I didn't
need that setting at all and it just worked.
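
For anyone hitting the same thing, the workaround boils down to something like
this (a sketch only - zonegroup layout, the example hostnames and the period
step depend on your setup):

  radosgw-admin zonegroup get > zonegroup.json
  # edit zonegroup.json so "hostnames" lists the load-balanced name and every
  # backend, e.g.: "hostnames": ["s3.example.com", "cephrgw01.example.com"]
  radosgw-admin zonegroup set < zonegroup.json
  radosgw-admin period update --commit   # if realms/periods are in use; otherwise restart the rgws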



-Ben



On Mon, May 22, 2017 at 1:11 AM, Ingo Reimann  wrote:

> [...]


[ceph-users] Broken Buckets after Jewel->Luminous Upgrade

2018-01-30 Thread Ingo Reimann
Hi,

We hit a nasty issue during our Jewel->Luminous upgrade. The mon/mgr and
OSD part went well, but the first of our rgws is incompatible with the
others:

The problem:
Some buckets are not accessible from the Luminous gateway. The metadata
for those buckets seemed OK, but listing was not possible; a local s3cmd
got "404 NoSuchKey". I exported and re-imported the metadata for one bucket
instance and ran a radosgw-admin check (roughly as sketched below). Now the
bucket is listable but empty under Luminous and broken under Jewel. The
corresponding directory object still contains the file in its omap.
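
The export/import/check sequence was roughly the following (a sketch from
memory - bucket name and instance id are placeholders, and the exact check
flags may differ):

  radosgw-admin metadata get bucket.instance:<bucket>:<instance-id> > bi.json
  radosgw-admin metadata put bucket.instance:<bucket>:<instance-id> < bi.json
  radosgw-admin bucket check --bucket=<bucket> --fix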

I am afraid of corrupting my cluster, so I stopped the upgrade for the other
gateways.

What could be the problem, and how can I solve it?

Any help is appreciated.

Regards,

Ingo Reimann
Dunkel GmbH 



Re: [ceph-users] Broken Buckets after Jewel->Luminous Upgrade

2018-01-30 Thread Ingo Reimann
Hi Robin,

thanks for your reply.

Concerning "https://tracker.ceph.com/issues/22756 - buckets showing as
empty": our cluster is rather old - it dates back to Argonaut - but the
affected bucket and user were created under Jewel.

If you need more data, I can post it.

Best regards,
Ingo

-----Original Message-----
From: Robin H. Johnson [mailto:robb...@gentoo.org]
Sent: Tuesday, 30 January 2018 19:17
To: Ingo Reimann
Cc: ceph-users
Subject: Re: [ceph-users] Broken Buckets after Jewel->Luminous Upgrade

On Tue, Jan 30, 2018 at 10:32:04AM +0100, Ingo Reimann wrote:
> [...]
I have a couple of bugs open for possibly the same issue:
https://tracker.ceph.com/issues/22756 - buckets showing as empty
http://tracker.ceph.com/issues/22714 - old AccessKeys not working

One more to come after more diagnosis on my side, where some old files don't
work properly anymore (dropping off at a multiple of 512K).


--
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation Treasurer
E-Mail   : robb...@gentoo.org
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136



[ceph-users] Infinite loop in radosgw-usage show

2018-02-01 Thread Ingo Reimann
Hi,

after upgrading our mons and OSDs (but not the rgws) from Jewel 10.2.10 to
Luminous 12.2.2, we see some nasty behaviour.

"radosgw-admin usage show" loops forever whenever there is any data. I can't
reproduce it exactly; for some users we got output once during our tests, but
then it changed. When we suppress the output with --show-log-entries=false we
don't get any output at all; without it, the first entries are repeated
forever. I thought this was http://tracker.ceph.com/issues/21196, but that
should have been closed in 12.2.12. We tried the command with Jewel and
Luminous clients - same behaviour.

Any idea how to get rid of this?


Best regards,
    
Ingo Reimann
Dunkel GmbH 



[ceph-users] Infinite loop in radosgw-usage show

2018-02-06 Thread Ingo Reimann
Just to add -

We wrote a little wrapper that reads the output of "radosgw-admin usage
show" and stops when the loop starts. When we add up all entries ourselves,
the result is correct. Moreover, the duplicate timestamp that we detect to
break the loop is not the last one taken into account. E.g.:

./radosgw-admin-break-loop --uid=TestUser --start-date=2017-12-01 --end-date=2018-01-01
[...]
"bytes_received": 1472051975516,
Loop detected at "2017-12-21 08:00:00.00Z"

./radosgw-admin-break-loop --uid=TestUser --start-date=2017-12-01 --end-date=2017-12-22
[...]
"bytes_received": 1245051973424,
Loop detected at "2017-12-21 08:00:00.00Z"

This leads to the assumption that the loop occurs after the raw data has
been processed.
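
The wrapper itself is essentially just this kind of one-liner (a sketch,
assuming the usage entries carry a "time" field as in our output above):

  radosgw-admin usage show --uid=TestUser --start-date=2017-12-01 --end-date=2018-01-01 \
    | awk '/"time":/ { if (seen[$0]++) { print "Loop detected at " $0; exit } } { print }'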

Looks like a bug?

Best regards,

Ingo Reimann
Dunkel GmbH


[ceph-users] rgw: Moving index objects to the right index_pool

2018-02-13 Thread Ingo Reimann
Hi List,

we want to brush up our cluster and correct things that have changed over
time. When we started with Bobtail, we put all index objects together with
the data into the pool rgw.buckets:

root@cephadmin:~# radosgw-admin metadata get bucket:some-bucket
{
"key": "bucket:some-bucket",
"ver": {
"tag": "_zgv1FXm604BQtdiZnLkaiXN",
"ver": 1
},
"mtime": "2016-01-08 04:53:38.00Z",
"data": {
"bucket": {
"name": "some-bucket",
"pool": "rgw.buckets",
"data_extra_pool": "",
"index_pool": "rgw.buckets",
"marker": "default.101387371.6",
"bucket_id": "default.101387371.6",
"tenant": ""
  [...]

With Jewel we introduced a default placement and put the index of new
buckets into the pool rgw.buckets.index. Now we'd like to correct the old
buckets and move all the indices to where they belong.
I can
* copy the .dir.MARKER.SHARD# objects to the index pool
* modify the bucket's metadata (roughly as sketched below)
But:
* when I try to modify the metadata of the bucket instance, the index_pool
does not get changed
* radosgw-admin bucket stats still shows the old index pool
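
The metadata step was along these lines (a sketch - the instance id is taken
from the dump above); it runs without errors, but the index_pool value does
not stick:

  radosgw-admin metadata get bucket.instance:some-bucket:default.101387371.6 > bi.json
  # change "index_pool": "rgw.buckets" to "rgw.buckets.index" in bi.json
  radosgw-admin metadata put bucket.instance:some-bucket:default.101387371.6 < bi.json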

Did I miss something?

NB: For now, I am performing all operations with a Jewel 10.2.2 rgw.
Luminous is available but not active yet.

Best regards,

Ingo Reimann
Dunkel GmbH 


Re: [ceph-users] rgw: Moving index objects to the right index_pool

2018-02-14 Thread Ingo Reimann
Hi Yehuda,

Thanks for your help.

No, listing does not work if I remove the old index objects.

I guessed I could use resharding for my purpose. I just tried to
* copy the index object
* rewrite the bucket metadata
* reshard (sketched below)
=> I get new index objects in the old place, and the metadata gets reverted
again.
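
The reshard step was just the standard offline command, roughly (the shard
count here is only an example):

  radosgw-admin bucket reshard --bucket=some-bucket --num-shards=8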

Maybe this is more complicated than expected?

Best regards,

Ingo


-----Original Message-----
From: Yehuda Sadeh-Weinraub [mailto:yeh...@redhat.com]
Sent: Thursday, 15 February 2018 00:21
To: Ingo Reimann
Cc: ceph-users
Subject: Re: [ceph-users] rgw: Moving index objects to the right index_pool

On Tue, Feb 13, 2018 at 11:27 PM, Ingo Reimann  wrote:
> [...]
> Did I miss something?

You didn't miss much. There is a guard in the code that prevents you from 
modifying the placement pools. I have this commit that changes that (but 
that probably doesn't help you much):

https://github.com/yehudasa/ceph/commit/0baebec32e388f4cb7bdf1fee9afe2144eeeb354

The way to go forward for you, I think, would be resharding the buckets,
which will put the bucket indexes in the correct place. Can you actually
list the bucket indexes right now?

Yehuda

> [...]


[ceph-users] Multipart Upload - POST fails

2018-03-02 Thread Ingo Reimann
[...]
e2053ca700 15 server
signature=48cc8c61a70dde17932d925f65f843116199c1ca10094db83e7de05bfbd57dc4
2018-03-02 13:59:04.928060 7fe2053ca700 15 client
signature=48cc8c61a70dde17932d925f65f843116199c1ca10094db83e7de05bfbd57dc4
2018-03-02 13:59:04.928062 7fe2053ca700 15 compare=0
2018-03-02 13:59:04.928075 7fe2053ca700 20 rgw::auth::s3::LocalEngine
granted access
2018-03-02 13:59:04.928078 7fe2053ca700 20 rgw::auth::s3::AWSAuthStrategy
granted access
2018-03-02 13:59:04.928084 7fe2053ca700  2 req 61:0.000731:s3:POST
/luminous-12-2-4/Data128MB:init_multipart:normalizing buckets and tenants
2018-03-02 13:59:04.928089 7fe2053ca700 10 s->object=Data128MB
s->bucket=luminous-12-2-4
2018-03-02 13:59:04.928095 7fe2053ca700  2 req 61:0.000742:s3:POST
/luminous-12-2-4/Data128MB:init_multipart:init permissions
2018-03-02 13:59:04.928134 7fe2053ca700 15 decode_policy Read
AccessControlPolicy<AccessControlPolicy
xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>Xa*</ID><DisplayName>IngoReimann</DisplayName></Owner><AccessControlList><Grant><Grantee
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:type="CanonicalUser"><ID>Xa*</ID><DisplayName>IngoReimann</DisplayName></Grantee><Permission>FULL_CONTROL</Permission></Grant></AccessControlList></AccessControlPolicy>
2018-03-02 13:59:04.928207 7fe2053ca700  2 req 61:0.000852:s3:POST
/luminous-12-2-4/Data128MB:init_multipart:recalculating target
2018-03-02 13:59:04.928221 7fe2053ca700  2 req 61:0.000868:s3:POST
/luminous-12-2-4/Data128MB:init_multipart:reading permissions
2018-03-02 13:59:04.928227 7fe2053ca700  2 req 61:0.000874:s3:POST
/luminous-12-2-4/Data128MB:init_multipart:init op
2018-03-02 13:59:04.928231 7fe2053ca700  2 req 61:0.000878:s3:POST
/luminous-12-2-4/Data128MB:init_multipart:verifying op mask
2018-03-02 13:59:04.928234 7fe2053ca700 20 required_mask= 2 user.op_mask=7
2018-03-02 13:59:04.928237 7fe2053ca700  2 req 61:0.000884:s3:POST
/luminous-12-2-4/Data128MB:init_multipart:verifying op permissions
2018-03-02 13:59:04.928242 7fe2053ca700 20 -- Getting permissions begin
with perm_mask=50
2018-03-02 13:59:04.928244 7fe2053ca700  5 Searching permissions for
identity=rgw::auth::SysReqApplier ->
rgw::auth::LocalApplier(acct_user=Xa*, acct_name=IngoReimann,
subuser=, perm_mask=15, is_admin=0) mask=50
2018-03-02 13:59:04.928248 7fe2053ca700  5 Searching permissions for
uid=Xa*
2018-03-02 13:59:04.928253 7fe2053ca700  5 Found permission: 15
2018-03-02 13:59:04.928256 7fe2053ca700  5 Searching permissions for
group=1 mask=50
2018-03-02 13:59:04.928258 7fe2053ca700  5 Permissions for group not found
2018-03-02 13:59:04.928261 7fe2053ca700  5 Searching permissions for
group=2 mask=50
2018-03-02 13:59:04.928264 7fe2053ca700  5 Permissions for group not found
2018-03-02 13:59:04.928265 7fe2053ca700  5 -- Getting permissions done for
identity=rgw::auth::SysReqApplier ->
rgw::auth::LocalApplier(acct_user=Xa*, acct_name=IngoReimann,
subuser=, perm_mask=15, is_admin=0), owner=Xa*, perm=2
2018-03-02 13:59:04.928269 7fe2053ca700 10
identity=rgw::auth::SysReqApplier ->
rgw::auth::LocalApplier(acct_user=Xa*, acct_name=IngoReimann,
subuser=, perm_mask=15, is_admin=0) requested perm (type)=2, policy
perm=2, user_perm_mask=2, acl perm=2
2018-03-02 13:59:04.928273 7fe2053ca700  2 req 61:0.000920:s3:POST
/luminous-12-2-4/Data128MB:init_multipart:verifying op params
2018-03-02 13:59:04.928276 7fe2053ca700  2 req 61:0.000923:s3:POST
/luminous-12-2-4/Data128MB:init_multipart:pre-executing
2018-03-02 13:59:04.928279 7fe2053ca700  2 req 61:0.000926:s3:POST
/luminous-12-2-4/Data128MB:init_multipart:executing
2018-03-02 13:59:04.928358 7fe2053ca700 10 x>>
x-amz-content-sha256:254bcc3fc4f27172636df4bf32de9f107f620d559b20d760197e4
52b97453917
2018-03-02 13:59:04.928386 7fe2053ca700 10 x>> x-amz-date:20180302T125904Z
2018-03-02 13:59:04.928473 7fe2053ca700 20 get_obj_state:
rctx=0x7fe2053c2e80
obj=luminous-12-2-4:_multipart_Data128MB.2~F8MNbtuGGxv_tsWiC9lQC3r_M65EEdp
.meta state=0x55c249dfa608 s->prefetch_data=0
2018-03-02 13:59:04.931750 7fe2053ca700  2 req 61:0.004395:s3:POST
/luminous-12-2-4/Data128MB:init_multipart:completing
2018-03-02 13:59:04.932171 7fe2053ca700  2 req 61:0.004817:s3:POST
/luminous-12-2-4/Data128MB:init_multipart:op status=-1
2018-03-02 13:59:04.932194 7fe2053ca700  2 req 61:0.004841:s3:POST
/luminous-12-2-4/Data128MB:init_multipart:http status=403

Any hints as to what the problem could be?

Best regards,

Ingo Reimann
http://www.Dunkel.de/



Re: [ceph-users] Multipart Upload - POST fails

2018-03-07 Thread Ingo Reimann
No-one?


-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Ingo Reimann
Sent: Friday, 2 March 2018 14:15
To: ceph-users
Subject: [ceph-users] Multipart Upload - POST fails

Hi,

we have discovered a problem with our installation - multipart upload is not
working.

What we did:
* tried the upload with Cyberduck as well as with the script from
http://tracker.ceph.com/issues/12790
* tried against Jewel gateways and Luminous gateways from the old cluster
* tried against a 12.2.4 gateway with a Jewel-era cluster

Surprisingly, this is not a signature problem as in the issue above; instead
I get the following in the logs:

2018-03-02 13:59:04.927353 7fe2053ca700  1 == starting new request
req=0x7fe2053c42c0 =
2018-03-02 13:59:04.927383 7fe2053ca700  2 req 61:0.30::POST
/luminous-12-2-4/Data128MB::initializing for trans_id =
tx0003d-005a994a98-10c84997-default
2018-03-02 13:59:04.927396 7fe2053ca700 10 rgw api priority: s3=5
s3website=4
2018-03-02 13:59:04.927399 7fe2053ca700 10 host=cephrgw01.dunkel.de
2018-03-02 13:59:04.927422 7fe2053ca700 20 subdomain=
domain=cephrgw01.dunkel.de in_hosted_domain=1 in_hosted_domain_s3website=0
2018-03-02 13:59:04.927427 7fe2053ca700 20 final domain/bucket subdomain=
domain=cephrgw01.dunkel.de in_hosted_domain=1 in_hosted_domain_s3website=0
s->info.domain=cephrgw01.dunkel.de
s->info.request_uri=/luminous-12-2-4/Data128MB
2018-03-02 13:59:04.927447 7fe2053ca700 10 meta>>
HTTP_X_AMZ_CONTENT_SHA256
2018-03-02 13:59:04.927454 7fe2053ca700 10 meta>> HTTP_X_AMZ_DATE
2018-03-02 13:59:04.927459 7fe2053ca700 10 x>>
x-amz-content-sha256:254bcc3fc4f27172636df4bf32de9f107f620d559b20d760197e4
52b97453917
2018-03-02 13:59:04.927464 7fe2053ca700 10 x>> x-amz-date:20180302T125904Z
2018-03-02 13:59:04.927493 7fe2053ca700 20 get_handler
handler=22RGWHandler_REST_Obj_S3
2018-03-02 13:59:04.927500 7fe2053ca700 10
handler=22RGWHandler_REST_Obj_S3
2018-03-02 13:59:04.927505 7fe2053ca700  2 req 61:0.000152:s3:POST
/luminous-12-2-4/Data128MB::getting op 4
2018-03-02 13:59:04.927512 7fe2053ca700 10
op=28RGWInitMultipart_ObjStore_S3
2018-03-02 13:59:04.927514 7fe2053ca700  2 req 61:0.000161:s3:POST
/luminous-12-2-4/Data128MB:init_multipart:verifying requester
2018-03-02 13:59:04.927519 7fe2053ca700 20
rgw::auth::StrategyRegistry::s3_main_strategy_t: trying
rgw::auth::s3::AWSAuthStrategy
2018-03-02 13:59:04.927524 7fe2053ca700 20 rgw::auth::s3::AWSAuthStrategy:
trying rgw::auth::s3::S3AnonymousEngine
2018-03-02 13:59:04.927531 7fe2053ca700 20
rgw::auth::s3::S3AnonymousEngine denied with reason=-1
2018-03-02 13:59:04.927533 7fe2053ca700 20 rgw::auth::s3::AWSAuthStrategy:
trying rgw::auth::s3::LocalEngine
2018-03-02 13:59:04.927569 7fe2053ca700 10 v4 signature format =
48cc8c61a70dde17932d925f65f843116199c1ca10094db83e7de05bfbd57dc4
2018-03-02 13:59:04.927584 7fe2053ca700 10 v4 credential format =
8DGDGA57XL9YPM8DGEQQ/20180302/us-east-1/s3/aws4_request
2018-03-02 13:59:04.927587 7fe2053ca700 10 access key id =
8DGDGA57XL9YPM8DGEQQ
2018-03-02 13:59:04.927589 7fe2053ca700 10 credential scope =
20180302/us-east-1/s3/aws4_request
2018-03-02 13:59:04.927620 7fe2053ca700 10 canonical headers format =
content-type:application/octet-stream
date:Fri, 02 Mar 2018 12:59:04 GMT
host:cephrgw01.dunkel.de
x-amz-content-sha256:254bcc3fc4f27172636df4bf32de9f107f620d559b20d760197e4
52b97453917
x-amz-date:20180302T125904Z

2018-03-02 13:59:04.927634 7fe2053ca700 10 payload request hash =
254bcc3fc4f27172636df4bf32de9f107f620d559b20d760197e452b97453917
2018-03-02 13:59:04.927690 7fe2053ca700 10 canonical request = POST
/luminous-12-2-4/Data128MB uploads= content-type:application/octet-stream
date:Fri, 02 Mar 2018 12:59:04 GMT
host:cephrgw01.dunkel.de
x-amz-content-sha256:254bcc3fc4f27172636df4bf32de9f107f620d559b20d760197e4
52b97453917
x-amz-date:20180302T125904Z

content-type;date;host;x-amz-content-sha256;x-amz-date
254bcc3fc4f27172636df4bf32de9f107f620d559b20d760197e452b97453917
2018-03-02 13:59:04.927696 7fe2053ca700 10 canonical request hash =
54e9858263535b46a3c4e51b2ae5c1d0bf5e7a7690c5bba722eea749e7b936c4
2018-03-02 13:59:04.927716 7fe2053ca700 10 string to sign =
AWS4-HMAC-SHA256
20180302T125904Z
20180302/us-east-1/s3/aws4_request
54e9858263535b46a3c4e51b2ae5c1d0bf5e7a7690c5bba722eea749e7b936c4
2018-03-02 13:59:04.927920 7fe2053ca700 10 date_k=
dcef1f3be70873f1cb3240f7a56320e3c6763e7cf4bfae0e3182d2f9525292cd
2018-03-02 13:59:04.927954 7fe2053ca700 10 region_k  =
3d83dd9161cf7ba15e6c8c28d264f6cfce9b848e927359f34364a6c8c98209b7
2018-03-02 13:59:04.927963 7fe2053ca700 10 service_k =
e0708e00dc6b52aa1d889f45cd1dcced2bb1b2eee1b62e94ad9813c555e8eda9
2018-03-02 13:59:04.927972 7fe2053ca700 10 signing_k =
1ae362c4b2f1666786404fdb56c62d4f393635b2ce76d46ba325097fd3aa645e
2018-03-02 13:59:04.928021 7fe2053ca700 10 generated signature =
48cc8c61a70dde17932d925f65f843116199c1ca10094db83e7de05bfbd57dc4
[...]

[ceph-users] Multipart Failure SOLVED - Missing Pool not created automatically

2018-03-28 Thread Ingo Reimann
Hi all,

I was able to track down the problem:

Our zone config contains a default placement with a value for
"data_extra_pool". This pool, e.g. "dev.rgw.buckets.non-ec", did not exist
and could not be created automatically.

Logs show:
2018-03-28 11:26:33.151533 7f569b30e700  1 -- 10.197.115.31:0/388946122
--> 10.197.115.21:6789/0 -- pool_op(create pool 0 auid 0 tid 33 name
dev.rgw.buckets.non-ec v0) v4 -- 0x564e68394780 con 0
2018-03-28 11:26:33.151775 7f56f75cd700  1 -- 10.197.115.31:0/388946122
<== mon.0 10.197.115.21:6789/0 11  pool_op_reply(tid 33 (1) Operation
not permitted v1083) v1  43+0+0 (967791429 0 0) 0x564e68394780 con
0x564e66898000

I created the pool and that solved the problem.
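
Creating it by hand was just the usual pool creation (the pg count here is
only an example; on Luminous you probably also want the application tag):

  ceph osd pool create dev.rgw.buckets.non-ec 8 8
  ceph osd pool application enable dev.rgw.buckets.non-ec rgw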

Could anybody tell me which caps are needed so that this can happen
automatically?

Best regards,

Ingo


-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Ingo Reimann
Sent: Friday, 2 March 2018 14:15
To: ceph-users
Subject: [ceph-users] Multipart Upload - POST fails

[...]

[ceph-users] long running jobs with radosgw adminops

2018-12-19 Thread Ingo Reimann
Hi,

I am just getting started with the Admin Ops API and am wondering how to deal
with jobs that take hours or even days.

Worst example: I have a user with tons of files. When I send the DELETE
/admin/user request with purge-data=True, the HTTPS request times out, but
the job is still being processed. I have no idea which of our radosgws is
performing the job or what its status is. I have to keep querying the user
stats to see whether the user still exists and whether total_bytes is still
decreasing. When that stops, maybe the radosgw had a reboot and I have to
start the job again.
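
The call itself is the documented "remove user" admin op, e.g. (uid is a
placeholder; any client that signs the request with the admin user's keys
will do, s3curl is shown here only as an illustration):

  ./s3curl.pl --id=admin -- -X DELETE \
    "http://cephrgw01.dunkel.de/admin/user?uid=someuser&purge-data=True"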

Am I missing something, or is there a better way to deal with this?

Best regards,

Ingo Reimann

Dunkel GmbH 
http://www.Dunkel.de/
 