[ceph-users] problem with RGW

2015-07-31 Thread Butkeev Stas
Hello everybody

We have a Ceph cluster consisting of 8 hosts with 12 OSDs per host, using 2 TB
SATA disks.

[13:23]:[root@se087 ~]# ceph osd tree
ID  WEIGHT    TYPE NAME                         UP/DOWN REWEIGHT PRIMARY-AFFINITY
 -1 182.99203 root default
 -2 182.99203     region RU
 -3  91.49487         datacenter ru-msk-comp1p
 -9  22.87500             host 1
 48   1.90599                 osd.48                 up      1.0              1.0
 49   1.90599                 osd.49                 up      1.0              1.0
 50   1.90599                 osd.50                 up      1.0              1.0
 51   1.90599                 osd.51                 up      1.0              1.0
 52   1.90599                 osd.52                 up      1.0              1.0
 53   1.90599                 osd.53                 up      1.0              1.0
 54   1.90599                 osd.54                 up      1.0              1.0
 55   1.90599                 osd.55                 up      1.0              1.0
 56   1.90599                 osd.56                 up      1.0              1.0
 57   1.90599                 osd.57                 up      1.0              1.0
 58   1.90599                 osd.58                 up      1.0              1.0
 59   1.90599                 osd.59                 up      1.0              1.0
-10  22.87216             host 2
 60   1.90599                 osd.60                 up      1.0              1.0
 61   1.90599                 osd.61                 up      1.0              1.0
 62   1.90599                 osd.62                 up      1.0              1.0
 63   1.90599                 osd.63                 up      1.0              1.0
 64   1.90599                 osd.64                 up      1.0              1.0
 65   1.90599                 osd.65                 up      1.0              1.0
 66   1.90599                 osd.66                 up      1.0              1.0
 67   1.90599                 osd.67                 up      1.0              1.0
 69   1.90599                 osd.69                 up      1.0              1.0
 70   1.90599                 osd.70                 up      1.0              1.0
 71   1.90599                 osd.71                 up      1.0              1.0
 68   1.90627                 osd.68                 up      1.0              1.0
-11  22.87500             host 3
 72   1.90599                 osd.72                 up      1.0              1.0
 73   1.90599                 osd.73                 up      1.0              1.0
 74   1.90599                 osd.74                 up      1.0              1.0
 75   1.90599                 osd.75                 up      1.0              1.0
 76   1.90599                 osd.76                 up      1.0              1.0
 77   1.90599                 osd.77                 up      1.0              1.0
 78   1.90599                 osd.78                 up      1.0              1.0
 79   1.90599                 osd.79                 up      1.0              1.0
 80   1.90599                 osd.80                 up      1.0              1.0
 81   1.90599                 osd.81                 up      1.0              1.0
 82   1.90599                 osd.82                 up      1.0              1.0
 83   1.90599                 osd.83                 up      1.0              1.0
-12  22.87271             host 4
 84   1.90599                 osd.84                 up      1.0              1.0
 86   1.90599                 osd.86                 up      1.0              1.0
 89   1.90599                 osd.89                 up      1.0              1.0
 90   1.90599                 osd.90                 up      1.0              1.0
 91   1.90599                 osd.91                 up      1.0              1.0
 92   1.90599                 osd.92                 up      1.0              1.0
 93   1.90599                 osd.93                 up      1.0              1.0
 94   1.90599                 osd.94                 up      1.0              1.0
 95   1.90599                 osd.95                 up      1.0              1.0
 85   1.90627                 osd.85                 up      1.0              1.0
 88   1.90627                 osd.88                 up      1.0              1.0
 87   1.90627                 osd.87                 up      1.0              1.0
 -4  91.49716         datacenter ru-msk-vol51
 -5  22.87216             host 5
  1   1.90599                 osd.1                  up  1.000

Re: [ceph-users] problem with RGW

2015-07-31 Thread Brad Hubbard


- Original Message -
From: "Butkeev Stas" 
To: ceph-us...@ceph.com, ceph-commun...@lists.ceph.com, supp...@ceph.com
Sent: Friday, 31 July, 2015 9:10:40 PM
Subject: [ceph-users] problem with RGW

>Hello everybody
>
>We have a Ceph cluster consisting of 8 hosts with 12 OSDs per host, using 2 TB
>SATA disks.
>In the osd.0 log:
>
>2015-07-31 14:03:24.490774 7f2cd95c5700  0 log_channel(cluster) log [WRN] : 35 
>slow requests, 9 included below; oldest blocked for > 3003.952332 secs
>2015-07-31 14:03:24.490782 7f2cd95c5700  0 log_channel(cluster) log [WRN] : 
>slow request 960.179599 seconds old, received at 2015-07-31 13:47:24.311080: 
>osd_op(client.67321.0:7856 
>default.34169.37__shadow_.AnULxoR-51Q7fGdIVVP92CPeptlQJIm_226 [writefull 0~0] 
>26.f9af7c89 ack+ondisk+write+known_if_redirected e9467) currently no flag 
>points reached
>2015-07-31 14:03:24.490791 7f2cd95c5700  0 log_channel(cluster) log [WRN] : 
>slow request 960.179357 seconds old, received at 2015-07-31 13:47:24.311323: 
>osd_op(client.67321.0:7857 
>default.34169.37__shadow_.AnULxoR-51Q7fGdIVVP92CPeptlQJIm_226 [writefull 
>0~524288] 26.f9af7c89 ack+ondisk+write+known_if_redirected e9467) currently no 
>flag points reached
>2015-07-31 14:03:24.490794 7f2cd95c5700  0 log_channel(cluster) log [WRN] : 
>slow request 960.167539 seconds old, received at 2015-07-31 13:47:24.323141: 
>osd_op(client.67321.0:7858 
>default.34169.37__shadow_.AnULxoR-51Q7fGdIVVP92CPeptlQJIm_226 [write 
>524288~524288] 26.f9af7c89 ack+ondisk+write+known_if_redirected e9467) 
>currently no flag points reached
>2015-07-31 14:03:24.490797 7f2cd95c5700  0 log_channel(cluster) log [WRN] : 
>slow request 960.14 seconds old, received at 2015-07-31 13:47:24.335126: 
>osd_op(client.67321.0:7859 
>default.34169.37__shadow_.AnULxoR-51Q7fGdIVVP92CPeptlQJIm_226 [write 
>1048576~524288] 26.f9af7c89 ack+ondisk+write+known_if_redirected e9467) 
>currently no flag points reached
>2015-07-31 14:03:24.490801 7f2cd95c5700  0 log_channel(cluster) log [WRN] : 
>slow request 960.145867 seconds old, received at 2015-07-31 13:47:24.344813: 
>osd_op(client.67321.0:7860 
>default.34169.37__shadow_.AnULxoR-51Q7fGdIVVP92CPeptlQJIm_226 [write 
>1572864~524288] 26.f9af7c89 ack+ondisk+write+known_if_redirected e9467) 
>currently no flag points reached
>2015-07-31 14:03:25.491062 7f2cd95c5700  0 log_channel(cluster) log [WRN] : 35 
>slow requests, 4 included below; oldest blocked for > 3004.952621 secs
>2015-07-31 14:03:25.491078 7f2cd95c5700  0 log_channel(cluster) log [WRN] : 
>slow request 961.140790 seconds old, received at 2015-07-31 13:47:24.350178: 
>osd_op(client.67321.0:7861 
>default.34169.37__shadow_.AnULxoR-51Q7fGdIVVP92CPeptlQJIm_226 [write 
>2097152~524288] 26.f9af7c89 ack+ondisk+write+known_if_redirected e9467) 
>currently no flag points reached
>2015-07-31 14:03:25.491084 7f2cd95c5700  0 log_channel(cluster) log [WRN] : 
>slow request 961.097870 seconds old, received at 2015-07-31 13:47:24.393098: 
>osd_op(client.67321.0:7862 
>default.34169.37__shadow_.AnULxoR-51Q7fGdIVVP92CPeptlQJIm_226 [write 
>2621440~524288] 26.f9af7c89 ack+ondisk+write+known_if_redirected e9467) 
>currently no flag points reached
>2015-07-31 14:03:25.491089 7f2cd95c5700  0 log_channel(cluster) log [WRN] : 
>slow request 961.093229 seconds old, received at 2015-07-31 13:47:24.397740: 
>osd_op(client.67321.0:7863 
>default.34169.37__shadow_.AnULxoR-51Q7fGdIVVP92CPeptlQJIm_226 [write 
>3145728~524288] 26.f9af7c89 ack+ondisk+write+known_if_redirected e9467) 
>currently no flag points reached
>2015-07-31 14:03:25.491095 7f2cd95c5700  0 log_channel(cluster) log [WRN] : 
>slow request 961.002957 seconds old, received at 2015-07-31 13:47:24.488012: 
>osd_op(client.67321.0:7864 
>default.34169.37__shadow_.AnULxoR-51Q7fGdIVVP92CPeptlQJIm_226 [write 
>3670016~524288] 26.f9af7c89 ack+ondisk+write+known_if_redirected e9467) 
>currently no flag points reached
>
>How can I avoid these blocked requests? What is the root cause of this problem?
>

Do a "ceph pg dump" and look for the PGs in this state
(ack+ondisk+write+known_if_redirected), then do a "ceph pg [pgid] query" and post
the output here (if there aren't too many; otherwise a representative sample).
Also look carefully at the acting OSDs for these PGs and check the output of
"ceph daemon /var/run/ceph/ceph-osd.NNN.asok dump_ops_in_flight". There could be
problems with these OSDs slowing down the requests, including hardware problems,
so check thoroughly.
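Before running those commands, it can help to see which placement groups the slow requests actually map to, straight from the OSD log. A minimal sketch (the log format is assumed to match the osd.0 excerpt quoted above; the object name in the sample is shortened for readability):

```python
import re
from collections import defaultdict

# Matches the request age, object name and PG id in a "slow request" log line,
# based on the osd.0 excerpt quoted above.
PATTERN = re.compile(
    r"slow request (?P<age>[\d.]+) seconds old.*?"
    r"osd_op\(\S+ (?P<obj>\S+) \[.*?\] (?P<pg>\S+) ",
    re.DOTALL,
)

def group_slow_requests(log_text):
    """Return {pg_id: [ages_in_seconds]} for every slow-request entry found."""
    by_pg = defaultdict(list)
    for m in PATTERN.finditer(log_text):
        by_pg[m.group("pg")].append(float(m.group("age")))
    return dict(by_pg)

sample = (
    "2015-07-31 14:03:24.490782 7f2cd95c5700  0 log_channel(cluster) log [WRN] : "
    "slow request 960.179599 seconds old, received at 2015-07-31 13:47:24.311080: "
    "osd_op(client.67321.0:7856 default.34169.37__shadow_.X_226 [writefull 0~0] "
    "26.f9af7c89 ack+ondisk+write+known_if_redirected e9467) currently no flag "
    "points reached"
)
print(group_slow_requests(sample))  # -> {'26.f9af7c89': [960.179599]}
```

If all slow requests cluster on one or two PGs, "ceph pg [pgid] query" on just those is usually enough to spot the misbehaving acting OSDs.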
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Problem with RGW after update to Jewel

2016-07-24 Thread Frank Enderle
Hi,

a while ago I updated a cluster from Infernalis to Jewel. After the update some 
problems occurred, which I fixed (I had to create some additional pools, with 
help from the IRC channel) - so the cluster ran fine until we tried to add an 
additional bucket. Now I get the following error in the error log:

2016-07-24 
19:50:45.978005 7f6ce97fa700  1 == starting new request req=0x7f6ce97f4710 
=
2016-07-24 
19:50:46.021122 7f6ce97fa700  0 sending create_bucket request to master 
zonegroup
2016-07-24 
19:50:46.021135 7f6ce97fa700  0 ERROR: endpoints not configured for upstream 
zone
2016-07-24 
19:50:46.021148 7f6ce97fa700  0 WARNING: set_req_state_err err_no=5 resorting 
to 500
2016-07-24 
19:50:46.021249 7f6ce97fa700  1 == req done req=0x7f6ce97f4710 op status=-5 
http_status=500 ==
2016-07-24 
19:50:46.021304 7f6ce97fa700  1 civetweb: 0x7f6dac001420: 10.42.20.5 - - 
[24/Jul/2016:19:50:45 
+] "PUT /abc/ 
HTTP/1.1" 500 0 - Cyberduck/4.7.3.18402 (Mac OS X/10.11.6) (x86_64)

I already tried to fix the problem using the script at

https://www.mail-archive.com/ceph-users@lists.ceph.com/msg28620.html

with the outcome that all users disappeared and no buckets could be accessed. So 
I restored the backup of .rgw.root and it now works again, but I still can't 
create buckets. Obviously something got mixed up with the zone/zonegroup 
configuration during the update.

Would somebody be able to take a look at this? I'm happy to provide all the 
required files; just name them.

Thanks,

Frank


Re: [ceph-users] Problem with RGW after update to Jewel

2016-07-25 Thread Wido den Hollander

> On 24 July 2016 at 21:58, Frank Enderle wrote:

It seems there is no endpoint configured. Can you dump the region and zone 
configuration using the radosgw-admin tool?

From the top of my head:

$ radosgw-admin zone get --rgw-zone=zoneX
$ radosgw-admin region get --rgw-region=regionY

Wido



Re: [ceph-users] Problem with RGW after update to Jewel

2016-07-25 Thread Shilpa Manjarabad Jagannath

- Original Message -
> From: "Frank Enderle" 
> To: ceph-users@lists.ceph.com
> Sent: Monday, July 25, 2016 1:28:10 AM
> Subject: [ceph-users] Problem with RGW after update to Jewel

It looks like http://tracker.ceph.com/issues/16627, pending backport.




Re: [ceph-users] Problem with RGW after update to Jewel

2016-07-25 Thread Frank Enderle
Hi,

here the outputs:

radosgw-admin zone get --rgw-zone=default
{
    "id": "default",
    "name": "default",
    "domain_root": ".rgw",
    "control_pool": ".rgw.control",
    "gc_pool": ".rgw.gc",
    "log_pool": ".log",
    "intent_log_pool": ".intent-log",
    "usage_log_pool": ".usage",
    "user_keys_pool": ".users",
    "user_email_pool": ".users.email",
    "user_swift_pool": ".users.swift",
    "user_uid_pool": ".users.uid",
    "system_key": {
        "access_key": "",
        "secret_key": ""
    },
    "placement_pools": [
        {
            "key": "default-placement",
            "val": {
                "index_pool": ".rgw.buckets.index",
                "data_pool": ".rgw.buckets",
                "data_extra_pool": ".rgw.buckets.extra",
                "index_type": 0
            }
        }
    ],
    "metadata_heap": ".rgw.meta",
    "realm_id": ""
}

radosgw-admin --cluster pbs region get --rgw-zone=default
failed to init zonegroup: (2) No such file or directory

Does that shed any light on the problem?

Frank.


From: Wido den Hollander
Date: 25 July 2016 at 09:48:40
To: ceph-users@lists.ceph.com, Frank Enderle
Subject: Re: [ceph-users] Problem with RGW after update to Jewel




Re: [ceph-users] Problem with RGW after update to Jewel

2016-07-25 Thread Frank Enderle
It most certainly looks like the same problem. Is there a way to patch the 
configuration by hand to get the cluster back into a working state?

--

From: Shilpa Manjarabad Jagannath
Date: 25 July 2016 at 10:34:42
To: Frank Enderle
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Problem with RGW after update to Jewel




Re: [ceph-users] Problem with RGW after update to Jewel

2016-07-26 Thread Orit Wasserman
You need to set the default zone as the master zone. You can try:

radosgw-admin zonegroup set < zg.json

where zg.json is the JSON returned by "radosgw-admin zonegroup get" with the
master_zone field set to "default".

Orit
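The edit described above is a one-field change to the zonegroup dump. A minimal sketch of the round-trip (the get/set commands come from the message above; the Python step just rewrites the field, and the shortened JSON sample is illustrative):

```python
import json

def set_master_zone(zonegroup_json, zone_id):
    """Rewrite the master_zone field of a 'radosgw-admin zonegroup get' dump."""
    zg = json.loads(zonegroup_json)
    zg["master_zone"] = zone_id
    return json.dumps(zg, indent=4)

# Workflow sketch:
#   radosgw-admin zonegroup get > zg.json
#   (rewrite master_zone as below)
#   radosgw-admin zonegroup set < zg.json
sample_dump = '{"id": "default", "name": "default", "master_zone": ""}'
print(set_master_zone(sample_dump, "default"))
```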

On Mon, Jul 25, 2016 at 11:17 PM, Frank Enderle wrote:


Re: [ceph-users] Problem with RGW after update to Jewel

2016-07-26 Thread Frank Enderle
I get this error when I try to execute the command:

radosgw-admin --cluster=pbs zonegroup get
failed to init zonegroup: (2) No such file or directory

also with

radosgw-admin --cluster=pbs zonegroup get --rgw-zone=default
failed to init zonegroup: (2) No such file or directory


--

anamica GmbH
Heppacher Str. 39
71404 Korb

Telefon:   +49 7151 1351565 0
Telefax: +49 7151 1351565 9
E-Mail: frank.ende...@anamica.de
Internet: www.anamica.de


Handelsregister: AG Stuttgart HRB 732357
Geschäftsführer: Yvonne Holzwarth, Frank Enderle


From: Orit Wasserman
Date: 26 July 2016 at 09:55:58
To: Frank Enderle
Cc: Shilpa Manjarabad Jagannath, ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Problem with RGW after update to Jewel



Re: [ceph-users] Problem with RGW after update to Jewel

2016-07-26 Thread Orit Wasserman
Does adding --rgw-zonegroup=default help?

On Tue, Jul 26, 2016 at 11:09 AM, Frank Enderle wrote:


Re: [ceph-users] Problem with RGW after update to Jewel

2016-07-26 Thread Frank Enderle
Yes! That worked :-)

Now I changed the master_zone to default like so:

{
    "id": "default",
    "name": "default",
    "api_name": "",
    "is_master": "true",
    "endpoints": [],
    "hostnames": [
        "***REDACTED***",
        "***REDACTED***",
        "***REDACTED***"
    ],
    "hostnames_s3website": [],
    "master_zone": "default",
    "zones": [
        {
            "id": "default",
            "name": "default",
            "endpoints": [],
            "log_meta": "false",
            "log_data": "false",
            "bucket_index_max_shards": 0,
            "read_only": "false"
        }
    ],
    "placement_targets": [
        {
            "name": "default-placement",
            "tags": []
        }
    ],
    "default_placement": "default-placement",
    "realm_id": ""
}

and

radosgw-admin --cluster=pbs zonegroup set --rgw-zonegroup=default

gives me

failed to init realm: (2) No such file or directory




From: Orit Wasserman
Date: 26 July 2016 at 11:27:43
To: Frank Enderle
Cc: ceph-users@lists.ceph.com, Shilpa Manjarabad Jagannath
Subject: Re: [ceph-users] Problem with RGW after update to Jewel

does adding --rgw-zonegroup=default helps?

On Tue, Jul 26, 2016 at 11:09 AM, Frank Enderle
 wrote:
> I get this error when I try to execute the command:
>
> radosgw-admin --cluster=pbs zonegroup get
> failed to init zonegroup: (2) No such file or directory
>
> also with
>
> radosgw-admin --cluster=pbs zonegroup get --rgw-zone=default
> failed to init zonegroup: (2) No such file or directory
>
>
> --
>
> anamica GmbH
> Heppacher Str. 39
> 71404 Korb
>
> Telefon: +49 7151 1351565 0
> Telefax: +49 7151 1351565 9
> E-Mail: frank.ende...@anamica.de
> Internet: www.anamica.de
>
>
> Handelsregister: AG Stuttgart HRB 732357
> Geschäftsführer: Yvonne Holzwarth, Frank Enderle
>
>
> From: Orit Wasserman 
> Date: 26 July 2016 at 09:55:58
> To: Frank Enderle 
> Cc: Shilpa Manjarabad Jagannath ,
> ceph-users@lists.ceph.com 
>
> Subject: Re: [ceph-users] Problem with RGW after update to Jewel
>
> you need to set the default zone as master zone.
> you can try:
> radosgw-admin zonegroup set < zg.json
> where zg.json is the JSON returned by radosgw-admin zonegroup get,
> with the master_zone field set to "default"
>
> Orit
>
> On Mon, Jul 25, 2016 at 11:17 PM, Frank Enderle
>  wrote:
>> It most certainly looks very much like the same problem.. Is there a way
>> to
>> patch the configuration by hand to get the cluster back in a working
>> state?
>>
>> --
>>
>> From: Shilpa Manjarabad Jagannath 
>> Date: 25 July 2016 at 10:34:42
>> To: Frank Enderle 
>> Cc: ceph-users@lists.ceph.com 
>> Subject: Re: [ceph-users] Problem with RGW after update to Jewel
>>
>>
>> - Original Message -
>>> From: "Frank Enderle" 
>>> To: ceph-users@lists.ceph.com
>>> Sent: Monday, July 25, 2016 1:28:10 AM
>>> Subject: [ceph-users] Problem with RGW after update to Jewel
>>>
>>> Hi,
>>>
>>> a while ago I updated a cluster from Infernalis to Jewel. After the
>>> update some problems occurred, which I fixed (I had to create some
>>> additional pools, which I was helped with in the IRC channel) - so the
>>> cluster ran fine until we tried to add an additional bucket. Now I get
>>> the following error in the error log:
>>>
>>> 2016-07-24 19:50:45.978005 7f6ce97fa700 1 == starting new request req=0x7f6ce97f4710 =
>>> 2016-07-24 19:50:46.021122 7f6ce97fa700 0 sending create_bucket request to master zonegroup
>>> 2016-07-24 19:50:46.021135 7f6ce97fa700 0 ERROR: endpoints not configured for upstream zone
>>> 2016-07-24 19:50:46.021148 7f6ce97fa700 0 WARNING: set_req_state_err err_no=5 resorting to 500
>>> 2016-07-24 19:50:46.02124

Re: [ceph-users] Problem with RGW after update to Jewel

2016-07-26 Thread Orit Wasserman
Let's try:

radosgw-admin realm create --rgw-realm=<name> --default

radosgw-admin zonegroup set --rgw-zonegroup=default < zg.json

radosgw-admin period update --commit
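Put together with the cluster name used elsewhere in this thread, the three steps above amount to the following; a sketch only, not verified against this cluster, and the realm name is arbitrary:

```shell
# 1. Create a realm (the name is arbitrary) and make it the default.
radosgw-admin --cluster=pbs realm create --rgw-realm=pbs --default

# 2. Dump the zonegroup, set "master_zone": "default" in the file,
#    then load the edited JSON back.
radosgw-admin --cluster=pbs zonegroup get --rgw-zonegroup=default > zg.json
# (edit zg.json here: set "master_zone": "default")
radosgw-admin --cluster=pbs zonegroup set --rgw-zonegroup=default < zg.json

# 3. Commit the updated period so the change takes effect.
radosgw-admin --cluster=pbs period update --commit
```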

In the next jewel release the upgrade will be smoother.

Orit

On Tue, Jul 26, 2016 at 11:34 AM, Frank Enderle
 wrote:
> Yes! that worked :-)
>
> now I changed the master_zone to default like so:
>
> {
> "id": "default",
> "name": "default",
> "api_name": "",
> "is_master": "true",
> "endpoints": [],
> "hostnames": [
> "***REDACTED***",
> "***REDACTED***",
> "***REDACTED***"
> ],
> "hostnames_s3website": [],
> "master_zone": "default",
> "zones": [
> {
> "id": "default",
> "name": "default",
> "endpoints": [],
> "log_meta": "false",
> "log_data": "false",
> "bucket_index_max_shards": 0,
> "read_only": "false"
> }
> ],
> "placement_targets": [
> {
> "name": "default-placement",
> "tags": []
> }
> ],
> "default_placement": "default-placement",
> "realm_id": ""
> }
>
> and
>
> radosgw-admin --cluster=pbs zonegroup set --rgw-zonegroup=default
>
> gives me
>
> failed to init realm: (2) No such file or directory
>
>
> From: Orit Wasserman 
> Date: 26 July 2016 at 11:27:43
> To: Frank Enderle 
> Cc: ceph-users@lists.ceph.com , Shilpa Manjarabad
> Jagannath 
>
> Subject:  Re: [ceph-users] Problem with RGW after update to Jewel
>
> does adding --rgw-zonegroup=default help?
>
> On Tue, Jul 26, 2016 at 11:09 AM, Frank Enderle
>  wrote:
>> I get this error when I try to execute the command:
>>
>> radosgw-admin --cluster=pbs zonegroup get
>> failed to init zonegroup: (2) No such file or directory
>>
>> also with
>>
>> radosgw-admin --cluster=pbs zonegroup get --rgw-zone=default
>> failed to init zonegroup: (2) No such file or directory
>>
>>
>> From: Orit Wasserman 
>> Date: 26 July 2016 at 09:55:58
>> To: Frank Enderle 
>> Cc: Shilpa Manjarabad Jagannath ,
>> ceph-users@lists.ceph.com 
>>
>> Subject: Re: [ceph-users] Problem with RGW after update to Jewel
>>
>> you need to set the default zone as master zone.
>> you can try:
>> radosgw-admin zonegroup set < zg.json
>> where zg.json is the JSON returned by radosgw-admin zonegroup get,
>> with the master_zone field set to "default"
>>
>> Orit
>>

Re: [ceph-users] Problem with RGW after update to Jewel

2016-07-26 Thread Frank Enderle
What should I choose for realm name? I never selected one - does it matter what 
I put there?

--

anamica GmbH
Heppacher Str. 39
71404 Korb

Telefon:   +49 7151 1351565 0
Telefax: +49 7151 1351565 9
E-Mail: frank.ende...@anamica.de
Internet: www.anamica.de


Handelsregister: AG Stuttgart HRB 732357
Geschäftsführer: Yvonne Holzwarth, Frank Enderle


From: Orit Wasserman <mailto:owass...@redhat.com>
Date: 26 July 2016 at 12:13:21
To: Frank Enderle <mailto:frank.ende...@anamica.de>
Cc: ceph-users@lists.ceph.com 
<mailto:ceph-users@lists.ceph.com>, Shilpa 
Manjarabad Jagannath <mailto:smanj...@redhat.com>
Subject:  Re: [ceph-users] Problem with RGW after update to Jewel

Let's try:

radosgw-admin realm create --rgw-realm=<name> --default

radosgw-admin zonegroup set --rgw-zonegroup=default < json

radosgw-admin period update --commit

In the next jewel release the upgrade will be smoother.

Orit

On Tue, Jul 26, 2016 at 11:34 AM, Frank Enderle
 wrote:
> Yes! that worked :-)
>
> now I changed the master_zone to default like so:
>
> {
> "id": "default",
> "name": "default",
> "api_name": "",
> "is_master": "true",
> "endpoints": [],
> "hostnames": [
> "***REDACTED***",
> "***REDACTED***",
> "***REDACTED***"
> ],
> "hostnames_s3website": [],
> "master_zone": "default",
> "zones": [
> {
> "id": "default",
> "name": "default",
> "endpoints": [],
> "log_meta": "false",
> "log_data": "false",
> "bucket_index_max_shards": 0,
> "read_only": "false"
> }
> ],
> "placement_targets": [
> {
> "name": "default-placement",
> "tags": []
> }
> ],
> "default_placement": "default-placement",
> "realm_id": ""
> }
>
> and
>
> radosgw-admin --cluster=pbs zonegroup set --rgw-zonegroup=default
>
> gives me
>
> failed to init realm: (2) No such file or directory
>
>
> From: Orit Wasserman 
> Date: 26 July 2016 at 11:27:43
> To: Frank Enderle 
> Cc: ceph-users@lists.ceph.com , Shilpa Manjarabad
> Jagannath 
>
> Subject: Re: [ceph-users] Problem with RGW after update to Jewel
>
> does adding --rgw-zonegroup=default help?
>
> On Tue, Jul 26, 2016 at 11:09 AM, Frank Enderle
>  wrote:
>> I get this error when I try to execute the command:
>>
>> radosgw-admin --cluster=pbs zonegroup get
>> failed to init zonegroup: (2) No such file or directory
>>
>> also with
>>
>> radosgw-admin --cluster=pbs zonegroup get --rgw-zone=default
>> failed to init zonegroup: (2) No such file or directory
>>
>>
>> From: Orit Wasserman 
>> Date: 26 July 2016 at 09:55:58
>> To: Frank Enderle 
>> Cc: Shilpa Manjarabad Jagannath ,
>> ceph-users@lists.ceph.com 
>>
>> Subject: Re: [ceph-users] Problem with RGW after update to Jewel
>>
>> you need to set the default zone as master zone.
>> you can try:
>> radosgw-admin zonegroup set < zg.json
>> where zg.json is the JSON returned by radosgw-admin zonegroup get,
>> with the master_zone field set to "default"
>>
>> Orit
>>

Re: [ceph-users] Problem with RGW after update to Jewel

2016-07-26 Thread Orit Wasserman
It doesn't matter; you can call it "gold" like in the documentation.

On Tue, Jul 26, 2016 at 12:15 PM, Frank Enderle
 wrote:
> What should I choose for realm name? I never selected one - does it matter
> what I put there?
>
> From: Orit Wasserman 
> Date: 26 July 2016 at 12:13:21
>
> To: Frank Enderle 
> Cc: ceph-users@lists.ceph.com , Shilpa Manjarabad
> Jagannath 
> Subject:  Re: [ceph-users] Problem with RGW after update to Jewel
>
> Let's try:
>
> radosgw-admin realm create --rgw-realm=<name> --default
>
> radosgw-admin zonegroup set --rgw-zonegroup=default < json
>
> radosgw-admin period update --commit
>
> In the next jewel release the upgrade will be smoother.
>
> Orit
>
> On Tue, Jul 26, 2016 at 11:34 AM, Frank Enderle
>  wrote:
>> Yes! that worked :-)
>>
>> now I changed the master_zone to default like so:
>>
>> {
>> "id": "default",
>> "name": "default",
>> "api_name": "",
>> "is_master": "true",
>> "endpoints": [],
>> "hostnames": [
>> "***REDACTED***",
>> "***REDACTED***",
>> "***REDACTED***"
>> ],
>> "hostnames_s3website": [],
>> "master_zone": "default",
>> "zones": [
>> {
>> "id": "default",
>> "name": "default",
>> "endpoints": [],
>> "log_meta": "false",
>> "log_data": "false",
>> "bucket_index_max_shards": 0,
>> "read_only": "false"
>> }
>> ],
>> "placement_targets": [
>> {
>> "name": "default-placement",
>> "tags": []
>> }
>> ],
>> "default_placement": "default-placement",
>> "realm_id": ""
>> }
>>
>> and
>>
>> radosgw-admin --cluster=pbs zonegroup set --rgw-zonegroup=default
>>
>> gives me
>>
>> failed to init realm: (2) No such file or directory
>>
>>
>> From: Orit Wasserman 
>> Date: 26 July 2016 at 11:27:43
>> To: Frank Enderle 
>> Cc: ceph-users@lists.ceph.com , Shilpa
>> Manjarabad
>> Jagannath 
>>
>> Subject: Re: [ceph-users] Problem with RGW after update to Jewel
>>
>> does adding --rgw-zonegroup=default help?
>>
>> On Tue, Jul 26, 2016 at 11:09 AM, Frank Enderle
>>  wrote:
>>> I get this error when I try to execute the command:
>>>
>>> radosgw-admin --cluster=pbs zonegroup get
>>> failed to init zonegroup: (2) No such file or directory
>>>
>>> also with
>>>
>>> radosgw-admin --cluster=pbs zonegroup get --rgw-zone=default
>>> failed to init zonegroup: (2) No such file or directory
>>>
>>>
>>> From: Orit Wasserman 
>>> Date: 26 July 2016 at 09:55:58
>>> To: Frank Enderle 
>>> Cc: Shilpa Manjarabad Jagannath ,
>>> ceph-users@lists.ceph.com 
>>>
>>> Subject: Re: [ceph-users] Problem with RGW after update to Jewel
>>>
>>> you need to set the default zone as master zone.
>>> you can try:
>>> radosgw-admin zonegroup set < zg.json
>>> where zg.json is the JSON returned by radosgw-admin zonegroup get,
>>> with the master_zone field set to "default"
>>>
>>> Orit
>>>
>>>

Re: [ceph-users] Problem with RGW after update to Jewel

2016-07-26 Thread Frank Enderle
OK - I now did the following:

radosgw-admin --cluster=pbs realm create --rgw-realm=pbs --default
2016-07-26 10:34:15.216404 7fdf346bc9c0  0 error read_lastest_epoch 
.rgw.root:periods.d94c5208-fc1f-4e02-9773-bc709e4d8a34.latest_epoch
{
"id": "98089a5c-6c61-4cc2-a5d8-fce0cb0a9704",
"name": "pbs",
"current_period": "d94c5208-fc1f-4e02-9773-bc709e4d8a34",
"epoch": 1
}

radosgw-admin --cluster=pbs zonegroup get --rgw-zonegroup=default >zg.json

set master_zone to default

radosgw-admin --cluster=pbs zonegroup set --rgw-zonegroup=default <zg.json
{
    "id": "default",
    "name": "default",
    "api_name": "",
    "is_master": "true",
    "endpoints": [],
    "hostnames": [
        "***REDACTED***",
        "***REDACTED***",
        "***REDACTED***"
    ],
    "hostnames_s3website": [],
    "master_zone": "default",
    "zones": [
        {
            "id": "default",
            "name": "default",
            "endpoints": [],
            "log_meta": "false",
            "log_data": "false",
            "bucket_index_max_shards": 0,
            "read_only": "false"
        }
    ],
    "placement_targets": [
        {
            "name": "default-placement",
            "tags": []
        }
    ],
    "default_placement": "default-placement",
    "realm_id": "98089a5c-6c61-4cc2-a5d8-fce0cb0a9704"
}

radosgw-admin --cluster=pbs period update --commit
2016-07-26 10:34:56.160525 7f0e22ccf9c0  0 RGWZoneParams::create(): error creating default zone params: (17) File exists
2016-07-26 10:34:56.264927 7f0e22ccf9c0  0 error read_lastest_epoch .rgw.root:periods.98089a5c-6c61-4cc2-a5d8-fce0cb0a9704:staging.latest_epoch
cannot commit period: period does not have a master zone of a master zonegroup
failed to commit period: (22) Invalid argument


From: Orit Wasserman <mailto:owass...@redhat.com>
Date: 26 July 2016 at 12:32:58
To: Frank Enderle <mailto:frank.ende...@anamica.de>
Cc: ceph-users@lists.ceph.com 
<mailto:ceph-users@lists.ceph.com>, Shilpa 
Manjarabad Jagannath <mailto:smanj...@redhat.com>
Subject:  Re: [ceph-users] Problem with RGW after update to Jewel

It doesn't matter; you can call it "gold" like in the documentation.

On Tue, Jul 26, 2016 at 12:15 PM, Frank Enderle
 wrote:
> What should I choose for realm name? I never selected one - does it matter
> what I put there?
>
> From: Orit Wasserman 
> Date: 26 July 2016 at 12:13:21
>
> To: Frank Enderle 
> Cc: ceph-users@lists.ceph.com , Shilpa Manjarabad
> Jagannath 
> Subject: Re: [ceph-users] Problem with RGW after update to Jewel
>
> Let's try:
>
> radosgw-admin realm create --rgw-realm=<name> --default
>
> radosgw-admin zonegroup set --rgw-zonegroup=default < json
>
> radosgw-admin period update --commit
>
> In the next jewel release the upgrade will be smoother.
>
> Orit
>
> On Tue, Jul 26, 2016 at 11:34 AM, Frank Enderle
>  wrote:
>> Yes! that worked :-)
>>
>> now I changed the master_zone to default like so:
>>
>> {
>> "id": "default",
>> "name": "default",
>> "api_name": "",
>> "is_master": "true",
>> "endpoints": [],
>> "hostnames": [
>> "***REDACTED***",
>> "***REDACTED***",
>> "***REDACTED***"
>> ],
>> "hostnames_s3website": [],
>> "master_zone": "default",
>> "zones": [
>> {
>> "id": "default",
>> "name": "default",
>> "endpoints": [],
>> "log_meta": "false",
>> "log_data": "false",
>> "bucket_index_max_shards": 0,
>> "read_only": "false"
>> }
>> ],
>> "placement_targets": [
>> {
>> "name": "default-placement",
>> "tags": []
>> }
>> ],
>> "default_placement": "default-placement",
>> "realm_id": ""
>> }
>>
>> and
>>
>> radosgw-admin --cluster=pbs zonegroup set --rgw-zonegroup=default
>>
>> gives me
>>
>> failed to init realm: (2) No such file or directory
>>
>>
>> From: Orit Wasserman 
>> Date: 26 July 2016 at 11:27:43
>> To: Frank Enderle 
>> Cc: ceph-users@lists.ceph.com , Shilpa
>> Manjarabad
>> Jagannath 
>>
>> Subject: Re: [ceph-users] Problem with RGW after update to Jewel
>>
>> does adding --rgw-zonegroup=default help?
>>
>> On Tue, Jul 26, 2016 at 11:09 AM, Frank Enderle
>>  wrote:
>>> I get this error when I try to execute the command:
>>>
>>> radosgw-admin --cluster=pbs zonegroup get
>>> failed to init zonegroup: (2) No such file or directory
>>>
>>> also with
>>>
>>> radosgw-admin --cluster=pbs zonegroup get --rgw-zone=default
>>> failed to init zonegroup: (2) No such file or directory
>>>
>>>

Re: [ceph-users] Problem with RGW after update to Jewel

2016-07-26 Thread Orit Wasserman
can you post the output of radosgw-admin zonegroup-map get
and radosgw-admin zonegroup get --rgw-zonegroup=default?

On Tue, Jul 26, 2016 at 12:36 PM, Frank Enderle
 wrote:
> OK - I now did the following:
>
> radosgw-admin --cluster=pbs realm create --rgw-realm=pbs --default
> 2016-07-26 10:34:15.216404 7fdf346bc9c0  0 error read_lastest_epoch
> .rgw.root:periods.d94c5208-fc1f-4e02-9773-bc709e4d8a34.latest_epoch
> {
> "id": "98089a5c-6c61-4cc2-a5d8-fce0cb0a9704",
> "name": "pbs",
> "current_period": "d94c5208-fc1f-4e02-9773-bc709e4d8a34",
> "epoch": 1
> }
>
> radosgw-admin --cluster=pbs zonegroup get --rgw-zonegroup=default >zg.json
>
> set master_zone to default
>
> radosgw-admin --cluster=pbs zonegroup set --rgw-zonegroup=default <zg.json
> {
> "id": "default",
> "name": "default",
> "api_name": "",
> "is_master": "true",
> "endpoints": [],
> "hostnames": [
> "***REDACTED***",
> "***REDACTED***",
> "***REDACTED***"
> ],
> "hostnames_s3website": [],
> "master_zone": "default",
> "zones": [
> {
> "id": "default",
> "name": "default",
> "endpoints": [],
> "log_meta": "false",
> "log_data": "false",
> "bucket_index_max_shards": 0,
> "read_only": "false"
> }
> ],
> "placement_targets": [
> {
> "name": "default-placement",
> "tags": []
> }
> ],
> "default_placement": "default-placement",
> "realm_id": "98089a5c-6c61-4cc2-a5d8-fce0cb0a9704"
> }
>
> radosgw-admin --cluster=pbs period update --commit
> 2016-07-26 10:34:56.160525 7f0e22ccf9c0  0 RGWZoneParams::create(): error
> creating default zone params: (17) File exists
> 2016-07-26 10:34:56.264927 7f0e22ccf9c0  0 error read_lastest_epoch
> .rgw.root:periods.98089a5c-6c61-4cc2-a5d8-fce0cb0a9704:staging.latest_epoch
> cannot commit period: period does not have a master zone of a master
> zonegroup
> failed to commit period: (22) Invalid argument
>
>
> From: Orit Wasserman 
> Date: 26 July 2016 at 12:32:58
> To: Frank Enderle 
> Cc: ceph-users@lists.ceph.com , Shilpa Manjarabad
> Jagannath 
>
> Subject:  Re: [ceph-users] Problem with RGW after update to Jewel
>
> It doesn't matter; you can call it "gold" like in the documentation.
>
> On Tue, Jul 26, 2016 at 12:15 PM, Frank Enderle
>  wrote:
>> What should I choose for realm name? I never selected one - does it matter
>> what I put there?
>>
>> From: Orit Wasserman 
>> Date: 26 July 2016 at 12:13:21
>>
>> To: Frank Enderle 
>> Cc: ceph-users@lists.ceph.com , Shilpa
>> Manjarabad
>> Jagannath 
>> Subject: Re: [ceph-users] Problem with RGW after update to Jewel
>>
>> Let's try:
>>
>> radosgw-admin realm create --rgw-realm=<name> --default
>>
>> radosgw-admin zonegroup set --rgw-zonegroup=default < json
>>
>> radosgw-admin period update --commit
>>
>> In the next jewel release the upgrade will be smoother.
>>
>> Orit
>>
>> On Tue, Jul 26, 2016 at 11:34 AM, Frank Enderle
>>  wrote:
>>> Yes! that worked :-)
>>>

Re: [ceph-users] Problem with RGW after update to Jewel

2016-07-26 Thread Frank Enderle
radosgw-admin --cluster=pbs zonegroup-map get
{
    "zonegroups": [],
    "master_zonegroup": "",
    "bucket_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    }
}

radosgw-admin --cluster=pbs zonegroup get --rgw-zonegroup=default
{
    "id": "default",
    "name": "default",
    "api_name": "",
    "is_master": "true",
    "endpoints": [],
    "hostnames": [
        "**REDACTED***",
        "**REDACTED***",
        "**REDACTED***"
    ],
    "hostnames_s3website": [],
    "master_zone": "",
    "zones": [
        {
            "id": "default",
            "name": "default",
            "endpoints": [],
            "log_meta": "false",
            "log_data": "false",
            "bucket_index_max_shards": 0,
            "read_only": "false"
        }
    ],
    "placement_targets": [
        {
            "name": "default-placement",
            "tags": []
        }
    ],
    "default_placement": "default-placement",
    "realm_id": ""
}
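The two dumps above can be checked mechanically before retrying the period commit. A small local sketch (field names taken from the outputs above; the problem strings are my own paraphrase, not radosgw-admin messages):

```python
def diagnose(zonegroup_map: dict, zonegroup: dict) -> list:
    """Flag the conditions visible in the dumps above that would make
    `radosgw-admin period update --commit` refuse to commit with
    'period does not have a master zone of a master zonegroup'."""
    problems = []
    if not zonegroup_map.get("zonegroups"):
        problems.append("zonegroup map lists no zonegroups")
    if not zonegroup_map.get("master_zonegroup"):
        problems.append("no master zonegroup in the map")
    if not zonegroup.get("master_zone"):
        problems.append("zonegroup has an empty master_zone")
    if not zonegroup.get("realm_id"):
        problems.append("zonegroup has no realm_id")
    return problems
```

Feeding it the JSON above would report all four conditions, which matches the failed commit: the master_zone edit did not persist and the zonegroup map is empty.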


thanks for looking into it..


--

anamica GmbH
Heppacher Str. 39
71404 Korb

Telefon:   +49 7151 1351565 0
Telefax: +49 7151 1351565 9
E-Mail: frank.ende...@anamica.de
Internet: www.anamica.de


Handelsregister: AG Stuttgart HRB 732357
Geschäftsführer: Yvonne Holzwarth, Frank Enderle


From: Orit Wasserman <mailto:owass...@redhat.com>
Date: 26 July 2016 at 13:46:07
To: Frank Enderle <mailto:frank.ende...@anamica.de>
Cc: ceph-users@lists.ceph.com 
<mailto:ceph-users@lists.ceph.com>, Shilpa 
Manjarabad Jagannath <mailto:smanj...@redhat.com>
Subject:  Re: [ceph-users] Problem with RGW after update to Jewel

radosgw-admin zonegroupmap
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com