Re: [ceph-users] Luminous RGW Metadata Search

2018-01-16 Thread Yehuda Sadeh-Weinraub
Yes, you're definitely right, docs can be improved. We'd be happy to
get a pull request with any improvements if someone wants to pick it
up.

Thanks,
Yehuda

On Tue, Jan 16, 2018 at 1:30 PM, Youzhong Yang  wrote:
> My bad ... Once I sent the config request to us-east-1 (the master zone), it
> works, and 'obo mdsearch' against the "us-east-es" zone works like a charm.
>
> [snip: the rest of the quoted thread appears in full below]

Re: [ceph-users] Luminous RGW Metadata Search

2018-01-16 Thread Youzhong Yang
My bad ... Once I sent the config request to us-east-1 (the master zone), it
works, and 'obo mdsearch' against the "us-east-es" zone works like a charm.
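
For reference: per the Ceph elasticsearch sync module docs, the call behind
'obo mdsearch --config' is a bucket PUT with the 'mdsearch' sub-resource, and
it has to be sent to an endpoint of the metadata master zone. A rough sketch
of the raw request, assuming an S3-signing curl wrapper such as awscurl and
the master zone endpoint from earlier in this thread (both are assumptions,
not taken from the commands above):

# hypothetical illustration -- tool, endpoint and credentials are assumed
awscurl --service s3 --access_key "$AKEY" --secret_key "$SKEY" \
    -X PUT "http://ceph-rgw1:8000/buck?mdsearch" \
    -H "X-Amz-Meta-Search: x-amz-meta-foo; string, x-amz-meta-bar; integer"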

May I suggest that the following page be modified to reflect this
requirement so that someone else won't run into the same issue? I
understand it may sound obvious to experienced users ...

http://ceph.com/rgw/new-luminous-rgw-metadata-search/

Thanks a lot.


On Tue, Jan 16, 2018 at 3:59 PM, Yehuda Sadeh-Weinraub 
wrote:

> > [snip]
>
> Which rgw are you sending this request to?
>
> [snip: the quoted older messages appear in full below]

Re: [ceph-users] Luminous RGW Metadata Search

2018-01-16 Thread Yehuda Sadeh-Weinraub
On Tue, Jan 16, 2018 at 12:20 PM, Youzhong Yang  wrote:
> Hi Yehuda,
>
> I can use your tool obo to create a bucket, and upload a file to the object
> store, but when I tried to run the following command, it failed:
>
> # obo mdsearch buck --config='x-amz-meta-foo; string, x-amz-meta-bar;
> integer'
> ERROR: {"status": 405, "resource": null, "message": "", "error_code":
> "MethodNotAllowed", "reason": "Method Not Allowed"}
>
> How to make the method 'Allowed'?


Which rgw are you sending this request to?

> [snip: the quoted older messages appear in full below]

Re: [ceph-users] Luminous RGW Metadata Search

2018-01-16 Thread Youzhong Yang
Hi Yehuda,

I can use your tool obo to create a bucket, and upload a file to the object
store, but when I tried to run the following command, it failed:

# obo mdsearch buck --config='x-amz-meta-foo; string, x-amz-meta-bar;
integer'
ERROR: {"status": 405, "resource": null, "message": "", "error_code":
"MethodNotAllowed", "reason": "Method Not Allowed"}

How to make the method 'Allowed'?

Thanks in advance.

On Fri, Jan 12, 2018 at 7:25 PM, Yehuda Sadeh-Weinraub 
wrote:

> [snip: the quoted older messages appear in full below]

Re: [ceph-users] Luminous RGW Metadata Search

2018-01-15 Thread Youzhong Yang
Finally, the issue that has haunted me for quite some time turned out to be
a ceph.conf issue:

I had
osd_pool_default_pg_num = 100
osd_pool_default_pgp_num = 100

once I changed them to

osd_pool_default_pg_num = 32
osd_pool_default_pgp_num = 32

the second rgw process started without issue.

No idea why 32 works but 100 doesn't. The debug output is useless, and so are
the log files. Just insane.
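
A plausible explanation (an assumption, not confirmed in this thread):
Luminous 12.2.x enforces a per-OSD placement-group cap, mon_max_pg_per_osd
(default 200), and pool creation fails once the projected PG count per OSD
would exceed it. Each rgw zone creates several pools, so with pg_num = 100
and size = 2 the second zone's pools can push a small cluster past the cap,
and radosgw then fails with the generic "Couldn't init storage provider
(RADOS)". A quick way to check, assuming a Luminous mon:

ceph daemon mon.ceph-mon1 config get mon_max_pg_per_osd   # run on the mon host
ceph osd pool ls detail   # sum pg_num * size over all pools, divide by OSD count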

Anyway, thanks.


On Fri, Jan 12, 2018 at 7:25 PM, Yehuda Sadeh-Weinraub 
wrote:

> [snip: the quoted older messages appear in full below]

Re: [ceph-users] Luminous RGW Metadata Search

2018-01-12 Thread Yehuda Sadeh-Weinraub
The errors you're seeing there don't look related to elasticsearch. It's a
generic radosgw error that says it failed to reach the rados (ceph)
backend. You can try bumping up the messenger log (debug ms = 1) and see if
there's any hint in there.
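
For instance, a sketch of how that could be enabled for the manually started
instance from later in this thread (the keyring path and zone variable are
assumptions carried over from those messages):

sudo radosgw --keyring /etc/ceph/ceph.client.admin.keyring -f \
    --rgw-zone=${ZONE2} --rgw-frontends="civetweb port=8002" --debug-ms 1

or persistently, in the matching ceph.conf section:

[client.rgw.ceph-rgw1]
debug ms = 1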

Yehuda

On Fri, Jan 12, 2018 at 12:54 PM, Youzhong Yang  wrote:
> [snip: the quoted messages appear in full below]

Re: [ceph-users] Luminous RGW Metadata Search

2018-01-12 Thread Youzhong Yang
So I did the exact same thing using Kraken and the same set of VMs, no
issue. What is the magic to make it work in Luminous? Is anyone lucky enough
to have this RGW ElasticSearch working using Luminous?

On Mon, Jan 8, 2018 at 10:26 AM, Youzhong Yang  wrote:

> [snip: the quoted message appears in full below]

Re: [ceph-users] Luminous RGW Metadata Search

2018-01-08 Thread Youzhong Yang
Hi Yehuda,

Thanks for replying.

>radosgw failed to connect to your ceph cluster. Does the rados command
>with the same connection params work?

I am not quite sure how to run the rados command to test this.
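
For illustration, a minimal connectivity check along those lines might look
like this (a sketch; the keyring path and client name mirror the commands
later in this message and are assumptions):

rados --keyring /etc/ceph/ceph.client.admin.keyring -n client.admin lspools
rados --keyring /etc/ceph/ceph.client.admin.keyring -n client.admin df

If these hang or fail, the problem is cluster connectivity or auth rather
than anything rgw- or elasticsearch-specific.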

So I tried again. Could you please take a look and check what could have
gone wrong?

Here is what I did:

On the ceph admin node, I removed the installation on ceph-rgw1 and
ceph-rgw2, reinstalled rgw on ceph-rgw1, stopped the rgw service, and removed
all rgw pools. Elasticsearch is running on the ceph-rgw2 node on port 9200.

ceph-deploy purge ceph-rgw1
ceph-deploy purge ceph-rgw2
ceph-deploy purgedata ceph-rgw2
ceph-deploy purgedata ceph-rgw1
ceph-deploy install --release luminous ceph-rgw1
ceph-deploy admin ceph-rgw1
ceph-deploy rgw create ceph-rgw1
ssh ceph-rgw1 sudo systemctl stop ceph-radosgw@rgw.ceph-rgw1
rados rmpool default.rgw.log default.rgw.log --yes-i-really-really-mean-it
rados rmpool default.rgw.meta default.rgw.meta --yes-i-really-really-mean-it
rados rmpool default.rgw.control default.rgw.control --yes-i-really-really-mean-it
rados rmpool .rgw.root .rgw.root --yes-i-really-really-mean-it

On the ceph-rgw1 node:

export RGWHOST="ceph-rgw1"
export ELASTICHOST="ceph-rgw2"
export REALM="demo"
export ZONEGRP="zone1"
export ZONE1="zone1-a"
export ZONE2="zone1-b"
export SYNC_AKEY="$( cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 20 | head -n 1 )"
export SYNC_SKEY="$( cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 40 | head -n 1 )"


radosgw-admin realm create --rgw-realm=${REALM} --default
radosgw-admin zonegroup create --rgw-realm=${REALM} --rgw-zonegroup=${ZONEGRP} --endpoints=http://${RGWHOST}:8000 --master --default
radosgw-admin zone create --rgw-realm=${REALM} --rgw-zonegroup=${ZONEGRP} --rgw-zone=${ZONE1} --endpoints=http://${RGWHOST}:8000 --access-key=${SYNC_AKEY} --secret=${SYNC_SKEY} --master --default
radosgw-admin user create --uid=sync --display-name="zone sync" --access-key=${SYNC_AKEY} --secret=${SYNC_SKEY} --system
radosgw-admin period update --commit

sudo systemctl start ceph-radosgw@rgw.${RGWHOST}

radosgw-admin zone create --rgw-realm=${REALM} --rgw-zonegroup=${ZONEGRP} --rgw-zone=${ZONE2} --access-key=${SYNC_AKEY} --secret=${SYNC_SKEY} --endpoints=http://${RGWHOST}:8002
radosgw-admin zone modify --rgw-realm=${REALM} --rgw-zonegroup=${ZONEGRP} --rgw-zone=${ZONE2} --tier-type=elasticsearch --tier-config=endpoint=http://${ELASTICHOST}:9200,num_replicas=1,num_shards=10
radosgw-admin period update --commit

sudo systemctl restart ceph-radosgw@rgw.${RGWHOST}

sudo radosgw --keyring /etc/ceph/ceph.client.admin.keyring -f --rgw-zone=${ZONE2} --rgw-frontends="civetweb port=8002"
2018-01-08 00:21:54.389432 7f0fe9cd2e80 -1 Couldn't init storage provider (RADOS)

As you can see, starting rgw on port 8002 failed, but rgw on port 8000
started successfully. Here is some more info which may be useful for
diagnosis:

$ cat /etc/ceph/ceph.conf
[global]
fsid = 3e5a32d4-e45e-48dd-a3c5-f6f28fef8edf
mon_initial_members = ceph-mon1, ceph-osd1, ceph-osd2, ceph-osd3
mon_host = 172.30.212.226,172.30.212.227,172.30.212.228,172.30.212.250
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_pool_default_size = 2
osd_pool_default_min_size = 2
osd_pool_default_pg_num = 100
osd_pool_default_pgp_num = 100
bluestore_compression_algorithm = zlib
bluestore_compression_mode = force
rgw_max_put_size = 21474836480
[osd]
osd_max_object_size = 1073741824
[mon]
mon_allow_pool_delete = true
[client.rgw.ceph-rgw1]
host = ceph-rgw1
rgw frontends = civetweb port=8000
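
Note that this ceph.conf only defines the port-8000 instance; a second,
persistent instance for the elasticsearch zone would need its own section.
A hypothetical sketch (the section name and zone value are assumptions, not
from this thread, and the named client would also need its own cephx key):

[client.rgw.ceph-rgw1-es]
host = ceph-rgw1
rgw zone = zone1-b
rgw frontends = civetweb port=8002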

$ wget -O - -q http://ceph-rgw2:9200/
{
  "name" : "Hippolyta",
  "cluster_name" : "elasticsearch",
  "version" : {
"number" : "2.3.1",
"build_hash" : "bd980929010aef404e7cb0843e61d0665269fc39",
"build_timestamp" : "2016-04-04T12:25:05Z",
"build_snapshot" : false,
"lucene_version" : "5.5.0"
  },
  "tagline" : "You Know, for Search"
}

$ ceph df
GLOBAL:
    SIZE AVAIL RAW USED %RAW USED
    719G 705G  14473M   1.96
POOLS:
    NAME                ID USED %USED MAX AVAIL OBJECTS
    .rgw.root           17 6035 0     333G      19
    zone1-a.rgw.control 18 0    0     333G      8
    zone1-a.rgw.meta    19 350  0     333G      2
    zone1-a.rgw.log     20 50   0     333G      176
    zone1-b.rgw.control 21 0    0     333G      8
    zone1-b.rgw.meta    22 0    0     333G      0

$ rados df
POOL_NAME           USED OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS RD   WR_OPS WR
.rgw.root           6035 19      0      38     0                  0       0        817    553k 55     37888
zone1-a.rgw.control 0    8       0      16     0                  0       0        0      0    0      0
zone1-a.rgw.log     5

Re: [ceph-users] Luminous RGW Metadata Search

2017-12-23 Thread Yehuda Sadeh-Weinraub
On Fri, Dec 22, 2017 at 11:49 PM, Youzhong Yang  wrote:
> [snip: the original message appears in full below]
>
> # /usr/bin/radosgw -f --cluster ceph --name client.rgw.ceph-rgw2 --setuser ceph --setgroup ceph
> 2017-12-22 16:35:48.513912 7fc54e98ee80 -1 Couldn't init storage provider (RADOS)
>
> It's this mysterious error message "Couldn't init storage provider (RADOS)";
> there's no clue about what is wrong, what is misconfigured, or anything like
> that.

radosgw failed to connect to your ceph cluster. Does the rados command
with the same connection params work?

Yehuda
> [snip]


[ceph-users] Luminous RGW Metadata Search

2017-12-22 Thread Youzhong Yang
I followed the exact steps of the following page:

http://ceph.com/rgw/new-luminous-rgw-metadata-search/

"us-east-1" zone is serviced by host "ceph-rgw1" on port 8000, no issue,
the service runs successfully.

"us-east-es" zone is serviced by host "ceph-rgw2" on port 8002, the service
was unable to start:

# /usr/bin/radosgw -f --cluster ceph --name client.rgw.ceph-rgw2 --setuser ceph --setgroup ceph
2017-12-22 16:35:48.513912 7fc54e98ee80 -1 Couldn't init storage provider (RADOS)

It's this mysterious error message "Couldn't init storage provider (RADOS)";
there's no clue about what is wrong, what is misconfigured, or anything like
that.

Yes, I have elasticsearch installed and running on host 'ceph-rgw2'. Is
there any additional configuration required for ElasticSearch?

Did I miss anything? What is the magic to make this basic stuff work?

Thanks,

--Youzhong
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com