Re: [ceph-users] Default Pools

2020-01-18 Thread Paul Emmerich
RGW tools will automatically deploy these pools; for example, running
radosgw-admin will create them if they don't exist.
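
For illustration, even a read-only query is enough. A hypothetical session
(pool names as discussed in this thread) might look like:

$ ceph osd pool ls
cephfs_data
cephfs_metadata
$ radosgw-admin metadata list user
[]
$ ceph osd pool ls
cephfs_data
cephfs_metadata
.rgw.root
default.rgw.control
default.rgw.meta
default.rgw.log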


Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90


Re: [ceph-users] Default Pools

2020-01-17 Thread Daniele Riccucci

Hello,
I'm still a bit confused by the .rgw.root and
default.rgw.{control,meta,log} pools.
I recently removed the RGW daemon I had running, along with the
aforementioned pools; however, after a rebalance I suddenly find them
again in the output of:


$ ceph osd pool ls
cephfs_data
cephfs_metadata
.rgw.root
default.rgw.control
default.rgw.meta
default.rgw.log

Each has 8 PGs but zero usage.
I was unable to find logs or any indication of which daemon or action
recreated them, or whether it is safe to remove them again. Where should
I look?
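
(Pool creations are submitted to the monitors as commands, so the mon
audit log is one place that may record which client issued them. A
sketch, assuming a default log path on a monitor host; container
deployments keep logs elsewhere:

grep 'osd pool create' /var/log/ceph/ceph.audit.log
# matching lines carry a from='client...' field naming the caller
)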

I'm on Nautilus 14.2.5, container deployment.
Thank you.

Regards,
Daniele


Re: [ceph-users] Default Pools

2019-04-23 Thread David Turner
You should be able to see all pools in use in an RGW zone from the
radosgw-admin command. This [1] is probably overkill for most, but I deal
with multi-realm clusters, so I generally think like this when dealing with
RGW. Running this as-is will create a file in your current directory for
each zone in your deployment (likely to be just one file). My rough guess
for what you would find in that file, based on your pool names, is this [2].

If you identify any pools not listed in the zone get output, you can
rename [3] a pool to see whether it is currently being created and/or used
by RGW. The process: stop all RGW daemons, rename the pools, start one RGW
daemon, stop it again, and see which pools were recreated. Clean up the
freshly created pools and rename the original pools back into place before
starting your RGW daemons again. Please note that .rgw.root is a required
pool in every RGW deployment and will not be listed in the zones
themselves.
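
A rough sketch of that procedure for a single suspect pool (the pool name
and systemd unit names here are assumptions; adapt them to your deployment,
and note that pool deletion also requires mon_allow_pool_delete=true):

systemctl stop ceph-radosgw.target                    # stop all RGW daemons
ceph osd pool rename default.rgw.log default.rgw.log.bak
systemctl start ceph-radosgw@rgw.gateway1             # start a single daemon
ceph osd pool ls | grep '^default\.rgw\.log$'         # listed again => in use
systemctl stop ceph-radosgw@rgw.gateway1
ceph osd pool delete default.rgw.log default.rgw.log \
    --yes-i-really-really-mean-it                     # remove the fresh copy
ceph osd pool rename default.rgw.log.bak default.rgw.log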


[1]
# Dump every RGW zone's configuration to one JSON file per zone.
for realm in $(radosgw-admin realm list --format=json | jq '.realms[]' -r); do
  for zonegroup in $(radosgw-admin --rgw-realm=$realm zonegroup list --format=json | jq '.zonegroups[]' -r); do
    for zone in $(radosgw-admin --rgw-realm=$realm --rgw-zonegroup=$zonegroup zone list --format=json | jq '.zones[]' -r); do
      echo $realm.$zonegroup.$zone.json
      radosgw-admin --rgw-realm=$realm --rgw-zonegroup=$zonegroup --rgw-zone=$zone zone get > $realm.$zonegroup.$zone.json
    done
  done
done

[2] default.default.default.json
{
    "id": "{{ UUID }}",
    "name": "default",
    "domain_root": "default.rgw.meta",
    "control_pool": "default.rgw.control",
    "gc_pool": ".rgw.gc",
    "log_pool": "default.rgw.log",
    "user_email_pool": ".users.email",
    "user_uid_pool": ".users.uid",
    "system_key": {},
    "placement_pools": [
        {
            "key": "default-placement",
            "val": {
                "index_pool": "default.rgw.buckets.index",
                "data_pool": "default.rgw.buckets.data",
                "data_extra_pool": "default.rgw.buckets.non-ec",
                "index_type": 0,
                "compression": ""
            }
        }
    ],
    "metadata_heap": "",
    "tier_config": [],
    "realm_id": "{{ UUID }}"
}
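
To flatten one of those zone files into a plain pool list for comparison
with "ceph osd pool ls", a jq filter along these lines should work (a
sketch against the fields shown above):

jq -r '[.domain_root, .control_pool, .gc_pool, .log_pool,
        .user_email_pool, .user_uid_pool,
        (.placement_pools[].val | .index_pool, .data_pool, .data_extra_pool)]
       | .[] | select(. != "")' default.default.default.json | sort -u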

[3] ceph osd pool rename <current-pool-name> <new-pool-name>



Re: [ceph-users] Default Pools

2019-04-18 Thread Brent Kennedy
Yea, that was a cluster created during firefly...

Wish there was a good article on the naming and use of these, or perhaps a way 
I could make sure they are not used before deleting them.  I know RGW will 
recreate anything it uses, but I don’t want to lose data because I wanted a 
clean system.
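
One low-risk check is object counts: these pools are mostly omap, so byte
sizes read 0 even when they are in use, but the OBJECTS column of "rados
df" still shows whether RGW ever wrote to them. A sketch (column layout
varies a bit by release):

# show only the header line and the RGW-related pools
rados df | egrep '^(POOL_NAME|\.|default\.rgw)'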

-Brent



Re: [ceph-users] Default Pools

2019-04-15 Thread Gregory Farnum
On Mon, Apr 15, 2019 at 1:52 PM Brent Kennedy  wrote:
>
> I was looking around the web for the reason for some of the default pools in
> Ceph and I can't find anything concrete.  Here is our list; some show no use
> at all.  Can any of these be deleted? (Or is there an article my google-fu
> failed to find that covers the default pools?)
>
> We only use buckets, so I took out .rgw.buckets, .users and 
> .rgw.buckets.index…
>
> Name
> .log
> .rgw.root
> .rgw.gc
> .rgw.control
> .rgw
> .users.uid
> .users.email
> .rgw.buckets.extra
> default.rgw.control
> default.rgw.meta
> default.rgw.log
> default.rgw.buckets.non-ec

All of these are created by RGW when you run it, not by the core Ceph
system. I think they're all used (although they may report sizes of 0,
as they mostly make use of omap).
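
A quick way to see that from the rados CLI (hypothetical pool name; any of
the RGW pools can be probed the same way):

for obj in $(rados -p default.rgw.log ls); do
  echo "== $obj =="
  rados -p default.rgw.log listomapkeys "$obj"
done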

> metadata

Except this one used to be created by default for CephFS metadata, but
that hasn't been true in many releases. So I guess you're looking at
an old cluster? (In which case it's *possible* some of those RGW pools
are also unused now but were needed in the past; I haven't kept good
track of them.)
-Greg


[ceph-users] Default Pools

2019-04-15 Thread Brent Kennedy
I was looking around the web for the reason for some of the default pools in
Ceph and I can't find anything concrete. Here is our list; some show no use
at all. Can any of these be deleted? (Or is there an article my google-fu
failed to find that covers the default pools?)

We only use buckets, so I took out .rgw.buckets, .users and
.rgw.buckets.index.

Name
.log
.rgw.root
.rgw.gc
.rgw.control
.rgw
.users.uid
.users.email
.rgw.buckets.extra
default.rgw.control
default.rgw.meta
default.rgw.log
default.rgw.buckets.non-ec
metadata

Regards,

-Brent

Existing Clusters:

Test: Luminous 12.2.11 with 3 osd servers, 1 mon/man, 1 gateway ( all
virtual on SSD )

US Production(HDD): Luminous 12.2.11 with 11 osd servers, 3 mons, 3 gateways
behind haproxy LB

UK Production(HDD): Luminous 12.2.11 with 15 osd servers, 3 mons/man, 3
gateways behind haproxy LB

US Production(SSD): Luminous 12.2.11 with 6 osd servers, 3 mons/man, 3
gateways behind haproxy LB


Re: [ceph-users] default pools gone. problem?

2017-03-24 Thread mj

On 03/24/2017 10:13 PM, Bob R wrote:

You can operate without the default pools without issue.


Thanks!


Re: [ceph-users] default pools gone. problem?

2017-03-24 Thread Bob R
You can operate without the default pools without issue.
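
If anyone still has them and wants to confirm they are empty before
deleting, a per-pool listing is a quick sketch (pre-jewel default pool
names assumed):

for pool in data metadata rbd; do
  # an empty listing means no objects were ever stored in the pool
  echo "$pool: $(rados -p $pool ls 2>/dev/null | wc -l) objects"
done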



[ceph-users] default pools gone. problem?

2017-03-24 Thread mj

Hi,

The docs on pools at
http://docs.ceph.com/docs/cuttlefish/rados/operations/pools/ say:

The default pools are:

* data
* metadata
* rbd

My ceph install has only ONE pool, called "ceph-storage"; the others are
gone (probably deleted?).


Is not having those default pools a problem? Do I need to recreate them,
or is it safe to leave them deleted?


I'm on hammer but intending to upgrade to jewel, and I'm trying to
identify potential issues beforehand; hence this question.


MJ