Re: [ceph-users] CRUSH rule device classes mystery

2019-05-07 Thread Konstantin Shalygin

Hi List,

I'm playing around with CRUSH rules and device classes and I'm puzzled
whether it's working correctly. Platform specifics: Ubuntu Bionic with Ceph 14.2.1.

I created two new device classes "cheaphdd" and "fasthdd". I made
sure these device classes are applied to the right OSDs and that the
(shadow) crush rule is correctly filtering the right classes for the
OSDs (ceph osd crush tree --show-shadow).

I then created two new crush rules:

ceph osd crush rule create-replicated fastdisks default host fasthdd
ceph osd crush rule create-replicated cheapdisks default host cheaphdd

# rules
rule replicated_rule {
 id 0
 type replicated
 min_size 1
 max_size 10
 step take default
 step chooseleaf firstn 0 type host
 step emit
}
rule fastdisks {
 id 1
 type replicated
 min_size 1
 max_size 10
 step take default class fasthdd
 step chooseleaf firstn 0 type host
 step emit
}
rule cheapdisks {
 id 2
 type replicated
 min_size 1
 max_size 10
 step take default class cheaphdd
 step chooseleaf firstn 0 type host
 step emit
}

After that I put the cephfs_metadata on the fastdisks CRUSH rule:

ceph osd pool set cephfs_metadata crush_rule fastdisks

Some data is moved to new OSDs, but strangely enough there is still data on PGs
residing on OSDs in the "cheaphdd" class. I confirmed this with:

ceph pg ls-by-pool cephfs_data

Testing CRUSH rule nr. 1 gives me:

crushtool -i /tmp/crush_raw --test --show-mappings --rule 1 --min-x 1 --max-x 4 
 --num-rep 3
CRUSH rule 1 x 1 [0,3,6]
CRUSH rule 1 x 2 [3,6,0]
CRUSH rule 1 x 3 [0,6,3]
CRUSH rule 1 x 4 [0,6,3]

Which are indeed the OSDs in the fasthdd class.
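
(For reference, /tmp/crush_raw is the compiled crushmap, presumably extracted
beforehand with:

ceph osd getcrushmap -o /tmp/crush_raw
)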

Why is not all data moved to OSDs 0, 3, 6, but instead still spread over OSDs in the
cheaphdd class as well?


Because you set the new crush rule only for the `cephfs_metadata` pool but are
looking at the PGs of the `cephfs_data` pool.
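
A quick way to double-check (a minimal sketch, using the pool names from your mail):

ceph osd pool get cephfs_metadata crush_rule
ceph pg ls-by-pool cephfs_metadata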




k

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] CRUSH rule device classes mystery

2019-05-07 Thread Stefan Kooman
Quoting Konstantin Shalygin (k0...@k0ste.ru):
> Because you set the new crush rule only for the `cephfs_metadata` pool but are
> looking at the PGs of the `cephfs_data` pool.

ZOMG :-O

Yup, that was it. cephfs_metadata pool looks good.

Thanks!

Gr. Stefan

-- 
| BIT BV  http://www.bit.nl/Kamer van Koophandel 09090351
| GPG: 0xD14839C6   +31 318 648 688 / i...@bit.nl
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] EPEL packages issue

2019-05-07 Thread Marc Roos
 

I have the same situation, where the servers do not have an internet 
connection and use my own repository servers. 

I just rsync the rpms to my custom repository like this; it works 
like a charm.

rsync -qrlptDvSHP --delete --exclude config.repo --exclude "local*" 
--exclude "aarch64" --exclude "drpms" --exclude "isos" 
anonym...@download.ceph.com::ceph/rpm-luminous/el7/ 
/var/www/cobbler/repo_mirror/Ceph-Luminous/
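
For completeness, a minimal sketch of how such a mirror can be published and
consumed; the repo id, host name and gpgcheck setting are placeholders, not my
actual setup (cobbler can also generate the repo metadata for you):

# rebuild the repo metadata after each sync
createrepo /var/www/cobbler/repo_mirror/Ceph-Luminous/

# /etc/yum.repos.d/ceph-local.repo on the Ceph nodes
[ceph-local]
name=Ceph Luminous (local mirror)
baseurl=http://repohost.example.com/cobbler/repo_mirror/Ceph-Luminous/
enabled=1
gpgcheck=0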



-Original Message-
From: Mohammad Almodallal [mailto:mmdal...@kku.edu.sa] 
Sent: Monday, May 6, 2019 23:30
To: ceph-users@lists.ceph.com
Subject: [ceph-users] EPEL packages issue

Hello,

 

I need to install Ceph Nautilus from a local repository. I downloaded all 
the packages from the Ceph site and created a local repository on the 
servers (the servers don't have internet access), but whenever I try to 
install Ceph it tries to install the EPEL release and then the installation 
fails.

 

Any help please?

 

Regards.

 

 

 


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-create-keys loops

2019-05-07 Thread ST Wong (ITSC)
Some test results:
1. When using a custom admin_secret for the MONs, /etc/ceph/ceph.client.admin.keyring 
is now present; it was missing in previous attempts.
2. Stopped firewalld on all MONs.

Both attempts gave the same result as before: got stuck at the "collect admin and 
bootstrap keys" task.

Appreciate any clue.

Instead of deploying mimic, should we deploy luminous with the latest ansible and 
ceph-ansible 4.0 or master, and then upgrade to mimic?

Thanks a lot.

From: ceph-users  On Behalf Of ST Wong (ITSC)
Sent: Tuesday, May 7, 2019 11:48 AM
To: solarflow99 
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph-create-keys loops

Some more information:

Rpm installed on MONS:
python-cephfs-13.2.5-0.el7.x86_64
ceph-base-13.2.5-0.el7.x86_64
libcephfs2-13.2.5-0.el7.x86_64
ceph-common-13.2.5-0.el7.x86_64
ceph-selinux-13.2.5-0.el7.x86_64
ceph-mon-13.2.5-0.el7.x86_64

Doing a mon_status gives the following. Only the local host running the command 
shows its addr/pub_addr correctly, while all other MONs have addr/pub_addr 
"0.0.0.0". Besides, the mon feature shows luminous while we're installing 
mimic.


==

[root@cphmon2a ~]# ceph --admin-daemon /var/run/ceph/ceph-mon.cphmon2a.asok  
mon_status
{
"name": "cphmon2a",
"rank": 0,
"state": "probing",
"election_epoch": 0,
"quorum": [],
"features": {
"required_con": "0",
"required_mon": [],
"quorum_con": "0",
"quorum_mon": []
},
"outside_quorum": [
"cphmon2a"
],
"extra_probe_peers": [],
"sync_provider": [],
"monmap": {
"epoch": 0,
"fsid": "cf746667-3132-476d-a287-19143404cf39",
"modified": "0.00",
"created": "0.00",
"features": {
"persistent": [],
"optional": []
},
"mons": [
{
"rank": 0,
"name": "cphmon2a",
"addr": "123.123.7.93:6789/0",
"public_addr": "123.123.7.93:6789/0"
},
{
"rank": 1,
"name": "cphmon1a",
"addr": "0.0.0.0:0/1",
"public_addr": "0.0.0.0:0/1"
},
{
"rank": 2,
"name": "cphmon3a",
"addr": "0.0.0.0:0/2",
"public_addr": "0.0.0.0:0/2"
},
{
"rank": 3,
"name": "cphmon4b",
"addr": "0.0.0.0:0/3",
"public_addr": "0.0.0.0:0/3"
},
{
"rank": 4,
"name": "cphmon5b",
"addr": "0.0.0.0:0/4",
"public_addr": "0.0.0.0:0/4"
},
{
"rank": 5,
"name": "cphmon6b",
"addr": "0.0.0.0:0/5",
"public_addr": "0.0.0.0:0/5"
}
]
},
"feature_map": {
"mon": [
{
"features": "0x3ffddff8ffacfffb",
"release": "luminous",
"num": 1
}
]
}
}

==


Thanks a lot.

From: ceph-users 
mailto:ceph-users-boun...@lists.ceph.com>> 
On Behalf Of ST Wong (ITSC)
Sent: Tuesday, May 7, 2019 7:59 AM
To: solarflow99 mailto:solarflo...@gmail.com>>
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph-create-keys loops

yes, we’re using 3.2 stable, on RHEL 7.   Thanks.

From: solarflow99 mailto:solarflo...@gmail.com>>
Sent: Tuesday, May 7, 2019 1:40 AM
To: ST Wong (ITSC) mailto:s...@itsc.cuhk.edu.hk>>
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph-create-keys loops

You mention the version of ansible; that is right. How about the branch of 
ceph-ansible? It should be 3.2-stable. What OS? I haven't come across this 
problem myself, though I have hit a lot of other ones.



On Mon, May 6, 2019 at 3:47 AM ST Wong (ITSC) 
mailto:s...@itsc.cuhk.edu.hk>> wrote:
Hi all,

I’ve problem in deploying mimic using ceph-ansible at following step:

-- cut here ---
TASK [ceph-mon : collect admin and bootstrap keys] *
Monday 06 May 2019  17:01:23 +0800 (0:00:00.854)   0:05:38.899 
fatal: [cphmon3a]: FAILED! => {"changed": false, "cmd": ["ceph-create-keys", 
"--cluster", "ceph", "-i", "cphmon3a", "-t", "600"], "delta": "0:11:24.675833", 
"end": "2019-05-06 17:12:48.500996", "msg": "non-zero return code", "rc": 1, 
"start": "2019-05-06 17:01:23.825163", "stderr": 
"INFO:ceph-create-keys:ceph-mon is not in quorum: u'probing'\n 
INFO:ceph-create-keys:ceph-mon is not in quorum: 
u'probing'\nINFO:ceph-create-keys:ceph-mon is not in quorum: 
u'probing'\nINFO:ceph-create-keys:ceph-mon is not in quorum: 
u'probing'\nINFO:ceph-create-keys:ceph-mon is not in quorum: 
u'probing'\nINFO:ceph-create-keys:ceph-mon is not in quorum: 
u'probing'\nINFO:ceph-create-keys:ceph-mon is not in quorum: 

[ceph-users] Read-only CephFs on a k8s cluster

2019-05-07 Thread Ignat Zapolsky
Hi,

We are looking at how to troubleshoot an issue with CephFS on a k8s cluster.

This filesystem is provisioned via rook 0.9.2 and shows the following behavior:
- If the CephFS is mounted on the K8S master, it is writeable.
- If the CephFS is mounted as a PV in a pod, we can write a 0-sized file to it 
(or create an empty file), but bigger writes do not work.
The following is reported by ceph -s:

# ceph -s
  cluster:
id: 18f8d40e-1995-4de4-96dc-e905b097e643
health: HEALTH_OK
 
  services:
mon: 3 daemons, quorum a,b,d
mgr: a(active)
mds: workspace-storage-fs-1/1/1 up  {0=workspace-storage-fs-a=up:active}, 1 
up:standby-replay
osd: 3 osds: 3 up, 3 in
 
  data:
pools:   3 pools, 300 pgs
objects: 212  objects, 181 MiB
usage:   51 GiB used, 244 GiB / 295 GiB avail
pgs: 300 active+clean
 
  io:
client:   853 B/s rd, 2.7 KiB/s wr, 1 op/s rd, 0 op/s wr


I wonder what can be done for further diagnostics ?

With regards,
Ignat Zapolsky

Sent from Mail for Windows 10


-- 
This email and any files transmitted with it are confidential and intended 
solely for the use of the individual or entity to whom they are addressed. 
If you have received this email in error please notify the system manager. 
This message contains confidential information and is intended only for the 
individual named. If you are not the named addressee you should not 
disseminate, distribute or copy this e-mail.


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] EPEL packages issue

2019-05-07 Thread Mohammad Almodallal
Hello, 

I already did this step and have the packages in the local repository, but it 
still asks for the EPEL repository.

Regards.

mohammad almodallal


-Original Message-
From: Marc Roos  
Sent: Tuesday, May 7, 2019 10:15 AM
To: ceph-users ; mmdallal 
Subject: RE: [ceph-users] EPEL packages issue

 

I have the same situation, where the servers do not have internet connection 
and use my own repository servers. 

I am just rsyncing the rpms to my custom repository like this, works like a 
charm.

rsync -qrlptDvSHP --delete --exclude config.repo --exclude "local*" 
--exclude "aarch64" --exclude "drpms" --exclude "isos" 
anonym...@download.ceph.com::ceph/rpm-luminous/el7/
/var/www/cobbler/repo_mirror/Ceph-Luminous/



-Original Message-
From: Mohammad Almodallal [mailto:mmdal...@kku.edu.sa] 
Sent: Monday, May 6, 2019 23:30
To: ceph-users@lists.ceph.com
Subject: [ceph-users] EPEL packages issue

Hello,

 

I need to install Ceph nautilus from local repository, I did download 
all the packages from Ceph site and created a local repository on the 
servers also servers don’t have internet access, but whenever I try to 
install Ceph it tries to install the EPEL release then the installation 
was failed.

 

Any help please?

 

Regards.

 

 

 



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Ceph Bucket strange issues rgw.none + id and marker diferent.

2019-05-07 Thread EDH - Manuel Rios Fernandez
Hi Ceph's

 

We've got an issue, and we're still looking for the cause after more than 60 hours
of searching for a misconfiguration.

 

After checking a lot of documentation and Q&As, we found that the
bucket id and bucket marker are not the same. We compared all our other
buckets and they all have the same id and marker.

 

We also found that some buckets have an rgw.none section and others do not.

 

This bucket cannot be listed in a reasonable time. The customer reduced
usage from 120TB to 93TB, and from 7 million objects to 5.8M.

 

We isolated a single request on one RGW server and checked some metrics; just
trying to list this bucket generates 2-3Gbps of traffic from the RGW to the OSDs/MONs.

 

I asked on IRC whether there is any problem with the index pool being in another
root of the same site in the crushmap, and we think there shouldn't be.

 

Any idea or suggestion, however crazy, will be tried.

 

Our relevant configuration, which may help:

 

CEPH DF:

 

ceph df

GLOBAL:
    SIZE     AVAIL    RAW USED  %RAW USED
    684 TiB  139 TiB  545 TiB   79.70

POOLS:
    NAME                        ID  USED     %USED  MAX AVAIL  OBJECTS
    volumes                     21  3.3 TiB  63.90  1.9 TiB    831300
    backups                     22  0 B      0      1.9 TiB    0
    images                      23  1.8 TiB  49.33  1.9 TiB    237066
    vms                         24  3.4 TiB  64.85  1.9 TiB    811534
    openstack-volumes-archive   25  30 TiB   47.92  32 TiB     7748864
    .rgw.root                   26  1.6 KiB  0      1.9 TiB    4
    default.rgw.control         27  0 B      0      1.9 TiB    100
    default.rgw.data.root       28  56 KiB   0      1.9 TiB    186
    default.rgw.gc              29  0 B      0      1.9 TiB    32
    default.rgw.log             30  0 B      0      1.9 TiB    175
    default.rgw.users.uid       31  4.9 KiB  0      1.9 TiB    26
    default.rgw.users.email     36  12 B     0      1.9 TiB    1
    default.rgw.users.keys      37  243 B    0      1.9 TiB    14
    default.rgw.buckets.index   38  0 B      0      1.9 TiB    1056
    default.rgw.buckets.data    39  245 TiB  93.84  16 TiB     102131428
    default.rgw.buckets.non-ec  40  0 B      0      1.9 TiB    23046
    default.rgw.usage           43  0 B      0      1.9 TiB    6

 

 

CEPH OSD Distribution:

 

ceph osd tree

ID  CLASS   WEIGHTTYPE NAME STATUS REWEIGHT PRI-AFF

-41 654.84045 root archive

-37 130.96848 host CEPH-ARCH-R03-07

100 archive  10.91399 osd.100   up  1.0 1.0

101 archive  10.91399 osd.101   up  1.0 1.0

102 archive  10.91399 osd.102   up  1.0 1.0

103 archive  10.91399 osd.103   up  1.0 1.0

104 archive  10.91399 osd.104   up  1.0 1.0

105 archive  10.91399 osd.105   up  1.0 1.0

106 archive  10.91409 osd.106   up  1.0 1.0

107 archive  10.91409 osd.107   up  1.0 1.0

108 archive  10.91409 osd.108   up  1.0 1.0

109 archive  10.91409 osd.109   up  1.0 1.0

110 archive  10.91409 osd.110   up  1.0 1.0

111 archive  10.91409 osd.111   up  1.0 1.0

-23 130.96800 host CEPH005

  4 archive  10.91399 osd.4 up  1.0 1.0

41 archive  10.91399 osd.41up  1.0 1.0

74 archive  10.91399 osd.74up  1.0 1.0

75 archive  10.91399 osd.75up  1.0 1.0

81 archive  10.91399 osd.81up  1.0 1.0

82 archive  10.91399 osd.82up  1.0 1.0

83 archive  10.91399 osd.83up  1.0 1.0

84 archive  10.91399 osd.84up  1.0 1.0

85 archive  10.91399 osd.85up  1.0 1.0

86 archive  10.91399 osd.86up  1.0 1.0

87 archive  10.91399 osd.87up  1.0 1.0

88 archive  10.91399 osd.88up  1.0 1.0

-17 130.96800 host CEPH006

  7 archive  10.91399 osd.7 up  1.0 1.0

  8 archive  10.91399 osd.8 up  1.0 1.0

  9 archive  10.91399 osd.9 up  1.0 1.0

10 archive  10.91399 osd.10up  1.0 1.0

12 archive  10.91399 osd.12up  1.0 1.0

13 archive  10.91399 osd.13up  1.0 1.0

42 archive  10.91399 osd.42 

Re: [ceph-users] EPEL packages issue

2019-05-07 Thread Marc Roos

It can only ask for the EPEL repo if you have it configured somewhere, 
so remove it. There are some packages you need from EPEL. I have that 
mirrored, so I don't have such issues; maybe you should mirror it too. 
Otherwise, these are the EPEL rpms I have; you would have to copy these 
to your custom repo as well, or just work out the exact dependencies 
(see the sketch after the list below). 

jemalloc.x86_64               3.6.0-1.el7      @CentOS7_64-epel
leveldb.x86_64                1.12.0-11.el7    @CentOS7_64-epel
libbabeltrace.x86_64          1.2.4-3.el7      @CentOS7_64-epel
lttng-ust.x86_64              2.4.1-4.el7      @CentOS7_64-epel
oniguruma.x86_64              5.9.5-3.el7      @CentOS7_64-epel
perl-Net-SNMP.noarch          6.0.1-7.el7      @CentOS7_64-epel
python-flask.noarch           1:0.10.1-3.el7   @CentOS7_64-epel
python-pecan.noarch           0.4.5-2.el7      @CentOS7_64-epel
python-simplegeneric.noarch   0.8-7.el7        @CentOS7_64-epel
python-singledispatch.noarch  3.4.0.2-2.el7    @CentOS7_64-epel
python-werkzeug.noarch        0.9.1-1.el7      @CentOS7_64-epel
sockperf.x86_64               3.5-1.el7        @CentOS7_64-epel
userspace-rcu.x86_64          0.7.16-1.el7     @CentOS7_64-epel
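
As mentioned above, a minimal sketch of pulling just these EPEL dependencies into 
a local repo (assuming yum-utils and createrepo are installed; the destination 
path is a placeholder):

yumdownloader --resolve --destdir /var/www/repos/epel-deps \
    jemalloc leveldb libbabeltrace lttng-ust oniguruma perl-Net-SNMP \
    python-flask python-pecan python-simplegeneric python-singledispatch \
    python-werkzeug sockperf userspace-rcu
createrepo /var/www/repos/epel-deps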






-Original Message-
From: Mohammad Almodallal [mailto:mmdal...@kku.edu.sa] 
Sent: Tuesday, May 7, 2019 16:18
To: Marc Roos; 'ceph-users'
Subject: RE: [ceph-users] EPEL packages issue

Hello, 

I already did this step and have the packages in local repostry, but 
still it aske for the EPEL repstry.

Regards.

mohammad almodallal


-Original Message-
From: Marc Roos 
Sent: Tuesday, May 7, 2019 10:15 AM
To: ceph-users ; mmdallal 

Subject: RE: [ceph-users] EPEL packages issue

 

I have the same situation, where the servers do not have internet 
connection and use my own repository servers. 

I am just rsyncing the rpms to my custom repository like this, works 
like a charm.

rsync -qrlptDvSHP --delete --exclude config.repo --exclude "local*" 
--exclude "aarch64" --exclude "drpms" --exclude "isos" 
anonym...@download.ceph.com::ceph/rpm-luminous/el7/
/var/www/cobbler/repo_mirror/Ceph-Luminous/



-Original Message-
From: Mohammad Almodallal [mailto:mmdal...@kku.edu.sa]
Sent: Monday, May 6, 2019 23:30
To: ceph-users@lists.ceph.com
Subject: [ceph-users] EPEL packages issue

Hello,

 

I need to install Ceph nautilus from local repository, I did download 
all the packages from Ceph site and created a local repository on the 
servers also servers don’t have internet access, but whenever I try to 
install Ceph it tries to install the EPEL release then the installation 
was failed.

 

Any help please?

 

Regards.

 

 

 





___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Bucket strange issues rgw.none + id and marker diferent.

2019-05-07 Thread Casey Bodley
When the bucket id is different than the bucket marker, that indicates 
the bucket has been resharded. Bucket stats shows 128 shards, which is 
reasonable for that object count. The rgw.none category in bucket stats 
is nothing to worry about.


What ceph version is this? This reminds me of a fix in 
https://github.com/ceph/ceph/pull/23940, which I now see never got its 
backports to mimic or luminous. :(
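
If it helps, the current shard layout and any in-progress reshard can be checked 
with something like this (the bucket name is a placeholder):

radosgw-admin bucket stats --bucket=<bucket>      # shows id, marker and num_shards
radosgw-admin reshard status --bucket=<bucket>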


On 5/7/19 10:20 AM, EDH - Manuel Rios Fernandez wrote:


Hi Ceph’s

We got an issue that we’re still looking the cause after more than 60 
hour searching a misconfiguration.


After cheking a lot of documentation and Questions&Answer we find that 
bucket id and bucket marker are not the same. We compared all our 
other bucket and all got the same id and marker.


Also found some bucket with the rgw.none section an another not.

This bucket is unable to be listed in a fashionable time. Customer 
relaxed usage from 120TB to 93TB , from 7Million objects to 5.8M.


We isolated a single petition in a RGW server and check some metric, 
just try to list this bucket generate 2-3Gbps traffic from RGW to 
OSD/MON’s.


I asked at IRC if there’re any problem about index pool be in other 
root in the same site at crushmap and we think that shouldn’t be.


Any idea or suggestion, however crazy, will be proven.

Our relevant configuration that may help :

CEPH DF:

ceph df

GLOBAL:

    SIZE AVAIL   RAW USED %RAW USED

    684 TiB 139 TiB  545 TiB 79.70

POOLS:

    NAME   ID USED    %USED MAX AVAIL OBJECTS

volumes    21 3.3 TiB 63.90   1.9 
TiB    831300


backups    22 0 B 0   1.9 
TiB 0


    images 23 1.8 TiB 49.33   1.9 
TiB    237066


vms    24 3.4 TiB 64.85   1.9 
TiB    811534


openstack-volumes-archive  25  30 TiB 47.92    32 
TiB   7748864


.rgw.root  26 1.6 KiB 0   1.9 
TiB 4


default.rgw.control    27 0 B 0   1.9 
TiB   100


default.rgw.data.root  28  56 KiB 0   1.9 
TiB   186


default.rgw.gc 29 0 B 0   1.9 
TiB    32


default.rgw.log    30 0 B 0   1.9 
TiB   175


default.rgw.users.uid  31 4.9 KiB 0   1.9 TiB  
  26


default.rgw.users.email    36    12 B 0   1.9 
TiB 1


default.rgw.users.keys 37   243 B 0   1.9 
TiB    14


default.rgw.buckets.index  38 0 B 0   1.9 TiB  
1056


default.rgw.buckets.data   39 245 TiB 93.84    16 TiB 
102131428


default.rgw.buckets.non-ec 40 0 B 0   1.9 TiB 
23046


default.rgw.usage  43 0 B 0    1.9 
TiB 6


CEPH OSD Distribution:

ceph osd tree

ID  CLASS   WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF

-41 654.84045 root archive

-37 130.96848 host CEPH-ARCH-R03-07

100 archive 10.91399 osd.100   up  1.0 1.0

101 archive 10.91399 osd.101   up  1.0 1.0

102 archive 10.91399 osd.102   up  1.0 1.0

103 archive 10.91399 osd.103   up  1.0 1.0

104 archive 10.91399 osd.104   up  1.0 1.0

105 archive 10.91399 osd.105   up  1.0 1.0

106 archive 10.91409 osd.106   up  1.0 1.0

107 archive 10.91409 osd.107   up  1.0 1.0

108 archive 10.91409 osd.108   up  1.0 1.0

109 archive 10.91409 osd.109   up  1.0 1.0

110 archive 10.91409 osd.110   up  1.0 1.0

111 archive 10.91409 osd.111   up  1.0 1.0

-23 130.96800 host CEPH005

  4 archive 10.91399 osd.4 up  1.0 1.0

41 archive 10.91399 osd.41    up  1.0 1.0

74 archive 10.91399 osd.74    up  1.0 1.0

75 archive 10.91399 osd.75    up  1.0 1.0

81 archive 10.91399 osd.81    up  1.0 1.0

82 archive 10.91399 osd.82    up  1.0 1.0

83 archive 10.91399 osd.83    up  1.0 1.0

84 archive 10.91399 osd.84    up  1.0 1.0

85 archive 10.91399 osd.85    up  1.0 1.0

86 archive 10.91399 osd.86    up  1.0 1.0

87 archive 10.91399 osd.87    up  1.0 1.0

88 archive 10.91399 osd.88    up  1.0 1.0

-17 130.96800 host CEPH006

  7 archive 10.91399 osd.7 up  1.0 1.0

  8 archive 10.913

Re: [ceph-users] Ceph Bucket strange issues rgw.none + id and marker diferent.

2019-05-07 Thread EDH - Manuel Rios Fernandez
Hi Casey

ceph version 13.2.5 (cbff874f9007f1869bfd3821b7e33b2a6ffd4988) mimic
(stable)

Is resharding something that prevents our customer from listing the index?

Regards


-Original Message-
From: ceph-users  On behalf of Casey Bodley
Sent: Tuesday, May 7, 2019 17:07
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph Bucket strange issues rgw.none + id and marker
diferent.

When the bucket id is different than the bucket marker, that indicates the
bucket has been resharded. Bucket stats shows 128 shards, which is
reasonable for that object count. The rgw.none category in bucket stats is
nothing to worry about.

What ceph version is this? This reminds me of a fix in
https://github.com/ceph/ceph/pull/23940, which I now see never got its
backports to mimic or luminous. :(

On 5/7/19 10:20 AM, EDH - Manuel Rios Fernandez wrote:
>
> Hi Ceph’s
>
> We got an issue that we’re still looking the cause after more than 60 
> hour searching a misconfiguration.
>
> After cheking a lot of documentation and Questions&Answer we find that 
> bucket id and bucket marker are not the same. We compared all our 
> other bucket and all got the same id and marker.
>
> Also found some bucket with the rgw.none section an another not.
>
> This bucket is unable to be listed in a fashionable time. Customer 
> relaxed usage from 120TB to 93TB , from 7Million objects to 5.8M.
>
> We isolated a single petition in a RGW server and check some metric, 
> just try to list this bucket generate 2-3Gbps traffic from RGW to 
> OSD/MON’s.
>
> I asked at IRC if there’re any problem about index pool be in other 
> root in the same site at crushmap and we think that shouldn’t be.
>
> Any idea or suggestion, however crazy, will be proven.
>
> Our relevant configuration that may help :
>
> CEPH DF:
>
> ceph df
>
> GLOBAL:
>
>     SIZE AVAIL   RAW USED %RAW USED
>
>     684 TiB 139 TiB  545 TiB 79.70
>
> POOLS:
>
>     NAME   ID USED    %USED MAX AVAIL OBJECTS
>
> volumes    21 3.3 TiB 63.90   1.9 TiB    
> 831300
>
> backups    22 0 B 0   1.9 TiB 
> 0
>
>     images 23 1.8 TiB 49.33   1.9
TiB    
> 237066
>
> vms    24 3.4 TiB 64.85   1.9 TiB    
> 811534
>
> openstack-volumes-archive  25  30 TiB 47.92    32 TiB   
> 7748864
>
> .rgw.root  26 1.6 KiB 0   1.9 TiB 
> 4
>
> default.rgw.control    27 0 B 0   1.9 TiB   
> 100
>
> default.rgw.data.root  28  56 KiB 0   1.9 TiB   
> 186
>
> default.rgw.gc 29 0 B 0   1.9 TiB    
> 32
>
> default.rgw.log    30 0 B 0   1.9 TiB   
> 175
>
> default.rgw.users.uid  31 4.9 KiB 0   1.9 TiB
>   26
>
> default.rgw.users.email    36    12 B 0   1.9 TiB 
> 1
>
> default.rgw.users.keys 37   243 B 0   1.9 TiB    
> 14
>
> default.rgw.buckets.index  38 0 B 0   1.9 TiB
> 1056
>
> default.rgw.buckets.data   39 245 TiB 93.84    16 TiB
> 102131428
>
> default.rgw.buckets.non-ec 40 0 B 0   1.9 TiB
> 23046
>
> default.rgw.usage  43 0 B 0    1.9
TiB 
> 6
>
> CEPH OSD Distribution:
>
> ceph osd tree
>
> ID  CLASS   WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
>
> -41 654.84045 root archive
>
> -37 130.96848 host CEPH-ARCH-R03-07
>
> 100 archive 10.91399 osd.100   up  1.0 1.0
>
> 101 archive 10.91399 osd.101   up  1.0 1.0
>
> 102 archive 10.91399 osd.102   up  1.0 1.0
>
> 103 archive 10.91399 osd.103   up  1.0 1.0
>
> 104 archive 10.91399 osd.104   up  1.0 1.0
>
> 105 archive 10.91399 osd.105   up  1.0 1.0
>
> 106 archive 10.91409 osd.106   up  1.0 1.0
>
> 107 archive 10.91409 osd.107   up  1.0 1.0
>
> 108 archive 10.91409 osd.108   up  1.0 1.0
>
> 109 archive 10.91409 osd.109   up  1.0 1.0
>
> 110 archive 10.91409 osd.110   up  1.0 1.0
>
> 111 archive 10.91409 osd.111   up  1.0 1.0
>
> -23 130.96800 host CEPH005
>
>   4 archive 10.91399 osd.4 up  1.0 1.0
>
> 41 archive 10.91399 osd.41    up  1.0 1.0
>
> 74 archive 10.91399 osd.74    up  1.0 1.0
>
> 75 archive 10.91399 osd.75    up  1.0 1.0
>
> 81 archive 10.91399 osd.81    up  1.0 1.0
>
> 82 archive 10.91399 osd.82    up  1.0 

[ceph-users] Access to ceph-storage slack

2019-05-07 Thread Ignat Zapolsky
Hi,

Is there an easy way to get access to the ceph-storage Slack?

I would appreciate one for ignat.apol...@ammeon.com

Thanks!

Sent from Mail for Windows 10


-- 
This email and any files transmitted with it are confidential and intended 
solely for the use of the individual or entity to whom they are addressed. 
If you have received this email in error please notify the system manager. 
This message contains confidential information and is intended only for the 
individual named. If you are not the named addressee you should not 
disseminate, distribute or copy this e-mail.


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Bucket strange issues rgw.none + id and marker diferent.

2019-05-07 Thread Casey Bodley


On 5/7/19 11:24 AM, EDH - Manuel Rios Fernandez wrote:

Hi Casey

ceph version 13.2.5 (cbff874f9007f1869bfd3821b7e33b2a6ffd4988) mimic
(stable)

Reshard is something than don’t allow us customer to list index?


Reshard does not prevent buckets from being listed, it just spreads the 
index over more rados objects (so more osds). Bucket sharding does have 
an impact on listing performance though, because each request to list 
the bucket has to read from every shard of the bucket index in order to 
sort the entries. If any of those osds have performance issues or slow 
requests, that would slow down all bucket listings.
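
To see how the entries are spread across the index shards, something like this can 
help (a sketch; the uid is a placeholder):

radosgw-admin bucket limit check --uid=<user>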



Regards


-Mensaje original-
De: ceph-users  En nombre de Casey Bodley
Enviado el: martes, 7 de mayo de 2019 17:07
Para: ceph-users@lists.ceph.com
Asunto: Re: [ceph-users] Ceph Bucket strange issues rgw.none + id and marker
diferent.

When the bucket id is different than the bucket marker, that indicates the
bucket has been resharded. Bucket stats shows 128 shards, which is
reasonable for that object count. The rgw.none category in bucket stats is
nothing to worry about.

What ceph version is this? This reminds me of a fix in
https://github.com/ceph/ceph/pull/23940, which I now see never got its
backports to mimic or luminous. :(

On 5/7/19 10:20 AM, EDH - Manuel Rios Fernandez wrote:

Hi Ceph’s

We got an issue that we’re still looking the cause after more than 60
hour searching a misconfiguration.

After cheking a lot of documentation and Questions&Answer we find that
bucket id and bucket marker are not the same. We compared all our
other bucket and all got the same id and marker.

Also found some bucket with the rgw.none section an another not.

This bucket is unable to be listed in a fashionable time. Customer
relaxed usage from 120TB to 93TB , from 7Million objects to 5.8M.

We isolated a single petition in a RGW server and check some metric,
just try to list this bucket generate 2-3Gbps traffic from RGW to
OSD/MON’s.

I asked at IRC if there’re any problem about index pool be in other
root in the same site at crushmap and we think that shouldn’t be.

Any idea or suggestion, however crazy, will be proven.

Our relevant configuration that may help :

CEPH DF:

ceph df

GLOBAL:

     SIZE AVAIL   RAW USED %RAW USED

     684 TiB 139 TiB  545 TiB 79.70

POOLS:

     NAME   ID USED    %USED MAX AVAIL OBJECTS

volumes    21 3.3 TiB 63.90   1.9 TiB
831300

backups    22 0 B 0   1.9 TiB
0

     images 23 1.8 TiB 49.33   1.9

TiB

237066

vms    24 3.4 TiB 64.85   1.9 TiB
811534

openstack-volumes-archive  25  30 TiB 47.92    32 TiB
7748864

.rgw.root  26 1.6 KiB 0   1.9 TiB
4

default.rgw.control    27 0 B 0   1.9 TiB
100

default.rgw.data.root  28  56 KiB 0   1.9 TiB
186

default.rgw.gc 29 0 B 0   1.9 TiB
32

default.rgw.log    30 0 B 0   1.9 TiB
175

default.rgw.users.uid  31 4.9 KiB 0   1.9 TiB
   26

default.rgw.users.email    36    12 B 0   1.9 TiB
1

default.rgw.users.keys 37   243 B 0   1.9 TiB
14

default.rgw.buckets.index  38 0 B 0   1.9 TiB
1056

default.rgw.buckets.data   39 245 TiB 93.84    16 TiB
102131428

default.rgw.buckets.non-ec 40 0 B 0   1.9 TiB
23046

default.rgw.usage  43 0 B 0    1.9

TiB

6

CEPH OSD Distribution:

ceph osd tree

ID  CLASS   WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF

-41 654.84045 root archive

-37 130.96848 host CEPH-ARCH-R03-07

100 archive 10.91399 osd.100   up  1.0 1.0

101 archive 10.91399 osd.101   up  1.0 1.0

102 archive 10.91399 osd.102   up  1.0 1.0

103 archive 10.91399 osd.103   up  1.0 1.0

104 archive 10.91399 osd.104   up  1.0 1.0

105 archive 10.91399 osd.105   up  1.0 1.0

106 archive 10.91409 osd.106   up  1.0 1.0

107 archive 10.91409 osd.107   up  1.0 1.0

108 archive 10.91409 osd.108   up  1.0 1.0

109 archive 10.91409 osd.109   up  1.0 1.0

110 archive 10.91409 osd.110   up  1.0 1.0

111 archive 10.91409 osd.111   up  1.0 1.0

-23 130.96800 host CEPH005

   4 archive 10.91399 osd.4 up  1.0 1.0

41 archive 10.91399 osd.41    up  1.0 1.0

74 archive 10.91399 osd.74    up  1.0 1.0

75 archive 10.91399 osd.75    up  1.0 1.0

81 archive 10

Re: [ceph-users] Ceph Bucket strange issues rgw.none + id and marker diferent.

2019-05-07 Thread EDH - Manuel Rios Fernandez
OK, our last shot is to buy NVMe PCIe cards for the index pool and dedicate them to it.

How many GB/TB are needed for the pool is not clear, since ceph df shows 0:

default.rgw.buckets.index  38 0 B 0   1.8 TiB      1056

Any idea for 200M objects?
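
As a back-of-the-envelope estimate (my own assumption of very roughly 200 bytes of 
omap per index entry, not an official figure):

200,000,000 objects x ~200 B per entry = roughly 40 GB of index data

ceph df presumably reports 0 B here because the bucket index lives in RocksDB omap 
rather than in the rados object data.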



-Original Message-
From: Casey Bodley  
Sent: Tuesday, May 7, 2019 19:13
To: EDH - Manuel Rios Fernandez ; 
ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph Bucket strange issues rgw.none + id and marker 
diferent.


On 5/7/19 11:24 AM, EDH - Manuel Rios Fernandez wrote:
> Hi Casey
>
> ceph version 13.2.5 (cbff874f9007f1869bfd3821b7e33b2a6ffd4988) mimic
> (stable)
>
> Reshard is something than don’t allow us customer to list index?

Reshard does not prevent buckets from being listed, it just spreads the index 
over more rados objects (so more osds). Bucket sharding does have an impact on 
listing performance though, because each request to list the bucket has to read 
from every shard of the bucket index in order to sort the entries. If any of 
those osds have performance issues or slow requests, that would slow down all 
bucket listings.

> Regards
>
>
> -Mensaje original-
> De: ceph-users  En nombre de Casey 
> Bodley Enviado el: martes, 7 de mayo de 2019 17:07
> Para: ceph-users@lists.ceph.com
> Asunto: Re: [ceph-users] Ceph Bucket strange issues rgw.none + id and 
> marker diferent.
>
> When the bucket id is different than the bucket marker, that indicates 
> the bucket has been resharded. Bucket stats shows 128 shards, which is 
> reasonable for that object count. The rgw.none category in bucket 
> stats is nothing to worry about.
>
> What ceph version is this? This reminds me of a fix in 
> https://github.com/ceph/ceph/pull/23940, which I now see never got its 
> backports to mimic or luminous. :(
>
> On 5/7/19 10:20 AM, EDH - Manuel Rios Fernandez wrote:
>> Hi Ceph’s
>>
>> We got an issue that we’re still looking the cause after more than 60 
>> hour searching a misconfiguration.
>>
>> After cheking a lot of documentation and Questions&Answer we find 
>> that bucket id and bucket marker are not the same. We compared all 
>> our other bucket and all got the same id and marker.
>>
>> Also found some bucket with the rgw.none section an another not.
>>
>> This bucket is unable to be listed in a fashionable time. Customer 
>> relaxed usage from 120TB to 93TB , from 7Million objects to 5.8M.
>>
>> We isolated a single petition in a RGW server and check some metric, 
>> just try to list this bucket generate 2-3Gbps traffic from RGW to 
>> OSD/MON’s.
>>
>> I asked at IRC if there’re any problem about index pool be in other 
>> root in the same site at crushmap and we think that shouldn’t be.
>>
>> Any idea or suggestion, however crazy, will be proven.
>>
>> Our relevant configuration that may help :
>>
>> CEPH DF:
>>
>> ceph df
>>
>> GLOBAL:
>>
>>  SIZE AVAIL   RAW USED %RAW USED
>>
>>  684 TiB 139 TiB  545 TiB 79.70
>>
>> POOLS:
>>
>>  NAME   ID USED%USED MAX AVAIL 
>> OBJECTS
>>
>> volumes21 3.3 TiB 63.90   1.9 TiB
>> 831300
>>
>> backups22 0 B 0   1.9 TiB
>> 0
>>
>>  images 23 1.8 TiB 49.33   1.9
> TiB
>> 237066
>>
>> vms24 3.4 TiB 64.85   1.9 TiB
>> 811534
>>
>> openstack-volumes-archive  25  30 TiB 47.9232 TiB
>> 7748864
>>
>> .rgw.root  26 1.6 KiB 0   1.9 TiB
>> 4
>>
>> default.rgw.control27 0 B 0   1.9 TiB
>> 100
>>
>> default.rgw.data.root  28  56 KiB 0   1.9 TiB
>> 186
>>
>> default.rgw.gc 29 0 B 0   1.9 TiB
>> 32
>>
>> default.rgw.log30 0 B 0   1.9 TiB
>> 175
>>
>> default.rgw.users.uid  31 4.9 KiB 0   1.9 TiB
>>26
>>
>> default.rgw.users.email3612 B 0   1.9 TiB
>> 1
>>
>> default.rgw.users.keys 37   243 B 0   1.9 TiB
>> 14
>>
>> default.rgw.buckets.index  38 0 B 0   1.9 TiB
>> 1056
>>
>> default.rgw.buckets.data   39 245 TiB 93.8416 TiB
>> 102131428
>>
>> default.rgw.buckets.non-ec 40 0 B 0   1.9 TiB
>> 23046
>>
>> default.rgw.usage  43 0 B 01.9
> TiB
>> 6
>>
>> CEPH OSD Distribution:
>>
>> ceph osd tree
>>
>> ID  CLASS   WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
>>
>> -41 654.84045 root archive
>>
>> -37 130.96848 host CEPH-ARCH-R03-07
>>
>> 100 archive 10.91399 osd.100   up  1.0 
>> 1.0
>>
>> 101 archive 10.91399 osd.101   up  1.0 
>> 1.0
>>
>> 102 archive 10.91399 osd.102   up  1.0 
>> 1.0
>>
>> 103 archive 10.91399 osd.103   up  1.0 
>> 1.0
>>
>> 104 archive 10.91399 

Re: [ceph-users] Read-only CephFs on a k8s cluster

2019-05-07 Thread Gregory Farnum
On Tue, May 7, 2019 at 6:54 AM Ignat Zapolsky  wrote:
>
> Hi,
>
>
>
> We are looking at how to troubleshoot an issue with Ceph FS on k8s cluster.
>
>
>
> This filesystem is provisioned via rook 0.9.2 and have following behavior:
>
> If ceph fs is mounted on K8S master, then it is writeable
> If ceph fs is mounted as PV to a POD, then we can write a 0-sized file to it, 
> (or create empty file) but bigger writes do not work.

This generally means your clients have CephX permission to access the
MDS but not the RADOS pools. Check what auth caps you've given the
relevant keys ("ceph auth list"). Presumably your master node has an
admin key and the clients have a different one that's not quite right.
-Greg
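
For comparison, a minimal sketch of caps that let a CephFS client actually write 
data (the client name and data pool are placeholders; rook normally generates its 
own keys):

ceph auth get-or-create client.cephfs-rw \
    mon 'allow r' \
    mds 'allow rw' \
    osd 'allow rw pool=<cephfs data pool>'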

>
> Following is reported as ceph -s :
>
>
>
> # ceph -s
>
>   cluster:
>
> id: 18f8d40e-1995-4de4-96dc-e905b097e643
>
> health: HEALTH_OK
>
>   services:
>
> mon: 3 daemons, quorum a,b,d
>
> mgr: a(active)
>
> mds: workspace-storage-fs-1/1/1 up  {0=workspace-storage-fs-a=up:active}, 
> 1 up:standby-replay
>
> osd: 3 osds: 3 up, 3 in
>
>   data:
>
> pools:   3 pools, 300 pgs
>
> objects: 212  objects, 181 MiB
>
> usage:   51 GiB used, 244 GiB / 295 GiB avail
>
> pgs: 300 active+clean
>
>   io:
>
> client:   853 B/s rd, 2.7 KiB/s wr, 1 op/s rd, 0 op/s wr
>
>
>
>
>
> I wonder what can be done for further diagnostics ?
>
>
>
> With regards,
>
> Ignat Zapolsky
>
>
>
> Sent from Mail for Windows 10
>
>
>
>
> This email and any files transmitted with it are confidential and intended 
> solely for the use of the individual or entity to whom they are addressed. If 
> you have received this email in error please notify the system manager. This 
> message contains confidential information and is intended only for the 
> individual named. If you are not the named addressee you should not 
> disseminate, distribute or copy this e-mail.
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Read-only CephFs on a k8s cluster

2019-05-07 Thread Ignat Zapolsky
Hi,

Thanks for the response.

Is there a way to see auth failures in some logs?

I am looking at ceph auth ls and nothing jumps out at me as suspicious.

I've tried to write a file not from a container but from the actual node that 
forwards the directory to the container, and it indeed gives a permission denied 
error.

I think we are running ceph 13.2.4-20190109 and kernel version is 
4.4.0-140-generic #166-Ubuntu SMP Wed Nov 14 20:09:47 UTC 2018 x86_64 x86_64 
x86_64 GNU/Linux


Sent from Mail for Windows 10

From: Gregory Farnum
Sent: Tuesday, May 7, 2019 7:17 PM
To: Ignat Zapolsky
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Read-only CephFs on a k8s cluster

On Tue, May 7, 2019 at 6:54 AM Ignat Zapolsky  wrote:
>
> Hi,
>
>
>
> We are looking at how to troubleshoot an issue with Ceph FS on k8s cluster.
>
>
>
> This filesystem is provisioned via rook 0.9.2 and have following behavior:
>
> If ceph fs is mounted on K8S master, then it is writeable
> If ceph fs is mounted as PV to a POD, then we can write a 0-sized file to it, 
> (or create empty file) but bigger writes do not work.

This generally means your clients have CephX permission to access the
MDS but not the RADOS pools. Check what auth caps you've given the
relevant keys ("ceph auth list"). Presumably your master node has an
admin key and the clients have a different one that's not quite right.
-Greg

>
> Following is reported as ceph -s :
>
>
>
> # ceph -s
>
>   cluster:
>
> id: 18f8d40e-1995-4de4-96dc-e905b097e643
>
> health: HEALTH_OK
>
>   services:
>
> mon: 3 daemons, quorum a,b,d
>
> mgr: a(active)
>
> mds: workspace-storage-fs-1/1/1 up  {0=workspace-storage-fs-a=up:active}, 
> 1 up:standby-replay
>
> osd: 3 osds: 3 up, 3 in
>
>   data:
>
> pools:   3 pools, 300 pgs
>
> objects: 212  objects, 181 MiB
>
> usage:   51 GiB used, 244 GiB / 295 GiB avail
>
> pgs: 300 active+clean
>
>   io:
>
> client:   853 B/s rd, 2.7 KiB/s wr, 1 op/s rd, 0 op/s wr
>
>
>
>
>
> I wonder what can be done for further diagnostics ?
>
>
>
> With regards,
>
> Ignat Zapolsky
>
>
>
> Sent from Mail for Windows 10
>
>
>
>
> This email and any files transmitted with it are confidential and intended 
> solely for the use of the individual or entity to whom they are addressed. If 
> you have received this email in error please notify the system manager. This 
> message contains confidential information and is intended only for the 
> individual named. If you are not the named addressee you should not 
> disseminate, distribute or copy this e-mail.
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


-- 
This email and any files transmitted with it are confidential and intended 
solely for the use of the individual or entity to whom they are addressed. 
If you have received this email in error please notify the system manager. 
This message contains confidential information and is intended only for the 
individual named. If you are not the named addressee you should not 
disseminate, distribute or copy this e-mail.


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Bucket strange issues rgw.none + id and marker diferent.

2019-05-07 Thread J. Eric Ivancich
On 5/7/19 11:24 AM, EDH - Manuel Rios Fernandez wrote:
> Hi Casey
> 
> ceph version 13.2.5 (cbff874f9007f1869bfd3821b7e33b2a6ffd4988) mimic
> (stable)
> 
> Reshard is something than don’t allow us customer to list index?
> 
> Regards
Listing buckets with a large number of objects is notoriously slow,
because the entries are not stored in lexical order but the default
behavior is to list the objects in lexical order.

If your use case allows for an unordered listing it would likely perform
better. You can see some documentation here under the S3 API / GET BUCKET:

http://docs.ceph.com/docs/mimic/radosgw/s3/bucketops/
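
If I recall correctly, it is an RGW-specific extension passed as a query parameter 
on the bucket GET; a sketch of what it looks like on the wire (host and bucket are 
placeholders):

GET /<bucket>?allow-unordered=true HTTP/1.1
Host: rgw.example.com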

Are you using S3?

Eric
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Bucket strange issues rgw.none + id and marker diferent.

2019-05-07 Thread EDH - Manuel Rios Fernandez
Hi Eric,

This looks like something the software developer must do, not something the 
storage provider must allow, no?

The strange behavior is that sometimes the bucket lists quickly, in less than 30 
secs, and other times it times out after 600 secs. The bucket contains 875 folders 
with a total of 6 million objects.

I don't know how a simple listing of 875 folders can time out after 600 secs.

We bought several NVMe Optane cards in order to make 4 partitions on each PCIe card 
and get up to 1,000,000 IOPS for the index. Quite expensive, because we calculate 
that our index is just 4GB (100-200M objects); we are waiting for those cards. Any 
more ideas?

Regards




-Original Message-
From: J. Eric Ivancich  
Sent: Tuesday, May 7, 2019 23:53
To: EDH - Manuel Rios Fernandez ; 'Casey Bodley' 
; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph Bucket strange issues rgw.none + id and marker 
diferent.

On 5/7/19 11:24 AM, EDH - Manuel Rios Fernandez wrote:
> Hi Casey
> 
> ceph version 13.2.5 (cbff874f9007f1869bfd3821b7e33b2a6ffd4988) mimic
> (stable)
> 
> Reshard is something than don’t allow us customer to list index?
> 
> Regards
Listing of buckets with a large number of buckets is notoriously slow, because 
the entries are not stored in lexical order but the default behavior is to list 
the objects in lexical order.

If your use case allows for an unordered listing it would likely perform 
better. You can see some documentation here under the S3 API / GET BUCKET:

http://docs.ceph.com/docs/mimic/radosgw/s3/bucketops/

Are you using S3?

Eric

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] v12.2.12 Luminous released

2019-05-07 Thread Cooper Su
Hi all,

It's been four weeks since luminous v12.2.12 was released.
I'm an arm64 user, and I still have not found v12.2.12 aarch64 rpm packages
in the official repository:
https://download.ceph.com/rpm-luminous/el7/aarch64/
Is there any plan for this version?

Regards,
Cooper
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com