Re: [ceph-users] Why rbd rm did not clean used pool?

2018-08-25 Thread Konstantin Shalygin

Configuration:
rbd - erasure pool
rbdtier - tier pool for rbd

ceph osd tier add-cache rbd rbdtier 549755813888
ceph osd tier cache-mode rbdtier writeback

Create new rbd block device:
rbd create --size 16G rbdtest
rbd feature disable rbdtest object-map fast-diff deep-flatten
rbd device map rbdtest

And fill rbd0 with data (dd, fio, and the like).

Remove rbd block device:
rbd device unmap rbdtest
rbd rm rbdtest

And now pool usage looks like:

POOLS:
    NAME    ID USED    %USED MAX AVAIL OBJECTS
    rbd     9  16 GiB  0     0 B       4094
    rbdtier 14 104 KiB 0     1.7 TiB   5110



Avoid cache-tier configurations. Since Luminous, clients can write rbd
data objects to an EC pool directly.
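
A minimal sketch of that direct-to-EC setup (the EC pool name rbd-ec is
an assumption, not from this thread; requires Luminous or later on both
the cluster and the clients):

# RBD on an EC pool requires overwrites to be enabled
ceph osd pool set rbd-ec allow_ec_overwrites true
# image metadata stays in the replicated 'rbd' pool,
# data objects go straight to the EC pool
rbd create --size 16G --data-pool rbd-ec rbd/rbdtest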




k



Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-25 Thread Eugen Block

Hi,

take a look into the logs; they should point you in the right direction.
Since the deployment stage fails at the OSD level, start with the OSD
logs. Something's not right with the disks/partitions. Did you wipe
the partitions from previous attempts?
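
A minimal sketch of such a wipe, assuming the spare partition is
/dev/sda4 as shown further down (destructive, so double-check the
device first):

# clear leftover filesystem/LVM signatures from earlier attempts
wipefs --all /dev/sda4
# and let ceph-volume clean up any leftover ceph metadata
ceph-volume lvm zap /dev/sda4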


Regards,
Eugen

Quoting Jones de Andrade :


(Please forgive my previous email: I was reusing another message and
completely forgot to update the subject)

Hi all.

I'm new to ceph, and after running into serious problems in ceph stages 0, 1 and
2 that I could solve myself, it now seems that I have hit a wall harder
than my head. :)

When I run salt-run state.orch ceph.stage.deploy and monitor it, I see it going
up to here:

###
[14/71]   ceph.sysctl on
  node01... ✓ (0.5s)
  node02 ✓ (0.7s)
  node03... ✓ (0.6s)
  node04. ✓ (0.5s)
  node05... ✓ (0.6s)
  node06.. ✓ (0.5s)

[15/71]   ceph.osd on
  node01.. ❌ (0.7s)
  node02 ❌ (0.7s)
  node03... ❌ (0.7s)
  node04. ❌ (0.6s)
  node05... ❌ (0.6s)
  node06.. ❌ (0.7s)

Ended stage: ceph.stage.deploy succeeded=14/71 failed=1/71 time=624.7s

Failures summary:

ceph.osd (/srv/salt/ceph/osd):
  node02:
deploy OSDs: Module function osd.deploy threw an exception. Exception:
Mine on node02 for cephdisks.list
  node03:
deploy OSDs: Module function osd.deploy threw an exception. Exception:
Mine on node03 for cephdisks.list
  node01:
deploy OSDs: Module function osd.deploy threw an exception. Exception:
Mine on node01 for cephdisks.list
  node04:
deploy OSDs: Module function osd.deploy threw an exception. Exception:
Mine on node04 for cephdisks.list
  node05:
deploy OSDs: Module function osd.deploy threw an exception. Exception:
Mine on node05 for cephdisks.list
  node06:
deploy OSDs: Module function osd.deploy threw an exception. Exception:
Mine on node06 for cephdisks.list
###

Since this is a first attempt on 6 simple test machines, we are going to
put the mon, osds, etc. on all nodes at first. Only the master role is kept on a
single machine (node01) for now.

As they are simple machines, they have a single hdd, which is partitioned
as follows (the sda4 partition is unmounted and left for the ceph system):

###
# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 465,8G  0 disk
├─sda1   8:1    0   500M  0 part /boot/efi
├─sda2   8:2    0    16G  0 part [SWAP]
├─sda3   8:3    0  49,3G  0 part /
└─sda4   8:4    0   400G  0 part
sr0     11:0    1   3,7G  0 rom

# salt -I 'roles:storage' cephdisks.list
node01:
node02:
node03:
node04:
node05:
node06:

# salt -I 'roles:storage' pillar.get ceph
node02:
    ----------
    storage:
        ----------
        osds:
            ----------
            /dev/sda4:
                ----------
                format:
                    bluestore
                standalone:
                    True
(and so on for all 6 machines)
##
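
(A hedged aside: the exception above points at the Salt mine for
cephdisks.list, so refreshing the mine data before re-running the stage
may be worth a try:)

# push fresh mine data from all minions to the master
salt '*' mine.update
# then check whether the storage nodes now report their disks
salt -I 'roles:storage' mine.get '*' cephdisks.list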

Finally and just in case, my policy.cfg file reads:

#
#cluster-unassigned/cluster/*.sls
cluster-ceph/cluster/*.sls
profile-default/cluster/*.sls
profile-default/stack/default/ceph/minions/*.yml
config/stack/default/global.yml
config/stack/default/ceph/cluster.yml
role-master/cluster/node01.sls
role-admin/cluster/*.sls
role-mon/cluster/*.sls
role-mgr/cluster/*.sls
role-mds/cluster/*.sls
role-ganesha/cluster/*.sls
role-client-nfs/cluster/*.sls
role-client-cephfs/cluster/*.sls
##

Please, could someone help me and shed some light on this issue?

Thanks a lot in advance,

Regards,

Jones






[ceph-users] Why rbd rm did not clean used pool?

2018-08-25 Thread Fyodor Ustinov
Hi!

Configuration:
rbd - erasure pool
rbdtier - tier pool for rbd

ceph osd tier add-cache rbd rbdtier 549755813888
ceph osd tier cache-mode rbdtier writeback

Create new rbd block device:
rbd create --size 16G rbdtest
rbd feature disable rbdtest object-map fast-diff deep-flatten
rbd device map rbdtest

And fill rbd0 with data (dd, fio, and the like).

Remove rbd block device:
rbd device unmap rbdtest
rbd rm rbdtest

And now pool usage looks like:

POOLS:
    NAME    ID USED    %USED MAX AVAIL OBJECTS
    rbd     9  16 GiB  0     0 B       4094
    rbdtier 14 104 KiB 0     1.7 TiB   5110

rbd and rbdtier contain some objects:
rados -p rbdtier ls
rbd_data.14716b8b4567.0dc4
rbd_data.14716b8b4567.02fc
rbd_data.14716b8b4567.0e82
rbd_data.14716b8b4567.03d7
rbd_data.14716b8b4567.0fb1
rbd_data.14716b8b4567.0018
[...]

rados -p rbd ls
rbd_data.14716b8b4567.0dc4
rbd_data.14716b8b4567.02fc
rbd_data.14716b8b4567.0e82
rbd_data.14716b8b4567.03d7
rbd_data.14716b8b4567.0fb1
[...]

Why does rbd rm not remove all the used objects from the pools?
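
(A hedged aside, assuming the tier setup above: with a writeback cache
tier, deletions are deferred, and removed objects can linger as whiteouts
in the cache tier until they are flushed and evicted, which may explain
the leftover object counts. A flush/evict can be forced with:)

# flush dirty objects to the base pool and evict everything from the tier
rados -p rbdtier cache-flush-evict-all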



WBR,
Fyodor.


Re: [ceph-users] cephfs kernel client hangs

2018-08-25 Thread Zhenshi Zhou
Hi,
This time, osdc:

REQUESTS 0 homeless 0
LINGER REQUESTS

monc:

have monmap 2 want 3+
have osdmap 4545 want 4546
have fsmap.user 0
have mdsmap 446 want 447+
fs_cluster_id -1

mdsc:

649065  mds0    setattr  #12e7e5a

Anything useful?
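
(For reference, a sketch of how the output above can be gathered; the
per-mount directory under /sys/kernel/debug/ceph includes the fsid and
client id, globbed here for brevity:)

# requires debugfs mounted and root privileges
cat /sys/kernel/debug/ceph/*/osdc
cat /sys/kernel/debug/ceph/*/monc
cat /sys/kernel/debug/ceph/*/mdsc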



Yan, Zheng wrote on Saturday, August 25, 2018 at 7:53 AM:

> Are there hung requests in /sys/kernel/debug/ceph//osdc?
>
> On Fri, Aug 24, 2018 at 9:32 PM Zhenshi Zhou  wrote:
> >
> > I'm afraid that the client hangs again... the log shows:
> >
> > 2018-08-24 21:27:54.714334 [WRN]  slow request 62.607608 seconds old,
> received at 2018-08-24 21:26:52.106633: client_request(client.213528:241811
> getattr pAsLsXsFs #0x12e7e5a 2018-08-24 21:26:52.106425 caller_uid=0,
> caller_gid=0{}) currently failed to rdlock, waiting
> > 2018-08-24 21:27:54.714320 [WRN]  3 slow requests, 1 included below;
> oldest blocked for > 843.556758 secs
> > 2018-08-24 21:27:24.713740 [WRN]  slow request 32.606979 seconds old,
> received at 2018-08-24 21:26:52.106633: client_request(client.213528:241811
> getattr pAsLsXsFs #0x12e7e5a 2018-08-24 21:26:52.106425 caller_uid=0,
> caller_gid=0{}) currently failed to rdlock, waiting
> > 2018-08-24 21:27:24.713729 [WRN]  3 slow requests, 1 included below;
> oldest blocked for > 813.556129 secs
> > 2018-08-24 21:25:49.711778 [WRN]  slow request 483.807963 seconds old,
> received at 2018-08-24 21:17:45.903726: client_request(client.213528:241810
> getattr pAsLsXsFs #0x12e7e5a 2018-08-24 21:17:45.903049 caller_uid=0,
> caller_gid=0{}) currently failed to rdlock, waiting
> > 2018-08-24 21:25:49.711766 [WRN]  2 slow requests, 1 included below;
> oldest blocked for > 718.554206 secs
> > 2018-08-24 21:21:54.707536 [WRN]  client.213528 isn't responding to
> mclientcaps(revoke), ino 0x12e7e5a pending pAsLsXsFr issued
> pAsLsXsFscr, sent 483.548912 seconds ago
> > 2018-08-24 21:21:54.706930 [WRN]  slow request 483.549363 seconds old,
> received at 2018-08-24 21:13:51.157483: client_request(client.267792:649065
> setattr size=0 mtime=2018-08-24 21:13:51.163236 #0x12e7e5a 2018-08-24
> 21:13:51.163236 caller_uid=0, caller_gid=0{}) currently failed to xlock,
> waiting
> > 2018-08-24 21:21:54.706920 [WRN]  2 slow requests, 1 included below;
> oldest blocked for > 483.549363 secs
> > 2018-08-24 21:21:49.706838 [WRN]  slow request 243.803027 seconds old,
> received at 2018-08-24 21:17:45.903726: client_request(client.213528:241810
> getattr pAsLsXsFs #0x12e7e5a 2018-08-24 21:17:45.903049 caller_uid=0,
> caller_gid=0{}) currently failed to rdlock, waiting
> > 2018-08-24 21:21:49.706828 [WRN]  2 slow requests, 1 included below;
> oldest blocked for > 478.549269 secs
> > 2018-08-24 21:19:49.704294 [WRN]  slow request 123.800486 seconds old,
> received at 2018-08-24 21:17:45.903726: client_request(client.213528:241810
> getattr pAsLsXsFs #0x12e7e5a 2018-08-24 21:17:45.903049 caller_uid=0,
> caller_gid=0{}) currently failed to rdlock, waiting
> > 2018-08-24 21:19:49.704284 [WRN]  2 slow requests, 1 included below;
> oldest blocked for > 358.546729 secs
> > 2018-08-24 21:18:49.703073 [WRN]  slow request 63.799269 seconds old,
> received at 2018-08-24 21:17:45.903726: client_request(client.213528:241810
> getattr pAsLsXsFs #0x12e7e5a 2018-08-24 21:17:45.903049 caller_uid=0,
> caller_gid=0{}) currently failed to rdlock, waiting
> > 2018-08-24 21:18:49.703062 [WRN]  2 slow requests, 1 included below;
> oldest blocked for > 298.545511 secs
> > 2018-08-24 21:18:19.702465 [WRN]  slow request 33.798637 seconds old,
> received at 2018-08-24 21:17:45.903726: client_request(client.213528:241810
> getattr pAsLsXsFs #0x12e7e5a 2018-08-24 21:17:45.903049 caller_uid=0,
> caller_gid=0{}) currently failed to rdlock, waiting
> > 2018-08-24 21:18:19.702456 [WRN]  2 slow requests, 1 included below;
> oldest blocked for > 268.544880 secs
> > 2018-08-24 21:17:54.702517 [WRN]  client.213528 isn't responding to
> mclientcaps(revoke), ino 0x12e7e5a pending pAsLsXsFr issued
> pAsLsXsFscr, sent 243.543893 seconds ago
> > 2018-08-24 21:17:54.701904 [WRN]  slow request 243.544331 seconds old,
> received at 2018-08-24 21:13:51.157483: client_request(client.267792:649065
> setattr size=0 mtime=2018-08-24 21:13:51.163236 #0x12e7e5a 2018-08-24
> 21:13:51.163236 caller_uid=0, caller_gid=0{}) currently failed to xlock,
> waiting
> > 2018-08-24 21:17:54.701894 [WRN]  1 slow requests, 1 included below;
> oldest blocked for > 243.544331 secs
> > 2018-08-24 21:15:54.700034 [WRN]  client.213528 isn't responding to
> mclientcaps(revoke), ino 0x12e7e5a pending pAsLsXsFr issued
> pAsLsXsFscr, sent 123.541410 seconds ago
> > 2018-08-24 21:15:54.699385 [WRN]  slow request 123.541822 seconds old,
> received at 2018-08-24 21:13:51.157483: client_request(client.267792:649065
> setattr size=0 mtime=2018-08-24 21:13:51.163236 #0x12e7e5a 2018-08-24
> 21:13:51.163236 caller_uid=0, caller_gid=0{}) currently failed to xlock,
> waiting
> > 2018-08-24 21:15:54.699375 [WRN]  1 slow requests, 1 

Re: [ceph-users] ceph-fuse slow cache?

2018-08-25 Thread Stefan Kooman
Quoting Gregory Farnum (gfar...@redhat.com):

> Hmm, these aren't actually the start and end times to the same operation.
> put_inode() is literally adjusting a refcount, which can happen for reasons
> ranging from the VFS doing something that drops it to an internal operation
> completing to a response coming back from the MDS. You should be able to
> find requests coming in from the kernel and a response going back out (the
> function names will be prefixed with "ll_", eg "ll_lookup").

Ok, next try. One page visit, and I grepped for "sess_". See results below:

2018-08-25 11:10:42.142 7ff2a0c2c700  3 client.15224195 ll_lookup 
0x168547c.head sess_bss8rh5bug6bffvj6s67vnsce5
2018-08-25 11:10:42.142 7ff2a0c2c700 10 client.15224195 _lookup concluded 
ENOENT locally for 0x168547c.head(faked_ino=0 ref=2 ll_ref=4 cap_refs={} 
open={} mode=40755 size=0/0 nlink=1 btime=2018-08-23 14:23:09.291832 
mtime=2018-08-25 11:08:11.025085 ctime=2018-08-25 11:08:11.025085 
caps=pAsLsXsFs(0=pAsLsXsFs) COMPLETE parents=0x15218bb.head["sessions"] 
0x557ddecc6100) dn 'sess_bss8rh5bug6bffvj6s67vnsce5'
2018-08-25 11:10:42.142 7ff2a0c2c700  3 client.15224195 ll_lookup 
0x168547c.head sess_bss8rh5bug6bffvj6s67vnsce5 -> -2 (0)
2018-08-25 11:10:42.142 7ff2a142d700  3 client.15224195 ll_lookup 
0x168547c.head sess_bss8rh5bug6bffvj6s67vnsce5
2018-08-25 11:10:42.142 7ff2a142d700 10 client.15224195 _lookup concluded 
ENOENT locally for 0x168547c.head(faked_ino=0 ref=2 ll_ref=5 cap_refs={} 
open={} mode=40755 size=0/0 nlink=1 btime=2018-08-23 14:23:09.291832 
mtime=2018-08-25 11:08:11.025085 ctime=2018-08-25 11:08:11.025085 
caps=pAsLsXsFs(0=pAsLsXsFs) COMPLETE parents=0x15218bb.head["sessions"] 
0x557ddecc6100) dn 'sess_bss8rh5bug6bffvj6s67vnsce5'
2018-08-25 11:10:42.142 7ff2a142d700  3 client.15224195 ll_lookup 
0x168547c.head sess_bss8rh5bug6bffvj6s67vnsce5 -> -2 (0)
2018-08-25 11:10:42.142 7ff2a0c2c700  8 client.15224195 _ll_create 
0x168547c.head sess_bss8rh5bug6bffvj6s67vnsce5 0100600 131138, uid 5003, 
gid 5003
2018-08-25 11:10:42.142 7ff2a0c2c700 10 client.15224195 _lookup concluded 
ENOENT locally for 0x168547c.head(faked_ino=0 ref=2 ll_ref=5 cap_refs={} 
open={} mode=40755 size=0/0 nlink=1 btime=2018-08-23 14:23:09.291832 
mtime=2018-08-25 11:08:11.025085 ctime=2018-08-25 11:08:11.025085 
caps=pAsLsXsFs(0=pAsLsXsFs) COMPLETE parents=0x15218bb.head["sessions"] 
0x557ddecc6100) dn 'sess_bss8rh5bug6bffvj6s67vnsce5'

^^ check for an existing session file, which does not exist yet.

2018-08-25 11:10:42.142 7ff2a0c2c700  8 client.15224195 _create(0x168547c 
sess_bss8rh5bug6bffvj6s67vnsce5, 0100600)
2018-08-25 11:10:42.142 7ff2a0c2c700 20 client.15224195 get_or_create 
0x168547c.head(faked_ino=0 ref=3 ll_ref=5 cap_refs={} open={} mode=40755 
size=0/0 nlink=1 btime=2018-08-23 14:23:09.291832 mtime=2018-08-25 
11:08:11.025085 ctime=2018-08-25 11:08:11.025085 caps=pAsLsXsFs(0=pAsLsXsFs) 
COMPLETE parents=0x15218bb.head["sessions"] 0x557ddecc6100) name 
sess_bss8rh5bug6bffvj6s67vnsce5
2018-08-25 11:10:42.142 7ff2a0c2c700 15 client.15224195 link dir 0x557ddecc6100 
'sess_bss8rh5bug6bffvj6s67vnsce5' to inode 0 dn 0x557ddf053de0 (new dn)
2018-08-25 11:10:42.142 7ff2a0c2c700 20 client.15224195 choose_target_mds inode 
dir hash is 2 on sess_bss8rh5bug6bffvj6s67vnsce5 => 3288169645
2018-08-25 11:10:42.142 7ff2a0c2c700 10 client.15224195 send_request 
client_request(unknown.0:872 create 
#0x168547c/sess_bss8rh5bug6bffvj6s67vnsce5 2018-08-25 11:10:42.148081 
caller_uid=5003, caller_gid=5003{5003,}) v4 to mds.0
2018-08-25 11:10:42.142 7ff2a0c2c700  1 -- [2001:7b8:81:6::18]:0/4024171085 --> 
[2001:7b8:80:3:0:2c:3:2]:6800/1086374448 -- client_request(unknown.0:872 create 
#0x168547c/sess_bss8rh5bug6bffvj6s67vnsce5 2018-08-25 11:10:42.148081 
caller_uid=5003, caller_gid=5003{5003,}) v4 -- 0x557ddf884080 con 0
2018-08-25 11:10:42.142 7ff2a5c36700 12 client.15224195 insert_dentry_inode 
'sess_bss8rh5bug6bffvj6s67vnsce5' vino 0x16d9844.head in dir 
0x168547c.head dn 0x557ddf053de0
2018-08-25 11:10:42.142 7ff2a5c36700 15 client.15224195 link dir 0x557ddecc6100 
'sess_bss8rh5bug6bffvj6s67vnsce5' to inode 0x557ddfaa0680 dn 0x557ddf053de0 
(old dn)
2018-08-25 11:10:42.142 7ff2a5c36700 10 client.15224195 put_inode on 
0x16d9844.head(faked_ino=0 ref=3 ll_ref=0 cap_refs={} open={} mode=100600 
size=0/4194304 nlink=1 btime=2018-08-25 11:10:42.148081 mtime=2018-08-25 
11:10:42.148081 ctime=2018-08-25 11:10:42.148081 
caps=pAsxLsXsxFsxcrwb(0=pAsxLsXsxFsxcrwb) objectset[0x16d9844 ts 0/0 
objects 0 dirty_or_tx 0] 
parents=0x168547c.head["sess_bss8rh5bug6bffvj6s67vnsce5"] 0x557ddfaa0680)
2018-08-25 11:10:42.142 7ff2a5c36700 20 client.15224195 link  inode 
0x557ddfaa0680 parents now 0x168547c.head["sess_bss8rh5bug6bffvj6s67vnsce5"]
2018-08-25 11:10:42.142 7ff2a5c36700 10 client.15224195 put_inode on 
0x16d9844.head(faked_ino=0 ref=2 ll_ref=0 cap_refs={} open={} mode=100600 

Re: [ceph-users] Unexpected behaviour after monitors upgrade from Jewel to Luminous

2018-08-25 Thread Adrien Gillard
The issue is finally resolved.
Upgrading to Luminous was the way to go. Unfortunately, we did not set
'ceph osd require-osd-release luminous' immediately, so we did not
activate the Luminous functionality that saved us.
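
(A minimal sketch of setting and verifying the flag, assuming all OSDs
are already running Luminous:)

ceph osd require-osd-release luminous
# verify the flag is recorded in the osdmap
ceph osd dump | grep require_osd_release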

I think the new mechanisms to manage and prune past intervals[1]
allowed the OSDs to start without consuming enormous amounts of memory
(around 1.5 GB for the majority, up to 10 GB for a few).
The pgs took the better part of the day to peer and activate, and the
cluster eventually ended up in HEALTH_OK. We are running as many
scrubs as possible over the weekend to detect inconsistencies that
could have appeared during the flapping.

Thank you again for your help !

[1] https://ceph.com/community/new-luminous-pg-overdose-protection/
On Thu, Aug 23, 2018 at 10:04 PM Adrien Gillard
 wrote:
>
> Sending back, forgot the plain text for ceph-devel.
>
> Sorry about that.
>
> On Thu, Aug 23, 2018 at 9:57 PM Adrien Gillard  
> wrote:
> >
> > We are running CentOS 7.5 with upstream Ceph packages, no remote syslog, 
> > just default local logging.
> >
> > After looking a bit deeper into pprof, --alloc_space seems to represent 
> > allocations that happened since the program started which goes along with 
> > the quick deallocation of the memory. --inuse_space (the default) indicates 
> > allocation at collection. std::_Rb_tree::_M_copy uses the most memory here 
> > but I have other samples (well, most of the others actually) where 
> > ceph::logging::Log::create_entry has the lead.
> > I will look at the admin socket.
> >
> > I'm adding ceph-devel, as someone might have a clue as to how to better 
> > interpret these results.
> >
> > Thank you for keeping up with this :)
> >
> >
> > On Thu, Aug 23, 2018 at 6:58 PM Gregory Farnum  wrote:
> >>
> >> On Thu, Aug 23, 2018 at 8:42 AM Adrien Gillard  
> >> wrote:
> >>>
> >>> With a bit of profiling, it seems all the memory is allocated to  
> >>> ceph::logging::Log::create_entry (see below)
> >>>
> >>> Should this be normal? Is it because some OSDs are down and it logs the
> >>> results of its osd_ping?
> >>
> >>
> >> Hmm, is that where the memory is actually *staying* or was it merely 
> >> allocated and then quickly deallocated? If you have debugging turned on 
> >> the volume of log output can be quite large, but it's a small amount kept 
> >> resident in memory at a time. Given that you don't seem to have any logs 
> >> over level 5 (except monc at 10) that does seem like a very strange ratio, 
> >> though.
> >>
> >> Generally log messages are put into a circular buffer and then removed, 
> >> and if the buffer gets full the thread trying to log just gets blocked. 
> >> I'm not sure how you could get very large arbitrary amounts of data like 
> >> that. Are you using syslog or something? I don't think that should do it, 
> >> but that's all I can imagine.
> >> That or somehow the upgrade has broken the way the OSD releases memory 
> >> back to the OS. Have you looked at the heap stats available through the 
> >> admin socket? What distro are you running and what's the source of your 
> >> packages?
> >> -Greg
> >>
> >>>
> >>>
> >>> The debug level of the OSD is below also.
> >>>
> >>> Thanks,
> >>>
> >>> Adrien
> >>>
> >>>  $ pprof /usr/bin/ceph-osd osd.36.profile.0042-first.heap
> >>> Using local file /usr/bin/ceph-osd.
> >>> Using local file osd.36.profile.0042-first.heap.
> >>> Welcome to pprof!  For help, type 'help'.
> >>> (pprof) top
> >>> Total: 2468.0 MB
> >>>    519.5  21.0%  21.0%   1000.4  40.5% std::_Rb_tree::_M_copy
> >>>    481.0  19.5%  40.5%    481.0  19.5% std::_Rb_tree::_M_create_node
> >>>    384.3  15.6%  56.1%    384.3  15.6% std::_Rb_tree::_M_emplace_hint_unique
> >>>    374.0  15.2%  71.3%    374.0  15.2% ceph::buffer::create_aligned_in_mempool
> >>>    305.6  12.4%  83.6%    305.6  12.4% ceph::logging::Log::create_entry
> >>>    217.3   8.8%  92.4%    220.2   8.9% std::vector::_M_emplace_back_aux
> >>>    114.4   4.6%  97.1%    115.6   4.7% ceph::buffer::list::append@a51900
> >>>     21.4   0.9%  98.0%    210.0   8.5% OSD::heartbeat
> >>>     21.2   0.9%  98.8%     21.2   0.9% std::string::_Rep::_S_create
> >>>      4.7   0.2%  99.0%    266.2  10.8% AsyncConnection::send_message
> >>> (pprof)
> >>>
> >>>
> >>>  $ pprof /usr/bin/ceph-osd --alloc_space osd.36.profile.0042-first.heap
> >>> Using local file /usr/bin/ceph-osd.
> >>> Using local file osd.36.profile.0042.heap.
> >>> Welcome to pprof!  For help, type 'help'.
> >>> (pprof) top
> >>> Total: 16519.3 MB
> >>>  11915.7  72.1%  72.1%  11915.7  72.1% ceph::logging::Log::create_entry
> >>>745.0   4.5%  76.6%745.0   4.5% std::string::_Rep::_S_create
> >>>716.9   4.3%  81.0%716.9   4.3% 
> >>> ceph::buffer::create_aligned_in_mempool
> >>>700.2   4.2%  85.2%703.8   4.3% std::vector::_M_emplace_back_aux
> >>>671.9   4.1%  89.3%671.9   4.1% 
> >>> std::_Rb_tree::_M_emplace_hint_unique
> >>>557.8   3.4%  92.7%   1075.2   6.5% 

Re: [ceph-users] radosgw: need couple of blind (indexless) buckets, how-to?

2018-08-25 Thread Konstantin Shalygin

Thank you very much! If anyone would like to help update these docs, I
would be happy to help with guidance/review.



I made an attempt half a year ago - http://tracker.ceph.com/issues/23081
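
(A hedged sketch of an indexless placement target, assuming the default
zone and the stock pool names; restart the gateways afterwards:)

# the placement id must also exist in the zonegroup
# (radosgw-admin zonegroup placement add ...)
radosgw-admin zone placement add --rgw-zone=default \
  --placement-id=indexless-placement \
  --data-pool=default.rgw.buckets.data \
  --index-pool=default.rgw.buckets.index \
  --data-extra-pool=default.rgw.buckets.non-ec \
  --placement-index-type=indexless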




k
