Re: [ceph-users] JBOD question

2018-07-25 Thread Caspar Smit
Satish,

Yes, that card supports 'both'. You have to flash the IR firmware (IT
firmware = JBOD only); then you are able to create RAID1 sets in the
BIOS of the card, and any unused disks will be seen by the OS as 'JBOD'.
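The flashing itself is done with LSI's sas2flash utility; roughly like this
(the firmware file names are examples - use the matching 9207-8i images from
Broadcom's download site):

# sas2flash -listall
(shows the adapter and the firmware it currently runs)
# sas2flash -o -f 9207-8i_IR.bin -b mptsas2.rom
(flashes the IR firmware plus the boot ROM)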

Kind regards,

Caspar Smit


2018-07-23 20:43 GMT+02:00 Satish Patel :

> I am planning to buy an "LSI SAS 9207-8i" - does anyone know if it supports
> both RAID & JBOD mode together, so I can do RAID-1 on the OS disks and use
> the other disks as JBOD?
>
> On Sat, Jul 21, 2018 at 11:16 AM, Willem Jan Withagen 
> wrote:
> > On 21/07/2018 01:45, Oliver Freyermuth wrote:
> >>
> >> Hi Satish,
> >>
> >> that really completely depends on your controller.
> >>
> >
> > This is what I get on an older AMCC 9550 controller.
> > Note that the disk type is set to JBOD. But the disk descriptors are
> hidden.
> > And you'll never know what more is not done right.
> >
> > Geom name: da6
> > Providers:
> > 1. Name: da6
> >Mediasize: 1000204886016 (932G)
> >Sectorsize: 512
> >Mode: r1w1e2
> >descr: AMCC 9550SXU-8L DISK
> >lunname: AMCCZ1N00KBD
> >lunid: AMCCZ1N00KBD
> >ident: Z1N00KBD
> >rotationrate: unknown
> >fwsectors: 63
> >fwheads: 255
> >
> > This is an LSI 9802 controller in IT mode:
> > (And that gives me a bit more faith)
> > Geom name: da7
> > Providers:
> > 1. Name: da7
> >Mediasize: 3000592982016 (2.7T)
> >Sectorsize: 512
> >Mode: r1w1e1
> >descr: WDC WD30EFRX-68AX9N0
> >lunid: 0004d927f870
> >ident: WD-WMC1T4088693
> >rotationrate: unknown
> >fwsectors: 63
> >fwheads: 255
> >
> > --WjW


Re: [ceph-users] 12.2.7 + osd skip data digest + bluestore + I/O errors

2018-07-25 Thread SCHAER Frederic
Hi Dan,

Just checked again : arggghhh...

# grep AUTO_RESTART /etc/sysconfig/ceph
CEPH_AUTO_RESTART_ON_UPGRADE=no

So no :'(
RPMs were upgraded, but the OSDs were not restarted as I thought. Or at least not 
restarted with the new 12.2.7 binaries (the skip digest option showed as 
present on the running 12.2.6 OSDs, but I guess the 12.2.6 OSDs simply did not 
understand that option).

I just restarted all of the OSDs; I will check the behavior again and report 
here - thanks for pointing me in the right direction!
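(And to double-check this time:)

# ceph versions
(summarizes the running version of every mon/mgr/osd daemon)
# ceph tell osd.* version | head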

Fred

-----Original Message-----
From: Dan van der Ster [mailto:d...@vanderster.com] 
Sent: Tuesday, July 24, 2018 16:50
To: SCHAER Frederic 
Cc: ceph-users 
Subject: Re: [ceph-users] 12.2.7 + osd skip data digest + bluestore + I/O errors

`ceph versions` -- you're sure all the osds are running 12.2.7 ?

osd_skip_data_digest = true is supposed to skip any crc checks during reads.
But maybe the cache tiering IO path is different and checks the crc anyway?

-- dan


On Tue, Jul 24, 2018 at 3:01 PM SCHAER Frederic  wrote:
>
> Hi,
>
>
>
> I read the 12.2.7 upgrade notes, and set “osd skip data digest = true” before 
> I started upgrading from 12.2.6 on my Bluestore-only cluster.
>
> As far as I can tell, my OSDs all got restarted during the upgrade and all 
> got the option enabled :
>
>
>
> This is what I see for a specific OSD taken at random:
>
> # ceph --admin-daemon /var/run/ceph/ceph-osd.68.asok config show|grep 
> data_digest
>
> "osd_skip_data_digest": "true",
>
>
>
> This is what I see when I try to injectargs the data digest ignore option:
>
>
>
> # ceph tell osd.* injectargs '--osd_skip_data_digest=true' 2>&1|head
>
> osd.0: osd_skip_data_digest = 'true' (not observed, change may require 
> restart)
>
> osd.1: osd_skip_data_digest = 'true' (not observed, change may require 
> restart)
>
> osd.2: osd_skip_data_digest = 'true' (not observed, change may require 
> restart)
>
> osd.3: osd_skip_data_digest = 'true' (not observed, change may require 
> restart)
>
> (…)
>
>
>
> This has been like that since I upgraded to 12.2.7.
>
> I read in the release notes that the skip_data_digest option should be 
> sufficient to ignore the 12.2.6 corruptions and that objects should auto-heal 
> on rewrite…
>
>
>
> However…
>
>
>
> My config :
>
> -  Using tiering with an SSD hot storage tier
>
> -  HDDs for cold storage
>
>
>
> And… I get I/O errors on some VMs when running some commands as simple as 
> “yum check-update”.
>
>
>
> The qemu/kvm/libvirt logs show me these (in /var/log/libvirt/qemu):
>
>
>
> block I/O error in device 'drive-virtio-disk0': Input/output error (5)
>
>
>
> In the ceph logs, I can see these errors :
>
>
>
> 2018-07-24 11:17:56.420391 osd.71 [ERR] 1.23 copy from 
> 1:c590b9d7:::rbd_data.1920e2238e1f29.00e7:head to 
> 1:c590b9d7:::rbd_data.1920e2238e1f29.00e7:head data digest 
> 0x3bb26e16 != source 0xec476c54
>
> 2018-07-24 11:17:56.429936 osd.71 [ERR] 1.23 copy from 
> 1:c590b9d7:::rbd_data.1920e2238e1f29.00e7:head to 
> 1:c590b9d7:::rbd_data.1920e2238e1f29.00e7:head data digest 
> 0x3bb26e16 != source 0xec476c54
>
>
>
> (yes, my cluster is seen as healthy)
>
>
>
> On the affected OSDs, I can see these errors :
>
>
>
> 2018-07-24 11:17:56.420349 7f034642a700 -1 osd.71 pg_epoch: 182367 pg[1.23( v 
> 182367'46340724 (182367'46339152,182367'46340724] local-lis/les=182298/182299 
> n=344 ec=2726/2726 lis/c 182298/182298 les/c/f 182299/182299/0 
> 182298/182298/43896) [71,101,74] r=0 lpr=182298 crt=182367'46340724 lcod 
> 182367'46340723 mlcod 182367'46340723 active+clean] process_copy_chunk data 
> digest 0x3bb26e16 != source 0xec476c54
>
> 2018-07-24 11:17:56.420388 7f034642a700 -1 log_channel(cluster) log [ERR] : 
> 1.23 copy from 1:c590b9d7:::rbd_data.1920e2238e1f29.00e7:head to 
> 1:c590b9d7:::rbd_data.1920e2238e1f29.00e7:head data digest 
> 0x3bb26e16 != source 0xec476c54
>
> 2018-07-24 11:17:56.420395 7f034642a700 -1 osd.71 pg_epoch: 182367 pg[1.23( v 
> 182367'46340724 (182367'46339152,182367'46340724] local-lis/les=182298/182299 
> n=344 ec=2726/2726 lis/c 182298/182298 les/c/f 182299/182299/0 
> 182298/182298/43896) [71,101,74] r=0 lpr=182298 crt=182367'46340724 lcod 
> 182367'46340723 mlcod 182367'46340723 active+clean] finish_promote unexpected 
> promote error (5) Input/output error
>
> 2018-07-24 11:17:56.429900 7f034642a700 -1 osd.71 pg_epoch: 182367 pg[1.23( v 
> 182367'46340724 (182367'46339152,182367'46340724] local-lis/les=182298/182299 
> n=344 ec=2726/2726 lis/c 182298/182298 les/c/f 182299/182299/0 
> 182298/182298/43896) [71,101,74] r=0 lpr=182298 crt=182367'46340724 lcod 
> 182367'46340723 mlcod 182367'46340723 active+clean] process_copy_chunk data 
> digest 0x3bb26e16 != source 0xec476c54
>
> 2018-07-24 11:17:56.429934 7f034642a700 -1 log_channel(cluster) log [ERR] : 
> 1.23 copy from 1:c590b9d7:::rbd_data.1920e2238e1f29.00e7:head to 
> 1:c590b9d7:::rbd_data.19

Re: [ceph-users] 12.2.7 + osd skip data digest + bluestore + I/O errors

2018-07-25 Thread SCHAER Frederic
Hi again,

Now with all OSDs restarted, I'm getting 
health: HEALTH_ERR
777 scrub errors
Possible data damage: 36 pgs inconsistent
(...)
pgs: 4764 active+clean
 36   active+clean+inconsistent

But from what I could read up to now, this is what's expected, and it should 
auto-heal when objects are overwritten - fingers crossed, as pg repair or scrub 
doesn't seem to help.
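For anyone else hitting this, the inconsistencies can be located like this
(pg 1.23 below is just an example taken from my logs):

# rados list-inconsistent-pg <pool>
# rados list-inconsistent-obj 1.23 --format=json-pretty
# ceph pg repair 1.23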
New errors in the ceph logs include lines like the following, which I also 
hope/presume are expected - I still have posts to read on this list about omap 
and those errors:
2018-07-25 10:20:00.106227 osd.66 osd.66 192.54.207.75:6826/2430367 12 : 
cluster [ERR] 11.288 shard 207: soid 
11:1155c332:::rbd_data.207dce238e1f29.0527:head data_digest 
0xc8997a5b != data_digest 0x2ca15853 from auth oi 
11:1155c332:::rbd_data.207dce238e1f29.0527:head(182554'240410 
client.6084296.0:48463693 dirty|data_digest|omap_digest s 4194304 uv 49429318 
dd 2ca15853 od  alloc_hint [0 0 0])
2018-07-25 10:20:00.106230 osd.66 osd.66 192.54.207.75:6826/2430367 13 : 
cluster [ERR] 11.288 soid 
11:1155c332:::rbd_data.207dce238e1f29.0527:head: failed to pick 
suitable auth object

But never mind : with the SSD cache in writeback, I just saw the same error 
again on one VM (only) for now :
(lots of these)
2018-07-25 10:15:19.841746 osd.101 osd.101 192.54.207.206:6859/3392654 116 : 
cluster [ERR] 1.20 copy from 
1:06dd6812:::rbd_data.194b8c238e1f29.07a3:head to 
1:06dd6812:::rbd_data.194b8c238e1f29.07a3:head data digest 
0x27451e3c != source 0x12c05014

(osd.101 is a SSD from the cache pool)

=> yum update => I/O error => Set the TIER pool to forward => yum update starts.
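(The mode switch, for reference - forward requires an explicit override in
luminous:)

# ceph osd tier cache-mode ssd-hot-irfu-virt forward --yes-i-really-mean-it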

Weird, but if that happens only on this host, I can cope with it (I have 780+ 
scrub errors to handle now :/ )

And just to be sure ;)

[root@ceph10 ~]# ceph --admin-daemon /var/run/ceph/*osd*101* version
{"version":"12.2.7","release":"luminous","release_type":"stable"}

On the good side : this update is forcing us to dive into ceph internals : 
we'll be more ceph-aware tonight than this morning ;)

Cheers
Fred


Re: [ceph-users] Error creating compat weight-set with mgr balancer plugin

2018-07-25 Thread Martin Overgaard Hansen


> On 24 Jul 2018, at 13.22, Lothar Gesslein  wrote:
> 
>> On 07/24/2018 12:58 PM, Martin Overgaard Hansen wrote:
>> Creating a compat weight set manually with 'ceph osd crush weight-set
>> create-compat' gives me: Error EPERM: crush map contains one or more
>> bucket(s) that are not straw2
>> 
>> What changes do I need to implement to get the mgr balancer plugin
>> working? Thank.
> 
> You will need to run
> 
> ceph osd crush set-all-straw-buckets-to-straw2
> 
> which exists since ceph mimic v13.0.1 as a handy shortcut to upgrade to
> straw2.

Thanks, that worked!
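For the archives, the full sequence was roughly the following (note the straw2
conversion may trigger a small amount of data movement):

# ceph osd crush set-all-straw-buckets-to-straw2
# ceph osd crush weight-set create-compat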

Best regards,
Martin Overgaard Hansen
MultiHouse IT Partner A/S


Re: [ceph-users] 12.2.7 + osd skip data digest + bluestore + I/O errors

2018-07-25 Thread SCHAER Frederic
My cache pool seems affected by an old/closed bug... I don't think it is 
(directly?) related to the current issue, but it won't help anyway :-/
http://tracker.ceph.com/issues/12659

Since I got promote issues, I tried to flush only the affected rbd image : I 
got 6 unflush-able objects...

rbd image 'dev7243':
size 10240 MB in 2560 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.194b8c238e1f29
(...)

=>

# for i in `rados -p ssd-hot-irfu-virt ls |egrep '^rbd_data.194b8c238e1f29'`; 
do rados -p ssd-hot-irfu-virt cache-flush $i ; rados -p ssd-hot-irfu-virt 
cache-evict $i ; done
error from cache-flush rbd_data.194b8c238e1f29.082f: (16) Device or 
resource busy
error from cache-flush rbd_data.194b8c238e1f29.082f: (16) Device or 
resource busy
error from cache-flush rbd_data.194b8c238e1f29.0926: (16) Device or 
resource busy
error from cache-flush rbd_data.194b8c238e1f29.0926: (16) Device or 
resource busy
(...)

Strange that the cache-evict error message is the same as the cache flush one...
# rados -p ssd-hot-irfu-virt cache-evict 
rbd_data.194b8c238e1f29.082f
error from cache-flush rbd_data.194b8c238e1f29.082f: (16) Device or 
resource busy

Anyway : I stopped the VM and... I still can't flush the objects.
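(To check whether something still holds a watch - the rbd_header object name
below is derived from the image's block_name_prefix, and the base pool name is
an example:)

# rados -p <base-pool> listwatchers rbd_header.194b8c238e1f29
# rbd status <base-pool>/dev7243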
I don't think this is related anyway, as the OSD promote error is:

2018-07-25 10:51:44.386764 7fd27929b700 -1 log_channel(cluster) log [ERR] : 
1.39 copy from 1:9c0e12cc:::rbd_data.1920e2238e1f29.0dfc:head to 
1:9c0e12cc:::rbd_data.1920e2238e1f29.0dfc:head data digest 0x
632451e5 != source 0x73dfd8ab
2018-07-25 10:51:44.386769 7fd27929b700 -1 osd.74 pg_epoch: 182580 pg[1.39( v 
182580'38868939 (182579'38867404,182580'38868939] local-lis/les=182563/182564 
n=342 ec=2726/2726 lis/c 182563/182563 les/c/f 182564/182564/0 182563/
182563/182558) [74,71,19] r=0 lpr=182563 crt=182580'38868939 lcod 
182580'38868938 mlcod 182580'38868938 active+clean] finish_promote unexpected 
promote error (5) Input/output error

And I don't see object rbd_data.1920e2238e1f29.0dfc (:head ?) in 
the unflush-able objects...

Cheers

-Message d'origine-
De : ceph-users [mailto:ceph-users-boun...@lists.ceph.com] De la part de SCHAER 
Frederic
Envoyé : mercredi 25 juillet 2018 10:28
À : Dan van der Ster 
Cc : ceph-users 
Objet : [PROVENANCE INTERNET] Re: [ceph-users] 12.2.7 + osd skip data digest + 
bluestore + I/O errors

Hi again,

Now with all OSDs restarted, I'm getting 
health: HEALTH_ERR
777 scrub errors
Possible data damage: 36 pgs inconsistent
(...)
pgs: 4764 active+clean
 36   active+clean+inconsistent

But from what I could read up to now, this is what's expected and should 
auto-heal when objects are overwritten  - fingers crossed as pg repair or scrub 
doesn't seem to help.
New errors in the ceph logs include lines like the following, which I also 
hope/presume are expected - I still have posts to read on this list about omap 
and those  errors :
2018-07-25 10:20:00.106227 osd.66 osd.66 192.54.207.75:6826/2430367 12 : 
cluster [ERR] 11.288 shard 207: soid 
11:1155c332:::rbd_data.207dce238e1f29.0527:head data_digest 
0xc8997a5b != data_digest 0x2ca15853 from auth oi 
11:1155c332:::rbd_data.207dce238e1f29.0527:head(182554'240410 
client.6084296.0:48463693 dirty|data_digest|omap_digest s 4194304 uv 49429318 
dd 2ca15853 od  alloc_hint [0 0 0])
2018-07-25 10:20:00.106230 osd.66 osd.66 192.54.207.75:6826/2430367 13 : 
cluster [ERR] 11.288 soid 
11:1155c332:::rbd_data.207dce238e1f29.0527:head: failed to pick 
suitable auth object

But never mind : with the SSD cache in writeback, I just saw the same error 
again on one VM (only) for now :
(lots of these)
2018-07-25 10:15:19.841746 osd.101 osd.101 192.54.207.206:6859/3392654 116 : 
cluster [ERR] 1.20 copy from 
1:06dd6812:::rbd_data.194b8c238e1f29.07a3:head to 
1:06dd6812:::rbd_data.194b8c238e1f29.07a3:head data digest 
0x27451e3c != source 0x12c05014

(osd.101 is a SSD from the cache pool)

=> yum update => I/O error => Set the TIER pool to forward => yum update starts.

Weird, but if that happens only on this host, I can cope with it (I have 780+ 
scrub errors to handle now :/ )

And just to be sure ;)

[root@ceph10 ~]# ceph --admin-daemon /var/run/ceph/*osd*101* version
{"version":"12.2.7","release":"luminous","release_type":"stable"}

On the good side : this update is forcing us to dive into ceph internals : 
we'll be more ceph-aware tonight than this morning ;)

Cheers
Fred

-Message d'origine-
De : SCHAER Frederic 
Envoyé : mercredi 25 juillet 2018 09:57
À : 'Dan van der Ster' 
Cc : ceph-users 
Objet : RE: [ceph-users] 12.2.7 + osd skip data digest + bluestore + I/O errors

Hi Dan,

Just checked again : arggghhh...

# grep AUTO_RESTART /etc/sysconfig/ceph
CEPH_AUTO_RESTART_ON_UPGRADE=no

Re: [ceph-users] ls operation is too slow in cephfs

2018-07-25 Thread Surya Bala
The time got reduced when an MDS from the same region became active.

We have an MDS in each region. The OSD nodes are in one region and the active
MDS was in another region, hence the delay.
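A rough way to see the impact (host and path are examples):

# ping -c 5 <active-mds-host>
(inter-region round-trip time)
# time ls -f --color=never /mnt/cephfs/bigdir | wc -l
(directory listing without the per-file stat that --color triggers)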

On Tue, Jul 17, 2018 at 6:23 PM, John Spray  wrote:

> On Tue, Jul 17, 2018 at 8:26 AM Surya Bala 
> wrote:
> >
> > Hi folks,
> >
> > We have production cluster with 8 nodes and each node has 60 disks of
> size 6TB each. We are using cephfs and FUSE client with global mount point.
> We are doing rsync from our old server to this cluster rsync is slow
> compared to normal server
> >
> > when we do 'ls' inside some folder which has a very large number of files
> > (one or two lakh, i.e. 100-200k), the response is too slow.
>
> The first thing to check is what kind of "ls" you're doing.  Some
> systems colorize ls by default, and that involves statting every file
> in addition to listing the directory.  Try with "ls --color=never".
>
> It also helps to be more specific about what "too slow" means.  How
> many seconds, and how many files?
>
> John
>
> >
> > Any suggestions please
> >
> > Regards
> > Surya


Re: [ceph-users] Insane CPU utilization in ceph.fuse

2018-07-25 Thread Yan, Zheng
On Wed, Jul 25, 2018 at 5:04 PM Daniel Carrasco  wrote:
>
> Hello,
>
> I've attached the PDF.
>
> I don't know if it is important, but I made changes to the configuration and
> restarted the servers after dumping that heap file. I've changed the
> memory_limit to 25MB to test if it still runs with acceptable RAM values.
>

Looks like there is a memory leak in the async messenger. What's the output of
"dd /usr/bin/ceph-mds"? Could you try the simple messenger (add "ms type =
simple" to the 'global' section of ceph.conf)?

Regards
Yan, Zheng

> Greetings!
>
> 2018-07-25 2:53 GMT+02:00 Yan, Zheng :
>>
>> On Wed, Jul 25, 2018 at 4:52 AM Daniel Carrasco  wrote:
>> >
>> > Hello,
>> >
>> > I've run the profiler for about 5-6 minutes and this is what I've got:
>> >
>>
>> please run pprof --pdf /usr/bin/ceph-mds
>> /var/log/ceph/ceph-mds.x.profile..heap >
>> /tmp/profile.pdf. and send me the pdf
>>
>>
>>
>> > 
>> > 
>> > 
>> > Using local file /usr/bin/ceph-mds.
>> > Using local file 
>> > /var/log/ceph/mds.kavehome-mgto-pro-fs01.profile.0009.heap.
>> > Total: 400.0 MB
>> >362.5  90.6%  90.6%362.5  90.6% 
>> > ceph::buffer::create_aligned_in_mempool
>> > 20.4   5.1%  95.7% 29.8   7.5% CDir::_load_dentry
>> >  5.9   1.5%  97.2%  6.9   1.7% CDir::add_primary_dentry
>> >  4.7   1.2%  98.4%  4.7   1.2% ceph::logging::Log::create_entry
>> >  1.8   0.5%  98.8%  1.8   0.5% 
>> > std::_Rb_tree::_M_emplace_hint_unique
>> >  1.8   0.5%  99.3%  2.2   0.5% compact_map_base::decode
>> >  0.6   0.1%  99.4%  0.7   0.2% CInode::add_client_cap
>> >  0.5   0.1%  99.5%  0.5   0.1% 
>> > std::__cxx11::basic_string::_M_mutate
>> >  0.4   0.1%  99.6%  0.4   0.1% SimpleLock::more
>> >  0.4   0.1%  99.7%  0.4   0.1% MDCache::add_inode
>> >  0.3   0.1%  99.8%  0.3   0.1% CDir::add_to_bloom
>> >  0.2   0.1%  99.9%  0.2   0.1% CDir::steal_dentry
>> >  0.2   0.0%  99.9%  0.2   0.0% CInode::get_or_open_dirfrag
>> >  0.1   0.0%  99.9%  0.8   0.2% std::enable_if::type decode
>> >  0.1   0.0% 100.0%  0.1   0.0% ceph::buffer::list::crc32c
>> >  0.1   0.0% 100.0%  0.1   0.0% decode_message
>> >  0.0   0.0% 100.0%  0.0   0.0% OpTracker::create_request
>> >  0.0   0.0% 100.0%  0.0   0.0% TrackedOp::TrackedOp
>> >  0.0   0.0% 100.0%  0.0   0.0% std::vector::_M_emplace_back_aux
>> >  0.0   0.0% 100.0%  0.0   0.0% std::_Rb_tree::_M_insert_unique
>> >  0.0   0.0% 100.0%  0.0   0.0% CInode::add_dirfrag
>> >  0.0   0.0% 100.0%  0.0   0.0% MDLog::_prepare_new_segment
>> >  0.0   0.0% 100.0%  0.0   0.0% DispatchQueue::enqueue
>> >  0.0   0.0% 100.0%  0.0   0.0% ceph::buffer::list::push_back
>> >  0.0   0.0% 100.0%  0.0   0.0% Server::prepare_new_inode
>> >  0.0   0.0% 100.0%365.6  91.4% EventCenter::process_events
>> >  0.0   0.0% 100.0%  0.0   0.0% std::_Rb_tree::_M_copy
>> >  0.0   0.0% 100.0%  0.0   0.0% CDir::add_null_dentry
>> >  0.0   0.0% 100.0%  0.0   0.0% Locker::check_inode_max_size
>> >  0.0   0.0% 100.0%  0.0   0.0% CDentry::add_client_lease
>> >  0.0   0.0% 100.0%  0.0   0.0% CInode::project_inode
>> >  0.0   0.0% 100.0%  0.0   0.0% std::__cxx11::list::_M_insert
>> >  0.0   0.0% 100.0%  0.0   0.0% MDBalancer::handle_heartbeat
>> >  0.0   0.0% 100.0%  0.0   0.0% MDBalancer::send_heartbeat
>> >  0.0   0.0% 100.0%  0.0   0.0% C_GatherBase::C_GatherSub::complete
>> >  0.0   0.0% 100.0%  0.0   0.0% EventCenter::create_time_event
>> >  0.0   0.0% 100.0%  0.0   0.0% CDir::_omap_fetch
>> >  0.0   0.0% 100.0%  0.0   0.0% Locker::handle_inode_file_caps
>> >  0.0   0.0% 100.0%  0.0   0.0% std::_Rb_tree::_M_insert_equal
>> >  0.0   0.0% 100.0%  0.0   0.0% Locker::issue_caps
>> >  0.0   0.0% 100.0%  0.1   0.0% MDLog::_submit_thread
>> >  0.0   0.0% 100.0%  0.0   0.0% Journaler::_wait_for_flush
>> >  0.0   0.0% 100.0%  0.0   0.0% Journaler::wrap_finisher
>> >  0.0   0.0% 100.0%  0.0   0.0% MDSCacheObject::add_waiter
>> >  0.0   0.0% 100.0%  0.0   0.0% std::__cxx11::list::insert
>> >  0.0   0.0% 100.0%  0.0   0.0% std::__detail::_Map_base::operator[]
>> >  0.0   0.0% 100.0%  0.0   0.0% Locker::mark_updated_scatterlock
>> >  0.0   0.0% 100.0%  0.0   0.0% std::_Rb_tree::_M_insert_
>> >  0.0   0.0% 100.0%  0.0   0.0% alloc_ptr::operator->
>> >  0.0   0.0% 100.0%  0.0   0.0% ceph::buffer::list::append@5c1560
>> >  0.0   0.0% 100.0%  0.0   0.0% 
>> > ceph::buffer::malformed_input::~malformed_input
>> >  0.0   0.0% 10

[ceph-users] Why LZ4 isn't built with ceph?

2018-07-25 Thread Elias Abacioglu
Hi

I'm wondering why LZ4 isn't built by default for newer Linux distros like
Ubuntu Xenial?
I understand that it wasn't built for Trusty because of too-old lz4
libraries. But why isn't it built for the newer distros?

Thanks,
Elias


Re: [ceph-users] Insane CPU utilization in ceph.fuse

2018-07-25 Thread Yan, Zheng
On Wed, Jul 25, 2018 at 8:12 PM Yan, Zheng  wrote:
>
> On Wed, Jul 25, 2018 at 5:04 PM Daniel Carrasco  wrote:
> >
> > Hello,
> >
> > I've attached the PDF.
> >
> > I don't know if is important, but I made changes on configuration and I've 
> > restarted the servers after dump that heap file. I've changed the 
> > memory_limit to 25Mb to test if stil with aceptable values of RAM.
> >
>
> Looks like there are memory leak in async messenger.  what's output of
> "dd /usr/bin/ceph-mds"? Could you try simple messenger (add "ms type =
> simple" to 'global' section of ceph.conf)
>

Besides, are there any suspicious messages in mds log? such as "failed
to decode message of type"





[ceph-users] Cephfs meta data pool to ssd and measuring performance difference

2018-07-25 Thread Marc Roos
 

From this thread, I got how to move the metadata pool from the HDDs to 
the SSDs:
https://www.spinics.net/lists/ceph-users/msg39498.html

ceph osd pool get fs_meta crush_rule
ceph osd pool set fs_meta crush_rule replicated_ruleset_ssd

I guess this can be done on a live system?

What would be a good test to show the performance difference between the 
old HDDs and the new SSDs?
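I was thinking of something like the following, run before and after the rule
change (paths are examples; a small-file create/stat workload mostly exercises
the MDS and the metadata pool):

# mkdir /mnt/cephfs/mdtest
# time bash -c 'for i in $(seq 1 10000); do touch /mnt/cephfs/mdtest/f$i; done'
# time ls -f /mnt/cephfs/mdtest > /dev/null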


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] download.ceph.com repository changes

2018-07-25 Thread Sage Weil
On Tue, 24 Jul 2018, Alfredo Deza wrote:
> Hi all,
> 
> After the 12.2.6 release went out, we've been thinking on better ways
> to remove a version from our repositories to prevent users from
> upgrading/installing a known bad release.
> 
> The way our repos are structured today means every single version of
> the release is included in the repository. That is, for Luminous,
> every 12.x.x version of the binaries is in the same repo. This is true
> for both RPM and DEB repositories.
> 
> However, the DEB repos don't allow pinning to a given version because
> our tooling (namely reprepro) doesn't construct the repositories in a
> way that this is allowed. For RPM repos this is fine, and version
> pinning works.
> 
> To remove a bad version we have two proposals (and would like to hear
> ideas on other possibilities), one that would involve symlinks and the
> other one which purges the known bad version from our repos.
> 
> *Symlinking*
> When releasing we would have a "previous" and "latest" symlink that
> would get updated as versions move forward. It would require
> separation of versions at the URL level (all versions would no longer
> be available in one repo).
> 
> The URL structure would then look like:
> 
> debian/luminous/12.2.3/
> debian/luminous/previous/  (points to 12.2.5)
> debian/luminous/latest/   (points to 12.2.7)
> 
> Caveats: the url structure would change from debian-luminous/ to
> prevent breakage, and the versions would be split. For RPMs it would
> mean a regression if someone is used to pinning, for example pinning
> to 12.2.2 wouldn't be possible using the same url.
> 
> Pros: Faster release times, less need to move packages around, and
> easier to remove a bad version

I think the core question is: how many users use or depend on the 
version-pinning features with RPMs.  The symlinking path is easy to 
implement but breaks that ability... you have to change the repo URL to 
pin to a release.

(Nobody is doing version pinning with debs because our reprepro repos 
don't support it.)

In the future we can always move to the multi-version repos and maintain 
the per-version repos in parallel, right?

Thanks!
sage



> 
> 
> *Single version removal*
> Our tooling would need to go and remove the known bad version from the
> repository, which would require to rebuild the repository again, so
> that the metadata is updated with the difference in the binaries.
> 
> Caveats: time intensive process, almost like cutting a new release
> which takes about a day (and sometimes longer). Error prone since the
> process wouldn't be the same (one off, just when a version needs to be
> removed)
> 
> Pros: all urls for download.ceph.com and its structure are kept the same.


Re: [ceph-users] Insane CPU utilization in ceph.fuse

2018-07-25 Thread Daniel Carrasco
Hello,

Thanks for all your help.

Is dd an option of some command? Because at least on Debian/Ubuntu it is
an application to copy blocks, so it just fails.
For now I cannot change the configuration, but I'll try later.
About the logs, I haven't seen anything like "warning", "error", "failed",
"message" or similar, so it looks like there are no messages of that
kind.


Greetings!!

2018-07-25 14:48 GMT+02:00 Yan, Zheng :

> On Wed, Jul 25, 2018 at 8:12 PM Yan, Zheng  wrote:
> >
> > On Wed, Jul 25, 2018 at 5:04 PM Daniel Carrasco 
> wrote:
> > >
> > > Hello,
> > >
> > > I've attached the PDF.
> > >
> > > I don't know if is important, but I made changes on configuration and
> I've restarted the servers after dump that heap file. I've changed the
> memory_limit to 25Mb to test if stil with aceptable values of RAM.
> > >
> >
> > Looks like there are memory leak in async messenger.  what's output of
> > "dd /usr/bin/ceph-mds"? Could you try simple messenger (add "ms type =
> > simple" to 'global' section of ceph.conf)
> >
>
> Besides, are there any suspicious messages in mds log? such as "failed
> to decode message of type"

Re: [ceph-users] Why LZ4 isn't built with ceph?

2018-07-25 Thread Casey Bodley



On 07/25/2018 08:39 AM, Elias Abacioglu wrote:

Hi

I'm wondering why LZ4 isn't built by default for newer Linux distros 
like Ubuntu Xenial?
I understand that it wasn't built for Trusty because of too-old lz4 
libraries. But why isn't it built for the newer distros?


Thanks,
Elias




Hi Elias,

We only turned it on by default once it was available on all target 
platforms, which wasn't the case until the mimic release. This happened 
in https://github.com/ceph/ceph/pull/21332, with some prior discussion 
in https://github.com/ceph/ceph/pull/17038.


I don't know how to add build dependencies that are conditional on 
ubuntu version, but if you're keen to see this in luminous and have some 
debian packaging experience, you can target a PR against the luminous 
branch. I'm happy to help with review.


Casey


Re: [ceph-users] Reclaim free space on RBD images that use Bluestore?????

2018-07-25 Thread Sean Bolding
Thanks. Yes, it turns out this was not an issue with Ceph, but rather an
issue with XenServer. Starting in version 7, Xenserver changed how they
manage LVM by adding a VHD layer on top of it. They did it to handle live
migrations but ironically broke live migrations when using any iSCSI
including iSCSI to Ceph via lrbd. It works just fine with NFS based storage
repositories but not block storage. Doesn't look like they are going to fix
it since they are moving on to using glusterfs instead with an experimental
version of it starting in XenServer 7.5

 

From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Ronny Aasen
Sent: Monday, July 23, 2018 6:13 PM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Reclaim free space on RBD images that use
Bluestore?

 

On 23.07.2018 22:18, Sean Bolding wrote:

I have XenServers that connect via iSCSI to Ceph gateway servers that use
lrbd and targetcli. On my ceph cluster the RBD images I create are used as
storage repositories in Xenserver for the virtual machine vdisks. 

 

Whenever I delete a virtual machine, XenServer shows that the repository
size has decreased. This also happens when I mount a virtual drive in
Xenserver as a virtual drive in a Windows guest. If I delete a large file,
such as an exported VM, it shows as deleted and the space as available. However,
when I check in Ceph using ceph -s or ceph df, it still shows the space being
used.

 

I checked everywhere and it seems there was a reference to it here
https://github.com/ceph/ceph/pull/14727 but not sure if a way to trim or
discard freed blocks was ever implemented.

 

The only way I have found is to play musical chairs and move the VMs to
different repositories and then completely remove the old RBD images in
ceph. This is not exactly easy to do.

 

Is there a way to reclaim free space on RBD images that use Bluestore?
What commands do I use and where do I use this from? If such command exist
do I run them on the ceph cluster or do I run them from XenServer? Please
help.

 

 

Sean

 

 

 


I am not familiar with Xen, but it does sound like you have an RBD mounted
with a filesystem on the Xen server.
In that case it is the same as for other filesystems: deleted files are just
marked deleted in the file allocation table, and the RBD space is "reclaimed"
once the filesystem tells the device the blocks are free.

In many filesystems you would run the fstrim command to discard the freed
blocks, or optionally mount the fs with the discard option.
In XenServer >6.5 there should be a button in XenCenter to reclaim freed
space.
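For example (mount point and device are examples):

# fstrim -v /mnt/sr1
(one-off discard of the currently free blocks)
# mount -o discard /dev/rbd0 /mnt/sr1
(continuous discard, at some performance cost)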


kind regards
Ronny Aasen

 
 



Re: [ceph-users] ls operation is too slow in cephfs

2018-07-25 Thread Ronny Aasen
What are you talking about when you say you have an MDS in a region? AFAIK 
only radosgw supports multisite and regions.
It sounds like you have a cluster spread out over a geographical area, 
and this will have a massive impact on latency.


what is the latency between all servers in the cluster ?

kind regards
Ronny Aasen

On 25.07.2018 12:03, Surya Bala wrote:

The time got reduced when an MDS from the same region became active.

We have an MDS in each region. The OSD nodes are in one region and the active 
MDS is in another region, hence the delay.




Re: [ceph-users] Insane CPU utilization in ceph.fuse

2018-07-25 Thread Daniel Carrasco
I've changed the configuration, adding your line and changing the mds memory
limit to 512MB, and for now it looks stable (it's at about 3-6% and sometimes
even below 3%). I got a very high usage on boot:
1264 ceph  20   0 12,543g 6,251g  16184 S   2,0 41,1%   0:19.34 ceph-mds

but now it looks acceptable:
1264 ceph  20   0 12,543g 737952  16188 S   1,0  4,6%   0:41.05 ceph-mds

Anyway, I need more time to test it; 15 minutes is too little.
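For reference, the relevant ceph.conf bits now look roughly like this (the
cache limit is in bytes):

[global]
        ms type = simple

[mds]
        mds cache memory limit = 536870912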

Greetings!!

2018-07-25 17:16 GMT+02:00 Daniel Carrasco :

> Hello,
>
> Thanks for all your help.
>
> Is dd an option of some command? Because at least on Debian/Ubuntu it is
> an application to copy blocks, so it just fails.
> For now I cannot change the configuration, but I'll try later.
> About the logs, I haven't seen anything like "warning", "error", "failed",
> "message" or similar, so it looks like there are no messages of that
> kind.
>
>
> Greetings!!
>
> 2018-07-25 14:48 GMT+02:00 Yan, Zheng :
>
>> On Wed, Jul 25, 2018 at 8:12 PM Yan, Zheng  wrote:
>> >
>> > On Wed, Jul 25, 2018 at 5:04 PM Daniel Carrasco 
>> wrote:
>> > >
>> > > Hello,
>> > >
>> > > I've attached the PDF.
>> > >
>> > > I don't know if is important, but I made changes on configuration and
>> I've restarted the servers after dump that heap file. I've changed the
>> memory_limit to 25Mb to test if stil with aceptable values of RAM.
>> > >
>> >
>> > Looks like there are memory leak in async messenger.  what's output of
>> > "dd /usr/bin/ceph-mds"? Could you try simple messenger (add "ms type =
>> > simple" to 'global' section of ceph.conf)
>> >
>>
>> Besides, are there any suspicious messages in mds log? such as "failed
>> to decode message of type"
>>
>>
>>
>>

[ceph-users] Ceph, SSDs and the HBA queue depth parameter

2018-07-25 Thread Jean-Philippe Méthot
Hi,

We’re testing a full Intel SSD Ceph cluster on mimic with bluestore and I’m 
currently trying to squeeze some better performance out of it. We know that on 
older storage solutions, increasing the queue_depth for the HBA can sometimes 
speed up the IO. Is this also the case for Ceph? Is queue depth even something 
to think about considering we're using a full SSD setup?
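(For context, the per-device value can at least be read and changed from
sysfs - the device name is an example:)

# cat /sys/block/sdb/device/queue_depth
# echo 64 > /sys/block/sdb/device/queue_depth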

Best regards,

Jean-Philippe Méthot
Openstack system administrator
Administrateur système Openstack
PlanetHoster inc.






[ceph-users] LVM on top of RBD apparent pagecache corruption with snapshots

2018-07-25 Thread Alex Gorbachev
I am not sure this related to RBD, but in case it is, this would be an
important bug to fix.

Running LVM on top of RBD, XFS filesystem on top of that, consumed in RHEL 7.4.

When running a large read operation and doing LVM snapshots during
that operation, the block being read winds up all zeroes in pagecache.

Dropping the caches syncs up the block with what's on "disk" and
everything is fine.
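(By dropping the caches I mean:)

# sync; echo 3 > /proc/sys/vm/drop_caches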

Working on steps to reproduce simply - ceph is Luminous 12.2.7, RHEL
client is Jewel 10.2.10-17.el7cp

--
Alex Gorbachev
Storcium


Re: [ceph-users] LVM on top of RBD apparent pagecache corruption with snapshots

2018-07-25 Thread Jason Dillaman
On Wed, Jul 25, 2018 at 5:41 PM Alex Gorbachev 
wrote:

> I am not sure this related to RBD, but in case it is, this would be an
> important bug to fix.
>
> Running LVM on top of RBD, XFS filesystem on top of that, consumed in RHEL
> 7.4.
>
> When running a large read operation and doing LVM snapshots during
> that operation, the block being read winds up all zeroes in pagecache.
>
> Dropping the caches syncs up the block with what's on "disk" and
> everything is fine.
>
> Working on steps to reproduce simply - ceph is Luminous 12.2.7, RHEL
> client is Jewel 10.2.10-17.el7cp
>

Is this krbd or QEMU+librbd? If the former, what kernel version are you
running?


>
> --
> Alex Gorbachev
> Storcium


-- 
Jason


Re: [ceph-users] LVM on top of RBD apparent pagecache corruption with snapshots

2018-07-25 Thread Alex Gorbachev
On Wed, Jul 25, 2018 at 5:51 PM, Jason Dillaman  wrote:
>
>
> On Wed, Jul 25, 2018 at 5:41 PM Alex Gorbachev 
> wrote:
>>
>> I am not sure this related to RBD, but in case it is, this would be an
>> important bug to fix.
>>
>> Running LVM on top of RBD, XFS filesystem on top of that, consumed in RHEL
>> 7.4.
>>
>> When running a large read operation and doing LVM snapshots during
>> that operation, the block being read winds up all zeroes in pagecache.
>>
>> Dropping the caches syncs up the block with what's on "disk" and
>> everything is fine.
>>
>> Working on steps to reproduce simply - ceph is Luminous 12.2.7, RHEL
>> client is Jewel 10.2.10-17.el7cp
>
>
> Is this krbd or QEMU+librbd? If the former, what kernel version are you
> running?

It's krbd on RHEL.

RHEL kernel:

Linux dmg-cbcache01 3.10.0-862.el7.x86_64 #1 SMP Wed Mar 21 18:14:51
EDT 2018 x86_64 x86_64 x86_64 GNU/Linux

> --
> Jason


Re: [ceph-users] LVM on top of RBD apparent pagecache corruption with snapshots

2018-07-25 Thread Alex Gorbachev
On Wed, Jul 25, 2018 at 6:07 PM, Alex Gorbachev  
wrote:
> On Wed, Jul 25, 2018 at 5:51 PM, Jason Dillaman  wrote:
>>
>>
>> On Wed, Jul 25, 2018 at 5:41 PM Alex Gorbachev 
>> wrote:
>>>
>>> I am not sure this related to RBD, but in case it is, this would be an
>>> important bug to fix.
>>>
>>> Running LVM on top of RBD, XFS filesystem on top of that, consumed in RHEL
>>> 7.4.
>>>
>>> When running a large read operation and doing LVM snapshots during
>>> that operation, the block being read winds up all zeroes in pagecache.
>>>
>>> Dropping the caches syncs up the block with what's on "disk" and
>>> everything is fine.
>>>
>>> Working on steps to reproduce simply - ceph is Luminous 12.2.7, RHEL
>>> client is Jewel 10.2.10-17.el7cp
>>
>>
>> Is this krbd or QEMU+librbd? If the former, what kernel version are you
>> running?
>
> It's krbd on RHEL.
>
> RHEL kernel:
>
> Linux dmg-cbcache01 3.10.0-862.el7.x86_64 #1 SMP Wed Mar 21 18:14:51
> EDT 2018 x86_64 x86_64 x86_64 GNU/Linux

Not sure if this is exactly replicating the issue, but I was able to
do this on two different systems:

RHEL 7.4 kernel as above.

Create an LVM PV on a mapped kRBD device

example: pvcreate /dev/rbd/spin1/lvm1

Create a VG and LV, make an XFS FS

vgcreate datavg /dev/rbd/spin1/lvm1
lvcreate -n data1 -L 5G datavg
mkfs.xfs /dev/datavg/data1


Get some large file and copy it to some other file, same storage or
different.  All is well.

Now snapshot the LV

lvcreate -l8%ORIGIN -s -n snap_data1 /dev/datavg/data1 --addtag backup

Now try to copy that file again.  I get:

NMI watchdog: BUG: soft lockup - CPU#2 stuck for 22s! [kworker/2:1:3470]

And in dmesg (this is on Proxmox but I did the same on ESXi)

[1397609.308673] sched: RT throttling activated
[1397658.759259] NMI watchdog: BUG: soft lockup - CPU#0 stuck for 23s!
[kworker/0:1:2648]
[1397658.759354] Modules linked in: dm_snapshot dm_bufio rbd libceph
rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache
sunrpc ppdev joydev pcspkr sg parport_pc virtio_balloon parport shpchp
i2c_piix4 ip_tables xfs libcrc32c sd_mod sr_mod crc_t10dif
crct10dif_generic cdrom crct10dif_common ata_generic pata_acpi
virtio_scsi virtio_console virtio_net bochs_drm drm_kms_helper
syscopyarea sysfillrect sysimgblt fb_sys_fops ttm drm ata_piix libata
serio_raw virtio_pci i2c_core virtio_ring virtio floppy dm_mirror
dm_region_hash dm_log dm_mod
[1397658.759400] CPU: 0 PID: 2648 Comm: kworker/0:1 Kdump: loaded Not
tainted 3.10.0-862.el7.x86_64 #1
[1397658.759402] Hardware name: QEMU Standard PC (i440FX + PIIX,
1996), BIOS rel-1.11.0-0-g63451fca13-prebuilt.qemu-project.org
04/01/2014
[1397658.759415] Workqueue: kcopyd do_work [dm_mod]
[1397658.759418] task: 932df65d3f40 ti: 932fb138c000 task.ti:
932fb138c000
[1397658.759420] RIP: 0010:[]  []
copy_callback+0x50/0x130 [dm_snapshot]
[1397658.759426] RSP: 0018:932fb138fd08  EFLAGS: 0283
[1397658.759428] RAX: 0003e5e8 RBX: ebecc4943ec0 RCX:
932ff4704068
[1397658.759430] RDX: 932dc8050d00 RSI: 932fd6a0f9b8 RDI:

[1397658.759431] RBP: 932fb138fd28 R08: 932dc7d2c0b0 R09:
932dc8050d20
[1397658.759433] R10: c7d2b301 R11: ebecc01f4a00 R12:

[1397658.759435] R13: 000180090003 R14:  R15:
ff80
[1397658.759438] FS:  () GS:932fffc0()
knlGS:
[1397658.759440] CS:  0010 DS:  ES:  CR0: 8005003b
[1397658.759442] CR2: 7f17bcd5e860 CR3: 42c0e000 CR4:
06f0
[1397658.759447] Call Trace:
[1397658.759452]  [] ? origin_resume+0x70/0x70 [dm_snapshot]
[1397658.759459]  [] run_complete_job+0x6b/0xc0 [dm_mod]
[1397658.759466]  [] process_jobs+0x60/0x100 [dm_mod]
[1397658.759471]  [] ? kcopyd_put_pages+0x50/0x50 [dm_mod]
[1397658.759477]  [] do_work+0x42/0x90 [dm_mod]
[1397658.759483]  [] process_one_work+0x17f/0x440
[1397658.759485]  [] worker_thread+0x22c/0x3c0
[1397658.759489]  [] ? manage_workers.isra.24+0x2a0/0x2a0
[1397658.759494]  [] kthread+0xd1/0xe0
[1397658.759497]  [] ? insert_kthread_work+0x40/0x40
[1397658.759503]  [] ret_from_fork_nospec_begin+0x21/0x21
[1397658.759506]  [] ? insert_kthread_work+0x40/0x40


>
>> --
>> Jason


Re: [ceph-users] LVM on top of RBD apparent pagecache corruption with snapshots

2018-07-25 Thread Alex Gorbachev


Tried the same repro on Ubuntu kernel 4.14.39 - no issues.


[ceph-users] active directory integration with cephfs

2018-07-25 Thread Manuel Sopena Ballesteros
Dear Ceph community,

I am quite new to Ceph but trying to learn as quickly as I can. We are 
deploying our first Ceph production cluster in the next few weeks; we chose 
luminous and our goal is to have cephfs. One of the questions I have been asked 
by other members of our team is whether there is a possibility to integrate Ceph 
authentication/authorization with Active Directory. I have seen in the 
documentation that the object gateway can do this, but I am not sure about cephfs.

Does anyone have an idea whether I can integrate cephfs with AD?

Thank you very much

Manuel Sopena Ballesteros | Big data Engineer
Garvan Institute of Medical Research
The Kinghorn Cancer Centre, 370 Victoria Street, Darlinghurst, NSW 2010
T: + 61 (0)2 9355 5760 | F: +61 (0)2 9295 8507 | E: 
manuel...@garvan.org.au

NOTICE
Please consider the environment before printing this email. This message and 
any attachments are intended for the addressee named and may contain legally 
privileged/confidential/copyright information. If you are not the intended 
recipient, you should not read, use, disclose, copy or distribute this 
communication. If you have received this message in error please notify us at 
once by return email and then delete both messages. We accept no liability for 
the distribution of viruses or similar in electronic communications. This 
notice should not be removed.


Re: [ceph-users] active directory integration with cephfs

2018-07-25 Thread Serkan Çoban
You can do it by exporting cephfs via Samba. I don't think any other
way exists for cephfs.
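A minimal smb.conf sketch using Samba's vfs_ceph module on an AD-joined
server (the realm, share name and the cephx 'samba' user are examples):

[global]
        security = ads
        realm = AD.EXAMPLE.COM
        workgroup = EXAMPLE

[cephfs]
        path = /
        vfs objects = ceph
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba
        read only = no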

On Thu, Jul 26, 2018 at 9:12 AM, Manuel Sopena Ballesteros
 wrote:
> Dear Ceph community,
>
>
>
> I am quite new to Ceph but trying to learn as quickly as I can. We are
> deploying our first Ceph production cluster in the next few weeks; we chose
> luminous and our goal is to have cephfs. One of the questions I have been
> asked by other members of our team is whether there is a possibility to integrate
> Ceph authentication/authorization with Active Directory. I have seen in the
> documentation that the object gateway can do this, but I am not sure about cephfs.
>
>
>
> Does anyone have an idea whether I can integrate cephfs with AD?
>
>
>
> Thank you very much
>
>
>
> Manuel Sopena Ballesteros | Big data Engineer
> Garvan Institute of Medical Research
> The Kinghorn Cancer Centre, 370 Victoria Street, Darlinghurst, NSW 2010
> T: + 61 (0)2 9355 5760 | F: +61 (0)2 9295 8507 | E: manuel...@garvan.org.au