[ceph-users] OpenStack Sydney Forum - Ceph BoF proposal

2017-09-29 Thread Blair Bethwaite
Hi all, I just submitted an OpenStack Forum proposal for a Ceph BoF session at OpenStack Sydney. If you're interested in seeing this happen then please hit up http://forumtopics.openstack.org/cfp/details/46 with your comments / +1's. -- Cheers, ~Blairo

[ceph-users] Cephfs : security questions?

2017-09-29 Thread Yoann Moulin
Hello, We are working on a POC with containers (kubernetes) and cephfs (for permanent storage). The main idea is to give a user access to a subdirectory of the cephfs while being sure he won't be able to access the rest of the storage. The way k8s works, the user will have access to the yml file wh

Re: [ceph-users] Cephfs : security questions?

2017-09-29 Thread Marc Roos
Maybe this will get you started with the permissions for restricting a client to only the fs path /smb: sudo ceph auth get-or-create client.cephfs.smb mon 'allow r' mds 'allow r, allow rw path=/smb' osd 'allow rwx pool=fs_meta, allow rwx pool=fs_data'
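Formatted for readability, the command from the message above looks roughly like this (a sketch only; the client name, path and pool names come from this thread and will differ per cluster):

    sudo ceph auth get-or-create client.cephfs.smb \
        mon 'allow r' \
        mds 'allow r, allow rw path=/smb' \
        osd 'allow rwx pool=fs_meta, allow rwx pool=fs_data'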

Re: [ceph-users] Cephfs : security questions?

2017-09-29 Thread Yoann Moulin
>> We are working on a POC with containers (kubernetes) and cephfs (for >> permanent storage). >> >> The main idea is to give to a user access to a subdirectory of the >> cephfs but be sure he won't be able to access to the rest of the >> storage. As k8s works, the user will have access to the

[ceph-users] osd create returns duplicate ID's

2017-09-29 Thread Luis Periquito
Hi all, I use puppet to deploy and manage my clusters. Recently, as I have been removing old hardware and adding new, I've noticed that sometimes "ceph osd create" returns repeated IDs. Usually it's on the same server, but yesterday I saw it on different servers. I was expec

Re: [ceph-users] Cephfs : security questions?

2017-09-29 Thread Marc Roos
I think that is because of the older kernel client, as mentioned here: https://www.mail-archive.com/ceph-users@lists.ceph.com/msg39734.html

Re: [ceph-users] osd create returns duplicate ID's

2017-09-29 Thread Adrian Saul
Do you mean that after you delete and remove the crush and auth entries for the OSD, when you go to create another OSD later it will re-use the previous OSD ID that you have destroyed in the past? Because I have seen that behaviour as well - but only for previously allocated OSD IDs that have

Re: [ceph-users] Cephfs : security questions?

2017-09-29 Thread Yoann Moulin
We are working on a POC with containers (kubernetes) and cephfs (for permanent storage). The main idea is to give to a user access to a subdirectory of the cephfs but be sure he won't be able to access to the rest of the storage. As k8s works, the user will have acc

Re: [ceph-users] Cephfs : security questions?

2017-09-29 Thread Stefan Kooman
Quoting Yoann Moulin (yoann.mou...@epfl.ch): > > Kernels on client is 4.4.0-93 and on ceph node are 4.4.0-96 > > What is exactly an older kernel client ? 4.4 is old ? See http://docs.ceph.com/docs/master/cephfs/best-practices/#which-kernel-version If you're on Ubuntu Xenial I would advise to use "linux-generic-hwe-16.04".
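A minimal sketch of switching a Xenial client to the HWE kernel, assuming the stock Ubuntu package name mentioned above; a reboot is needed to pick up the newer kernel:

    sudo apt update
    sudo apt install linux-generic-hwe-16.04
    sudo reboot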

[ceph-users] Bareos and libradosstriper works only for 4M stripe_unit size

2017-09-29 Thread Alexander Kushnirenko
Hi, I'm trying to use CEPH-12.2.0 as storage for Bareos-16.2.4 backups with libradosstriper1 support. Libradosstriper was suggested on this list to work around the fact that current CEPH-12 discourages users from using objects with very big sizes (>128MB). Bareos treats a Rados object as a Volume and

Re: [ceph-users] osd max scrubs not honored?

2017-09-29 Thread Stefan Kooman
Quoting Christian Balzer (ch...@gol.com): > > On Thu, 28 Sep 2017 22:36:22 + Gregory Farnum wrote: > > > Also, realize the deep scrub interval is a per-PG thing and (unfortunately) > > the OSD doesn't use a global view of its PG deep scrub ages to try and > > schedule them intelligently acros

[ceph-users] Ceph OSD get blocked and start to make inconsistent pg from time to time

2017-09-29 Thread Gonzalo Aguilar Delgado
Hi, I discovered that my cluster starts to report slow requests and all disk activity gets blocked. This happens once a day, and the ceph OSD goes to 100% CPU. In ceph health I get something like: 2017-09-29 10:49:01.227257 [INF] pgmap v67494428: 764 pgs: 1 active+recovery_wait+degraded+inc

Re: [ceph-users] osd create returns duplicate ID's

2017-09-29 Thread Luis Periquito
On Fri, Sep 29, 2017 at 9:44 AM, Adrian Saul wrote: > > Do you mean that after you delete and remove the crush and auth entries for > the OSD, when you go to create another OSD later it will re-use the previous > OSD ID that you have destroyed in the past? > The issue is that it has been giving th

Re: [ceph-users] osd create returns duplicate ID's

2017-09-29 Thread Maged Mokhtar
On 2017-09-29 10:44, Adrian Saul wrote: > Do you mean that after you delete and remove the crush and auth entries for > the OSD, when you go to create another OSD later it will re-use the previous > OSD ID that you have destroyed in the past? > > Because I have seen that behaviour as well - bu

Re: [ceph-users] Cephfs : security questions?

2017-09-29 Thread Yoann Moulin
>> Kernels on client is 4.4.0-93 and on ceph node are 4.4.0-96 >> >> What is exactly an older kernel client ? 4.4 is old ? > > See > http://docs.ceph.com/docs/master/cephfs/best-practices/#which-kernel-version > > If you're on Ubuntu Xenial I would advise to use > "linux-generic-hwe-16.04". Curr

Re: [ceph-users] Cephfs : security questions?

2017-09-29 Thread Stefan Kooman
Quoting Yoann Moulin (yoann.mou...@epfl.ch): > > >> Kernels on client is 4.4.0-93 and on ceph node are 4.4.0-96 > >> > >> What is exactly an older kernel client ? 4.4 is old ? > > > > See > > http://docs.ceph.com/docs/master/cephfs/best-practices/#which-kernel-version > > > > If you're on Ubuntu

[ceph-users] rados_read versus rados_aio_read performance

2017-09-29 Thread Alexander Kushnirenko
Hello, We see very poor performance when reading/writing rados objects. The speed is only 3-4MB/sec, compared to 95MB/s in rados benchmarking. When you look at the underlying code it uses the librados and libradosstriper libraries (both have poor performance) and the code uses the rados_read and rados_write func

Re: [ceph-users] RGW how to delete orphans

2017-09-29 Thread Andreas Calminder
Ok, thanks! So I'll wait a few days for the command to complete and see what kind of output it produces then. Regards, Andreas On 29 Sep 2017 12:32 a.m., "Christian Wuerdig" wrote: > I'm pretty sure the orphan find command does exactly that - > finding orphans. I remember some emails on

Re: [ceph-users] ceph/systemd startup bug (was Re: Some OSDs are down after Server reboot)

2017-09-29 Thread Matthew Vernon
Hi, On 29/09/17 01:00, Brad Hubbard wrote: > This looks similar to > https://bugzilla.redhat.com/show_bug.cgi?id=1458007 or one of the > bugs/trackers attached to that. Yes, although increasing the timeout still leaves the issue that if the timeout fires you don't get anything resembling a useful

Re: [ceph-users] osd create returns duplicate ID's

2017-09-29 Thread Maged Mokhtar
On 2017-09-29 11:31, Maged Mokhtar wrote: > On 2017-09-29 10:44, Adrian Saul wrote: > > Do you mean that after you delete and remove the crush and auth entries for > the OSD, when you go to create another OSD later it will re-use the previous > OSD ID that you have destroyed in the past? > >

Re: [ceph-users] Cephfs : security questions?

2017-09-29 Thread Yoann Moulin
Kernels on client is 4.4.0-93 and on ceph node are 4.4.0-96 What is exactly an older kernel client ? 4.4 is old ? >>> >>> See >>> http://docs.ceph.com/docs/master/cephfs/best-practices/#which-kernel-version >>> >>> If you're on Ubuntu Xenial I would advise to use >>> "linux-generic-

Re: [ceph-users] ceph/systemd startup bug (was Re: Some OSDs are down after Server reboot)

2017-09-29 Thread Brad Hubbard
On Fri, Sep 29, 2017 at 8:58 PM, Matthew Vernon wrote: > Hi, > > On 29/09/17 01:00, Brad Hubbard wrote: >> This looks similar to >> https://bugzilla.redhat.com/show_bug.cgi?id=1458007 or one of the >> bugs/trackers attached to that. > > Yes, although increasing the timeout still leaves the issue t

Re: [ceph-users] Ceph luminous repo not working on Ubuntu xenial

2017-09-29 Thread Kashif Mumtaz
Dear Stefan, Thanks for your help. You are right, I was missing "apt update" after adding the repo. After doing apt update I am able to install luminous: cadmin@admin:~/my-cluster$ ceph --version ceph version 12.2.1 (3e7492b9ada8bdc9a5cd0feafd42fbca27f9c38e) luminous (stable) I am not much in practice

Re: [ceph-users] Ceph luminous repo not working on Ubuntu xenial

2017-09-29 Thread Ronny Aasen
"apt-cache policy" shows you the different versions that are possible to install, and the prioritized order they have. the highest version will normally be installed unless priorities are changed. example: apt-cache policy ceph ceph:   Installed: 12.2.1-1~bpo90+1   Candidate: 12.2.1-1~bpo90+1  

Re: [ceph-users] rados_read versus rados_aio_read performance

2017-09-29 Thread Gregory Farnum
It sounds like you are doing synchronous reads of small objects here. In that case you are dominated by per-op latency rather than the throughput of your cluster. Using aio or multiple threads will let you parallelize requests. -Greg On Fri, Sep 29, 2017 at 3:33 AM Alexander Kushnirenko wrote:
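One way to see the effect of per-op latency versus parallelism is rados bench (the source of the 95MB/s figure mentioned earlier), which takes a concurrency flag. A sketch, assuming a throwaway pool named "test":

    # write some benchmark objects and keep them for the read tests
    rados bench -p test 30 write --no-cleanup
    # sequential read with a single in-flight op: dominated by per-op latency
    rados bench -p test 30 seq -t 1
    # 16 concurrent ops (the default): much closer to the cluster's throughput
    rados bench -p test 30 seq -t 16
    # remove the benchmark objects
    rados -p test cleanup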

Re: [ceph-users] Cephfs : security questions?

2017-09-29 Thread Gregory Farnum
In cases like this you also want to set RADOS namespaces for each tenant’s directory in the CephFS layout and give them OSD access to only that namespace. That will prevent malicious users from tampering with the raw RADOS objects of other users. -Greg On Fri, Sep 29, 2017 at 4:33 AM Yoann Moulin
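A minimal sketch of what that could look like, assuming a data pool named cephfs_data, a tenant directory /volumes/tenant1 mounted under /mnt/cephfs, and a client recent enough to honour the pool_namespace layout attribute (all names are illustrative):

    # pin the directory's file data to a dedicated RADOS namespace
    setfattr -n ceph.dir.layout.pool_namespace -v tenant1 /mnt/cephfs/volumes/tenant1
    # restrict the tenant's key to that path and that namespace
    ceph auth get-or-create client.tenant1 \
        mon 'allow r' \
        mds 'allow r, allow rw path=/volumes/tenant1' \
        osd 'allow rw pool=cephfs_data namespace=tenant1'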

[ceph-users] Objecter and librados logs on rbd image operations

2017-09-29 Thread Chamarthy, Mahati
Hi - I'm trying to get logs of Objecter and librados while doing operations (read/write) on an rbd image. Here is my ceph.conf: [global] debug_objecter = 20 debug_rados = 20 [client] rbd_cache = false log file = /self/ceph/ceph-rbd.log debug rbd = 20 debug objecter
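Laid out as a conventional ceph.conf, the (truncated) snippet above appears to be along these lines; the last value is cut off in the archived preview and is left blank here:

    [global]
    debug_objecter = 20
    debug_rados = 20

    [client]
    rbd_cache = false
    log file = /self/ceph/ceph-rbd.log
    debug rbd = 20
    debug objecter =    ; value truncated in the preview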

Re: [ceph-users] Cephfs : security questions?

2017-09-29 Thread Yoann Moulin
Hi, > Kernels on client is 4.4.0-93 and on ceph node are 4.4.0-96 > > What is exactly an older kernel client ? 4.4 is old ? > > See > http://docs.ceph.com/docs/master/cephfs/best-practices/#which-kernel-version > > If you're on Ubuntu Xenial I would advise to use >>

Re: [ceph-users] Cephfs : security questions?

2017-09-29 Thread Gregory Farnum
On Fri, Sep 29, 2017 at 7:34 AM Yoann Moulin wrote: > Hi, > > > Kernels on client is 4.4.0-93 and on ceph node are 4.4.0-96 > > > > What is exactly an older kernel client ? 4.4 is old ? > > > > See > > > http://docs.ceph.com/docs/master/cephfs/best-practices/#which-kernel-

[ceph-users] zone, zonegroup and resharding bucket on luminous

2017-09-29 Thread Yoann Moulin
Hello, I'm doing some tests with the radosgw on luminous (12.2.1) and I have a few questions. In the documentation[1], there is a reference to "radosgw-admin region get" but it no longer seems to be available. It should be "radosgw-admin zonegroup get" I guess. 1. http://docs.ceph.com/docs/lumino
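For reference, a sketch of the zonegroup-era equivalents on luminous (the old "region" commands were renamed; the default zonegroup is usually called "default"):

    # replaces "radosgw-admin region get"
    radosgw-admin zonegroup get
    # related commands for inspecting the multisite layout
    radosgw-admin zonegroup list
    radosgw-admin zone list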

Re: [ceph-users] Cephfs : security questions?

2017-09-29 Thread Yoann Moulin
>> In cases like this you also want to set RADOS namespaces for each tenant’s >> directory in the CephFS layout and give them OSD access to only that >> namespace. That will prevent malicious users from tampering with the raw >> RADOS objects of other users. > > You mean by doing something

Re: [ceph-users] osd max scrubs not honored?

2017-09-29 Thread David Turner
If you're scheduling them appropriately so that no deep scrubs will happen on their own, then you can just check the cluster status to see whether any PGs are deep scrubbing at all. If you're only scheduling them for specific pools, then you can confirm which PGs are being deep scrubbed in a specific pool wit
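A sketch of how that check could look from the CLI (output details vary a little between releases; the PG ids are prefixed with the pool id, so you can grep for a specific pool):

    # any deep scrubs running cluster-wide?
    ceph -s | grep -i deep
    # list PGs currently in a deep-scrubbing state
    ceph pg dump pgs_brief 2>/dev/null | grep 'scrubbing+deep'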

[ceph-users] Get rbd performance stats

2017-09-29 Thread Matthew Stroud
Is there a way I could get performance stats for rbd images? I'm looking for iops and throughput. The issue we are dealing with is that there was a sudden jump in throughput and I want to be able to find out which rbd volume might be causing it. I just manage the ceph cluster, not the opensta

[ceph-users] Ceph OSD on Hardware RAID

2017-09-29 Thread Hauke Homburg
Hello, I think that Ceph users don't recommend running a Ceph OSD on hardware RAID, but I haven't found a technical explanation for this. Can anybody give me one? Thanks for your help Regards Hauke -- www.w3-creative.de www.westchat.de

Re: [ceph-users] Ceph OSD get blocked and start to make inconsistent pg from time to time

2017-09-29 Thread David Turner
I'm going to assume you're dealing with your scrub errors and have a game plan for those, as you didn't mention them in your question at all. One thing I'm always leery of when I see blocked requests happening is that the PGs might be splitting subfolders. It is pretty much a guarantee if you're a
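If you want to check whether subfolder splitting could be in play, a sketch of inspecting what I believe are the relevant filestore settings via an OSD's admin socket (osd.0 is just an example; run it on the host carrying that OSD, and note these options only matter for filestore OSDs):

    ceph daemon osd.0 config show | grep -E 'filestore_(split_multiple|merge_threshold)'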

Re: [ceph-users] Ceph OSD on Hardware RAID

2017-09-29 Thread David Turner
The reason it is recommended not to raid your disks is to give them all to Ceph. When a disk fails, Ceph can generally recover faster than the raid can. The biggest problem with raid is that you need to replace the disk and rebuild the raid asap. When a disk fails in Ceph, the cluster just moves

Re: [ceph-users] New OSD missing from part of osd crush tree

2017-09-29 Thread Sean Purdy
On Thu, 10 Aug 2017, John Spray said: > On Thu, Aug 10, 2017 at 4:31 PM, Sean Purdy wrote: > > Luminous 12.1.1 rc And 12.2.1 stable > > We added a new disk and did: > > That worked, created osd.18, OSD has data. > > > > However, mgr output at http://localhost:7000/servers showed > > osd.18 unde

Re: [ceph-users] Get rbd performance stats

2017-09-29 Thread David Turner
There is no tool on the Ceph side to see which RBDs are doing what. Generally you need to monitor the mount points for the RBDs to track that down with iostat or something. That said, there are some tricky things you could probably do to track down the RBD that is doing a bunch of stuff (as long a
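On a client that does have the RBDs mapped, a sketch of what that monitoring might look like (device names are illustrative):

    # per-device IOPS and throughput, refreshed every 5 seconds
    iostat -xm /dev/rbd0 /dev/rbd1 5
    # map the rbdX device names back to pool/image names
    rbd showmapped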

Re: [ceph-users] Ceph OSD on Hardware RAID

2017-09-29 Thread Maged Mokhtar
On 2017-09-29 17:14, Hauke Homburg wrote: > Hello, > > Ich think that the Ceph Users don't recommend on ceph osd on Hardware > RAID. But i haven't found a technical Solution for this. > > Can anybody give me so a Solution? > > Thanks for your help > > Regards > > Hauke You get better perform

Re: [ceph-users] Get rbd performance stats

2017-09-29 Thread Maged Mokhtar
On 2017-09-29 17:13, Matthew Stroud wrote: > Is there a way I could get a performance stats for rbd images? I'm looking > for iops and throughput. > > This issue we are dealing with is that there was a sudden jump in throughput > and I want to be able to find out with rbd volume might be causi

Re: [ceph-users] Get rbd performance stats

2017-09-29 Thread David Turner
His dilemma sounded like he has access to the cluster, but not any of the clients where the RBDs are used or even the hypervisors in charge of those. On Fri, Sep 29, 2017 at 12:03 PM Maged Mokhtar wrote: > On 2017-09-29 17:13, Matthew Stroud wrote: > > Is there a way I could get a performance st

Re: [ceph-users] Get rbd performance stats

2017-09-29 Thread Matthew Stroud
Yeah, that is the core problem. I have been working with the teams that manage those. However, it appears there isn't a way I can check on my side.

Re: [ceph-users] Get rbd performance stats

2017-09-29 Thread Jason Dillaman
There is a feature in the backlog for an "rbd top"-like utility which could provide a probabilistic view of the top X% of RBD image stats against the cluster. The data collection would be done by each OSD individually, which is why it would be probabilistic stats instead of absolute. It also would onl

Re: [ceph-users] Large amount of files - cephfs?

2017-09-29 Thread Josef Zelenka
Hi everyone, thanks for the advice, we discussed it and we're going to test it out with cephfs first. Object storage is a possibility if it misbehaves. Hopefully it will go well :) On 28/09/17 08:20, Henrik Korkuc wrote: On 17-09-27 14:57, Josef Zelenka wrote: Hi, we are currently working on

Re: [ceph-users] Get rbd performance stats

2017-09-29 Thread Matthew Stroud
Yeah, I don’t have access to the hypervisors, nor the vms on said hypervisors. Having some sort of ceph-top would be awesome, I wish they would implement that. Thanks, Matthew Stroud On 9/29/17, 11:49 AM, "Jason Dillaman" wrote: There is a feature in the backlog for a "rbd top"-like utilit

Re: [ceph-users] Bareos and libradosstriper works only for 4M stripe_unit size

2017-09-29 Thread Gregory Farnum
I haven't used the striper, but it appears to make you specify sizes, stripe units, and stripe counts. I would expect you need to make sure that the size is an integer multiple of the stripe unit. And it probably defaults to a 4MB object if you don't specify one? On Fri, Sep 29, 2017 at 2:09 AM Al

Re: [ceph-users] Ceph OSD on Hardware RAID

2017-09-29 Thread Anthony D'Atri
In addition to the points that others made so well: - When using parity RAID, e.g. RAID5, to create OSD devices, one reduces aggregate write speed, especially if using HDDs, due to write amplification. - If using parity or replicated RAID, one might semi-reasonably get away with reducing Ceph's

Re: [ceph-users] osd create returns duplicate ID's

2017-09-29 Thread Anthony D'Atri
Luis: As others have mentioned, be sure that when you delete an OSD each step is completed successfully: - OSD process is killed - OSD is marked out/down in the CRUSH map - ceph osd crush delete osd.xxx - ceph osd rm osd.xxx - ceph auth del osd.xxx - Also be sure to unmount the /var/lib/ceph mou
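A sketch of that removal sequence as commands, for a hypothetical osd.12 on a systemd-based filestore deployment (adjust the unit name and mount path to your environment; the exact ordering of the last few steps varies between guides):

    ceph osd out osd.12
    systemctl stop ceph-osd@12          # or kill the OSD process by hand
    ceph osd crush remove osd.12        # take it out of the CRUSH map
    ceph auth del osd.12                # remove its cephx key
    ceph osd rm osd.12                  # remove the OSD id itself
    umount /var/lib/ceph/osd/ceph-12    # unmount the OSD's data directory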