Re: [ceph-users] Encryption questions

2019-01-10 Thread Tobias Florek
Hi,

as others pointed out, traffic in Ceph is unencrypted (internal traffic as well as client traffic). I usually advise setting up IPsec or, nowadays, WireGuard connections between all hosts. That takes care of any traffic going over the wire, including Ceph's.

Cheers,
Tobias Florek
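[A minimal sketch of the kind of WireGuard setup meant here, assuming two hosts on a dedicated 10.10.10.0/24 tunnel subnet; keys, endpoints, and addresses are placeholders, not anything from this thread.]

    # /etc/wireguard/wg0.conf on host A (all values illustrative)
    [Interface]
    Address = 10.10.10.1/24
    PrivateKey = <host-a-private-key>
    ListenPort = 51820

    [Peer]
    # host B
    PublicKey = <host-b-public-key>
    Endpoint = host-b.example.com:51820
    AllowedIPs = 10.10.10.2/32

Bring the tunnel up with "wg-quick up wg0" on each host, then point Ceph's public network / cluster network settings at the 10.10.10.0/24 range so all daemon and client traffic rides the tunnel.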

Re: [ceph-users] Encryption questions

2019-01-10 Thread Alexandre DERUMIER
>> 1) Are RBD connections encrypted or is there an option to use encryption
>> between clients and Ceph? From reading the documentation, I have the
>> impression that the only option to guarantee encryption in transit is to
>> force clients to encrypt volumes via dmcrypt. Is there another option?

Re: [ceph-users] centos 7.6 kernel panic caused by osd

2019-01-10 Thread Brad Hubbard
On Fri, Jan 11, 2019 at 9:57 AM Jason Dillaman wrote:
>
> I think Ilya recently looked into a bug that can occur when
> CONFIG_HARDENED_USERCOPY is enabled and the IO's TCP message goes
> through the loopback interface (i.e. co-located OSDs and krbd).
> Assuming that you have the same setup, you might be hitting the same bug.

Re: [ceph-users] centos 7.6 kernel panic caused by osd

2019-01-10 Thread Jason Dillaman
I think Ilya recently looked into a bug that can occur when CONFIG_HARDENED_USERCOPY is enabled and the IO's TCP message goes through the loopback interface (i.e. co-located OSDs and krbd). Assuming that you have the same setup, you might be hitting the same bug.

On Thu, Jan 10, 2019 at 6:46 PM
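[A quick, hedged way to check whether a given kernel was built with that option; paths vary by distro.]

    # CentOS/RHEL keep the build config under /boot
    grep CONFIG_HARDENED_USERCOPY /boot/config-$(uname -r)
    # kernels built with IKCONFIG_PROC expose it at runtime instead
    zgrep CONFIG_HARDENED_USERCOPY /proc/config.gz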

Re: [ceph-users] centos 7.6 kernel panic caused by osd

2019-01-10 Thread Brad Hubbard
On Fri, Jan 11, 2019 at 12:20 AM Rom Freiman wrote:
>
> Hey,
> After upgrading to centos7.6, I started encountering the following kernel
> panic
>
> [17845.147263] XFS (rbd4): Unmounting Filesystem
> [17846.860221] rbd: rbd4: capacity 3221225472 features 0x1
> [17847.109887] XFS (rbd4): Mounting V5 Filesystem

Re: [ceph-users] Encryption questions

2019-01-10 Thread Jack
Hi,

AFAIK, there is no encryption on the wire, either between daemons or between a daemon and a client. The only encryption available in Ceph is at rest, using dmcrypt (i.e. your data are encrypted before being written to disk).

Regards,

On 01/10/2019 07:59 PM, Sergio A. de Carvalho Jr. wrote:
>
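[For the at-rest side, a hedged sketch of creating an encrypted OSD with ceph-volume on Luminous and later; the device path is illustrative.]

    # create a bluestore OSD whose data device is wrapped in dm-crypt/LUKS
    ceph-volume lvm create --bluestore --dmcrypt --data /dev/sdb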

[ceph-users] Encryption questions

2019-01-10 Thread Sergio A. de Carvalho Jr.
Hi everyone, I have some questions about encryption in Ceph.

1) Are RBD connections encrypted or is there an option to use encryption between clients and Ceph? From reading the documentation, I have the impression that the only option to guarantee encryption in transit is to force clients to encrypt volumes via dmcrypt. Is there another option?
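[A hedged sketch of that client-side approach: layering LUKS on top of a mapped RBD device so data is encrypted before it leaves the client. Pool, image, and device names are illustrative.]

    # map the image on the client
    rbd map mypool/myimage            # appears as e.g. /dev/rbd0
    # wrap it in LUKS and use the mapped device instead of the raw one
    cryptsetup luksFormat /dev/rbd0
    cryptsetup open /dev/rbd0 secure-rbd
    mkfs.xfs /dev/mapper/secure-rbd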

Re: [ceph-users] Mimic 13.2.3?

2019-01-10 Thread Reed Dier
> Could I suggest building Luminous for Bionic

+1 for Luminous on Bionic. Ran into issues with bionic upgrades, and had to eventually revert from the ceph repos to the Ubuntu repos, where they have 12.2.8, which isn’t ideal.

Reed

> On Jan 9, 2019, at 10:27 AM, Matthew Vernon wrote:
>
> Hi,

Re: [ceph-users] Image has watchers, but cannot determine why

2019-01-10 Thread Kenneth Van Alstyne
Thanks for the reply — I was pretty darn sure, since I live-migrated all VMs off of that box and then killed everything but a handful of system processes (init, sshd, etc.), and the watcher was STILL present. That said, I halted the machine (since nothing was running on it any longer) and
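[A hedged way to see who is actually holding the watch; pool/image names and the image id are illustrative.]

    # high-level: rbd status lists current watchers
    rbd status mypool/myimage
    # lower-level: find the header object, then ask RADOS directly
    rbd info mypool/myimage | grep block_name_prefix   # e.g. rbd_data.<id>
    rados -p mypool listwatchers rbd_header.<id>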

Re: [ceph-users] Mimic 13.2.3?

2019-01-10 Thread Ronny Aasen
On 09.01.2019 17:27, Matthew Vernon wrote:
> Hi,
>
> On 08/01/2019 18:58, David Galloway wrote:
>> The current distro matrix is:
>>
>> Luminous: xenial centos7 trusty jessie stretch
>> Mimic: bionic xenial centos7
>
> Thanks for clarifying :)
>
> This may have been different in previous point releases because, as

Re: [ceph-users] cephfs free space issue

2019-01-10 Thread David C
On Thu, Jan 10, 2019 at 4:07 PM Scottix wrote:
> I just had this question as well.
>
> I am interested in what you mean by fullest, is it percentage wise or raw
> space. If I have an uneven distribution and adjusted it, would it make more
> space available potentially.

Yes - I'd recommend
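[For context, a hedged way to see which OSD is the fullest and to even out the distribution; the dry-run variant is worth running first.]

    # per-OSD utilization; the fullest OSD bounds MAX AVAIL in 'ceph df'
    ceph osd df
    # preview, then apply, a utilization-based reweight
    ceph osd test-reweight-by-utilization
    ceph osd reweight-by-utilization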

Re: [ceph-users] cephfs free space issue

2019-01-10 Thread Scottix
I just had this question as well.

I am interested in what you mean by fullest: is it percentage-wise or raw space? If I have an uneven distribution and adjusted it, would it potentially make more space available?

Thanks,
Scott

On Thu, Jan 10, 2019 at 12:05 AM Wido den Hollander wrote:
>
> On

Re: [ceph-users] Invalid RBD object maps of snapshots on Mimic

2019-01-10 Thread Jason Dillaman
On Thu, Jan 10, 2019 at 10:50 AM Oliver Freyermuth wrote:
>
> Dear Jason and list,
>
> On 10.01.19 at 16:28, Jason Dillaman wrote:
> > On Thu, Jan 10, 2019 at 4:01 AM Oliver Freyermuth wrote:
> >>
> >> Dear Cephalopodians,
> >>
> >> I performed several consistency checks now:
> >> -

Re: [ceph-users] Invalid RBD object maps of snapshots on Mimic

2019-01-10 Thread Oliver Freyermuth
Dear Jason and list,

On 10.01.19 at 16:28, Jason Dillaman wrote:
> On Thu, Jan 10, 2019 at 4:01 AM Oliver Freyermuth wrote:
>> Dear Cephalopodians,
>>
>> I performed several consistency checks now:
>> - Exporting an RBD snapshot before and after the object map rebuilding.
>> - Exporting a backup as raw

Re: [ceph-users] Invalid RBD object maps of snapshots on Mimic

2019-01-10 Thread Jason Dillaman
On Thu, Jan 10, 2019 at 4:01 AM Oliver Freyermuth wrote:
>
> Dear Cephalopodians,
>
> I performed several consistency checks now:
> - Exporting an RBD snapshot before and after the object map rebuilding.
> - Exporting a backup as raw image, all backups (re)created before and after
> the object

[ceph-users] centos 7.6 kernel panic caused by osd

2019-01-10 Thread Rom Freiman
Hey,

After upgrading to centos7.6, I started encountering the following kernel panic:

[17845.147263] XFS (rbd4): Unmounting Filesystem
[17846.860221] rbd: rbd4: capacity 3221225472 features 0x1
[17847.109887] XFS (rbd4): Mounting V5 Filesystem
[17847.191646] XFS (rbd4): Ending clean mount

Re: [ceph-users] Migrate/convert replicated pool to EC?

2019-01-10 Thread Fulvio Galeazzi
Hallo, I have the same issue as mentioned here, namely converting/migrating a replicated pool to an EC-based one. I have ~20 TB, so my problem is far easier, but I'd like to perform this operation without introducing any downtime (or possibly just a minimal one, to rename pools). I am
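[Not the poster's plan, just one hedged sketch for the RBD case: create an EC data pool and copy images into it. Profile parameters, pool names, and PG counts are illustrative, and this particular export/import path does require the image to be quiesced.]

    # EC profile and pool (overwrites need BlueStore OSDs, Luminous+)
    ceph osd erasure-code-profile set ecprofile k=4 m=2
    ceph osd pool create ecpool 128 128 erasure ecprofile
    ceph osd pool set ecpool allow_ec_overwrites true
    # copy an image so its data lands on the EC pool
    rbd export rbd/myimage - | rbd import --data-pool ecpool - rbd/myimage_ec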

[ceph-users] Using a cephfs mount as separate dovecot storage

2019-01-10 Thread Marc Roos
I wanted to expand the usage of the Ceph cluster and use a cephfs mount to archive mail messages. Only the 'Archive' tree below is going to be on this mount; the default folders stay where they are. Currently mbox is still being used. I thought about switching storage from mbox to mdbox.
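[A hedged dovecot sketch of such a split, assuming mdbox and a cephfs mount at /mnt/cephfs; the namespace name, paths, and prefix are illustrative, not the poster's actual config.]

    # default folders stay on local storage
    mail_location = mdbox:~/mdbox

    # only the Archive tree lives on the cephfs mount
    namespace archive {
      prefix = Archive/
      separator = /
      location = mdbox:/mnt/cephfs/mail/%u/mdbox
      list = children
      subscriptions = no
    }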

[ceph-users] Clarification of mon osd communication

2019-01-10 Thread Eugen Block
Hello list,

there are two config options of mon/osd interaction that I don't fully understand. Maybe one of you could clarify it for me.

mon osd report timeout - The grace period in seconds before declaring unresponsive Ceph OSD Daemons down. Default 900
mon osd down out interval - The
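[A hedged way to inspect the values a running mon actually uses; the daemon name mon.a is illustrative.]

    ceph daemon mon.a config get mon_osd_report_timeout
    ceph daemon mon.a config get mon_osd_down_out_interval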

Re: [ceph-users] EC pools grinding to a screeching halt on Luminous

2019-01-10 Thread Florian Haas
Hi Mohamad!

On 31/12/2018 19:30, Mohamad Gebai wrote:
> On 12/31/18 4:51 AM, Marcus Murwall wrote:
>> What you say does make sense though as I also get the feeling that the
>> osds are just waiting for something. Something that never happens and
>> the request finally timeout...
>
> So the OSDs
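[When OSDs look like they are waiting on something, a hedged way to see the stuck requests from the OSD's side; the osd id is illustrative.]

    # ops currently in flight, with the state each one is blocked in
    ceph daemon osd.12 dump_ops_in_flight
    # recently completed slow ops
    ceph daemon osd.12 dump_historic_ops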

Re: [ceph-users] recovering vs backfilling

2019-01-10 Thread Dan van der Ster
Hi Caspar,

On Thu, Jan 10, 2019 at 1:31 PM Caspar Smit wrote:
>
> Hi all,
>
> I wanted to test Dan's upmap-remapped script for adding new osd's to a
> cluster. (Then letting the balancer gradually move pgs to the new OSD
> afterwards)

Cool. Insert "no guarantees or warranties" comment here.

[ceph-users] Get packages - incorrect link

2019-01-10 Thread Jan Kasprzak
Hello, Ceph users,

I am not sure where to report the issue with the ceph.com website, so I am posting to this list: The https://ceph.com/use/ page has an incorrect link for getting the packages: "For packages, see http://ceph.com/docs/master/install/get-packages" - the URL should be

[ceph-users] recovering vs backfilling

2019-01-10 Thread Caspar Smit
Hi all,

I wanted to test Dan's upmap-remapped script for adding new OSDs to a cluster (then letting the balancer gradually move PGs to the new OSDs afterwards).

I've created a fresh (virtual) 12.2.10 4-node cluster with very small disks (16GB each), 2 OSDs per node. Put ~20GB of data on the
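[For reference, a hedged sketch of turning on the balancer in upmap mode on a Luminous cluster; upmap requires all clients to be Luminous-aware.]

    # upmap needs luminous-or-later clients
    ceph osd set-require-min-compat-client luminous
    ceph mgr module enable balancer
    ceph balancer mode upmap
    ceph balancer on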

Re: [ceph-users] two OSDs with high out rate

2019-01-10 Thread Wido den Hollander
On 1/10/19 12:59 PM, Marc wrote:
> Hi,
>
> for support reasons we're still running firefly (part of MCP 6). In our
> grafana monitoring we noticed that two out of 128 OSD processes show
> significantly higher outbound IO than all the others and this is
> constant (can't see the first occurrence of

[ceph-users] two OSDs with high out rate

2019-01-10 Thread Marc
Hi,

for support reasons we're still running firefly (part of MCP 6). In our grafana monitoring we noticed that two out of 128 OSD processes show significantly higher outbound IO than all the others, and this is constant (can't see the first occurrence of this anymore; grafana only has 14 days
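[A hedged way to compare what those two OSDs are sending versus the rest; the osd id is illustrative, and counter names may differ on a release as old as firefly.]

    # cumulative client traffic counters from the OSD's perf dump
    ceph daemon osd.42 perf dump | grep -E 'op_(in|out)_bytes'
    # cluster-wide per-OSD latency overview
    ceph osd perf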

Re: [ceph-users] osdmaps not being cleaned up in 12.2.8

2019-01-10 Thread Dan van der Ster
Hi Bryan,

I think this is the old hammer thread you refer to:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-September/013060.html

We also have osdmaps accumulating on v12.2.8 -- ~12000 per OSD at the moment. I'm trying to churn the osdmaps like before, but our maps are not being
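[A hedged way to gauge the osdmap backlog; the filestore path check echoes the hammer-era thread above, paths and ids are illustrative, and the directory count does not apply to BlueStore.]

    # range of osdmap epochs the cluster is still keeping
    ceph report 2>/dev/null | jq '.osdmap_first_committed, .osdmap_last_committed'
    # on a filestore OSD, count the cached map objects directly
    find /var/lib/ceph/osd/ceph-0/current/meta -name 'osdmap*' | wc -l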

Re: [ceph-users] Image has watchers, but cannot determine why

2019-01-10 Thread Ilya Dryomov
On Wed, Jan 9, 2019 at 5:17 PM Kenneth Van Alstyne wrote:
>
> Hey folks, I’m looking into what I would think would be a simple problem, but
> is turning out to be more complicated than I would have anticipated. A
> virtual machine managed by OpenNebula was blown away, but the backing RBD
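[When a stale watch refuses to go away, one hedged remedy is to blacklist the client that holds it; the address would come from the watcher listing and these values are illustrative.]

    # the watcher's address as reported by 'rados listwatchers'
    ceph osd blacklist add 192.168.1.10:0/3895675859
    # list and later remove the entry
    ceph osd blacklist ls
    ceph osd blacklist rm 192.168.1.10:0/3895675859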

Re: [ceph-users] Invalid RBD object maps of snapshots on Mimic

2019-01-10 Thread Oliver Freyermuth
Dear Cephalopodians,

I performed several consistency checks now:
- Exporting an RBD snapshot before and after the object map rebuilding.
- Exporting a backup as raw image, all backups (re)created before and after the object map rebuilding.
- md5summing all of that for a snapshot for which the
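[For reference, a hedged sketch of the check/rebuild cycle this thread is about; pool, image, and snapshot names are illustrative.]

    # verify a snapshot's object map
    rbd object-map check mypool/myimage@mysnap
    # rebuild it if it is flagged invalid
    rbd object-map rebuild mypool/myimage@mysnap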