Re: [ceph-users] Ceph in OSPF environment

2019-01-18 Thread Robin H. Johnson
On Fri, Jan 18, 2019 at 12:21:07PM +, Max Krasilnikov wrote: > Dear colleagues, > > we built an L3 topology for use with CEPH, based on OSPF routing > between loopbacks, in order to get a reliable and ECMPed topology, like this: ... > CEPH configured in the way You have a minor

Re: [ceph-users] OSDs crashing in EC pool (whack-a-mole)

2019-01-18 Thread Peter Woodman
At the risk of hijacking this thread, like I said, I've run into this problem again, and have captured a log with debug_osd=20, viewable at https://www.dropbox.com/s/8zoos5hhvakcpc4/ceph-osd.3.log?dl=0 - any pointers? On Tue, Jan 8, 2019 at 11:31 AM Peter Woodman wrote: > > For the record, in the
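
A log like the one linked above is typically captured along these lines (a generic sketch only, not necessarily the commands used here; osd.3 is taken from the log file name):

    # raise OSD debug logging at runtime, reproduce the issue, then lower it again
    ceph tell osd.3 injectargs '--debug-osd 20'
    # ... wait for the crash / slow period ...
    ceph tell osd.3 injectargs '--debug-osd 1/5'
    # the log ends up in /var/log/ceph/ceph-osd.3.log by default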

Re: [ceph-users] CephFS - Small file - single thread - read performance.

2019-01-18 Thread jesper
Hi Everyone. Thanks for the testing, everyone - I think my system works as intended. When reading from another client - hitting the cache of the OSD hosts - I also get down to 7-8ms. As mentioned, this is probably as expected. I need to figure out how to increase parallelism somewhat - or convince

Re: [ceph-users] Today's DocuBetter meeting topic is... SEO

2019-01-18 Thread Brian Topping
Hi Noah! With an eye toward improving documentation and community, two things come to mind: 1. I didn’t know about this meeting or I would have done my very best to enlist my roommate, who probably could have answered these questions very quickly. I do know there’s something to do with the

Re: [ceph-users] Today's DocuBetter meeting topic is... SEO

2019-01-18 Thread Noah Watkins
1 PM PST / 9 PM GMT https://bluejeans.com/908675367 On Fri, Jan 18, 2019 at 10:31 AM Noah Watkins wrote: > > We'll be discussing SEO for the Ceph documentation site today at the > DocuBetter meeting. Currently when Googling or DuckDuckGoing for > Ceph-related things you may see results from

[ceph-users] Today's DocuBetter meeting topic is... SEO

2019-01-18 Thread Noah Watkins
We'll be discussing SEO for the Ceph documentation site today at the DocuBetter meeting. Currently, when Googling or DuckDuckGoing for Ceph-related things you may see results from master, mimic, or what's a dumpling? The goal is to figure out what sort of approach we can take to make these results

Re: [ceph-users] Boot volume on OSD device

2019-01-18 Thread Hector Martin
On 19/01/2019 02.24, Brian Topping wrote: > > >> On Jan 18, 2019, at 4:29 AM, Hector Martin wrote: >> >> On 12/01/2019 15:07, Brian Topping wrote: >>> I’m a little nervous that BlueStore assumes it owns the partition table and >>> will not be happy that a couple of primary partitions have been

Re: [ceph-users] Boot volume on OSD device

2019-01-18 Thread Brian Topping
> On Jan 18, 2019, at 4:29 AM, Hector Martin wrote: > > On 12/01/2019 15:07, Brian Topping wrote: >> I’m a little nervous that BlueStore assumes it owns the partition table and >> will not be happy that a couple of primary partitions have been used. Will >> this be a problem? > > You should

Re: [ceph-users] slow requests and high i/o / read rate on bluestore osds after upgrade 12.2.8 -> 12.2.10

2019-01-18 Thread Mark Nelson
On 1/18/19 9:22 AM, Nils Fahldieck - Profihost AG wrote: Hello Mark, I'm answering on behalf of Stefan. Am 18.01.19 um 00:22 schrieb Mark Nelson: On 1/17/19 4:06 PM, Stefan Priebe - Profihost AG wrote: Hello Mark, after reading

Re: [ceph-users] block.db on a LV? (Re: Mixed SSD+HDD OSD setup recommendation)

2019-01-18 Thread Alfredo Deza
On Fri, Jan 18, 2019 at 10:07 AM Jan Kasprzak wrote: > > Alfredo, > > Alfredo Deza wrote: > : On Fri, Jan 18, 2019 at 7:21 AM Jan Kasprzak wrote: > : > Eugen Block wrote: > : > : > : > : I think you're running into an issue reported a couple of times. > : > : For the use of LVM you have

Re: [ceph-users] Bluestore 32bit max_object_size limit

2019-01-18 Thread KEVIN MICHAEL HRPCEK
On 1/18/19 7:26 AM, Igor Fedotov wrote: Hi Kevin, On 1/17/2019 10:50 PM, KEVIN MICHAEL HRPCEK wrote: Hey, I recall reading about this somewhere but I can't find it in the docs or list archive and confirmation from a dev or someone who knows for sure would be nice. What I recall is that

Re: [ceph-users] slow requests and high i/o / read rate on bluestore osds after upgrade 12.2.8 -> 12.2.10

2019-01-18 Thread Nils Fahldieck - Profihost AG
Hello Mark, I'm answering on behalf of Stefan. Am 18.01.19 um 00:22 schrieb Mark Nelson: > > On 1/17/19 4:06 PM, Stefan Priebe - Profihost AG wrote: >> Hello Mark, >> >> after reading >> http://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/ >> >> again i'm really confused
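
For readers landing here from a search: the page referenced above covers the BlueStore cache options. A minimal illustrative fragment is sketched below; the values are simply the Luminous-era defaults as far as I recall, not a tuning recommendation from this thread:

    [osd]
    # 0 means: fall back to the per-device defaults below
    bluestore_cache_size = 0
    bluestore_cache_size_hdd = 1073741824   # 1 GiB per HDD-backed OSD
    bluestore_cache_size_ssd = 3221225472   # 3 GiB per SSD-backed OSD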

Re: [ceph-users] block.db on a LV? (Re: Mixed SSD+HDD OSD setup recommendation)

2019-01-18 Thread Jan Kasprzak
Alfredo, Alfredo Deza wrote: : On Fri, Jan 18, 2019 at 7:21 AM Jan Kasprzak wrote: : > Eugen Block wrote: : > : : > : I think you're running into an issue reported a couple of times. : > : For the use of LVM you have to specify the name of the Volume Group : > : and the respective

Re: [ceph-users] CephFS - Small file - single thread - read performance.

2019-01-18 Thread Marc Roos
Yes, and to be sure I did the read test again from another client. -Original Message- From: David C [mailto:dcsysengin...@gmail.com] Sent: 18 January 2019 16:00 To: Marc Roos Cc: aderumier; Burkhard.Linke; ceph-users Subject: Re: [ceph-users] CephFS - Small file - single thread -

Re: [ceph-users] CephFS - Small file - single thread - read performance.

2019-01-18 Thread David C
On Fri, 18 Jan 2019, 14:46 Marc Roos > > [@test]# time cat 50b.img > /dev/null > > real 0m0.004s > user 0m0.000s > sys 0m0.002s > [@test]# time cat 50b.img > /dev/null > > real 0m0.002s > user 0m0.000s > sys 0m0.002s > [@test]# time cat 50b.img > /dev/null > > real 0m0.002s

Re: [ceph-users] CephFS - Small file - single thread - read performance.

2019-01-18 Thread Marc Roos
[@test]# time cat 50b.img > /dev/null real 0m0.004s user 0m0.000s sys 0m0.002s [@test]# time cat 50b.img > /dev/null real 0m0.002s user 0m0.000s sys 0m0.002s [@test]# time cat 50b.img > /dev/null real 0m0.002s user 0m0.000s sys 0m0.001s [@test]# time cat 50b.img

Re: [ceph-users] CephFS - Small file - single thread - read performance.

2019-01-18 Thread David C
On Fri, Jan 18, 2019 at 2:12 PM wrote: > Hi. > > We have the intention of using CephFS for some of our shares, which we'd > like to spool to tape as part of our normal backup schedule. CephFS works nicely > for large files, but for "small" ones .. < 0.1MB .. there seems to be an > "overhead" of 20-40ms per

Re: [ceph-users] CephFS - Small file - single thread - read performance.

2019-01-18 Thread Alexandre DERUMIER
Hi, I don't see such big latencies: # time cat 50bytesfile > /dev/null real 0m0,002s user 0m0,001s sys 0m0,000s (It's on a Ceph SSD cluster (Mimic), kernel CephFS client (4.18), 10Gb network with low latency too; client/server have 3GHz CPUs) - Original Message - From:

Re: [ceph-users] CephFS - Small file - single thread - read performance.

2019-01-18 Thread Burkhard Linke
Hi, On 1/18/19 3:11 PM, jes...@krogh.cc wrote: Hi. We have the intention of using CephFS for some of our shares, which we'd like to spool to tape as part of our normal backup schedule. CephFS works nicely for large files, but for "small" ones .. < 0.1MB .. there seems to be an "overhead" of 20-40ms per

[ceph-users] CephFS - Small file - single thread - read performance.

2019-01-18 Thread jesper
Hi. We have the intention of using CephFS for some of our shares, which we'd like to spool to tape as part of our normal backup schedule. CephFS works nicely for large files, but for "small" ones .. < 0.1MB .. there seems to be an "overhead" of 20-40ms per file. I tested like this: root@abe:/nfs/home/jk# time
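
The quoted test is cut off by the archive; a comparable cold-cache measurement (not necessarily the poster's exact commands, and the path is made up) looks like this:

    # drop the client-side page cache so the read really goes to the cluster
    echo 3 > /proc/sys/vm/drop_caches
    # time a single small-file read from the CephFS mount
    time cat /cephfs/somedir/50b-file > /dev/null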

Re: [ceph-users] dropping python 2 for nautilus... go/no-go

2019-01-18 Thread Wido den Hollander
On 1/16/19 4:54 PM, c...@jack.fr.eu.org wrote: > Hi, > > My 2 cents: > - do drop python2 support I couldn't agree more. Python 2 needs to be dropped. > - do not drop python2 support unexpectedly, aka do a deprecation phase > Indeed. Deprecate it at the Nautilus release and drop it after N.

Re: [ceph-users] dropping python 2 for nautilus... go/no-go

2019-01-18 Thread Hector Martin
On 18/01/2019 22.33, Alfredo Deza wrote: > On Fri, Jan 18, 2019 at 7:07 AM Hector Martin wrote: >> >> On 17/01/2019 00:45, Sage Weil wrote: >>> Hi everyone, >>> >>> This has come up several times before, but we need to make a final >>> decision. Alfredo has a PR prepared that drops Python 2

Re: [ceph-users] dropping python 2 for nautilus... go/no-go

2019-01-18 Thread Alfredo Deza
On Fri, Jan 18, 2019 at 7:07 AM Hector Martin wrote: > > On 17/01/2019 00:45, Sage Weil wrote: > > Hi everyone, > > > > This has come up several times before, but we need to make a final > > decision. Alfredo has a PR prepared that drops Python 2 support entirely > > in master, which will mean

Re: [ceph-users] block.db on a LV? (Re: Mixed SSD+HDD OSD setup recommendation)

2019-01-18 Thread Alfredo Deza
On Fri, Jan 18, 2019 at 7:21 AM Jan Kasprzak wrote: > > Eugen Block wrote: > : Hi Jan, > : > : I think you're running into an issue reported a couple of times. > : For the use of LVM you have to specify the name of the Volume Group > : and the respective Logical Volume instead of the path, e.g. >

Re: [ceph-users] Bluestore 32bit max_object_size limit

2019-01-18 Thread Igor Fedotov
Hi Kevin, On 1/17/2019 10:50 PM, KEVIN MICHAEL HRPCEK wrote: Hey, I recall reading about this somewhere but I can't find it in the docs or list archive and confirmation from a dev or someone who knows for sure would be nice. What I recall is that bluestore has a max 4GB file size limit
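
One way to check the effective limit on a running cluster (assuming a local OSD admin socket and the option name osd_max_object_size, which rejects writes that would grow an object past the limit):

    # ask a local OSD for its configured maximum RADOS object size
    ceph daemon osd.0 config get osd_max_object_size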

Re: [ceph-users] quick questions about a 5-node homelab setup

2019-01-18 Thread Eugen Leitl
On Fri, Jan 18, 2019 at 12:42:21PM +0100, Robert Sander wrote: > On 18.01.19 11:48, Eugen Leitl wrote: > > > OSD on every node (Bluestore), journal on SSD (do I need a directory, or a > > dedicated partition? How large, assuming 2 TB and 4 TB Bluestore HDDs?) > > You need a partition on the SSD

Re: [ceph-users] block.db on a LV? (Re: Mixed SSD+HDD OSD setup recommendation)

2019-01-18 Thread Jan Kasprzak
Eugen Block wrote: : Hi Jan, : : I think you're running into an issue reported a couple of times. : For the use of LVM you have to specify the name of the Volume Group : and the respective Logical Volume instead of the path, e.g. : : ceph-volume lvm prepare --bluestore --block.db ssd_vg/ssd00

[ceph-users] Ceph in OSPF environment

2019-01-18 Thread Max Krasilnikov
Dear colleagues, we built an L3 topology for use with CEPH, based on OSPF routing between loopbacks, in order to get a reliable and ECMPed topology, like this: 10.10.200.6 proto bird metric 64     nexthop via 10.10.15.3 dev enp97s0f1 weight 1     nexthop via 10.10.25.3 dev enp19s0f0
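
One common way to pin the daemons to the OSPF-advertised loopback addresses is sketched below; the addresses are placeholders taken from the quoted route output, and this is not necessarily the poster's configuration:

    [global]
    public_network = 10.10.200.0/24

    [osd.0]
    public_addr = 10.10.200.6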

Re: [ceph-users] read-only mounts of RBD images on multiple nodes for parallel reads

2019-01-18 Thread Ilya Dryomov
On Fri, Jan 18, 2019 at 11:25 AM Mykola Golub wrote: > > On Thu, Jan 17, 2019 at 10:27:20AM -0800, Void Star Nill wrote: > > Hi, > > > > We are trying to use Ceph in our products to address some of the use cases. > > We think the Ceph block device works for us. One of the use cases is that we have a > >

Re: [ceph-users] dropping python 2 for nautilus... go/no-go

2019-01-18 Thread Hector Martin
On 17/01/2019 00:45, Sage Weil wrote: Hi everyone, This has come up several times before, but we need to make a final decision. Alfredo has a PR prepared that drops Python 2 support entirely in master, which will mean nautilus is Python 3 only. All of our distro targets (el7, bionic, xenial)

Re: [ceph-users] block.db on a LV? (Re: Mixed SSD+HDD OSD setup recommendation)

2019-01-18 Thread Eugen Block
Hi Jan, I think you're running into an issue reported a couple of times. For the use of LVM you have to specify the name of the Volume Group and the respective Logical Volume instead of the path, e.g. ceph-volume lvm prepare --bluestore --block.db ssd_vg/ssd00 --data /dev/sda Regards, Eugen
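
For completeness, a minimal sketch of the whole sequence (the device names, the VG/LV names and the 60G size are examples only):

    # create a volume group and a DB logical volume on the SSD
    vgcreate ssd_vg /dev/nvme0n1
    lvcreate -L 60G -n ssd00 ssd_vg
    # reference the LV as vg/lv, not as a /dev path
    ceph-volume lvm prepare --bluestore --block.db ssd_vg/ssd00 --data /dev/sda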

Re: [ceph-users] Suggestions/experiences with mixed disk sizes and models from 4TB - 14TB

2019-01-18 Thread Hector Martin
On 16/01/2019 18:33, Götz Reinicke wrote: My question is: How are your experiences with the current >=8TB SATA disks? Are there some very bad models out there which I should avoid? Be careful with Seagate consumer SATA drives. They are now shipping SMR drives without mentioning that fact anywhere

Re: [ceph-users] quick questions about a 5-node homelab setup

2019-01-18 Thread Robert Sander
On 18.01.19 11:48, Eugen Leitl wrote: > OSD on every node (Bluestore), journal on SSD (do I need a directory, or a > dedicated partition? How large, assuming 2 TB and 4 TB Bluestore HDDs?) You need a partition on the SSD for the block.db (it's not a journal anymore with BlueStore). You should
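
A hedged sketch of what that looks like in practice; the device names and the 60G size are placeholders, and sizing guidance for block.db differs between releases, so check the BlueStore docs for yours:

    # carve a GPT partition on the SSD to hold block.db for one OSD
    sgdisk --new=1:0:+60G --change-name=1:osd0-db /dev/nvme0n1
    # hand the partition to ceph-volume together with the data HDD
    ceph-volume lvm prepare --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1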

[ceph-users] block.db on a LV? (Re: Mixed SSD+HDD OSD setup recommendation)

2019-01-18 Thread Jan Kasprzak
Hello, Ceph users, replying to my own post from several weeks ago: Jan Kasprzak wrote: : [...] I plan to add new OSD hosts, : and I am looking for setup recommendations. : : Intended usage: : : - small-ish pool (tens of TB) for RBD volumes used by QEMU : - large pool for object-based

Re: [ceph-users] Boot volume on OSD device

2019-01-18 Thread Hector Martin
On 12/01/2019 15:07, Brian Topping wrote: I’m a little nervous that BlueStore assumes it owns the partition table and will not be happy that a couple of primary partitions have been used. Will this be a problem? You should look into using ceph-volume in LVM mode. This will allow you to
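
A minimal sketch of that approach, assuming the boot partitions already exist and /dev/sda4 is the leftover space on the disk (all names here are hypothetical):

    # give the remaining space to LVM and hand the LV to ceph-volume
    pvcreate /dev/sda4
    vgcreate ceph-osd-vg /dev/sda4
    lvcreate -l 100%FREE -n osd-data ceph-osd-vg
    ceph-volume lvm create --bluestore --data ceph-osd-vg/osd-data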

Re: [ceph-users] read-only mounts of RBD images on multiple nodes for parallel reads

2019-01-18 Thread Ilya Dryomov
On Fri, Jan 18, 2019 at 9:25 AM Burkhard Linke wrote: > > Hi, > > On 1/17/19 7:27 PM, Void Star Nill wrote: > > Hi, > > We are trying to use Ceph in our products to address some of the use cases. We > think the Ceph block device works for us. One of the use cases is that we have a number > of jobs running

[ceph-users] quick questions about a 5-node homelab setup

2019-01-18 Thread Eugen Leitl
(Crossposting this from Reddit /r/ceph , since likely to have more technical audience present here). I've scrounged up 5 old Atom Supermicro nodes and would like to run them 365/7 for limited production as RBD with Bluestore (ideally latest 13.2.4 Mimic), triple copy redundancy. Underlying

Re: [ceph-users] read-only mounts of RBD images on multiple nodes for parallel reads

2019-01-18 Thread Mykola Golub
On Thu, Jan 17, 2019 at 10:27:20AM -0800, Void Star Nill wrote: > Hi, > > We are trying to use Ceph in our products to address some of the use cases. > We think the Ceph block device works for us. One of the use cases is that we have a > number of jobs running in containers that need to have Read-Only
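
For context, the read-only case roughly comes down to the sketch below (pool/image names invented; noload is an ext2/3/4 mount option that skips journal replay, which matters when several clients map the same image read-only):

    # on each reader node: map the image read-only, then mount read-only
    rbd map mypool/shared-data --read-only
    mount -o ro,noload /dev/rbd0 /mnt/shared-data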

Re: [ceph-users] Ceph Nautilus Release T-shirt Design

2019-01-18 Thread Marc Roos
Is there an overview of previous T-shirts? -Original Message- From: Anthony D'Atri [mailto:a...@dreamsnake.net] Sent: 18 January 2019 01:07 To: Tim Serong Cc: Ceph Development; ceph-users@lists.ceph.com Subject: Re: [ceph-users] Ceph Nautilus Release T-shirt Design >> Lenz has

Re: [ceph-users] read-only mounts of RBD images on multiple nodes for parallel reads

2019-01-18 Thread Burkhard Linke
Hi, On 1/17/19 7:27 PM, Void Star Nill wrote: Hi, We are trying to use Ceph in our products to address some of the use cases. We think the Ceph block device works for us. One of the use cases is that we have a number of jobs running in containers that need to have Read-Only access to shared data. The