Re: [ceph-users] Reply: How's cephfs going?

2017-07-17 Thread Brady Deetz
We have a cephfs data pool with 52.8M files stored in 140.7M objects. That translates to a metadata pool size of 34.6MB across 1.5M objects. On Jul 18, 2017 12:54 AM, "Blair Bethwaite" wrote: > We are a data-intensive university, with an increasingly large fleet > of scientific instruments captu

Re: [ceph-users] Reply: How's cephfs going?

2017-07-17 Thread Blair Bethwaite
We are a data-intensive university, with an increasingly large fleet of scientific instruments capturing various types of data (mostly imaging of one kind or another). That data typically needs to be stored, protected, managed, shared, connected/moved to specialised compute for analysis. Given the

Re: [ceph-users] Reply: How's cephfs going?

2017-07-17 Thread Brady Deetz
No problem. We are a functional MRI research institute. We have a fairly mixed workload, but I can tell you that we see 60+ Gbps of throughput when multiple clients are reading sequentially on large files (1+GB) with 1-4MB block sizes. IO involving small files and small block sizes is not very goo

Re: [ceph-users] Long OSD restart after upgrade to 10.2.9

2017-07-17 Thread Anton Dmitriev
My cluster stores more than 1.5 billion objects in RGW; I don't use cephfs. The bucket index pool is stored on a separate SSD placement, but compaction occurs on all OSDs, including those that don't contain bucket indexes. After restarting every OSD 5 times nothing changed, each of them doing compact ag

Re: [ceph-users] updating the documentation

2017-07-17 Thread Dan Mick
On 07/12/2017 11:29 AM, Sage Weil wrote: > We have a fair-sized list of documentation items to update for the > luminous release. The other day when I started looking through what is > there now, though, I was also immediately struck by how out of date much > of the content is. In addition to

Re: [ceph-users] XFS attempt to access beyond end of device

2017-07-17 Thread Blair Bethwaite
Brilliant, thanks Marcus. We have just (noticed we've) hit this too and it looks like your script will fix it (will test and report back...). On 18 July 2017 at 14:08, Marcus Furlong wrote: > [ 92.938882] XFS (sdi1): Mounting V5 Filesystem > [ 93.065393] XFS (sdi1): Ending clean mount > [ 93.17529

Re: [ceph-users] XFS attempt to access beyond end of device

2017-07-17 Thread Marcus Furlong
On 22 March 2017 at 05:51, Dan van der Ster wrote: > On Wed, Mar 22, 2017 at 8:24 AM, Marcus Furlong wrote: >> Hi, >> >> I'm experiencing the same issue as outlined in this post: >> >> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-September/013330.html >> >> I have also deployed this j

[ceph-users] Reply: How's cephfs going?

2017-07-17 Thread 许雪寒
Thanks, sir☺ You are really a lot of help☺ May I ask what kind of business you are using cephFS for? What's the IO pattern:-) If answering this would involve any business secret, I really understand if you don't answer:-) Thanks again:-) From: Brady Deetz [mailto:bde...@gmail.com] Sent: July 2017

[ceph-users] Reply: How's cephfs going?

2017-07-17 Thread 许雪寒
Hi, thanks for the advice:-) By the way, may I ask what kind of business you are using cephFS for? What's the IO pattern of that business? And which version of ceph are you using? If this involves any business secret, it's really understandable not to answer:-) Thanks again for the help:-) ---

Re: [ceph-users] Systemd dependency cycle in Luminous

2017-07-17 Thread Michael Andersen
Thanks for pointing me towards that! You saved me a lot of stress. On Jul 17, 2017 4:39 PM, "Tim Serong" wrote: > On 07/17/2017 11:22 AM, Michael Andersen wrote: > > Hi all > > > > I recently upgraded two separate ceph clusters from Jewel to Luminous. > > (OS is Ubuntu xenial) Everything went smo

Re: [ceph-users] How's cephfs going?

2017-07-17 Thread Brady Deetz
I feel that the correct answer to this question is: it depends. I've been running a 1.75PB Jewel-based cephfs cluster in production for about 2 years at the Laureate Institute for Brain Research. Before that we had a good 6-8 month planning and evaluation phase. I'm running with active/standby dedic

Re: [ceph-users] Systemd dependency cycle in Luminous

2017-07-17 Thread Tim Serong
On 07/17/2017 11:22 AM, Michael Andersen wrote: > Hi all > > I recently upgraded two separate ceph clusters from Jewel to Luminous. > (OS is Ubuntu xenial) Everything went smoothly except on one of the > monitors in each cluster I had a problem shutting down/starting up. It > seems the systemd dep
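For anyone hitting a similar loop, a hedged way to see which units are involved before applying the fix Tim points to (the mon unit name is the usual per-host instance, not necessarily the one from the thread):

  $ journalctl -b | grep -i "ordering cycle"        # systemd logs the cycle it had to break
  $ systemctl list-dependencies ceph-mon@$(hostname).service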

Re: [ceph-users] Yet another performance tuning for CephFS

2017-07-17 Thread David Turner
What are your pool settings? That can affect your read/write speeds as much as anything in the ceph.conf file. On Mon, Jul 17, 2017, 4:55 PM Gencer Genç wrote: > I don't think so. > > Because I tried one thing a few minutes ago. I opened 4 ssh channel and > run rsync command and copy bigfile to
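As a hedged reference, the settings David is asking about can be dumped like this (the pool name cephfs_data is an assumption):

  $ ceph osd pool ls detail               # size, min_size, pg_num and crush rule for every pool
  $ ceph osd pool get cephfs_data all     # all settings for a single pool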

Re: [ceph-users] Bucket policies in Luminous

2017-07-17 Thread Graham Allan
Thanks for the update. I saw there was a set of new 12.1.1 packages today, so I updated to these (they appear to contain the update below) rather than build my own radosgw. I'm not sure what changed; I don't get a crash now, but I don't seem to be able to set any policy. My sample policy: % cat s3
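For context, a minimal sketch of a bucket policy and how it can be applied with s3cmd; the bucket name and user are placeholders, not Graham's actual policy:

  % cat policy.json
  {
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"AWS": ["arn:aws:iam:::user/otheruser"]},
      "Action": ["s3:ListBucket", "s3:GetObject"],
      "Resource": ["arn:aws:s3:::mybucket", "arn:aws:s3:::mybucket/*"]
    }]
  }
  % s3cmd setpolicy policy.json s3://mybucket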

Re: [ceph-users] How's cephfs going?

2017-07-17 Thread Deepak Naidu
Based on my experience, it's really stable and yes, it is production ready. Most use cases for cephFS depend on what you're trying to achieve. A few pieces of feedback: 1) The kernel client is nice/stable and can achieve higher bandwidth if you have a 40G or higher network. 2) ceph-fuse is very slow, as the wri
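For anyone comparing the two clients Deepak mentions, hedged mount examples (monitor address, secret file and mount point are placeholders):

  $ sudo mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret   # kernel client
  $ sudo ceph-fuse -m 10.0.0.1:6789 /mnt/cephfs                                                      # FUSE client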

Re: [ceph-users] Yet another performance tuning for CephFS

2017-07-17 Thread Gencer Genç
I don't think so. Because I tried one thing a few minutes ago: I opened 4 SSH channels, ran rsync, and copied bigfile to different targets in cephfs at the same time. Then I looked at the network graphs and saw numbers up to 1.09 GB/s. But why can't a single copy/rsync exceed 200 MB/s? Wha

Re: [ceph-users] Yet another performance tuning for CephFS

2017-07-17 Thread gencer
I have a separate 10GbE network for ceph and another for public. No, they are not NVMe, unfortunately. Do you know any test command I can try to see whether this is the maximum read speed from rsync? Because I tried one thing a few minutes ago: I opened 4 SSH channels, ran the rsync command, and co
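One hedged way to take rsync itself out of the picture and measure raw sequential throughput against the mount (the test file path is a placeholder):

  $ dd if=/dev/zero of=/mnt/cephfs/ddtest bs=4M count=1024 oflag=direct   # sequential write, bypassing the page cache
  $ dd if=/mnt/cephfs/ddtest of=/dev/null bs=4M iflag=direct              # sequential read back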

Re: [ceph-users] Yet another performance tuning for CephFS

2017-07-17 Thread Peter Maloney
You should have a separate public and cluster network. And journal or wal/db performance is important... are the devices fast NVMe? On 07/17/17 21:31, gen...@gencgiyen.com wrote: > > Hi, > > > > I located and applied almost every different tuning setting/config > over the internet. I couldn’t m

Re: [ceph-users] Yet another performance tuning for CephFS

2017-07-17 Thread Patrick Donnelly
On Mon, Jul 17, 2017 at 1:08 PM, wrote: > But let's try another. Let's say I have a file on my server which is 5GB. If I > do this: > > $ rsync ./bigfile /mnt/cephfs/targetfile --progress > > Then I see max. 200 MB/s. I think it is still slow :/ Is this expected? Perhaps that is the bandwidth l

Re: [ceph-users] iSCSI production ready?

2017-07-17 Thread Alvaro Soto
Thanks Jason. About the second part, never mind, now I see that the solution is to use the TCMU daemon; I was thinking of an out-of-the-box iSCSI endpoint directly from Ceph. Sorry, I don't have too much expertise in this area. Best. On Jul 17, 2017 6:54 AM, "Jason Dillaman" wrote: On Sat, Jul 15, 20

Re: [ceph-users] Yet another performance tuning for CephFS

2017-07-17 Thread gencer
Hi Patrick. Thank you for the prompt response. I added the ceph.conf file but I think you missed it. These are the configs I tuned (I also disabled debug logs in the global section). Correct me if I understood you wrongly on this. By the way, before I give you the config I want to answer on sync IO. Yes, if I remo

Re: [ceph-users] Yet another performance tuning for CephFS

2017-07-17 Thread Patrick Donnelly
Hi Gencer, On Mon, Jul 17, 2017 at 12:31 PM, wrote: > I located and applied almost every different tuning setting/config over the > internet. I couldn’t manage to speed up my speed one byte further. It is > always same speed whatever I do. I believe you're frustrated but this type of informatio

[ceph-users] Yet another performance tuning for CephFS

2017-07-17 Thread gencer
Hi, I located and applied almost every different tuning setting/config I could find on the internet. I couldn't manage to speed things up one byte further; it is always the same speed whatever I do. I was on Jewel; now I tried BlueStore on Luminous. Still the exact same speed from cephfs. It does
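A hedged baseline that bypasses the filesystem layer entirely and measures raw RADOS throughput, useful for separating cluster performance from CephFS/rsync behaviour (assumes the data pool is named cephfs_data):

  $ rados bench -p cephfs_data 30 write --no-cleanup
  $ rados bench -p cephfs_data 30 seq
  $ rados -p cephfs_data cleanup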

Re: [ceph-users] Long OSD restart after upgrade to 10.2.9

2017-07-17 Thread Josh Durgin
Both of you are seeing leveldb perform compaction when the osd starts up. This can take a while for large amounts of omap data (created by things like cephfs directory metadata or rgw bucket indexes). The 'leveldb_compact_on_mount' option wasn't changed in 10.2.9, but leveldb will compact automa
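A hedged way to check the option Josh mentions and to gauge how much omap (leveldb) data a filestore OSD carries (osd.0 and its data path are examples):

  $ ceph daemon osd.0 config get leveldb_compact_on_mount
  $ du -sh /var/lib/ceph/osd/ceph-0/current/omap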

Re: [ceph-users] Ceph (Luminous) shows total_space wrong

2017-07-17 Thread gencer
Update! Yeah, that was the problem. I zapped the disks (purge) and re-created them according to the official documentation. Now everything is OK; I can see all disks and total sizes properly. Let's see if this will bring any performance improvements compared to the previous standard schema (using Jew
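For reference, the usual zap-and-recreate flow with whole disks on Jewel-era ceph-deploy looks roughly like this; the hostname and device are taken from the thread, but the exact commands used are not shown:

  $ ceph-deploy disk zap sr-09-01-18:/dev/sdb
  $ ceph-deploy osd create sr-09-01-18:/dev/sdb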

Re: [ceph-users] missing feature 400000000000000 ?

2017-07-17 Thread Richard Hesketh
Correct me if I'm wrong, but I understand rbd-nbd is a userland client for mapping RBDs to local block devices (like "rbd map" in the kernel client), not a client for mounting the cephfs filesystem which is what Riccardo is using? Rich On 17/07/17 12:48, Massimiliano Cuttini wrote: > Hi Riccard
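For reference, a minimal sketch of the rbd-nbd usage Richard describes (pool and image names are placeholders):

  $ sudo rbd-nbd map rbd/myimage      # prints the attached device, e.g. /dev/nbd0
  $ sudo rbd-nbd unmap /dev/nbd0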

Re: [ceph-users] Ceph (Luminous) shows total_space wrong

2017-07-17 Thread gencer
When I use /dev/sdb or /dev/sdc (the whole disk) i get errors like this: ceph_disk.main.FilesystemTypeError: Cannot discover filesystem type: device /dev/sdb: Line is truncated: RuntimeError: command returned non-zero exit status: 1 RuntimeError: Failed to execute command: /usr/sbin/ceph-disk

Re: [ceph-users] Ceph (Luminous) shows total_space wrong

2017-07-17 Thread Wido den Hollander
> On 17 July 2017 at 17:03, gen...@gencgiyen.com wrote: > > > I used these methods: > > $ ceph-deploy osd prepare sr-09-01-18:/dev/sdb1 sr-10-01-18:/dev/sdb1 > (one from the 09th server, one from the 10th server..) > > and then; > > $ ceph-deploy osd activate sr-09-01-18:/dev/sdb1 sr-10-01-18:/d

Re: [ceph-users] Ceph (Luminous) shows total_space wrong

2017-07-17 Thread gencer
Also one more thing: if I want to use BlueStore, how do I let it know that I have more space? Do I need to specify a size at any point? -Gencer. -Original Message- From: gen...@gencgiyen.com [mailto:gen...@gencgiyen.com] Sent: Monday, July 17, 2017 6:04 PM To: 'Wido den Hollander' ; '
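One hedged guess at the 200GB figure: if an experimental BlueStore OSD ends up backed by a file rather than a raw block device, the default bluestore_block_size of 10 GB would account for roughly 200GB across 20 OSDs. A way to check on an affected node (osd.0 is an example):

  $ ceph daemon osd.0 config get bluestore_block_size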

Re: [ceph-users] Ceph (Luminous) shows total_space wrong

2017-07-17 Thread gencer
I used these methods: $ ceph-deploy osd prepare sr-09-01-18:/dev/sdb1 sr-10-01-18:/dev/sdb1 (one from the 09th server, one from the 10th server..) and then; $ ceph-deploy osd activate sr-09-01-18:/dev/sdb1 sr-10-01-18:/dev/sdb1 ... This is my second creation of a ceph cluster. At first I used bluest

Re: [ceph-users] Ceph (Luminous) shows total_space wrong

2017-07-17 Thread Wido den Hollander
> On 17 July 2017 at 16:41, gen...@gencgiyen.com wrote: > > > Hi Wido, > > Each disk is 3TB SATA (2.8TB seen) but what I got is this: > > First let me give you df -h: > > /dev/sdb1 2.8T 754M 2.8T 1% /var/lib/ceph/osd/ceph-0 > /dev/sdc1 2.8T 753M 2.8T 1% /var/lib/c

Re: [ceph-users] Ceph (Luminous) shows total_space wrong

2017-07-17 Thread gencer
Hi Wido, Each disk is 3TB SATA (2.8TB seen) but what I got is this: First let me give you df -h: /dev/sdb1 2.8T 754M 2.8T 1% /var/lib/ceph/osd/ceph-0 /dev/sdc1 2.8T 753M 2.8T 1% /var/lib/ceph/osd/ceph-2 /dev/sdd1 2.8T 752M 2.8T 1% /var/lib/ceph/osd/ceph-

Re: [ceph-users] How to force "rbd unmap"

2017-07-17 Thread Ilya Dryomov
On Thu, Jul 6, 2017 at 2:43 PM, Ilya Dryomov wrote: > On Thu, Jul 6, 2017 at 2:23 PM, Stanislav Kopp wrote: >> 2017-07-06 14:16 GMT+02:00 Ilya Dryomov : >>> On Thu, Jul 6, 2017 at 1:28 PM, Stanislav Kopp wrote: Hi, 2017-07-05 20:31 GMT+02:00 Ilya Dryomov : > On Wed, Jul 5, 201

Re: [ceph-users] Ceph mount rbd

2017-07-17 Thread lista
Dear, From your last message I understood that exclusive-lock works in kernel 4.9 or higher, and that this could help me with not allowing writes from two machines at once, but this feature is only available in kernel 4.12, is that right? I will read more about Pacemaker; in my testing environment, i
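A hedged sketch of checking and enabling the feature being discussed (pool and image names are placeholders):

  $ rbd info mypool/myimage | grep features       # shows whether exclusive-lock is enabled
  $ rbd feature enable mypool/myimage exclusive-lock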

Re: [ceph-users] Ceph (Luminous) shows total_space wrong

2017-07-17 Thread Wido den Hollander
> On 17 July 2017 at 15:49, gen...@gencgiyen.com wrote: > > > Hi, > > > > I successfully managed to work with ceph jewel. Want to try luminous. > > > > I also set experimental bluestore while creating osds. Problem is, I have > 20x3TB hdd in two nodes and I would expect 55TB usable (as

[ceph-users] Ceph (Luminous) shows total_space wrong

2017-07-17 Thread gencer
Hi, I successfully managed to work with ceph Jewel and want to try Luminous. I also set experimental BlueStore while creating OSDs. The problem is, I have 20x3TB HDDs in two nodes and I would expect 55TB usable (as on Jewel) on Luminous, but I see 200GB. Ceph thinks I have only 200GB of space available

Re: [ceph-users] Problems getting nfs-ganesha with cephfs backend to work.

2017-07-17 Thread Micha Krause
Hi, > Change Pseudo to something like /mypseudofolder I tried this without success, but I managed to get something working with version 2.5. I can mount the NFS export now; however, 2 problems remain: 1. The root directory of the mount point looks empty (ls shows no files), however director

Re: [ceph-users] Long OSD restart after upgrade to 10.2.9

2017-07-17 Thread Lincoln Bryant
Hi Anton, We observe something similar on our OSDs going from 10.2.7 to 10.2.9 (see thread "some OSDs stuck down after 10.2.7 -> 10.2.9 update"). Some of our OSDs are not working at all on 10.2.9 or die with suicide timeouts. Those that come up/in take a very long time to boot up. Seems to no

Re: [ceph-users] upgrade procedure to Luminous

2017-07-17 Thread Lars Marowsky-Bree
On 2017-07-14T15:18:54, Sage Weil wrote: > Yes, but how many of those clusters can only upgrade by updating the > packages and rebooting? Our documented procedures have always recommended > upgrading the packages, then restarting either mons or osds first and to > my recollection nobody has c

Re: [ceph-users] RBD cache being filled up in small increases instead of 4MB

2017-07-17 Thread Jason Dillaman
On Sat, Jul 15, 2017 at 8:00 PM, Ruben Rodriguez wrote: > > > On 14/07/17 18:43, Ruben Rodriguez wrote: >> How to reproduce... > > I'll provide more concise details on how to test this behavior: > > Ceph config: > > [client] > rbd readahead max bytes = 0 # we don't want forced readahead to fool us

Re: [ceph-users] RBD cache being filled up in small increases instead of 4MB

2017-07-17 Thread Jason Dillaman
Are you 100% positive that your files are actually stored sequentially on the block device? I would recommend running blktrace to verify the IO pattern from your use-case. On Sat, Jul 15, 2017 at 5:42 PM, Ruben Rodriguez wrote: > > > On 15/07/17 09:43, Nick Fisk wrote: >>> -Original Message--

Re: [ceph-users] RBD cache being filled up in small increases instead of 4MB

2017-07-17 Thread Jason Dillaman
On Sat, Jul 15, 2017 at 5:35 PM, Ruben Rodriguez wrote: > > > On 15/07/17 15:33, Jason Dillaman wrote: >> On Sat, Jul 15, 2017 at 9:43 AM, Nick Fisk wrote: >>> Unless you tell the rbd client to not disable readahead after reading the >>> 1st x number of bytes (rbd readahead disable after bytes=0
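For context, the readahead knobs under discussion live in the [client] section of ceph.conf; a hedged example with placeholder values rather than the defaults:

  [client]
  rbd cache = true
  rbd readahead max bytes = 4194304
  rbd readahead disable after bytes = 0    # keep readahead active instead of disabling it after the first reads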

Re: [ceph-users] iSCSI production ready?

2017-07-17 Thread Jason Dillaman
On Sat, Jul 15, 2017 at 11:01 PM, Alvaro Soto wrote: > Hi guys, > does anyone have any news about in what release the iSCSI interface is going to > be production ready, if not yet? There are several flavors of RBD iSCSI implementations that are in use by the community. We are working to solidify the

Re: [ceph-users] missing feature 400000000000000 ?

2017-07-17 Thread Massimiliano Cuttini
Hi Riccardo, using ceph-fuse will add an extra layer. Consider using ceph-nbd instead, which is a port that uses network block devices. This should be faster and allows you to use the latest tunables (which is better). On 17/07/2017 10:56, Riccardo Murri wrote: Thanks a lot to all! Both th

[ceph-users] ANN: ElastiCluster to deploy CephFS

2017-07-17 Thread Riccardo Murri
Hello, I would just like to let you know that ElastiCluster [1], a command-line tool to create and configure compute clusters on various IaaS clouds (OpenStack, AWS, GCE, and anything supported by Apache LibCloud), is now supporting CephFS as a shared cluster filesystem [2]. Although ElastiCluste

Re: [ceph-users] Any recommendations for CephFS metadata/data pool sizing?

2017-07-17 Thread Riccardo Murri
(David Turner, Mon, Jul 03, 2017 at 03:12:28PM +:) > I would also recommend keeping each pool at base 2 numbers of PGs. So with > the 512 PGs example, do 512 PGs for the data pool and 64 PGs for the > metadata pool. Thanks for all the suggestions! Eventually I went with a 1:7 metadata:data s
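As a concrete, hedged version of David's 512/64 example (pool and filesystem names are placeholders):

  $ ceph osd pool create cephfs_data 512 512
  $ ceph osd pool create cephfs_metadata 64 64
  $ ceph fs new cephfs cephfs_metadata cephfs_data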

Re: [ceph-users] What caps are necessary for FUSE-mounts of the FS?

2017-07-17 Thread Riccardo Murri
Hi John, all, (John Spray, Thu, Jun 29, 2017 at 12:50:47PM +0100:) > On Thu, Jun 29, 2017 at 11:42 AM, Riccardo Murri > wrote: > > The documentation at states: > > > > """ > > Before mounting a Ceph File System in User Space (FUSE), ensure that > >
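For reference, a hedged example of creating a client key with typical caps for a ceph-fuse mount; the client name and pool name are placeholders, not taken from the thread:

  $ ceph auth get-or-create client.fuseuser mon 'allow r' mds 'allow rw' osd 'allow rw pool=cephfs_data'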

Re: [ceph-users] cluster network question

2017-07-17 Thread Laszlo Budai
Hi David, thank you for the answer. It seems that in the case of a dedicated cluster network the monitors also need to be connected to that network, otherwise ceph-deploy fails: # ceph-deploy new --public-network 10.1.1.0/24 --cluster-network=10.3.3.0/24 monitor{1,2,3} ... [2017-07-17 1
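For reference, the two settings that end up in the generated ceph.conf look roughly like this (subnets taken from the command above):

  [global]
  public network = 10.1.1.0/24
  cluster network = 10.3.3.0/24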

Re: [ceph-users] Problems getting nfs-ganesha with cephfs backend to work.

2017-07-17 Thread Ricardo Dias
Hi, Not sure if using the root path in Pseudo is valid. Change Pseudo to something like /mypseudofolder and see if that solves the problem. Ricardo > On 17 Jul 2017, at 09:45, Micha Krause wrote: > > Hi, > > I'm trying to get nfs-ganesha to work with ceph as the FSAL backend. > > I'm using V

Re: [ceph-users] missing feature 400000000000000 ?

2017-07-17 Thread Riccardo Murri
Thanks a lot to all! Both the suggestion to use "ceph osd tunables hammer" and to use "ceph-fuse" instead solved the issue. Riccardo
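For reference, the full forms of the tunables commands (the shorthand quoted above presumably maps to the first one):

  $ ceph osd crush tunables hammer
  $ ceph osd crush show-tunables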

[ceph-users] Problems getting nfs-ganesha with cephfs backend to work.

2017-07-17 Thread Micha Krause
Hi, I'm trying to get nfs-ganesha to work with ceph as the FSAL backend. I'm using version 2.4.5; this is my ganesha.conf: EXPORT { Export_ID=1; Path = /; Pseudo = /; Access_Type = RW; Protocols = 3; Transports = TCP; FSAL {
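For reference, a hedged sketch of how such an export block for the Ceph FSAL is typically completed; everything beyond the quoted fragment is an assumption, not Micha's actual config:

  EXPORT {
      Export_ID = 1;
      Path = /;
      Pseudo = /cephfs;        # a non-root pseudo path, as suggested elsewhere in the thread
      Access_Type = RW;
      Protocols = 3,4;
      Transports = TCP;
      FSAL {
          Name = CEPH;
      }
  }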