Re: [ceph-users] BlueStore write amplification

2016-08-22 Thread Jan Schermer
Is that 400MB on all nodes or on each node? If it's on all nodes then 10:1 is not that surprising. What was the block size in your fio benchmark? We had much higher amplification on our cluster with snapshots and stuff... Jan > On 23 Aug 2016, at 08:38, Zhiyuan Wang wrote: > > Hi > I have te

Re: [ceph-users] BlueStore write amplification

2016-08-22 Thread Varada Kari
Hi, You can refer to the thread "Odd WAL traffic for BlueStore" on the devel list for your questions. This traffic is mostly observed on the WAL partition of BlueStore, which is used by RocksDB. The above thread should give more insight into your questions. Varada On Tuesday 23 August 2016 12:09 PM, Zhiyuan W

[ceph-users] BlueStore write amplification

2016-08-22 Thread Zhiyuan Wang
Hi, I have tested BlueStore on SSD, and I found that the bandwidth reported by fio is about 40MB/s, but the write bandwidth reported by iostat on the SSD is about 400MB/s, nearly ten times higher. Could someone help explain this? Thanks a lot. Below is my configuration file: [global] fsid = 31e77e3c-447c-4745-a91a-58bda80a868c
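
For context, a minimal sketch of how such a comparison is typically made. The fio job shown here is an assumption (the original job file is truncated above) and device names are placeholders:

    # Client-side view: small random writes through fio
    fio --name=randwrite --filename=/dev/rbd0 --rw=randwrite --bs=4k \
        --iodepth=32 --ioengine=libaio --direct=1 --runtime=60 --time_based
    # Device-side view: watch wMB/s on the OSD SSDs while fio runs
    iostat -xm 5

With a small client block size, the extra WAL/RocksDB traffic mentioned in the reply above multiplies the bytes actually hitting the SSD, which is the gap being asked about.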

[ceph-users] RGW CORS bug report

2016-08-22 Thread zhu tong
Hi, I am creating a bucket from the browser. The client is supposed to first send an OPTIONS (preflight) request, and then a PUT. But RGW replies with a "404 no such bucket" to the client's OPTIONS request, and the PUT operation therefore never happens.
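
A hedged way to reproduce the preflight outside the browser (endpoint, bucket and origin are placeholder values):

    # The browser's CORS preflight for a bucket that does not exist yet
    curl -i -X OPTIONS "http://rgw.example.com/testbucket" \
         -H "Origin: http://app.example.com" \
         -H "Access-Control-Request-Method: PUT"
    # Reported behaviour: RGW answers 404 here, so the browser never
    # issues the follow-up PUT that would create the bucket.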

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-08-22 Thread Christian Balzer
Hello, On Mon, 22 Aug 2016 20:34:54 +0100 Nick Fisk wrote: > > -Original Message- > > From: Christian Balzer [mailto:ch...@gol.com] > > Sent: 22 August 2016 03:00 > > To: 'ceph-users' > > Cc: Nick Fisk > > Subject: Re: [ceph-users] Ceph + VMware + Single Thread Performance > > > > >

Re: [ceph-users] Ceph pool snapshots

2016-08-22 Thread Gregory Farnum
On Sun, Aug 21, 2016 at 3:17 AM, Vimal Kumar wrote: > Hi, > > [ceph@ceph1 my-cluster]$ ceph -v > ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374) > [ceph@ceph1 my-cluster]$ rados -p mypool ls > hello.txt > [ceph@ceph1 my-cluster]$ rados -p mypool mksnap snap01 > created pool mypool s
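
For reference, the pool-snapshot commands involved (pool, object and snapshot names follow the quoted session; the read and rollback lines are illustrative additions):

    rados -p mypool mksnap snap01                            # create a pool snapshot
    rados -p mypool lssnap                                   # list pool snapshots
    rados -p mypool -s snap01 get hello.txt /tmp/hello.txt   # read an object as of snap01
    rados -p mypool rollback hello.txt snap01                # roll one object back to snap01
    rados -p mypool rmsnap snap01                            # remove the snapshot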

Re: [ceph-users] Export nfs-ganesha from standby MDS and last MON

2016-08-22 Thread Gregory Farnum
On Mon, Aug 22, 2016 at 2:05 PM, Brady Deetz wrote: > Is it an acceptable practice to configure my standby MDS and highest IP'd > MON as ganesha servers? > > Since MDS is supposedly primarily bound to a single core, despite having > many threads, would exporting really cause any issues if the nice

[ceph-users] Export nfs-ganesha from standby MDS and last MON

2016-08-22 Thread Brady Deetz
Is it an acceptable practice to configure my standby MDS and highest IP'd MON as ganesha servers? Since MDS is supposedly primarily bound to a single core, despite having many threads, would exporting really cause any issues if the niceness of the ganesha service was higher than the mds process?
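
If the concern is purely scheduling priority, one hedged option is a systemd drop-in that lowers ganesha's priority relative to the colocated MDS (unit name and value are assumptions):

    # /etc/systemd/system/nfs-ganesha.service.d/nice.conf
    [Service]
    Nice=10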

Re: [ceph-users] Understanding write performance

2016-08-22 Thread Gregory Farnum
Apart from what Christian said... On Thu, Aug 18, 2016 at 12:03 PM, lewis.geo...@innoscale.net wrote: > Hi, > So, I have really been trying to find information about this without > annoying the list, but I just can't seem to get any clear picture of it. I > was going to try to search the mailing

Re: [ceph-users] CephFS Fuse ACLs

2016-08-22 Thread Gregory Farnum
On Thu, Aug 18, 2016 at 2:51 PM, Brady Deetz wrote: > apparently fuse_default_permission and client_acl_type have to be in the > fstab entry instead of the ceph.conf. That doesn't sound familiar/right to me. Can you create a ticket at tracker.ceph.com? And I notice you have different whitespace
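
A sketch of what passing those options directly to ceph-fuse might look like, rather than via fstab or ceph.conf; the option names (note the plural fuse_default_permissions) and the client id are assumptions:

    ceph-fuse -n client.admin /mnt/cephfs \
        --client_acl_type=posix_acl \
        --fuse_default_permissions=0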

Re: [ceph-users] Signature V2

2016-08-22 Thread Gregory Farnum
On Thu, Aug 18, 2016 at 11:42 AM, jan hugo prins wrote: > I have been able to reproduce the error and create a debug log from the > failure. > I can't post the debug log here because there is sensitive information > in the debug log like access keys etc. > Where can I send this log for analysis? A

Re: [ceph-users] Merging CephFS data pools

2016-08-22 Thread Gregory Farnum
On Thu, Aug 18, 2016 at 12:21 AM, Burkhard Linke wrote: > Hi, > > the current setup for CephFS at our site uses two data pools due to > different requirements in the past. I want to merge these two pools now, > eliminating the second pool completely. > > I've written a small script to locate all f
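
Roughly the kind of script being described, as a hedged sketch: it assumes the old pool is named "olddata", that the directory layouts already point at the new pool, and that rewriting a file places the new copy in the directory's current pool. Untested, handle with care:

    find /mnt/cephfs -type f | while read -r f; do
        pool=$(getfattr -n ceph.file.layout.pool --only-values "$f" 2>/dev/null)
        if [ "$pool" = "olddata" ]; then
            tmp="$f.migrate.$$"
            cp -a "$f" "$tmp" && mv "$tmp" "$f"   # rewrite so the data lands in the new pool
        fi
    done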

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-08-22 Thread Nick Fisk
From: Alex Gorbachev [mailto:a...@iss-integration.com] Sent: 22 August 2016 20:30 To: Nick Fisk Cc: Wilhelm Redbrake ; Horace Ng ; ceph-users Subject: Re: [ceph-users] Ceph + VMware + Single Thread Performance On Sunday, August 21, 2016, Wilhelm Redbrake <w...@globe.de> wrote

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-08-22 Thread Nick Fisk
> -Original Message- > From: Christian Balzer [mailto:ch...@gol.com] > Sent: 22 August 2016 03:00 > To: 'ceph-users' > Cc: Nick Fisk > Subject: Re: [ceph-users] Ceph + VMware + Single Thread Performance > > > Hello, > > On Sun, 21 Aug 2016 09:57:40 +0100 Nick Fisk wrote: > > > > > > >

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-08-22 Thread Alex Gorbachev
> > > On Sunday, August 21, 2016, Wilhelm Redbrake wrote: > > Hi Nick, > I understand all of your technical improvements. > But why do you not use, for example, a simple Areca RAID controller with 8 > GB cache and BBU on top in every Ceph node? > Configure n times RAID 0 on the controller and enable

Re: [ceph-users] udev rule to set readahead on Ceph RBD's

2016-08-22 Thread Wido den Hollander
> On 22 August 2016 at 21:22, Nick Fisk wrote: > > > > -Original Message- > > From: Wido den Hollander [mailto:w...@42on.com] > > Sent: 22 August 2016 18:22 > > To: ceph-users ; n...@fisk.me.uk > > Subject: Re: [ceph-users] udev rule to set readahead on Ceph RBD's > > > > > On

Re: [ceph-users] udev rule to set readahead on Ceph RBD's

2016-08-22 Thread Nick Fisk
> -Original Message- > From: Wido den Hollander [mailto:w...@42on.com] > Sent: 22 August 2016 18:22 > To: ceph-users ; n...@fisk.me.uk > Subject: Re: [ceph-users] udev rule to set readahead on Ceph RBD's > > > > On 22 August 2016 at 15:17, Nick Fisk wrote: > > > > > > Hope it's useful

Re: [ceph-users] Help with systemd

2016-08-22 Thread K.C. Wong
Thank you for the suggestion and gist. I'll give that a try. -kc > On Aug 22, 2016, at 11:53 AM, Jeffrey Ollie wrote: > > I put the systemd service files that I use to map a RBD and mount the > filesystem before starting up PostgreSQL into the following gist. It's > probably not perfect, but it

Re: [ceph-users] Help with systemd

2016-08-22 Thread Jeffrey Ollie
I put the systemd service files that I use to map an RBD and mount the filesystem before starting up PostgreSQL into the following gist. It's probably not perfect, but it seems to work for me. Personally, I like using a native service to accomplish this rather than using fstab and the generator. ht
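
Since the gist URL is cut off above, here is a minimal sketch of the approach (pool, image, filesystem and paths are assumptions, not the gist's contents):

    # /etc/systemd/system/rbd-pgdata.service
    [Unit]
    Description=Map RBD image for PostgreSQL data
    Wants=network-online.target
    After=network-online.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/usr/bin/rbd map rbd/pgdata --id admin
    ExecStop=/usr/bin/rbd unmap /dev/rbd/rbd/pgdata

    [Install]
    WantedBy=multi-user.target

    # /etc/systemd/system/var-lib-pgsql.mount
    [Unit]
    Requires=rbd-pgdata.service
    After=rbd-pgdata.service
    Before=postgresql.service

    [Mount]
    What=/dev/rbd/rbd/pgdata
    Where=/var/lib/pgsql
    Type=xfs

    [Install]
    WantedBy=multi-user.target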

Re: [ceph-users] Understanding throughput/bandwidth changes in object store

2016-08-22 Thread Gregory Farnum
On Tue, Aug 16, 2016 at 6:00 AM, wrote: > Env: Ceph 10.2.2, 6 nodes, 96 OSDs, journals on ssd (8 per ssd), OSDs are > enterprise SATA disks, 50KB objects, dual 10 Gbe, 3 copies of each object > > I'm running some tests with COSbench to our object store, and I'm not really > understanding what I

[ceph-users] Help with systemd

2016-08-22 Thread K.C. Wong
Folks, I have some services that depend on RBD images getting mounted prior to service start-up. I am having a really hard time getting out of systemd dependency hell. * I create a run-once systemd service that basically does the rbd map operation, and set it to start after network.target, netw

Re: [ceph-users] CephFS: cached inodes with active-standby

2016-08-22 Thread Gregory Farnum
On Mon, Aug 15, 2016 at 5:02 AM, David wrote: > Hi All > > When I compare a 'ceph daemon mds.id perf dump mds' on my active MDS with > my standby-replay MDS, the inodes count on the standby is a lot less than > the active. I would expect to see a very similar number of inodes or have I > misunder
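
The comparison being described, made concrete (daemon names are examples):

    ceph daemon mds.a perf dump mds | grep '"inodes"'   # active
    ceph daemon mds.b perf dump mds | grep '"inodes"'   # standby-replay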

Re: [ceph-users] Recommended hardware for MDS server

2016-08-22 Thread Wido den Hollander
> On 22 August 2016 at 15:52, Christian Balzer wrote: > > > > Hello, > > first off, not a CephFS user, just installed it on a lab setup for fun. > That being said, I tend to read most posts here. > > And I do remember participating in similar discussions. > > On Mon, 22 Aug 2016 14:47:38

Re: [ceph-users] udev rule to set readahead on Ceph RBD's

2016-08-22 Thread Wido den Hollander
> On 22 August 2016 at 15:17, Nick Fisk wrote: > > > Hope it's useful to someone > > https://gist.github.com/fiskn/6c135ab218d35e8b53ec0148fca47bf6 > Thanks for sharing. Might this be worth adding to ceph-common? And is 16MB something we should want by default, or does this apply to yo
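
For reference, a guess at what such a rule might look like (the gist itself may differ); 16384 KB corresponds to the 16MB value discussed:

    # /etc/udev/rules.d/99-rbd-readahead.rules
    KERNEL=="rbd*", SUBSYSTEM=="block", ACTION=="add|change", ATTR{queue/read_ahead_kb}="16384"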

Re: [ceph-users] udev rule to set readahead on Ceph RBD's

2016-08-22 Thread Ilya Dryomov
On Mon, Aug 22, 2016 at 3:17 PM, Nick Fisk wrote: > Hope it's useful to someone > > https://gist.github.com/fiskn/6c135ab218d35e8b53ec0148fca47bf6 Make sure your kernel is 4.4 or later - there was a 2M readahead limit imposed by the memory management subsystem until 4.4. Thanks,
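
A quick way to check what actually took effect on a mapped device (device name is an example); on pre-4.4 kernels the value is capped around 2M regardless of the rule:

    cat /sys/block/rbd0/queue/read_ahead_kb
    blockdev --getra /dev/rbd0    # reported in 512-byte sectors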

Re: [ceph-users] RBD Watch Notify for snapshots

2016-08-22 Thread Nick Fisk
> -Original Message- > From: Ilya Dryomov [mailto:idryo...@gmail.com] > Sent: 22 August 2016 15:00 > To: Jason Dillaman > Cc: Nick Fisk ; ceph-users > Subject: Re: [ceph-users] RBD Watch Notify for snapshots > > On Fri, Jul 8, 2016 at 5:02 AM, Jason Dillaman wrote: > > librbd pseudo-a

Re: [ceph-users] udev rule to set readahead on Ceph RBD's

2016-08-22 Thread Nick Fisk
> -Original Message- > From: Ilya Dryomov [mailto:idryo...@gmail.com] > Sent: 22 August 2016 15:16 > To: Nick Fisk > Cc: ceph-users > Subject: Re: [ceph-users] udev rule to set readahead on Ceph RBD's > > On Mon, Aug 22, 2016 at 3:17 PM, Nick Fisk wrote: > > Hope it's useful to someone

Re: [ceph-users] RBD Watch Notify for snapshots

2016-08-22 Thread Nick Fisk
> -Original Message- > From: Ilya Dryomov [mailto:idryo...@gmail.com] > Sent: 22 August 2016 14:53 > To: Nick Fisk > Cc: Jason Dillaman ; ceph-users > > Subject: Re: [ceph-users] RBD Watch Notify for snapshots > > On Mon, Aug 22, 2016 at 3:13 PM, Nick Fisk wrote: > > Hi Jason, > > > >

Re: [ceph-users] Should hot pools for cache-tiering be replicated ?

2016-08-22 Thread Christian Balzer
On Mon, 22 Aug 2016 15:45:52 +0200 Florent B wrote: > On 08/22/2016 02:48 PM, Christian Balzer wrote: > > Hello, > > > > On Mon, 22 Aug 2016 14:33:51 +0200 Florent B wrote: > > > >> Hi, > >> > >> I'm looking for information about cache-tiering. > >> > > Have you searched the ML archives, includin

Re: [ceph-users] RBD Watch Notify for snapshots

2016-08-22 Thread Ilya Dryomov
On Fri, Jul 8, 2016 at 5:02 AM, Jason Dillaman wrote: > librbd pseudo-automatically handles this by flushing the cache to the > snapshot when a new snapshot is created, but I don't think krbd does the > same. If it doesn't, it would probably be a nice addition to the block > driver to support the

Re: [ceph-users] Recommended hardware for MDS server

2016-08-22 Thread Christian Balzer
Hello, first off, not a CephFS user, just installed it on a lab setup for fun. That being said, I tend to read most posts here. And I do remember participating in similar discussions. On Mon, 22 Aug 2016 14:47:38 +0200 Burkhard Linke wrote: > Hi, > > we are running CephFS with about 70TB data

Re: [ceph-users] RBD Watch Notify for snapshots

2016-08-22 Thread Ilya Dryomov
On Mon, Aug 22, 2016 at 3:13 PM, Nick Fisk wrote: > Hi Jason, > > Here is my initial attempt at using the Watch/Notify support to be able to > remotely fsfreeze a filesystem on a RBD. Please note this > was all very new to me and so there will probably be a lot of things that > haven't been done

[ceph-users] udev rule to set readahead on Ceph RBD's

2016-08-22 Thread Nick Fisk
Hope it's useful to someone https://gist.github.com/fiskn/6c135ab218d35e8b53ec0148fca47bf6

Re: [ceph-users] RBD Watch Notify for snapshots

2016-08-22 Thread Nick Fisk
Hi Jason, Here is my initial attempt at using the Watch/Notify support to be able to remotely fsfreeze a filesystem on an RBD. Please note this was all very new to me, so there will probably be a lot of things that haven't been done in the best way. https://github.com/fiskn/rbd_freeze I'm no
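
For orientation, the manual sequence such a tool automates, shown for a mapped and mounted RBD (mountpoint and image names are examples):

    fsfreeze -f /mnt/rbdvol                   # quiesce the filesystem on the client
    rbd snap create rbd/myimage@consistent1   # take the snapshot while frozen
    fsfreeze -u /mnt/rbdvol                   # thaw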

Re: [ceph-users] Should hot pools for cache-tiering be replicated ?

2016-08-22 Thread Christian Balzer
Hello, On Mon, 22 Aug 2016 14:33:51 +0200 Florent B wrote: > Hi, > > I'm looking for information about cache-tiering. > Have you searched the ML archives, including my "Cache tier operation clarifications" thread? > I can't find if pools used as "hot storage" in cache-tiering should be rep
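
For context, the usual Jewel-era sequence for putting a replicated hot pool in front of a base pool; pool names, PG counts and the erasure-code profile are examples:

    ceph osd pool create coldpool 256 256 erasure default
    ceph osd pool create hotpool 128 128 replicated
    ceph osd pool set hotpool size 3
    ceph osd tier add coldpool hotpool
    ceph osd tier cache-mode hotpool writeback
    ceph osd tier set-overlay coldpool hotpool
    ceph osd pool set hotpool hit_set_type bloom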

[ceph-users] Recommended hardware for MDS server

2016-08-22 Thread Burkhard Linke
Hi, we are running CephFS with about 70TB of data, > 5 million files and about 100 clients. The MDS is currently colocated on a storage box with 14 OSDs (12 HDD, 2 SSD). The box has two E5-2680 v3 CPUs and 128 GB RAM. CephFS runs fine, but it feels like the metadata operations may need more speed.

Re: [ceph-users] Fast Ceph a Cluster with PB storage

2016-08-22 Thread Александр Пивушков
Thank you very much for your answer! > Yes, I gathered that. > The question is, what servers between the Windows clients and the final > Ceph storage are you planning to use. > That I do not yet understand, though I believe the client can be connected directly to Ceph :) I will read

Re: [ceph-users] 2TB useable - small business - help appreciated

2016-08-22 Thread Robert Sander
On 01.08.2016 08:05, Christian Balzer wrote: >> With all the info provided is DRBD Pacemaker HA Cluster or even >> GlusterFS a better option? Yes. >> > No GlusterFS support for VMware as well last time I checked, only > interfaces via an additional NFS head again, so no advantage here. Last tim

[ceph-users] JSSDK API description is missing in ceph website

2016-08-22 Thread zhu tong
Hi, I am currently studying how to use Ceph's S3 interface from the browser. So far, it works fine. But I noticed that http://docs.ceph.com/docs/master/radosgw/s3/ does not have a description of the JavaScript SDK (JSSDK) API. I hope it becomes available someday. Thanks.

Re: [ceph-users] Fast Ceph a Cluster with PB storage

2016-08-22 Thread Christian Balzer
On Mon, 22 Aug 2016 10:18:51 +0300 Александр Пивушков wrote: > Hello, > Several answers below > > >Wednesday, 17 August 2016, 8:57 +03:00 from Christian Balzer : > > > >Hello, > > > >On Wed, 17 Aug 2016 09:27:30 +0500 Дробышевский, Владимир wrote: > > > >> Christian, > >> > >> thanks a lot for

Re: [ceph-users] Fast Ceph a Cluster with PB storage

2016-08-22 Thread Александр Пивушков
Hello, Several answers below >Wednesday, 17 August 2016, 8:57 +03:00 from Christian Balzer : > > >Hello, > >On Wed, 17 Aug 2016 09:27:30 +0500 Дробышевский, Владимир wrote: > >> Christian, >> >> thanks a lot for your time. Please see below. >> >> >> 2016-08-17 5:41 GMT+05:00 Christian Balzer < ch