Re: [ceph-users] Upgrading ceph and mapped rbds

2018-04-03 Thread Konstantin Shalygin
The VMs are XenServer VMs with virtual disks saved on the NFS server which has the RBD mounted … So there is no migration from my POV, as there is no second storage to migrate to ... All your pain is self-inflicted. Just FYI, clients are not interrupted when you upgrade Ceph. Client will be

Re: [ceph-users] how the files in /var/lib/ceph/osd/ceph-0 are generated

2018-04-03 Thread Jeffrey Zhang
Btw, I am using ceph-volume. I just tested ceph-disk; in that case, the ceph-0 folder is mounted from /dev/sdb1. So does the tmpfs mount only happen when using ceph-volume? How does it work? On Wed, Apr 4, 2018 at 9:29 AM, Jeffrey Zhang <zhang.lei.fly+ceph-us...@gmail.com> wrote: > I am testing ceph Luminous,
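
A minimal sketch of how ceph-volume produces those files at activation time, assuming a Luminous bluestore OSD (the device path below is illustrative): it mounts a tmpfs at the OSD directory and regenerates the contents from metadata stored in the bluestore label on the block device, which is why nothing in that directory needs to survive a reboot.

    # roughly what "ceph-volume lvm activate" does for a bluestore OSD
    mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
    # prime-osd-dir rewrites keyring, whoami, fsid, etc. from the
    # label stored on the device (device path is illustrative)
    ceph-bluestore-tool prime-osd-dir \
        --dev /dev/ceph-vg/osd-block-0 \
        --path /var/lib/ceph/osd/ceph-0

ceph-disk, by contrast, keeps those files on a real data partition, hence the /dev/sdb1 mount above.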

[ceph-users] how the files in /var/lib/ceph/osd/ceph-0 are generated

2018-04-03 Thread Jeffrey Zhang
I am testing ceph Luminous, the environment is - centos 7.4 - ceph Luminous (Ceph official repo) - ceph-deploy 2.0 - bluestore + separate wal and db. I found the ceph osd folder `/var/lib/ceph/osd/ceph-0` is mounted from tmpfs. But where do the files in that folder come from? like `keyring`,

Re: [ceph-users] Instrumenting RBD IO

2018-04-03 Thread Jason Dillaman
You might want to take a look at the Zipkin tracing hooks that are (semi)integrated into Ceph [1]. The hooks are disabled by default in release builds so you would need to rebuild Ceph yourself and then enable tracing via the 'rbd_blkin_trace_all = true' configuration option [2]. [1]
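
For reference, the option from [2] goes in the client-side ceph.conf and only takes effect on a build compiled with the blkin/Zipkin hooks enabled:

    [client]
    rbd_blkin_trace_all = true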

[ceph-users] Ceph Developer Monthly - April 2018

2018-04-03 Thread Leonardo Vaz
Hey Cephers, This is just a friendly reminder that the next Ceph Developer Monthly meeting is coming up: http://wiki.ceph.com/Planning If you have work that you're doing that is feature work, significant backports, or anything you would like to discuss with the core team, please add it to the

Re: [ceph-users] librados python pool alignment size write failures

2018-04-03 Thread Kevin Hrpcek
Thanks for the input, Greg. We've submitted the patch to the Ceph GitHub repo: https://github.com/ceph/ceph/pull/21222 Kevin On 04/02/2018 01:10 PM, Gregory Farnum wrote: On Mon, Apr 2, 2018 at 8:21 AM Kevin Hrpcek wrote:
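
For context, a minimal sketch of the constraint being patched around, using the librados Python bindings: writes to a pool that requires alignment (e.g. an erasure-coded pool) must use offsets and lengths that are multiples of the pool's alignment size. The pool name, object name, and 64 KiB alignment below are illustrative assumptions; the PR adds bindings for querying the real value.

    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('ecpool')  # illustrative EC pool name

    ALIGN = 64 * 1024        # assumed alignment size for this pool
    data = b'x' * ALIGN
    # offset and length are both multiples of ALIGN, so this succeeds;
    # an unaligned write to such a pool fails from Python
    ioctx.write('obj-0', data, 0)

    ioctx.close()
    cluster.shutdown()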

[ceph-users] Instrumenting RBD IO

2018-04-03 Thread Alex Gorbachev
I was wondering if there is a mechanism to instrument an RBD workload to elucidate what takes place on the OSDs, to troubleshoot performance issues better. Currently, we can issue the RBD IO, such as via fio, and observe just the overall performance. One needs to guess which OSDs that hits and try to
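
Short of full tracing, one way to take the guesswork out of "which OSDs does this hit" is to map an image's backing objects by hand (pool and object names below are illustrative):

    # list a few of the image's backing objects
    rados -p rbd ls | grep rbd_data | head -3
    # ask the cluster which PG and OSDs a given object maps to
    ceph osd map rbd rbd_data.101674b0dc51.0000000000000000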

Re: [ceph-users] Upgrading ceph and mapped rbds

2018-04-03 Thread Götz Reinicke
> On 03.04.2018 at 13:31, Konstantin Shalygin wrote: >> and true the VMs have to be shut down/server rebooted > That is not necessary. Just migrate the VM. Hi, The VMs are XenServer VMs with virtual disks saved on the NFS server which has the RBD mounted … So there is no

Re: [ceph-users] What do you use to benchmark your rgw?

2018-04-03 Thread Mohamad Gebai
On 03/28/2018 11:11 AM, Mark Nelson wrote: > Personally I usually use a modified version of Mark Seger's getput > tool here: > > https://github.com/markhpc/getput/tree/wip-fix-timing > > The difference between this version and upstream is primarily to make > getput more accurate/useful when using

Re: [ceph-users] Upgrading ceph and mapped rbds

2018-04-03 Thread Konstantin Shalygin
> and true the VMs have to be shut down/server rebooted That is not necessary. Just migrate the VM. k

Re: [ceph-users] Upgrading ceph and mapped rbds

2018-04-03 Thread Götz Reinicke
Hi Robert, > On 29.03.2018 at 10:27, Robert Sander wrote: > On 28.03.2018 11:36, Götz Reinicke wrote: >> My question is: How to proceed with the servers which map the RBDs? > Do you intend to upgrade the kernels on these RBD clients acting as NFS servers?
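
A quick way to see what the mapping clients currently run before planning kernel upgrades, assuming the monitors are already on Luminous:

    # reports the releases/features of all connected clients, including
    # kernel RBD clients, so you can tell what an upgrade would strand
    ceph features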