[ceph-users] ceph mds/db transfer

2019-01-28 Thread renjianxinlover
Hi, recently I intend to make a big adaptation to our local small-scale Ceph cluster. The job mainly includes two parts: (1) MDS metadata: switch the metadata storage medium to SSD. (2) OSD BlueStore WAL: switch the WAL storage medium to SSD. Now we are doing some research and tests
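
A minimal sketch of what the two parts could look like, assuming ceph-volume (Luminous or later); the device paths, CRUSH rule name, and pool name are placeholders, and pinning metadata via a device-class rule is only one possible approach:

    # Part (2): create a BlueStore OSD with data on an HDD and the WAL
    # (plus, optionally, the RocksDB) on a faster SSD/NVMe partition.
    ceph-volume lvm create --bluestore \
        --data /dev/sdb \
        --block.wal /dev/nvme0n1p1 \
        --block.db  /dev/nvme0n1p2

    # Part (1): one way to move CephFS metadata to SSD is to give the
    # metadata pool a CRUSH rule that only selects SSD-class devices.
    ceph osd crush rule create-replicated ssd-rule default host ssd
    ceph osd pool set cephfs_metadata crush_rule ssd-rule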

Re: [ceph-users] Slow requests from bluestore osds

2019-01-28 Thread Marc Schöchlin
Hello cephers, as described, we also see the slow requests in our setup. We recently updated from Ceph 12.2.4 to 12.2.10, updated Ubuntu 16.04 to the latest patch level (with kernel 4.15.0-43), and applied Dell firmware 2.8.0. On 12.2.5 (before updating the cluster) we had in a frequency of
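
Not part of the original message, but for readers hitting the same symptom, the usual first steps for narrowing down slow requests look roughly like this; osd.12 is a placeholder for an affected OSD:

    # Which OSDs are currently reporting slow requests?
    ceph health detail | grep -i slow

    # On the host of an affected OSD, look at the tracked operations.
    ceph daemon osd.12 ops                  # ops currently in flight
    ceph daemon osd.12 dump_historic_ops    # recently completed (slow) ops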

Re: [ceph-users] Bucket logging howto

2019-01-28 Thread Casey Bodley
On Sat, Jan 26, 2019 at 6:57 PM Marc Roos wrote: > From the owner account of the bucket I am trying to enable logging, but I don't get how this should work. I see that s3:PutBucketLogging is supported, so I guess this should work. How do you enable it? And how do you access the log?
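
For reference, the raw S3 call in question looks roughly like this with the AWS CLI; the endpoint, bucket names, and prefix are invented, and whether the installed RGW version actually honours the configuration is exactly the open question in this thread:

    aws --endpoint-url http://rgw.example.com s3api put-bucket-logging \
        --bucket mybucket \
        --bucket-logging-status '{"LoggingEnabled": {"TargetBucket": "logbucket", "TargetPrefix": "mybucket-access-"}}'

    # Read the stored logging configuration back
    aws --endpoint-url http://rgw.example.com s3api get-bucket-logging --bucket mybucket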

Re: [ceph-users] Questions about using existing HW for PoC cluster

2019-01-28 Thread Will Dennis
The hope is to be able to provide scale-out storage that will be performant enough to use as a primary FS-based data store for research data (right now we mount via NFS on our cluster nodes; we may do that with Ceph, or perhaps do native CephFS access from the cluster nodes). Right now I’m still

[ceph-users] ceph-fs crashed after upgrade to 13.2.4

2019-01-28 Thread Ansgar Jazdzewski
Hi folks, we need some help with our CephFS: all MDS daemons keep crashing.

starting mds.mds02 at -
terminate called after throwing an instance of 'ceph::buffer::bad_alloc'
  what():  buffer::bad_alloc
*** Caught signal (Aborted) **
 in thread 7f542d825700 thread_name:md_log_replay
ceph version 13.2.4
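
Not an answer to the crash itself, but since the crashing thread is md_log_replay, a common first diagnostic step is to inspect (and back up) the MDS journal with cephfs-journal-tool; "cephfs" below stands for the actual filesystem name, and the syntax should be checked against the 13.2 documentation:

    # Report whether the journal of rank 0 is readable or damaged
    cephfs-journal-tool --rank=cephfs:0 journal inspect

    # Take a backup of the journal before attempting any recovery steps
    cephfs-journal-tool --rank=cephfs:0 journal export backup.bin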

Re: [ceph-users] Questions about using existing HW for PoC cluster

2019-01-28 Thread Willem Jan Withagen
On 28-1-2019 02:56, Will Dennis wrote: I mean to use CephFS on this PoC; the initial use would be to back up an existing ZFS server with ~43 TB of data (I may have to limit the backed-up data depending on how much capacity I can get out of the OSD servers) and then share it out via NFS as a read-only
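
As an illustration of the "share out via NFS" part, a kernel-mounted CephFS can be re-exported read-only by a stock NFS server; the monitor address, credentials, and client network below are placeholders, and NFS-Ganesha with the CEPH FSAL is the other common route:

    # Mount CephFS on the export host
    mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret,ro

    # /etc/exports entry re-exporting the mount read-only over NFS
    /mnt/cephfs  10.0.0.0/24(ro,sync,no_subtree_check)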

Re: [ceph-users] krbd reboot hung

2019-01-28 Thread Jason Dillaman
On Mon, Jan 28, 2019 at 4:48 AM Gao, Wenjun wrote: > The "rbdmap" unit needs /etc/ceph/rbdmap and /etc/fstab to be configured for each volume; what if the map and mount are done by applications instead of the systemd unit? See, we don't write each volume's info into /etc/ceph/rbdmap or /etc/fstab, and if

Re: [ceph-users] Commercial support

2019-01-28 Thread Robert Sander
Hi, on 23.01.19 at 23:28 Ketil Froyn wrote: > How is the commercial support for Ceph? At Heinlein Support we also offer independent Ceph consulting. We concentrate on the German-speaking regions of Europe. Regards -- Robert Sander, Heinlein Support GmbH, Schwedter Str. 8/9b, 10119

Re: [ceph-users] krbd reboot hung

2019-01-28 Thread Gao, Wenjun
The "rbdmap" unit needs /etc/ceph/rbdmap and /etc/fstab to be configured for each volume; what if the map and mount are done by applications instead of the systemd unit? See, we don't write each volume's info into /etc/ceph/rbdmap or /etc/fstab, and if the "rbdmap" systemd unit is stopped unexpectedly, not by
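
For context, the manual equivalent of what the rbdmap unit does for a single volume is roughly the following (pool, image, client name, and mount point are placeholders); the question in this thread is who performs the reverse steps at shutdown when an application, not the unit, did the mapping:

    # What an application might do itself instead of the rbdmap unit
    DEV=$(rbd map mypool/myimage --id myclient)
    mount "$DEV" /mnt/myimage

    # ...and the reverse order required before networking stops at shutdown
    umount /mnt/myimage
    rbd unmap mypool/myimage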

Re: [ceph-users] cephfs kernel client instability

2019-01-28 Thread Martin Palma
Upgrading to 4.15.0-43-generic fixed the problem. Best, Martin. On Fri, Jan 25, 2019 at 9:43 PM Ilya Dryomov wrote: > On Fri, Jan 25, 2019 at 9:40 AM Martin Palma wrote: >>> Do you see them repeating every 30 seconds? >> yes: Jan 25 09:34:37 sdccgw01 kernel:

Re: [ceph-users] RBD client hangs

2019-01-28 Thread Ilya Dryomov
On Mon, Jan 28, 2019 at 7:31 AM ST Wong (ITSC) wrote: >> That doesn't appear to be an error -- that's just stating that it found a dead client that was holding the exclusive-lock, so it broke the dead client's lock on the image (by blacklisting the client). > As there is only 1 RBD
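
For anyone debugging a similar message, these are the usual commands for checking who holds the exclusive lock and which clients ended up on the blacklist (pool and image names are placeholders):

    # Show active watchers on the image
    rbd status mypool/myimage

    # Show the current lock holder (the exclusive lock appears here)
    rbd lock ls mypool/myimage

    # List clients blacklisted after a broken lock
    ceph osd blacklist ls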