[ceph-users] Fwd: [lca-announce] Call for Proposals for linux.conf.au 2018 in Sydney are open!

2017-07-02 Thread Tim Serong
It's that time of year again, folks! Please everyone go submit talks, or at least plan to attend this most excellent of F/OSS conferences. (I thought I might put in a proposal to run a ceph miniconf, unless anyone else was already thinking of doing that? If accepted, that would give us a whole d

Re: [ceph-users] Ceph and IPv4 -> IPv6

2017-07-02 Thread Simon Leinen
> I have it running the other way around. The RGW has IPv4 and IPv6, but
> the Ceph cluster is IPv6-only.
> RGW/librados talks to Ceph over IPv6 and handles client traffic on
> both protocols.
> No problem to run the RGW dual-stacked.
Just for the record, we've been doing exactly the same for se
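For anyone wanting to reproduce this setup, a rough ceph.conf sketch (hostnames, addresses and the port are hypothetical; assumes the civetweb frontend and that the host accepts IPv4-mapped connections on a wildcard IPv6 listener):

    [global]
    # cluster network is IPv6-only
    ms bind ipv6 = true
    mon host = [2001:db8::10]:6789, [2001:db8::11]:6789, [2001:db8::12]:6789

    [client.rgw.gateway1]
    # radosgw talks to the cluster over IPv6 via librados;
    # the frontend listens on a wildcard socket for client traffic
    rgw frontends = civetweb port=[::]:7480

This is only a sketch of the idea described above, not a tested configuration; the exact frontend/port syntax depends on the Ceph release.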

[ceph-users] Ceph upgrade kraken -> luminous without deploy

2017-07-02 Thread Marc Roos
I have updated a test cluster by just updating the rpm and issuing a ceph osd require-osd-release because it was mentioned in the status. Is there more you need to do?
- update the packages on all nodes
  sed -i 's/Kraken/Luminous/g' /etc/yum.repos.d/ceph.repo
  yum update
- then on each node f
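A minimal sketch of what a manual (non-ceph-deploy) Kraken-to-Luminous upgrade usually looks like; the restart order and systemd targets are assumptions, so check the Luminous release notes for the authoritative sequence:

    # on every node (repo file path as in the sed command above)
    sed -i 's/Kraken/Luminous/g' /etc/yum.repos.d/ceph.repo
    yum update -y

    # restart daemons one node at a time: monitors first, then OSDs
    systemctl restart ceph-mon.target
    systemctl restart ceph-osd.target

    # once every OSD is running luminous, pin the minimum release
    ceph osd require-osd-release luminous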

[ceph-users] Ceph Cluster with Deep Scrub Error

2017-07-02 Thread Hauke Homburg
Hello, I have a Ceph cluster with 5 Ceph servers, running under CentOS 7.2 and ceph 10.0.2.5. All OSDs run on a RAID6. In this cluster I have a deep scrub error: /var/log/ceph/ceph-osd.6.log-20170629.gz:389 .356391 7f1ac4c57700 -1 log_channel(cluster) log [ERR] : 1.129 deep-scrub 1 errors This L
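A sketch of the usual first diagnostic steps for a deep-scrub inconsistency (the PG id 1.129 is taken from the log line above; the list-inconsistent-obj command assumes jewel-era or newer tooling):

    # locate the inconsistent PG(s)
    ceph health detail | grep inconsistent

    # inspect which object/shard failed the scrub
    rados list-inconsistent-obj 1.129 --format=json-pretty

    # if it is a simple shard mismatch, ask the primary OSD to repair it
    ceph pg repair 1.129

Repair blindly trusting the primary copy is not always safe, so inspecting the inconsistent object first is generally worth the extra step.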

Re: [ceph-users] Snapshot cleanup performance impact on client I/O?

2017-07-02 Thread Gagandeep Arora
Hello Kenneth, To throttle the snapshot trimming transactions on OSDs, set "osd snap trim sleep" to a value greater than 0 (the default).
[global]
osd snap trim sleep = 1
# the above will cause the OSD to sleep for 1 second before submitting the next batch of snap trimming transactions.
Put it in your
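A sketch of applying the same throttle to already-running OSDs without a restart (the value mirrors the ceph.conf setting quoted above; on some releases injectargs may report that the change requires a restart to take effect):

    # inject the setting into all running OSDs
    ceph tell osd.* injectargs '--osd_snap_trim_sleep 1'

    # and keep it persistent in ceph.conf so it survives restarts
    [osd]
    osd snap trim sleep = 1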