[ceph-users] Mimic 13.2.1 released date?

2018-07-13 Thread Frank Yu
Hi there, Any plan for the release of 13.2.1? -- Regards Frank Yu

[ceph-users] IMPORTANT: broken luminous 12.2.6 release in repo, do not upgrade

2018-07-13 Thread Sage Weil
Hi everyone, tl;dr: Please avoid the 12.2.6 packages that are currently present on download.ceph.com. We will have a 12.2.7 published ASAP (probably Monday). If you do not use bluestore or erasure-coded pools, none of the issues affect you. Details: We built 12.2.6 and pushed it to the re
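(A quick, hedged sketch of how to confirm whether any daemon in a cluster is already on the affected release; `ceph versions` is available from Luminous onward:)

    # per-daemon-type release summary across the cluster
    ceph versions
    # installed package version on the local host
    ceph --version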

Re: [ceph-users] osd prepare issue device-mapper mapping

2018-07-13 Thread Jacob DeGlopper
Also, looking at your ceph-disk list output, the LVM is probably your root filesystem and cannot be wiped. If you'd like, send the output of the 'mount' and 'lvs' commands and you should be able to tell. -- jacob On 07/13/2018 03:42 PM, Jacob DeGlopper wrote: You have LVM data on /dev/sd
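(For reference, a minimal sketch of the check being suggested; device and LV names will of course differ:)

    # is the root filesystem on a device-mapper/LVM device?
    mount | grep ' / '
    # list logical volumes, their volume groups, and the devices backing them
    lvs -o lv_name,vg_name,devices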

Re: [ceph-users] osd prepare issue device-mapper mapping

2018-07-13 Thread Jacob DeGlopper
You have LVM data on /dev/sdb already; you will need to remove that before you can use ceph-disk on that device. Use the LVM commands 'lvs', 'vgs', and 'pvs' to list the logical volumes, volume groups, and physical volumes defined. Once you're sure you don't need the data, lvremove, vgremove,
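(A hedged sketch of that cleanup sequence, assuming /dev/sdb carries a volume group whose data is definitely no longer needed; <vg_name> is a placeholder:)

    pvs                        # find which VG sits on /dev/sdb
    lvremove <vg_name>         # removes every LV in that VG -- destroys data
    vgremove <vg_name>
    pvremove /dev/sdb
    wipefs -a /dev/sdb         # clear any leftover signatures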

[ceph-users] osd prepare issue device-mapper mapping

2018-07-13 Thread Satish Patel
I am installing Ceph on my lab box using ceph-ansible. I have two HDDs for OSDs and I am getting the following error on one of the OSDs; not sure what the issue is. [root@ceph-osd-01 ~]# ceph-disk prepare --cluster ceph --bluestore /dev/sdb ceph-disk: Error: Device /dev/sdb1 is in use by a device-mapper map
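(Not from the original thread, but a small sketch of how to see what is holding the partition:)

    lsblk /dev/sdb                      # show partitions and any dm children
    dmsetup ls                          # list active device-mapper maps
    ls /sys/block/sdb/sdb1/holders/     # which dm device claims /dev/sdb1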

Re: [ceph-users] Approaches for migrating to a much newer cluster

2018-07-13 Thread Brady Deetz
Just a thought: have you considered rbd replication? On Fri, Jul 13, 2018 at 9:30 AM r...@cleansafecloud.com < r...@cleansafecloud.com> wrote: > > Hello folks, > > We have an old active Ceph cluster on Firefly (v0.80.9) which we use for > OpenStack and have multiple live clients. We have been put
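(For context, a rough sketch of per-pool RBD mirroring setup with the rbd CLI; the pool name is an example, both sides need an rbd-mirror daemon, and journal-based mirroring needs image features far newer than Firefly, so this may not fit the old cluster:)

    rbd mirror pool enable volumes pool
    rbd mirror pool peer add volumes client.remote@remote-cluster
    rbd mirror pool status volumes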

Re: [ceph-users] MDS damaged

2018-07-13 Thread Alessandro De Salvo
Hi Dan, you're right, I was following the mimic instructions (which indeed worked on my mimic testbed), but luminous is different and I missed the additional step. Works now, thanks! Alessandro On 13/07/18 17:51, Dan van der Ster wrote: On Fri, Jul 13, 2018 at 4:07 PM Alessandro
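(If memory serves, the extra step on luminous is that lowering max_mds does not stop the surplus rank by itself; a hedged sketch, with the fs name and rank as examples:)

    ceph fs set cephfs max_mds 1
    ceph mds deactivate cephfs:1     # explicitly stop rank 1 on luminous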

Re: [ceph-users] MDS damaged

2018-07-13 Thread Dan van der Ster
On Fri, Jul 13, 2018 at 4:07 PM Alessandro De Salvo wrote: > However, I cannot reduce the number of MDSes anymore; I used to do > that with e.g.: > > ceph fs set cephfs max_mds 1 > > Trying this with 12.2.6 has apparently no effect, I am left with 2 > active MDSes. Is this another bug? Are yo

[ceph-users] Approaches for migrating to a much newer cluster

2018-07-13 Thread r...@cleansafecloud.com
Hello folks, We have an old active Ceph cluster on Firefly (v0.80.9) which we use for OpenStack and have multiple live clients. We have been put in a position whereby we need to move to a brand new cluster under a new OpenStack deployment. The new cluster is on Luminous (v12.2.5). Now we obvi
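(One common approach for moving images between unrelated clusters is export/import over a pipe; a rough sketch, with pool/image names as placeholders and snapshots still needed for a consistent copy of in-use images:)

    rbd -c old-cluster.conf export volumes/myimage - \
      | rbd -c new-cluster.conf import - volumes/myimage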

Re: [ceph-users] MDS damaged

2018-07-13 Thread Alessandro De Salvo
Thanks all, 100..inode, mds_snaptable and 1..inode were not corrupted, so I left them as they were. I have re-injected all the bad objects, for all MDSes (2 per filesystem) and all filesystems I had (2), and after setting the MDSes as repaired my filesystems are back! Howeve
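(For readers following along, the commands behind that last step look roughly like this; the object, pool and fs names are illustrative, not taken from the thread:)

    rados -p cephfs_metadata put <object-name> ./recovered_object
    ceph mds repaired cephfs:0       # clear the 'damaged' flag for rank 0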

Re: [ceph-users] MDS damaged

2018-07-13 Thread Yan, Zheng
On Thu, Jul 12, 2018 at 11:39 PM Alessandro De Salvo wrote: > > Some progress, and more pain... > > I was able to recover the 200. using the ceph-objectstore-tool for > one of the OSDs (all identical copies) but trying to re-inject it just with > rados put was giving no error while the g
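(A hedged sketch of pulling an object copy straight out of a stopped OSD with ceph-objectstore-tool; the OSD id, path, pgid and object name are examples:)

    systemctl stop ceph-osd@12
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 \
        --pgid <pgid> <object-name> get-bytes > recovered_object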

Re: [ceph-users] MDS damaged

2018-07-13 Thread Adam Tygart
Bluestore. On Fri, Jul 13, 2018, 05:56 Dan van der Ster wrote: > Hi Adam, > > Are your osds bluestore or filestore? > > -- dan > > > On Fri, Jul 13, 2018 at 7:38 AM Adam Tygart wrote: > > > > I've hit this today with an upgrade to 12.2.6 on my backup cluster. > > Unfortunately there were issues

[ceph-users] Ceph balancer module algorithm learning

2018-07-13 Thread Hunter zhao
Hi all, I am now looking at the mgr balancer module. How do the two algorithms in it do their calculations? I have only just started using Ceph and my code-reading ability is very poor. Can anyone help explain how the data-balance score is calculated? Especially `def calc_eval()` and `def calc_stats()`. Best
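(The scoring internals live in src/pybind/mgr/balancer/module.py, but the scores themselves can be inspected from the CLI; a small sketch, subcommands as of luminous/mimic:)

    ceph balancer status
    ceph balancer eval               # current distribution score (lower is better)
    ceph balancer optimize myplan
    ceph balancer eval myplan        # score the proposed plan before executing it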

Re: [ceph-users] upgrading to 12.2.6 damages cephfs (crc errors)

2018-07-13 Thread Dan van der Ster
The problem seems similar to https://tracker.ceph.com/issues/23871, which was fixed in mimic but not luminous: fe5038c7f9 osd/PrimaryLogPG: clear data digest on WRITEFULL if skip_data_digest -- dan On Fri, Jul 13, 2018 at 12:45 PM Dan van der Ster wrote: > > Hi, > > Following the reports on ceph-

Re: [ceph-users] mds daemon damaged

2018-07-13 Thread Dan van der Ster
Hi Kevin, Are your OSDs bluestore or filestore? -- dan On Thu, Jul 12, 2018 at 11:30 PM Kevin wrote: > > Sorry for the long posting but trying to cover everything > > I woke up to find my cephfs filesystem down. This was in the logs > > 2018-07-11 05:54:10.398171 osd.1 [ERR] 2.4 full-object rea
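(For anyone checking the same thing, the objectstore backend of an OSD can be queried directly; a small sketch:)

    ceph osd metadata 1 | grep osd_objectstore     # single OSD
    ceph osd metadata | grep -c bluestore          # rough count across all OSDs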

Re: [ceph-users] MDS damaged

2018-07-13 Thread Dan van der Ster
Hi Adam, Are your osds bluestore or filestore? -- dan On Fri, Jul 13, 2018 at 7:38 AM Adam Tygart wrote: > > I've hit this today with an upgrade to 12.2.6 on my backup cluster. > Unfortunately there were issues with the logs (in that the files > weren't writable) until after the issue struck.

Re: [ceph-users] Bluestore and number of devices

2018-07-13 Thread Kevin Olbrich
You can keep the same layout as before. Most people place the DB and WAL combined in one partition (similar to the journal on filestore). Kevin 2018-07-13 12:37 GMT+02:00 Robert Stanford : > > I'm using filestore now, with 4 data devices per journal device. > > I'm confused by this: "BlueStore manages either
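(For illustration, a hedged sketch of creating that combined layout with ceph-volume; device names are examples, and with no separate --block.wal the WAL simply lives on the DB device:)

    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/sdc1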

[ceph-users] upgrading to 12.2.6 damages cephfs (crc errors)

2018-07-13 Thread Dan van der Ster
Hi, Following the reports on ceph-users about damaged cephfs after updating to 12.2.6 I spun up a 1 node cluster to try the upgrade. I started with two OSDs on 12.2.5, wrote some data. Then I restarted the OSDs one by one while continuing to write to the cephfs mountpoint. Then I restarted the (si

[ceph-users] Bluestore and number of devices

2018-07-13 Thread Robert Stanford
I'm using filestore now, with 4 data devices per journal device. I'm confused by this: "BlueStore manages either one, two, or (in certain cases) three storage devices." ( http://docs.ceph.com/docs/luminous/rados/configuration/bluestore-config-ref/ ) When I convert my journals to bluestore, wil

Re: [ceph-users] OSD tuning no longer required?

2018-07-13 Thread Robert Stanford
This is what leads me to believe other settings are being referred to as well: https://ceph.com/community/new-luminous-rados-improvements/ "There are dozens of documents floating around with long lists of Ceph configurables that have been tuned for optimal performance on specific hardware or fo

[ceph-users] [Ceph Admin & Monitoring] Inkscope is back

2018-07-13 Thread ghislain.chevalier
Hi, Inkscope, a Ceph admin and monitoring GUI, is still alive. It can now be installed with an Ansible playbook. https://github.com/inkscope/inkscope-ansible Best regards - - - - - - - - - - - - - - - - - Ghislain Chevalier ORANGE/IMT/OLS/DIESE/LCP/DDSD Software-Defined Storage Architect +3329

Re: [ceph-users] Increase queue_depth in KVM

2018-07-13 Thread Damian Dabrowski
Konstantin, Thanks for the explanation. But unfortunately, upgrading qemu is nearly impossible in my case. So is there something else I can do, or do I have to accept that write IOPS are 8x lower inside KVM than outside KVM? :| On Fri, 13 Jul 2018 at 04:22, Konstantin Shalygin wrote

Re: [ceph-users] mds daemon damaged

2018-07-13 Thread Oliver Freyermuth
Hi Kevin, On 13.07.2018 at 04:21, Kevin wrote: > That thread looks exactly like what I'm experiencing. Not sure why my > repeated googles didn't find it! Maybe the thread was still too "fresh" for Google's indexing. > > I'm running 12.2.6 and CentOS 7 > > And yes, I recently upgraded from j