Re: [ceph-users] iSCSI over RBD

2018-01-19 Thread Mike Christie
On 01/19/2018 02:12 PM, Steven Vacaroaia wrote: > Hi Joshua, > > I was under the impression that kernel 3.10.0-693 will work with iscsi > That kernel works with RHCS 2.5 and below. You need the rpms from that or the matching upstream releases. Besides trying to dig out the versions and matchin
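A rough way to confirm what is actually running on the gateway node (a sketch; the package names below are the usual ceph-iscsi gateway components and may differ per distro/repo):

  # kernel currently booted on the iSCSI gateway
  uname -r
  # versions of the gateway stack installed from the repo in use
  rpm -q tcmu-runner ceph-iscsi-config ceph-iscsi-cli python-rtslib targetcli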

Re: [ceph-users] Removing cache tier for RBD pool

2018-01-19 Thread Mike Lovell
On Tue, Jan 16, 2018 at 9:25 AM, Jens-U. Mozdzen wrote: > Hello Mike, > > Quoting Mike Lovell: > >> On Mon, Jan 8, 2018 at 6:08 AM, Jens-U. Mozdzen wrote: >> >>> Hi *, >>> [...] >>> 1. Does setting the cache mode to "forward" lead to the above situation of >>> remaining locks on hot-storage pool
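For reference, the cache-tier removal sequence from the Ceph docs looks roughly like this (a sketch with placeholder pool names hot-storage/cold-storage; the "forward" step is exactly where the lingering locks discussed in this thread show up):

  # stop caching new writes, then flush/evict what is already cached
  ceph osd tier cache-mode hot-storage forward --yes-i-really-mean-it
  rados -p hot-storage cache-flush-evict-all
  # once the cache pool is empty, detach it from the base pool
  ceph osd tier remove-overlay cold-storage
  ceph osd tier remove cold-storage hot-storage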

Re: [ceph-users] ceph df shows 100% used

2018-01-19 Thread QR
'MAX AVAIL' in the 'ceph df' output represents the amount of data that can be used before the first OSD becomes full, and not the sum of all free space across a set of OSDs. Original message -- From: Webert de Souza Lima To: ceph-users Sent: January 19, 2018 (
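A quick way to see which OSD is bounding MAX AVAIL (a sketch; the %USE column index in 'ceph osd df' differs between releases, so adjust the sort key):

  # the fullest OSD under a pool's CRUSH root effectively caps that pool's MAX AVAIL
  ceph osd df | sort -rnk8 | head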

Re: [ceph-users] iSCSI over RBD

2018-01-19 Thread Brady Deetz
I too experienced this with that kernel as well as the elrepo kernel. On Jan 19, 2018 2:13 PM, "Steven Vacaroaia" wrote: Hi Joshua, I was under the impression that kernel 3.10.0-693 will work with iscsi Unfortunately I still cannot create a disk because qfull_time_out is not supported What

Re: [ceph-users] iSCSI over RBD

2018-01-19 Thread Steven Vacaroaia
Hi Joshua, I was under the impression that kernel 3.10.0-693 will work with iSCSI. Unfortunately I still cannot create a disk because qfull_time_out is not supported. What am I missing / doing wrong? 2018-01-19 15:06:45,216 INFO [lun.py:601:add_dev_to_lio()] - (LUN.add_dev_to_lio) Adding i
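A rough way to check whether the running kernel's LIO code knows about the attribute at all (an assumption-laden sketch: it relies on the LIO configfs layout and on at least one backstore already existing under /sys/kernel/config/target):

  # empty output suggests the kernel predates qfull_time_out support
  find /sys/kernel/config/target -name qfull_time_out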

[ceph-users] Missing udev rule for FC disks (Re: mkjournal error creating journal ... : (13) Permission denied)

2018-01-19 Thread Fulvio Galeazzi
Hello, apologies for reviving an old thread, but I just wasted another full day because I had forgotten about this issue... To recap: udev rules nowadays do not (at least in my case, with disks served via Fibre Channel) create the /dev/disk/by-partuuid links that ceph-disk expects.
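For anyone hitting the same thing, a sketch of a local rule that restores the links, modelled on systemd's 60-persistent-storage.rules (the file name is arbitrary and the match keys may need adjusting for FC/multipath devices):

  # /etc/udev/rules.d/61-by-partuuid-fc.rules (new file): recreate the by-partuuid symlinks ceph-disk expects
  ACTION=="add|change", SUBSYSTEM=="block", ENV{DEVTYPE}=="partition", IMPORT{builtin}="blkid", ENV{ID_PART_ENTRY_UUID}=="?*", SYMLINK+="disk/by-partuuid/$env{ID_PART_ENTRY_UUID}"

  # then reload the rules and re-trigger the devices
  udevadm control --reload
  udevadm trigger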

Re: [ceph-users] Migrating to new pools

2018-01-19 Thread Jens-U. Mozdzen
Hi *, Quoting ceph-users-requ...@lists.ceph.com: Hi *, facing the problem of reducing the number of PGs for a pool, I've found various information and suggestions, but no "definitive guide" to handling pool migration with Ceph 12.2.x. This seems to be a fairly common problem when having to deal wit
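One approach that usually comes up, as a sketch only (placeholder pool names; rados cppool does not copy snapshots and clients must be stopped, so for RBD pools a per-image "rbd export | rbd import" is often preferred):

  ceph osd pool create rbd_new 256 256          # pick a PG count that matches the new sizing
  ceph osd pool application enable rbd_new rbd  # Luminous requires tagging the application
  rados cppool rbd_old rbd_new                  # flat object copy while the pool is idle
  ceph osd pool rename rbd_old rbd_retired
  ceph osd pool rename rbd_new rbd_old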

Re: [ceph-users] Ubuntu 17.10 or Debian 9.3 + Luminous = random OS hang ?

2018-01-19 Thread Youzhong Yang
I don't think it's a hardware issue. All the hosts are VMs. By the way, using the same set of VMware hypervisors, I switched back to Ubuntu 16.04 last night; so far so good, no freezes. On Fri, Jan 19, 2018 at 8:50 AM, Daniel Baumann wrote: > Hi, > > On 01/19/18 14:46, Youzhong Yang wrote: > > Just

Re: [ceph-users] Ubuntu 17.10 or Debian 9.3 + Luminous = random OS hang ?

2018-01-19 Thread Daniel Baumann
Hi, On 01/19/18 14:46, Youzhong Yang wrote: > Just wondering if anyone has seen the same issue, or if it's just me. We're using Debian with our own backported kernels and Ceph; it works rock solid. What you're describing sounds more like a hardware issue to me. If you don't fully "trust"/have confidenc

Re: [ceph-users] Ubuntu 17.10 or Debian 9.3 + Luminous = random OS hang ?

2018-01-19 Thread David Turner
The freeze is likely a kernel panic. Try testing different versions of the kernel. On Fri, Jan 19, 2018, 8:46 AM Youzhong Yang wrote: > One month ago when I first started evaluating ceph, I chose Debian 9.3 as > the operating system. I saw random OS hang so I gave up and switched to > Ubuntu 16.
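To confirm it really is a panic rather than a silent hang, two things worth trying (a sketch; assumes persistent journald storage and a serial console that the hypervisor records):

  # kernel messages from the previous boot, if the journal survived the reset
  journalctl -k -b -1 | tail -n 100
  # or send the console to a serial port the hypervisor logs
  # (append to the kernel command line): console=ttyS0,115200n8 console=tty0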

[ceph-users] Ubuntu 17.10 or Debian 9.3 + Luminous = random OS hang ?

2018-01-19 Thread Youzhong Yang
One month ago, when I first started evaluating Ceph, I chose Debian 9.3 as the operating system. I saw random OS hangs, so I gave up and switched to Ubuntu 16.04. Everything works well using Ubuntu 16.04. Yesterday I tried Ubuntu 17.10, and again I saw random OS hangs, no matter whether it's a mon, mgr, osd, or rg

Re: [ceph-users] ghost degraded objects

2018-01-19 Thread Sage Weil
On Fri, 19 Jan 2018, Ugis wrote: > Running Luminous 12.2.2, I noticed strange behavior lately. > When, for example, setting "ceph osd out X", closer to the rebalancing > end "degraded" objects still show up, but in the "pgs:" section of ceph -s > no degraded pgs are still recovering, just remapped and no d

Re: [ceph-users] QEMU - rbd cache - inconsistent documentation?

2018-01-19 Thread Jason Dillaman
That big fat warning was related to the note under your second quote: "Note: Prior to QEMU v2.4.0, if you explicitly set RBD Cache settings in the Ceph configuration file, your Ceph settings override the QEMU cache settings." Short story long, starting with QEMU v2.4.0, your QEMU cache settings 1
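In other words, with QEMU >= 2.4.0 the drive's cache mode is what counts; a minimal sketch of a raw RBD drive with writeback caching (pool/image and auth id are placeholders):

  # cache=writeback enables the librbd cache for this drive; cache=none disables it
  qemu-system-x86_64 -m 1024 \
    -drive format=raw,cache=writeback,file=rbd:rbd/vm-disk:id=libvirt:conf=/etc/ceph/ceph.conf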

[ceph-users] QEMU - rbd cache - inconsistent documentation?

2018-01-19 Thread Wolfgang Lendl
hi, I'm a bit confused after reading the official Ceph documentation regarding QEMU and rbd caching. http://docs.ceph.com/docs/master/rbd/qemu-rbd/?highlight=qemu There's a big fat warning: "Important: If you set rbd_cache=true, you must set cache=writeback or risk data loss. Without cache=writeback, Q

Re: [ceph-users] ceph df shows 100% used

2018-01-19 Thread Webert de Souza Lima
While it seemed to be solved yesterday, today the %USED has grown a lot again. See: ~# ceph osd df tree http://termbin.com/0zhk ~# ceph df detail http://termbin.com/thox 94% USED while there is about 21TB worth of data; size = 2 means ~42TB RAW usage, but the OSDs in that root sum to ~70TB availabl
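Rough arithmetic for the numbers above (a sketch; pool and root names are whatever your cluster uses):

  # replicated pool with size=2: expected raw usage = stored * 2
  #   ~21 TB stored * 2 = ~42 TB raw, against ~70 TB raw capacity in that root, i.e. roughly 60%,
  #   not the ~94% USED that 'ceph df detail' reports
  ceph df detail
  ceph osd df tree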

Re: [ceph-users] Fwd: Ceph team involvement in Rook (Deploying Ceph in Kubernetes)

2018-01-19 Thread Marc Roos
Are the Apache Mesos people on board with this? I have been looking at Mesos and DC/OS and still have to make up my mind which way to go. I like that Mesos has the unified containerizer that runs Docker images without my needing to run dockerd, and how they adapt to the CNI standard. How is t

[ceph-users] Fwd: Ceph team involvement in Rook (Deploying Ceph in Kubernetes)

2018-01-19 Thread Kai Wagner
Just for those of you who are not subscribed to ceph-users. Forwarded Message -- Subject: Ceph team involvement in Rook (Deploying Ceph in Kubernetes) Date: Fri, 19 Jan 2018 11:49:05 +0100 From: Sebastien Han To: ceph-users, Squid Cybernetic, Dan Mick, Chen, Hua