On 01/19/2018 02:12 PM, Steven Vacaroaia wrote:
> Hi Joshua,
>
> I was under the impression that kernel 3.10.0-693 will work with iscsi
>
That kernel works with RHCS 2.5 and below. You need the rpms from that
or the matching upstream releases. Besides trying to dig out the
versions and matching
On Tue, Jan 16, 2018 at 9:25 AM, Jens-U. Mozdzen wrote:
> Hello Mike,
>
> Quoting Mike Lovell:
>
>> On Mon, Jan 8, 2018 at 6:08 AM, Jens-U. Mozdzen wrote:
>>
>>> Hi *,
>>> [...]
>>> 1. Does setting the cache mode to "forward" lead to the above situation of
>>> remaining locks on the hot-storage pool?
'MAX AVAIL' in the 'ceph df' output represents the amount of data that can be
used before the first OSD becomes full, and not the sum of all free space
across a set of OSDs.
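A quick way to see which OSD is doing the bounding (hedged sketch; '%USE'
is column 8 in Luminous "ceph osd df" output, so adjust the sort key if
your version prints different columns):

# list the three fullest OSDs; the fullest one caps MAX AVAIL for
# every pool whose CRUSH rule maps to it
ceph osd df | sort -rn -k8 | head -3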
Original Message
From: Webert de Souza Lima
To: ceph-users
Sent: January 19, 2018 (
I too experienced this with that kernel as well as the elrepo kernel.
On Jan 19, 2018 2:13 PM, "Steven Vacaroaia" wrote:
Hi Joshua,
I was under the impression that kernel 3.10.0-693 will work with iscsi.
Unfortunately, I still cannot create a disk because qfull_time_out is not
supported.
What
Hi Joshua,
I was under the impression that kernel 3.10.0-693 will work with iscsi.
Unfortunately, I still cannot create a disk because qfull_time_out is not
supported.
What am I missing / doing wrong?
2018-01-19 15:06:45,216 INFO [lun.py:601:add_dev_to_lio()] -
(LUN.add_dev_to_lio) Adding i
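A quick hedged check for whether the running kernel exposes that attribute
at all (the configfs path is an assumption and needs at least one
configured backstore):

# look for a qfull_time_out attribute anywhere under the target configfs
find /sys/kernel/config/target/ -name qfull_time_out 2>/dev/null | grep . \
  || echo "qfull_time_out not exposed by this kernel"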
Hello,
apologies for reviving an old thread, but I just wasted another full
day because I had forgotten about this issue...
To recap: the udev rules nowadays do not (at least in my case, with
disks served via Fibre Channel) create the /dev/disk/by-partuuid links
that ceph-disk expects.
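In case it saves someone else a day, a minimal workaround sketch that
recreates the missing links from blkid output (assumes GPT partitions,
root privileges, and that your devices match the glob below):

# rebuild /dev/disk/by-partuuid symlinks from each partition's PARTUUID
mkdir -p /dev/disk/by-partuuid
for part in /dev/sd?[0-9]*; do
    uuid=$(blkid -s PARTUUID -o value "$part")
    [ -n "$uuid" ] && ln -sf "../../${part#/dev/}" "/dev/disk/by-partuuid/$uuid"
done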
Hi *,
Quoting ceph-users-requ...@lists.ceph.com:
Hi *,
facing the problem of reducing the number of PGs for a pool, I've found
various information and suggestions, but no "definitive guide" to handling
pool migration with Ceph 12.2.x. This seems to be a fairly common
problem when having to deal with
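One approach that comes up repeatedly (a hedged sketch, not a definitive
guide: rados cppool does not preserve snapshots, all clients must be
stopped for the duration, and the pool names and PG count below are
placeholders):

# copy the pool into a new one with the desired PG count, then swap names
ceph osd pool create rbd_new 256
rados cppool rbd rbd_new
ceph osd pool rename rbd rbd_old
ceph osd pool rename rbd_new rbd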
I don't think it's a hardware issue. All the hosts are VMs. By the way, using
the same set of VMWare hypervisors, I switched back to Ubuntu 16.04 last
night, so far so good, no freeze.
On Fri, Jan 19, 2018 at 8:50 AM, Daniel Baumann
wrote:
> Hi,
>
> On 01/19/18 14:46, Youzhong Yang wrote:
> > Just
Hi,
On 01/19/18 14:46, Youzhong Yang wrote:
> Just wondering if anyone has seen the same issue, or if it's just me.
we're using debian with our own backported kernels and ceph, works rock
solid.
what you're describing sounds more like a hardware issue to me. if you
don't fully "trust"/have confidence
The freeze is likely a kernel panic. Try testing different versions of the
kernel.
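If the console is blank when it happens, netconsole can ship the panic
trace to another box before the host dies (hedged sketch; the interface,
port and collector IP are placeholders):

# on the crashing host: stream kernel messages over UDP
modprobe netconsole netconsole=@/eth0,6666@192.168.1.10/
# on the collector (192.168.1.10): capture whatever arrives
nc -u -l 6666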
On Fri, Jan 19, 2018, 8:46 AM Youzhong Yang wrote:
> One month ago when I first started evaluating Ceph, I chose Debian 9.3 as
> the operating system. I saw random OS hangs so I gave up and switched to
> Ubuntu 16.
One month ago when I first started evaluating Ceph, I chose Debian 9.3 as
the operating system. I saw random OS hangs so I gave up and switched to
Ubuntu 16.04. Everything works well using Ubuntu 16.04.
Yesterday I tried Ubuntu 17.10, and again I saw random OS hangs, no matter
whether it's a mon, mgr, osd, or rg
On Fri, 19 Jan 2018, Ugis wrote:
> Running Luminous 12.2.2, noticed strange behavior lately.
> When, for example, setting "ceph osd out X", closer to the rebalancing
> end "degraded" objects still show up, but in the "pgs:" section of ceph -s
> no degraded pgs are still recovering, just remapped, and no d
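A hedged way to cross-check where the degraded objects are hiding when
"ceph -s" only shows remapped PGs (pgs_brief output assumed from Luminous):

# name the PGs that carry degraded objects
ceph health detail | grep -i degraded
# list every PG that is not yet active+clean
ceph pg dump pgs_brief 2>/dev/null | grep -v 'active+clean'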
That big fat warning was related to the note under your second quote:
"Note: Prior to QEMU v2.4.0, if you explicitly set RBD Cache settings
in the Ceph configuration file, your Ceph settings override the QEMU
cache settings."
Long story short, starting with QEMU v2.4.0, your QEMU cache settings
override the Ceph configuration file settings.
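In practice that means you pick the mode on the QEMU command line (hedged
example; the pool, image name and auth id are placeholders):

# QEMU controls the cache mode; no rbd_cache override in ceph.conf needed
qemu-system-x86_64 \
    -drive format=raw,file=rbd:rbd/myimage:id=admin,cache=writeback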
hi,
I'm a bit confused after reading the official Ceph documentation on QEMU
and rbd caching.
http://docs.ceph.com/docs/master/rbd/qemu-rbd/?highlight=qemu
there's a big fat warning:
"Important: If you set rbd_cache=true, you must set cache=writeback or
risk data loss. Without cache=writeback, Q
While it seemed to be solved yesterday, today the %USED has grown a lot
again. See:
~# ceph osd df tree
http://termbin.com/0zhk
~# ceph df detail
http://termbin.com/thox
94% USED while there is about 21TB worth of data; size = 2 means ~42TB RAW
usage, but the OSDs in that root sum to ~70TB available
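The arithmetic that makes this look wrong (a quick hedged sanity check):

# 21 TB of data at size=2 should be ~42 TB raw; against ~70 TB of raw
# capacity that is ~60%, nowhere near the reported 94%
echo 'scale=1; 21 * 2 / 70 * 100' | bc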
Do the Apache Mesos people agree with this? I have been looking at
Mesos and DC/OS and still have to make up my mind which way to go. I like
that Mesos has the unified containerizer that runs Docker images so that I
don't need to run dockerd, and how it adapts to the CNI standard.
How is t
Just for those of you who are not subscribed to ceph-users.
Forwarded Message
Subject: Ceph team involvement in Rook (Deploying Ceph in Kubernetes)
Date: Fri, 19 Jan 2018 11:49:05 +0100
From: Sebastien Han
To: ceph-users , Squid Cybernetic
, Dan Mick , Chen, Hua