Dear everyone,
Last year I set up an experimental Ceph cluster (still single node,
failure domain = osd, MB Asus P10S-M WS, CPU Xeon E3-1235L, RAM 64 GB,
HDDs WD30EFRX, Ubuntu 18.04, now with kernel 5.3.0 from Ubuntu mainline
PPA and Ceph 14.2.4 from download.ceph.com/debian-nautilus/
Hello Mike and Jason,
as described in my last mail, I converted the filesystem to ext4, set "sysctl
vm.dirty_background_ratio=0", and put the regular workload on the filesystem
(used as an NFS mount).
That seems to have prevented crashes for an entire week now (before this, the nbd
device crashed aft
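For reference, a minimal sketch of the workaround described above. The sysctl name comes from the mail; the sysctl.d file name is an assumption, any file under /etc/sysctl.d/ works:

  # apply immediately (runtime only)
  sysctl vm.dirty_background_ratio=0
  # persist across reboots (assumed file name)
  echo "vm.dirty_background_ratio = 0" > /etc/sysctl.d/90-nbd-workaround.conf
  sysctl --system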
OK, looks like clock skew is the problem. I thought this was caused by the
reboot, but it did not fix itself after some minutes (mon3 was 6 seconds
ahead).
After forcing time sync from the same server, it seems to be solved now.
Kevin
On Fri, 20 Sep 2019 at 07:33, Kevin Olbrich wrote:
> H
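A hedged sketch of how such mon clock skew can be checked and corrected; both commands are standard, and the choice of chrony over ntpd is an assumption:

  # per-mon clock offsets as seen by the monitor quorum
  ceph time-sync-status
  # step the clock immediately on the skewed node (chrony)
  chronyc makestep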
Hi!
Today some OSDs went down, a temporary problem that was solved easily.
The Mimic cluster is working, all OSDs are back in, and everything is active+clean.
Completely new to me is this:
> 25 slow ops, oldest one blocked for 219 sec, mon.mon03 has slow ops
The cluster itself looks fine, monitoring for
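A hedged way to look at what the monitor is actually blocked on, using the mon name from the warning above; the admin socket commands assume access to the mon host:

  # operations currently tracked by mon.mon03
  ceph daemon mon.mon03 ops
  # recently completed slow operations
  ceph daemon mon.mon03 dump_historic_ops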
On Thu, Sep 19, 2019 at 11:37 PM Dan van der Ster wrote:
>
> You were running v14.2.2 before?
>
> It seems that the ceph_assert you're hitting was indeed added
> between v14.2.2 and v14.2.3 in this commit
> https://github.com/ceph/ceph/commit/12f8b813b0118b13e0cdac15b19ba8a7e127730b
>
> There's
Hi,
I recently took our test cluster up to a new version and am no longer able to
start radosgw. The cluster itself (mon, osd, mgr) appears fine.
Without being much of an expert at reading this, from the errors that were
being thrown it seems like the object expirer is choking on handling
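A hedged way to get more detail out of a radosgw that fails to start; the flags are standard, the client name is hypothetical:

  # run the gateway in the foreground with verbose rgw logging
  radosgw -d --name client.rgw.gateway1 --debug-rgw=20 --debug-ms=1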
Hello Team,
I am trying to enable cephx in an existing cluster using
ceph-ansible, and it is failing when it tries to run `ceph --cluster
ceph --name mon. -k /var/lib/ceph/mon/ceph-computenode01/keyring auth
get-key mon.`. I am sure the `mon.` user exists because I created it, but for
s
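A hedged sketch of checking the failing step by hand; the keyring path and host name are taken from the command above:

  # confirm the local mon keyring contains the mon. entity
  cat /var/lib/ceph/mon/ceph-computenode01/keyring
  # list all auth entities known to the cluster
  ceph auth ls
  # re-run the exact command ceph-ansible executes
  ceph --cluster ceph --name mon. -k /var/lib/ceph/mon/ceph-computenode01/keyring auth get-key mon.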
You were running v14.2.2 before?
It seems that the ceph_assert you're hitting was indeed added
between v14.2.2 and v14.2.3 in this commit
https://github.com/ceph/ceph/commit/12f8b813b0118b13e0cdac15b19ba8a7e127730b
There's a comment in the tracker for that commit which says the
original fix wa
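As an aside, a hedged way to confirm which version each daemon is actually running around such an upgrade:

  # per-daemon-type version summary across the whole cluster
  ceph versions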
I forgot to mention the tracker issue: https://tracker.ceph.com/issues/41935
On 19/09/2019 16:59, Kenneth Waegeman wrote:
Hi all,
I updated our ceph cluster to 14.2.3 yesterday, and today the mds are
crashing one after another. I'm using two active mds.
I've made a tracker ticket, but I was
Hi all,
I updated our ceph cluster to 14.2.3 yesterday, and today the mds are
crashing one after another. I'm using two active mds.
I've made a tracker ticket, but I was wondering if someone else also has
seen this issue yet?
-27> 2019-09-19 15:42:00.196 7f036c2f0700 4 mds.1.server
h
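Until the crash is understood, a commonly suggested and hedged mitigation for cascading active-MDS crashes is to drop to a single active MDS; the filesystem name "cephfs" is an assumption:

  # stop promoting standbys into the second active rank
  ceph fs set cephfs max_mds 1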
Dear ml,
we are currently trying to wrap our heads around a HEALTH_ERR problem on
our Luminous 12.2.12 cluster (upgraded from Jewel a couple of weeks
ago). Before attempting a 'ceph pg repair' we would like to have a
better understanding of what has happened.
ceph -s reports:
cluster:
id:
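For understanding what such an inconsistency actually is before running 'ceph pg repair', a hedged sketch; the pool name and pgid are hypothetical:

  # find inconsistent pgs in a pool
  rados list-inconsistent-pg rbd
  # show which object shards disagree, and why
  rados list-inconsistent-obj 2.1f --format=json-pretty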
Hi, I am using Ceph 12.2.11 and I am getting a few scrub errors. To fix
these scrub errors I ran "ceph pg repair ".
But the scrub errors are not going away, and the repair is taking a long time,
like 8-12 hours.
Thanks
Swami
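A hedged way to see which PGs are still flagged and whether the repair is progressing, with no pool or pg names assumed:

  # pgs currently flagged inconsistent
  ceph health detail | grep inconsistent
  # the same, as a pg listing with state details
  ceph pg ls inconsistent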
Hi,
is it possible to set the logical/physical block size for exported disks?
I can set both values in FreeNAS.
oVirt 4.3.6 will "Support device block size of 4096 bytes for file based
storage domains" and I want to know if I can use this with ceph-iscsi.
thx
matthias
Hello,
I have a Ceph Nautilus 14.2.1 cluster for CephFS only, on 40x 1.8T SAS disks (no
SSD) in 20 servers.
> cluster:
> id: 778234df-5784-4021-b983-0ee1814891be
> health: HEALTH_WARN
> 2 MDSs report slow requests
>
> services:
> mon: 3 daemons, quorum icadmin006,
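A hedged sketch of inspecting the slow MDS requests reported above; the MDS name mds.mds01 is hypothetical:

  # requests currently stuck in the mds, with their age
  ceph daemon mds.mds01 dump_ops_in_flight
  # which MDSs report slow requests, and since when
  ceph health detail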