On Wed, Jul 4, 2018 at 7:02 PM Dennis Kramer (DBS) wrote:
>
> Hi,
>
> I have managed to get the cephfs mds online again...for a while.
>
> These topics cover more or less my symptoms and helped me get it up
> and running again:
> - https://www.spinics.net/lists/ceph-users/msg45696.html
> - http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-December/023133.html
Hi,
I want to try to distribute the deep scrubs by running them manually.
I have set osd deep scrub interval to 30 days and osd scrub max interval to
14 days.
However, I am still getting deep scrubs even though none of the PGs have
passed the 30-day period since their last deep scrub. How do I reset the
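Roughly what a manually distributed deep-scrub round could look like, as a sketch (the 60-second pacing and the plain-text parsing below are assumptions for illustration, not from the original post):

# ceph.conf on the OSD nodes, intervals in seconds (30 and 14 days)
[osd]
osd deep scrub interval = 2592000
osd scrub max interval = 1209600

# kick off deep scrubs one PG at a time with some pacing,
# so they do not all start at once
for pg in $(ceph pg dump pgs --format=plain 2>/dev/null | awk '$1 ~ /^[0-9]+\./ {print $1}'); do
    ceph pg deep-scrub "$pg"
    sleep 60
done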
On Wed, Jul 4, 2018 at 6:26 PM, Benjamin Naber wrote:
> Hi @all,
>
> I'm currently testing a setup for a production environment based on the
> following OSD nodes:
>
> Ceph version: luminous 12.2.5
>
> 5x OSD nodes with the following specs:
>
> - 8-core Intel Xeon 2.0 GHz
>
> - 96 GB RAM
>
> - 10x
Hi Sean,
Many thanks for the suggestion, but unfortunately deep-scrub also
appears to be ignored:
# ceph pg deep-scrub 4.ff
instructing pg 4.ffs0 on osd.318 to deep-scrub
'tail -f ceph-osd.318.log' shows no new entries.
To get more info, I set debug level 10 on the OSD and issued another
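For reference, raising the debug level on a live OSD can be done roughly like this (treat the exact values as an example; 1/5 is the usual default to revert to):

# on the OSD's host, via the admin socket
ceph daemon osd.318 config set debug_osd 10/10
# or remotely
ceph tell osd.318 injectargs '--debug-osd 10/10'
# and back down afterwards
ceph tell osd.318 injectargs '--debug-osd 1/5'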
Hi!
* Gregory Farnum [2018-06-28 19:31:09 -0700]:
> That’s close but not *quite* right. It’s not that Ceph will explicitly
> “fall back” to replication. In most (though perhaps not all) erasure codes,
> what you’ll see is full sized parity blocks, a full store of the data (in
> the default
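For context, the erasure-code profile being discussed can be inspected, or a new one defined, like this (the profile and pool names here are only placeholders):

ceph osd erasure-code-profile get default
# e.g. k=2 m=1 plugin=jerasure technique=reed_sol_van ...
ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host
ceph osd pool create ecpool 128 128 erasure ec42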
Hi Caspar,
Thanks for the reply. I've updated all SSDs to the latest firmware. Still having the
same error. The strange thing is that this issue switches from node to node and
from OSD to OSD.
HEALTH_WARN 4 slow requests are blocked > 32 sec
REQUEST_SLOW 4 slow requests are blocked > 32 sec
1 ops
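To see which operations are actually stuck behind those warnings, something like this can help (<id> is a placeholder for the OSD named by health detail; the daemon commands run on that OSD's host):

ceph health detail                        # names the OSDs with blocked requests
ceph daemon osd.<id> dump_ops_in_flight   # ops currently in flight on that OSD
ceph daemon osd.<id> dump_historic_ops    # recently completed slow ops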
Hi,
I have managed to get the cephfs mds online again...for a while.
These topics cover more or less my symptoms and helped me get it up
and running again:
- https://www.spinics.net/lists/ceph-users/msg45696.html
- http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-December/023133.html
Hello,
I wonder how good or bad it is to reserve space for WAL/DB on the same SSD
as the OS, or is it better to separate them? And what is the recommended size
for the WAL/DB partition?
I am building a Luminous cluster on CentOS 7 with 2 OSDs per host; each OSD
would be 2 * 8 TB disks through LVM.
Anton.
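For illustration, an OSD with its DB on a separate SSD partition could be created roughly like this (the WAL lives inside the DB device when no separate WAL device is given; the device and LV names below are placeholders):

ceph-volume lvm create --bluestore \
    --data vg_hdd/osd0_data \
    --block.db /dev/sdX3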
Hi Ben,
At first glance I would say the CPUs are a bit weak for this setup.
The recommendation is to have at least 1 core per OSD. Since you have 8 cores and
10 OSDs, there isn't much left for other processes.
Furthermore, did you upgrade those DC S4500's to the latest
firmware?
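The installed firmware revision can be checked with smartmontools, e.g. (device names are placeholders):

for d in /dev/sd{a..j}; do
    echo -n "$d: "; smartctl -i "$d" | grep -i firmware
done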
Hi Drew,
Try to increase debugging with
debug ms = 1
debug rgw = 20
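For example, placed in ceph.conf on the RGW host (the instance name below is only a placeholder) and followed by a restart of the gateway:

[client.rgw.gateway1]
debug ms = 1
debug rgw = 20

systemctl restart ceph-radosgw@rgw.gateway1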
Regards
Kev
- Original Message -
From: "Drew Weaver"
To: "ceph-users"
Sent: Tuesday, July 3, 2018 1:39:55 PM
Subject: [ceph-users] RADOSGW err=Input/output error
An application is having general failures writing
Hi @all,
I'm currently testing a setup for a production environment based on the
following OSD nodes:
Ceph version: luminous 12.2.5
5x OSD nodes with the following specs:
- 8-core Intel Xeon 2.0 GHz
- 96 GB RAM
- 10x 1.92 TB Intel DC S4500 connected via SATA
- 4x 10 Gbit NICs, 2 bonded via LACP