log_channel(cluster) log [ERR] :
> repair 2.36bs0 2:d6cac754:::100070209f6.:head : on disk size
> (4096) does not match object info size (0) adjusted for ondisk to (0)
> 2019-10-01 11:30:10.573 7fa01f589700 -1 log_channel(cluster) log [ERR] :
> 2.36b repair 11 errors, 0 fixed
Hi all,
Same situation here:
Ceph 13.2.6 on Ubuntu 16.04.
Best
Mattia
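
For anyone else hitting this, a minimal sketch of how such an inconsistency can
be inspected before re-issuing a repair (the PG id 2.36b is taken from the log
above):

  # list the PGs currently flagged inconsistent
  ceph health detail | grep inconsistent

  # show which objects/shards disagree and why
  rados list-inconsistent-obj 2.36b --format=json-pretty

  # once the cause is understood, ask the primary to repair the PG again
  ceph pg repair 2.36b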
On 10/1/19 4:38 PM, Stefan Kooman wrote:
> Quoting Wido den Hollander (w...@42on.com):
>> Hi,
>>
>> The Telemetry [0] module has been in Ceph since the Mimic release and
>> when enabled it sends an anonymized JSON report back to the Ceph project.
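
For completeness, a minimal sketch of how the module is enabled on recent
releases (the report can be inspected locally before opting in):

  ceph mgr module enable telemetry
  ceph telemetry show   # preview the anonymized JSON report
  ceph telemetry on     # opt in to sending it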
"snapset_inconsistency"
The pool has size=3 and min_size=2, the image size is 5TB with 4MB objects.
Has anyone experienced a similar issue? I could not find anything
relevant in the issue tracker but I'll be happy to open a case if this
turns out to be a bug.
Thanks in advance,
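
In case it helps with triaging, a rough sketch of the commands for narrowing
down a snapset inconsistency (the pool name "rbd" and the <pgid> below are
placeholders):

  # find the PGs reporting inconsistencies in the pool
  rados list-inconsistent-pg rbd

  # dump the per-object snapset errors for one of the reported PGs
  rados list-inconsistent-snapset <pgid> --format=json-pretty

  # re-check and repair once the offending clone is identified
  ceph pg deep-scrub <pgid>
  ceph pg repair <pgid>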
>
> On Wed, Jul 31, 2019 at 5:43 AM Mattia Belluco wrote:
>>
>> Dear ceph users,
>>
>> We have been recently trying to use the two quota attributes:
>>
>> - ceph.quota.max_files
>> - ceph.quota.max_bytes
>>
>> to prepare for quota enforcing.
Dear ceph users,
We have been recently trying to use the two quota attributes:
- ceph.quota.max_files
- ceph.quota.max_bytes
to prepare for quota enforcing.
While the idea is quite straightforward, we found out we cannot set any
additional file attribute (we tried with directory pinning, too).
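
For reference, this is roughly what we are trying; the mount point and the
values are only examples:

  # quota xattrs on a CephFS directory
  setfattr -n ceph.quota.max_bytes -v 107374182400 /mnt/cephfs/project   # 100 GiB
  setfattr -n ceph.quota.max_files -v 100000 /mnt/cephfs/project
  getfattr -n ceph.quota.max_bytes /mnt/cephfs/project

  # an additional attribute that fails for us, e.g. pinning to MDS rank 0
  setfattr -n ceph.dir.pin -v 0 /mnt/cephfs/project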
> On Tue, Jul 9, 2019 at 4:20 PM Mattia Belluco <mattia.bell...@uzh.ch> wrote:
>
> Hello ml,
>
> I have been looking for an updated table like the one you can
Hello ml,
I have been looking for an updated table like the one you can see here:
https://ceph.com/geen-categorie/feature-set-mismatch-error-on-ceph-kernel-client/
Case in point: we would like to use upmap on our Ceph cluster (currently
used mainly for CephFS) but `ceph features` returns:
"client
; the SSD, and speed up garbage collection?
>
> many thanks
>
> Jake
>
>
>
> On 5/29/19 9:56 AM, Mattia Belluco wrote:
>> On 5/29/19 5:40 AM, Konstantin Shalygin wrote:
>>> block.db should be 30 GB or 300 GB - anything in between is pointless. There
>
On 5/29/19 5:40 AM, Konstantin Shalygin wrote:
> block.db should be 30 GB or 300 GB - anything in between is pointless. The
> reason is described here:
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-February/033286.html
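
For reference, my reading of the argument in that thread: with the RocksDB
defaults used by BlueStore (max_bytes_for_level_base = 256 MB, level
multiplier 10) the levels come out to roughly

  L1 ~ 0.25 GB
  L2 ~ 2.5 GB
  L3 ~ 25 GB
  L4 ~ 250 GB

and a level is only placed on the fast device if it fits there entirely, so a
block.db partition sized anywhere between ~30 GB (L1-L3 plus WAL and slack)
and ~300 GB (large enough to also hold L4) gives no benefit over 30 GB.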
Following some discussions we had at the last Cephalocon, I beg to differ
on this point.
Hi,
we are having issues with the crush location hooks on Mimic:
we deployed the same script we have been using since Hammer (which was
also working fine in Jewel); it returns:
root=fresh-install host=$(hostname -s)-fresh
However, it seems the output of the script is completely disregarded.
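
For context, this is roughly how the hook is wired up on our side (the script
path is an example); the script only has to print a CRUSH location on stdout:

  # ceph.conf on the OSD hosts
  [osd]
      crush location hook = /usr/local/bin/ceph-crush-location.sh
      osd crush update on start = true

  # /usr/local/bin/ceph-crush-location.sh
  #!/bin/sh
  echo "root=fresh-install host=$(hostname -s)-fresh"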
Christian Balzer wrote:
>>
>>
>>
>> Hello,
>>
>> On Mon, 27 Mar 2017 12:27:40 +0200 Mattia Belluco wrote:
>>
>>> Hello all,
>>> we are currently in the process of buying new hardware to expand an
>>> existing Ceph cluster that already has 1200 osds.
>>
Hello all,
we are currently in the process of buying new hardware to expand an
existing Ceph cluster that already has 1200 OSDs.
We are currently using 24 * 4 TB SAS drives per server with an SSD journal
shared among 4 OSDs. For the upcoming expansion we were thinking of
switching to either 6 or 8 TB drives.