Re: [ceph-users] ceph pg repair fails...?

2019-10-01 Thread Mattia Belluco
log_channel(cluster) log [ERR] : repair 2.36bs0 2:d6cac754:::100070209f6.:head : on disk size (4096) does not match object info size (0) adjusted for ondisk to (0) 2019-10-01 11:30:10.573 7fa01f589700 -1 log_channel(cluster) log [ERR] : 2.36b repair 11 errors, 0 fixed
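
For anyone hitting similar "repair ... errors, 0 fixed" messages, a minimal sketch of how such an inconsistency is usually inspected before re-running the repair; pg 2.36b is taken from the log above and the commands assume the stock ceph/rados CLIs:

    # list the objects the scrub flagged as inconsistent, with per-shard detail
    rados list-inconsistent-obj 2.36b --format=json-pretty

    # re-issue the repair on the PG once the size mismatch is understood
    ceph pg repair 2.36b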

Re: [ceph-users] Have you enabled the telemetry module yet?

2019-10-01 Thread Mattia Belluco
Hi all, Same situation here: Ceph 13.2.6 on Ubuntu 16.04. Best Mattia On 10/1/19 4:38 PM, Stefan Kooman wrote: > Quoting Wido den Hollander (w...@42on.com): >> Hi, >> >> The Telemetry [0] module has been in Ceph since the Mimic release and >> when enabled it sends an anonymized JSON back to
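
For completeness, a hedged sketch of what enabling the telemetry module involves on a Mimic-or-later cluster (these are the generic mgr/telemetry commands, nothing specific to this thread):

    ceph mgr module enable telemetry   # load the mgr module
    ceph telemetry show                # preview the anonymized JSON report
    ceph telemetry on                  # opt in to periodic submission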

[ceph-users] scrub errors because of missing shards on luminous

2019-09-19 Thread Mattia Belluco
"snapset_inconsistency" The pool has size=3 and min_size=2, the image size is 5TB with 4MB objects. Has anyone experienced a similar issue? I could not find anything relevant in the issue tracker but I'll be happy to open a case if this turns out to be a bug. Thanks in adva
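
A sketch of how snapset inconsistencies of this kind are typically listed, with <pgid> as a placeholder for the affected placement group:

    # show which PGs are currently flagged inconsistent
    ceph health detail | grep inconsistent

    # per-shard detail on the clones/snapsets the scrub complained about
    rados list-inconsistent-snapset <pgid> --format=json-pretty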

Re: [ceph-users] cephfs quota setfattr permission denied

2019-07-31 Thread Mattia Belluco
m. > > On Wed, Jul 31, 2019 at 5:43 AM Mattia Belluco wrote: >> >> Dear ceph users, >> >> We have been recently trying to use the two quota attributes: >> >> - ceph.quota.max_files >> - ceph.quota.max_bytes >> >> to prepare for quota
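
One common cause of "permission denied" on the ceph.quota.* attributes is a client capability that lacks the 'p' flag; a minimal sketch of granting it, with client.foo and the filesystem name cephfs as placeholders:

    # 'p' allows the client to set layouts and quotas in addition to rw access
    ceph fs authorize cephfs client.foo / rwp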

[ceph-users] cephfs quota setfattr permission denied

2019-07-31 Thread Mattia Belluco
Dear ceph users, We have recently been trying to use the two quota attributes: - ceph.quota.max_files - ceph.quota.max_bytes to prepare for quota enforcement. While the idea is quite straightforward, we found out we cannot set any additional file attribute (we tried with the directory pinning, too
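
For context, a minimal sketch of the kind of setfattr calls being attempted here; the mount point /mnt/cephfs/mydir and the limits are placeholders:

    setfattr -n ceph.quota.max_bytes -v 107374182400 /mnt/cephfs/mydir   # 100 GiB
    setfattr -n ceph.quota.max_files -v 100000 /mnt/cephfs/mydir
    getfattr -n ceph.quota.max_bytes /mnt/cephfs/mydir                   # verify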

Re: [ceph-users] Ceph features and linux kernel version for upmap

2019-07-09 Thread Mattia Belluco
1247 München > www.croit.io > Tel: +49 89 1896585 90 > > On Tue, Jul 9, 2019 at 4:20 PM Mattia Belluco wrote: > > Hello ml, > > I have been looking for an updated table like the one you can

[ceph-users] Ceph features and linux kernel version for upmap

2019-07-09 Thread Mattia Belluco
Hello ml, I have been looking for an updated table like the one you can see here: https://ceph.com/geen-categorie/feature-set-mismatch-error-on-ceph-kernel-client/ Case in point: we would like to use upmap on our ceph cluster (currently used mainly for CephFS) but `ceph features` returns: "client
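
For reference, a hedged sketch of the usual upmap prerequisite checks; none of this is specific to the cluster in the thread, and kernel clients need a reasonably recent kernel (roughly 4.13 or newer) to advertise upmap support:

    ceph features                                     # feature bits per daemon/client group
    ceph osd set-require-min-compat-client luminous   # refused while pre-luminous clients are connected
    ceph balancer mode upmap
    ceph balancer on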

Re: [ceph-users] SSD Sizing for DB/WAL: 4% for large drives?

2019-06-03 Thread Mattia Belluco
; the SSD, and speed up garbage collection? > > many thanks > > Jake > > On 5/29/19 9:56 AM, Mattia Belluco wrote: >> On 5/29/19 5:40 AM, Konstantin Shalygin wrote: >>> block.db should be 30Gb or 300Gb - anything between is pointless. There >

Re: [ceph-users] SSD Sizing for DB/WAL: 4% for large drives?

2019-05-29 Thread Mattia Belluco
On 5/29/19 5:40 AM, Konstantin Shalygin wrote: > block.db should be 30Gb or 300Gb - anything between is pointless. There > is described why: > http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-February/033286.html Following some discussions we had at the past Cephalocon I beg to differ on t
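
For readers following the linked thread: the 30Gb/300Gb figures come from RocksDB's level sizing (roughly 300 MB, 3 GB, 30 GB, 300 GB per level with the defaults), so a block.db partition only pays off once it can hold a whole level. A sketch of checking whether an existing OSD's DB already spills onto the slow device (osd.0 is a placeholder):

    # db_used_bytes vs db_total_bytes, plus slow_used_bytes for spillover
    ceph daemon osd.0 perf dump bluefs | egrep 'db_total_bytes|db_used_bytes|slow_used_bytes'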

[ceph-users] crush location hook with mimic

2019-01-23 Thread Mattia Belluco
Hi, we are having issues with the crush location hooks on Mimic: we deployed the same script we have been using since Hammer (and which also worked fine in Jewel); it returns: root=fresh-install host=$(hostname -s)-fresh However, it seems the output of the script is completely disregarded.
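
For comparison, a minimal sketch of how a location hook is normally wired up and tested; the script path is a placeholder, and the hook is expected to print key=value pairs on a single line exactly as above:

    # ceph.conf
    [osd]
        crush location hook = /usr/local/bin/custom-crush-location

    # the hook is invoked roughly like this, so it can be tested by hand:
    /usr/local/bin/custom-crush-location --cluster ceph --id 12 --type osd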

Re: [ceph-users] New hardware for OSDs

2017-03-27 Thread Mattia Belluco
an Balzer : >> >> >> >> Hello, >> >> On Mon, 27 Mar 2017 12:27:40 +0200 Mattia Belluco wrote: >> >>> Hello all, >>> we are currently in the process of buying new hardware to expand an >>> existing Ceph cluster that already has 1200 osds. >>

[ceph-users] New hardware for OSDs

2017-03-27 Thread Mattia Belluco
Hello all, we are currently in the process of buying new hardware to expand an existing Ceph cluster that already has 1200 osds. We are currently using 24 * 4 TB SAS drives per OSD node, with an SSD journal shared among 4 osds. For the upcoming expansion we were thinking of switching to either 6 or 8 TB