Hi All,
I have a query regarding objecter behaviour for homeless sessions. In
situations when all OSDs containing copies of an object (let's say
replication 3) are down, the objecter assigns a homeless session (OSD = -1)
to the client request. This request makes the radosgw thread hang indefinitely as the
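Not from the message above, but one client-side mitigation worth noting: librados can be told to time out operations rather than block forever. A ceph.conf sketch, assuming a radosgw instance named client.rgw.gateway1 (the section name and the 30-second value are placeholders, not from this thread):

```ini
[client.rgw.gateway1]
# Placeholder section name; use your radosgw instance's actual name.
# Fail librados ops after 30 s instead of blocking indefinitely on a
# homeless session; 0 (the default) means wait forever.
rados_osd_op_timeout = 30
rados_mon_op_timeout = 30
```

With a timeout set, the request returns an error to radosgw instead of pinning the thread, at the cost of surfacing transient outages to clients.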
On 11/26/19 4:10 AM, Frank R wrote:
Do you mean the block.db size should be 3, 30 or 300GB and nothing else?
Yes; if not, you will get data spillover from your RocksDB to slow_db
during compaction rounds.
k
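The 3/30/300 GB figures come from RocksDB's level sizing: each level is a fixed multiple of the one below it, so only DB partitions just large enough to hold a whole set of levels are fully used. A sketch of the arithmetic, assuming a 256 MB level base and a 10x multiplier (common defaults; check your cluster's actual settings, and note db_size_needed_gb is a hypothetical helper, not a Ceph API):

```python
# Rough RocksDB level-size arithmetic behind the 3/30/300 GB rule of thumb.
# Assumes max_bytes_for_level_base = 256 MB and a level multiplier of 10.

BASE_MB = 256     # target size of the first level, in MB
MULTIPLIER = 10   # each subsequent level is 10x larger

def db_size_needed_gb(levels):
    """Total DB space (GB) needed to hold the first `levels` levels."""
    total_mb = sum(BASE_MB * MULTIPLIER ** i for i in range(levels))
    return total_mb / 1024

for n in (2, 3, 4):
    print(f"first {n} levels: ~{db_size_needed_gb(n):.1f} GB")
```

Two levels fit in ~2.75 GB, three in ~27.75 GB, four in ~277.75 GB; hence ~3, ~30 and ~300 GB. A block.db of, say, 100 GB can still only hold three full levels, so the extra ~70 GB goes unused and overflow lands on the slow device.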
___
ceph-users mailing list --
On 11/25/19 7:41 PM, Rodrigo Severo - Fábrica wrote:
I would like to know the impacts of having one single CephFS mount vs.
having several.
If I have several subdirectories in my CephFS that should be
accessible to different users, with users needing access to different
sets of mounts, would it
On 11/25/19 6:05 PM, Erdem Agaoglu wrote:
What I can't find is the 138,509 G difference between the
ceph_cluster_total_used_bytes and ceph_pool_stored_raw. This is not
static BTW, checking the same data historically shows we have about
1.12x of what we expect. This seems to make our 1.5x EC
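To make the expectation concrete: an EC pool with k data and m coding chunks consumes (k + m) / k raw bytes per stored byte, so EC 4+2 gives the 1.5x factor mentioned above. A sketch of the check being described (the byte figures are illustrative placeholders, not the poster's numbers):

```python
# Sanity-check arithmetic for EC raw-space accounting.

def expected_raw_bytes(stored_bytes, k, m):
    """Raw bytes an EC(k, m) pool should consume for stored_bytes."""
    return stored_bytes * (k + m) / k

stored = 100 * 2**40                        # say the pool reports 100 TiB stored
raw = expected_raw_bytes(stored, k=4, m=2)  # EC 4+2 -> 1.5x -> 150 TiB
used = 168 * 2**40                          # say the cluster reports 168 TiB used
print(f"unaccounted overhead: {used / raw:.2f}x")  # -> 1.12x
```

Anything ceph_cluster_total_used_bytes reports beyond that 1.5x (BlueStore metadata, allocation granularity, WAL/DB space) would show up as exactly this kind of extra factor.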
Hi,
For your response:
"You should use no more than 1 GB for WAL and 30 GB for RocksDB. Numbers
other than 3, 30, 300 (GB) for block.db are useless.
"
Do you mean the block.db size should be 3, 30 or 300GB and nothing else?
If so, why not?
Thanks,
Frank
I have a question about ceph cache pools as documented on this page:
https://docs.ceph.com/docs/nautilus/dev/cache-pool/
Is the cache pool feature still considered a good idea? Reading some of
the email archives I find some discussion of how this caching is not
recommended anymore, for
This is the seventh bugfix release of the Mimic v13.2.x long term stable
release series. We recommend all Mimic users upgrade.
For the full release notes, see
https://ceph.io/releases/v13-2-7-mimic-released/
Notable Changes
MDS:
- Cache trimming is now throttled. Dropping the MDS
On Mon, Nov 25, 2019 at 1:57 PM Robert Sander wrote:
>
> Hi,
>
> On 25.11.19 at 13:36, Rodrigo Severo - Fábrica wrote:
>
> > I would like to know the expected differences between a FUSE and a kernel
> > mount.
> >
> > Why the 2 options? When should I use one and when should I use the other?
>
>
Hi,
On 25.11.19 at 13:36, Rodrigo Severo - Fábrica wrote:
> I would like to know the expected differences between a FUSE and a kernel
> mount.
>
> Why the 2 options? When should I use one and when should I use the other?
The kernel mount code always lags behind the development process. But
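For concreteness, the two mount styles look roughly like this (the monitor address, cephx user name, secret file and mount point are placeholders, not values from this thread):

```shell
# Kernel client (mount.ceph, from the ceph-common package):
sudo mount -t ceph mon1:6789:/ /mnt/cephfs \
    -o name=myuser,secretfile=/etc/ceph/myuser.secret

# FUSE client (ceph-fuse package), using the same cephx user:
sudo ceph-fuse -n client.myuser /mnt/cephfs
```

The kernel client avoids userspace round-trips and is generally faster, while ceph-fuse can be updated with the Ceph release rather than the kernel.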
Hi,
Just starting to use CephFS.
I would like to know the impacts of having one single CephFS mount vs.
having several.
If I have several subdirectories in my CephFS that should be
accessible to different users, with users needing access to different
sets of mounts, would it be important for me
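One relevant mechanism here (an assumption about the setup, not something stated in the message): ceph fs authorize can create a client key restricted to a subdirectory, so each user mounts only their own subtree. A sketch with placeholder names:

```shell
# Give client.alice rw access only under /alice in filesystem "cephfs":
ceph fs authorize cephfs client.alice /alice rw

# Alice then mounts just her subtree with that key:
sudo mount -t ceph mon1:6789:/alice /mnt/alice \
    -o name=alice,secretfile=/etc/ceph/alice.secret
```

With path-restricted caps, whether you use one mount or several becomes mostly a client-side convenience question; the MDS enforces the boundaries either way.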
Hi,
I'm just deploying a CephFS service.
I would like to know the expected differences between a FUSE and a kernel mount.
Why the 2 options? When should I use one and when should I use the other?
Regards,
Rodrigo Severo