> If a CephFS client receives a cap release request and is able to
> perform it (no processes are accessing the file at the moment), the client
> cleans up its internal state and allows the MDS to release the cap.
> This cleanup also involves removing the file's data from the page cache.
>
> If your MDS w
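For anyone wanting to watch this from the client side: assuming a kernel
client with debugfs mounted, something along these lines shows the caps the
client currently holds next to the size of its page cache (the directory
name under /sys/kernel/debug/ceph depends on the fsid and client id):

  # caps currently held by this kernel client
  cat /sys/kernel/debug/ceph/*/caps
  # file data currently sitting in the page cache
  grep '^Cached:' /proc/meminfo

If the MDS recalls caps on an otherwise idle mount, both numbers would be
expected to drop together.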
> I suspect that mds asked client to trim its cache. Please run
> following commands on an idle client.
In the meantime we migrated to the RH Ceph version and delivered the MDS
both SSDs and more memory, and the problem went away.
It still puzzles my mind a bit - why is there a connection betwe
> On 10/15/18 12:41 PM, Dietmar Rieder wrote:
>> No big difference here.
>> all CentOS 7.5 official kernel 3.10.0-862.11.6.el7.x86_64
>
> ...forgot to mention: all is luminous ceph-12.2.7
Thanks for your time in testing, this is very valuable to me in the
debugging. Two questions:
Did you "sleep 9
> On Sun, Oct 14, 2018 at 8:21 PM wrote:
> How many cephfs mounts access the file? Is it possible that some
> program opens that file in RW mode (even if they just read the file)?
The nature of the program is that it is "prepped" by one set of commands
and queried by another, thus the RW case
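A quick way to check the second point on a client is to look at the open
modes directly; the path below is just an example:

  # the FD column shows the access mode: r = read, w = write, u = read/write
  lsof /mnt/cephfs/dataset/file.bin
  # or everything currently open under the mount
  lsof +D /mnt/cephfs/dataset

A single process holding the file open read/write can be enough to change
which caps the MDS hands out to the other clients, which seems to be what
the question above is getting at.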
> The actual amount of memory used by the VFS cache is available through 'grep
> Cached /proc/meminfo'. slabtop provides information about the caches
> of inodes, dentries, and IO memory buffers (buffer_head).
Thanks, that was also what I got out of it, and why I reported "free"
output in the first as it also
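Putting those two together, a quick snapshot on a client could look
something like this (slabtop is from procps; slab cache names such as
ceph_inode_info are only what I would expect to see with the kernel client):

  # total file data cached in RAM on this client
  grep -E '^(Buffers|Cached):' /proc/meminfo
  # largest slab caches (dentries, inodes, buffer_head, ceph_inode_info, ...)
  slabtop -o -s c | head -n 15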
> Try looking in /proc/slabinfo / slabtop during your tests.
I need a bit of guidance here: does slabinfo cover the VFS page
cache? I cannot seem to find any traces of it (sorting by size on
machines with a huge cache does not really show anything). Perhaps
I'm holding the screwdriver wrong?
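The page cache itself is not a slab cache, so it never shows up in
slabinfo/slabtop; only filesystem metadata such as inodes and dentries lives
in the slab. The split is visible directly in meminfo, e.g.:

  # file data in the page cache vs. reclaimable/unreclaimable kernel slab
  grep -E '^(Cached|SReclaimable|SUnreclaim):' /proc/meminfo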
On 14 Oct 2018, at 15.26, John Hearns wrote:
>
> This is a general question for the ceph list.
> Should Jesper be looking at these vm tunables?
> vm.dirty_ratio
> vm.dirty_centisecs
>
> What effect do they have when using Cephfs?
This situation is read-only, so there is no dirty data in the page cache.
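That is easy to confirm on the client while the jobs are running, for
example:

  # with a purely read-only workload both values should stay near zero
  grep -E '^(Dirty|Writeback):' /proc/meminfo
  # the writeback tunables only matter once there is dirty data to flush
  sysctl vm.dirty_ratio vm.dirty_background_ratio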
Hej Jesper.
Sorry I do not have a direct answer to your question.
When looking at memory usage, I often use this command:
watch cat /proc/meminfo
Hi
We have a dataset of ~300 GB on CephFS which is being used for computations
over and over again, being refreshed daily or similar.
When hosting it on NFS, after a refresh the files are transferred, but from
there they would be sitting in the kernel page cache of the client
until they are refreshed.
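One rough way to see whether the data is actually served from the client's
page cache is to time a re-read; the path below is only a placeholder:

  # first pass pulls the data over the network and fills the page cache
  time cat /mnt/cephfs/dataset/file.bin > /dev/null
  # an immediate second pass that finishes much faster (and generates little
  # OSD traffic) is coming from local RAM
  time cat /mnt/cephfs/dataset/file.bin > /dev/null
  # page cache size before/after, for comparison
  grep '^Cached:' /proc/meminfo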