Re: mmap() question

2013-10-12 Thread Konstantin Belousov
On Fri, Oct 11, 2013 at 09:57:24AM +0400, Dmitry Sivachenko wrote:
 
 On 11.10.2013, at 9:17, Konstantin Belousov kostik...@gmail.com wrote:
 
  On Wed, Oct 09, 2013 at 03:42:27PM +0400, Dmitry Sivachenko wrote:
  Hello!
  
  I have a program which mmap()s a lot of large files (their total size is more 
  than RAM, and I have no swap), but it needs only small parts of those files at 
  a time.
  
  My understanding is that, with mmap(), when I access some memory region the 
  OS reads the relevant portion of the file from disk and caches the result 
  in memory.  If there is no free memory, the OS will purge previously read parts 
  of the mmap'ed files to free memory for the new chunk.
  
  But this is not the case.  I use the following simple program, which takes a 
  list of files as command-line arguments, mmap()s them all, and then repeatedly 
  selects a random file and a random 1K region of that file and computes the XOR 
  of the bytes in that region.
  After some time the program dies:
  pid 63251 (a.out), uid 1232, was killed: out of swap space
  
  It seems I misunderstand how mmap() works; can you please clarify 
  what's going wrong?
  
  I expect the program to run indefinitely, purging some regions out of RAM 
  and reading in the relevant parts of the files.
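
(For reference, a minimal sketch of the kind of test program described above; the
original poster's code is not included in the thread, so the file handling and
constants here are illustrative, not his exact program:)

#include <sys/types.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <err.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int
main(int argc, char **argv)
{
        char **maps;
        off_t *sizes;
        int i, nfiles = argc - 1;

        if (nfiles < 1)
                errx(1, "usage: %s file ...", argv[0]);
        maps = calloc(nfiles, sizeof(*maps));
        sizes = calloc(nfiles, sizeof(*sizes));
        if (maps == NULL || sizes == NULL)
                err(1, "calloc");

        /* Map every file read-only; the mappings stay in place forever. */
        for (i = 0; i < nfiles; i++) {
                struct stat st;
                int fd = open(argv[i + 1], O_RDONLY);

                if (fd == -1 || fstat(fd, &st) == -1)
                        err(1, "%s", argv[i + 1]);
                sizes[i] = st.st_size;
                maps[i] = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE,
                    fd, 0);
                if (maps[i] == MAP_FAILED)
                        err(1, "mmap %s", argv[i + 1]);
                close(fd);      /* the mapping keeps the file referenced */
        }

        /* Forever: pick a random file and a random 1K region, XOR it. */
        for (;;) {
                int f = arc4random_uniform(nfiles);
                size_t j, len = 1024;
                off_t off;
                unsigned char x = 0;

                if (sizes[f] <= (off_t)len)
                        continue;
                arc4random_buf(&off, sizeof(off));
                off = (off & INT64_MAX) % (sizes[f] - (off_t)len);
                for (j = 0; j < len; j++)
                        x ^= (unsigned char)maps[f][off + j];
                if (x == 0xff)  /* keep the result live */
                        putchar('.');
        }
}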
  
  
  You did not specify several very important parameters for your test:
  1. total amount of RAM installed
 
 
 24GB
 
 
  2. count of the test files and size of the files
 
 To be precise: I used 57 files with sizes varying from 74MB to 19GB.
 The total size of these files is 270GB.
 
  3. which filesystem the files are located on
 
 
 UFS @ SSD drive
 
  4. version of the system.
 
 
 FreeBSD 9.2-PRERELEASE #0 r254880M: Wed Aug 28 11:07:54 MSK 2013

I was not able to reproduce the situation locally. I even tried starting
a lot of threads accessing the mapped regions, to try to outrun the
pagedaemon. The user threads sleep on the disk reads, while the pagedaemon
has plenty of time to rebalance the queues. It might be a case where an SSD
indeed makes a difference.

Still, I can see how this situation could arise. The code which triggers
OOM never fires if there is free space in the swap file, so the absence
of swap is a necessary condition for triggering the bug.  Next, the OOM
calculation does not account for the possibility that almost all pages on
the queues can be reused. It simply fires if free pages are depleted too
much or the free target cannot be reached.
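
In other words (a hedged model of the check just described, not the actual
kernel code; the function name and parameters are illustrative):

#include <stdbool.h>

/*
 * Illustrative model of the OOM trigger described above: the kill is
 * considered only once no swap space is left, and then it fires purely
 * on the free-page counters, without asking whether the pages sitting
 * on the active/inactive queues could in fact be reclaimed (as they can
 * be for clean, file-backed mappings).
 */
static bool
oom_would_fire(unsigned swap_free_pages, unsigned free_count,
    unsigned free_min, bool free_target_reached)
{
        if (swap_free_pages > 0)
                return (false);         /* never fires while swap remains */
        return (free_count < free_min || !free_target_reached);
}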

IMO one possible solution is to account for the queued pages in addition
to the swap space.  This is not entirely accurate, since some pages on the
queues cannot be reused, at least transiently.  The most precise algorithm
would count the held and busy pages globally and subtract this count from
the queue lengths, but that is probably too costly.

Instead, I think we can rely on the numbers counted by the pagedaemon
threads during their passes.  Due to the transient nature of the pagedaemon
failures, this should be fine.
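
Under the same illustrative assumptions as the sketch above (again, a rough
model, not the patch itself; the names are made up), the adjusted check might
look roughly like:

#include <stdbool.h>

/*
 * Sketch of the proposed adjustment: pages on the queues are treated as a
 * reclaimable reserve, minus the pages the pagedaemon reported it could not
 * process during its passes ("sticky" pages), and OOM is declared only when
 * that reserve is exhausted as well.
 */
static bool
oom_would_fire_adjusted(unsigned swap_free_pages, unsigned free_count,
    unsigned free_min, bool free_target_reached,
    unsigned active_count, unsigned inactive_count, unsigned queue_sticky)
{
        unsigned queued, reclaimable;

        if (swap_free_pages > 0)
                return (false);
        queued = active_count + inactive_count;
        reclaimable = queued > queue_sticky ? queued - queue_sticky : 0;
        if (reclaimable > free_min)
                return (false);         /* the queues can still be reused */
        return (free_count < free_min || !free_target_reached);
}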

Below is a prototype patch against HEAD.  It is not applicable to stable;
please use a HEAD kernel for testing.

diff --git a/sys/sys/vmmeter.h b/sys/sys/vmmeter.h
index d2ad920..ee5159a 100644
--- a/sys/sys/vmmeter.h
+++ b/sys/sys/vmmeter.h
@@ -93,9 +93,10 @@ struct vmmeter {
 u_int v_free_min;   /* (c) pages desired free */
 u_int v_free_count; /* (f) pages free */
 u_int v_wire_count; /* (a) pages wired down */
-   u_int v_active_count;   /* (q) pages active */
+   u_int v_active_count;   /* (a) pages active */
 u_int v_inactive_target; /* (c) pages desired inactive */
-   u_int v_inactive_count; /* (q) pages inactive */
+   u_int v_inactive_count; /* (a) pages inactive */
+   u_int v_queue_sticky;   /* (a) pages on queues but cannot process */
 u_int v_cache_count;/* (f) pages on cache queue */
 u_int v_cache_min;  /* (c) min pages desired on cache queue */
 u_int v_cache_max;  /* (c) max pages in cached obj (unused) */
diff --git a/sys/vm/vm_meter.c b/sys/vm/vm_meter.c
index 713a2be..4bb1f1f 100644
--- a/sys/vm/vm_meter.c
+++ b/sys/vm/vm_meter.c
@@ -316,6 +316,7 @@ VM_STATS_VM(v_active_count, "Active pages");
 VM_STATS_VM(v_inactive_target, "Desired inactive pages");
 VM_STATS_VM(v_inactive_count, "Inactive pages");
 VM_STATS_VM(v_cache_count, "Pages on cache queue");
+VM_STATS_VM(v_queue_sticky, "Pages which cannot be moved from queues");
 VM_STATS_VM(v_cache_min, "Min pages on cache queue");
 VM_STATS_VM(v_cache_max, "Max pages on cached queue");
 VM_STATS_VM(v_pageout_free_min, "Min pages reserved for kernel");
diff --git a/sys/vm/vm_page.h b/sys/vm/vm_page.h
index 7846702..6943a0e 100644
--- a/sys/vm/vm_page.h
+++ b/sys/vm/vm_page.h
@@ -226,6 +226,7 @@ struct vm_domain {
 long vmd_segs;  /* bitmask of the segments */
 boolean_t vmd_oom;
 int vmd_pass;   /* local pagedaemon pass */
+   int vmd_queue_sticky;   /* pages on queues which cannot be processed */
 struct vm_page vmd_marker; /* marker for pagedaemon private 

Re: mmap() question

2013-10-12 Thread Dmitry Sivachenko

On 12.10.2013, at 13:59, Konstantin Belousov kostik...@gmail.com wrote:
 
 I was not able to reproduce the situation locally. I even tried starting
 a lot of threads accessing the mapped regions, to try to outrun the
 pagedaemon. The user threads sleep on the disk reads, while the pagedaemon
 has plenty of time to rebalance the queues. It might be a case where an SSD
 indeed makes a difference.
 


With an ordinary SATA drive it would take hours just to read 20GB of data from 
disk: because of the random access pattern it does a lot of seeks, and the 
reading speed is extremely low.

An SSD dramatically improves the reading speed.
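
(A back-of-the-envelope estimate, assuming roughly 100 random 1K reads per
second for a spinning SATA disk: 20GB is about 2*10^7 such reads, i.e. on the
order of 2*10^5 seconds, or more than two days, whereas an SSD sustaining tens
of thousands of random reads per second finishes the same work in minutes.)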


 Still, I can see how this situation could arise. The code which triggers
 OOM never fires if there is free space in the swap file, so the absence
 of swap is a necessary condition for triggering the bug.  Next, the OOM
 calculation does not account for the possibility that almost all pages on
 the queues can be reused. It simply fires if free pages are depleted too
 much or the free target cannot be reached.


First I tried with some swap space configured.  The OS started to swap out my 
process after it reached about 20GB, which is also not what I expected: what is 
the reason to swap out regions of read-only mmap()ed files?  Is that the 
expected behaviour?


 
 Below is a prototype patch against HEAD.  It is not applicable to stable;
 please use a HEAD kernel for testing.


Thanks, I will test the patch soon and report the results.


Re: mmap() question

2013-10-12 Thread Konstantin Belousov
On Sat, Oct 12, 2013 at 04:04:31PM +0400, Dmitry Sivachenko wrote:
 
 On 12.10.2013, at 13:59, Konstantin Belousov kostik...@gmail.com wrote:
  
  I was not able to reproduce the situation locally. I even tried starting
  a lot of threads accessing the mapped regions, to try to outrun the
  pagedaemon. The user threads sleep on the disk reads, while the pagedaemon
  has plenty of time to rebalance the queues. It might be a case where an SSD
  indeed makes a difference.
  
 
 
 With an ordinary SATA drive it would take hours just to read 20GB of data from 
 disk: because of the random access pattern it does a lot of seeks, and the 
 reading speed is extremely low.
 
 An SSD dramatically improves the reading speed.
 
 
  Still, I can see how this situation could arise. The code which triggers
  OOM never fires if there is free space in the swap file, so the absence
  of swap is a necessary condition for triggering the bug.  Next, the OOM
  calculation does not account for the possibility that almost all pages on
  the queues can be reused. It simply fires if free pages are depleted too
  much or the free target cannot be reached.
 
 
 First I tried with some swap space configured.  The OS started to swap out my 
 process after it reached about 20GB, which is also not what I expected: what 
 is the reason to swap out regions of read-only mmap()ed files?  Is that the 
 expected behaviour?
 
How did you conclude that the pages from your r/o mappings were paged out?
The VM never does this.  Only anonymous memory can be written to the swap file,
including the shadow pages for writeable COW mappings.  I suspect that you had
another 20GB of something else in use on the machine in the meantime.
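
To illustrate the distinction (a hedged example, not taken from the thread; the
file path is made up, and the file is assumed to be at least one page long):

#include <sys/types.h>
#include <sys/mman.h>
#include <err.h>
#include <fcntl.h>
#include <unistd.h>

int
main(void)
{
        size_t len = 1024 * 1024;
        int fd = open("/path/to/datafile", O_RDONLY);   /* hypothetical file */

        if (fd == -1)
                err(1, "open");

        /* Read-only, file-backed: under memory pressure these pages are
         * simply dropped and later re-read from the file; they never go
         * to swap. */
        char *ro = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, 0);

        /* Anonymous memory: no file backs it, so under pressure it can
         * only be paged out to swap (or, with no swap, it stays in RAM). */
        char *anon = mmap(NULL, len, PROT_READ | PROT_WRITE,
            MAP_ANON | MAP_PRIVATE, -1, 0);

        /* Private writeable file mapping: pages that are actually written
         * become anonymous COW "shadow" pages and are swap-backed; pages
         * left clean remain file-backed. */
        char *cow = mmap(NULL, len, PROT_READ | PROT_WRITE,
            MAP_PRIVATE, fd, 0);

        if (ro == MAP_FAILED || anon == MAP_FAILED || cow == MAP_FAILED)
                err(1, "mmap");

        anon[0] = 1;            /* dirties an anonymous page */
        cow[0] = 1;             /* forces creation of a shadow page */

        munmap(ro, len);
        munmap(anon, len);
        munmap(cow, len);
        close(fd);
        return (0);
}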

 
  
  Below is a prototype patch against HEAD.  It is not applicable to stable;
  please use a HEAD kernel for testing.
 
 
 Thanks, I will test the patch soon and report the results.

