Author: alc
Date: Sat May 27 16:40:00 2017
New Revision: 318995
URL: https://svnweb.freebsd.org/changeset/base/318995

Log:
  In r118390, the swap pager's approach to striping swap allocation over
  multiple devices was changed.  However, swapoff_one() was not fully and
  correctly converted.  In particular, with r118390's introduction of a per-
  device blist, the maximum swap block size, "dmmax", became irrelevant to
  swapoff_one()'s operation.  Moreover, swapoff_one() was performing out-of-
  range operations on the per-device blist that were silently ignored by
  blist_fill().
  
  This change corrects both of these problems in swapoff_one(), clearing the
  way for a future increase to MAX_PAGEOUT_CLUSTER.  Previously, increasing
  MAX_PAGEOUT_CLUSTER would make swapoff_one() panic inside blist_fill().
  
  Reviewed by:  kib, markj
  MFC after:    3 days

Modified:
  head/sys/vm/swap_pager.c

Modified: head/sys/vm/swap_pager.c
==============================================================================
--- head/sys/vm/swap_pager.c    Sat May 27 14:07:46 2017        (r318994)
+++ head/sys/vm/swap_pager.c    Sat May 27 16:40:00 2017        (r318995)
@@ -2310,9 +2310,13 @@ swapoff_one(struct swdevt *sp, struct uc
         */
        mtx_lock(&sw_dev_mtx);
        sp->sw_flags |= SW_CLOSING;
-       for (dvbase = 0; dvbase < sp->sw_end; dvbase += dmmax) {
+       for (dvbase = 0; dvbase < nblks; dvbase += BLIST_BMAP_RADIX) {
+               /*
+                * blist_fill() cannot allocate more than BLIST_BMAP_RADIX
+                * blocks per call.
+                */
                swap_pager_avail -= blist_fill(sp->sw_blist,
-                    dvbase, dmmax);
+                   dvbase, ulmin(nblks - dvbase, BLIST_BMAP_RADIX));
        }
        swap_total -= (vm_ooffset_t)nblks * PAGE_SIZE;
        mtx_unlock(&sw_dev_mtx);
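
For illustration only; this sketch is not part of the commit.  It is a
minimal userland C program showing the chunked-fill pattern that the new
loop uses: walk the device in BLIST_BMAP_RADIX-sized strides and clamp the
final request so it never runs past nblks.  The blist(9) KPI is kernel-only,
so toy_fill(), TOY_BMAP_RADIX, and the device size below are hypothetical
stand-ins for blist_fill(), BLIST_BMAP_RADIX, and a real device's nblks;
only the loop structure mirrors the committed code.

#include <stdio.h>

#define TOY_BMAP_RADIX  64              /* stand-in for BLIST_BMAP_RADIX */

/*
 * Hypothetical stand-in for blist_fill().  The real function marks
 * "count" blocks starting at "blkno" as allocated and returns how many
 * blocks it actually filled; this toy just reports the range and claims
 * all of them.  Like the real function, it handles at most
 * TOY_BMAP_RADIX blocks per call, so callers must loop.
 */
static long
toy_fill(long blkno, long count)
{

	printf("fill [%ld, %ld)\n", blkno, blkno + count);
	return (count);
}

int
main(void)
{
	long nblks = 1000;              /* made-up device size in blocks */
	long avail = nblks;             /* stand-in for swap_pager_avail */
	long dvbase, chunk;

	/*
	 * Stride over the device in TOY_BMAP_RADIX-sized chunks.  The
	 * clamp on the last chunk plays the role of ulmin() in the
	 * committed loop: without it, the final call would request
	 * blocks beyond nblks, the kind of out-of-range request that
	 * the old dmmax-based loop issued and that blist_fill()
	 * silently ignored.
	 */
	for (dvbase = 0; dvbase < nblks; dvbase += TOY_BMAP_RADIX) {
		chunk = nblks - dvbase;
		if (chunk > TOY_BMAP_RADIX)
			chunk = TOY_BMAP_RADIX;
		avail -= toy_fill(dvbase, chunk);
	}
	printf("avail = %ld\n", avail);
	return (0);
}

With the made-up nblks of 1000, the loop issues fifteen 64-block fills and
one final 40-block fill, never touching a block past the end of the device.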