Title: [243144] trunk/Source/bmalloc
Revision: 243144
Author: msab...@apple.com
Date: 2019-03-19 10:31:01 -0700 (Tue, 19 Mar 2019)

Log Message

[BMalloc] Scavenger should react to recent memory activity
https://bugs.webkit.org/show_bug.cgi?id=195895

Reviewed by Geoffrey Garen.

This change adds a "used since last scavenge" bit to the objects that are scavenged.  When an object is allocated, that bit is set.
When we scavenge, if the bit is set, we clear it.  If the bit was already clear, we decommit the object.  The timing
of scavenging has been changed as well.  We perform our first scavenge almost immediately after bmalloc is initialized
(10ms later).  Each subsequent scavenge is scheduled as a multiple of the time the previous scavenge took.  We bound
this computed wait time between a minimum and a maximum.  Through empirical testing, the multiplier, minimum, and
maximum were chosen as 150x, 100ms, and 10,000ms respectively.  For mini-mode, when the JIT is disabled, we use the
much more aggressive values of 50x, 25ms, and 500ms.
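
As a sketch of the rescheduling arithmetic just described (the function name and signature are illustrative, not bmalloc's actual API):

    #include <algorithm>
    #include <chrono>

    // Sketch: derive the scavenger's next wait time from how long the scavenge
    // that just finished took, using the constants quoted above.
    std::chrono::milliseconds nextWaitTime(std::chrono::steady_clock::duration timeSpentScavenging, bool isInMiniMode)
    {
        long multiplier = isInMiniMode ? 50 : 150;
        auto minimum = std::chrono::milliseconds(isInMiniMode ? 25 : 100);
        auto maximum = std::chrono::milliseconds(isInMiniMode ? 500 : 10000);
        auto scaled = std::chrono::duration_cast<std::chrono::milliseconds>(timeSpentScavenging * multiplier);
        return std::min(std::max(scaled, minimum), maximum);
    }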

Eliminated partial scavenging, since this change allows any scavenge to be partial or full based on the recent
use of the objects on the various free lists.
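
A minimal sketch of the per-object decision the scavenger now makes (Object stands in for SmallPage, Chunk, or LargeRange; decommit() is a hypothetical stand-in for the type-specific decommit paths in Heap::scavenge):

    #include <cstddef>

    // Hypothetical stand-in for the type-specific decommit work.
    template<typename Object> void decommit(Object&);

    // Sketch: the first pass after use clears the bit; a later pass that finds
    // the bit still clear decommits the object.
    template<typename Object>
    void scavengeOne(Object& object, std::size_t& deferredDecommits)
    {
        if (object.usedSinceLastScavenge()) {
            object.clearUsedSinceLastScavenge();
            deferredDecommits++; // skipped this pass; the scavenger reschedules itself
            return;
        }
        decommit(object); // idle for a full scavenge interval: release its memory
    }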

* bmalloc/Chunk.h:
(bmalloc::Chunk::usedSinceLastScavenge):
(bmalloc::Chunk::clearUsedSinceLastScavenge):
(bmalloc::Chunk::setUsedSinceLastScavenge):
* bmalloc/Heap.cpp:
(bmalloc::Heap::scavenge):
(bmalloc::Heap::allocateSmallChunk):
(bmalloc::Heap::allocateSmallPage):
(bmalloc::Heap::splitAndAllocate):
(bmalloc::Heap::tryAllocateLarge):
(bmalloc::Heap::scavengeToHighWatermark): Deleted.
* bmalloc/Heap.h:
* bmalloc/IsoDirectory.h:
* bmalloc/IsoDirectoryInlines.h:
(bmalloc::passedNumPages>::takeFirstEligible):
(bmalloc::passedNumPages>::scavenge):
(bmalloc::passedNumPages>::scavengeToHighWatermark): Deleted.
* bmalloc/IsoHeapImpl.h:
* bmalloc/IsoHeapImplInlines.h:
(bmalloc::IsoHeapImpl<Config>::scavengeToHighWatermark): Deleted.
* bmalloc/LargeMap.cpp:
(bmalloc::LargeMap::add):
* bmalloc/LargeRange.h:
(bmalloc::LargeRange::LargeRange):
(bmalloc::LargeRange::usedSinceLastScavenge):
(bmalloc::LargeRange::clearUsedSinceLastScavenge):
(bmalloc::LargeRange::setUsedSinceLastScavenge):
(): Deleted.
* bmalloc/Scavenger.cpp:
(bmalloc::Scavenger::Scavenger):
(bmalloc::Scavenger::threadRunLoop):
(bmalloc::Scavenger::timeSinceLastPartialScavenge): Deleted.
(bmalloc::Scavenger::partialScavenge): Deleted.
* bmalloc/Scavenger.h:
* bmalloc/SmallPage.h:
(bmalloc::SmallPage::usedSinceLastScavenge):
(bmalloc::SmallPage::clearUsedSinceLastScavenge):
(bmalloc::SmallPage::setUsedSinceLastScavenge):

Modified Paths

trunk/Source/bmalloc/ChangeLog
trunk/Source/bmalloc/bmalloc/Chunk.h
trunk/Source/bmalloc/bmalloc/Heap.cpp
trunk/Source/bmalloc/bmalloc/Heap.h
trunk/Source/bmalloc/bmalloc/IsoDirectory.h
trunk/Source/bmalloc/bmalloc/IsoDirectoryInlines.h
trunk/Source/bmalloc/bmalloc/IsoHeapImpl.h
trunk/Source/bmalloc/bmalloc/IsoHeapImplInlines.h
trunk/Source/bmalloc/bmalloc/LargeMap.cpp
trunk/Source/bmalloc/bmalloc/LargeRange.h
trunk/Source/bmalloc/bmalloc/Scavenger.cpp
trunk/Source/bmalloc/bmalloc/Scavenger.h
trunk/Source/bmalloc/bmalloc/SmallPage.h

Diff

Modified: trunk/Source/bmalloc/ChangeLog (243143 => 243144)


--- trunk/Source/bmalloc/ChangeLog	2019-03-19 17:05:55 UTC (rev 243143)
+++ trunk/Source/bmalloc/ChangeLog	2019-03-19 17:31:01 UTC (rev 243144)
@@ -1,3 +1,60 @@
+2019-03-18  Michael Saboff  <msab...@apple.com>
+
+        [BMalloc] Scavenger should react to recent memory activity
+        https://bugs.webkit.org/show_bug.cgi?id=195895
+
+        Reviewed by Geoffrey Garen.
+
+        This change adds a "used since last scavenge" bit to the objects that are scavenged.  When an object is allocated, that bit is set.
+        When we scavenge, if the bit is set, we clear it.  If the bit was already clear, we decommit the object.  The timing
+        of scavenging has been changed as well.  We perform our first scavenge almost immediately after bmalloc is initialized
+        (10ms later).  Each subsequent scavenge is scheduled as a multiple of the time the previous scavenge took.  We bound
+        this computed wait time between a minimum and a maximum.  Through empirical testing, the multiplier, minimum, and
+        maximum were chosen as 150x, 100ms, and 10,000ms respectively.  For mini-mode, when the JIT is disabled, we use the
+        much more aggressive values of 50x, 25ms, and 500ms.
+
+        Eliminated partial scavenging, since this change allows any scavenge to be partial or full based on the recent
+        use of the objects on the various free lists.
+
+        * bmalloc/Chunk.h:
+        (bmalloc::Chunk::usedSinceLastScavenge):
+        (bmalloc::Chunk::clearUsedSinceLastScavenge):
+        (bmalloc::Chunk::setUsedSinceLastScavenge):
+        * bmalloc/Heap.cpp:
+        (bmalloc::Heap::scavenge):
+        (bmalloc::Heap::allocateSmallChunk):
+        (bmalloc::Heap::allocateSmallPage):
+        (bmalloc::Heap::splitAndAllocate):
+        (bmalloc::Heap::tryAllocateLarge):
+        (bmalloc::Heap::scavengeToHighWatermark): Deleted.
+        * bmalloc/Heap.h:
+        * bmalloc/IsoDirectory.h:
+        * bmalloc/IsoDirectoryInlines.h:
+        (bmalloc::passedNumPages>::takeFirstEligible):
+        (bmalloc::passedNumPages>::scavenge):
+        (bmalloc::passedNumPages>::scavengeToHighWatermark): Deleted.
+        * bmalloc/IsoHeapImpl.h:
+        * bmalloc/IsoHeapImplInlines.h:
+        (bmalloc::IsoHeapImpl<Config>::scavengeToHighWatermark): Deleted.
+        * bmalloc/LargeMap.cpp:
+        (bmalloc::LargeMap::add):
+        * bmalloc/LargeRange.h:
+        (bmalloc::LargeRange::LargeRange):
+        (bmalloc::LargeRange::usedSinceLastScavenge):
+        (bmalloc::LargeRange::clearUsedSinceLastScavenge):
+        (bmalloc::LargeRange::setUsedSinceLastScavenge):
+        (): Deleted.
+        * bmalloc/Scavenger.cpp:
+        (bmalloc::Scavenger::Scavenger):
+        (bmalloc::Scavenger::threadRunLoop):
+        (bmalloc::Scavenger::timeSinceLastPartialScavenge): Deleted.
+        (bmalloc::Scavenger::partialScavenge): Deleted.
+        * bmalloc/Scavenger.h:
+        * bmalloc/SmallPage.h:
+        (bmalloc::SmallPage::usedSinceLastScavenge):
+        (bmalloc::SmallPage::clearUsedSinceLastScavenge):
+        (bmalloc::SmallPage::setUsedSinceLastScavenge):
+
 2019-03-14  Yusuke Suzuki  <ysuz...@apple.com>
 
         [bmalloc] Add StaticPerProcess for known types to save pages

Modified: trunk/Source/bmalloc/bmalloc/Chunk.h (243143 => 243144)


--- trunk/Source/bmalloc/bmalloc/Chunk.h	2019-03-19 17:05:55 UTC (rev 243143)
+++ trunk/Source/bmalloc/bmalloc/Chunk.h	2019-03-19 17:31:01 UTC (rev 243144)
@@ -45,6 +45,10 @@
     void deref() { BASSERT(m_refCount); --m_refCount; }
     unsigned refCount() { return m_refCount; }
 
+    bool usedSinceLastScavenge() { return m_usedSinceLastScavenge; }
+    void clearUsedSinceLastScavenge() { m_usedSinceLastScavenge = false; }
+    void setUsedSinceLastScavenge() { m_usedSinceLastScavenge = true; }
+
     size_t offset(void*);
 
     char* address(size_t offset);
@@ -59,6 +63,7 @@
 
 private:
     size_t m_refCount { };
+    bool m_usedSinceLastScavenge: 1;
     List<SmallPage> m_freePages { };
 
     std::array<SmallLine, chunkSize / smallLineSize> m_lines { };

Modified: trunk/Source/bmalloc/bmalloc/Heap.cpp (243143 => 243144)


--- trunk/Source/bmalloc/bmalloc/Heap.cpp	2019-03-19 17:05:55 UTC (rev 243143)
+++ trunk/Source/bmalloc/bmalloc/Heap.cpp	2019-03-19 17:31:01 UTC (rev 243144)
@@ -175,7 +175,7 @@
 #endif
 }
 
-void Heap::scavenge(std::lock_guard<Mutex>& lock, BulkDecommit& decommitter)
+void Heap::scavenge(std::lock_guard<Mutex>& lock, BulkDecommit& decommitter, size_t& deferredDecommits)
 {
     for (auto& list : m_freePages) {
         for (auto* chunk : list) {
@@ -182,6 +182,11 @@
             for (auto* page : chunk->freePages()) {
                 if (!page->hasPhysicalPages())
                     continue;
+                if (page->usedSinceLastScavenge()) {
+                    page->clearUsedSinceLastScavenge();
+                    deferredDecommits++;
+                    continue;
+                }
 
                 size_t pageSize = bmalloc::pageSize(&list - &m_freePages[0]);
                 size_t decommitSize = physicalPageSizeSloppy(page->begin()->begin(), pageSize);
@@ -189,38 +194,38 @@
                 m_footprint -= decommitSize;
                 decommitter.addEager(page->begin()->begin(), pageSize);
                 page->setHasPhysicalPages(false);
-#if ENABLE_PHYSICAL_PAGE_MAP 
+#if ENABLE_PHYSICAL_PAGE_MAP
                 m_physicalPageMap.decommit(page->begin()->begin(), pageSize);
 #endif
             }
         }
     }
-    
+
     for (auto& list : m_chunkCache) {
-        while (!list.isEmpty())
-            deallocateSmallChunk(list.pop(), &list - &m_chunkCache[0]);
+        for (auto iter = list.begin(); iter != list.end(); ) {
+            Chunk* chunk = *iter;
+            if (chunk->usedSinceLastScavenge()) {
+                chunk->clearUsedSinceLastScavenge();
+                deferredDecommits++;
+                ++iter;
+                continue;
+            }
+            ++iter;
+            list.remove(chunk);
+            deallocateSmallChunk(chunk, &list - &m_chunkCache[0]);
+        }
     }
 
     for (LargeRange& range : m_largeFree) {
-        m_highWatermark = std::min(m_highWatermark, static_cast<void*>(range.begin()));
+        if (range.usedSinceLastScavenge()) {
+            range.clearUsedSinceLastScavenge();
+            deferredDecommits++;
+            continue;
+        }
         decommitLargeRange(lock, range, decommitter);
     }
-
-    m_freeableMemory = 0;
 }
 
-void Heap::scavengeToHighWatermark(std::lock_guard<Mutex>& lock, BulkDecommit& decommitter)
-{
-    void* newHighWaterMark = nullptr;
-    for (LargeRange& range : m_largeFree) {
-        if (range.begin() <= m_highWatermark)
-            newHighWaterMark = std::min(newHighWaterMark, static_cast<void*>(range.begin()));
-        else
-            decommitLargeRange(lock, range, decommitter);
-    }
-    m_highWatermark = newHighWaterMark;
-}
-
 void Heap::deallocateLineCache(std::unique_lock<Mutex>&, LineCache& lineCache)
 {
     for (auto& list : lineCache) {
@@ -249,6 +254,7 @@
 
         forEachPage(chunk, pageSize, [&](SmallPage* page) {
             page->setHasPhysicalPages(true);
+            page->setUsedSinceLastScavenge();
             page->setHasFreeLines(lock, true);
             chunk->freePages().push(page);
         });
@@ -310,6 +316,7 @@
         Chunk* chunk = m_freePages[pageClass].tail();
 
         chunk->ref();
+        chunk->setUsedSinceLastScavenge();
 
         SmallPage* page = chunk->freePages().pop();
         if (chunk->freePages().isEmpty())
@@ -324,10 +331,11 @@
             m_footprint += physicalSize;
             vmAllocatePhysicalPagesSloppy(page->begin()->begin(), pageSize);
             page->setHasPhysicalPages(true);
-#if ENABLE_PHYSICAL_PAGE_MAP 
+#if ENABLE_PHYSICAL_PAGE_MAP
             m_physicalPageMap.commit(page->begin()->begin(), pageSize);
 #endif
         }
+        page->setUsedSinceLastScavenge();
 
         return page;
     }();
@@ -585,7 +593,6 @@
     m_freeableMemory -= range.totalPhysicalSize();
 
     void* result = splitAndAllocate(lock, range, alignment, size).begin();
-    m_highWatermark = std::max(m_highWatermark, result);
     return result;
 }
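
The chunk cache loop above advances the iterator past the current chunk before removing it, so removal cannot invalidate the iterator. The same idiom with a standard container, as a self-contained sketch:

    #include <list>

    // Sketch: erase elements while iterating by advancing the iterator first,
    // mirroring the ++iter-before-remove pattern in Heap::scavenge above.
    void eraseEvens(std::list<int>& values)
    {
        for (auto iter = values.begin(); iter != values.end(); ) {
            auto current = iter++;
            if (*current % 2 == 0)
                values.erase(current);
        }
    }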
 

Modified: trunk/Source/bmalloc/bmalloc/Heap.h (243143 => 243144)


--- trunk/Source/bmalloc/bmalloc/Heap.h	2019-03-19 17:05:55 UTC (rev 243143)
+++ trunk/Source/bmalloc/bmalloc/Heap.h	2019-03-19 17:31:01 UTC (rev 243144)
@@ -76,9 +76,8 @@
     size_t largeSize(std::unique_lock<Mutex>&, void*);
     void shrinkLarge(std::unique_lock<Mutex>&, const Range&, size_t);
 
-    void scavenge(std::lock_guard<Mutex>&, BulkDecommit&);
+    void scavenge(std::lock_guard<Mutex>&, BulkDecommit&, size_t& deferredDecommits);
     void scavenge(std::lock_guard<Mutex>&, BulkDecommit&, size_t& freed, size_t goal);
-    void scavengeToHighWatermark(std::lock_guard<Mutex>&, BulkDecommit&);
 
     size_t freeableMemory(std::lock_guard<Mutex>&);
     size_t footprint();
@@ -153,8 +152,6 @@
 #if ENABLE_PHYSICAL_PAGE_MAP 
     PhysicalPageMap m_physicalPageMap;
 #endif
-
-    void* m_highWatermark { nullptr };
 };
 
 inline void Heap::allocateSmallBumpRanges(

Modified: trunk/Source/bmalloc/bmalloc/IsoDirectory.h (243143 => 243144)


--- trunk/Source/bmalloc/bmalloc/IsoDirectory.h	2019-03-19 17:05:55 UTC (rev 243143)
+++ trunk/Source/bmalloc/bmalloc/IsoDirectory.h	2019-03-19 17:31:01 UTC (rev 243144)
@@ -75,7 +75,6 @@
     // Iterate over all empty and committed pages, and put them into the vector. This also records the
     // pages as being decommitted. It's the caller's job to do the actual decommitting.
     void scavenge(Vector<DeferredDecommit>&);
-    void scavengeToHighWatermark(Vector<DeferredDecommit>&);
 
     template<typename Func>
     void forEachCommittedPage(const Func&);
@@ -90,7 +89,6 @@
     Bits<numPages> m_committed;
     std::array<IsoPage<Config>*, numPages> m_pages;
     unsigned m_firstEligible { 0 };
-    unsigned m_highWatermark { 0 };
 };
 
 } // namespace bmalloc

Modified: trunk/Source/bmalloc/bmalloc/IsoDirectoryInlines.h (243143 => 243144)


--- trunk/Source/bmalloc/bmalloc/IsoDirectoryInlines.h	2019-03-19 17:05:55 UTC (rev 243143)
+++ trunk/Source/bmalloc/bmalloc/IsoDirectoryInlines.h	2019-03-19 17:31:01 UTC (rev 243144)
@@ -51,8 +51,6 @@
     if (pageIndex >= numPages)
         return EligibilityKind::Full;
 
-    m_highWatermark = std::max(pageIndex, m_highWatermark);
-    
     Scavenger& scavenger = *Scavenger::get();
     scavenger.didStartGrowing();
     
@@ -143,21 +141,9 @@
         [&] (size_t index) {
             scavengePage(index, decommits);
         });
-    m_highWatermark = 0;
 }
 
 template<typename Config, unsigned passedNumPages>
-void IsoDirectory<Config, passedNumPages>::scavengeToHighWatermark(Vector<DeferredDecommit>& decommits)
-{
-    (m_empty & m_committed).forEachSetBit(
-        [&] (size_t index) {
-            if (index > m_highWatermark)
-                scavengePage(index, decommits);
-        });
-    m_highWatermark = 0;
-}
-
-template<typename Config, unsigned passedNumPages>
 template<typename Func>
 void IsoDirectory<Config, passedNumPages>::forEachCommittedPage(const Func& func)
 {

Modified: trunk/Source/bmalloc/bmalloc/IsoHeapImpl.h (243143 => 243144)


--- trunk/Source/bmalloc/bmalloc/IsoHeapImpl.h	2019-03-19 17:05:55 UTC (rev 243143)
+++ trunk/Source/bmalloc/bmalloc/IsoHeapImpl.h	2019-03-19 17:31:01 UTC (rev 243144)
@@ -40,7 +40,6 @@
     virtual ~IsoHeapImplBase();
     
     virtual void scavenge(Vector<DeferredDecommit>&) = 0;
-    virtual void scavengeToHighWatermark(Vector<DeferredDecommit>&) = 0;
     virtual size_t freeableMemory() = 0;
     virtual size_t footprint() = 0;
     
@@ -72,7 +71,6 @@
     void didBecomeEligible(IsoDirectory<Config, IsoDirectoryPage<Config>::numPages>*);
     
     void scavenge(Vector<DeferredDecommit>&) override;
-    void scavengeToHighWatermark(Vector<DeferredDecommit>&) override;
 
     size_t freeableMemory() override;
 

Modified: trunk/Source/bmalloc/bmalloc/IsoHeapImplInlines.h (243143 => 243144)


--- trunk/Source/bmalloc/bmalloc/IsoHeapImplInlines.h	2019-03-19 17:05:55 UTC (rev 243143)
+++ trunk/Source/bmalloc/bmalloc/IsoHeapImplInlines.h	2019-03-19 17:31:01 UTC (rev 243144)
@@ -110,19 +110,6 @@
 }
 
 template<typename Config>
-void IsoHeapImpl<Config>::scavengeToHighWatermark(Vector<DeferredDecommit>& decommits)
-{
-    std::lock_guard<Mutex> locker(this->lock);
-    if (!m_directoryHighWatermark)
-        m_inlineDirectory.scavengeToHighWatermark(decommits);
-    for (IsoDirectoryPage<Config>* page = m_headDirectory; page; page = page->next) {
-        if (page->index() >= m_directoryHighWatermark)
-            page->payload.scavengeToHighWatermark(decommits);
-    }
-    m_directoryHighWatermark = 0;
-}
-
-template<typename Config>
 size_t IsoHeapImpl<Config>::freeableMemory()
 {
     return m_freeableMemory;

Modified: trunk/Source/bmalloc/bmalloc/LargeMap.cpp (243143 => 243144)


--- trunk/Source/bmalloc/bmalloc/LargeMap.cpp	2019-03-19 17:05:55 UTC (rev 243143)
+++ trunk/Source/bmalloc/bmalloc/LargeMap.cpp	2019-03-19 17:31:01 UTC (rev 243144)
@@ -75,7 +75,8 @@
 
         merged = merge(merged, m_free.pop(i--));
     }
-    
+
+    merged.setUsedSinceLastScavenge();
     m_free.push(merged);
 }
 

Modified: trunk/Source/bmalloc/bmalloc/LargeRange.h (243143 => 243144)


--- trunk/Source/bmalloc/bmalloc/LargeRange.h	2019-03-19 17:05:55 UTC (rev 243143)
+++ trunk/Source/bmalloc/bmalloc/LargeRange.h	2019-03-19 17:31:01 UTC (rev 243144)
@@ -37,6 +37,8 @@
         : Range()
         , m_startPhysicalSize(0)
         , m_totalPhysicalSize(0)
+        , m_isEligible(true)
+        , m_usedSinceLastScavenge(false)
     {
     }
 
@@ -44,15 +46,19 @@
         : Range(other)
         , m_startPhysicalSize(startPhysicalSize)
         , m_totalPhysicalSize(totalPhysicalSize)
+        , m_isEligible(true)
+        , m_usedSinceLastScavenge(false)
     {
         BASSERT(this->size() >= this->totalPhysicalSize());
         BASSERT(this->totalPhysicalSize() >= this->startPhysicalSize());
     }
 
-    LargeRange(void* begin, size_t size, size_t startPhysicalSize, size_t totalPhysicalSize)
+    LargeRange(void* begin, size_t size, size_t startPhysicalSize, size_t totalPhysicalSize, bool usedSinceLastScavenge = false)
         : Range(begin, size)
         , m_startPhysicalSize(startPhysicalSize)
         , m_totalPhysicalSize(totalPhysicalSize)
+        , m_isEligible(true)
+        , m_usedSinceLastScavenge(usedSinceLastScavenge)
     {
         BASSERT(this->size() >= this->totalPhysicalSize());
         BASSERT(this->totalPhysicalSize() >= this->startPhysicalSize());
@@ -83,6 +89,10 @@
     void setEligible(bool eligible) { m_isEligible = eligible; }
     bool isEligibile() const { return m_isEligible; }
 
+    bool usedSinceLastScavenge() const { return m_usedSinceLastScavenge; }
+    void clearUsedSinceLastScavenge() { m_usedSinceLastScavenge = false; }
+    void setUsedSinceLastScavenge() { m_usedSinceLastScavenge = true; }
+
     bool operator<(const void* other) const { return begin() < other; }
     bool operator<(const LargeRange& other) const { return begin() < other.begin(); }
 
@@ -89,7 +99,8 @@
 private:
     size_t m_startPhysicalSize;
     size_t m_totalPhysicalSize;
-    bool m_isEligible { true };
+    unsigned m_isEligible: 1;
+    unsigned m_usedSinceLastScavenge: 1;
 };
 
 inline bool canMerge(const LargeRange& a, const LargeRange& b)
@@ -112,12 +123,14 @@
 inline LargeRange merge(const LargeRange& a, const LargeRange& b)
 {
     const LargeRange& left = std::min(a, b);
+    bool mergedUsedSinceLastScavenge = a.usedSinceLastScavenge() || b.usedSinceLastScavenge();
     if (left.size() == left.startPhysicalSize()) {
         return LargeRange(
             left.begin(),
             a.size() + b.size(),
             a.startPhysicalSize() + b.startPhysicalSize(),
-            a.totalPhysicalSize() + b.totalPhysicalSize());
+            a.totalPhysicalSize() + b.totalPhysicalSize(),
+            mergedUsedSinceLastScavenge);
     }
 
     return LargeRange(
@@ -124,7 +137,8 @@
         left.begin(),
         a.size() + b.size(),
         left.startPhysicalSize(),
-        a.totalPhysicalSize() + b.totalPhysicalSize());
+        a.totalPhysicalSize() + b.totalPhysicalSize(),
+        mergedUsedSinceLastScavenge);
 }
 
 inline std::pair<LargeRange, LargeRange> LargeRange::split(size_t leftSize) const
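
The merge rule above ORs the two used bits, so a coalesced range is treated as idle only once every piece of it has gone a full scavenge pass without use. A reduced sketch of just that rule (the struct is illustrative, not LargeRange itself):

    // Sketch: the two packed one-bit flags this patch gives LargeRange, and
    // the merge rule that ORs the used bits of the inputs.
    struct RangeFlags {
        unsigned isEligible : 1;
        unsigned usedSinceLastScavenge : 1;
    };

    RangeFlags mergeFlags(RangeFlags a, RangeFlags b)
    {
        RangeFlags merged;
        merged.isEligible = 1; // merged ranges start out eligible
        merged.usedSinceLastScavenge = a.usedSinceLastScavenge | b.usedSinceLastScavenge;
        return merged;
    }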

Modified: trunk/Source/bmalloc/bmalloc/Scavenger.cpp (243143 => 243144)


--- trunk/Source/bmalloc/bmalloc/Scavenger.cpp	2019-03-19 17:05:55 UTC (rev 243143)
+++ trunk/Source/bmalloc/bmalloc/Scavenger.cpp	2019-03-19 17:31:01 UTC (rev 243144)
@@ -80,7 +80,8 @@
     dispatch_resume(m_pressureHandlerDispatchSource);
     dispatch_release(queue);
 #endif
-    
+    m_waitTime = std::chrono::milliseconds(10);
+
     m_thread = std::thread(&threadEntryPoint, this);
 }
 
@@ -177,12 +178,6 @@
     return std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::steady_clock::now() - m_lastFullScavengeTime);
 }
 
-std::chrono::milliseconds Scavenger::timeSinceLastPartialScavenge()
-{
-    std::unique_lock<Mutex> lock(m_mutex);
-    return std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::steady_clock::now() - m_lastPartialScavengeTime);
-}
-
 void Scavenger::enableMiniMode()
 {
     m_isInMiniMode = true; // We just store to this racily. The scavenger thread will eventually pick up the right value.
@@ -205,13 +200,17 @@
 
         {
             PrintTime printTime("\nfull scavenge under lock time");
+            size_t deferredDecommits = 0;
             std::lock_guard<Mutex> lock(Heap::mutex());
             for (unsigned i = numHeaps; i--;) {
                 if (!isActiveHeapKind(static_cast<HeapKind>(i)))
                     continue;
-                PerProcess<PerHeapKind<Heap>>::get()->at(i).scavenge(lock, decommitter);
+                PerProcess<PerHeapKind<Heap>>::get()->at(i).scavenge(lock, decommitter, deferredDecommits);
             }
             decommitter.processEager();
+
+            if (deferredDecommits)
+                m_state = State::RunSoon;
         }
 
         {
@@ -252,73 +251,6 @@
     }
 }
 
-void Scavenger::partialScavenge()
-{
-    std::unique_lock<Mutex> lock(m_scavengingMutex);
-
-    if (verbose) {
-        fprintf(stderr, "--------------------------------\n");
-        fprintf(stderr, "--before partial scavenging--\n");
-        dumpStats();
-    }
-
-    {
-        BulkDecommit decommitter;
-        {
-            PrintTime printTime("\npartialScavenge under lock time");
-            std::lock_guard<Mutex> lock(Heap::mutex());
-            for (unsigned i = numHeaps; i--;) {
-                if (!isActiveHeapKind(static_cast<HeapKind>(i)))
-                    continue;
-                Heap& heap = PerProcess<PerHeapKind<Heap>>::get()->at(i);
-                size_t freeableMemory = heap.freeableMemory(lock);
-                if (freeableMemory < 4 * MB)
-                    continue;
-                heap.scavengeToHighWatermark(lock, decommitter);
-            }
-
-            decommitter.processEager();
-        }
-
-        {
-            PrintTime printTime("partialScavenge lazy decommit time");
-            decommitter.processLazy();
-        }
-
-        {
-            PrintTime printTime("partialScavenge mark all as eligible time");
-            std::lock_guard<Mutex> lock(Heap::mutex());
-            for (unsigned i = numHeaps; i--;) {
-                if (!isActiveHeapKind(static_cast<HeapKind>(i)))
-                    continue;
-                Heap& heap = PerProcess<PerHeapKind<Heap>>::get()->at(i);
-                heap.markAllLargeAsEligibile(lock);
-            }
-        }
-    }
-
-    {
-        RELEASE_BASSERT(!m_deferredDecommits.size());
-        AllIsoHeaps::get()->forEach(
-            [&] (IsoHeapImplBase& heap) {
-                heap.scavengeToHighWatermark(m_deferredDecommits);
-            });
-        IsoHeapImplBase::finishScavenging(m_deferredDecommits);
-        m_deferredDecommits.shrink(0);
-    }
-
-    if (verbose) {
-        fprintf(stderr, "--after partial scavenging--\n");
-        dumpStats();
-        fprintf(stderr, "--------------------------------\n");
-    }
-
-    {
-        std::unique_lock<Mutex> lock(m_mutex);
-        m_lastPartialScavengeTime = std::chrono::steady_clock::now();
-    }
-}
-
 size_t Scavenger::freeableMemory()
 {
     size_t result = 0;
@@ -386,7 +318,7 @@
         
         if (m_state == State::RunSoon) {
             std::unique_lock<Mutex> lock(m_mutex);
-            m_condition.wait_for(lock, std::chrono::milliseconds(m_isInMiniMode ? 200 : 2000), [&]() { return m_state != State::RunSoon; });
+            m_condition.wait_for(lock, m_waitTime, [&]() { return m_state != State::RunSoon; });
         }
         
         m_state = State::Sleep;
@@ -400,67 +332,31 @@
             fprintf(stderr, "--------------------------------\n");
         }
 
-        enum class ScavengeMode {
-            None,
-            Partial,
-            Full
-        };
+        std::chrono::steady_clock::time_point start { std::chrono::steady_clock::now() };
+        
+        scavenge();
 
-        size_t freeableMemory = this->freeableMemory();
+        auto timeSpentScavenging = std::chrono::steady_clock::now() - start;
 
-        ScavengeMode scavengeMode = [&] {
-            auto timeSinceLastFullScavenge = this->timeSinceLastFullScavenge();
-            auto timeSinceLastPartialScavenge = this->timeSinceLastPartialScavenge();
-            auto timeSinceLastScavenge = std::min(timeSinceLastPartialScavenge, timeSinceLastFullScavenge);
+        if (verbose) {
+            fprintf(stderr, "time spent scavenging %lfms\n",
+                static_cast<double>(std::chrono::duration_cast<std::chrono::microseconds>(timeSpentScavenging).count()) / 1000);
+        }
 
-            if (isUnderMemoryPressure() && freeableMemory > 1 * MB && timeSinceLastScavenge > std::chrono::milliseconds(5))
-                return ScavengeMode::Full;
+        std::chrono::milliseconds newWaitTime;
 
-            if (!m_isProbablyGrowing) {
-                if (timeSinceLastFullScavenge < std::chrono::milliseconds(1000) && !m_isInMiniMode)
-                    return ScavengeMode::Partial;
-                return ScavengeMode::Full;
-            }
+        if (m_isInMiniMode) {
+            timeSpentScavenging *= 50;
+            newWaitTime = std::chrono::duration_cast<std::chrono::milliseconds>(timeSpentScavenging);
+            m_waitTime = std::min(std::max(newWaitTime, std::chrono::milliseconds(25)), std::chrono::milliseconds(500));
+        } else {
+            timeSpentScavenging *= 150;
+            newWaitTime = std::chrono::duration_cast<std::chrono::milliseconds>(timeSpentScavenging);
+            m_waitTime = std::min(std::max(newWaitTime, std::chrono::milliseconds(100)), std::chrono::milliseconds(10000));
+        }
 
-            if (m_isInMiniMode) {
-                if (timeSinceLastFullScavenge < std::chrono::milliseconds(200))
-                    return ScavengeMode::Partial;
-                return ScavengeMode::Full;
-            }
-
-#if BCPU(X86_64)
-            auto partialScavengeInterval = std::chrono::milliseconds(12000);
-#else
-            auto partialScavengeInterval = std::chrono::milliseconds(8000);
-#endif
-            if (timeSinceLastScavenge < partialScavengeInterval) {
-                // Rate limit partial scavenges.
-                return ScavengeMode::None;
-            }
-            if (freeableMemory < 25 * MB)
-                return ScavengeMode::None;
-            if (5 * freeableMemory < footprint())
-                return ScavengeMode::None;
-            return ScavengeMode::Partial;
-        }();
-
-        m_isProbablyGrowing = false;
-
-        switch (scavengeMode) {
-        case ScavengeMode::None: {
-            runSoon();
-            break;
-        }
-        case ScavengeMode::Partial: {
-            partialScavenge();
-            runSoon();
-            break;
-        }
-        case ScavengeMode::Full: {
-            scavenge();
-            break;
-        }
-        }
+        if (verbose)
+            fprintf(stderr, "new wait time %lldms\n", m_waitTime.count());
     }
 }
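
The deferredDecommits count threaded through Heap::scavenge is what replaces partial scavenging: if anything was skipped as recently used, the scavenger moves to RunSoon instead of sleeping until woken. A minimal sketch of that state decision (State mirrors Scavenger's enum; locking and heap iteration are elided):

    #include <cstddef>

    enum class State { Sleep, Run, RunSoon };

    // Sketch: after a pass, deferred decommits mean some memory was recently
    // used; run again soon so it can be decommitted once it goes idle.
    State stateAfterScavenge(std::size_t deferredDecommits)
    {
        return deferredDecommits ? State::RunSoon : State::Sleep;
    }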
 

Modified: trunk/Source/bmalloc/bmalloc/Scavenger.h (243143 => 243144)


--- trunk/Source/bmalloc/bmalloc/Scavenger.h	2019-03-19 17:05:55 UTC (rev 243143)
+++ trunk/Source/bmalloc/bmalloc/Scavenger.h	2019-03-19 17:31:01 UTC (rev 243144)
@@ -89,11 +89,10 @@
     void setThreadName(const char*);
 
     std::chrono::milliseconds timeSinceLastFullScavenge();
-    std::chrono::milliseconds timeSinceLastPartialScavenge();
-    void partialScavenge();
 
     std::atomic<State> m_state { State::Sleep };
     size_t m_scavengerBytes { 0 };
+    std::chrono::milliseconds m_waitTime;
     bool m_isProbablyGrowing { false };
     bool m_isInMiniMode { false };
     
@@ -103,7 +102,6 @@
 
     std::thread m_thread;
     std::chrono::steady_clock::time_point m_lastFullScavengeTime { std::chrono::steady_clock::now() };
-    std::chrono::steady_clock::time_point m_lastPartialScavengeTime { std::chrono::steady_clock::now() };
     
 #if BOS(DARWIN)
     dispatch_source_t m_pressureHandlerDispatchSource;

Modified: trunk/Source/bmalloc/bmalloc/SmallPage.h (243143 => 243144)


--- trunk/Source/bmalloc/bmalloc/SmallPage.h	2019-03-19 17:05:55 UTC (rev 243143)
+++ trunk/Source/bmalloc/bmalloc/SmallPage.h	2019-03-19 17:31:01 UTC (rev 243144)
@@ -51,6 +51,10 @@
     bool hasPhysicalPages() { return m_hasPhysicalPages; }
     void setHasPhysicalPages(bool hasPhysicalPages) { m_hasPhysicalPages = hasPhysicalPages; }
     
+    bool usedSinceLastScavenge() { return m_usedSinceLastScavenge; }
+    void clearUsedSinceLastScavenge() { m_usedSinceLastScavenge = false; }
+    void setUsedSinceLastScavenge() { m_usedSinceLastScavenge = true; }
+
     SmallLine* begin();
 
     unsigned char slide() const { return m_slide; }
@@ -59,6 +63,7 @@
 private:
     unsigned char m_hasFreeLines: 1;
     unsigned char m_hasPhysicalPages: 1;
+    unsigned char m_usedSinceLastScavenge: 1;
     unsigned char m_refCount: 7;
     unsigned char m_sizeClass;
     unsigned char m_slide;
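
The new SmallPage flag is effectively free: the existing bit-fields already used nine bits (1 + 1 + 7), so the tenth bit lands in padding and the struct does not grow. A sketch of the layout idea (illustrative struct; exact packing is ABI-dependent, though common ABIs agree here):

    #include <cstdio>

    // Sketch: SmallPage-style layout after the patch. The bit-fields already
    // spanned two bytes (refCount:7 cannot share a byte with the flag bits),
    // so the new one-bit flag fits in existing padding.
    struct PageBits {
        unsigned char hasFreeLines : 1;
        unsigned char hasPhysicalPages : 1;
        unsigned char usedSinceLastScavenge : 1;
        unsigned char refCount : 7;
        unsigned char sizeClass;
        unsigned char slide;
    };

    int main()
    {
        std::printf("sizeof(PageBits) = %zu\n", sizeof(PageBits)); // typically 4
        return 0;
    }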