Title: [198821] trunk/Source/bmalloc
Revision: 198821
Author: gga...@apple.com
Date: 2016-03-29 19:22:47 -0700 (Tue, 29 Mar 2016)

Log Message

bmalloc: page size should be configurable at runtime
https://bugs.webkit.org/show_bug.cgi?id=155993

Reviewed by Andreas Kling.

This is a memory win on 32-bit iOS devices, since their page size is
4kB rather than 16kB.

It's also a step toward supporting 64-bit iOS devices that have a
16kB/4kB virtual/physical page size split.

* bmalloc/Chunk.h: Align to largeAlignment since 2 * smallMax isn't
required by the boundary tag allocator.

(bmalloc::Chunk::page): Account for the slide when accessing a page.
Each SmallPage hashes 4kB of memory. When we want to allocate a region
of memory larger than 4kB, we store our metadata in the first SmallPage
in the region and we assign a slide to the remaining SmallPages, so
they forward to that first SmallPage when accessed.

NOTE: We could use a less flexible technique that just hashed by
vmPageSize() instead of 4kB, with no slide, but I think we'll be able
to use this slide technique to make even more page sizes dynamic at
runtime, which should save some memory and simplify the allocator.
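
To make the slide idea concrete, here is a small standalone sketch (not
the committed code; the toy Page struct and the 16kB virtual page size are
assumptions for illustration) showing how initialization assigns slides and
how a lookup subtracts the slide to reach the metadata-bearing page:

    #include <array>
    #include <cassert>
    #include <cstddef>

    // Toy model of the slide technique: each Page covers 4kB, but one
    // allocation may span a whole 16kB VM page (4 Pages). The first Page
    // holds the metadata; the rest carry a slide back to it.
    struct Page {
        unsigned char slide { 0 };
        unsigned char pageCount { 1 };
    };

    int main()
    {
        constexpr size_t smallPageSize = 4 * 1024;
        constexpr size_t vmPageSize = 16 * 1024;     // assumed, e.g. 64-bit iOS
        constexpr size_t pagesPerVMPage = vmPageSize / smallPageSize;

        std::array<Page, pagesPerVMPage> pages {};

        // Initialization, as described for Heap::allocateSmallPage().
        pages[0].pageCount = pagesPerVMPage;
        for (size_t i = 1; i < pagesPerVMPage; ++i)
            pages[i].slide = static_cast<unsigned char>(i);

        // Lookup, as described for Chunk::page(): hash by 4kB, then subtract the slide.
        size_t offset = 9000;                        // some byte inside the VM page
        Page* page = &pages[offset / smallPageSize];
        page -= page->slide;
        assert(page == &pages[0]);                   // always lands on the metadata page
        return 0;
    }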

(bmalloc::SmallPage::begin): It's invalid to access a SmallPage with
a slide, since such SmallPages do not contain meaningful data.

(bmalloc::SmallPage::end): Account for smallPageCount when computing
the size of a page.

(bmalloc::Chunk::pageBegin): Deleted.
(bmalloc::Chunk::pageEnd): Deleted.
(bmalloc::Object::pageBegin): Deleted.

* bmalloc/Heap.cpp:
(bmalloc::Heap::Heap): Cache vmPageSize because computing it might require
a syscall.

(bmalloc::Heap::initializeLineMetadata): Line metadata is a vector instead
of a 2D array because we don't know how much metadata we'll need until
we know the page size.
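
As a rough sketch of the new indexing (std::vector and the sizeClassCount
value below stand in for bmalloc's own types and constants), the former
metadata[sizeClass][lineNumber] lookup becomes a manual index into the flat
vector:

    #include <cstddef>
    #include <vector>

    struct LineMetadata { unsigned short startOffset; unsigned short objectCount; };

    // Flat indexing replaces metadata[sizeClass][lineNumber] because
    // smallLineCount is only known once the VM page size is known.
    LineMetadata& lineMetadata(std::vector<LineMetadata>& table, size_t vmPageSize,
        size_t sizeClass, size_t lineNumber)
    {
        const size_t smallLineSize = 256;            // matches bmalloc/Sizes.h
        size_t smallLineCount = vmPageSize / smallLineSize;
        return table[sizeClass * smallLineCount + lineNumber];
    }

    int main()
    {
        size_t vmPageSize = 16 * 1024;               // assumed for the example
        size_t sizeClassCount = 64;                  // hypothetical count
        std::vector<LineMetadata> table(sizeClassCount * (vmPageSize / 256));
        lineMetadata(table, vmPageSize, /* sizeClass */ 5, /* lineNumber */ 3).objectCount = 7;
        return 0;
    }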

(bmalloc::Heap::scavengeSmallPage): Be sure to revert the slide when
deallocating a page. Otherwise, the next attempt to allocate the page
will follow the stale slide while initializing it, sliding to nowhere.

(bmalloc::Heap::allocateSmallBumpRanges): Account for vector change to
line metadata.

(bmalloc::Heap::allocateSmallPage): Initialize slide and smallPageCount
since they aren't constant anymore.

(bmalloc::Heap::allocateLarge):
(bmalloc::Heap::splitAndAllocate):
(bmalloc::Heap::tryAllocateXLarge):
(bmalloc::Heap::shrinkXLarge): Adopt dynamic page size.

* bmalloc/Heap.h:

* bmalloc/Sizes.h: smallPageSize is no longer equal to the VM page
size -- it's just the smallest VM page size we're interested in supporting.

* bmalloc/SmallPage.h:
(bmalloc::SmallPage::slide):
(bmalloc::SmallPage::setSlide):
(bmalloc::SmallPage::smallPageCount):
(bmalloc::SmallPage::setSmallPageCount):
(bmalloc::SmallPage::ref):
(bmalloc::SmallPage::deref): Support slide and small page count as
dynamic values. This doesn't increase metadata size since sizeof(SmallPage)
rounds up to alignment anyway.
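
A hypothetical layout (not the real SmallPage) illustrating why the two
extra byte-sized fields are free: with pointer-aligned members in front,
the struct size already rounds up past them.

    #include <cstdio>

    // Sketch: on a typical 64-bit target both structs round up to the same
    // size, so adding m_smallPageCount and m_slide costs no metadata.
    struct Before { void* prev; void* next; unsigned char a, b, c; };
    struct After  { void* prev; void* next; unsigned char a, b, c, d, e; };

    int main()
    {
        std::printf("%zu %zu\n", sizeof(Before), sizeof(After)); // typically 24 24
        return 0;
    }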

* bmalloc/VMAllocate.h:
(bmalloc::vmPageSize):
(bmalloc::vmPageShift):
(bmalloc::vmSize):
(bmalloc::vmValidate):
(bmalloc::tryVMAllocate):
(bmalloc::vmDeallocatePhysicalPagesSloppy):
(bmalloc::vmAllocatePhysicalPagesSloppy): Treat page size as a variable.
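
For reference, a minimal sketch of the runtime page-size query that replaces
the old compile-time constant (POSIX-only; the rounding mirrors what vmSize()
does):

    #include <cstddef>
    #include <cstdio>
    #include <unistd.h>

    int main()
    {
        // Query the OS page size at runtime instead of hard-coding 4kB/16kB.
        size_t pageSize = static_cast<size_t>(sysconf(_SC_PAGESIZE));
        size_t request = 10000;
        size_t rounded = (request + pageSize - 1) / pageSize * pageSize; // like vmSize()
        std::printf("page size: %zu, vmSize(10000): %zu\n", pageSize, rounded);
        return 0;
    }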

* bmalloc/Vector.h:
(bmalloc::Vector::initialCapacity):
(bmalloc::Vector<T>::insert):
(bmalloc::Vector<T>::grow):
(bmalloc::Vector<T>::shrink):
(bmalloc::Vector<T>::shrinkCapacity):
(bmalloc::Vector<T>::growCapacity): Treat page size as a variable.
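
A short sketch of grow()'s semantics, using std::vector as a stand-in for
bmalloc's Vector: append default-constructed (zero) elements until the target
size is reached, which is what lets Heap size m_smallLineMetadata once the
page size is known.

    #include <cassert>
    #include <cstddef>
    #include <vector>

    // grow() only ever extends; shrinking remains the job of shrink().
    template<typename T>
    void grow(std::vector<T>& vector, size_t size)
    {
        assert(size >= vector.size());
        while (vector.size() < size)
            vector.push_back(T());
    }

    int main()
    {
        std::vector<unsigned short> counts;
        grow(counts, 8);
        assert(counts.size() == 8 && counts[3] == 0); // new entries are zero-filled
        return 0;
    }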

Modified Paths

trunk/Source/bmalloc/ChangeLog
trunk/Source/bmalloc/bmalloc/Chunk.h
trunk/Source/bmalloc/bmalloc/Heap.cpp
trunk/Source/bmalloc/bmalloc/Heap.h
trunk/Source/bmalloc/bmalloc/Sizes.h
trunk/Source/bmalloc/bmalloc/SmallPage.h
trunk/Source/bmalloc/bmalloc/VMAllocate.h
trunk/Source/bmalloc/bmalloc/Vector.h

Diff

Modified: trunk/Source/bmalloc/ChangeLog (198820 => 198821)


--- trunk/Source/bmalloc/ChangeLog	2016-03-30 02:18:56 UTC (rev 198820)
+++ trunk/Source/bmalloc/ChangeLog	2016-03-30 02:22:47 UTC (rev 198821)
@@ -1,3 +1,96 @@
+2016-03-29  Geoffrey Garen  <gga...@apple.com>
+
+        bmalloc: page size should be configurable at runtime
+        https://bugs.webkit.org/show_bug.cgi?id=155993
+
+        Reviewed by Andreas Kling.
+
+        This is a memory win on 32bit iOS devices, since their page sizes are
+        4kB and not 16kB.
+
+        It's also a step toward supporting 64bit iOS devices that have a
+        16kB/4kB virtual/physical page size split.
+
+        * bmalloc/Chunk.h: Align to largeAlignment since 2 * smallMax isn't
+        required by the boundary tag allocator.
+
+        (bmalloc::Chunk::page): Account for the slide when accessing a page.
+        Each SmallPage hashes 4kB of memory. When we want to allocate a region
+        of memory larger than 4kB, we store our metadata in the first SmallPage
+        in the region and we assign a slide to the remaining SmallPages, so
+        they forward to that first SmallPage when accessed.
+
+        NOTE: We could use a less flexible technique that just hashed by
+        vmPageSize() instead of 4kB at runtime, with no slide, but I think we'll
+        be able to use this slide technique to make even more page sizes
+        dynamically at runtime, which should save some memory and simplify
+        the allocator.
+
+        (bmalloc::SmallPage::begin): It's invalid to access a SmallPage with
+        a slide, since such SmallPages do not contain meaningful data.
+
+        (bmalloc::SmallPage::end): Account for smallPageCount when computing
+        the size of a page.
+
+        (bmalloc::Chunk::pageBegin): Deleted.
+        (bmalloc::Chunk::pageEnd): Deleted.
+        (bmalloc::Object::pageBegin): Deleted.
+
+        * bmalloc/Heap.cpp:
+        (bmalloc::Heap::Heap): Cache vmPageSize because computing it might require
+        a syscall.
+
+        (bmalloc::Heap::initializeLineMetadata): Line metadata is a vector instead
+        of a 2D array because we don't know how much metadata we'll need until
+        we know the page size.
+
+        (bmalloc::Heap::scavengeSmallPage): Be sure to revert the slide when
+        deallocating a page. Otherwise, the next attempt to allocate the page
+        will slide when initializing it, sliding to nowhere.
+
+        (bmalloc::Heap::allocateSmallBumpRanges): Account for vector change to
+        line metadata.
+
+        (bmalloc::Heap::allocateSmallPage): Initialize slide and smallPageCount
+        since they aren't constant anymore.
+
+        (bmalloc::Heap::allocateLarge):
+        (bmalloc::Heap::splitAndAllocate):
+        (bmalloc::Heap::tryAllocateXLarge):
+        (bmalloc::Heap::shrinkXLarge): Adopt dynamic page size.
+
+        * bmalloc/Heap.h:
+
+        * bmalloc/Sizes.h: smallPageSize is no longer equal to the VM page
+        size -- it's just the smallest VM page size we're interested in supporting.
+
+        * bmalloc/SmallPage.h:
+        (bmalloc::SmallPage::slide):
+        (bmalloc::SmallPage::setSlide):
+        (bmalloc::SmallPage::smallPageCount):
+        (bmalloc::SmallPage::setSmallPageCount):
+        (bmalloc::SmallPage::ref):
+        (bmalloc::SmallPage::deref): Support slide and small page count as
+        dynamic values. This doesn't increase metadata size since sizeof(SmallPage)
+        rounds up to alignment anyway.
+
+        * bmalloc/VMAllocate.h:
+        (bmalloc::vmPageSize):
+        (bmalloc::vmPageShift):
+        (bmalloc::vmSize):
+        (bmalloc::vmValidate):
+        (bmalloc::tryVMAllocate):
+        (bmalloc::vmDeallocatePhysicalPagesSloppy):
+        (bmalloc::vmAllocatePhysicalPagesSloppy): Treat page size as a variable.
+
+        * bmalloc/Vector.h:
+        (bmalloc::Vector::initialCapacity):
+        (bmalloc::Vector<T>::insert):
+        (bmalloc::Vector<T>::grow):
+        (bmalloc::Vector<T>::shrink):
+        (bmalloc::Vector<T>::shrinkCapacity):
+        (bmalloc::Vector<T>::growCapacity): Treat page size as a variable.
+
 2016-03-29  David Kilzer  <ddkil...@apple.com>
 
         bmalloc: add logging for mmap() failures

Modified: trunk/Source/bmalloc/bmalloc/Chunk.h (198820 => 198821)


--- trunk/Source/bmalloc/bmalloc/Chunk.h	2016-03-30 02:18:56 UTC (rev 198820)
+++ trunk/Source/bmalloc/bmalloc/Chunk.h	2016-03-30 02:22:47 UTC (rev 198821)
@@ -53,9 +53,6 @@
     SmallPage* page(size_t offset);
     SmallLine* line(size_t offset);
 
-    SmallPage* pageBegin() { return Object(m_memory).page(); }
-    SmallPage* pageEnd() { return m_pages.end(); }
-    
     SmallLine* lines() { return m_lines.begin(); }
     SmallPage* pages() { return m_pages.begin(); }
 
@@ -81,15 +78,12 @@
     // We use the X's for boundary tags and the O's for edge sentinels.
 
     std::array<SmallLine, chunkSize / smallLineSize> m_lines;
-    std::array<SmallPage, chunkSize / vmPageSize> m_pages;
+    std::array<SmallPage, chunkSize / smallPageSize> m_pages;
     std::array<BoundaryTag, boundaryTagCount> m_boundaryTags;
-    char m_memory[] __attribute__((aligned(2 * smallMax + 0)));
+    char m_memory[] __attribute__((aligned(largeAlignment + 0)));
 };
 
 static_assert(sizeof(Chunk) + largeMax <= chunkSize, "largeMax is too big");
-static_assert(
-    sizeof(Chunk) % vmPageSize + 2 * smallMax <= vmPageSize,
-    "the first page of object memory in a small chunk must be able to allocate smallMax");
 
 inline Chunk::Chunk(std::lock_guard<StaticMutex>& lock)
 {
@@ -165,8 +159,9 @@
 
 inline SmallPage* Chunk::page(size_t offset)
 {
-    size_t pageNumber = offset / vmPageSize;
-    return &m_pages[pageNumber];
+    size_t pageNumber = offset / smallPageSize;
+    SmallPage* page = &m_pages[pageNumber];
+    return page - page->slide();
 }
 
 inline SmallLine* Chunk::line(size_t offset)
@@ -190,15 +185,17 @@
 
 inline SmallLine* SmallPage::begin()
 {
+    BASSERT(!m_slide);
     Chunk* chunk = Chunk::get(this);
     size_t pageNumber = this - chunk->pages();
-    size_t lineNumber = pageNumber * smallLineCount;
+    size_t lineNumber = pageNumber * smallPageLineCount;
     return &chunk->lines()[lineNumber];
 }
 
 inline SmallLine* SmallPage::end()
 {
-    return begin() + smallLineCount;
+    BASSERT(!m_slide);
+    return begin() + m_smallPageCount * smallPageLineCount;
 }
 
 inline Object::Object(void* object)
@@ -219,11 +216,6 @@
     return m_chunk->object(m_offset);
 }
 
-inline void* Object::pageBegin()
-{
-    return m_chunk->object(roundDownToMultipleOf(vmPageSize, m_offset));
-}
-
 inline SmallLine* Object::line()
 {
     return m_chunk->line(m_offset);

Modified: trunk/Source/bmalloc/bmalloc/Heap.cpp (198820 => 198821)


--- trunk/Source/bmalloc/bmalloc/Heap.cpp	2016-03-30 02:18:56 UTC (rev 198820)
+++ trunk/Source/bmalloc/bmalloc/Heap.cpp	2016-03-30 02:22:47 UTC (rev 198821)
@@ -35,7 +35,8 @@
 namespace bmalloc {
 
 Heap::Heap(std::lock_guard<StaticMutex>&)
-    : m_largeObjects(VMState::HasPhysical::True)
+    : m_vmPageSize(vmPageSize())
+    , m_largeObjects(VMState::HasPhysical::True)
     , m_isAllocatingPages(false)
     , m_scavenger(*this, &Heap::concurrentScavenge)
 {
@@ -46,13 +47,16 @@
 {
     // We assume that m_smallLineMetadata is zero-filled.
 
+    size_t smallLineCount = m_vmPageSize / smallLineSize;
+    m_smallLineMetadata.grow(sizeClassCount * smallLineCount);
+
     for (size_t sizeClass = 0; sizeClass < sizeClassCount; ++sizeClass) {
         size_t size = objectSize(sizeClass);
-        auto& metadata = m_smallLineMetadata[sizeClass];
+        LineMetadata* pageMetadata = &m_smallLineMetadata[sizeClass * smallLineCount];
 
         size_t object = 0;
         size_t line = 0;
-        while (object < vmPageSize) {
+        while (object < m_vmPageSize) {
             line = object / smallLineSize;
             size_t leftover = object % smallLineSize;
 
@@ -60,15 +64,15 @@
             size_t remainder;
             divideRoundingUp(smallLineSize - leftover, size, objectCount, remainder);
 
-            metadata[line] = { static_cast<unsigned short>(leftover), static_cast<unsigned short>(objectCount) };
+            pageMetadata[line] = { static_cast<unsigned short>(leftover), static_cast<unsigned short>(objectCount) };
 
             object += objectCount * size;
         }
 
         // Don't allow the last object in a page to escape the page.
-        if (object > vmPageSize) {
-            BASSERT(metadata[line].objectCount);
-            --metadata[line].objectCount;
+        if (object > m_vmPageSize) {
+            BASSERT(pageMetadata[line].objectCount);
+            --pageMetadata[line].objectCount;
         }
     }
 }
@@ -100,7 +104,12 @@
 {
     SmallPage* page = m_smallPages.pop();
 
-    // Transform small object page back into a large object.
+    // Revert the slide() value on intermediate SmallPages so they hash to
+    // themselves again.
+    for (size_t i = 1; i < page->smallPageCount(); ++i)
+        page[i].setSlide(0);
+
+    // Revert our small object page back to large object.
     page->setObjectType(ObjectType::Large);
 
     LargeObject largeObject(page->begin()->begin());
@@ -143,13 +152,15 @@
     SmallPage* page = allocateSmallPage(lock, sizeClass);
     SmallLine* lines = page->begin();
     BASSERT(page->hasFreeLines(lock));
+    size_t smallLineCount = m_vmPageSize / smallLineSize;
+    LineMetadata* pageMetadata = &m_smallLineMetadata[sizeClass * smallLineCount];
 
     // Find a free line.
     for (size_t lineNumber = 0; lineNumber < smallLineCount; ++lineNumber) {
         if (lines[lineNumber].refCount(lock))
             continue;
 
-        LineMetadata& lineMetadata = m_smallLineMetadata[sizeClass][lineNumber];
+        LineMetadata& lineMetadata = pageMetadata[lineNumber];
         if (!lineMetadata.objectCount)
             continue;
 
@@ -170,7 +181,7 @@
             if (lines[lineNumber].refCount(lock))
                 break;
 
-            LineMetadata& lineMetadata = m_smallLineMetadata[sizeClass][lineNumber];
+            LineMetadata& lineMetadata = pageMetadata[lineNumber];
             if (!lineMetadata.objectCount)
                 continue;
 
@@ -200,16 +211,24 @@
         return page;
     }
 
-    size_t unalignedSize = largeMin + vmPageSize - largeAlignment + vmPageSize;
-    LargeObject largeObject = allocateLarge(lock, vmPageSize, vmPageSize, unalignedSize);
-
+    size_t unalignedSize = largeMin + m_vmPageSize - largeAlignment + m_vmPageSize;
+    LargeObject largeObject = allocateLarge(lock, m_vmPageSize, m_vmPageSize, unalignedSize);
+    
     // Transform our large object into a small object page. We deref here
-    // because our small objects will keep their own refcounts on the line.
+    // because our small objects will keep their own line refcounts.
     Object object(largeObject.begin());
     object.line()->deref(lock);
     object.page()->setObjectType(ObjectType::Small);
 
-    object.page()->setSizeClass(sizeClass);
+    SmallPage* page = object.page();
+    page->setSizeClass(sizeClass);
+    page->setSmallPageCount(m_vmPageSize / smallPageSize);
+
+    // Set a slide() value on intermediate SmallPages so they hash to their
+    // vmPageSize-sized page.
+    for (size_t i = 1; i < page->smallPageCount(); ++i)
+        page[i].setSlide(i);
+
     return object.page();
 }
 
@@ -307,7 +326,7 @@
     BASSERT(size >= largeMin);
     BASSERT(size == roundUpToMultipleOf<largeAlignment>(size));
     
-    if (size <= vmPageSize)
+    if (size <= m_vmPageSize)
         scavengeSmallPages(lock);
 
     LargeObject largeObject = m_largeObjects.take(size);
@@ -338,7 +357,7 @@
     BASSERT(alignment >= largeAlignment);
     BASSERT(isPowerOfTwo(alignment));
 
-    if (size <= vmPageSize)
+    if (size <= m_vmPageSize)
         scavengeSmallPages(lock);
 
     LargeObject largeObject = m_largeObjects.take(alignment, size, unalignedSize);
@@ -412,7 +431,7 @@
     // in the allocated list. This is an important optimization because it
     // keeps the free list short, speeding up allocation and merging.
 
-    std::pair<XLargeRange, XLargeRange> allocated = range.split(roundUpToMultipleOf<vmPageSize>(size));
+    std::pair<XLargeRange, XLargeRange> allocated = range.split(roundUpToMultipleOf(m_vmPageSize, size));
     if (allocated.first.vmState().hasVirtual()) {
         vmAllocatePhysicalPagesSloppy(allocated.first.begin(), allocated.first.size());
         allocated.first.setVMState(VMState::Physical);
@@ -429,7 +448,7 @@
 
     m_isAllocatingPages = true;
 
-    size = std::max(vmPageSize, size);
+    size = std::max(m_vmPageSize, size);
     alignment = roundUpToMultipleOf<xLargeAlignment>(alignment);
 
     XLargeRange range = m_xLargeMap.takeFree(alignment, size);
@@ -456,7 +475,7 @@
 {
     BASSERT(object.size() > newSize);
 
-    if (object.size() - newSize < vmPageSize)
+    if (object.size() - newSize < m_vmPageSize)
         return;
     
     XLargeRange range = m_xLargeMap.takeAllocated(object.begin());

Modified: trunk/Source/bmalloc/bmalloc/Heap.h (198820 => 198821)


--- trunk/Source/bmalloc/bmalloc/Heap.h	2016-03-30 02:18:56 UTC (rev 198820)
+++ trunk/Source/bmalloc/bmalloc/Heap.h	2016-03-30 02:22:47 UTC (rev 198821)
@@ -93,8 +93,10 @@
     void scavengeLargeObjects(std::unique_lock<StaticMutex>&, std::chrono::milliseconds);
     void scavengeXLargeObjects(std::unique_lock<StaticMutex>&, std::chrono::milliseconds);
 
-    std::array<std::array<LineMetadata, smallLineCount>, sizeClassCount> m_smallLineMetadata;
+    size_t m_vmPageSize;
 
+    Vector<LineMetadata> m_smallLineMetadata;
+
     std::array<List<SmallPage>, sizeClassCount> m_smallPagesWithFreeLines;
 
     List<SmallPage> m_smallPages;

Modified: trunk/Source/bmalloc/bmalloc/Sizes.h (198820 => 198821)


--- trunk/Source/bmalloc/bmalloc/Sizes.h	2016-03-30 02:18:56 UTC (rev 198820)
+++ trunk/Source/bmalloc/bmalloc/Sizes.h	2016-03-30 02:22:47 UTC (rev 198821)
@@ -46,14 +46,9 @@
     static const size_t alignment = 8;
     static const size_t alignmentMask = alignment - 1ul;
 
-#if BPLATFORM(IOS)
-    static const size_t vmPageSize = 16 * kB;
-#else
-    static const size_t vmPageSize = 4 * kB;
-#endif
-    
     static const size_t smallLineSize = 256;
-    static const size_t smallLineCount = vmPageSize / smallLineSize;
+    static const size_t smallPageSize = 4 * kB;
+    static const size_t smallPageLineCount = smallPageSize / smallLineSize;
 
     static const size_t smallMax = 1 * kB;
     static const size_t maskSizeClassMax = 512;

Modified: trunk/Source/bmalloc/bmalloc/SmallPage.h (198820 => 198821)


--- trunk/Source/bmalloc/bmalloc/SmallPage.h	2016-03-30 02:18:56 UTC (rev 198820)
+++ trunk/Source/bmalloc/bmalloc/SmallPage.h	2016-03-30 02:22:47 UTC (rev 198821)
@@ -58,10 +58,18 @@
     SmallLine* begin();
     SmallLine* end();
 
+    unsigned char slide() const { return m_slide; }
+    void setSlide(unsigned char slide) { m_slide = slide; }
+
+    unsigned char smallPageCount() const { return m_smallPageCount; }
+    void setSmallPageCount(unsigned char smallPageCount) { m_smallPageCount = smallPageCount; }
+
 private:
     unsigned char m_hasFreeLines: 1;
     unsigned char m_refCount: 7;
     unsigned char m_sizeClass;
+    unsigned char m_smallPageCount;
+    unsigned char m_slide;
     ObjectType m_objectType;
 
 static_assert(
@@ -71,12 +79,14 @@
 
 inline void SmallPage::ref(std::lock_guard<StaticMutex>&)
 {
+    BASSERT(!m_slide);
     ++m_refCount;
     BASSERT(m_refCount);
 }
 
 inline bool SmallPage::deref(std::lock_guard<StaticMutex>&)
 {
+    BASSERT(!m_slide);
     BASSERT(m_refCount);
     --m_refCount;
     return !m_refCount;

Modified: trunk/Source/bmalloc/bmalloc/VMAllocate.h (198820 => 198821)


--- trunk/Source/bmalloc/bmalloc/VMAllocate.h	2016-03-30 02:18:56 UTC (rev 198820)
+++ trunk/Source/bmalloc/bmalloc/VMAllocate.h	2016-03-30 02:22:47 UTC (rev 198821)
@@ -47,31 +47,35 @@
 #define BMALLOC_VM_TAG -1
 #endif
 
+inline size_t vmPageSize()
+{
+    return sysconf(_SC_PAGESIZE);
+}
+
+inline size_t vmPageShift()
+{
+    return log2(vmPageSize());
+}
+
 inline size_t vmSize(size_t size)
 {
-    return roundUpToMultipleOf<vmPageSize>(size);
+    return roundUpToMultipleOf(vmPageSize(), size);
 }
 
 inline void vmValidate(size_t vmSize)
 {
-    // We use getpagesize() here instead of vmPageSize because vmPageSize is
-    // allowed to be larger than the OS's true page size.
-
     UNUSED(vmSize);
     BASSERT(vmSize);
-    BASSERT(vmSize == roundUpToMultipleOf(static_cast<size_t>(getpagesize()), vmSize));
+    BASSERT(vmSize == roundUpToMultipleOf(vmPageSize(), vmSize));
 }
 
 inline void vmValidate(void* p, size_t vmSize)
 {
-    // We use getpagesize() here instead of vmPageSize because vmPageSize is
-    // allowed to be larger than the OS's true page size.
-
     vmValidate(vmSize);
     
     UNUSED(p);
     BASSERT(p);
-    BASSERT(p == mask(p, ~(getpagesize() - 1)));
+    BASSERT(p == mask(p, ~(vmPageSize() - 1)));
 }
 
 inline void* tryVMAllocate(size_t vmSize)
@@ -106,10 +110,7 @@
     vmValidate(vmSize);
     vmValidate(vmAlignment);
 
-    // We use getpagesize() here instead of vmPageSize because vmPageSize is
-    // allowed to be larger than the OS's true page size.
-
-    size_t mappedSize = vmAlignment - getpagesize() + vmSize;
+    size_t mappedSize = vmAlignment - vmPageSize() + vmSize;
     char* mapped = static_cast<char*>(tryVMAllocate(mappedSize));
     if (!mapped)
         return nullptr;
@@ -159,8 +160,8 @@
 // Trims requests that are un-page-aligned.
 inline void vmDeallocatePhysicalPagesSloppy(void* p, size_t size)
 {
-    char* begin = roundUpToMultipleOf<vmPageSize>(static_cast<char*>(p));
-    char* end = roundDownToMultipleOf<vmPageSize>(static_cast<char*>(p) + size);
+    char* begin = roundUpToMultipleOf(vmPageSize(), static_cast<char*>(p));
+    char* end = roundDownToMultipleOf(vmPageSize(), static_cast<char*>(p) + size);
 
     if (begin >= end)
         return;
@@ -171,8 +172,8 @@
 // Expands requests that are un-page-aligned.
 inline void vmAllocatePhysicalPagesSloppy(void* p, size_t size)
 {
-    char* begin = roundDownToMultipleOf<vmPageSize>(static_cast<char*>(p));
-    char* end = roundUpToMultipleOf<vmPageSize>(static_cast<char*>(p) + size);
+    char* begin = roundDownToMultipleOf(vmPageSize(), static_cast<char*>(p));
+    char* end = roundUpToMultipleOf(vmPageSize(), static_cast<char*>(p) + size);
 
     if (begin >= end)
         return;

Modified: trunk/Source/bmalloc/bmalloc/Vector.h (198820 => 198821)


--- trunk/Source/bmalloc/bmalloc/Vector.h	2016-03-30 02:18:56 UTC (rev 198820)
+++ trunk/Source/bmalloc/bmalloc/Vector.h	2016-03-30 02:22:47 UTC (rev 198821)
@@ -66,6 +66,7 @@
     
     void insert(iterator, const T&);
 
+    void grow(size_t);
     void shrink(size_t);
 
     void shrinkToFit();
@@ -73,7 +74,7 @@
 private:
     static const size_t growFactor = 2;
     static const size_t shrinkFactor = 4;
-    static const size_t initialCapacity = vmPageSize / sizeof(T);
+    static size_t initialCapacity() { return vmPageSize() / sizeof(T); }
 
     void growCapacity();
     void shrinkCapacity();
@@ -146,11 +147,19 @@
 }
 
 template<typename T>
+inline void Vector<T>::grow(size_t size)
+{
+    BASSERT(size >= m_size);
+    while (m_size < size)
+        push(T());
+}
+
+template<typename T>
 inline void Vector<T>::shrink(size_t size)
 {
     BASSERT(size <= m_size);
     m_size = size;
-    if (m_capacity > initialCapacity && m_size < m_capacity / shrinkFactor)
+    if (m_size < m_capacity / shrinkFactor && m_capacity > initialCapacity())
         shrinkCapacity();
 }
 
@@ -171,14 +180,14 @@
 template<typename T>
 NO_INLINE void Vector<T>::shrinkCapacity()
 {
-    size_t newCapacity = max(initialCapacity, m_capacity / shrinkFactor);
+    size_t newCapacity = max(initialCapacity(), m_capacity / shrinkFactor);
     reallocateBuffer(newCapacity);
 }
 
 template<typename T>
 NO_INLINE void Vector<T>::growCapacity()
 {
-    size_t newCapacity = max(initialCapacity, m_size * growFactor);
+    size_t newCapacity = max(initialCapacity(), m_size * growFactor);
     reallocateBuffer(newCapacity);
 }
 