Title: [180576] trunk/Source/bmalloc
Revision: 180576
Author: gga...@apple.com
Date: 2015-02-24 11:12:20 -0800 (Tue, 24 Feb 2015)

Log Message

bmalloc: Added a little more abstraction for large objects
https://bugs.webkit.org/show_bug.cgi?id=141978

Reviewed by Sam Weinig.

Previously, each client needed to manage the boundary tags of
a large object using free functions. This patch introduces a LargeObject
class that does things a little more automatically.
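The shift described above can be sketched in miniature (simplified, invented field layout; the real class lives in bmalloc/LargeObject.h and operates on boundary tags inside a LargeChunk): the wrapper holds both tags and mirrors every mutation into each, where the old free functions left that bookkeeping to every caller.

```cpp
#include <cassert>
#include <cstddef>

// Simplified stand-in for bmalloc's paired boundary tags. In the real
// allocator, a begin tag and an end tag bracket each large object and
// must always agree.
struct Tag {
    std::size_t size = 0;
    bool free = false;
};

// Miniature of the LargeObject idea: hold both tags and keep them in
// sync automatically, instead of asking each client to update both.
class LargeObject {
public:
    LargeObject(Tag* beginTag, Tag* endTag, void* object)
        : m_beginTag(beginTag), m_endTag(endTag), m_object(object) { }

    void* begin() const { return m_object; }
    std::size_t size() const { return m_beginTag->size; }
    bool isFree() const { return m_beginTag->free; }

    void setFree(bool isFree) const
    {
        m_beginTag->free = isFree;
        m_endTag->free = isFree; // both tags updated together
    }

private:
    Tag* m_beginTag;
    Tag* m_endTag;
    void* m_object;
};
```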

* bmalloc.xcodeproj/project.pbxproj:

* bmalloc/Allocator.cpp:
(bmalloc::Allocator::reallocate): Use the new LargeObject class.

* bmalloc/BeginTag.h:
(bmalloc::BeginTag::isInFreeList): Deleted. Moved this logic into the
LargeObject class.

* bmalloc/BoundaryTag.h:
(bmalloc::BoundaryTag::isSentinel):
(bmalloc::BoundaryTag::compactBegin):
(bmalloc::BoundaryTag::setRange):
(bmalloc::BoundaryTag::initSentinel): Added an explicit API for sentinels,
which we used to create and test for implicitly.
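The sentinel change can be illustrated with a standalone sketch (simplified fields, not the real bit-packed BoundaryTag): a tag whose compact begin encodes the null address marks a chunk edge, and initSentinel()/isSentinel() make that convention explicit rather than leaving each call site to construct and test for it by hand.

```cpp
#include <cstddef>

// Stand-in for bmalloc's largeMin constant.
constexpr std::size_t largeMin = 64;

// Simplified boundary tag with the explicit sentinel API from the patch.
struct BoundaryTag {
    unsigned compactBegin = 0; // compacted object address; 0 encodes nullptr
    std::size_t size = 0;
    bool free = false;

    // A sentinel is a non-free tag whose range begins at nullptr; it sits
    // at each chunk edge so merging logic cannot walk past the chunk.
    void initSentinel()
    {
        compactBegin = 0; // compactBegin(nullptr) == 0
        size = largeMin;
        free = false;
    }

    bool isSentinel() const { return !compactBegin; }
};
```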

* bmalloc/BoundaryTagInlines.h:
(bmalloc::BoundaryTag::init):
(bmalloc::validate): Deleted.
(bmalloc::validatePrev): Deleted.
(bmalloc::validateNext): Deleted.
(bmalloc::BoundaryTag::mergeLeft): Deleted.
(bmalloc::BoundaryTag::mergeRight): Deleted.
(bmalloc::BoundaryTag::merge): Deleted.
(bmalloc::BoundaryTag::deallocate): Deleted.
(bmalloc::BoundaryTag::split): Deleted.
(bmalloc::BoundaryTag::allocate): Deleted. Moved this logic into the
LargeObject class.

* bmalloc/EndTag.h:
(bmalloc::EndTag::init):
(bmalloc::EndTag::operator=): Deleted. Re-reading this code, I found
special behavior in the assignment operator to be a surprising API.
So, I replaced the assignment operation with an explicit initializing
function.
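A minimal sketch of the replacement (invented trivial field layout; the real EndTag copies a packed BoundaryTag with std::memcpy): an explicitly named init() copies the begin tag's state and sets the end bit, and it skips the copy entirely when the object is small enough that one tag serves as both begin and end, as the patch's identity check does.

```cpp
#include <cstddef>
#include <cstring>

// Trivially copyable base, so memcpy below is well-defined.
struct BoundaryTag {
    std::size_t size = 0;
    bool end = false;
};

struct BeginTag : BoundaryTag { };

struct EndTag : BoundaryTag {
    // Explicit initializing function in place of the old assignment
    // operator whose hidden behavior was surprising.
    void init(BeginTag* other)
    {
        // A minimum-size object has a single shared tag; there is no
        // separate end tag to initialize in that case.
        if (static_cast<BoundaryTag*>(this) == static_cast<BoundaryTag*>(other))
            return;

        std::memcpy(static_cast<BoundaryTag*>(this),
                    static_cast<BoundaryTag*>(other), sizeof(BoundaryTag));
        end = true;
    }
};
```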

* bmalloc/Heap.cpp:
(bmalloc::Heap::scavengeLargeRanges):
(bmalloc::Heap::allocateXLarge):
(bmalloc::Heap::findXLarge):
(bmalloc::Heap::deallocateXLarge):
(bmalloc::Heap::allocateLarge):
(bmalloc::Heap::deallocateLarge):
* bmalloc/Heap.h: No behavior changes here -- just adopting the
LargeObject interface.

* bmalloc/LargeObject.h: Added.
(bmalloc::LargeObject::operator!):
(bmalloc::LargeObject::begin):
(bmalloc::LargeObject::size):
(bmalloc::LargeObject::range):
(bmalloc::LargeObject::LargeObject):
(bmalloc::LargeObject::setFree):
(bmalloc::LargeObject::isFree):
(bmalloc::LargeObject::hasPhysicalPages):
(bmalloc::LargeObject::setHasPhysicalPages):
(bmalloc::LargeObject::isValidAndFree):
(bmalloc::LargeObject::merge):
(bmalloc::LargeObject::split):
(bmalloc::LargeObject::validateSelf):
(bmalloc::LargeObject::validate): Moved this code into a class, out of
BoundaryTag free functions.

New to the class are these features:

    (1) Every reference to an object is validated upon creation and use.

    (2) There's an explicit API for "This is a reference to an object
    that might be stale (the DoNotValidate API)".

    (3) The begin and end tags are kept in sync automatically.
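Features (1) and (2) amount to validate-on-construction with an explicit opt-out. A hypothetical miniature (the tag layout and the `size >= 64` check are invented; only the constructor shapes mirror the patch):

```cpp
#include <cassert>
#include <cstddef>

struct Tag {
    std::size_t size = 0;
};

class LargeObject {
public:
    enum DoNotValidateTag { DoNotValidate };

    // Ordinary references are validated as soon as they are created.
    explicit LargeObject(Tag* tag)
        : m_tag(tag)
    {
        validate();
    }

    // Explicit API for a reference that might be stale: the caller opts
    // out of validation by naming DoNotValidate at the construction site.
    LargeObject(DoNotValidateTag, Tag* tag)
        : m_tag(tag)
    {
    }

    std::size_t size() const { return m_tag->size; }

private:
    void validate() const { assert(m_tag && m_tag->size >= 64); }

    Tag* m_tag;
};
```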

* bmalloc/SegregatedFreeList.cpp:
(bmalloc::SegregatedFreeList::insert):
(bmalloc::SegregatedFreeList::takeGreedy):
(bmalloc::SegregatedFreeList::take):
* bmalloc/SegregatedFreeList.h: Adopt the LargeObject interface.

* bmalloc/VMHeap.cpp:
(bmalloc::VMHeap::grow):
* bmalloc/VMHeap.h:
(bmalloc::VMHeap::allocateLargeRange):
(bmalloc::VMHeap::deallocateLargeRange): Adopt the LargeObject interface.

Modified Paths

trunk/Source/bmalloc/ChangeLog
trunk/Source/bmalloc/bmalloc.xcodeproj/project.pbxproj
trunk/Source/bmalloc/bmalloc/Allocator.cpp
trunk/Source/bmalloc/bmalloc/BeginTag.h
trunk/Source/bmalloc/bmalloc/BoundaryTag.h
trunk/Source/bmalloc/bmalloc/BoundaryTagInlines.h
trunk/Source/bmalloc/bmalloc/EndTag.h
trunk/Source/bmalloc/bmalloc/Heap.cpp
trunk/Source/bmalloc/bmalloc/Heap.h
trunk/Source/bmalloc/bmalloc/SegregatedFreeList.cpp
trunk/Source/bmalloc/bmalloc/SegregatedFreeList.h
trunk/Source/bmalloc/bmalloc/VMHeap.cpp
trunk/Source/bmalloc/bmalloc/VMHeap.h

Added Paths

trunk/Source/bmalloc/bmalloc/LargeObject.h

Diff

Modified: trunk/Source/bmalloc/ChangeLog (180575 => 180576)


--- trunk/Source/bmalloc/ChangeLog	2015-02-24 18:55:10 UTC (rev 180575)
+++ trunk/Source/bmalloc/ChangeLog	2015-02-24 19:12:20 UTC (rev 180576)
@@ -1,3 +1,98 @@
+2015-02-24  Geoffrey Garen  <gga...@apple.com>
+
+        bmalloc: Added a little more abstraction for large objects
+        https://bugs.webkit.org/show_bug.cgi?id=141978
+
+        Reviewed by Sam Weinig.
+
+        Previously, each client needed to manage the boundary tags of
+        a large object using free functions. This patch introduces a LargeObject
+        class that does things a little more automatically.
+
+        * bmalloc.xcodeproj/project.pbxproj:
+
+        * bmalloc/Allocator.cpp:
+        (bmalloc::Allocator::reallocate): Use the new LargeObject class.
+
+        * bmalloc/BeginTag.h:
+        (bmalloc::BeginTag::isInFreeList): Deleted. Moved this logic into the
+        LargeObject class.
+
+        * bmalloc/BoundaryTag.h:
+        (bmalloc::BoundaryTag::isSentinel):
+        (bmalloc::BoundaryTag::compactBegin):
+        (bmalloc::BoundaryTag::setRange):
+        (bmalloc::BoundaryTag::initSentinel): Added an explicit API for sentinels,
+        which we used to create and test for implicitly.
+
+        * bmalloc/BoundaryTagInlines.h:
+        (bmalloc::BoundaryTag::init):
+        (bmalloc::validate): Deleted.
+        (bmalloc::validatePrev): Deleted.
+        (bmalloc::validateNext): Deleted.
+        (bmalloc::BoundaryTag::mergeLeft): Deleted.
+        (bmalloc::BoundaryTag::mergeRight): Deleted.
+        (bmalloc::BoundaryTag::merge): Deleted.
+        (bmalloc::BoundaryTag::deallocate): Deleted.
+        (bmalloc::BoundaryTag::split): Deleted.
+        (bmalloc::BoundaryTag::allocate): Deleted. Moved this logic into the
+        LargeObject class.
+
+        * bmalloc/EndTag.h:
+        (bmalloc::EndTag::init):
+        (bmalloc::EndTag::operator=): Deleted. Re-reading this code, I found
+        special behavior in the assignment operator to be a surprising API.
+        So, I replaced the assignment operation with an explicit initializing
+        function.
+
+        * bmalloc/Heap.cpp:
+        (bmalloc::Heap::scavengeLargeRanges):
+        (bmalloc::Heap::allocateXLarge):
+        (bmalloc::Heap::findXLarge):
+        (bmalloc::Heap::deallocateXLarge):
+        (bmalloc::Heap::allocateLarge):
+        (bmalloc::Heap::deallocateLarge):
+        * bmalloc/Heap.h: No behavior changes here -- just adopting the
+        LargeObject interface.
+
+        * bmalloc/LargeObject.h: Added.
+        (bmalloc::LargeObject::operator!):
+        (bmalloc::LargeObject::begin):
+        (bmalloc::LargeObject::size):
+        (bmalloc::LargeObject::range):
+        (bmalloc::LargeObject::LargeObject):
+        (bmalloc::LargeObject::setFree):
+        (bmalloc::LargeObject::isFree):
+        (bmalloc::LargeObject::hasPhysicalPages):
+        (bmalloc::LargeObject::setHasPhysicalPages):
+        (bmalloc::LargeObject::isValidAndFree):
+        (bmalloc::LargeObject::merge):
+        (bmalloc::LargeObject::split):
+        (bmalloc::LargeObject::validateSelf):
+        (bmalloc::LargeObject::validate): Moved this code into a class, out of
+        BoundaryTag free functions.
+
+        New to the class are these features:
+
+            (1) Every reference to an object is validated upon creation and use.
+
+            (2) There's an explicit API for "This is a reference to an object
+            that might be stale (the DoNotValidate API)".
+
+            (3) The begin and end tags are kept in sync automatically.
+
+        * bmalloc/SegregatedFreeList.cpp:
+        (bmalloc::SegregatedFreeList::insert):
+        (bmalloc::SegregatedFreeList::takeGreedy):
+        (bmalloc::SegregatedFreeList::take):
+        * bmalloc/SegregatedFreeList.h: Adopt the LargeObject interface.
+
+        * bmalloc/VMHeap.cpp:
+        (bmalloc::VMHeap::grow):
+        * bmalloc/VMHeap.h:
+        (bmalloc::VMHeap::allocateLargeRange):
+        (bmalloc::VMHeap::deallocateLargeRange): Adopt the LargeObject interface.
+
 2015-02-20  Geoffrey Garen  <gga...@apple.com>
 
         bmalloc should implement malloc introspection (to stop false-positive leaks when MallocStackLogging is off)

Modified: trunk/Source/bmalloc/bmalloc/Allocator.cpp (180575 => 180576)


--- trunk/Source/bmalloc/bmalloc/Allocator.cpp	2015-02-24 18:55:10 UTC (rev 180575)
+++ trunk/Source/bmalloc/bmalloc/Allocator.cpp	2015-02-24 19:12:20 UTC (rev 180576)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2014, 2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -28,6 +28,7 @@
 #include "Deallocator.h"
 #include "Heap.h"
 #include "LargeChunk.h"
+#include "LargeObject.h"
 #include "PerProcess.h"
 #include "Sizes.h"
 #include <algorithm>
@@ -116,8 +117,8 @@
         break;
     }
     case Large: {
-        BeginTag* beginTag = LargeChunk::beginTag(object);
-        oldSize = beginTag->size();
+        LargeObject largeObject(object);
+        oldSize = largeObject.size();
         break;
     }
     case XLarge: {

Modified: trunk/Source/bmalloc/bmalloc/BeginTag.h (180575 => 180576)


--- trunk/Source/bmalloc/bmalloc/BeginTag.h	2015-02-24 18:55:10 UTC (rev 180575)
+++ trunk/Source/bmalloc/bmalloc/BeginTag.h	2015-02-24 19:12:20 UTC (rev 180576)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2014, 2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -31,15 +31,8 @@
 namespace bmalloc {
 
 class BeginTag : public BoundaryTag {
-public:
-    bool isInFreeList(const Range&);
 };
 
-inline bool BeginTag::isInFreeList(const Range& range)
-{
-    return isFree() && !isEnd() && this->size() == range.size() && this->compactBegin() == compactBegin(range);
-}
-
 } // namespace bmalloc
 
 #endif // BeginTag_h

Modified: trunk/Source/bmalloc/bmalloc/BoundaryTag.h (180575 => 180576)


--- trunk/Source/bmalloc/bmalloc/BoundaryTag.h	2015-02-24 18:55:10 UTC (rev 180575)
+++ trunk/Source/bmalloc/bmalloc/BoundaryTag.h	2015-02-24 19:12:20 UTC (rev 180576)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2014, 2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -41,9 +41,7 @@
 class BoundaryTag {
 public:
     static Range init(LargeChunk*);
-    static Range deallocate(void*);
-    static void allocate(const Range&, size_t, Range& leftover, bool& hasPhysicalPages);
-    static unsigned compactBegin(const Range&);
+    static unsigned compactBegin(void*);
 
     bool isFree() { return m_isFree; }
     void setFree(bool isFree) { m_isFree = isFree; }
@@ -62,6 +60,9 @@
 
     void setRange(const Range&);
     
+    bool isSentinel() { return !m_compactBegin; }
+    void initSentinel();
+    
     EndTag* prev();
     BeginTag* next();
 
@@ -73,11 +74,6 @@
     static_assert((1 << compactBeginBits) - 1 >= largeMin / largeAlignment, "compactBegin must be encodable in a BoundaryTag.");
     static_assert((1 << sizeBits) - 1 >= largeMax, "largeMax must be encodable in a BoundaryTag.");
 
-    static void split(const Range&, size_t, BeginTag*, EndTag*&, Range& leftover);
-    static Range mergeLeft(const Range&, BeginTag*&, EndTag* prev, bool& hasPhysicalPages);
-    static Range mergeRight(const Range&, EndTag*&, BeginTag* next, bool& hasPhysicalPages);
-    static Range merge(const Range&, BeginTag*&, EndTag*&);
-
     bool m_isFree: 1;
     bool m_isEnd: 1;
     bool m_hasPhysicalPages: 1;
@@ -85,17 +81,17 @@
     unsigned m_size: sizeBits;
 };
 
-inline unsigned BoundaryTag::compactBegin(const Range& range)
+inline unsigned BoundaryTag::compactBegin(void* object)
 {
     return static_cast<unsigned>(
         reinterpret_cast<uintptr_t>(
             rightShift(
-                mask(range.begin(), largeMin - 1), largeAlignmentShift)));
+                mask(object, largeMin - 1), largeAlignmentShift)));
 }
 
 inline void BoundaryTag::setRange(const Range& range)
 {
-    m_compactBegin = compactBegin(range);
+    m_compactBegin = compactBegin(range.begin());
     m_size = static_cast<unsigned>(range.size());
     BASSERT(this->size() == range.size());
 }
@@ -112,6 +108,12 @@
     return reinterpret_cast<BeginTag*>(next);
 }
 
+inline void BoundaryTag::initSentinel()
+{
+    setRange(Range(nullptr, largeMin));
+    setFree(false);
+}
+
 } // namespace bmalloc
 
 #endif // BoundaryTag_h

Modified: trunk/Source/bmalloc/bmalloc/BoundaryTagInlines.h (180575 => 180576)


--- trunk/Source/bmalloc/bmalloc/BoundaryTagInlines.h	2015-02-24 18:55:10 UTC (rev 180575)
+++ trunk/Source/bmalloc/bmalloc/BoundaryTagInlines.h	2015-02-24 19:12:20 UTC (rev 180576)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2014, 2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -34,48 +34,6 @@
 
 namespace bmalloc {
 
-static inline void validate(const Range& range)
-{
-    UNUSED(range);
-IF_DEBUG(
-    BeginTag* beginTag = LargeChunk::beginTag(range.begin());
-    EndTag* endTag = LargeChunk::endTag(range.begin(), range.size());
-
-    BASSERT(!beginTag->isEnd());
-    BASSERT(range.size() >= largeMin);
-    BASSERT(beginTag->size() == range.size());
-
-    BASSERT(beginTag->size() == endTag->size());
-    BASSERT(beginTag->isFree() == endTag->isFree());
-    BASSERT(beginTag->hasPhysicalPages() == endTag->hasPhysicalPages());
-    BASSERT(static_cast<BoundaryTag*>(endTag) == static_cast<BoundaryTag*>(beginTag) || endTag->isEnd());
-);
-}
-
-static inline void validatePrev(EndTag* prev, void* object)
-{
-    size_t prevSize = prev->size();
-    void* prevObject = static_cast<char*>(object) - prevSize;
-    validate(Range(prevObject, prevSize));
-}
-
-static inline void validateNext(BeginTag* next, const Range& range)
-{
-    if (next->size() == largeMin && !next->compactBegin() && !next->isFree()) // Right sentinel tag.
-        return;
-
-    void* nextObject = range.end();
-    size_t nextSize = next->size();
-    validate(Range(nextObject, nextSize));
-}
-
-static inline void validate(EndTag* prev, const Range& range, BeginTag* next)
-{
-    validatePrev(prev, range.begin());
-    validate(range);
-    validateNext(next, range);
-}
-
 inline Range BoundaryTag::init(LargeChunk* chunk)
 {
     Range range(chunk->begin(), chunk->end() - chunk->begin());
@@ -86,7 +44,7 @@
     beginTag->setHasPhysicalPages(false);
 
     EndTag* endTag = LargeChunk::endTag(range.begin(), range.size());
-    *endTag = *beginTag;
+    endTag->init(beginTag);
 
     // Mark the left and right edges of our chunk as allocated. This naturally
     // prevents merging logic from overflowing beyond our chunk, without requiring
@@ -94,127 +52,15 @@
     
     EndTag* leftSentinel = beginTag->prev();
     BASSERT(leftSentinel >= static_cast<void*>(chunk));
-    leftSentinel->setRange(Range(nullptr, largeMin));
-    leftSentinel->setFree(false);
+    leftSentinel->initSentinel();
 
     BeginTag* rightSentinel = endTag->next();
     BASSERT(rightSentinel < static_cast<void*>(range.begin()));
-    rightSentinel->setRange(Range(nullptr, largeMin));
-    rightSentinel->setFree(false);
+    rightSentinel->initSentinel();
     
     return range;
 }
 
-inline Range BoundaryTag::mergeLeft(const Range& range, BeginTag*& beginTag, EndTag* prev, bool& hasPhysicalPages)
-{
-    Range left(range.begin() - prev->size(), prev->size());
-    Range merged(left.begin(), left.size() + range.size());
-
-    hasPhysicalPages &= prev->hasPhysicalPages();
-
-    prev->clear();
-    beginTag->clear();
-
-    beginTag = LargeChunk::beginTag(merged.begin());
-    return merged;
-}
-
-inline Range BoundaryTag::mergeRight(const Range& range, EndTag*& endTag, BeginTag* next, bool& hasPhysicalPages)
-{
-    Range right(range.end(), next->size());
-    Range merged(range.begin(), range.size() + right.size());
-
-    hasPhysicalPages &= next->hasPhysicalPages();
-
-    endTag->clear();
-    next->clear();
-
-    endTag = LargeChunk::endTag(merged.begin(), merged.size());
-    return merged;
-}
-
-INLINE Range BoundaryTag::merge(const Range& range, BeginTag*& beginTag, EndTag*& endTag)
-{
-    EndTag* prev = beginTag->prev();
-    BeginTag* next = endTag->next();
-    bool hasPhysicalPages = beginTag->hasPhysicalPages();
-
-    validate(prev, range, next);
-    
-    Range merged = range;
-
-    if (prev->isFree())
-        merged = mergeLeft(merged, beginTag, prev, hasPhysicalPages);
-
-    if (next->isFree())
-        merged = mergeRight(merged, endTag, next, hasPhysicalPages);
-
-    beginTag->setRange(merged);
-    beginTag->setFree(true);
-    beginTag->setHasPhysicalPages(hasPhysicalPages);
-
-    if (endTag != static_cast<BoundaryTag*>(beginTag))
-        *endTag = *beginTag;
-
-    validate(beginTag->prev(), merged, endTag->next());
-    return merged;
-}
-
-inline Range BoundaryTag::deallocate(void* object)
-{
-    BeginTag* beginTag = LargeChunk::beginTag(object);
-    BASSERT(!beginTag->isFree());
-
-    Range range(object, beginTag->size());
-    EndTag* endTag = LargeChunk::endTag(range.begin(), range.size());
-    return merge(range, beginTag, endTag);
-}
-
-INLINE void BoundaryTag::split(const Range& range, size_t size, BeginTag* beginTag, EndTag*& endTag, Range& leftover)
-{
-    leftover = Range(range.begin() + size, range.size() - size);
-    Range split(range.begin(), size);
-
-    beginTag->setRange(split);
-
-    EndTag* splitEndTag = LargeChunk::endTag(split.begin(), size);
-    if (splitEndTag != static_cast<BoundaryTag*>(beginTag))
-        *splitEndTag = *beginTag;
-
-    BASSERT(leftover.size() >= largeMin);
-    BeginTag* leftoverBeginTag = LargeChunk::beginTag(leftover.begin());
-    *leftoverBeginTag = *beginTag;
-    leftoverBeginTag->setRange(leftover);
-
-    if (leftoverBeginTag != static_cast<BoundaryTag*>(endTag))
-        *endTag = *leftoverBeginTag;
-
-    validate(beginTag->prev(), split, leftoverBeginTag);
-    validate(leftoverBeginTag->prev(), leftover, endTag->next());
-
-    endTag = splitEndTag;
-}
-
-INLINE void BoundaryTag::allocate(const Range& range, size_t size, Range& leftover, bool& hasPhysicalPages)
-{
-    BeginTag* beginTag = LargeChunk::beginTag(range.begin());
-    EndTag* endTag = LargeChunk::endTag(range.begin(), range.size());
-
-    BASSERT(beginTag->isFree());
-    validate(beginTag->prev(), range, endTag->next());
-
-    if (range.size() - size > largeMin)
-        split(range, size, beginTag, endTag, leftover);
-
-    hasPhysicalPages = beginTag->hasPhysicalPages();
-
-    beginTag->setHasPhysicalPages(true);
-    beginTag->setFree(false);
-
-    endTag->setHasPhysicalPages(true);
-    endTag->setFree(false);
-}
-
 } // namespace bmalloc
 
 #endif // BoundaryTagInlines_h

Modified: trunk/Source/bmalloc/bmalloc/EndTag.h (180575 => 180576)


--- trunk/Source/bmalloc/bmalloc/EndTag.h	2015-02-24 18:55:10 UTC (rev 180575)
+++ trunk/Source/bmalloc/bmalloc/EndTag.h	2015-02-24 19:12:20 UTC (rev 180576)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2014, 2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -32,14 +32,19 @@
 
 class EndTag : public BoundaryTag {
 public:
-    EndTag& operator=(const BeginTag&);
+    void init(BeginTag*);
 };
 
-inline EndTag& EndTag::operator=(const BeginTag& other)
+inline void EndTag::init(BeginTag* other)
 {
-    std::memcpy(this, &other, sizeof(BoundaryTag));
+    // To save space, an object can have only one tag, representing both
+    // its begin and its end. In that case, we must avoid initializing the
+    // end tag, since there is no end tag.
+    if (static_cast<BoundaryTag*>(this) == static_cast<BoundaryTag*>(other))
+        return;
+
+    std::memcpy(this, other, sizeof(BoundaryTag));
     setEnd(true);
-    return *this;
 }
 
 } // namespace bmalloc

Modified: trunk/Source/bmalloc/bmalloc/Heap.cpp (180575 => 180576)


--- trunk/Source/bmalloc/bmalloc/Heap.cpp	2015-02-24 18:55:10 UTC (rev 180575)
+++ trunk/Source/bmalloc/bmalloc/Heap.cpp	2015-02-24 19:12:20 UTC (rev 180576)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2014, 2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -23,9 +23,9 @@
  * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
  */
 
-#include "BoundaryTagInlines.h"
 #include "Heap.h"
 #include "LargeChunk.h"
+#include "LargeObject.h"
 #include "Line.h"
 #include "MediumChunk.h"
 #include "Page.h"
@@ -144,10 +144,10 @@
             continue;
         }
 
-        Range range = m_largeRanges.takeGreedy(vmPageSize);
-        if (!range)
+        LargeObject largeObject = m_largeObjects.takeGreedy(vmPageSize);
+        if (!largeObject)
             return;
-        m_vmHeap.deallocateLargeRange(lock, range);
+        m_vmHeap.deallocateLargeRange(lock, largeObject);
     }
 }
 
@@ -328,7 +328,7 @@
     m_isAllocatingPages = true;
 
     void* result = vmAllocate(alignment, size);
-    m_xLargeRanges.push(Range(result, size));
+    m_xLargeObjects.push(Range(result, size));
     return result;
 }
 
@@ -339,7 +339,7 @@
 
 Range Heap::findXLarge(std::lock_guard<StaticMutex>&, void* object)
 {
-    for (auto& range : m_xLargeRanges) {
+    for (auto& range : m_xLargeObjects) {
         if (range.begin() != object)
             continue;
         return range;
@@ -350,11 +350,11 @@
 
 void Heap::deallocateXLarge(std::unique_lock<StaticMutex>& lock, void* object)
 {
-    for (auto& range : m_xLargeRanges) {
+    for (auto& range : m_xLargeObjects) {
         if (range.begin() != object)
             continue;
 
-        Range toDeallocate = m_xLargeRanges.pop(&range);
+        Range toDeallocate = m_xLargeObjects.pop(&range);
 
         lock.unlock();
         vmDeallocate(toDeallocate.begin(), toDeallocate.size());
@@ -364,24 +364,24 @@
     }
 }
 
-void Heap::allocateLarge(std::lock_guard<StaticMutex>&, const Range& range, size_t size, Range& leftover)
+void* Heap::allocateLarge(std::lock_guard<StaticMutex>&, LargeObject& largeObject, size_t size)
 {
-    bool hasPhysicalPages;
-    BoundaryTag::allocate(range, size, leftover, hasPhysicalPages);
+    BASSERT(largeObject.isFree());
 
-    if (!hasPhysicalPages)
-        vmAllocatePhysicalPagesSloppy(range.begin(), range.size());
-}
+    if (largeObject.size() - size > largeMin) {
+        std::pair<LargeObject, LargeObject> split = largeObject.split(size);
+        largeObject = split.first;
+        m_largeObjects.insert(split.second);
+    }
 
-void* Heap::allocateLarge(std::lock_guard<StaticMutex>& lock, const Range& range, size_t size)
-{
-    Range leftover;
-    allocateLarge(lock, range, size, leftover);
+    largeObject.setFree(false);
 
-    if (!!leftover)
-        m_largeRanges.insert(leftover);
+    if (!largeObject.hasPhysicalPages()) {
+        vmAllocatePhysicalPagesSloppy(largeObject.begin(), largeObject.size());
+        largeObject.setHasPhysicalPages(true);
+    }
     
-    return range.begin();
+    return largeObject.begin();
 }
 
 void* Heap::allocateLarge(std::lock_guard<StaticMutex>& lock, size_t size)
@@ -392,11 +392,11 @@
     
     m_isAllocatingPages = true;
 
-    Range range = m_largeRanges.take(size);
-    if (!range)
-        range = m_vmHeap.allocateLargeRange(size);
+    LargeObject largeObject = m_largeObjects.take(size);
+    if (!largeObject)
+        largeObject = m_vmHeap.allocateLargeRange(size);
 
-    return allocateLarge(lock, range, size);
+    return allocateLarge(lock, largeObject, size);
 }
 
 void* Heap::allocateLarge(std::lock_guard<StaticMutex>& lock, size_t alignment, size_t size, size_t unalignedSize)
@@ -413,31 +413,39 @@
 
     m_isAllocatingPages = true;
 
-    Range range = m_largeRanges.take(alignment, size, unalignedSize);
-    if (!range)
-        range = m_vmHeap.allocateLargeRange(alignment, size, unalignedSize);
+    LargeObject largeObject = m_largeObjects.take(alignment, size, unalignedSize);
+    if (!largeObject)
+        largeObject = m_vmHeap.allocateLargeRange(alignment, size, unalignedSize);
 
     size_t alignmentMask = alignment - 1;
-    if (test(range.begin(), alignmentMask)) {
-        // Because we allocate left-to-right, we must explicitly allocate the
-        // unaligned space on the left in order to break off the aligned space
-        // we want in the middle.
-        Range aligned;
-        size_t unalignedSize = roundUpToMultipleOf(alignment, range.begin() + largeMin) - range.begin();
-        allocateLarge(lock, range, unalignedSize, aligned);
-        allocateLarge(lock, aligned, size);
-        deallocateLarge(lock, range.begin());
-        return aligned.begin();
-    }
+    if (!test(largeObject.begin(), alignmentMask))
+        return allocateLarge(lock, largeObject, size);
 
-    return allocateLarge(lock, range, size);
+    // Because we allocate VM left-to-right, we must explicitly allocate the
+    // unaligned space on the left in order to break off the aligned space
+    // we want in the middle.
+    size_t prefixSize = roundUpToMultipleOf(alignment, largeObject.begin() + largeMin) - largeObject.begin();
+    std::pair<LargeObject, LargeObject> pair = largeObject.split(prefixSize);
+    allocateLarge(lock, pair.first, prefixSize);
+    allocateLarge(lock, pair.second, size);
+    deallocateLarge(lock, pair.first);
+    return pair.second.begin();
 }
 
-void Heap::deallocateLarge(std::lock_guard<StaticMutex>&, void* object)
+void Heap::deallocateLarge(std::lock_guard<StaticMutex>&, const LargeObject& largeObject)
 {
-    Range range = BoundaryTag::deallocate(object);
-    m_largeRanges.insert(range);
+    BASSERT(!largeObject.isFree());
+    largeObject.setFree(true);
+    
+    LargeObject merged = largeObject.merge();
+    m_largeObjects.insert(merged);
     m_scavenger.run();
 }
 
+void Heap::deallocateLarge(std::lock_guard<StaticMutex>& lock, void* object)
+{
+    LargeObject largeObject(object);
+    deallocateLarge(lock, largeObject);
+}
+
 } // namespace bmalloc

Modified: trunk/Source/bmalloc/bmalloc/Heap.h (180575 => 180576)


--- trunk/Source/bmalloc/bmalloc/Heap.h	2015-02-24 18:55:10 UTC (rev 180575)
+++ trunk/Source/bmalloc/bmalloc/Heap.h	2015-02-24 19:12:20 UTC (rev 180576)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2014, 2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -81,9 +81,8 @@
     void deallocateSmallLine(std::lock_guard<StaticMutex>&, SmallLine*);
     void deallocateMediumLine(std::lock_guard<StaticMutex>&, MediumLine*);
 
-    void* allocateLarge(std::lock_guard<StaticMutex>&, const Range&, size_t);
-    void allocateLarge(std::lock_guard<StaticMutex>&, const Range&, size_t, Range& leftover);
-    Range allocateLargeChunk();
+    void* allocateLarge(std::lock_guard<StaticMutex>&, LargeObject&, size_t);
+    void deallocateLarge(std::lock_guard<StaticMutex>&, const LargeObject&);
 
     void splitLarge(BeginTag*, size_t, EndTag*&, Range&);
     void mergeLarge(BeginTag*&, EndTag*&, Range&);
@@ -104,8 +103,8 @@
     Vector<SmallPage*> m_smallPages;
     Vector<MediumPage*> m_mediumPages;
 
-    SegregatedFreeList m_largeRanges;
-    Vector<Range> m_xLargeRanges;
+    SegregatedFreeList m_largeObjects;
+    Vector<Range> m_xLargeObjects;
 
     bool m_isAllocatingPages;
 

Added: trunk/Source/bmalloc/bmalloc/LargeObject.h (0 => 180576)


--- trunk/Source/bmalloc/bmalloc/LargeObject.h	                        (rev 0)
+++ trunk/Source/bmalloc/bmalloc/LargeObject.h	2015-02-24 19:12:20 UTC (rev 180576)
@@ -0,0 +1,242 @@
+/*
+ * Copyright (C) 2015 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#ifndef LargeObject_h
+#define LargeObject_h
+
+#include "BeginTag.h"
+#include "EndTag.h"
+#include "LargeChunk.h"
+
+namespace bmalloc {
+
+class LargeObject {
+public:
+    LargeObject();
+    LargeObject(void*);
+
+    enum DoNotValidateTag { DoNotValidate };
+    LargeObject(DoNotValidateTag, void*);
+    
+    bool operator!() { return !m_object; }
+
+    char* begin() const { return static_cast<char*>(m_object); }
+    size_t size() const { return m_beginTag->size(); }
+    Range range() const { return Range(m_object, size()); }
+
+    void setFree(bool) const;
+    bool isFree() const;
+    
+    bool hasPhysicalPages() const;
+    void setHasPhysicalPages(bool) const;
+    
+    bool isValidAndFree(size_t) const;
+
+    LargeObject merge() const;
+    std::pair<LargeObject, LargeObject> split(size_t) const;
+
+private:
+    LargeObject(BeginTag*, EndTag*, void*);
+
+    void validate() const;
+    void validateSelf() const;
+
+    BeginTag* m_beginTag;
+    EndTag* m_endTag;
+    void* m_object;
+};
+
+inline LargeObject::LargeObject()
+    : m_beginTag(nullptr)
+    , m_endTag(nullptr)
+    , m_object(nullptr)
+{
+}
+
+inline LargeObject::LargeObject(void* object)
+    : m_beginTag(LargeChunk::beginTag(object))
+    , m_endTag(LargeChunk::endTag(object, m_beginTag->size()))
+    , m_object(object)
+{
+    validate();
+}
+
+inline LargeObject::LargeObject(DoNotValidateTag, void* object)
+    : m_beginTag(LargeChunk::beginTag(object))
+    , m_endTag(LargeChunk::endTag(object, m_beginTag->size()))
+    , m_object(object)
+{
+}
+
+inline LargeObject::LargeObject(BeginTag* beginTag, EndTag* endTag, void* object)
+    : m_beginTag(beginTag)
+    , m_endTag(endTag)
+    , m_object(object)
+{
+}
+
+inline void LargeObject::setFree(bool isFree) const
+{
+    validate();
+    m_beginTag->setFree(isFree);
+    m_endTag->setFree(isFree);
+}
+
+inline bool LargeObject::isFree() const
+{
+    validate();
+    return m_beginTag->isFree();
+}
+
+inline bool LargeObject::hasPhysicalPages() const
+{
+    validate();
+    return m_beginTag->hasPhysicalPages();
+}
+
+inline void LargeObject::setHasPhysicalPages(bool hasPhysicalPages) const
+{
+    validate();
+    m_beginTag->setHasPhysicalPages(hasPhysicalPages);
+    m_endTag->setHasPhysicalPages(hasPhysicalPages);
+}
+
+inline bool LargeObject::isValidAndFree(size_t expectedSize) const
+{
+    if (!m_beginTag->isFree())
+        return false;
+    
+    if (m_beginTag->isEnd())
+        return false;
+
+    if (m_beginTag->size() != expectedSize)
+        return false;
+    
+    if (m_beginTag->compactBegin() != BoundaryTag::compactBegin(m_object))
+        return false;
+
+    return true;
+}
+
+inline LargeObject LargeObject::merge() const
+{
+    validate();
+    BASSERT(isFree());
+
+    bool hasPhysicalPages = m_beginTag->hasPhysicalPages();
+
+    BeginTag* beginTag = m_beginTag;
+    EndTag* endTag = m_endTag;
+    Range range = this->range();
+    
+    EndTag* prev = beginTag->prev();
+    if (prev->isFree()) {
+        Range left(range.begin() - prev->size(), prev->size());
+        range = Range(left.begin(), left.size() + range.size());
+        hasPhysicalPages &= prev->hasPhysicalPages();
+
+        prev->clear();
+        beginTag->clear();
+
+        beginTag = LargeChunk::beginTag(range.begin());
+    }
+
+    BeginTag* next = endTag->next();
+    if (next->isFree()) {
+        Range right(range.end(), next->size());
+        range = Range(range.begin(), range.size() + right.size());
+
+        hasPhysicalPages &= next->hasPhysicalPages();
+
+        endTag->clear();
+        next->clear();
+
+        endTag = LargeChunk::endTag(range.begin(), range.size());
+    }
+
+    beginTag->setRange(range);
+    beginTag->setFree(true);
+    beginTag->setHasPhysicalPages(hasPhysicalPages);
+    endTag->init(beginTag);
+
+    return LargeObject(beginTag, endTag, range.begin());
+}
+
+inline std::pair<LargeObject, LargeObject> LargeObject::split(size_t size) const
+{
+    BASSERT(isFree());
+
+    Range split(begin(), size);
+    Range leftover = Range(split.end(), this->size() - size);
+    BASSERT(leftover.size() >= largeMin);
+
+    BeginTag* splitBeginTag = m_beginTag;
+    EndTag* splitEndTag = LargeChunk::endTag(split.begin(), size);
+
+    BeginTag* leftoverBeginTag = LargeChunk::beginTag(leftover.begin());
+    EndTag* leftoverEndTag = m_endTag;
+
+    splitBeginTag->setRange(split);
+    splitEndTag->init(splitBeginTag);
+
+    *leftoverBeginTag = *splitBeginTag;
+    leftoverBeginTag->setRange(leftover);
+    leftoverEndTag->init(leftoverBeginTag);
+
+    return std::make_pair(
+        LargeObject(splitBeginTag, splitEndTag, split.begin()),
+        LargeObject(leftoverBeginTag, leftoverEndTag, leftover.begin()));
+}
+
+inline void LargeObject::validateSelf() const
+{
+    BASSERT(!m_beginTag->isEnd());
+    BASSERT(m_endTag->isEnd() || static_cast<BoundaryTag*>(m_endTag) == static_cast<BoundaryTag*>(m_beginTag));
+
+    BASSERT(size() >= largeMin);
+
+    BASSERT(m_beginTag->size() == m_endTag->size());
+    BASSERT(m_beginTag->isFree() == m_endTag->isFree());
+    BASSERT(m_beginTag->hasPhysicalPages() == m_endTag->hasPhysicalPages());
+}
+
+inline void LargeObject::validate() const
+{
+    if (!m_beginTag->prev()->isSentinel()) {
+        LargeObject prev(DoNotValidate, begin() - m_beginTag->prev()->size());
+        prev.validateSelf();
+    }
+
+    validateSelf();
+
+    if (!m_endTag->next()->isSentinel()) {
+        LargeObject next(DoNotValidate, begin() + size());
+        next.validateSelf();
+    }
+}
+
+} // namespace bmalloc
+
+#endif // LargeObject_h
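
The new `LargeObject::merge()` above coalesces a free object with its free neighbors by consulting the boundary tags on either side: grow left first, then right, folding `hasPhysicalPages` across the merged pieces. A minimal sketch of just the range arithmetic (plain ranges, no real boundary tags; `Range` and `coalesce` here are hypothetical stand-ins, not bmalloc API):

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical simplified model: a free object is just [begin, begin + size).
struct Range {
    size_t begin;
    size_t size;
    size_t end() const { return begin + size; }
};

// Coalesce `mid` with an optionally-free left and right neighbor,
// mirroring LargeObject::merge(): absorb the left neighbor first,
// then the right one. Neighbors must abut exactly.
Range coalesce(Range mid, const Range* freeLeft, const Range* freeRight)
{
    if (freeLeft) {
        assert(freeLeft->end() == mid.begin);
        mid = Range { freeLeft->begin, freeLeft->size + mid.size };
    }
    if (freeRight) {
        assert(mid.end() == freeRight->begin);
        mid = Range { mid.begin, mid.size + freeRight->size };
    }
    return mid;
}
```

In the real code, each absorption also clears the interior tags and re-derives the surviving begin/end tags from the widened range, which is why `merge()` returns a fresh `LargeObject` rather than mutating in place.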

Modified: trunk/Source/bmalloc/bmalloc/SegregatedFreeList.cpp (180575 => 180576)


--- trunk/Source/bmalloc/bmalloc/SegregatedFreeList.cpp	2015-02-24 18:55:10 UTC (rev 180575)
+++ trunk/Source/bmalloc/bmalloc/SegregatedFreeList.cpp	2015-02-24 19:12:20 UTC (rev 180576)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2014, 2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -35,74 +35,68 @@
     BASSERT(static_cast<size_t>(&select(largeMax) - m_lists.begin()) == m_lists.size() - 1);
 }
 
-void SegregatedFreeList::insert(const Range& range)
+void SegregatedFreeList::insert(const LargeObject& largeObject)
 {
-IF_DEBUG(
-    BeginTag* beginTag = LargeChunk::beginTag(range.begin());
-    BASSERT(beginTag->isInFreeList(range));
-)
-
-    auto& list = select(range.size());
-    list.push(range);
+    BASSERT(largeObject.isFree());
+    auto& list = select(largeObject.size());
+    list.push(largeObject.range());
 }
 
-Range SegregatedFreeList::takeGreedy(size_t size)
+LargeObject SegregatedFreeList::takeGreedy(size_t size)
 {
     for (size_t i = m_lists.size(); i-- > 0; ) {
-        Range range = takeGreedy(m_lists[i], size);
-        if (!range)
+        LargeObject largeObject = takeGreedy(m_lists[i], size);
+        if (!largeObject)
             continue;
 
-        return range;
+        return largeObject;
     }
-    return Range();
+    return LargeObject();
 }
 
-Range SegregatedFreeList::takeGreedy(List& list, size_t size)
+LargeObject SegregatedFreeList::takeGreedy(List& list, size_t size)
 {
     for (size_t i = list.size(); i-- > 0; ) {
-        Range range = list[i];
-
         // We don't eagerly remove items when we merge and/or split ranges,
         // so we need to validate each free list entry before using it.
-        BeginTag* beginTag = LargeChunk::beginTag(range.begin());
-        if (!beginTag->isInFreeList(range)) {
+        LargeObject largeObject(LargeObject::DoNotValidate, list[i].begin());
+        if (!largeObject.isValidAndFree(list[i].size())) {
             list.pop(i);
             continue;
         }
 
-        if (range.size() < size)
+        if (largeObject.size() < size)
             continue;
 
         list.pop(i);
-        return range;
+        return largeObject;
     }
 
-    return Range();
+    return LargeObject();
 }
 
-Range SegregatedFreeList::take(size_t size)
+LargeObject SegregatedFreeList::take(size_t size)
 {
     for (auto* list = &select(size); list != m_lists.end(); ++list) {
-        Range range = take(*list, size);
-        if (!range)
+        LargeObject largeObject = take(*list, size);
+        if (!largeObject)
             continue;
 
-        return range;
+        return largeObject;
     }
-    return Range();
+    return LargeObject();
 }
 
-Range SegregatedFreeList::take(size_t alignment, size_t size, size_t unalignedSize)
+LargeObject SegregatedFreeList::take(size_t alignment, size_t size, size_t unalignedSize)
 {
     for (auto* list = &select(size); list != m_lists.end(); ++list) {
-        Range range = take(*list, alignment, size, unalignedSize);
-        if (!range)
+        LargeObject largeObject = take(*list, alignment, size, unalignedSize);
+        if (!largeObject)
             continue;
 
-        return range;
+        return largeObject;
     }
-    return Range();
+    return LargeObject();
 }
 
 INLINE auto SegregatedFreeList::select(size_t size) -> List&
@@ -116,61 +110,57 @@
     return m_lists[result];
 }
 
-INLINE Range SegregatedFreeList::take(List& list, size_t size)
+INLINE LargeObject SegregatedFreeList::take(List& list, size_t size)
 {
-    Range first;
+    LargeObject first;
     size_t end = list.size() > segregatedFreeListSearchDepth ? list.size() - segregatedFreeListSearchDepth : 0;
     for (size_t i = list.size(); i-- > end; ) {
-        Range range = list[i];
-
         // We don't eagerly remove items when we merge and/or split ranges, so
         // we need to validate each free list entry before using it.
-        BeginTag* beginTag = LargeChunk::beginTag(range.begin());
-        if (!beginTag->isInFreeList(range)) {
+        LargeObject largeObject(LargeObject::DoNotValidate, list[i].begin());
+        if (!largeObject.isValidAndFree(list[i].size())) {
             list.pop(i);
             continue;
         }
 
-        if (range.size() < size)
+        if (largeObject.size() < size)
             continue;
 
-        if (!!first && first < range)
+        if (!!first && first.begin() < largeObject.begin())
             continue;
 
-        first = range;
+        first = largeObject;
     }
     
     return first;
 }
 
-INLINE Range SegregatedFreeList::take(List& list, size_t alignment, size_t size, size_t unalignedSize)
+INLINE LargeObject SegregatedFreeList::take(List& list, size_t alignment, size_t size, size_t unalignedSize)
 {
     BASSERT(isPowerOfTwo(alignment));
     size_t alignmentMask = alignment - 1;
 
-    Range first;
+    LargeObject first;
     size_t end = list.size() > segregatedFreeListSearchDepth ? list.size() - segregatedFreeListSearchDepth : 0;
     for (size_t i = list.size(); i-- > end; ) {
-        Range range = list[i];
-
         // We don't eagerly remove items when we merge and/or split ranges, so
         // we need to validate each free list entry before using it.
-        BeginTag* beginTag = LargeChunk::beginTag(range.begin());
-        if (!beginTag->isInFreeList(range)) {
+        LargeObject largeObject(LargeObject::DoNotValidate, list[i].begin());
+        if (!largeObject.isValidAndFree(list[i].size())) {
             list.pop(i);
             continue;
         }
 
-        if (range.size() < size)
+        if (largeObject.size() < size)
             continue;
 
-        if (test(range.begin(), alignmentMask) && range.size() < unalignedSize)
+        if (test(largeObject.begin(), alignmentMask) && largeObject.size() < unalignedSize)
             continue;
 
-        if (!!first && first < range)
+        if (!!first && first.begin() < largeObject.begin())
             continue;
 
-        first = range;
+        first = largeObject;
     }
     
     return first;
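
The recurring pattern in these `take` functions is lazy validation: merges and splits leave stale `Range` records behind in the free list, so each candidate is checked with `isValidAndFree()` before use and stale entries are dropped in passing. A self-contained sketch of that pattern, with a predicate standing in for the boundary-tag check (`Entry` and `takeFirstFit` are hypothetical names, not bmalloc API):

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <vector>

// A free-list record that may have gone stale after a merge or split.
struct Entry {
    size_t begin;
    size_t size;
};

using IsValidAndFree = std::function<bool(const Entry&)>;

// Scan from the back (most recently pushed first), dropping stale
// entries as they are encountered, and return the first entry large
// enough for the request; returns {0, 0} if nothing fits.
Entry takeFirstFit(std::vector<Entry>& list, size_t size, const IsValidAndFree& isValidAndFree)
{
    for (size_t i = list.size(); i-- > 0; ) {
        Entry entry = list[i];
        if (!isValidAndFree(entry)) {
            list.erase(list.begin() + i); // incremental cleanup of stale records
            continue;
        }
        if (entry.size < size)
            continue;
        list.erase(list.begin() + i);
        return entry;
    }
    return { 0, 0 };
}
```

The bounded-depth variants in the patch (`segregatedFreeListSearchDepth`) additionally cap how far the scan goes and prefer the lowest address among candidates, which is why `take` may spuriously report no fit while `takeGreedy` never does.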

Modified: trunk/Source/bmalloc/bmalloc/SegregatedFreeList.h (180575 => 180576)


--- trunk/Source/bmalloc/bmalloc/SegregatedFreeList.h	2015-02-24 18:55:10 UTC (rev 180575)
+++ trunk/Source/bmalloc/bmalloc/SegregatedFreeList.h	2015-02-24 19:12:20 UTC (rev 180576)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2014, 2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -26,7 +26,7 @@
 #ifndef SegregatedFreeList_h
 #define SegregatedFreeList_h
 
-#include "Range.h"
+#include "LargeObject.h"
 #include "Vector.h"
 #include <array>
 
@@ -36,34 +36,34 @@
 public:
     SegregatedFreeList();
 
-    void insert(const Range&);
+    void insert(const LargeObject&);
 
-    // Returns a reasonable fit for the provided size, or Range() if no fit
-    // is found. May return Range() spuriously if searching takes too long.
+    // Returns a reasonable fit for the provided size, or LargeObject() if no fit
+    // is found. May return LargeObject() spuriously if searching takes too long.
     // Incrementally removes stale items from the free list while searching.
-    // Does not eagerly remove the returned range from the free list.
-    Range take(size_t);
+    // Does not eagerly remove the returned object from the free list.
+    LargeObject take(size_t);
 
     // Returns a reasonable fit for the provided alignment and size, or
-    // a reasonable fit for the provided unaligned size, or Range() if no fit
-    // is found. May return Range() spuriously if searching takes too long.
-    // Incrementally removes stale items from the free list while searching.
-    // Does not eagerly remove the returned range from the free list.
-    Range take(size_t alignment, size_t, size_t unalignedSize);
+    // a reasonable fit for the provided unaligned size, or LargeObject() if no
+    // fit is found. May return LargeObject() spuriously if searching takes too
+    // long. Incrementally removes stale items from the free list while
+    // searching. Does not eagerly remove the returned object from the free list.
+    LargeObject take(size_t alignment, size_t, size_t unalignedSize);
 
-    // Returns an unreasonable fit for the provided size, or Range() if no fit
-    // is found. Never returns Range() spuriously.
-    // Incrementally removes stale items from the free list while searching.
-    // Eagerly removes the returned range from the free list.
-    Range takeGreedy(size_t);
+    // Returns an unreasonable fit for the provided size, or LargeObject() if no
+    // fit is found. Never returns LargeObject() spuriously. Incrementally
+    // removes stale items from the free list while searching. Eagerly removes
+    // the returned object from the free list.
+    LargeObject takeGreedy(size_t);
     
 private:
     typedef Vector<Range> List;
 
     List& select(size_t);
-    Range take(List&, size_t);
-    Range take(List&, size_t alignment, size_t, size_t unalignedSize);
-    Range takeGreedy(List&, size_t);
+    LargeObject take(List&, size_t);
+    LargeObject take(List&, size_t alignment, size_t, size_t unalignedSize);
+    LargeObject takeGreedy(List&, size_t);
 
     std::array<List, 19> m_lists;
 };

Modified: trunk/Source/bmalloc/bmalloc/VMHeap.cpp (180575 => 180576)


--- trunk/Source/bmalloc/bmalloc/VMHeap.cpp	2015-02-24 18:55:10 UTC (rev 180575)
+++ trunk/Source/bmalloc/bmalloc/VMHeap.cpp	2015-02-24 19:12:20 UTC (rev 180576)
@@ -53,7 +53,7 @@
         m_mediumPages.push(it);
 
     LargeChunk* largeChunk = superChunk->largeChunk();
-    m_largeRanges.insert(BoundaryTag::init(largeChunk));
+    m_largeObjects.insert(LargeObject(BoundaryTag::init(largeChunk).begin()));
 }
 
 } // namespace bmalloc

Modified: trunk/Source/bmalloc/bmalloc/VMHeap.h (180575 => 180576)


--- trunk/Source/bmalloc/bmalloc/VMHeap.h	2015-02-24 18:55:10 UTC (rev 180575)
+++ trunk/Source/bmalloc/bmalloc/VMHeap.h	2015-02-24 19:12:20 UTC (rev 180576)
@@ -29,6 +29,7 @@
 #include "AsyncTask.h"
 #include "FixedVector.h"
 #include "LargeChunk.h"
+#include "LargeObject.h"
 #include "MediumChunk.h"
 #include "Range.h"
 #include "SegregatedFreeList.h"
@@ -51,19 +52,19 @@
 
     SmallPage* allocateSmallPage();
     MediumPage* allocateMediumPage();
-    Range allocateLargeRange(size_t);
-    Range allocateLargeRange(size_t alignment, size_t, size_t unalignedSize);
+    LargeObject allocateLargeRange(size_t);
+    LargeObject allocateLargeRange(size_t alignment, size_t, size_t unalignedSize);
 
     void deallocateSmallPage(std::unique_lock<StaticMutex>&, SmallPage*);
     void deallocateMediumPage(std::unique_lock<StaticMutex>&, MediumPage*);
-    void deallocateLargeRange(std::unique_lock<StaticMutex>&, Range);
+    void deallocateLargeRange(std::unique_lock<StaticMutex>&, LargeObject&);
 
 private:
     void grow();
 
     Vector<SmallPage*> m_smallPages;
     Vector<MediumPage*> m_mediumPages;
-    SegregatedFreeList m_largeRanges;
+    SegregatedFreeList m_largeObjects;
 #if BOS(DARWIN)
     Zone m_zone;
 #endif
@@ -85,26 +86,26 @@
     return m_mediumPages.pop();
 }
 
-inline Range VMHeap::allocateLargeRange(size_t size)
+inline LargeObject VMHeap::allocateLargeRange(size_t size)
 {
-    Range range = m_largeRanges.take(size);
-    if (!range) {
+    LargeObject largeObject = m_largeObjects.take(size);
+    if (!largeObject) {
         grow();
-        range = m_largeRanges.take(size);
-        BASSERT(range);
+        largeObject = m_largeObjects.take(size);
+        BASSERT(largeObject);
     }
-    return range;
+    return largeObject;
 }
 
-inline Range VMHeap::allocateLargeRange(size_t alignment, size_t size, size_t unalignedSize)
+inline LargeObject VMHeap::allocateLargeRange(size_t alignment, size_t size, size_t unalignedSize)
 {
-    Range range = m_largeRanges.take(alignment, size, unalignedSize);
-    if (!range) {
+    LargeObject largeObject = m_largeObjects.take(alignment, size, unalignedSize);
+    if (!largeObject) {
         grow();
-        range = m_largeRanges.take(alignment, size, unalignedSize);
-        BASSERT(range);
+        largeObject = m_largeObjects.take(alignment, size, unalignedSize);
+        BASSERT(largeObject);
     }
-    return range;
+    return largeObject;
 }
 
 inline void VMHeap::deallocateSmallPage(std::unique_lock<StaticMutex>& lock, SmallPage* page)
@@ -125,27 +126,20 @@
     m_mediumPages.push(page);
 }
 
-inline void VMHeap::deallocateLargeRange(std::unique_lock<StaticMutex>& lock, Range range)
+inline void VMHeap::deallocateLargeRange(std::unique_lock<StaticMutex>& lock, LargeObject& largeObject)
 {
-    BeginTag* beginTag = LargeChunk::beginTag(range.begin());
-    EndTag* endTag = LargeChunk::endTag(range.begin(), range.size());
-    
     // Temporarily mark this range as allocated to prevent clients from merging
     // with it and then reallocating it while we're messing with its physical pages.
-    beginTag->setFree(false);
-    endTag->setFree(false);
+    largeObject.setFree(false);
 
     lock.unlock();
-    vmDeallocatePhysicalPagesSloppy(range.begin(), range.size());
+    vmDeallocatePhysicalPagesSloppy(largeObject.begin(), largeObject.size());
     lock.lock();
 
-    beginTag->setFree(true);
-    endTag->setFree(true);
+    largeObject.setFree(true);
+    largeObject.setHasPhysicalPages(false);
 
-    beginTag->setHasPhysicalPages(false);
-    endTag->setHasPhysicalPages(false);
-
-    m_largeRanges.insert(range);
+    m_largeObjects.insert(largeObject);
 }
 
 } // namespace bmalloc
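
`VMHeap::allocateLargeRange` above follows a take/grow/retry pattern: try the free list, and if that fails, grow the heap and try again, asserting that the second attempt succeeds. A toy sketch of that control flow (the `MiniHeap` type and its 1024-unit chunk size are invented for illustration; the real `grow()` maps a `SuperChunk` and seeds the segregated free list):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical stand-in: free extents tracked only by size.
struct MiniHeap {
    std::vector<size_t> freeSizes;
    size_t growCount = 0;

    // First-fit scan from the back; 0 means no fit.
    size_t take(size_t size)
    {
        for (size_t i = freeSizes.size(); i-- > 0; ) {
            if (freeSizes[i] >= size) {
                size_t result = freeSizes[i];
                freeSizes.erase(freeSizes.begin() + i);
                return result;
            }
        }
        return 0;
    }

    void grow()
    {
        ++growCount;
        freeSizes.push_back(1024); // assumed chunk size, large enough for any request
    }

    size_t allocate(size_t size)
    {
        size_t result = take(size);
        if (!result) {
            grow();
            result = take(size);
            assert(result); // grow() is assumed to satisfy any single request
        }
        return result;
    }
};
```

The invariant that makes the `BASSERT` after the second `take` safe is that `grow()` always contributes an extent at least as large as the largest allowed request.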

Modified: trunk/Source/bmalloc/bmalloc.xcodeproj/project.pbxproj (180575 => 180576)


--- trunk/Source/bmalloc/bmalloc.xcodeproj/project.pbxproj	2015-02-24 18:55:10 UTC (rev 180575)
+++ trunk/Source/bmalloc/bmalloc.xcodeproj/project.pbxproj	2015-02-24 19:12:20 UTC (rev 180576)
@@ -22,6 +22,7 @@
 		1448C30118F3754C00502839 /* bmalloc.h in Headers */ = {isa = PBXBuildFile; fileRef = 1448C2FE18F3754300502839 /* bmalloc.h */; settings = {ATTRIBUTES = (Private, ); }; };
 		14895D911A3A319C0006235D /* Environment.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 14895D8F1A3A319C0006235D /* Environment.cpp */; };
 		14895D921A3A319C0006235D /* Environment.h in Headers */ = {isa = PBXBuildFile; fileRef = 14895D901A3A319C0006235D /* Environment.h */; settings = {ATTRIBUTES = (Private, ); }; };
+		14C6216F1A9A9A6200E72293 /* LargeObject.h in Headers */ = {isa = PBXBuildFile; fileRef = 14C6216E1A9A9A6200E72293 /* LargeObject.h */; settings = {ATTRIBUTES = (Private, ); }; };
 		14C919C918FCC59F0028DB43 /* BPlatform.h in Headers */ = {isa = PBXBuildFile; fileRef = 14C919C818FCC59F0028DB43 /* BPlatform.h */; settings = {ATTRIBUTES = (Private, ); }; };
 		14CC394C18EA8858004AFE34 /* libbmalloc.a in Frameworks */ = {isa = PBXBuildFile; fileRef = 14F271BE18EA3963008C152F /* libbmalloc.a */; };
 		14DD788C18F48CAE00950702 /* LargeChunk.h in Headers */ = {isa = PBXBuildFile; fileRef = 147AAA8818CD17CE002201E4 /* LargeChunk.h */; settings = {ATTRIBUTES = (Private, ); }; };
@@ -132,6 +133,7 @@
 		14B650C618F39F4800751968 /* bmalloc.xcconfig */ = {isa = PBXFileReference; lastKnownFileType = text.xcconfig; path = bmalloc.xcconfig; sourceTree = "<group>"; };
 		14B650C718F39F4800751968 /* DebugRelease.xcconfig */ = {isa = PBXFileReference; lastKnownFileType = text.xcconfig; path = DebugRelease.xcconfig; sourceTree = "<group>"; };
 		14B650C918F3A04200751968 /* mbmalloc.xcconfig */ = {isa = PBXFileReference; lastKnownFileType = text.xcconfig; path = mbmalloc.xcconfig; sourceTree = "<group>"; };
+		14C6216E1A9A9A6200E72293 /* LargeObject.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = LargeObject.h; path = bmalloc/LargeObject.h; sourceTree = "<group>"; };
 		14C919C818FCC59F0028DB43 /* BPlatform.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = BPlatform.h; path = bmalloc/BPlatform.h; sourceTree = "<group>"; };
 		14CC394418EA8743004AFE34 /* libmbmalloc.dylib */ = {isa = PBXFileReference; explicitFileType = "compiled.mach-o.dylib"; includeInIndex = 0; path = libmbmalloc.dylib; sourceTree = BUILT_PRODUCTS_DIR; };
 		14D9DB4517F2447100EAAB79 /* FixedVector.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; lineEnding = 0; name = FixedVector.h; path = bmalloc/FixedVector.h; sourceTree = "<group>"; xcLanguageSpecificationIdentifier = xcode.lang.objcpp; };
@@ -218,6 +220,7 @@
 				14105E7B18DBD7AF003A106E /* BoundaryTagInlines.h */,
 				1417F64618B54A700076FA3F /* EndTag.h */,
 				147AAA8818CD17CE002201E4 /* LargeChunk.h */,
+				14C6216E1A9A9A6200E72293 /* LargeObject.h */,
 				146BEE2118C845AE0002D5A2 /* SegregatedFreeList.cpp */,
 				146BEE1E18C841C50002D5A2 /* SegregatedFreeList.h */,
 			);
@@ -319,6 +322,7 @@
 				143CB81D19022BC900B16A45 /* StaticMutex.h in Headers */,
 				14DD78B918F48D6B00950702 /* MediumTraits.h in Headers */,
 				1448C30118F3754C00502839 /* bmalloc.h in Headers */,
+				14C6216F1A9A9A6200E72293 /* LargeObject.h in Headers */,
 				14DD789A18F48D4A00950702 /* Deallocator.h in Headers */,
 				1400274C18F89C3D00115C97 /* SegregatedFreeList.h in Headers */,
 				14DD788D18F48CC600950702 /* BeginTag.h in Headers */,