Title: [173224] trunk/Source/bmalloc
Revision: 173224
Author: gga...@apple.com
Date: 2014-09-03 14:00:33 -0700 (Wed, 03 Sep 2014)

Log Message

bmalloc crashes on the EWS bots (due to bad large object allocation)
https://bugs.webkit.org/show_bug.cgi?id=136469

Reviewed by Andreas Kling.

It's possible to convince bmalloc to perform a bad large object allocation,
through these steps:

(1) Insert object A into freelist F0.

(2) Split, merge and split again A's neighbors such that object B is
inserted into freelist F0, with boundary tag and size equal to object A,
but pointer not completely equal to object A. Put object B at the head of F0.

(3) Allocate some other object from F0, swapping its position in the
freelist with object B, such that object A is now ahead of object B.

--> Now, the next allocation for size A/B will allocate object A, which
has a slightly wrong idea about where the object actually begins.
Immediately, you'll corrupt a little memory, and over time, you'll also
corrupt boundary tag metadata.
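The failure mode above can be modeled in a few lines. This is a simplified sketch, not bmalloc code: the `FreeEntry` struct and both helper functions are hypothetical stand-ins. It shows why a size-only check lets a stale free-list entry with a slightly different begin pointer satisfy an allocation, while checking the begin pointer as well rejects it.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Hypothetical model of a free-list entry: just the fields the bug needs.
struct FreeEntry {
    uintptr_t begin;
    size_t size;
};

// Pre-fix validation: size alone decides whether an entry is still good.
bool isValidBySizeOnly(const FreeEntry& tagged, const FreeEntry& candidate)
{
    return tagged.size == candidate.size;
}

// Post-fix validation: the begin pointer must match too.
bool isValidBySizeAndBegin(const FreeEntry& tagged, const FreeEntry& candidate)
{
    return tagged.size == candidate.size && tagged.begin == candidate.begin;
}
```

With object A at 0x10000 and a stale entry B at 0x10040, both of size 2048, the size-only check accepts the stale entry while the size-and-begin check does not.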

The solution is to store the begin pointer in the boundary tag. Luckily,
this doesn't make the tag any bigger, and it's not a noticeable slowdown
on MallocBench.
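The "doesn't make the tag any bigger" claim comes down to a bit budget, which can be sketched as follows. The struct here is illustrative, not the real BoundaryTag, though the field names mirror the patch: 3 flag bits plus 5 compact-begin bits plus 24 size bits still total one 32-bit word.

```cpp
#include <cstddef>

// Illustrative bit budget for the tag; constants mirror the patch.
constexpr std::size_t flagBits = 3;
constexpr std::size_t compactBeginBits = 5;
constexpr std::size_t sizeBits = 32 - flagBits - compactBeginBits; // 24

struct TagBits {
    unsigned isFree : 1;
    unsigned isEnd : 1;
    unsigned hasPhysicalPages : 1;
    unsigned compactBegin : compactBeginBits;
    unsigned size : sizeBits;
};

static_assert(sizeof(TagBits) == sizeof(unsigned),
    "adding compactBegin must not grow the tag past one word");
```

With largeMin = 1024 and largeAlignment = 64, only 16 distinct compact-begin values are needed, so 5 bits are enough.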

* bmalloc/Algorithm.h:
(bmalloc::rightShift):
* bmalloc/BeginTag.h:
(bmalloc::BeginTag::isInFreeList): This is the bug fix. Make sure to
validate the start pointer when popping off the free list. Through a
very uncommon set of steps, it is possible to have an item in the free
list that is valid by all accounts except for its start pointer.

* bmalloc/BoundaryTag.h:
(bmalloc::BoundaryTag::compactBegin):
(bmalloc::BoundaryTag::setRange):
(bmalloc::BoundaryTag::setSize): Deleted. Record a compact version of the
start pointer. We don't need the whole pointer -- just the offset, in
largeAlignment increments, into the relevant boundary tag bucket.
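The encoding described here can be sketched directly. The constants match Sizes.h in this patch; writing it as a free function on a raw `uintptr_t` is a simplification of the real `BoundaryTag::compactBegin(const Range&)`. Mask off the begin pointer's offset within its largeMin-sized bucket, then shift by log2(largeAlignment):

```cpp
#include <cstdint>

constexpr uintptr_t largeMin = 1024;          // from Sizes.h
constexpr uintptr_t largeAlignment = 64;      // from Sizes.h
constexpr uintptr_t largeAlignmentShift = 6;  // log2(largeAlignment)

// Compact begin: the begin pointer's offset within its largeMin-sized
// bucket, counted in largeAlignment increments. Only
// largeMin / largeAlignment = 16 distinct values exist, so 5 bits suffice.
constexpr unsigned compactBegin(uintptr_t begin)
{
    return static_cast<unsigned>((begin & (largeMin - 1)) >> largeAlignmentShift);
}
```

Two objects of equal size whose begin pointers differ by one largeAlignment increment, say 0x10000 and 0x10040, now get distinct tag values (0 and 1), which is exactly the distinction the buggy size-only check was missing.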

* bmalloc/BoundaryTagInlines.h:
(bmalloc::validateNext):
(bmalloc::BoundaryTag::init):
(bmalloc::BoundaryTag::mergeLarge):
(bmalloc::BoundaryTag::splitLarge):
* bmalloc/SegregatedFreeList.cpp:
(bmalloc::SegregatedFreeList::insert):
(bmalloc::SegregatedFreeList::takeGreedy):
(bmalloc::SegregatedFreeList::take): Provide the whole range instead of
the size when establishing a boundary tag, as required by the new
interface.
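The interface change above can be sketched with a minimal stand-in; `MiniTag` and this two-field `Range` are hypothetical, and the constants 1023 and 6 assume largeMin = 1024 and largeAlignment = 64 from Sizes.h. Callers that previously passed only a size now pass the whole range, so the tag records both the size and the compact begin in one step:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

struct Range {
    uintptr_t begin;
    std::size_t size;
};

constexpr unsigned compactBegin(uintptr_t begin)
{
    return static_cast<unsigned>((begin & 1023u) >> 6); // largeMin - 1, log2(largeAlignment)
}

// Hypothetical stand-in for BoundaryTag with the new setRange() interface.
struct MiniTag {
    unsigned m_compactBegin : 5;
    unsigned m_size : 24;

    void setRange(const Range& range)
    {
        m_compactBegin = compactBegin(range.begin);
        m_size = static_cast<unsigned>(range.size);
        assert(m_size == range.size); // the size must fit in the bitfield
    }
};
```

A sentinel tag, which has no real begin pointer, can be established as `setRange(Range{0, largeMin})`, matching the `Range(nullptr, largeMin)` calls in BoundaryTagInlines.h.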

* bmalloc/Sizes.h:

Modified Paths

Diff

Modified: trunk/Source/bmalloc/ChangeLog (173223 => 173224)


--- trunk/Source/bmalloc/ChangeLog	2014-09-03 20:59:58 UTC (rev 173223)
+++ trunk/Source/bmalloc/ChangeLog	2014-09-03 21:00:33 UTC (rev 173224)
@@ -1,3 +1,60 @@
+2014-09-02  Geoffrey Garen  <gga...@apple.com>
+
+        bmalloc crashes on the EWS bots (due to bad large object allocation)
+        https://bugs.webkit.org/show_bug.cgi?id=136469
+
+        Reviewed by Andreas Kling.
+
+        It's possible to convince bmalloc to perform a bad large object allocation,
+        through these steps:
+
+        (1) Insert object A into freelist F0.
+
+        (2) Split, merge and split again A's neighbors such that object B is
+        inserted into freelist F0, with boundary tag and size equal to object A,
+        but pointer not completely equal to object A. Put object B at the head of F0.
+
+        (3) Allocate some other object from F0, swapping its position in the
+        freelist with object B, such that object A is now ahead of object B.
+
+        --> Now, the next allocation for size A/B will allocate object A, which
+        has a slightly wrong idea about where the object actually begins.
+        Immediately, you'll corrupt a little memory, and over time, you'll also
+        corrupt boundary tag metadata.
+
+        The solution is to store the begin pointer in the boundary tag. Luckily,
+        this doesn't make the tag any bigger, and it's not a noticeable slowdown
+        on MallocBench.
+
+        * bmalloc/Algorithm.h:
+        (bmalloc::rightShift):
+        * bmalloc/BeginTag.h:
+        (bmalloc::BeginTag::isInFreeList): This is the bug fix. Make sure to
+        validate the start pointer when popping off the free list. Through a
+        very uncommon set of steps, it is possible to have an item in the free
+        list that is valid by all accounts except for its start pointer.
+
+        * bmalloc/BoundaryTag.h:
+        (bmalloc::BoundaryTag::compactBegin):
+        (bmalloc::BoundaryTag::setRange):
+        (bmalloc::BoundaryTag::setSize): Deleted. Record a compact version of the
+        start pointer. We don't need the whole pointer -- just the offset, in
+        largeAlignment increments, into the relevant boundary tag bucket.
+
+        * bmalloc/BoundaryTagInlines.h:
+        (bmalloc::validateNext):
+        (bmalloc::BoundaryTag::init):
+        (bmalloc::BoundaryTag::mergeLarge):
+        (bmalloc::BoundaryTag::splitLarge):
+        * bmalloc/SegregatedFreeList.cpp:
+        (bmalloc::SegregatedFreeList::insert):
+        (bmalloc::SegregatedFreeList::takeGreedy):
+        (bmalloc::SegregatedFreeList::take): Provide the whole range instead of
+        the size when establishing a boundary tag, as required by the new
+        interface.
+
+        * bmalloc/Sizes.h:
+
 2014-08-14  Geoffrey Garen  <gga...@apple.com>
 
         Fixed a bmalloc crash seen on the EWS bot

Modified: trunk/Source/bmalloc/bmalloc/Algorithm.h (173223 => 173224)


--- trunk/Source/bmalloc/bmalloc/Algorithm.h	2014-09-03 20:59:58 UTC (rev 173223)
+++ trunk/Source/bmalloc/bmalloc/Algorithm.h	2014-09-03 21:00:33 UTC (rev 173224)
@@ -46,12 +46,17 @@
 {
     return a < b ? a : b;
 }
-    
+
 template<typename T> inline constexpr T mask(T value, uintptr_t mask)
 {
     return reinterpret_cast<T>(reinterpret_cast<uintptr_t>(value) & mask);
 }
 
+template<typename T> inline constexpr T rightShift(T value, uintptr_t shift)
+{
+    return reinterpret_cast<T>(reinterpret_cast<uintptr_t>(value) >> shift);
+}
+
 template<typename T> inline constexpr bool test(T value, uintptr_t mask)
 {
     return !!(reinterpret_cast<uintptr_t>(value) & mask);

Modified: trunk/Source/bmalloc/bmalloc/BeginTag.h (173223 => 173224)


--- trunk/Source/bmalloc/bmalloc/BeginTag.h	2014-09-03 20:59:58 UTC (rev 173223)
+++ trunk/Source/bmalloc/bmalloc/BeginTag.h	2014-09-03 21:00:33 UTC (rev 173224)
@@ -32,12 +32,12 @@
 
 class BeginTag : public BoundaryTag {
 public:
-    bool isInFreeList(size_t);
+    bool isInFreeList(const Range&);
 };
 
-inline bool BeginTag::isInFreeList(size_t size)
+inline bool BeginTag::isInFreeList(const Range& range)
 {
-    return isFree() && !isEnd() && this->size() == size;
+    return isFree() && !isEnd() && this->size() == range.size() && this->compactBegin() == compactBegin(range);
 }
 
 } // namespace bmalloc

Modified: trunk/Source/bmalloc/bmalloc/BoundaryTag.h (173223 => 173224)


--- trunk/Source/bmalloc/bmalloc/BoundaryTag.h	2014-09-03 20:59:58 UTC (rev 173223)
+++ trunk/Source/bmalloc/bmalloc/BoundaryTag.h	2014-09-03 21:00:33 UTC (rev 173224)
@@ -27,6 +27,7 @@
 #define BoundaryTag_h
 
 #include "BAssert.h"
+#include "Range.h"
 #include "Sizes.h"
 
 namespace bmalloc {
@@ -41,6 +42,7 @@
     static Range init(LargeChunk*);
     static Range deallocate(void*);
     static void allocate(size_t, Range&, Range& leftover, bool& hasPhysicalPages);
+    static unsigned compactBegin(const Range&);
 
     bool isXLarge() { return m_size == xLargeMarker; }
     void setXLarge() { m_size = xLargeMarker; }
@@ -58,17 +60,21 @@
     void clear() { memset(this, 0, sizeof(*this)); }
     
     size_t size() { return m_size; }
-    void setSize(size_t);
+    unsigned compactBegin() { return m_compactBegin; }
+
+    void setRange(const Range&);
     
     EndTag* prev();
     BeginTag* next();
 
 private:
     static const size_t flagBits = 3;
-    static const size_t sizeBits = bitCount<unsigned>() - flagBits;
+    static const size_t compactBeginBits = 5;
+    static const size_t sizeBits = bitCount<unsigned>() - flagBits - compactBeginBits;
     static const size_t xLargeMarker = 1; // This size is unused because our minimum object size is greater than it.
 
     static_assert(largeMin > xLargeMarker, "largeMin must provide enough umbrella to fit xLargeMarker.");
+    static_assert((1 << compactBeginBits) - 1 >= largeMin / largeAlignment, "compactBegin must be encodable in a BoundaryTag.");
     static_assert((1 << sizeBits) - 1 >= largeMax, "largeMax must be encodable in a BoundaryTag.");
 
     static void splitLarge(BeginTag*, size_t size, EndTag*& endTag, Range&, Range& leftover);
@@ -79,13 +85,23 @@
     bool m_isFree: 1;
     bool m_isEnd: 1;
     bool m_hasPhysicalPages: 1;
+    unsigned m_compactBegin: compactBeginBits;
     unsigned m_size: sizeBits;
 };
 
-inline void BoundaryTag::setSize(size_t size)
+inline unsigned BoundaryTag::compactBegin(const Range& range)
 {
-    m_size = static_cast<unsigned>(size);
-    BASSERT(this->size() == size);
+    return static_cast<unsigned>(
+        reinterpret_cast<uintptr_t>(
+            rightShift(
+                mask(range.begin(), largeMin - 1), largeAlignmentShift)));
+}
+
+inline void BoundaryTag::setRange(const Range& range)
+{
+    m_compactBegin = compactBegin(range);
+    m_size = static_cast<unsigned>(range.size());
+    BASSERT(this->size() == range.size());
     BASSERT(!isXLarge());
 }
 

Modified: trunk/Source/bmalloc/bmalloc/BoundaryTagInlines.h (173223 => 173224)


--- trunk/Source/bmalloc/bmalloc/BoundaryTagInlines.h	2014-09-03 20:59:58 UTC (rev 173223)
+++ trunk/Source/bmalloc/bmalloc/BoundaryTagInlines.h	2014-09-03 21:00:33 UTC (rev 173224)
@@ -64,7 +64,7 @@
 
 static inline void validateNext(BeginTag* next, const Range& range)
 {
-    if (next->size() == largeMin && !next->isFree()) // Right sentinel tag.
+    if (next->size() == largeMin && !next->compactBegin() && !next->isFree()) // Right sentinel tag.
         return;
 
     void* nextObject = range.end();
@@ -84,7 +84,7 @@
     Range range(chunk->begin(), chunk->end() - chunk->begin());
 
     BeginTag* beginTag = LargeChunk::beginTag(range.begin());
-    beginTag->setSize(range.size());
+    beginTag->setRange(range);
     beginTag->setFree(true);
     beginTag->setHasPhysicalPages(false);
 
@@ -97,12 +97,12 @@
     
     EndTag* leftSentinel = beginTag->prev();
     BASSERT(leftSentinel >= static_cast<void*>(chunk));
-    leftSentinel->setSize(largeMin);
+    leftSentinel->setRange(Range(nullptr, largeMin));
     leftSentinel->setFree(false);
 
     BeginTag* rightSentinel = endTag->next();
     BASSERT(rightSentinel < static_cast<void*>(range.begin()));
-    rightSentinel->setSize(largeMin);
+    rightSentinel->setRange(Range(nullptr, largeMin));
     rightSentinel->setFree(false);
     
     return range;
@@ -150,7 +150,7 @@
     if (next->isFree())
         mergeLargeRight(endTag, next, range, hasPhysicalPages);
 
-    beginTag->setSize(range.size());
+    beginTag->setRange(range);
     beginTag->setFree(true);
     beginTag->setHasPhysicalPages(hasPhysicalPages);
 
@@ -175,25 +175,26 @@
 
 INLINE void BoundaryTag::splitLarge(BeginTag* beginTag, size_t size, EndTag*& endTag, Range& range, Range& leftover)
 {
-    beginTag->setSize(size);
+    leftover = Range(range.begin() + size, range.size() - size);
+    range = Range(range.begin(), size);
 
+    beginTag->setRange(range);
+
     EndTag* splitEndTag = LargeChunk::endTag(range.begin(), size);
     if (splitEndTag != static_cast<BoundaryTag*>(beginTag))
         *splitEndTag = *beginTag;
 
-    leftover = Range(range.begin() + size, range.size() - size);
     BASSERT(leftover.size() >= largeMin);
     BeginTag* leftoverBeginTag = LargeChunk::beginTag(leftover.begin());
     *leftoverBeginTag = *beginTag;
-    leftoverBeginTag->setSize(leftover.size());
+    leftoverBeginTag->setRange(leftover);
 
     if (leftoverBeginTag != static_cast<BoundaryTag*>(endTag))
         *endTag = *leftoverBeginTag;
 
-    validate(beginTag->prev(), Range(range.begin(), size), leftoverBeginTag);
+    validate(beginTag->prev(), range, leftoverBeginTag);
     validate(leftoverBeginTag->prev(), leftover, endTag->next());
 
-    range = Range(range.begin(), size);
     endTag = splitEndTag;
 }
 

Modified: trunk/Source/bmalloc/bmalloc/SegregatedFreeList.cpp (173223 => 173224)


--- trunk/Source/bmalloc/bmalloc/SegregatedFreeList.cpp	2014-09-03 20:59:58 UTC (rev 173223)
+++ trunk/Source/bmalloc/bmalloc/SegregatedFreeList.cpp	2014-09-03 21:00:33 UTC (rev 173224)
@@ -39,7 +39,7 @@
 {
 IF_DEBUG(
     BeginTag* beginTag = LargeChunk::beginTag(range.begin());
-    BASSERT(beginTag->isInFreeList(range.size()));
+    BASSERT(beginTag->isInFreeList(range));
 )
 
     auto& list = select(range.size());
@@ -66,7 +66,7 @@
         // We don't eagerly remove items when we merge and/or split ranges,
         // so we need to validate each free list entry before using it.
         BeginTag* beginTag = LargeChunk::beginTag(range.begin());
-        if (!beginTag->isInFreeList(range.size())) {
+        if (!beginTag->isInFreeList(range)) {
             list.pop(i);
             continue;
         }
@@ -114,7 +114,7 @@
         // We don't eagerly remove items when we merge and/or split ranges, so
         // we need to validate each free list entry before using it.
         BeginTag* beginTag = LargeChunk::beginTag(range.begin());
-        if (!beginTag->isInFreeList(range.size())) {
+        if (!beginTag->isInFreeList(range)) {
             list.pop(i);
             continue;
         }

Modified: trunk/Source/bmalloc/bmalloc/Sizes.h (173223 => 173224)


--- trunk/Source/bmalloc/bmalloc/Sizes.h	2014-09-03 20:59:58 UTC (rev 173223)
+++ trunk/Source/bmalloc/bmalloc/Sizes.h	2014-09-03 21:00:33 UTC (rev 173224)
@@ -68,6 +68,8 @@
     static const size_t largeChunkMask = ~(largeChunkSize - 1ul);
 
     static const size_t largeAlignment = 64;
+    static const size_t largeAlignmentShift = 6;
+    static_assert(1 << largeAlignmentShift == largeAlignment, "largeAlignmentShift be log2(largeAlignment).");
     static const size_t largeMax = largeChunkSize * 99 / 100; // Plenty of room for metadata.
     static const size_t largeMin = 1024;
 
_______________________________________________
webkit-changes mailing list
webkit-changes@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-changes
