Title: [243948] trunk
Revision: 243948
Author: ysuz...@apple.com
Date: 2019-04-05 14:58:32 -0700 (Fri, 05 Apr 2019)

Log Message

SIGSEGV in JSC::BytecodeGenerator::addStringConstant
https://bugs.webkit.org/show_bug.cgi?id=196486

Reviewed by Saam Barati.

JSTests:

* stress/arrow-function-and-use-strict-directive.js: Added.
* stress/arrow-function-syntax.js: Added. Checking EOF token handling.
(checkSyntax):
(checkSyntaxError): Currently unused, but useful for testing more aspects of arrow function syntax.

Source/JavaScriptCore:

When parsing a FunctionExpression / FunctionDeclaration etc., we use the SyntaxChecker for the body of the function because we have no interest in the body's nodes at that time.
The nodes are parsed with the ASTBuilder later, when the function itself is parsed for code generation. This used to work well because every function ended with "}":
the SyntaxChecker lexes this "}" token, and the parser restores the context back to the ASTBuilder and continues parsing.
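
As an illustration (not part of the patch; "declaration" is a placeholder name), this is why the braced case is safe:

        function declaration() {        // while the surrounding program is built with the ASTBuilder,
            return "checked only";      // this body is parsed with the SyntaxChecker; its nodes are built later
        }                               // the SyntaxChecker stops at this "}", so the ASTBuilder lexes the next token itself
        "a string after the function";  // hence this token's string content is built normally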

But now we have brace-less ArrowFunctionExpressions of the form `arrow => expr`. Consider the following code.

        arrow => expr
        "string!"

We parse the arrow function's body with the SyntaxChecker. At that time, the "string!" token is lexed under the SyntaxChecker context. This means its string content may not be built,
since the SyntaxChecker is not always interested in the string content itself. After the parser is back to the ASTBuilder, we parse "string!" as an ExpressionStatement with a string constant,
generate a StringNode from the non-built identifier (nullptr), and thus accidentally create a StringNode holding nullptr.
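
Annotating the example above with which context lexes each token (illustrative; `expr` is an arbitrary identifier, as in the added test):

        arrow => expr   // the body is parsed with the SyntaxChecker, which also lexes the token that follows it,
        "string!"       // so its string content is never built; the ASTBuilder then wraps the nullptr identifier in a StringNode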

This patch fixes the problem. Its root cause is that the last token lexed in the previous context is reused. We add lexCurrentTokenAgainUnderCurrentContext, which re-lexes
the current token under the current context (which may be the ASTBuilder). This is done only when the caller's context is not the SyntaxChecker, which avoids unnecessary lexing.
We leverage the existing SavePoint mechanism to implement lexCurrentTokenAgainUnderCurrentContext cleanly.
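
For instance (illustrative only; "host", "arrow", and "expr" are placeholder names), the re-lex matters only when the surrounding parse is building an AST:

        function host() {       // while host's body is merely syntax-checked,
            arrow => expr       // the arrow body and the token after it are both lexed by the SyntaxChecker anyway,
            "string!"           // so re-lexing here would be wasted work
        }
        arrow => expr           // at the top level the ASTBuilder is the caller, so the token after the arrow body
        "string!"               // is re-lexed under the ASTBuilder and its string content gets built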

We also fix a bug in the existing SavePoint mechanism, exposed by the attached test script: when saving the LexerState, we did not save the line terminator status. This patch therefore introduces
lexWithoutClearingLineTerminator, which lexes the token without clearing the line terminator status.
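
For example (illustrative; these two lines also appear in the added test), automatic semicolon insertion after a brace-less arrow body relies on that status, so restoring a save point must not drop it:

        dispatch => accessible.children()   // re-lexing the next token goes through the save-point machinery,
        "use strict";                       // and ASI can only terminate the arrow statement here if the parser
                                            // still remembers the line terminator before the "use strict" token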

* parser/ASTBuilder.h:
(JSC::ASTBuilder::createString):
* parser/Lexer.cpp:
(JSC::Lexer<T>::parseMultilineComment):
(JSC::Lexer<T>::lexWithoutClearingLineTerminator): The EOF token should also record offset information. This offset information is correctly handled in Lexer::setOffset too.
(JSC::Lexer<T>::lex): Deleted.
* parser/Lexer.h:
(JSC::Lexer::hasLineTerminatorBeforeToken const):
(JSC::Lexer::setHasLineTerminatorBeforeToken):
(JSC::Lexer<T>::lex):
(JSC::Lexer::prevTerminator const): Deleted.
(JSC::Lexer::setTerminator): Deleted.
* parser/Parser.cpp:
(JSC::Parser<LexerType>::allowAutomaticSemicolon):
(JSC::Parser<LexerType>::parseSingleFunction):
(JSC::Parser<LexerType>::parseStatementListItem):
(JSC::Parser<LexerType>::maybeParseAsyncFunctionDeclarationStatement):
(JSC::Parser<LexerType>::parseFunctionInfo):
(JSC::Parser<LexerType>::parseClass):
(JSC::Parser<LexerType>::parseExportDeclaration):
(JSC::Parser<LexerType>::parseAssignmentExpression):
(JSC::Parser<LexerType>::parseYieldExpression):
(JSC::Parser<LexerType>::parseProperty):
(JSC::Parser<LexerType>::parsePrimaryExpression):
(JSC::Parser<LexerType>::parseMemberExpression):
* parser/Parser.h:
(JSC::Parser::nextWithoutClearingLineTerminator):
(JSC::Parser::lexCurrentTokenAgainUnderCurrentContext):
(JSC::Parser::internalSaveLexerState):
(JSC::Parser::restoreLexerState):

Modified Paths

    trunk/JSTests/ChangeLog
    trunk/Source/JavaScriptCore/ChangeLog
    trunk/Source/JavaScriptCore/parser/ASTBuilder.h
    trunk/Source/JavaScriptCore/parser/Lexer.cpp
    trunk/Source/JavaScriptCore/parser/Lexer.h
    trunk/Source/JavaScriptCore/parser/Parser.cpp
    trunk/Source/JavaScriptCore/parser/Parser.h

Added Paths

    trunk/JSTests/stress/arrow-function-and-use-strict-directive.js
    trunk/JSTests/stress/arrow-function-syntax.js

Diff

Modified: trunk/JSTests/ChangeLog (243947 => 243948)


--- trunk/JSTests/ChangeLog	2019-04-05 21:52:51 UTC (rev 243947)
+++ trunk/JSTests/ChangeLog	2019-04-05 21:58:32 UTC (rev 243948)
@@ -1,3 +1,15 @@
+2019-04-05  Yusuke Suzuki  <ysuz...@apple.com>
+
+        SIGSEGV in JSC::BytecodeGenerator::addStringConstant
+        https://bugs.webkit.org/show_bug.cgi?id=196486
+
+        Reviewed by Saam Barati.
+
+        * stress/arrow-function-and-use-strict-directive.js: Added.
+        * stress/arrow-function-syntax.js: Added. Checking EOF token handling.
+        (checkSyntax):
+        (checkSyntaxError): Currently unused, but useful for testing more aspects of arrow function syntax.
+
 2019-04-05  Caitlin Potter  <ca...@igalia.com>
 
         [JSC] Filter DontEnum properties in ProxyObject::getOwnPropertyNames()

Added: trunk/JSTests/stress/arrow-function-and-use-strict-directive.js (0 => 243948)


--- trunk/JSTests/stress/arrow-function-and-use-strict-directive.js	                        (rev 0)
+++ trunk/JSTests/stress/arrow-function-and-use-strict-directive.js	2019-04-05 21:58:32 UTC (rev 243948)
@@ -0,0 +1,18 @@
+// This test should not crash.
+
+dispatch => accessible.children()
+"use strict";
+
+dispatch2 => accessible.children()
+"use strict"
+
+var protected = 42;
+
+dispatch3 => "use strict"
+protected;
+
+async dispatch4 => hey
+"use strict";
+
+async dispatch4 => "use strict"
+protected;

Added: trunk/JSTests/stress/arrow-function-syntax.js (0 => 243948)


--- trunk/JSTests/stress/arrow-function-syntax.js	                        (rev 0)
+++ trunk/JSTests/stress/arrow-function-syntax.js	2019-04-05 21:58:32 UTC (rev 243948)
@@ -0,0 +1,27 @@
+function checkSyntax(src) {
+    try {
+        eval(src);
+    } catch (error) {
+        if (error instanceof SyntaxError)
+            throw new Error("Syntax Error: " + String(error) + "\n script: `" + src + "`");
+    }
+}
+
+function checkSyntaxError(src, message) {
+    var bError = false;
+    try {
+        eval(src);
+    } catch (error) {
+        bError = error instanceof SyntaxError && (String(error) === message || typeof message === 'undefined');
+    }
+    if (!bError) {
+        throw new Error("Expected syntax Error: " + message + "\n in script: `" + src + "`");
+    }
+}
+
+checkSyntax(`()=>42`);
+checkSyntax(`()=>42
+`);
+checkSyntax(`()=>42//Hello`);
+checkSyntax(`()=>42//Hello
+`);

Modified: trunk/Source/JavaScriptCore/ChangeLog (243947 => 243948)


--- trunk/Source/JavaScriptCore/ChangeLog	2019-04-05 21:52:51 UTC (rev 243947)
+++ trunk/Source/JavaScriptCore/ChangeLog	2019-04-05 21:58:32 UTC (rev 243948)
@@ -1,3 +1,61 @@
+2019-04-05  Yusuke Suzuki  <ysuz...@apple.com>
+
+        SIGSEGV in JSC::BytecodeGenerator::addStringConstant
+        https://bugs.webkit.org/show_bug.cgi?id=196486
+
+        Reviewed by Saam Barati.
+
+        When parsing a FunctionExpression / FunctionDeclaration etc., we use the SyntaxChecker for the body of the function because we have no interest in the body's nodes at that time.
+        The nodes are parsed with the ASTBuilder later, when the function itself is parsed for code generation. This used to work well because every function ended with "}":
+        the SyntaxChecker lexes this "}" token, and the parser restores the context back to the ASTBuilder and continues parsing.
+
+        But now we have brace-less ArrowFunctionExpressions of the form `arrow => expr`. Consider the following code.
+
+                arrow => expr
+                "string!"
+
+        We parse the arrow function's body with the SyntaxChecker. At that time, the "string!" token is lexed under the SyntaxChecker context. This means its string content may not be built,
+        since the SyntaxChecker is not always interested in the string content itself. After the parser is back to the ASTBuilder, we parse "string!" as an ExpressionStatement with a string constant,
+        generate a StringNode from the non-built identifier (nullptr), and thus accidentally create a StringNode holding nullptr.
+
+        This patch fixes the problem. Its root cause is that the last token lexed in the previous context is reused. We add lexCurrentTokenAgainUnderCurrentContext, which re-lexes
+        the current token under the current context (which may be the ASTBuilder). This is done only when the caller's context is not the SyntaxChecker, which avoids unnecessary lexing.
+        We leverage the existing SavePoint mechanism to implement lexCurrentTokenAgainUnderCurrentContext cleanly.
+
+        We also fix a bug in the existing SavePoint mechanism, exposed by the attached test script: when saving the LexerState, we did not save the line terminator status. This patch therefore introduces
+        lexWithoutClearingLineTerminator, which lexes the token without clearing the line terminator status.
+
+        * parser/ASTBuilder.h:
+        (JSC::ASTBuilder::createString):
+        * parser/Lexer.cpp:
+        (JSC::Lexer<T>::parseMultilineComment):
+        (JSC::Lexer<T>::lexWithoutClearingLineTerminator): The EOF token should also record offset information. This offset information is correctly handled in Lexer::setOffset too.
+        (JSC::Lexer<T>::lex): Deleted.
+        * parser/Lexer.h:
+        (JSC::Lexer::hasLineTerminatorBeforeToken const):
+        (JSC::Lexer::setHasLineTerminatorBeforeToken):
+        (JSC::Lexer<T>::lex):
+        (JSC::Lexer::prevTerminator const): Deleted.
+        (JSC::Lexer::setTerminator): Deleted.
+        * parser/Parser.cpp:
+        (JSC::Parser<LexerType>::allowAutomaticSemicolon):
+        (JSC::Parser<LexerType>::parseSingleFunction):
+        (JSC::Parser<LexerType>::parseStatementListItem):
+        (JSC::Parser<LexerType>::maybeParseAsyncFunctionDeclarationStatement):
+        (JSC::Parser<LexerType>::parseFunctionInfo):
+        (JSC::Parser<LexerType>::parseClass):
+        (JSC::Parser<LexerType>::parseExportDeclaration):
+        (JSC::Parser<LexerType>::parseAssignmentExpression):
+        (JSC::Parser<LexerType>::parseYieldExpression):
+        (JSC::Parser<LexerType>::parseProperty):
+        (JSC::Parser<LexerType>::parsePrimaryExpression):
+        (JSC::Parser<LexerType>::parseMemberExpression):
+        * parser/Parser.h:
+        (JSC::Parser::nextWithoutClearingLineTerminator):
+        (JSC::Parser::lexCurrentTokenAgainUnderCurrentContext):
+        (JSC::Parser::internalSaveLexerState):
+        (JSC::Parser::restoreLexerState):
+
 2019-04-05  Caitlin Potter  <ca...@igalia.com>
 
         [JSC] Filter DontEnum properties in ProxyObject::getOwnPropertyNames()

Modified: trunk/Source/JavaScriptCore/parser/ASTBuilder.h (243947 => 243948)


--- trunk/Source/JavaScriptCore/parser/ASTBuilder.h	2019-04-05 21:52:51 UTC (rev 243947)
+++ trunk/Source/JavaScriptCore/parser/ASTBuilder.h	2019-04-05 21:58:32 UTC (rev 243948)
@@ -241,6 +241,7 @@
 
     ExpressionNode* createString(const JSTokenLocation& location, const Identifier* string)
     {
+        ASSERT(string);
         incConstants();
         return new (m_parserArena) StringNode(location, *string);
     }

Modified: trunk/Source/JavaScriptCore/parser/Lexer.cpp (243947 => 243948)


--- trunk/Source/JavaScriptCore/parser/Lexer.cpp	2019-04-05 21:52:51 UTC (rev 243947)
+++ trunk/Source/JavaScriptCore/parser/Lexer.cpp	2019-04-05 21:58:32 UTC (rev 243948)
@@ -1691,7 +1691,7 @@
 
         if (isLineTerminator(m_current)) {
             shiftLineTerminator();
-            m_terminator = true;
+            m_hasLineTerminatorBeforeToken = true;
         } else
             shift();
     }
@@ -1770,7 +1770,7 @@
 }
 
 template <typename T>
-JSTokenType Lexer<T>::lex(JSToken* tokenRecord, unsigned lexerFlags, bool strictMode)
+JSTokenType Lexer<T>::lexWithoutClearingLineTerminator(JSToken* tokenRecord, unsigned lexerFlags, bool strictMode)
 {
     JSTokenData* tokenData = &tokenRecord->m_data;
     JSTokenLocation* tokenLocation = &tokenRecord->m_location;
@@ -1781,18 +1781,19 @@
     ASSERT(m_buffer16.isEmpty());
 
     JSTokenType token = ERRORTOK;
-    m_terminator = false;
 
 start:
     skipWhitespace();
 
-    if (atEnd())
-        return EOFTOK;
-    
     tokenLocation->startOffset = currentOffset();
     ASSERT(currentOffset() >= currentLineStartOffset());
     tokenRecord->m_startPosition = currentPosition();
 
+    if (atEnd()) {
+        token = EOFTOK;
+        goto returnToken;
+    }
+
     CharacterType type;
     if (LIKELY(isLatin1(m_current)))
         type = static_cast<CharacterType>(typesOfLatin1Characters[m_current]);
@@ -1902,7 +1903,7 @@
         shift();
         if (m_current == '+') {
             shift();
-            token = (!m_terminator) ? PLUSPLUS : AUTOPLUSPLUS;
+            token = (!m_hasLineTerminatorBeforeToken) ? PLUSPLUS : AUTOPLUSPLUS;
             break;
         }
         if (m_current == '=') {
@@ -1916,13 +1917,13 @@
         shift();
         if (m_current == '-') {
             shift();
-            if ((m_atLineStart || m_terminator) && m_current == '>') {
+            if ((m_atLineStart || m_hasLineTerminatorBeforeToken) && m_current == '>') {
                 if (m_scriptMode == JSParserScriptMode::Classic) {
                     shift();
                     goto inSingleLineComment;
                 }
             }
-            token = (!m_terminator) ? MINUSMINUS : AUTOMINUSMINUS;
+            token = (!m_hasLineTerminatorBeforeToken) ? MINUSMINUS : AUTOMINUSMINUS;
             break;
         }
         if (m_current == '=') {
@@ -2293,7 +2294,7 @@
         ASSERT(isLineTerminator(m_current));
         shiftLineTerminator();
         m_atLineStart = true;
-        m_terminator = true;
+        m_hasLineTerminatorBeforeToken = true;
         m_lineStart = m_code;
         goto start;
     case CharacterPrivateIdentifierStart:
@@ -2333,13 +2334,16 @@
         auto endPosition = currentPosition();
 
         while (!isLineTerminator(m_current)) {
-            if (atEnd())
-                return EOFTOK;
+            if (atEnd()) {
+                token = EOFTOK;
+                fillTokenInfo(tokenRecord, token, lineNumber, endOffset, lineStartOffset, endPosition);
+                return token;
+            }
             shift();
         }
         shiftLineTerminator();
         m_atLineStart = true;
-        m_terminator = true;
+        m_hasLineTerminatorBeforeToken = true;
         m_lineStart = m_code;
         if (!lastTokenWasRestrKeyword())
             goto start;

Modified: trunk/Source/JavaScriptCore/parser/Lexer.h (243947 => 243948)


--- trunk/Source/JavaScriptCore/parser/Lexer.h	2019-04-05 21:52:51 UTC (rev 243947)
+++ trunk/Source/JavaScriptCore/parser/Lexer.h	2019-04-05 21:58:32 UTC (rev 243948)
@@ -65,6 +65,7 @@
     bool isReparsingFunction() const { return m_isReparsingFunction; }
 
     JSTokenType lex(JSToken*, unsigned, bool strictMode);
+    JSTokenType lexWithoutClearingLineTerminator(JSToken*, unsigned, bool strictMode);
     bool nextTokenIsColon();
     int lineNumber() const { return m_lineNumber; }
     ALWAYS_INLINE int currentOffset() const { return offsetFromSourcePtr(m_code); }
@@ -77,7 +78,7 @@
     JSTokenLocation lastTokenLocation() const { return m_lastTokenLocation; }
     void setLastLineNumber(int lastLineNumber) { m_lastLineNumber = lastLineNumber; }
     int lastLineNumber() const { return m_lastLineNumber; }
-    bool prevTerminator() const { return m_terminator; }
+    bool hasLineTerminatorBeforeToken() const { return m_hasLineTerminatorBeforeToken; }
     JSTokenType scanRegExp(JSToken*, UChar patternPrefix = 0);
     enum class RawStringsBuildMode { BuildRawStrings, DontBuildRawStrings };
     JSTokenType scanTemplateString(JSToken*, RawStringsBuildMode);
@@ -110,9 +111,9 @@
     {
         m_lineNumber = line;
     }
-    void setTerminator(bool terminator)
+    void setHasLineTerminatorBeforeToken(bool terminator)
     {
-        m_terminator = terminator;
+        m_hasLineTerminatorBeforeToken = terminator;
     }
 
     JSTokenType lexExpectIdentifier(JSToken*, unsigned, bool strictMode);
@@ -202,7 +203,7 @@
     Vector<LChar> m_buffer8;
     Vector<UChar> m_buffer16;
     Vector<UChar> m_bufferForRawTemplateString16;
-    bool m_terminator;
+    bool m_hasLineTerminatorBeforeToken;
     int m_lastToken;
 
     const SourceCode* m_source;
@@ -403,4 +404,11 @@
     return lex(tokenRecord, lexerFlags, strictMode);
 }
 
+template <typename T>
+ALWAYS_INLINE JSTokenType Lexer<T>::lex(JSToken* tokenRecord, unsigned lexerFlags, bool strictMode)
+{
+    m_hasLineTerminatorBeforeToken = false;
+    return lexWithoutClearingLineTerminator(tokenRecord, lexerFlags, strictMode);
+}
+
 } // namespace JSC

Modified: trunk/Source/JavaScriptCore/parser/Parser.cpp (243947 => 243948)


--- trunk/Source/JavaScriptCore/parser/Parser.cpp	2019-04-05 21:52:51 UTC (rev 243947)
+++ trunk/Source/JavaScriptCore/parser/Parser.cpp	2019-04-05 21:58:32 UTC (rev 243948)
@@ -345,7 +345,7 @@
 template <typename LexerType>
 bool Parser<LexerType>::allowAutomaticSemicolon()
 {
-    return match(CLOSEBRACE) || match(EOFTOK) || m_lexer->prevTerminator();
+    return match(CLOSEBRACE) || match(EOFTOK) || m_lexer->hasLineTerminatorBeforeToken();
 }
 
 template <typename LexerType>
@@ -625,7 +625,7 @@
     case IDENT:
         if (*m_token.m_data.ident == m_vm->propertyNames->async && !m_token.m_data.escaped) {
             next();
-            failIfFalse(match(FUNCTION) && !m_lexer->prevTerminator(), "Cannot parse the async function");
+            failIfFalse(match(FUNCTION) && !m_lexer->hasLineTerminatorBeforeToken(), "Cannot parse the async function");
             statement = parseAsyncFunctionDeclaration(context, ExportType::NotExported, DeclarationDefaultContext::Standard, functionConstructorParametersEndPosition);
             break;
         }
@@ -696,7 +696,7 @@
             // but could be mistakenly parsed as an AsyncFunctionExpression.
             SavePoint savePoint = createSavePoint();
             next();
-            if (UNLIKELY(match(FUNCTION) && !m_lexer->prevTerminator())) {
+            if (UNLIKELY(match(FUNCTION) && !m_lexer->hasLineTerminatorBeforeToken())) {
                 result = parseAsyncFunctionDeclaration(context);
                 break;
             }
@@ -2026,7 +2026,7 @@
     ASSERT(matchContextualKeyword(m_vm->propertyNames->async));
     SavePoint savePoint = createSavePoint();
     next();
-    if (match(FUNCTION) && !m_lexer->prevTerminator()) {
+    if (match(FUNCTION) && !m_lexer->hasLineTerminatorBeforeToken()) {
         const bool isAsync = true;
         result = parseFunctionDeclarationStatement(context, isAsync, parentAllowsFunctionDeclarationAsStatement);
         return true;
@@ -2421,7 +2421,7 @@
 
         matchOrFail(ARROWFUNCTION, "Expected a '=>' after arrow function parameter declaration");
 
-        if (m_lexer->prevTerminator())
+        if (m_lexer->hasLineTerminatorBeforeToken())
             failDueToUnexpectedToken();
 
         ASSERT(constructorKind == ConstructorKind::None);
@@ -2612,6 +2612,8 @@
         functionScope->fillParametersForSourceProviderCache(parameters, nonLocalCapturesFromParameterExpressions);
         newInfo = SourceProviderCacheItem::create(parameters);
     }
+
+    bool functionScopeWasStrictMode = functionScope->strictMode();
     
     popScope(functionScope, TreeBuilder::NeedsFreeVariableInfo);
     
@@ -2618,6 +2620,15 @@
     if (functionBodyType != ArrowFunctionBodyExpression) {
         matchOrFail(CLOSEBRACE, "Expected a closing '}' after a ", stringForFunctionMode(mode), " body");
         next();
+    } else {
+        // We need to lex the last token again because it was lexed under a different context, which may differ in two ways:
+        // 1. it may have had a different strict mode;
+        // 2. it may not have built strings for tokens.
+        // But (1) is not possible because we do not recognize the string literal in an ArrowFunctionBodyExpression as a directive, which is correct per the spec (`value => "use strict"`).
+        // So we only check the TreeBuilder's type here.
+        ASSERT_UNUSED(functionScopeWasStrictMode, functionScopeWasStrictMode == currentScope()->strictMode());
+        if (!std::is_same<TreeBuilder, SyntaxChecker>::value)
+            lexCurrentTokenAgainUnderCurrentContext();
     }
 
     if (newInfo)
@@ -2876,7 +2887,7 @@
                 if (!isGeneratorMethodParseMode(parseMode) && !isAsyncMethodParseMode(parseMode)) {
                     ident = m_token.m_data.ident;
                     next();
-                    if (match(OPENPAREN) || match(COLON) || match(EQUAL) || m_lexer->prevTerminator())
+                    if (match(OPENPAREN) || match(COLON) || match(EQUAL) || m_lexer->hasLineTerminatorBeforeToken())
                         break;
                     if (UNLIKELY(consume(TIMES)))
                         parseMode = SourceParseMode::AsyncGeneratorWrapperMethodMode;
@@ -3396,7 +3407,7 @@
         } else if (matchContextualKeyword(m_vm->propertyNames->async)) {
             SavePoint savePoint = createSavePoint();
             next();
-            if (match(FUNCTION) && !m_lexer->prevTerminator()) {
+            if (match(FUNCTION) && !m_lexer->hasLineTerminatorBeforeToken()) {
                 next();
                 if (match(IDENT))
                     localName = m_token.m_data.ident;
@@ -3541,7 +3552,7 @@
         case IDENT:
             if (*m_token.m_data.ident == m_vm->propertyNames->async && !m_token.m_data.escaped) {
                 next();
-                semanticFailIfFalse(match(FUNCTION) && !m_lexer->prevTerminator(), "Expected 'function' keyword following 'async' keyword with no preceding line terminator");
+                semanticFailIfFalse(match(FUNCTION) && !m_lexer->hasLineTerminatorBeforeToken(), "Expected 'function' keyword following 'async' keyword with no preceding line terminator");
                 DepthManager statementDepth(&m_statementDepth);
                 m_statementDepth = 1;
                 result = parseAsyncFunctionDeclaration(context, ExportType::Exported);
@@ -3659,7 +3670,7 @@
             if (UNLIKELY(classifier.indicatesPossibleAsyncArrowFunction())) {
                 if (matchContextualKeyword(m_vm->propertyNames->async)) {
                     next();
-                    isAsyncArrow = !m_lexer->prevTerminator();
+                    isAsyncArrow = !m_lexer->hasLineTerminatorBeforeToken();
                 }
             }
             if (isArrowFunctionParameters()) {
@@ -3776,7 +3787,7 @@
     ASSERT(match(YIELD));
     SavePoint savePoint = createSavePoint();
     next();
-    if (m_lexer->prevTerminator())
+    if (m_lexer->hasLineTerminatorBeforeToken())
         return context.createYield(location);
 
     bool delegate = consume(TIMES);
@@ -3936,7 +3947,7 @@
                     goto namedProperty;
                 }
 
-                failIfTrue(m_lexer->prevTerminator(), "Expected a property name following keyword 'async'");
+                failIfTrue(m_lexer->hasLineTerminatorBeforeToken(), "Expected a property name following keyword 'async'");
                 if (UNLIKELY(consume(TIMES)))
                     parseMode = SourceParseMode::AsyncGeneratorWrapperMethodMode;
                 else
@@ -4485,7 +4496,7 @@
             const Identifier* ident = m_token.m_data.ident;
             JSTokenLocation location(tokenLocation());
             next();
-            if (match(FUNCTION) && !m_lexer->prevTerminator())
+            if (match(FUNCTION) && !m_lexer->hasLineTerminatorBeforeToken())
                 return parseAsyncFunctionExpression(context);
 
             // Avoid using variable if it is an arrow function parameter
@@ -4751,7 +4762,7 @@
 
         base = parsePrimaryExpression(context);
         failIfFalse(base, "Cannot parse base expression");
-        if (UNLIKELY(isAsync && context.isResolve(base) && !m_lexer->prevTerminator())) {
+        if (UNLIKELY(isAsync && context.isResolve(base) && !m_lexer->hasLineTerminatorBeforeToken())) {
             if (matchSpecIdentifier()) {
                 // AsyncArrowFunction
                 forceClassifyExpressionError(ErrorIndicatesAsyncArrowFunction);

Modified: trunk/Source/JavaScriptCore/parser/Parser.h (243947 => 243948)


--- trunk/Source/JavaScriptCore/parser/Parser.h	2019-04-05 21:52:51 UTC (rev 243947)
+++ trunk/Source/JavaScriptCore/parser/Parser.h	2019-04-05 21:58:32 UTC (rev 243948)
@@ -1363,6 +1363,16 @@
         m_token.m_type = m_lexer->lex(&m_token, lexerFlags, strictMode());
     }
 
+    ALWAYS_INLINE void nextWithoutClearingLineTerminator(unsigned lexerFlags = 0)
+    {
+        int lastLine = m_token.m_location.line;
+        int lastTokenEnd = m_token.m_location.endOffset;
+        int lastTokenLineStart = m_token.m_location.lineStartOffset;
+        m_lastTokenEndPosition = JSTextPosition(lastLine, lastTokenEnd, lastTokenLineStart);
+        m_lexer->setLastLineNumber(lastLine);
+        m_token.m_type = m_lexer->lexWithoutClearingLineTerminator(&m_token, lexerFlags, strictMode());
+    }
+
     ALWAYS_INLINE void nextExpectIdentifier(unsigned lexerFlags = 0)
     {
         int lastLine = m_token.m_location.line;
@@ -1373,6 +1383,12 @@
         m_token.m_type = m_lexer->lexExpectIdentifier(&m_token, lexerFlags, strictMode());
     }
 
+    ALWAYS_INLINE void lexCurrentTokenAgainUnderCurrentContext()
+    {
+        auto savePoint = createSavePoint();
+        restoreSavePoint(savePoint);
+    }
+
     ALWAYS_INLINE bool nextTokenIsColon()
     {
         return m_lexer->nextTokenIsColon();
@@ -1762,6 +1778,7 @@
         unsigned oldLineStartOffset;
         unsigned oldLastLineNumber;
         unsigned oldLineNumber;
+        bool hasLineTerminatorBeforeToken;
     };
 
     // If you're using this directly, you probably should be using
@@ -1775,6 +1792,7 @@
         result.oldLineStartOffset = m_token.m_location.lineStartOffset;
         result.oldLastLineNumber = m_lexer->lastLineNumber();
         result.oldLineNumber = m_lexer->lineNumber();
+        result.hasLineTerminatorBeforeToken = m_lexer->hasLineTerminatorBeforeToken();
         ASSERT(static_cast<unsigned>(result.startOffset) >= result.oldLineStartOffset);
         return result;
     }
@@ -1784,7 +1802,8 @@
         // setOffset clears lexer errors.
         m_lexer->setOffset(lexerState.startOffset, lexerState.oldLineStartOffset);
         m_lexer->setLineNumber(lexerState.oldLineNumber);
-        next();
+        m_lexer->setHasLineTerminatorBeforeToken(lexerState.hasLineTerminatorBeforeToken);
+        nextWithoutClearingLineTerminator();
         m_lexer->setLastLineNumber(lexerState.oldLastLineNumber);
     }
 