https://github.com/andykaylor created https://github.com/llvm/llvm-project/pull/191479
This adds CIR support for handling full expression cleanups in conditional branches. Because CIR uses structured control flow, these cleanups must be handled differently than in classic codegen.

CIR speculatively creates a cleanup scope when an ExprWithCleanups contains a conditional operator, and maintains a dedicated stack of these deferred cleanups. At the end of the full expression, the deferred cleanups are added to the cleanup scope, each guarded by an active flag that records whether the branch requiring the cleanup was actually taken during evaluation of the conditional expression. This is similar to the mechanism used for lifetime-extended cleanups, but the cleanups are moved to the main EH stack at a different point, so we need to maintain two separate pending cleanup stacks. We are able to use the same PendingCleanupEntry class for both.

Assisted-by: Cursor / claude-4.6-opus-high

>From 52455de146e1a9d56d3e9548d055824f942169cd Mon Sep 17 00:00:00 2001
From: Andy Kaylor <[email protected]>
Date: Mon, 23 Mar 2026 17:06:45 -0700
Subject: [PATCH] [CIR] Handle full expression cleanups in conditional branches

This adds CIR support for handling full expression cleanups in conditional
branches. Because CIR uses structured control flow, these cleanups must be
handled differently than in classic codegen.

CIR speculatively creates a cleanup scope when an ExprWithCleanups contains
a conditional operator, and maintains a dedicated stack of these deferred
cleanups. At the end of the full expression, the deferred cleanups are added
to the cleanup scope, each guarded by an active flag that records whether
the branch requiring the cleanup was actually taken during evaluation of the
conditional expression. This is similar to the mechanism used for
lifetime-extended cleanups, but the cleanups are moved to the main EH stack
at a different point, so we need to maintain two separate pending cleanup
stacks.
We are able to use the same PendingCleanupEntry class for both. Assisted-by: Cursor / claude-4.6-opus-high --- clang/lib/CIR/CodeGen/CIRGenCleanup.cpp | 157 +++++- clang/lib/CIR/CodeGen/CIRGenDecl.cpp | 14 +- clang/lib/CIR/CodeGen/CIRGenExpr.cpp | 10 +- clang/lib/CIR/CodeGen/CIRGenExprAggregate.cpp | 3 + clang/lib/CIR/CodeGen/CIRGenExprScalar.cpp | 29 +- clang/lib/CIR/CodeGen/CIRGenFunction.h | 70 ++- clang/lib/CIR/CodeGen/CIRGenStmt.cpp | 2 + .../CIR/CodeGen/cleanup-conditional-eh.cpp | 494 ++++++++++++++++++ .../test/CIR/CodeGen/cleanup-conditional.cpp | 419 +++++++++++++++ 9 files changed, 1174 insertions(+), 24 deletions(-) create mode 100644 clang/test/CIR/CodeGen/cleanup-conditional-eh.cpp create mode 100644 clang/test/CIR/CodeGen/cleanup-conditional.cpp diff --git a/clang/lib/CIR/CodeGen/CIRGenCleanup.cpp b/clang/lib/CIR/CodeGen/CIRGenCleanup.cpp index d8d440a60110e..576b957612289 100644 --- a/clang/lib/CIR/CodeGen/CIRGenCleanup.cpp +++ b/clang/lib/CIR/CodeGen/CIRGenCleanup.cpp @@ -19,11 +19,37 @@ #include "CIRGenCleanup.h" #include "CIRGenFunction.h" +#include "clang/AST/RecursiveASTVisitor.h" #include "clang/CIR/MissingFeatures.h" using namespace clang; using namespace clang::CIRGen; +namespace { +/// Return true if the expression tree contains an AbstractConditionalOperator +/// (ternary ?:), which is the only construct whose CIR codegen calls +/// ConditionalEvaluation::beginEvaluation() and thus causes cleanups to be +/// deferred via pushFullExprCleanup. Logical &&/|| do NOT call +/// beginEvaluation(); their branch-local cleanups are handled by LexicalScope. +class ConditionalEvaluationFinder + : public RecursiveASTVisitor<ConditionalEvaluationFinder> { + bool foundConditional = false; + +public: + bool found() const { return foundConditional; } + + bool VisitAbstractConditionalOperator(AbstractConditionalOperator *) { + foundConditional = true; + return false; + } + + // Don't cross evaluation-context boundaries. 
+ bool TraverseLambdaExpr(LambdaExpr *) { return true; } + bool TraverseBlockExpr(BlockExpr *) { return true; } + bool TraverseStmtExpr(StmtExpr *) { return true; } +}; +} // namespace + //===----------------------------------------------------------------------===// // CIRGenFunction cleanup related //===----------------------------------------------------------------------===// @@ -34,6 +60,133 @@ void CIRGenFunction::emitCXXTemporary(const CXXTemporary *temporary, pushDestroy(NormalAndEHCleanup, ptr, tempType, destroyCXXObject); } +Address CIRGenFunction::createCleanupActiveFlag() { + assert(isInConditionalBranch()); + mlir::Location loc = builder.getUnknownLoc(); + + // Place the alloca in the function entry block so it dominates everything, + // including both regions of any enclosing cir.cleanup.scope. We can't rely + // on the default curLexScope path because we may be inside a ternary branch + // whose LexicalScope would capture the alloca. + Address active = createTempAllocaWithoutCast( + builder.getBoolTy(), CharUnits::One(), loc, "cleanup.cond", + /*arraySize=*/nullptr, + builder.getBestAllocaInsertPoint(getCurFunctionEntryBlock())); + + // Initialize to false before the outermost conditional. + { + mlir::OpBuilder::InsertionGuard guard(builder); + builder.restoreInsertionPoint(outermostConditional->getInsertPoint()); + builder.createFlagStore(loc, false, active.getPointer()); + } + + // Set to true at the current location (inside the conditional branch). + builder.createFlagStore(loc, true, active.getPointer()); + + return active; +} + +void CIRGenFunction::enterFullExprCleanupScope(const Expr *subExpr) { + inFullExprCleanupScope = true; + // Eagerly create the CleanupScopeOp only when the expression contains a + // construct that will enter a ConditionalEvaluation. This ensures the scope + // structurally wraps the entire expression. 
For expressions without + // conditionals we skip it entirely so that values defined inside nested + // cleanup scopes remain directly accessible. + ConditionalEvaluationFinder finder; + finder.TraverseStmt(const_cast<Expr *>(subExpr)); + if (finder.found()) + createFullExprCleanupScope(); +} + +void CIRGenFunction::createFullExprCleanupScope() { + assert(inFullExprCleanupScope && "not in a full-expression cleanup scope"); + assert(!fullExprCleanupScope && "scope already created"); + + mlir::Location loc = builder.getUnknownLoc(); + cir::CleanupKind cleanupKind = getLangOpts().Exceptions + ? cir::CleanupKind::All + : cir::CleanupKind::Normal; + fullExprCleanupScope = cir::CleanupScopeOp::create( + builder, loc, cleanupKind, + /*bodyBuilder=*/ + [&](mlir::OpBuilder &b, mlir::Location loc) {}, + /*cleanupBuilder=*/ + [&](mlir::OpBuilder &b, mlir::Location loc) {}); + + mlir::Block &bodyBlock = fullExprCleanupScope.getBodyRegion().front(); + builder.setInsertionPointToEnd(&bodyBlock); +} + +void CIRGenFunction::exitFullExprCleanupScope() { + inFullExprCleanupScope = false; + + if (!fullExprCleanupScope) + return; + + cir::CleanupScopeOp scope = fullExprCleanupScope; + fullExprCleanupScope = nullptr; + if (!deferredConditionalCleanupStack.empty()) { + // Terminate the body region. + mlir::Block &bodyBlock = scope.getBodyRegion().front(); + { + mlir::OpBuilder::InsertionGuard guard(builder); + builder.setInsertionPointToEnd(&bodyBlock); + if (bodyBlock.empty() || + !bodyBlock.back().hasTrait<mlir::OpTrait::IsTerminator>()) + builder.createYield(scope.getLoc()); + } + + // Emit each deferred cleanup directly into the pre-created scope's + // cleanup region rather than going through the EH stack (which would + // create a second CleanupScopeOp). 
+ { + mlir::OpBuilder::InsertionGuard guard(builder); + mlir::Block &cleanupBlock = scope.getCleanupRegion().front(); + builder.setInsertionPointToEnd(&cleanupBlock); + + for (const PendingCleanupEntry &entry : + llvm::reverse(deferredConditionalCleanupStack)) { + if (entry.activeFlag.isValid()) { + mlir::Value flag = + builder.createLoad(scope.getLoc(), entry.activeFlag); + cir::IfOp::create( + builder, scope.getLoc(), flag, /*withElseRegion=*/false, + [&](mlir::OpBuilder &b, mlir::Location loc) { + emitDestroy(entry.addr, entry.type, entry.destroyer); + builder.createYield(loc); + }); + } else { + emitDestroy(entry.addr, entry.type, entry.destroyer); + } + } + builder.createYield(scope.getLoc()); + } + deferredConditionalCleanupStack.clear(); + + // Move the builder to after the scope in the parent block so that + // subsequent code (e.g. value reloads) lands outside the scope. + builder.setInsertionPointAfter(scope); + return; + } + + // The scope was created (because the AST contained a conditional) but no + // conditional cleanups were actually deferred. Inline the body back into + // the parent block and erase the empty scope. + mlir::Block *parentBlock = scope->getBlock(); + mlir::Block &bodyBlock = scope.getBodyRegion().front(); + + if (!bodyBlock.empty() && + bodyBlock.back().hasTrait<mlir::OpTrait::IsTerminator>()) + bodyBlock.back().erase(); + + mlir::Block::iterator afterScope = std::next(scope->getIterator()); + parentBlock->getOperations().splice(scope->getIterator(), + bodyBlock.getOperations()); + scope->erase(); + builder.setInsertionPoint(parentBlock, afterScope); +} + //===----------------------------------------------------------------------===// // EHScopeStack //===----------------------------------------------------------------------===// @@ -425,9 +578,9 @@ void CIRGenFunction::popCleanupBlocks( popCleanupBlocks(oldCleanupStackDepth, valuesToReload); // Promote deferred lifetime-extended cleanups onto the EH scope stack. 
- for (const LifetimeExtendedCleanupEntry &cleanup : llvm::make_range( + for (const PendingCleanupEntry &cleanup : llvm::make_range( lifetimeExtendedCleanupStack.begin() + oldLifetimeExtendedSize, lifetimeExtendedCleanupStack.end())) - pushLifetimeExtendedCleanupToEHStack(cleanup); + pushPendingCleanupToEHStack(cleanup); lifetimeExtendedCleanupStack.truncate(oldLifetimeExtendedSize); } diff --git a/clang/lib/CIR/CodeGen/CIRGenDecl.cpp b/clang/lib/CIR/CodeGen/CIRGenDecl.cpp index b96b822609c10..38edc6d6bcb84 100644 --- a/clang/lib/CIR/CodeGen/CIRGenDecl.cpp +++ b/clang/lib/CIR/CodeGen/CIRGenDecl.cpp @@ -10,6 +10,7 @@ // //===----------------------------------------------------------------------===// +#include "CIRGenCleanup.h" #include "CIRGenConstantEmitter.h" #include "CIRGenFunction.h" #include "mlir/IR/Location.h" @@ -1032,10 +1033,19 @@ void CIRGenFunction::pushLifetimeExtendedDestroy(CleanupKind cleanupKind, pushCleanupAfterFullExpr(cleanupKind, addr, type, destroyer); } -void CIRGenFunction::pushLifetimeExtendedCleanupToEHStack( - const LifetimeExtendedCleanupEntry &entry) { +void CIRGenFunction::pushPendingCleanupToEHStack( + const PendingCleanupEntry &entry) { ehStack.pushCleanup<DestroyObject>(entry.kind, entry.addr, entry.type, entry.destroyer); + + if (entry.activeFlag.isValid()) { + EHCleanupScope &scope = cast<EHCleanupScope>(*ehStack.begin()); + scope.setActiveFlag(entry.activeFlag); + if (scope.isNormalCleanup()) + scope.setTestFlagInNormalCleanup(); + if (scope.isEHCleanup()) + scope.setTestFlagInEHCleanup(); + } } /// Destroys all the elements of the given array, beginning from last to first. 
diff --git a/clang/lib/CIR/CodeGen/CIRGenExpr.cpp b/clang/lib/CIR/CodeGen/CIRGenExpr.cpp index a306cc68dff8e..646255faeb347 100644 --- a/clang/lib/CIR/CodeGen/CIRGenExpr.cpp +++ b/clang/lib/CIR/CodeGen/CIRGenExpr.cpp @@ -1710,8 +1710,16 @@ static Address createReferenceTemporary(CIRGenFunction &cgf, extDeclAlloca = extDeclAddrIter->second.getDefiningOp<cir::AllocaOp>(); } mlir::OpBuilder::InsertPoint ip; - if (extDeclAlloca) + if (extDeclAlloca) { ip = {extDeclAlloca->getBlock(), extDeclAlloca->getIterator()}; + } else if (cgf.isInConditionalBranch() && + m->getStorageDuration() == SD_FullExpression) { + // Place in the function entry block so the alloca dominates both + // regions of any enclosing cir.cleanup.scope. The default path + // would use curLexScope which may be a ternary branch. + ip = cgf.getBuilder().getBestAllocaInsertPoint( + cgf.getCurFunctionEntryBlock()); + } return cgf.createMemTemp(ty, cgf.getLoc(m->getSourceRange()), cgf.getCounterRefTmpAsString(), /*alloca=*/nullptr, ip); diff --git a/clang/lib/CIR/CodeGen/CIRGenExprAggregate.cpp b/clang/lib/CIR/CodeGen/CIRGenExprAggregate.cpp index ffce8a6bf86a7..2a21b274103cb 100644 --- a/clang/lib/CIR/CodeGen/CIRGenExprAggregate.cpp +++ b/clang/lib/CIR/CodeGen/CIRGenExprAggregate.cpp @@ -980,7 +980,10 @@ void AggExprEmitter::VisitExprWithCleanups(ExprWithCleanups *e) { builder.restoreInsertionPoint(scopeBegin); CIRGenFunction::LexicalScope lexScope{cgf, scopeLoc, builder.getInsertionBlock()}; + + cgf.enterFullExprCleanupScope(e->getSubExpr()); Visit(e->getSubExpr()); + cgf.exitFullExprCleanupScope(); } } diff --git a/clang/lib/CIR/CodeGen/CIRGenExprScalar.cpp b/clang/lib/CIR/CodeGen/CIRGenExprScalar.cpp index b1498f376725d..47ef4f814b531 100644 --- a/clang/lib/CIR/CodeGen/CIRGenExprScalar.cpp +++ b/clang/lib/CIR/CodeGen/CIRGenExprScalar.cpp @@ -1597,10 +1597,33 @@ mlir::Value ScalarExprEmitter::emitCompoundAssign( mlir::Value ScalarExprEmitter::VisitExprWithCleanups(ExprWithCleanups *e) { 
CIRGenFunction::RunCleanupsScope cleanups(cgf); + + cgf.enterFullExprCleanupScope(e->getSubExpr()); mlir::Value v = Visit(e->getSubExpr()); - // Defend against dominance problems caused by jumps out of expression - // evaluation through the shared cleanup block. - cleanups.forceCleanup({&v}); + + bool hasDeferredCleanups = !cgf.deferredConditionalCleanupStack.empty(); + + // When deferred conditional cleanups exist, the expression result lives + // inside the cleanup scope body (an MLIR region). Spill it to a temporary + // (allocated in the function entry block) while the builder is still inside + // the body; we reload it after exitFullExprCleanupScope moves the builder + // outside the scope. + Address spill = Address::invalid(); + if (v && hasDeferredCleanups) { + spill = cgf.createDefaultAlignTempAlloca(v.getType(), v.getLoc(), + "tmp.exprcleanup"); + cgf.getBuilder().createStore(v.getLoc(), v, spill); + } + + cgf.exitFullExprCleanupScope(); + + if (hasDeferredCleanups) { + cleanups.forceCleanup({}); + if (spill.isValid()) + v = cgf.getBuilder().createLoad(v.getLoc(), spill); + } else { + cleanups.forceCleanup({&v}); + } return v; } diff --git a/clang/lib/CIR/CodeGen/CIRGenFunction.h b/clang/lib/CIR/CodeGen/CIRGenFunction.h index 88c7996eab569..597626ddc76a8 100644 --- a/clang/lib/CIR/CodeGen/CIRGenFunction.h +++ b/clang/lib/CIR/CodeGen/CIRGenFunction.h @@ -94,22 +94,36 @@ class CIRGenFunction : public CIRGenTypeCache { typedef void Destroyer(CIRGenFunction &cgf, Address addr, QualType ty); - /// An entry in the lifetime-extended cleanup stack. Each entry represents a - /// cleanup that was deferred past a full-expression boundary (e.g., - /// destroying a temporary bound to a local reference). When the enclosing - /// scope exits, these entries are promoted to the EH scope stack. + /// A cleanup entry that will be promoted onto the EH scope stack at a later + /// point. 
Used by both the lifetime-extended cleanup stack (promoted when + /// the enclosing scope exits) and the deferred conditional cleanup stack + /// (promoted at the enclosing full-expression level). /// - /// Currently only DestroyObject cleanups are lifetime-extended. When other - /// cleanup types are needed (e.g., CallLifetimeEnd), this struct can be - /// extended with a std::variant of cleanup data types. - struct LifetimeExtendedCleanupEntry { + /// Currently only DestroyObject cleanups use this. When other cleanup types + /// are needed (e.g., CallLifetimeEnd), this struct can be extended with a + /// std::variant of cleanup data types. + struct PendingCleanupEntry { CleanupKind kind; Address addr; QualType type; Destroyer *destroyer; + Address activeFlag = Address::invalid(); }; - llvm::SmallVector<LifetimeExtendedCleanupEntry> lifetimeExtendedCleanupStack; + llvm::SmallVector<PendingCleanupEntry> lifetimeExtendedCleanupStack; + + /// Deferred cleanup entries from conditional branches within the current + /// full-expression. Emitted into the cleanup region of fullExprCleanupScope + /// by exitFullExprCleanupScope. + llvm::SmallVector<PendingCleanupEntry> deferredConditionalCleanupStack; + + /// True between enterFullExprCleanupScope and exitFullExprCleanupScope. + bool inFullExprCleanupScope = false; + + /// The CleanupScopeOp for the current full-expression, if one is needed. + /// Created eagerly by enterFullExprCleanupScope() when the sub-expression + /// contains a conditional, so expressions without conditionals pay no cost. + cir::CleanupScopeOp fullExprCleanupScope = nullptr; GlobalDecl curSEHParent; @@ -1009,17 +1023,45 @@ class CIRGenFunction : public CIRGenTypeCache { void deactivateCleanupBlock(EHScopeStack::stable_iterator cleanup, mlir::Operation *dominatingIP); + /// Create an active flag variable for use with conditional cleanups.
The + /// flag is initialized to false before the outermost conditional and set to + /// true at the current insertion point (inside the conditional branch). + Address createCleanupActiveFlag(); + + /// Enter a full-expression cleanup scope. If \p subExpr contains a + /// conditional operator (ternary ?:), eagerly creates a CleanupScopeOp that + /// will wrap the entire expression. Otherwise no scope is created. + void enterFullExprCleanupScope(const Expr *subExpr); + + /// Create the CleanupScopeOp for the current full-expression. + /// Called from enterFullExprCleanupScope when a conditional is detected. + void createFullExprCleanupScope(); + + /// Finalize the full-expression cleanup scope after the sub-expression. + void exitFullExprCleanupScope(); + + /// Promote a single pending cleanup entry onto the EH scope stack. If the + /// entry has a valid activeFlag, the cleanup is configured as conditional. + /// Defined in CIRGenDecl.cpp where the concrete cleanup types are visible. + void pushPendingCleanupToEHStack(const PendingCleanupEntry &entry); + /// Push a cleanup to be run at the end of the current full-expression. Safe /// against the possibility that we're currently inside a /// conditionally-evaluated expression. template <class T, class... As> void pushFullExprCleanup(CleanupKind kind, As... a) { - // If we're not in a conditional branch, or if none of the - // arguments requires saving, then use the unconditional cleanup. if (!isInConditionalBranch()) return ehStack.pushCleanup<T>(kind, a...); - cgm.errorNYI("pushFullExprCleanup in conditional branch"); + // Defer the cleanup until exitFullExprCleanupScope. We can't push to + // the EH stack now because the ternary's inner LexicalScope would pop + // it prematurely. The scope must have been eagerly created by + // enterFullExprCleanupScope (which detected the conditional in the AST).
+ assert(fullExprCleanupScope && + "conditional cleanup pushed but no full-expression scope created"); + Address activeFlag = createCleanupActiveFlag(); + deferredConditionalCleanupStack.push_back( + PendingCleanupEntry{kind, a..., activeFlag}); } /// Queue a cleanup to be pushed after finishing the current full-expression. @@ -1271,10 +1313,6 @@ class CIRGenFunction : public CIRGenTypeCache { QualType type, Destroyer *destroyer, bool useEHCleanupForArray); - /// Promote a single lifetime-extended cleanup entry onto the EH scope stack. - /// Defined in CIRGenDecl.cpp where the concrete cleanup types are visible. - void pushLifetimeExtendedCleanupToEHStack( - const LifetimeExtendedCleanupEntry &entry); Destroyer *getDestroyer(clang::QualType::DestructionKind kind); diff --git a/clang/lib/CIR/CodeGen/CIRGenStmt.cpp b/clang/lib/CIR/CodeGen/CIRGenStmt.cpp index 07d1d62053ea6..93059d769816f 100644 --- a/clang/lib/CIR/CodeGen/CIRGenStmt.cpp +++ b/clang/lib/CIR/CodeGen/CIRGenStmt.cpp @@ -667,7 +667,9 @@ mlir::LogicalResult CIRGenFunction::emitReturnStmt(const ReturnStmt &s) { builder.restoreInsertionPoint(scopeBody); CIRGenFunction::LexicalScope lexScope{*this, scopeLoc, builder.getInsertionBlock()}; + enterFullExprCleanupScope(rv); handleReturnVal(); + exitFullExprCleanupScope(); } } diff --git a/clang/test/CIR/CodeGen/cleanup-conditional-eh.cpp b/clang/test/CIR/CodeGen/cleanup-conditional-eh.cpp new file mode 100644 index 0000000000000..a6fd88ddaa7cd --- /dev/null +++ b/clang/test/CIR/CodeGen/cleanup-conditional-eh.cpp @@ -0,0 +1,494 @@ +// Exceptions-enabled variant of cleanup-conditional.cpp. +// When -fcxx-exceptions is active, cleanup scopes use "cleanup all" (both +// normal and exception paths) and the LLVM lowering emits invoke/landingpad +// instead of plain calls for operations that can throw. 
+// +// RUN: %clang_cc1 -triple x86_64-unknown-linux-gnu -fclangir -emit-cir -fcxx-exceptions -fexceptions %s -o %t.cir +// RUN: FileCheck --input-file=%t.cir %s --check-prefix=CIR +// RUN: %clang_cc1 -triple x86_64-unknown-linux-gnu -fclangir -emit-llvm -fcxx-exceptions -fexceptions %s -o %t-cir.ll +// RUN: FileCheck --input-file=%t-cir.ll %s --check-prefix=LLVM +// RUN: %clang_cc1 -triple x86_64-unknown-linux-gnu -emit-llvm -fcxx-exceptions -fexceptions %s -o %t.ll +// RUN: FileCheck --input-file=%t.ll %s --check-prefix=OGCG + +struct S { + S(); + ~S(); + int get(); +}; + +struct A { + A(); + ~A(); + int get(); +}; + +struct B { + B(); + ~B(); + int get(); +}; + +void test_ternary_temporary(bool c, int x) { + int result = c ? S().get() : x; +} +// CIR-LABEL: @_Z22test_ternary_temporarybi +// CIR: %[[TMP:.*]] = cir.alloca !rec_S, !cir.ptr<!rec_S>, ["ref.tmp0"] +// CIR: %[[ACTIVE:.*]] = cir.alloca !cir.bool, !cir.ptr<!cir.bool>, ["cleanup.cond"] +// CIR: cir.cleanup.scope { +// CIR: %[[COND:.*]] = cir.load {{.*}} : !cir.ptr<!cir.bool>, !cir.bool +// CIR: %[[FALSE:.*]] = cir.const #false +// CIR: cir.store %[[FALSE]], %[[ACTIVE]] : !cir.bool, !cir.ptr<!cir.bool> +// CIR: %{{.*}} = cir.ternary(%[[COND]], true { +// CIR: cir.call @_ZN1SC1Ev(%[[TMP]]) +// CIR: %[[TRUE:.*]] = cir.const #true +// CIR: cir.store %[[TRUE]], %[[ACTIVE]] : !cir.bool, !cir.ptr<!cir.bool> +// CIR: %[[GET_RESULT:.*]] = cir.call @_ZN1S3getEv(%[[TMP]]) +// CIR: cir.yield %[[GET_RESULT]] : !s32i +// CIR: }, false { +// CIR: cir.yield +// CIR: cir.yield +// With exceptions, the cleanup runs on both normal and EH paths. 
+// CIR: } cleanup all { +// CIR: %[[IS_ACTIVE:.*]] = cir.load {{.*}} %[[ACTIVE]] +// CIR: cir.if %[[IS_ACTIVE]] { +// CIR: cir.call @_ZN1SD1Ev(%[[TMP]]) +// CIR: } +// CIR: cir.yield +// CIR: } + +// LLVM-LABEL: define dso_local void @_Z22test_ternary_temporarybi( +// LLVM-SAME: personality ptr @__gxx_personality_v0 +// LLVM: %[[TMP:.*]] = alloca %struct.S +// LLVM: %[[ACTIVE:.*]] = alloca i8 +// LLVM: store i8 0, ptr %[[ACTIVE]] +// LLVM: br i1 %{{.*}}, label %[[TRUE_BR:.*]], label %[[FALSE_BR:.*]] +// Constructor and get() become invoke, unwinding to the landing pad. +// LLVM: [[TRUE_BR]]: +// LLVM: invoke void @_ZN1SC1Ev({{.*}} %[[TMP]]) +// LLVM-NEXT: to label %[[CTOR_CONT:.*]] unwind label %[[PAD:.*]] +// LLVM: [[CTOR_CONT]]: +// LLVM: store i8 1, ptr %[[ACTIVE]] +// LLVM: %[[GET_RESULT:.*]] = invoke noundef i32 @_ZN1S3getEv({{.*}} %[[TMP]]) +// LLVM-NEXT: to label %[[GET_CONT:.*]] unwind label %[[PAD]] +// LLVM: [[GET_CONT]]: +// LLVM: br label %[[MERGE:.*]] +// LLVM: [[FALSE_BR]]: +// LLVM: %[[XVAL:.*]] = load i32, ptr %{{.*}} +// LLVM: br label %[[MERGE]] +// LLVM: [[MERGE]]: +// LLVM: %[[PHI:.*]] = phi i32 [ %[[XVAL]], %[[FALSE_BR]] ], [ %[[GET_RESULT]], %[[GET_CONT]] ] +// Normal cleanup: check active flag, conditionally run destructor. +// LLVM: %{{.*}} = load i8, ptr %[[ACTIVE]] +// LLVM: br i1 %{{.*}}, label %[[DTOR:.*]], label %[[SKIP_DTOR:.*]] +// LLVM: [[DTOR]]: +// LLVM: call void @_ZN1SD1Ev({{.*}} %[[TMP]]) +// LLVM: br label %[[SKIP_DTOR]] +// EH cleanup: landingpad runs the same active-flag-guarded destructor. 
+// LLVM: [[PAD]]: +// LLVM: landingpad { ptr, i32 } +// LLVM-NEXT: cleanup +// LLVM: %{{.*}} = load i8, ptr %[[ACTIVE]] +// LLVM: br i1 %{{.*}}, label %[[EH_DTOR:.*]], label %[[EH_SKIP_DTOR:.*]] +// LLVM: [[EH_DTOR]]: +// LLVM: call void @_ZN1SD1Ev({{.*}} %[[TMP]]) +// LLVM: br label %[[EH_SKIP_DTOR]] +// LLVM: [[EH_SKIP_DTOR]]: +// LLVM: resume { ptr, i32 } + +// OGCG-LABEL: define dso_local void @_Z22test_ternary_temporarybi( +// OGCG-SAME: personality ptr @__gxx_personality_v0 +// OGCG: entry: +// OGCG: store i1 false, ptr %[[ACTIVE:.*]] +// OGCG: br i1 %{{.*}}, label %[[TRUE_BR:.*]], label %[[FALSE_BR:.*]] +// The constructor is a plain call — no active cleanup exists yet. +// OGCG: [[TRUE_BR]]: +// OGCG: call void @_ZN1SC1Ev({{.*}} %[[TMP:.*]]) +// OGCG: store i1 true, ptr %[[ACTIVE]] +// With the destructor now active, get() becomes invoke. +// OGCG: %[[GET_RESULT:.*]] = invoke {{.*}} i32 @_ZN1S3getEv({{.*}} %[[TMP]]) +// OGCG-NEXT: to label %[[INVOKE_CONT:.*]] unwind label %[[LPAD:.*]] +// OGCG: [[INVOKE_CONT]]: +// OGCG: br label %[[MERGE:.*]] +// OGCG: [[FALSE_BR]]: +// OGCG: br label %[[MERGE]] +// OGCG: [[MERGE]]: +// OGCG: %[[COND:.*]] = phi i32 [ %[[GET_RESULT]], %[[INVOKE_CONT]] ], [ %{{.*}}, %[[FALSE_BR]] ] +// Normal cleanup. +// OGCG: br i1 %{{.*}}, label %[[CLEANUP_ACT:.*]], label %[[CLEANUP_DONE:.*]] +// OGCG: [[CLEANUP_ACT]]: +// OGCG: call void @_ZN1SD1Ev({{.*}} %[[TMP]]) +// OGCG: br label %[[CLEANUP_DONE]] +// OGCG: [[CLEANUP_DONE]]: +// OGCG: store i32 %[[COND]], ptr %{{.*}} +// EH cleanup: landing pad + same destructor check + resume. +// OGCG: [[LPAD]]: +// OGCG: landingpad { ptr, i32 } +// OGCG-NEXT: cleanup +// OGCG: br i1 %{{.*}}, label %[[EH_DTOR:.*]], label %[[EH_AFTER_DTOR:.*]] +// OGCG: [[EH_DTOR]]: +// OGCG: call void @_ZN1SD1Ev({{.*}} %[[TMP]]) +// OGCG: br label %[[EH_AFTER_DTOR]] +// OGCG: [[EH_AFTER_DTOR]]: +// OGCG: resume { ptr, i32 } + +void test_ternary_both_branches(bool c) { + int result = c ? 
A().get() : B().get(); +} +// CIR-LABEL: @_Z26test_ternary_both_branchesb +// CIR: %[[TMPA:.*]] = cir.alloca !rec_A, !cir.ptr<!rec_A>, ["ref.tmp0"] +// CIR: %[[ACTA:.*]] = cir.alloca !cir.bool, !cir.ptr<!cir.bool>, ["cleanup.cond"] +// CIR: %[[TMPB:.*]] = cir.alloca !rec_B, !cir.ptr<!rec_B>, ["ref.tmp1"] +// CIR: %[[ACTB:.*]] = cir.alloca !cir.bool, !cir.ptr<!cir.bool>, ["cleanup.cond"] +// CIR: cir.cleanup.scope { +// CIR: %[[COND:.*]] = cir.load {{.*}} : !cir.ptr<!cir.bool>, !cir.bool +// CIR: %[[FALSE_A:.*]] = cir.const #false +// CIR: cir.store %[[FALSE_A]], %[[ACTA]] : !cir.bool, !cir.ptr<!cir.bool> +// CIR: %[[FALSE_B:.*]] = cir.const #false +// CIR: cir.store %[[FALSE_B]], %[[ACTB]] : !cir.bool, !cir.ptr<!cir.bool> +// CIR: %{{.*}} = cir.ternary(%[[COND]], true { +// CIR: cir.call @_ZN1AC1Ev(%[[TMPA]]) +// CIR: %[[TRUE_A:.*]] = cir.const #true +// CIR: cir.store %[[TRUE_A]], %[[ACTA]] : !cir.bool, !cir.ptr<!cir.bool> +// CIR: %[[GET_A:.*]] = cir.call @_ZN1A3getEv(%[[TMPA]]) +// CIR: cir.yield %[[GET_A]] : !s32i +// CIR: }, false { +// CIR: cir.call @_ZN1BC1Ev(%[[TMPB]]) +// CIR: %[[TRUE_B:.*]] = cir.const #true +// CIR: cir.store %[[TRUE_B]], %[[ACTB]] : !cir.bool, !cir.ptr<!cir.bool> +// CIR: %[[GET_B:.*]] = cir.call @_ZN1B3getEv(%[[TMPB]]) +// CIR: cir.yield %[[GET_B]] : !s32i +// CIR: cir.yield +// CIR: } cleanup all { +// CIR: %[[FLAG_B:.*]] = cir.load {{.*}} %[[ACTB]] +// CIR: cir.if %[[FLAG_B]] { +// CIR: cir.call @_ZN1BD1Ev(%[[TMPB]]) +// CIR: } +// CIR: %[[FLAG_A:.*]] = cir.load {{.*}} %[[ACTA]] +// CIR: cir.if %[[FLAG_A]] { +// CIR: cir.call @_ZN1AD1Ev(%[[TMPA]]) +// CIR: } +// CIR: cir.yield +// CIR: } + +// LLVM-LABEL: define dso_local void @_Z26test_ternary_both_branchesb( +// LLVM-SAME: personality ptr @__gxx_personality_v0 +// LLVM: %[[TMPA:.*]] = alloca %struct.A +// LLVM: %[[ACTA:.*]] = alloca i8 +// LLVM: %[[TMPB:.*]] = alloca %struct.B +// LLVM: %[[ACTB:.*]] = alloca i8 +// LLVM: store i8 0, ptr %[[ACTA]] +// LLVM: store i8 0, ptr %[[ACTB]] 
+// LLVM: br i1 %{{.*}}, label %[[CONSTRUCT_A:.*]], label %[[CONSTRUCT_B:.*]] +// LLVM: [[CONSTRUCT_A]]: +// LLVM: invoke void @_ZN1AC1Ev({{.*}} %[[TMPA]]) +// LLVM-NEXT: to label %[[A_CTOR_CONT:.*]] unwind label %[[PAD:.*]] +// LLVM: [[A_CTOR_CONT]]: +// LLVM: store i8 1, ptr %[[ACTA]], align 1 +// LLVM: %[[CALLA:.*]] = invoke {{.*}} i32 @_ZN1A3getEv({{.*}} %[[TMPA]]) +// LLVM-NEXT: to label %[[A_GET_CONT:.*]] unwind label %[[LPAD:.*]] +// LLVM: [[CONSTRUCT_B]]: +// LLVM: invoke void @_ZN1BC1Ev({{.*}} %[[TMPB]]) +// LLVM-NEXT: to label %[[B_CTOR_CONT:.*]] unwind label %[[LPAD]] +// LLVM: [[B_CTOR_CONT]]: +// LLVM: store i8 1, ptr %[[ACTB]] +// LLVM: %[[CALLB:.*]] = invoke {{.*}} i32 @_ZN1B3getEv({{.*}} %[[TMPB]]) +// LLVM-NEXT: to label %{{.*}} unwind label %[[PAD]] +// Normal cleanup: check both active flags. +// LLVM: %{{.*}} = load i8, ptr %[[ACTB]] +// LLVM: br i1 %{{.*}}, label %[[DTOR_B:.*]], label %[[SKIP_DTOR_B:.*]] +// LLVM: [[DTOR_B]]: +// LLVM: call void @_ZN1BD1Ev({{.*}} %[[TMPB]]) +// LLVM: [[SKIP_DTOR_B]]: +// LLVM: %{{.*}} = load i8, ptr %[[ACTA]] +// LLVM: br i1 %{{.*}}, label %[[DTOR_A:.*]], label %[[SKIP_DTOR_A:.*]] +// LLVM: [[DTOR_A]]: +// LLVM: call void @_ZN1AD1Ev({{.*}} %[[TMPA]]) +// EH cleanup: single landingpad, then same active-flag checks for both. 
+// LLVM: [[LPAD]]: +// LLVM: landingpad { ptr, i32 } +// LLVM-NEXT: cleanup +// LLVM: %{{.*}} = load i8, ptr %[[ACTB]] +// LLVM: br i1 %{{.*}}, label %[[EH_DTOR_B:.*]], label %[[EH_SKIP_DTOR_B:.*]] +// LLVM: [[EH_DTOR_B]]: +// LLVM: call void @_ZN1BD1Ev({{.*}} %[[TMPB]]) +// LLVM: [[EH_SKIP_DTOR_B]]: +// LLVM: %{{.*}} = load i8, ptr %[[ACTA]] +// LLVM: br i1 %{{.*}}, label %[[EH_DTOR_A:.*]], label %[[EH_SKIP_DTOR_A:.*]] +// LLVM: [[EH_DTOR_A]]: +// LLVM: call void @_ZN1AD1Ev({{.*}} %[[TMPA]]) +// LLVM: [[EH_SKIP_DTOR_A]]: +// LLVM: resume { ptr, i32 } + +// OGCG-LABEL: define dso_local void @_Z26test_ternary_both_branchesb( +// OGCG-SAME: personality ptr @__gxx_personality_v0 +// OGCG: entry: +// OGCG: store i1 false, ptr %[[ACTA:.*]] +// OGCG: store i1 false, ptr %[[ACTB:.*]] +// OGCG: br i1 %{{.*}}, label %[[TRUE_BR:.*]], label %[[FALSE_BR:.*]] +// A constructor is call (no active cleanup yet); A get() is invoke. +// OGCG: [[TRUE_BR]]: +// OGCG: call void @_ZN1AC1Ev({{.*}} %[[TMPA:.*]]) +// OGCG: store i1 true, ptr %[[ACTA]] +// OGCG: %[[CALLA:.*]] = invoke {{.*}} i32 @_ZN1A3getEv({{.*}} %[[TMPA]]) +// OGCG-NEXT: to label %{{.*}} unwind label %[[LPAD1:.*]] +// B constructor is invoke (A cleanup is active); B get() invokes to a second pad. +// OGCG: [[FALSE_BR]]: +// OGCG: invoke void @_ZN1BC1Ev({{.*}} %[[TMPB:.*]]) +// OGCG-NEXT: to label %{{.*}} unwind label %[[LPAD1]] +// OGCG: store i1 true, ptr %[[ACTB]] +// OGCG: invoke {{.*}} i32 @_ZN1B3getEv({{.*}} %[[TMPB]]) +// OGCG-NEXT: to label %{{.*}} unwind label %[[LPAD2:.*]] +// Normal cleanup: B first, then A (reverse construction order). 
+// OGCG: [[MERGE:.*]]:
+// OGCG: br i1 %{{.*}}, label %[[DTOR_B:.*]], label %[[AFTER_DTOR_B:.*]]
+// OGCG: [[DTOR_B]]:
+// OGCG: call void @_ZN1BD1Ev({{.*}} %[[TMPB]])
+// OGCG: br label %[[AFTER_DTOR_B]]
+// OGCG: [[AFTER_DTOR_B]]:
+// OGCG: br i1 %{{.*}}, label %[[DTOR_A:.*]], label %[[AFTER_DTOR_A:.*]]
+// OGCG: [[DTOR_A]]:
+// OGCG: call void @_ZN1AD1Ev({{.*}} %[[TMPA]])
+// OGCG: br label %[[AFTER_DTOR_A]]
+// First landing pad: from A.get() or B ctor — only A cleanup needed.
+// OGCG: [[LPAD1]]:
+// OGCG: landingpad { ptr, i32 }
+// OGCG-NEXT: cleanup
+// OGCG: br label %[[EH_CLEANUP:.*]]
+// Second landing pad: from B.get() — B cleanup, then A cleanup.
+// OGCG: [[LPAD2]]:
+// OGCG: landingpad { ptr, i32 }
+// OGCG-NEXT: cleanup
+// OGCG: br i1 %{{.*}}, label %[[EH_DTOR_B:.*]], label %[[EH_AFTER_DTOR_B:.*]]
+// OGCG: [[EH_DTOR_B]]:
+// OGCG: call void @_ZN1BD1Ev({{.*}} %[[TMPB]])
+// OGCG: br label %[[EH_AFTER_DTOR_B]]
+// OGCG: [[EH_AFTER_DTOR_B]]:
+// OGCG: br label %[[EH_CLEANUP]]
+// Shared EH cleanup for A's destructor, then resume.
+// OGCG: [[EH_CLEANUP]]:
+// OGCG: br i1 %{{.*}}, label %[[EH_DTOR_A:.*]], label %[[EH_AFTER_DTOR_A:.*]]
+// OGCG: [[EH_DTOR_A]]:
+// OGCG: call void @_ZN1AD1Ev({{.*}} %[[TMPA]])
+// OGCG: br label %[[EH_AFTER_DTOR_A]]
+// OGCG: [[EH_AFTER_DTOR_A]]:
+// OGCG: resume { ptr, i32 }
+
+int test_return_ternary(bool c) {
+ return c ? A().get() : B().get();
+}
+// CIR-LABEL: @_Z19test_return_ternaryb
+// CIR: %[[TMPA:.*]] = cir.alloca !rec_A, !cir.ptr<!rec_A>, ["ref.tmp0"]
+// CIR: %[[ACTA:.*]] = cir.alloca !cir.bool, !cir.ptr<!cir.bool>, ["cleanup.cond"]
+// CIR: %[[TMPB:.*]] = cir.alloca !rec_B, !cir.ptr<!rec_B>, ["ref.tmp1"]
+// CIR: %[[ACTB:.*]] = cir.alloca !cir.bool, !cir.ptr<!cir.bool>, ["cleanup.cond"]
+// CIR: cir.scope {
+// CIR: cir.cleanup.scope {
+// CIR: %[[COND:.*]] = cir.load {{.*}} : !cir.ptr<!cir.bool>, !cir.bool
+// CIR: %[[FALSE_A:.*]] = cir.const #false
+// CIR: cir.store %[[FALSE_A]], %[[ACTA]] : !cir.bool, !cir.ptr<!cir.bool>
+// CIR: %[[FALSE_B:.*]] = cir.const #false
+// CIR: cir.store %[[FALSE_B]], %[[ACTB]] : !cir.bool, !cir.ptr<!cir.bool>
+// CIR: %{{.*}} = cir.ternary(%[[COND]], true {
+// CIR: cir.call @_ZN1AC1Ev(%[[TMPA]])
+// CIR: %[[TRUE_A:.*]] = cir.const #true
+// CIR: cir.store %[[TRUE_A]], %[[ACTA]] : !cir.bool, !cir.ptr<!cir.bool>
+// CIR: %[[GET_A:.*]] = cir.call @_ZN1A3getEv(%[[TMPA]])
+// CIR: cir.yield %[[GET_A]] : !s32i
+// CIR: }, false {
+// CIR: cir.call @_ZN1BC1Ev(%[[TMPB]])
+// CIR: %[[TRUE_B:.*]] = cir.const #true
+// CIR: cir.store %[[TRUE_B]], %[[ACTB]] : !cir.bool, !cir.ptr<!cir.bool>
+// CIR: %[[GET_B:.*]] = cir.call @_ZN1B3getEv(%[[TMPB]])
+// CIR: cir.yield %[[GET_B]] : !s32i
+// CIR: })
+// CIR: cir.store %{{.*}}, %{{.*}} : !s32i, !cir.ptr<!s32i>
+// CIR: cir.yield
+// CIR: } cleanup all {
+// CIR: %[[FLAG_B:.*]] = cir.load {{.*}} %[[ACTB]]
+// CIR: cir.if %[[FLAG_B]] {
+// CIR: cir.call @_ZN1BD1Ev(%[[TMPB]])
+// CIR: }
+// CIR: %[[FLAG_A:.*]] = cir.load {{.*}} %[[ACTA]]
+// CIR: cir.if %[[FLAG_A]] {
+// CIR: cir.call @_ZN1AD1Ev(%[[TMPA]])
+// CIR: }
+// CIR: cir.yield
+// CIR: }
+// CIR: }
+// CIR: %[[RET:.*]] = cir.load %{{.*}} : !cir.ptr<!s32i>, !s32i
+// CIR: cir.return %[[RET]] : !s32i
+
+// LLVM-LABEL: define dso_local noundef i32 @_Z19test_return_ternaryb(
+// LLVM-SAME: personality ptr @__gxx_personality_v0
+// LLVM: %[[RETVAL:.*]] = alloca i32
+// LLVM: %[[TMPA:.*]] = alloca %struct.A
+// LLVM: %[[ACTA:.*]] = alloca i8
+// LLVM: %[[TMPB:.*]] = alloca %struct.B
+// LLVM: %[[ACTB:.*]] = alloca i8
+// LLVM: store i8 0, ptr %[[ACTA]]
+// LLVM: store i8 0, ptr %[[ACTB]]
+// LLVM: br i1 %{{.*}}, label %[[CONSTRUCT_A:.*]], label %[[CONSTRUCT_B:.*]]
+// LLVM: [[CONSTRUCT_A]]:
+// LLVM: invoke void @_ZN1AC1Ev({{.*}} %[[TMPA]])
+// LLVM-NEXT: to label %[[A_CTOR_CONT:.*]] unwind label %[[LPAD:.*]]
+// LLVM: [[A_CTOR_CONT]]:
+// LLVM: store i8 1, ptr %[[ACTA]], align 1
+// LLVM: %[[CALLA:.*]] = invoke {{.*}} i32 @_ZN1A3getEv({{.*}} %[[TMPA]])
+// LLVM-NEXT: to label %{{.*}} unwind label %[[LPAD]]
+// LLVM: [[CONSTRUCT_B]]:
+// LLVM: invoke void @_ZN1BC1Ev({{.*}} %[[TMPB]])
+// LLVM-NEXT: to label %[[B_CTOR_CONT:.*]] unwind label %[[LPAD]]
+// LLVM: [[B_CTOR_CONT]]:
+// LLVM: store i8 1, ptr %[[ACTB]], align 1
+// LLVM: %[[CALLB:.*]] = invoke {{.*}} i32 @_ZN1B3getEv({{.*}} %[[TMPB]])
+// LLVM-NEXT: to label %{{.*}} unwind label %[[LPAD]]
+// LLVM: store i32 %{{.*}}, ptr %[[RETVAL]]
+// Normal cleanup: check both active flags.
+// LLVM: %{{.*}} = load i8, ptr %[[ACTB]]
+// LLVM: br i1 %{{.*}}, label %[[DTOR_B:.*]], label %[[SKIP_DTOR_B:.*]]
+// LLVM: [[DTOR_B]]:
+// LLVM: call void @_ZN1BD1Ev({{.*}} %[[TMPB]])
+// LLVM: [[SKIP_DTOR_B]]:
+// LLVM: %{{.*}} = load i8, ptr %[[ACTA]]
+// LLVM: br i1 %{{.*}}, label %[[DTOR_A:.*]], label %[[SKIP_DTOR_A:.*]]
+// LLVM: [[DTOR_A]]:
+// LLVM: call void @_ZN1AD1Ev({{.*}} %[[TMPA]])
+// EH cleanup: same active-flag checks, then resume.
+// LLVM: [[LPAD]]:
+// LLVM: landingpad { ptr, i32 }
+// LLVM-NEXT: cleanup
+// LLVM: %{{.*}} = load i8, ptr %[[ACTB]]
+// LLVM: br i1 %{{.*}}, label %[[EH_DTOR_B:.*]], label %[[EH_SKIP_DTOR_B:.*]]
+// LLVM: [[EH_DTOR_B]]:
+// LLVM: call void @_ZN1BD1Ev({{.*}} %[[TMPB]])
+// LLVM: [[EH_SKIP_DTOR_B]]:
+// LLVM: %{{.*}} = load i8, ptr %[[ACTA]]
+// LLVM: br i1 %{{.*}}, label %[[EH_DTOR_A:.*]], label %[[EH_SKIP_DTOR_A:.*]]
+// LLVM: [[EH_DTOR_A]]:
+// LLVM: call void @_ZN1AD1Ev({{.*}} %[[TMPA]])
+// LLVM: [[EH_SKIP_DTOR_A]]:
+// LLVM: resume { ptr, i32 }
+// LLVM: %[[RET:.*]] = load i32, ptr %[[RETVAL]]
+// LLVM: ret i32 %[[RET]]
+
+// OGCG-LABEL: define dso_local noundef i32 @_Z19test_return_ternaryb(
+// OGCG-SAME: personality ptr @__gxx_personality_v0
+// OGCG: entry:
+// OGCG: store i1 false, ptr %[[ACTA:.*]]
+// OGCG: store i1 false, ptr %[[ACTB:.*]]
+// OGCG: br i1 %{{.*}}, label %[[TRUE_BR:.*]], label %[[FALSE_BR:.*]]
+// OGCG: [[TRUE_BR]]:
+// OGCG: call void @_ZN1AC1Ev({{.*}} %[[TMPA:.*]])
+// OGCG: store i1 true, ptr %[[ACTA]]
+// OGCG: %[[CALLA:.*]] = invoke {{.*}} i32 @_ZN1A3getEv({{.*}} %[[TMPA]])
+// OGCG-NEXT: to label %{{.*}} unwind label %[[LPAD1:.*]]
+// OGCG: [[FALSE_BR]]:
+// OGCG: invoke void @_ZN1BC1Ev({{.*}} %[[TMPB:.*]])
+// OGCG-NEXT: to label %{{.*}} unwind label %[[LPAD1]]
+// OGCG: store i1 true, ptr %[[ACTB]]
+// OGCG: invoke {{.*}} i32 @_ZN1B3getEv({{.*}} %[[TMPB]])
+// OGCG-NEXT: to label %{{.*}} unwind label %[[LPAD2:.*]]
+// Normal cleanup: B first, then A.
+// OGCG: [[MERGE:.*]]:
+// OGCG: br i1 %{{.*}}, label %[[DTOR_B:.*]], label %[[AFTER_DTOR_B:.*]]
+// OGCG: [[DTOR_B]]:
+// OGCG: call void @_ZN1BD1Ev({{.*}} %[[TMPB]])
+// OGCG: br label %[[AFTER_DTOR_B]]
+// OGCG: [[AFTER_DTOR_B]]:
+// OGCG: br i1 %{{.*}}, label %[[DTOR_A:.*]], label %[[AFTER_DTOR_A:.*]]
+// OGCG: [[DTOR_A]]:
+// OGCG: call void @_ZN1AD1Ev({{.*}} %[[TMPA]])
+// OGCG: br label %[[AFTER_DTOR_A]]
+// OGCG: [[AFTER_DTOR_A]]:
+// OGCG: ret i32 %{{.*}}
+// First landing pad: from A.get() or B ctor.
+// OGCG: [[LPAD1]]:
+// OGCG: landingpad { ptr, i32 }
+// OGCG-NEXT: cleanup
+// OGCG: br label %[[EH_CLEANUP:.*]]
+// Second landing pad: from B.get() — B cleanup, then A cleanup.
+// OGCG: [[LPAD2]]:
+// OGCG: landingpad { ptr, i32 }
+// OGCG-NEXT: cleanup
+// OGCG: br i1 %{{.*}}, label %[[EH_DTOR_B:.*]], label %[[EH_AFTER_DTOR_B:.*]]
+// OGCG: [[EH_DTOR_B]]:
+// OGCG: call void @_ZN1BD1Ev({{.*}} %[[TMPB]])
+// OGCG: br label %[[EH_AFTER_DTOR_B]]
+// OGCG: [[EH_AFTER_DTOR_B]]:
+// OGCG: br label %[[EH_CLEANUP]]
+// OGCG: [[EH_CLEANUP]]:
+// OGCG: br i1 %{{.*}}, label %[[EH_DTOR_A:.*]], label %[[EH_AFTER_DTOR_A:.*]]
+// OGCG: [[EH_DTOR_A]]:
+// OGCG: call void @_ZN1AD1Ev({{.*}} %[[TMPA]])
+// OGCG: br label %[[EH_AFTER_DTOR_A]]
+// OGCG: [[EH_AFTER_DTOR_A]]:
+// OGCG: resume { ptr, i32 }
+
+// False positive: ExprWithCleanups wraps a ternary, but S() is constructed
+// outside the conditional so no cleanup is deferred. The cleanup.scope still
+// uses "cleanup all" for the unconditional destructor.
+int test_false_positive_conditional(bool c) {
+ return S().get() ? 1 : 2;
+}
+// CIR-LABEL: @_Z31test_false_positive_conditionalb
+// CIR-NOT: cir.alloca {{.*}} ["cleanup.cond"]
+// CIR: cir.scope {
+// CIR: %[[TMP:.*]] = cir.alloca !rec_S, !cir.ptr<!rec_S>, ["ref.tmp0"]
+// CIR: cir.call @_ZN1SC1Ev(%[[TMP]])
+// CIR: cir.cleanup.scope {
+// CIR: %[[VAL:.*]] = cir.call @_ZN1S3getEv(%[[TMP]])
+// CIR: %[[BOOL:.*]] = cir.cast int_to_bool %[[VAL]]
+// CIR: %[[ONE:.*]] = cir.const #cir.int<1> : !s32i
+// CIR: %[[TWO:.*]] = cir.const #cir.int<2> : !s32i
+// CIR: %[[SEL:.*]] = cir.select if %[[BOOL]] then %[[ONE]] else %[[TWO]]
+// CIR: cir.store %[[SEL]], %{{.*}} : !s32i, !cir.ptr<!s32i>
+// CIR: cir.yield
+// CIR: } cleanup all {
+// CIR: cir.call @_ZN1SD1Ev(%[[TMP]])
+// CIR: cir.yield
+// CIR: }
+// CIR: }
+
+// LLVM-LABEL: define dso_local noundef i32 @_Z31test_false_positive_conditionalb(
+// LLVM-SAME: personality ptr @__gxx_personality_v0
+// LLVM: %[[TMP:.*]] = alloca %struct.S
+// LLVM: %[[RETVAL:.*]] = alloca i32
+// The constructor is call — no active EH cleanup yet.
+// LLVM: call void @_ZN1SC1Ev({{.*}} %[[TMP]])
+// get() becomes invoke because the destructor cleanup is active.
+// LLVM: %[[VAL:.*]] = invoke {{.*}} i32 @_ZN1S3getEv({{.*}} %[[TMP]])
+// LLVM-NEXT: to label %[[GET_CONT:.*]] unwind label %[[LPAD:.*]]
+// LLVM: [[GET_CONT]]:
+// LLVM: %[[CMP:.*]] = icmp ne i32 %[[VAL]], 0
+// LLVM: %[[SEL:.*]] = select i1 %[[CMP]], i32 1, i32 2
+// LLVM: store i32 %[[SEL]], ptr %[[RETVAL]]
+// Normal path: unconditional destructor.
+// LLVM: call void @_ZN1SD1Ev({{.*}} %[[TMP]])
+// EH path: landingpad + unconditional destructor + resume.
+// LLVM: [[LPAD]]:
+// LLVM: landingpad { ptr, i32 }
+// LLVM-NEXT: cleanup
+// LLVM: call void @_ZN1SD1Ev({{.*}} %[[TMP]])
+// LLVM: resume { ptr, i32 }
+// LLVM: %[[RET:.*]] = load i32, ptr %[[RETVAL]]
+// LLVM: ret i32 %[[RET]]
+
+// OGCG-LABEL: define dso_local noundef i32 @_Z31test_false_positive_conditionalb(
+// OGCG-SAME: personality ptr @__gxx_personality_v0
+// OGCG: entry:
+// The constructor is call; get() is invoke.
+// OGCG: call void @_ZN1SC1Ev({{.*}} %[[TMP:.*]])
+// OGCG: %[[VAL:.*]] = invoke {{.*}} i32 @_ZN1S3getEv({{.*}} %[[TMP]])
+// OGCG-NEXT: to label %[[INVOKE_CONT:.*]] unwind label %[[LPAD:.*]]
+// OGCG: [[INVOKE_CONT]]:
+// OGCG: %[[CMP:.*]] = icmp ne i32 %[[VAL]], 0
+// OGCG: %[[SEL:.*]] = select i1 %[[CMP]], i32 1, i32 2
+// Normal path: unconditional destructor + return.
+// OGCG: call void @_ZN1SD1Ev({{.*}} %[[TMP]])
+// OGCG: ret i32 %[[SEL]]
+// EH path: unconditional destructor + resume.
+// OGCG: [[LPAD]]:
+// OGCG: landingpad { ptr, i32 }
+// OGCG-NEXT: cleanup
+// OGCG: call void @_ZN1SD1Ev({{.*}} %[[TMP]])
+// OGCG: resume { ptr, i32 }
diff --git a/clang/test/CIR/CodeGen/cleanup-conditional.cpp b/clang/test/CIR/CodeGen/cleanup-conditional.cpp
new file mode 100644
index 0000000000000..62fc887f3dbc4
--- /dev/null
+++ b/clang/test/CIR/CodeGen/cleanup-conditional.cpp
@@ -0,0 +1,419 @@
+// RUN: %clang_cc1 -triple x86_64-unknown-linux-gnu -fclangir -emit-cir %s -o %t.cir
+// RUN: FileCheck --input-file=%t.cir %s --check-prefix=CIR
+// RUN: %clang_cc1 -triple x86_64-unknown-linux-gnu -fclangir -emit-llvm %s -o %t-cir.ll
+// RUN: FileCheck --input-file=%t-cir.ll %s --check-prefix=LLVM
+// RUN: %clang_cc1 -triple x86_64-unknown-linux-gnu -emit-llvm %s -o %t.ll
+// RUN: FileCheck --input-file=%t.ll %s --check-prefix=OGCG
+
+struct S {
+ S();
+ ~S();
+ int get();
+};
+
+void test_ternary_temporary(bool c, int x) {
+ int result = c ? S().get() : x;
+}
+// CIR-LABEL: @_Z22test_ternary_temporarybi
+// CIR: %[[TMP:.*]] = cir.alloca !rec_S, !cir.ptr<!rec_S>, ["ref.tmp0"]
+// CIR: %[[ACTIVE:.*]] = cir.alloca !cir.bool, !cir.ptr<!cir.bool>, ["cleanup.cond"]
+// The cleanup scope wraps the full expression so cleanups run on all exits.
+// CIR: cir.cleanup.scope {
+// Load condition, then active flag false before the ternary (destructor guard).
+// CIR: %[[COND:.*]] = cir.load {{.*}} : !cir.ptr<!cir.bool>, !cir.bool
+// CIR: %[[FALSE:.*]] = cir.const #false
+// CIR: cir.store %[[FALSE]], %[[ACTIVE]] : !cir.bool, !cir.ptr<!cir.bool>
+// CIR: %{{.*}} = cir.ternary(%[[COND]], true {
+// True branch: mark active before calling get() so cleanup runs.
+// CIR: cir.call @_ZN1SC1Ev(%[[TMP]])
+// CIR: %[[SET_TRUE:.*]] = cir.const #true
+// CIR: cir.store %[[SET_TRUE]], %[[ACTIVE]] : !cir.bool, !cir.ptr<!cir.bool>
+// CIR: %[[GET_RESULT:.*]] = cir.call @_ZN1S3getEv(%[[TMP]])
+// CIR: cir.yield %[[GET_RESULT]] : !s32i
+// CIR: }, false {
+// CIR: cir.yield
+// CIR: cir.yield
+// CIR: } cleanup normal {
+// CIR: %[[IS_ACTIVE:.*]] = cir.load {{.*}} %[[ACTIVE]]
+// CIR: cir.if %[[IS_ACTIVE]] {
+// CIR: cir.call @_ZN1SD1Ev(%[[TMP]])
+// CIR: }
+// CIR: cir.yield
+// CIR: }
+
+// LLVM-LABEL: define dso_local void @_Z22test_ternary_temporarybi(
+// LLVM: %[[TMP:.*]] = alloca %struct.S
+// LLVM: %[[ACTIVE:.*]] = alloca i8
+// LLVM: %[[RESULT_TMP:.*]] = alloca i32
+// LLVM: br label %[[INIT:.*]]
+// LLVM: [[INIT]]:
+// LLVM: %[[COND_BYTE:.*]] = load i8, ptr %{{.*}}
+// LLVM: %[[COND_BOOL:.*]] = trunc i8 %[[COND_BYTE]] to i1
+// LLVM: store i8 0, ptr %[[ACTIVE]]
+// LLVM: br i1 %[[COND_BOOL]], label %[[TRUE_BR:.*]], label %[[FALSE_BR:.*]]
+// LLVM: [[TRUE_BR]]:
+// LLVM: call void @_ZN1SC1Ev(ptr {{.*}} %[[TMP]])
+// LLVM: store i8 1, ptr %[[ACTIVE]]
+// LLVM: %[[GET_RESULT:.*]] = call {{.*}} i32 @_ZN1S3getEv(ptr {{.*}} %[[TMP]])
+// LLVM: br label %[[MERGE:.*]]
+// LLVM: [[FALSE_BR]]:
+// LLVM: %[[XVAL:.*]] = load i32, ptr %{{.*}}
+// LLVM: br label %[[MERGE]]
+// LLVM: [[MERGE]]:
+// LLVM: %[[PHI:.*]] = phi i32 [ %[[XVAL]], %[[FALSE_BR]] ], [ %[[GET_RESULT]], %[[TRUE_BR]] ]
+// LLVM: br label %[[STORE:.*]]
+// LLVM: [[STORE]]:
+// LLVM: store i32 %[[PHI]], ptr %[[RESULT_TMP]]
+// LLVM: br label %[[CLEANUP:.*]]
+// LLVM: [[CLEANUP]]:
+// LLVM: %[[ACTIVE_BYTE:.*]] = load i8, ptr %[[ACTIVE]]
+// LLVM: %[[ACTIVE_BOOL:.*]] = trunc i8 %[[ACTIVE_BYTE]] to i1
+// LLVM: br i1 %[[ACTIVE_BOOL]], label %[[DTOR:.*]], label %[[SKIP_DTOR:.*]]
+// LLVM: [[DTOR]]:
+// LLVM: call void @_ZN1SD1Ev(ptr {{.*}} %[[TMP]])
+// LLVM: br label %[[SKIP_DTOR]]
+// LLVM: [[SKIP_DTOR]]:
+// LLVM: br label %[[EXIT:.*]]
+// LLVM: [[EXIT]]:
+// LLVM: %[[RESULT:.*]] = load i32, ptr %[[RESULT_TMP]]
+// LLVM: store i32 %[[RESULT]], ptr %{{.*}}
+
+// OGCG-LABEL: define dso_local void @_Z22test_ternary_temporarybi(
+// OGCG: entry:
+// OGCG: store i1 false, ptr %[[ACTIVE:.*]]
+// OGCG: br i1 %[[COND_BOOL:.*]], label %[[TRUE_BR:.*]], label %[[FALSE_BR:.*]]
+// OGCG: [[TRUE_BR]]:
+// OGCG: call void @_ZN1SC1Ev(ptr {{.*}} %[[TMP:.*]])
+// OGCG: store i1 true, ptr %[[ACTIVE]]
+// OGCG: %[[GET_RESULT:.*]] = call {{.*}} i32 @_ZN1S3getEv(ptr {{.*}} %[[TMP]])
+// OGCG: br label %[[MERGE:.*]]
+// OGCG: [[FALSE_BR]]:
+// OGCG: %[[XVAL:.*]] = load i32, ptr %{{.*}}
+// OGCG: br label %[[MERGE]]
+// OGCG: [[MERGE]]:
+// OGCG: %[[COND:.*]] = phi i32 [ %[[GET_RESULT]], %[[TRUE_BR]] ], [ %[[XVAL]], %[[FALSE_BR]] ]
+// OGCG: br i1 %[[NEED_DTOR:.*]], label %[[CLEANUP_ACT:.*]], label %[[CLEANUP_DONE:.*]]
+// OGCG: [[CLEANUP_ACT]]:
+// OGCG: call void @_ZN1SD1Ev(ptr {{.*}} %[[TMP]])
+// OGCG: br label %[[CLEANUP_DONE]]
+// OGCG: [[CLEANUP_DONE]]:
+// OGCG: store i32 %[[COND]], ptr %{{.*}}
+
+struct A {
+ A();
+ ~A();
+ int get();
+};
+
+struct B {
+ B();
+ ~B();
+ int get();
+};
+
+// Both branches of the ternary create different temporaries (A vs B).
+// Each gets its own active flag; both are checked in the cleanup region.
+void test_ternary_both_branches(bool c) {
+ int result = c ? A().get() : B().get();
+}
+// CIR-LABEL: @_Z26test_ternary_both_branchesb
+// CIR: %[[TMPA:.*]] = cir.alloca !rec_A, !cir.ptr<!rec_A>, ["ref.tmp0"]
+// CIR: %[[ACTA:.*]] = cir.alloca !cir.bool, !cir.ptr<!cir.bool>, ["cleanup.cond"]
+// CIR: %[[TMPB:.*]] = cir.alloca !rec_B, !cir.ptr<!rec_B>, ["ref.tmp1"]
+// CIR: %[[ACTB:.*]] = cir.alloca !cir.bool, !cir.ptr<!cir.bool>, ["cleanup.cond"]
+// CIR: cir.cleanup.scope {
+// Both active flags start false; each branch sets its own to true when it runs.
+// CIR: %[[COND:.*]] = cir.load {{.*}} : !cir.ptr<!cir.bool>, !cir.bool
+// CIR: %[[FALSE_A:.*]] = cir.const #false
+// CIR: cir.store %[[FALSE_A]], %[[ACTA]] : !cir.bool, !cir.ptr<!cir.bool>
+// CIR: %[[FALSE_B:.*]] = cir.const #false
+// CIR: cir.store %[[FALSE_B]], %[[ACTB]] : !cir.bool, !cir.ptr<!cir.bool>
+// CIR: %{{.*}} = cir.ternary(%[[COND]], true {
+// CIR: cir.call @_ZN1AC1Ev(%[[TMPA]])
+// CIR: %[[TRUE_A:.*]] = cir.const #true
+// CIR: cir.store %[[TRUE_A]], %[[ACTA]] : !cir.bool, !cir.ptr<!cir.bool>
+// CIR: %[[GET_A:.*]] = cir.call @_ZN1A3getEv(%[[TMPA]])
+// CIR: cir.yield %[[GET_A]] : !s32i
+// CIR: }, false {
+// CIR: cir.call @_ZN1BC1Ev(%[[TMPB]])
+// CIR: %[[TRUE_B:.*]] = cir.const #true
+// CIR: cir.store %[[TRUE_B]], %[[ACTB]] : !cir.bool, !cir.ptr<!cir.bool>
+// CIR: %[[GET_B:.*]] = cir.call @_ZN1B3getEv(%[[TMPB]])
+// CIR: cir.yield %[[GET_B]] : !s32i
+// CIR: cir.yield
+// CIR: } cleanup normal {
+// CIR: %[[FLAG_B:.*]] = cir.load {{.*}} %[[ACTB]]
+// CIR: cir.if %[[FLAG_B]] {
+// CIR: cir.call @_ZN1BD1Ev(%[[TMPB]])
+// CIR: }
+// CIR: %[[FLAG_A:.*]] = cir.load {{.*}} %[[ACTA]]
+// CIR: cir.if %[[FLAG_A]] {
+// CIR: cir.call @_ZN1AD1Ev(%[[TMPA]])
+// CIR: }
+// CIR: cir.yield
+// CIR: }
+
+// LLVM-LABEL: define dso_local void @_Z26test_ternary_both_branchesb(
+// LLVM: %{{.*}} = alloca i8
+// LLVM: %{{.*}} = alloca i32
+// LLVM: %[[TMPA:.*]] = alloca %struct.A
+// LLVM: %[[ACTA:.*]] = alloca i8
+// LLVM: %[[TMPB:.*]] = alloca %struct.B
+// LLVM: %[[ACTB:.*]] = alloca i8
+// LLVM: %[[RESULT_TMP:.*]] = alloca i32
+// LLVM: br label %[[INIT:.*]]
+// LLVM: [[INIT]]:
+// LLVM: %[[COND_BYTE:.*]] = load i8, ptr %{{.*}}
+// LLVM: %[[COND_BOOL:.*]] = trunc i8 %[[COND_BYTE]] to i1
+// LLVM: store i8 0, ptr %[[ACTA]]
+// LLVM: store i8 0, ptr %[[ACTB]]
+// LLVM: br i1 %[[COND_BOOL]], label %[[CONSTRUCT_A:.*]], label %[[CONSTRUCT_B:.*]]
+// LLVM: [[CONSTRUCT_A]]:
+// LLVM: call void @_ZN1AC1Ev({{.*}} %[[TMPA]])
+// LLVM: store i8 1, ptr %[[ACTA]]
+// LLVM: %[[CALLA:.*]] = call noundef i32 @_ZN1A3getEv({{.*}} %[[TMPA]])
+// LLVM: br label %[[MERGE:.*]]
+// LLVM: [[CONSTRUCT_B]]:
+// LLVM: call void @_ZN1BC1Ev({{.*}} %[[TMPB]])
+// LLVM: store i8 1, ptr %[[ACTB]]
+// LLVM: %[[CALLB:.*]] = call {{.*}} i32 @_ZN1B3getEv({{.*}} %[[TMPB]])
+// LLVM: br label %[[MERGE]]
+// LLVM: [[MERGE]]:
+// LLVM: %[[PHI:.*]] = phi i32 [ %[[CALLB]], %[[CONSTRUCT_B]] ], [ %[[CALLA]], %[[CONSTRUCT_A]] ]
+// LLVM: br label %[[STORE:.*]]
+// LLVM: [[STORE]]:
+// LLVM: store i32 %[[PHI]], ptr %[[RESULT_TMP]]
+// LLVM: br label %[[CLEANUP_B:.*]]
+// LLVM: [[CLEANUP_B]]:
+// LLVM: %[[ACTIVE_BYTE_B:.*]] = load i8, ptr %[[ACTB]]
+// LLVM: %[[ACTIVE_BOOL_B:.*]] = trunc i8 %[[ACTIVE_BYTE_B]] to i1
+// LLVM: br i1 %[[ACTIVE_BOOL_B]], label %[[DTOR_B:.*]], label %[[SKIP_DTOR_B:.*]]
+// LLVM: [[DTOR_B]]:
+// LLVM: call void @_ZN1BD1Ev({{.*}} %[[TMPB]])
+// LLVM: br label %[[SKIP_DTOR_B]]
+// LLVM: [[SKIP_DTOR_B]]:
+// LLVM: %[[ACTIVE_BYTE_A:.*]] = load i8, ptr %[[ACTA]]
+// LLVM: %[[ACTIVE_BOOL_A:.*]] = trunc i8 %[[ACTIVE_BYTE_A]] to i1
+// LLVM: br i1 %[[ACTIVE_BOOL_A]], label %[[DTOR_A:.*]], label %[[SKIP_DTOR_A:.*]]
+// LLVM: [[DTOR_A]]:
+// LLVM: call void @_ZN1AD1Ev({{.*}} %[[TMPA]])
+// LLVM: br label %[[SKIP_DTOR_A]]
+// LLVM: [[SKIP_DTOR_A]]:
+// LLVM: br label %{{.*}}
+
+// OGCG-LABEL: define dso_local void @_Z26test_ternary_both_branchesb(
+// OGCG: entry:
+// OGCG: store i1 false, ptr %[[ACTA:.*]]
+// OGCG: store i1 false, ptr %[[ACTB:.*]]
+// OGCG: br i1 %[[COND_BOOL:.*]], label %[[TRUE_BR:.*]], label %[[FALSE_BR:.*]]
+// OGCG: [[TRUE_BR]]:
+// OGCG: call void @_ZN1AC1Ev({{.*}} %[[TMPA:.*]])
+// OGCG: store i1 true, ptr %[[ACTA]]
+// OGCG: br label %[[MERGE:.*]]
+// OGCG: [[FALSE_BR]]:
+// OGCG: call void @_ZN1BC1Ev({{.*}} %[[TMPB:.*]])
+// OGCG: store i1 true, ptr %[[ACTB]]
+// OGCG: br label %[[MERGE]]
+// OGCG: [[MERGE]]:
+// OGCG: %[[COND:.*]] = phi i32 [ %{{.*}}, %[[TRUE_BR]] ], [ %{{.*}}, %[[FALSE_BR]] ]
+// OGCG: br i1 %[[ACTB:.*]], label %[[DTOR_B:.*]], label %[[AFTER_DTOR_B:.*]]
+// OGCG: [[DTOR_B]]:
+// OGCG: call void @_ZN1BD1Ev({{.*}} %[[TMPB]])
+// OGCG: br label %[[AFTER_DTOR_B]]
+// OGCG: [[AFTER_DTOR_B]]:
+// OGCG: br i1 %[[ACTA:.*]], label %[[DTOR_A:.*]], label %[[AFTER_DTOR_A:.*]]
+// OGCG: [[DTOR_A]]:
+// OGCG: call void @_ZN1AD1Ev({{.*}} %[[TMPA]])
+// OGCG: br label %[[AFTER_DTOR_A]]
+// OGCG: [[AFTER_DTOR_A]]:
+// OGCG: store i32 %[[COND]], ptr %{{.*}}
+
+// Return expression with ternary: emitReturnStmt strips ExprWithCleanups but
+// must still enter a full-expression cleanup scope for the conditional.
+int test_return_ternary(bool c) {
+ return c ? A().get() : B().get();
+}
+// CIR-LABEL: @_Z19test_return_ternaryb
+// CIR: %[[TMPA:.*]] = cir.alloca !rec_A, !cir.ptr<!rec_A>, ["ref.tmp0"]
+// CIR: %[[ACTA:.*]] = cir.alloca !cir.bool, !cir.ptr<!cir.bool>, ["cleanup.cond"]
+// CIR: %[[TMPB:.*]] = cir.alloca !rec_B, !cir.ptr<!rec_B>, ["ref.tmp1"]
+// CIR: %[[ACTB:.*]] = cir.alloca !cir.bool, !cir.ptr<!cir.bool>, ["cleanup.cond"]
+// CIR: cir.scope {
+// CIR: cir.cleanup.scope {
+// CIR: %[[COND:.*]] = cir.load {{.*}} : !cir.ptr<!cir.bool>, !cir.bool
+// CIR: %[[FALSE_A:.*]] = cir.const #false
+// CIR: cir.store %[[FALSE_A]], %[[ACTA]] : !cir.bool, !cir.ptr<!cir.bool>
+// CIR: %[[FALSE_B:.*]] = cir.const #false
+// CIR: cir.store %[[FALSE_B]], %[[ACTB]] : !cir.bool, !cir.ptr<!cir.bool>
+// CIR: %{{.*}} = cir.ternary(%[[COND]], true {
+// CIR: cir.call @_ZN1AC1Ev(%[[TMPA]])
+// CIR: %[[TRUE_A:.*]] = cir.const #true
+// CIR: cir.store %[[TRUE_A]], %[[ACTA]] : !cir.bool, !cir.ptr<!cir.bool>
+// CIR: %[[GET_A:.*]] = cir.call @_ZN1A3getEv(%[[TMPA]])
+// CIR: cir.yield %[[GET_A]] : !s32i
+// CIR: }, false {
+// CIR: cir.call @_ZN1BC1Ev(%[[TMPB]])
+// CIR: %[[TRUE_B:.*]] = cir.const #true
+// CIR: cir.store %[[TRUE_B]], %[[ACTB]] : !cir.bool, !cir.ptr<!cir.bool>
+// CIR: %[[GET_B:.*]] = cir.call @_ZN1B3getEv(%[[TMPB]])
+// CIR: cir.yield %[[GET_B]] : !s32i
+// CIR: })
+// The result is stored to __retval inside the cleanup scope body.
+// CIR: cir.store %{{.*}}, %{{.*}} : !s32i, !cir.ptr<!s32i>
+// CIR: cir.yield
+// CIR: } cleanup normal {
+// CIR: %[[FLAG_B:.*]] = cir.load {{.*}} %[[ACTB]]
+// CIR: cir.if %[[FLAG_B]] {
+// CIR: cir.call @_ZN1BD1Ev(%[[TMPB]])
+// CIR: }
+// CIR: %[[FLAG_A:.*]] = cir.load {{.*}} %[[ACTA]]
+// CIR: cir.if %[[FLAG_A]] {
+// CIR: cir.call @_ZN1AD1Ev(%[[TMPA]])
+// CIR: }
+// CIR: cir.yield
+// CIR: }
+// CIR: }
+// Value loaded from __retval after the scope and returned.
+// CIR: %[[RET:.*]] = cir.load %{{.*}} : !cir.ptr<!s32i>, !s32i
+// CIR: cir.return %[[RET]] : !s32i
+
+// LLVM-LABEL: define dso_local noundef i32 @_Z19test_return_ternaryb(
+// LLVM: %{{.*}} = alloca i8
+// LLVM: %[[RETVAL:.*]] = alloca i32
+// LLVM: %[[TMPA:.*]] = alloca %struct.A
+// LLVM: %[[ACTA:.*]] = alloca i8
+// LLVM: %[[TMPB:.*]] = alloca %struct.B
+// LLVM: %[[ACTB:.*]] = alloca i8
+// LLVM: br label %[[SCOPE:.*]]
+// LLVM: [[SCOPE]]:
+// LLVM: br label %[[INIT:.*]]
+// LLVM: [[INIT]]:
+// LLVM: %[[COND_BYTE:.*]] = load i8, ptr %{{.*}}
+// LLVM: %[[COND_BOOL:.*]] = trunc i8 %[[COND_BYTE]] to i1
+// LLVM: store i8 0, ptr %[[ACTA]]
+// LLVM: store i8 0, ptr %[[ACTB]]
+// LLVM: br i1 %[[COND_BOOL]], label %[[CONSTRUCT_A:.*]], label %[[CONSTRUCT_B:.*]]
+// LLVM: [[CONSTRUCT_A]]:
+// LLVM: call void @_ZN1AC1Ev({{.*}} %[[TMPA]])
+// LLVM: store i8 1, ptr %[[ACTA]]
+// LLVM: %[[CALLA:.*]] = call noundef i32 @_ZN1A3getEv({{.*}} %[[TMPA]])
+// LLVM: br label %[[MERGE:.*]]
+// LLVM: [[CONSTRUCT_B]]:
+// LLVM: call void @_ZN1BC1Ev({{.*}} %[[TMPB]])
+// LLVM: store i8 1, ptr %[[ACTB]]
+// LLVM: %[[CALLB:.*]] = call noundef i32 @_ZN1B3getEv({{.*}} %[[TMPB]])
+// LLVM: br label %[[MERGE]]
+// LLVM: [[MERGE]]:
+// LLVM: %[[PHI:.*]] = phi i32 [ %[[CALLB]], %[[CONSTRUCT_B]] ], [ %[[CALLA]], %[[CONSTRUCT_A]] ]
+// LLVM: br label %[[STORE_RET:.*]]
+// LLVM: [[STORE_RET]]:
+// LLVM: store i32 %[[PHI]], ptr %[[RETVAL]]
+// LLVM: br label %[[CLEANUP_B:.*]]
+// LLVM: [[CLEANUP_B]]:
+// LLVM: %[[ACTIVE_BYTE_B:.*]] = load i8, ptr %[[ACTB]]
+// LLVM: %[[ACTIVE_BOOL_B:.*]] = trunc i8 %[[ACTIVE_BYTE_B]] to i1
+// LLVM: br i1 %[[ACTIVE_BOOL_B]], label %[[DTOR_B:.*]], label %[[SKIP_DTOR_B:.*]]
+// LLVM: [[DTOR_B]]:
+// LLVM: call void @_ZN1BD1Ev({{.*}} %[[TMPB]])
+// LLVM: br label %[[SKIP_DTOR_B]]
+// LLVM: [[SKIP_DTOR_B]]:
+// LLVM: %[[ACTIVE_BYTE_A:.*]] = load i8, ptr %[[ACTA]]
+// LLVM: %[[ACTIVE_BOOL_A:.*]] = trunc i8 %[[ACTIVE_BYTE_A]] to i1
+// LLVM: br i1 %[[ACTIVE_BOOL_A]], label %[[DTOR_A:.*]], label %[[SKIP_DTOR_A:.*]]
+// LLVM: [[DTOR_A]]:
+// LLVM: call void @_ZN1AD1Ev({{.*}} %[[TMPA]])
+// LLVM: br label %[[SKIP_DTOR_A]]
+// LLVM: [[SKIP_DTOR_A]]:
+// LLVM: br label %[[EXIT:.*]]
+// LLVM: [[EXIT]]:
+// LLVM: %[[RET:.*]] = load i32, ptr %[[RETVAL]]
+// LLVM: ret i32 %[[RET]]
+
+// OGCG-LABEL: define dso_local noundef i32 @_Z19test_return_ternaryb(
+// OGCG: entry:
+// OGCG: store i1 false, ptr %[[ACTA:.*]]
+// OGCG: store i1 false, ptr %[[ACTB:.*]]
+// OGCG: br i1 %[[COND_BOOL:.*]], label %[[TRUE_BR:.*]], label %[[FALSE_BR:.*]]
+// OGCG: [[TRUE_BR]]:
+// OGCG: call void @_ZN1AC1Ev({{.*}} %[[TMPA:.*]])
+// OGCG: store i1 true, ptr %[[ACTA]]
+// OGCG: %[[CALLA:.*]] = call noundef i32 @_ZN1A3getEv({{.*}} %[[TMPA]])
+// OGCG: br label %[[MERGE:.*]]
+// OGCG: [[FALSE_BR]]:
+// OGCG: call void @_ZN1BC1Ev({{.*}} %[[TMPB:.*]])
+// OGCG: store i1 true, ptr %[[ACTB]]
+// OGCG: %[[CALLB:.*]] = call noundef i32 @_ZN1B3getEv({{.*}} %[[TMPB]])
+// OGCG: br label %[[MERGE]]
+// OGCG: [[MERGE]]:
+// OGCG: %[[COND:.*]] = phi i32 [ %[[CALLA]], %[[TRUE_BR]] ], [ %[[CALLB]], %[[FALSE_BR]] ]
+// OGCG: store i32 %[[COND]], ptr %{{.*}}
+// OGCG: br i1 %[[ACTB:.*]], label %[[DTOR_B:.*]], label %[[AFTER_DTOR_B:.*]]
+// OGCG: [[DTOR_B]]:
+// OGCG: call void @_ZN1BD1Ev({{.*}} %[[TMPB]])
+// OGCG: br label %[[AFTER_DTOR_B]]
+// OGCG: [[AFTER_DTOR_B]]:
+// OGCG: br i1 %[[ACTA:.*]], label %[[DTOR_A:.*]], label %[[AFTER_DTOR_A:.*]]
+// OGCG: [[DTOR_A]]:
+// OGCG: call void @_ZN1AD1Ev({{.*}} %[[TMPA]])
+// OGCG: br label %[[AFTER_DTOR_A]]
+// OGCG: [[AFTER_DTOR_A]]:
+// OGCG: %{{.*}} = load i32, ptr %{{.*}}
+// OGCG: ret i32 %{{.*}}
+
+// False positive: ExprWithCleanups wraps a ternary, but S() is constructed
+// outside the conditional so no cleanup is deferred. The eagerly-created
+// full-expression cir.cleanup.scope is inlined and erased, leaving only
+// the LexicalScope cleanup for S()'s destructor.
+// CIR-LABEL: @_Z31test_false_positive_conditionalb
+int test_false_positive_conditional(bool c) {
+ return S().get() ? 1 : 2;
+}
+// No cleanup.cond alloca — the destructor is unconditional.
+// CIR-NOT: cir.alloca {{.*}} ["cleanup.cond"]
+// CIR: cir.scope {
+// CIR: %[[TMP:.*]] = cir.alloca !rec_S, !cir.ptr<!rec_S>, ["ref.tmp0"]
+// CIR: cir.call @_ZN1SC1Ev(%[[TMP]])
+// The LexicalScope's cleanup scope wraps the get() + select + store.
+// CIR: cir.cleanup.scope {
+// CIR: %[[VAL:.*]] = cir.call @_ZN1S3getEv(%[[TMP]])
+// CIR: %[[BOOL:.*]] = cir.cast int_to_bool %[[VAL]]
+// No cir.ternary — both arms are constants, so this lowers to cir.select.
+// CIR: %[[ONE:.*]] = cir.const #cir.int<1> : !s32i
+// CIR: %[[TWO:.*]] = cir.const #cir.int<2> : !s32i
+// CIR: %[[SEL:.*]] = cir.select if %[[BOOL]] then %[[ONE]] else %[[TWO]]
+// CIR: cir.store %[[SEL]], %{{.*}} : !s32i, !cir.ptr<!s32i>
+// CIR: cir.yield
+// S destructor runs unconditionally — no active-flag guard.
+// CIR: } cleanup normal {
+// CIR: cir.call @_ZN1SD1Ev(%[[TMP]])
+// CIR: cir.yield
+// CIR: }
+// CIR: }
+
+// LLVM-LABEL: define dso_local noundef i32 @_Z31test_false_positive_conditionalb(
+// LLVM: %[[TMP:.*]] = alloca %struct.S
+// LLVM: %[[RETVAL:.*]] = alloca i32
+// LLVM: br label %[[SCOPE:.*]]
+// LLVM: [[SCOPE]]:
+// LLVM: call void @_ZN1SC1Ev({{.*}} %[[TMP]])
+// LLVM: br label %[[BODY:.*]]
+// LLVM: [[BODY]]:
+// LLVM: %[[VAL:.*]] = call {{.*}} i32 @_ZN1S3getEv({{.*}} %[[TMP]])
+// LLVM: %[[CMP:.*]] = icmp ne i32 %[[VAL]], 0
+// LLVM: %[[SEL:.*]] = select i1 %[[CMP]], i32 1, i32 2
+// LLVM: store i32 %[[SEL]], ptr %[[RETVAL]]
+// LLVM: br label %[[DTOR:.*]]
+// LLVM: [[DTOR]]:
+// LLVM: call void @_ZN1SD1Ev({{.*}} %[[TMP]])
+// LLVM: br label %[[EXIT:.*]]
+// LLVM: [[EXIT]]:
+// LLVM: %[[RET:.*]] = load i32, ptr %[[RETVAL]]
+// LLVM: ret i32 %[[RET]]
+
+// OGCG-LABEL: define dso_local noundef i32 @_Z31test_false_positive_conditionalb(
+// OGCG: call void @_ZN1SC1Ev({{.*}} %[[TMP:.*]])
+// OGCG: %[[VAL:.*]] = call {{.*}} i32 @_ZN1S3getEv({{.*}} %[[TMP]])
+// OGCG: %[[CMP:.*]] = icmp ne i32 %[[VAL]], 0
+// OGCG: %[[SEL:.*]] = select i1 %[[CMP]], i32 1, i32 2
+// OGCG: call void @_ZN1SD1Ev({{.*}} %[[TMP]])
+// OGCG: ret i32 %[[SEL]]

_______________________________________________
cfe-commits mailing list
[email protected]
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
