codeant-ai-for-open-source[bot] commented on code in PR #36529:
URL: https://github.com/apache/superset/pull/36529#discussion_r2615305556


##########
superset/sql/execution/executor.py:
##########
@@ -0,0 +1,999 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""
+SQL Executor implementation for Database.execute() and execute_async().
+
+This module provides the SQLExecutor class that implements the query execution
+methods defined in superset_core.api.models.Database.
+
+Implementation Features
+-----------------------
+
+Query Preparation (applies to both sync and async):
+- Jinja2 template rendering (via template_params in QueryOptions)
+- SQL mutation via SQL_QUERY_MUTATOR config hook
+- DML permission checking (requires database.allow_dml=True for DML)
+- Disallowed functions checking via DISALLOWED_SQL_FUNCTIONS config
+- Row-level security (RLS) via AST transformation (always applied)
+- Result limit application via SQL_MAX_ROW config
+- Catalog/schema resolution and validation
+
+Synchronous Execution (execute):
+- Multi-statement SQL parsing and execution
+- Progress tracking via Query model
+- Result caching via cache_manager.data_cache
+- Query logging via QUERY_LOGGER config hook
+- Timeout protection via SQLLAB_TIMEOUT config
+- Dry run mode (returns transformed SQL without execution)
+
+Asynchronous Execution (execute_async):
+- Celery task submission for background execution
+- Security validation before submission
+- Query model creation with PENDING status
+- Result caching check (returns cached if available)
+- Background execution with timeout via SQLLAB_ASYNC_TIME_LIMIT_SEC
+- Results stored in results backend for retrieval
+- Handle-based progress tracking and cancellation
+
+See Database.execute() and Database.execute_async() docstrings in
+superset_core.api.models for the public API contract.
+"""
+
+from __future__ import annotations
+
+import logging
+import time
+from datetime import datetime
+from typing import Any, TYPE_CHECKING
+
+from flask import current_app as app, g, has_app_context
+
+from superset import db
+from superset.errors import ErrorLevel, SupersetError, SupersetErrorType
+from superset.exceptions import (
+    SupersetSecurityException,
+    SupersetTimeoutException,
+)
+from superset.extensions import cache_manager
+from superset.sql.parse import SQLScript
+from superset.utils import core as utils
+
+if TYPE_CHECKING:
+    from superset_core.api.types import (
+        AsyncQueryHandle,
+        QueryOptions,
+        QueryResult,
+    )
+
+    from superset.models.core import Database
+    from superset.result_set import SupersetResultSet
+
+logger = logging.getLogger(__name__)
+
+
+def execute_sql_with_cursor(
+    database: Database,
+    cursor: Any,
+    statements: list[str],
+    query: Any,
+    log_query_fn: Any | None = None,
+    check_stopped_fn: Any | None = None,
+    execute_fn: Any | None = None,
+) -> SupersetResultSet | None:
+    """
+    Execute SQL statements with a cursor and return result set.
+
+    This is the shared execution logic used by both sync (SQLExecutor) and
+    async (celery_task) execution paths. It handles multi-statement execution
+    with progress tracking via the Query model.
+
+    :param database: Database model to execute against
+    :param cursor: Database cursor to use for execution
+    :param statements: List of SQL statements to execute
+    :param query: Query model for progress tracking
+    :param log_query_fn: Optional function to log queries, called as fn(sql, schema)
+    :param check_stopped_fn: Optional function to check if query was stopped.
+        Should return True if stopped. Used by async execution for cancellation.
+    :param execute_fn: Optional custom execute function. If not provided, uses
+        database.db_engine_spec.execute(cursor, sql, database). Custom function
+        should accept (cursor, sql) and handle execution.
+    :returns: SupersetResultSet from last statement, or None if stopped
+    """
+    from superset.result_set import SupersetResultSet
+
+    total = len(statements)
+    if total == 0:
+        return None
+
+    rows = None
+    description = None
+
+    for i, statement in enumerate(statements):
+        # Check if query was stopped (async cancellation)
+        if check_stopped_fn and check_stopped_fn():
+            return None
+
+        # Apply SQL mutation
+        stmt_sql = database.mutate_sql_based_on_config(
+            statement,
+            is_split=True,
+        )
+
+        # Log query
+        if log_query_fn:
+            log_query_fn(stmt_sql, query.schema)
+
+        # Execute - use custom function or default
+        if execute_fn:
+            execute_fn(cursor, stmt_sql)
+        else:
+            database.db_engine_spec.execute(cursor, stmt_sql, database)
+
+        # Fetch results from last statement only
+        if i == total - 1:
+            description = cursor.description
+            rows = database.db_engine_spec.fetch_data(cursor)
+        else:
+            cursor.fetchall()

Review Comment:
   **Suggestion:** Calling cursor.fetchall() for non-last statements can raise 
driver-specific exceptions when the statement does not produce a result set; 
swallow/guard the fetch to avoid raising on drivers that error when fetching 
from statements with no rows. [possible bug]
   
   **Severity Level:** Critical 🚨
   ```suggestion
               try:
                   cursor.fetchall()
               except Exception:
                   # Some DB drivers raise when fetching from a statement that
                   # doesn't produce a result set (e.g. DDL/DML). Ignore such
                    # errors for non-last statements since we only need to discard
                   # their results.
                   pass
   ```
   <details>
   <summary><b>Why it matters? ⭐ </b></summary>
   
   Some DB-API drivers raise when calling fetch* on statements that don't 
return a result set.
   Wrapping this discard in a try/except to ignore those driver-specific errors 
is pragmatic:
   execution errors would have surfaced at execute() time, so ignoring 
fetch-only errors for
   non-last statements avoids false positives. Ideally the catch would target 
the specific
   driver exception, but a broad catch here is acceptable to maintain 
robustness.
   </details>
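   For reference, a minimal sketch of the narrower variant mentioned above, pulled into a hypothetical helper and relying on the PEP 249 convention that `cursor.description` is None for statements without a result set:
   
   ```python
   from typing import Any
   
   
   def discard_intermediate_results(cursor: Any) -> None:
       """Drain rows from a non-last statement, tolerating row-less statements."""
       if cursor.description is None:
           # Per PEP 249, description is None when the statement produced no
           # result set (e.g. DDL/DML), so there is nothing to fetch.
           return
       try:
           cursor.fetchall()
       except Exception:
           # Drivers that deviate from PEP 249 may still raise; execution
           # errors would already have surfaced at execute() time.
           pass
   ```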
   <details>
   <summary><b>Prompt for AI Agent 🤖 </b></summary>
   
   ```mdx
   This is a comment left during a code review.
   
   **Path:** superset/sql/execution/executor.py
   **Line:** 152:152
   **Comment:**
        *Possible Bug: Calling cursor.fetchall() for non-last statements can 
raise driver-specific exceptions when the statement does not produce a result 
set; swallow/guard the fetch to avoid raising on drivers that error when 
fetching from statements with no rows.
   
   Validate the correctness of the flagged issue. If correct, How can I resolve 
this? If you propose a fix, implement it and please make it concise.
   ```
   </details>



##########
superset/sql/execution/executor.py:
##########
@@ -0,0 +1,999 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""
+SQL Executor implementation for Database.execute() and execute_async().
+
+This module provides the SQLExecutor class that implements the query execution
+methods defined in superset_core.api.models.Database.
+
+Implementation Features
+-----------------------
+
+Query Preparation (applies to both sync and async):
+- Jinja2 template rendering (via template_params in QueryOptions)
+- SQL mutation via SQL_QUERY_MUTATOR config hook
+- DML permission checking (requires database.allow_dml=True for DML)
+- Disallowed functions checking via DISALLOWED_SQL_FUNCTIONS config
+- Row-level security (RLS) via AST transformation (always applied)
+- Result limit application via SQL_MAX_ROW config
+- Catalog/schema resolution and validation
+
+Synchronous Execution (execute):
+- Multi-statement SQL parsing and execution
+- Progress tracking via Query model
+- Result caching via cache_manager.data_cache
+- Query logging via QUERY_LOGGER config hook
+- Timeout protection via SQLLAB_TIMEOUT config
+- Dry run mode (returns transformed SQL without execution)
+
+Asynchronous Execution (execute_async):
+- Celery task submission for background execution
+- Security validation before submission
+- Query model creation with PENDING status
+- Result caching check (returns cached if available)
+- Background execution with timeout via SQLLAB_ASYNC_TIME_LIMIT_SEC
+- Results stored in results backend for retrieval
+- Handle-based progress tracking and cancellation
+
+See Database.execute() and Database.execute_async() docstrings in
+superset_core.api.models for the public API contract.
+"""
+
+from __future__ import annotations
+
+import logging
+import time
+from datetime import datetime
+from typing import Any, TYPE_CHECKING
+
+from flask import current_app as app, g, has_app_context
+
+from superset import db
+from superset.errors import ErrorLevel, SupersetError, SupersetErrorType
+from superset.exceptions import (
+    SupersetSecurityException,
+    SupersetTimeoutException,
+)
+from superset.extensions import cache_manager
+from superset.sql.parse import SQLScript
+from superset.utils import core as utils
+
+if TYPE_CHECKING:
+    from superset_core.api.types import (
+        AsyncQueryHandle,
+        QueryOptions,
+        QueryResult,
+    )
+
+    from superset.models.core import Database
+    from superset.result_set import SupersetResultSet
+
+logger = logging.getLogger(__name__)
+
+
+def execute_sql_with_cursor(
+    database: Database,
+    cursor: Any,
+    statements: list[str],
+    query: Any,
+    log_query_fn: Any | None = None,
+    check_stopped_fn: Any | None = None,
+    execute_fn: Any | None = None,
+) -> SupersetResultSet | None:
+    """
+    Execute SQL statements with a cursor and return result set.
+
+    This is the shared execution logic used by both sync (SQLExecutor) and
+    async (celery_task) execution paths. It handles multi-statement execution
+    with progress tracking via the Query model.
+
+    :param database: Database model to execute against
+    :param cursor: Database cursor to use for execution
+    :param statements: List of SQL statements to execute
+    :param query: Query model for progress tracking
+    :param log_query_fn: Optional function to log queries, called as fn(sql, schema)
+    :param check_stopped_fn: Optional function to check if query was stopped.
+        Should return True if stopped. Used by async execution for cancellation.
+    :param execute_fn: Optional custom execute function. If not provided, uses
+        database.db_engine_spec.execute(cursor, sql, database). Custom function
+        should accept (cursor, sql) and handle execution.
+    :returns: SupersetResultSet from last statement, or None if stopped
+    """
+    from superset.result_set import SupersetResultSet
+
+    total = len(statements)
+    if total == 0:
+        return None
+
+    rows = None
+    description = None
+
+    for i, statement in enumerate(statements):
+        # Check if query was stopped (async cancellation)
+        if check_stopped_fn and check_stopped_fn():
+            return None
+
+        # Apply SQL mutation
+        stmt_sql = database.mutate_sql_based_on_config(
+            statement,
+            is_split=True,
+        )
+
+        # Log query
+        if log_query_fn:
+            log_query_fn(stmt_sql, query.schema)
+
+        # Execute - use custom function or default
+        if execute_fn:
+            execute_fn(cursor, stmt_sql)
+        else:
+            database.db_engine_spec.execute(cursor, stmt_sql, database)
+
+        # Fetch results from last statement only
+        if i == total - 1:
+            description = cursor.description
+            rows = database.db_engine_spec.fetch_data(cursor)
+        else:
+            cursor.fetchall()
+
+        # Update progress on Query model
+        progress_pct = int(((i + 1) / total) * 100)
+        query.progress = progress_pct
+        query.set_extra_json_key(
+            "progress",
+            f"Running statement {i + 1} of {total}",
+        )
+        db.session.commit()  # pylint: disable=consider-using-transaction
+
+    # Build result set
+    if rows is not None and description is not None:
+        return SupersetResultSet(
+            rows,
+            description,
+            database.db_engine_spec,
+        )
+
+    return None
+
+
+class SQLExecutor:
+    """
+    SQL query executor implementation.
+
+    Implements Database.execute() and execute_async() methods.
+    See superset_core.api.models.Database for the full public API documentation.
+    """
+
+    def __init__(self, database: Database) -> None:
+        """
+        Initialize the executor with a database.
+
+        :param database: Database model instance to execute queries against
+        """
+        self.database = database
+
+    def execute(
+        self,
+        sql: str,
+        options: QueryOptions | None = None,
+    ) -> QueryResult:
+        """
+        Execute SQL synchronously.
+
+        If options.dry_run=True, returns the transformed SQL without execution.
+        All transformations (RLS, templates, limits) are still applied.
+
+        See superset_core.api.models.Database.execute() for full documentation.
+        """
+        from superset_core.api.types import (
+            QueryOptions as QueryOptionsType,
+            QueryResult as QueryResultType,
+            QueryStatus,
+        )
+
+        opts: QueryOptionsType = options or QueryOptionsType()
+        start_time = time.time()
+
+        try:
+            # 1. Prepare SQL (assembly only, no security checks)
+            script, catalog, schema = self._prepare_sql(sql, opts)
+
+            # 2. Security checks
+            self._check_security(script)
+
+            # 3. Get mutation status and format SQL
+            has_mutation = script.has_mutation()
+            final_sql = script.format()
+
+            # DRY RUN: Return transformed SQL without execution
+            if opts.dry_run:
+                execution_time_ms = (time.time() - start_time) * 1000
+                return QueryResultType(
+                    status=QueryStatus.SUCCESS,
+                    data=None,
+                    row_count=0,
+                    query=final_sql,  # Transformed SQL (after RLS, templates, limits)
+                    query_id=None,  # No Query model created
+                    execution_time_ms=execution_time_ms,
+                    is_cached=False,
+                )
+
+            # 4. Check cache
+            cached_result = self._try_get_cached_result(has_mutation, final_sql, opts)
+            if cached_result:
+                return cached_result
+
+            # 5. Create Query model for audit
+            query = self._create_query_record(
+                final_sql, opts, catalog, schema, status="running"
+            )
+
+            # 6. Execute with timeout
+            timeout = opts.timeout_seconds or app.config.get("SQLLAB_TIMEOUT", 30)
+            timeout_msg = f"Query exceeded the {timeout} seconds timeout."
+
+            with utils.timeout(seconds=timeout, error_message=timeout_msg):
+                df = self._execute_statements(
+                    final_sql,
+                    catalog,
+                    schema,
+                    query,
+                )
+
+            execution_time_ms = (time.time() - start_time) * 1000
+
+            # 7. Update query record
+            query.status = "success"
+            query.rows = len(df)
+            query.progress = 100
+            db.session.commit()  # pylint: disable=consider-using-transaction
+
+            result = QueryResultType(
+                status=QueryStatus.SUCCESS,
+                data=df,
+                row_count=len(df),
+                query=final_sql,  # Transformed SQL (after RLS, templates, limits)
+                query_id=query.id,
+                execution_time_ms=execution_time_ms,
+            )
+
+            # 8. Store in cache (if SELECT and caching enabled)
+            if not has_mutation:
+                self._store_in_cache(result, final_sql, opts)
+
+            return result
+
+        except SupersetTimeoutException:
+            return self._create_error_result(
+                QueryStatus.TIMED_OUT,
+                "Query exceeded the timeout limit",
+                sql,
+                start_time,
+            )
+        except SupersetSecurityException as ex:
+            return self._create_error_result(
+                QueryStatus.FAILED, str(ex), sql, start_time
+            )
+        except Exception as ex:
+            error_msg = self.database.db_engine_spec.extract_error_message(ex)
+            return self._create_error_result(
+                QueryStatus.FAILED, error_msg, sql, start_time
+            )
+
+    def execute_async(
+        self,
+        sql: str,
+        options: QueryOptions | None = None,
+    ) -> AsyncQueryHandle:
+        """
+        Execute SQL asynchronously via Celery.
+
+        If options.dry_run=True, returns the transformed SQL as a completed
+        AsyncQueryHandle without submitting to Celery.
+
+        See superset_core.api.models.Database.execute_async() for full documentation.
+        """
+        from superset_core.api.types import (
+            QueryOptions as QueryOptionsType,
+            QueryResult as QueryResultType,
+            QueryStatus,
+        )
+
+        opts: QueryOptionsType = options or QueryOptionsType()
+
+        # 1. Prepare SQL (assembly only, no security checks)
+        script, catalog, schema = self._prepare_sql(sql, opts)
+
+        # 2. Security checks
+        self._check_security(script)
+
+        # 3. Get mutation status and format SQL
+        has_mutation = script.has_mutation()
+        final_sql = script.format()
+
+        # DRY RUN: Return transformed SQL as completed async handle
+        if opts.dry_run:
+            dry_run_result = QueryResultType(
+                status=QueryStatus.SUCCESS,
+                data=None,
+                row_count=0,
+                query=final_sql,  # Transformed SQL (after RLS, templates, limits)
+                query_id=None,
+                execution_time_ms=0,
+                is_cached=False,
+            )
+            return self._create_cached_handle(dry_run_result)
+
+        # 4. Check cache
+        if cached_result := self._try_get_cached_result(has_mutation, final_sql, opts):
+            return self._create_cached_handle(cached_result)
+
+        # 5. Create Query model for audit
+        query = self._create_query_record(
+            final_sql, opts, catalog, schema, status="pending"
+        )
+
+        # 6. Submit to Celery
+        self._submit_query_to_celery(query, final_sql, opts)
+
+        # 7. Create and return handle with bound methods
+        return self._create_async_handle(query.id)
+
+    def _prepare_sql(
+        self,
+        sql: str,
+        opts: QueryOptions,
+    ) -> tuple[SQLScript, str | None, str | None]:
+        """
+        Prepare SQL for execution (no side effects, no security checks).
+
+        This method performs SQL preprocessing:
+        1. Template rendering
+        2. SQL parsing
+        3. Catalog/schema resolution
+        4. RLS application
+        5. Limit application (if not mutation)
+
+        Security checks (disallowed functions, DML permission) are performed
+        by the caller after receiving the prepared script.
+
+        :param sql: Original SQL query
+        :param opts: Query options
+        :returns: Tuple of (prepared SQLScript, catalog, schema)
+        """
+        # 1. Render Jinja2 templates
+        rendered_sql = self._render_sql_template(sql, opts.template_params)
+
+        # 2. Parse SQL with SQLScript
+        script = SQLScript(rendered_sql, self.database.db_engine_spec.engine)
+
+        # 3. Get catalog and schema
+        catalog = opts.catalog or self.database.get_default_catalog()
+        schema = opts.schema or self.database.get_default_schema(catalog)
+
+        # 4. Apply RLS directly to script statements
+        self._apply_rls_to_script(script, catalog, schema)
+
+        # 5. Apply limit only if not a mutation
+        if not script.has_mutation():
+            self._apply_limit_to_script(script, opts)
+
+        return script, catalog, schema
+
+    def _check_security(self, script: SQLScript) -> None:
+        """
+        Perform security checks on prepared SQL script.
+
+        :param script: Prepared SQLScript
+        :raises SupersetSecurityException: If security checks fail
+        """
+        # Check disallowed functions
+        if disallowed := self._check_disallowed_functions(script):
+            raise SupersetSecurityException(
+                SupersetError(
+                    message=f"Disallowed SQL functions: {', 
'.join(disallowed)}",
+                    error_type=SupersetErrorType.INVALID_SQL_ERROR,
+                    level=ErrorLevel.ERROR,
+                )
+            )
+
+        # Check DML permission
+        if script.has_mutation() and not self.database.allow_dml:
+            raise SupersetSecurityException(
+                SupersetError(
+                    message="DML queries are not allowed on this database",
+                    error_type=SupersetErrorType.DML_NOT_ALLOWED_ERROR,
+                    level=ErrorLevel.ERROR,
+                )
+            )
+
+    def _execute_statements(
+        self,
+        sql: str,
+        catalog: str | None,
+        schema: str | None,
+        query: Any,
+    ) -> Any:
+        """
+        Execute SQL statements with progress tracking.
+
+        Progress is tracked via Query.progress field, matching SQL Lab behavior.
+        Uses the same execution path for both single and multi-statement queries.
+
+        :param sql: Final SQL to execute (with RLS and all transformations applied)
+        :param catalog: Catalog name
+        :param schema: Schema name
+        :param query: Query model for progress tracking
+        :returns: DataFrame with results from last statement
+        """
+        import pandas as pd
+
+        # Parse the final SQL (with RLS applied) to get statements
+        script = SQLScript(sql, self.database.db_engine_spec.engine)
+        statements = script.statements
+
+        # Handle empty script
+        if not statements:
+            return pd.DataFrame()
+
+        # Use consistent execution path for all queries
+        with self.database.get_raw_connection(catalog=catalog, schema=schema) as conn:
+            cursor = conn.cursor()
+            result_set = execute_sql_with_cursor(
+                database=self.database,
+                cursor=cursor,
+                statements=[stmt.format() for stmt in statements],
+                query=query,
+                log_query_fn=self._log_query,
+            )
+
+            if result_set is not None:
+                return result_set.to_pandas_df()
+
+        return pd.DataFrame()
+
+    def _log_query(
+        self,
+        sql: str,
+        schema: str | None,
+    ) -> None:
+        """
+        Log query using QUERY_LOGGER config.
+
+        :param sql: SQL to log
+        :param schema: Schema name
+        """
+        from superset import security_manager
+
+        if log_query := app.config.get("QUERY_LOGGER"):
+            log_query(
+                self.database,
+                sql,
+                schema,
+                __name__,
+                security_manager,
+                {},
+            )
+
+    def _create_error_result(
+        self,
+        status: Any,
+        error_message: str,
+        sql: str,
+        start_time: float,
+    ) -> QueryResult:
+        """
+        Create a QueryResult for error cases.
+
+        :param status: QueryStatus enum value
+        :param error_message: Error message to include
+        :param sql: SQL query (original if error occurred before transformation)
+        :param start_time: Start time for calculating execution duration
+        :returns: QueryResult with error status
+        """
+        from superset_core.api.types import QueryResult as QueryResultType
+
+        return QueryResultType(
+            status=status,
+            error_message=error_message,
+            query=sql,
+            execution_time_ms=(time.time() - start_time) * 1000,
+        )
+
+    def _render_sql_template(
+        self, sql: str, template_params: dict[str, Any] | None
+    ) -> str:
+        """
+        Render Jinja2 template with params.
+
+        :param sql: SQL string potentially containing Jinja2 templates
+        :param template_params: Parameters to pass to the template
+        :returns: Rendered SQL string
+        """
+        if not template_params:
+            return sql
+
+        from superset.jinja_context import get_template_processor
+
+        tp = get_template_processor(database=self.database)
+        return tp.process_template(sql, **template_params)
+
+    def _apply_limit_to_script(self, script: SQLScript, opts: QueryOptions) -> None:
+        """
+        Apply limit to the last statement in the script in place.
+
+        :param script: SQLScript object to modify
+        :param opts: Query options
+        """
+        # Skip if no limit requested
+        if not opts.limit:

Review Comment:
   **Suggestion:** The limit application treats falsy numeric limits (for 
example 0) as "no limit" because it checks `if not opts.limit:`; this makes a 
requested limit of 0 be ignored. Use an explicit None check so numeric 0 is 
honored. [logic error]
   
   **Severity Level:** Minor ⚠️
   ```suggestion
           # Skip if no limit requested (None indicates no limit; 0 is a valid numeric limit)
           if opts.limit is None:
   ```
   <details>
   <summary><b>Why it matters? ⭐ </b></summary>
   
   Treating falsy values as "no limit" wrongly ignores an explicit numeric 0 
limit. Using
   an explicit None check preserves semantics for 'no limit' while allowing 0 
as a valid
   (albeit unusual) limit value. This fixes a real logic gap.
   </details>
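   A standalone illustration of why the two checks differ for an explicit limit of 0:
   
   ```python
   # Truthiness gap demonstrated with a bare int; no Superset imports needed.
   limit = 0                # caller explicitly requested zero rows
   print(not limit)         # True  -> `if not opts.limit:` skips limiting
   print(limit is None)     # False -> `if opts.limit is None:` applies it
   ```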
   <details>
   <summary><b>Prompt for AI Agent 🤖 </b></summary>
   
   ```mdx
   This is a comment left during a code review.
   
   **Path:** superset/sql/execution/executor.py
   **Line:** 543:544
   **Comment:**
        *Logic Error: The limit application treats falsy numeric limits (for 
example 0) as "no limit" because it checks `if not opts.limit:`; this makes a 
requested limit of 0 be ignored. Use an explicit None check so numeric 0 is 
honored.
   
   Validate the correctness of the flagged issue. If correct, How can I resolve 
this? If you propose a fix, implement it and please make it concise.
   ```
   </details>



##########
superset/sql/execution/celery_task.py:
##########
@@ -0,0 +1,434 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+Celery task for async SQL execution.
+
+This module provides the Celery task for executing SQL queries asynchronously.
+It is used by SQLExecutor.execute_async() to run queries in the background.
+"""
+
+from __future__ import annotations
+
+import dataclasses
+import logging
+import sys
+import uuid
+from sys import getsizeof
+from typing import Any, TYPE_CHECKING
+
+import msgpack
+from celery.exceptions import SoftTimeLimitExceeded
+from flask import current_app as app, has_app_context
+from flask_babel import gettext as __
+
+from superset import (
+    db,
+    results_backend,
+    security_manager,
+)
+from superset.common.db_query_status import QueryStatus
+from superset.constants import QUERY_CANCEL_KEY
+from superset.errors import ErrorLevel, SupersetError, SupersetErrorType
+from superset.exceptions import (
+    SupersetErrorException,
+    SupersetErrorsException,
+)
+from superset.extensions import celery_app
+from superset.models.sql_lab import Query
+from superset.result_set import SupersetResultSet
+from superset.sql.execution.executor import execute_sql_with_cursor
+from superset.sql.parse import SQLScript
+from superset.sqllab.utils import write_ipc_buffer
+from superset.utils import json
+from superset.utils.core import override_user, zlib_compress
+from superset.utils.dates import now_as_float
+from superset.utils.decorators import stats_timing
+
+if TYPE_CHECKING:
+    pass
+
+logger = logging.getLogger(__name__)
+
+BYTES_IN_MB = 1024 * 1024
+
+
+def _get_query(query_id: int) -> Query:
+    """Get the query by ID."""
+    return db.session.query(Query).filter_by(id=query_id).one()
+
+
+def _handle_query_error(
+    ex: Exception,
+    query: Query,
+    payload: dict[str, Any] | None = None,
+    prefix_message: str = "",
+) -> dict[str, Any]:
+    """Handle error while processing the SQL query."""
+    payload = payload or {}
+    msg = f"{prefix_message} {str(ex)}".strip()
+    query.error_message = msg
+    query.tmp_table_name = None
+    query.status = QueryStatus.FAILED
+
+    if not query.end_time:
+        query.end_time = now_as_float()
+
+    # Extract DB-specific errors
+    if isinstance(ex, SupersetErrorException):
+        errors = [ex.error]
+    elif isinstance(ex, SupersetErrorsException):
+        errors = ex.errors
+    else:
+        errors = query.database.db_engine_spec.extract_errors(
+            str(ex), database_name=query.database.unique_name
+        )
+
+    errors_payload = [dataclasses.asdict(error) for error in errors]
+    if errors:
+        query.set_extra_json_key("errors", errors_payload)
+
+    db.session.commit()  # pylint: disable=consider-using-transaction
+    payload.update({"status": query.status, "error": msg, "errors": 
errors_payload})
+    if troubleshooting_link := app.config["TROUBLESHOOTING_LINK"]:

Review Comment:
   **Suggestion:** Runtime error: accessing app.config with bracket indexing 
inside a walrus expression (`app.config["TROUBLESHOOTING_LINK"]`) will raise 
KeyError if the key is not present; use .get(...) to safely fetch optional 
config values. [possible bug]
   
   **Severity Level:** Critical 🚨
   ```suggestion
       if troubleshooting_link := app.config.get("TROUBLESHOOTING_LINK"):
   ```
   <details>
   <summary><b>Prompt for AI Agent 🤖 </b></summary>
   
   ```mdx
   This is a comment left during a code review.
   
   **Path:** superset/sql/execution/celery_task.py
   **Line:** 106:106
   **Comment:**
        *Possible Bug: Runtime error: accessing app.config with bracket 
indexing inside a walrus expression (`app.config["TROUBLESHOOTING_LINK"]`) will 
raise KeyError if the key is not present; use .get(...) to safely fetch 
optional config values.
   
   Validate the correctness of the flagged issue. If correct, How can I resolve 
this? If you propose a fix, implement it and please make it concise.
   ```
   </details>
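   For context, a plain-dict sketch of the failure mode; `app.config` is dict-like, so the same semantics apply:
   
   ```python
   # Bracket access raises KeyError for a missing key; .get() returns None.
   config: dict[str, str] = {}                # TROUBLESHOOTING_LINK not set
   print(config.get("TROUBLESHOOTING_LINK"))  # None -> walrus branch skipped
   try:
       config["TROUBLESHOOTING_LINK"]
   except KeyError:
       print("bracket access raised KeyError")
   ```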



##########
superset-core/src/superset_core/api/types.py:
##########
@@ -0,0 +1,162 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""
+Query execution types for superset-core.
+
+Provides type definitions for query execution that are partially aligned
+with frontend types in superset-ui-core/src/query/types/.
+"""
+
+from __future__ import annotations
+
+from dataclasses import dataclass, field
+from datetime import datetime
+from enum import Enum
+from typing import Any, TYPE_CHECKING
+
+if TYPE_CHECKING:
+    import pandas as pd
+
+
+class QueryStatus(Enum):
+    """
+    Status of query execution.
+    """
+
+    PENDING = "pending"
+    RUNNING = "running"
+    SUCCESS = "success"
+    FAILED = "failed"
+    TIMED_OUT = "timed_out"
+    STOPPED = "stopped"
+
+
+@dataclass
+class CacheOptions:
+    """
+    Options for query result caching.
+    """
+
+    timeout: int | None = None  # Override default cache timeout (seconds)
+    force_refresh: bool = False  # Bypass cache and re-execute query
+
+
+@dataclass
+class QueryOptions:
+    """
+    Options for query execution via Database.execute() and execute_async().
+
+    Supports customization of:
+    - Basic: catalog, schema, limit, timeout
+    - Templates: Jinja2 template parameters
+    - Caching: Cache timeout and refresh control
+    - Dry run: Return transformed SQL without execution
+    """
+
+    # Basic options
+    catalog: str | None = None
+    schema: str | None = None
+    limit: int | None = None
+    timeout_seconds: int | None = None
+
+    # Template options
+    template_params: dict[str, Any] | None = None  # For Jinja2 rendering
+
+    # Caching options
+    cache: CacheOptions | None = None
+
+    # Dry run option
+    dry_run: bool = False  # Return transformed SQL without executing
+
+
+@dataclass
+class QueryResult:
+    """
+    Result of a query execution.
+
+    Aligned with frontend ChartDataResponseResult structure.
+    For column information, use df.columns and df.dtypes on the data DataFrame.

Review Comment:
   **Suggestion:** Documentation mismatch: the QueryResult docstring references 
`df.columns` and `df.dtypes`, but the field is named `data`; this will confuse 
readers and maintainers—update the docstring to reference `data.columns` and 
`data.dtypes`. [logic error]
   
   **Severity Level:** Minor ⚠️
   ```suggestion
       For column information, use data.columns and data.dtypes on the data DataFrame.
   ```
   <details>
   <summary><b>Why it matters? ⭐ </b></summary>
   
   Correcting the docstring to reference the actual field name (`data.columns` 
/ `data.dtypes`) removes confusion and matches the code. This is a clear, 
accurate, and low-risk documentation fix.
   </details>
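   A hypothetical usage sketch (field defaults assumed from the constructor calls elsewhere in this PR) showing where the column metadata actually lives:
   
   ```python
   import pandas as pd
   
   from superset_core.api.types import QueryResult, QueryStatus
   
   result = QueryResult(
       status=QueryStatus.SUCCESS,
       data=pd.DataFrame({"id": [1], "name": ["Alice"]}),
   )
   if result.data is not None:
       print(result.data.columns)  # read from the `data` field, not a local `df`
       print(result.data.dtypes)
   ```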
   <details>
   <summary><b>Prompt for AI Agent 🤖 </b></summary>
   
   ```mdx
   This is a comment left during a code review.
   
   **Path:** superset-core/src/superset_core/api/types.py
   **Line:** 93:93
   **Comment:**
        *Logic Error: Documentation mismatch: the QueryResult docstring 
references `df.columns` and `df.dtypes`, but the field is named `data`; this 
will confuse readers and maintainers—update the docstring to reference 
`data.columns` and `data.dtypes`.
   
   Validate the correctness of the flagged issue. If correct, How can I resolve 
this? If you propose a fix, implement it and please make it concise.
   ```
   </details>



##########
tests/unit_tests/sql/execution/__init__.py:
##########
@@ -0,0 +1,16 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.

Review Comment:
   **Suggestion:** Turning a test directory into a regular package by adding an 
__init__.py can change pytest collection and import semantics (breaking tests 
that rely on namespace-package behavior); preserve namespace semantics by 
making this __init__ a pkgutil-style namespace package. [possible bug]
   
   **Severity Level:** Critical 🚨
   ```suggestion
   import pkgutil
   __path__ = pkgutil.extend_path(__path__, __name__)
   ```
   <details>
   <summary><b>Why it matters? ⭐ </b></summary>
   
   The suggestion is valid: a bare __init__.py turns the directory into a 
regular package and can change import and pytest discovery behavior.
   Converting this file to a pkgutil-style namespace package (import pkgutil; 
__path__ = pkgutil.extend_path(...)) preserves namespace semantics while 
keeping a file for the license header.
   The proposed improved code is syntactically correct and directly addresses 
the potential test-discovery/backwards-compatibility issue.
   </details>
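   The same idiom annotated, as a sketch; it is only meaningful inside a package's `__init__.py`, where `__path__` is defined:
   
   ```python
   import pkgutil
   
   # extend_path scans sys.path for other directories that provide this same
   # package and merges them into __path__, so a regular package kept only to
   # carry the license header still behaves like a namespace package.
   __path__ = pkgutil.extend_path(__path__, __name__)  # noqa: F821
   ```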
   <details>
   <summary><b>Prompt for AI Agent 🤖 </b></summary>
   
   ```mdx
   This is a comment left during a code review.
   
   **Path:** tests/unit_tests/sql/execution/__init__.py
   **Line:** 16:16
   **Comment:**
        *Possible Bug: Turning a test directory into a regular package by 
adding an __init__.py can change pytest collection and import semantics 
(breaking tests that rely on namespace-package behavior); preserve namespace 
semantics by making this __init__ a pkgutil-style namespace package.
   
   Validate the correctness of the flagged issue. If correct, How can I resolve 
this? If you propose a fix, implement it and please make it concise.
   ```
   </details>



##########
tests/unit_tests/sql/execution/conftest.py:
##########
@@ -0,0 +1,309 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""
+Shared fixtures and helpers for SQL execution tests.
+
+This module provides common mocks, fixtures, and helper functions used across
+test_celery_task.py and test_executor.py to reduce code duplication.
+"""
+
+from contextlib import contextmanager
+from typing import Any
+from unittest.mock import MagicMock
+
+import pandas as pd
+import pytest
+from flask import current_app
+from pytest_mock import MockerFixture
+
+from superset.common.db_query_status import QueryStatus as QueryStatusEnum
+from superset.models.core import Database
+
+# =============================================================================
+# Core Fixtures
+# =============================================================================
+
+
+@pytest.fixture(autouse=True)
+def mock_db_session(mocker: MockerFixture) -> MagicMock:
+    """Mock database session for all tests to avoid foreign key constraints."""
+    mock_session = MagicMock()
+    mocker.patch("superset.sql.execution.executor.db.session", mock_session)
+    mocker.patch("superset.sql.execution.celery_task.db.session", mock_session)
+    return mock_session
+
+
+@pytest.fixture
+def mock_query() -> MagicMock:
+    """Create a mock Query model."""
+    query = MagicMock()
+    query.id = 123
+    query.database_id = 1
+    query.sql = "SELECT * FROM users"
+    query.status = QueryStatusEnum.PENDING
+    query.error_message = None
+    query.progress = 0
+    query.end_time = None
+    query.start_running_time = None
+    query.executed_sql = None
+    query.tmp_table_name = None
+    query.catalog = None
+    query.schema = "public"
+    query.extra = {}
+    query.set_extra_json_key = MagicMock()
+    query.results_key = None
+    query.select_as_cta = False
+    query.rows = 0
+    query.to_dict = MagicMock(return_value={"id": 123})
+    query.database = MagicMock()
+    query.database.db_engine_spec.extract_errors.return_value = []
+    query.database.unique_name = "test_db"
+    query.database.cache_timeout = 300
+    return query
+
+
+@pytest.fixture
+def mock_database() -> MagicMock:
+    """Create a mock Database."""
+    database = MagicMock()
+    database.id = 1
+    database.unique_name = "test_db"
+    database.cache_timeout = 300
+    database.sqlalchemy_uri = "postgresql://localhost/test"
+    database.db_engine_spec = MagicMock()
+    database.db_engine_spec.engine = "postgresql"
+    database.db_engine_spec.run_multiple_statements_as_one = False
+    database.db_engine_spec.allows_sql_comments = True
+    database.db_engine_spec.extract_errors = MagicMock(return_value=[])
+    database.db_engine_spec.execute_with_cursor = MagicMock()
+    database.db_engine_spec.get_cancel_query_id = MagicMock(return_value=None)
+    database.db_engine_spec.patch = MagicMock()
+    database.db_engine_spec.fetch_data = MagicMock(return_value=[])
+    return database
+
+
+@pytest.fixture
+def mock_result_set() -> MagicMock:
+    """Create a mock SupersetResultSet."""
+    result_set = MagicMock()
+    result_set.size = 2
+    result_set.columns = [{"name": "id"}, {"name": "name"}]
+    result_set.pa_table = MagicMock()
+    result_set.to_pandas_df = MagicMock(
+        return_value=pd.DataFrame({"id": [1, 2], "name": ["Alice", "Bob"]})
+    )
+    return result_set
+
+
+@pytest.fixture
+def database() -> Database:
+    """Create a real test database instance."""
+    return Database(
+        id=1,
+        database_name="test_db",
+        sqlalchemy_uri="sqlite://",
+        allow_dml=False,
+    )
+
+
+@pytest.fixture
+def database_with_dml() -> Database:
+    """Create a real test database instance with DML allowed."""
+    return Database(
+        id=2,
+        database_name="test_db_dml",
+        sqlalchemy_uri="sqlite://",
+        allow_dml=True,
+    )
+
+
+# =============================================================================
+# Helper Functions for Mock Creation
+# =============================================================================
+
+
+def create_mock_cursor(
+    column_names: list[str],
+    data: list[tuple[Any, ...]] | None = None,
+) -> MagicMock:
+    """
+    Create a mock database cursor with column description.
+
+    Args:
+        column_names: List of column names
+        data: Optional data to return from fetchall()
+
+    Returns:
+        Configured MagicMock cursor
+    """
+    mock_cursor = MagicMock()
+    mock_cursor.description = [(name,) for name in column_names]
+    if data is not None:
+        mock_cursor.fetchall.return_value = data
+    return mock_cursor
+
+
+def create_mock_connection(mock_cursor: MagicMock | None = None) -> MagicMock:
+    """
+    Create a mock database connection.
+
+    Args:
+        mock_cursor: Optional cursor to return from cursor()
+
+    Returns:
+        Configured MagicMock connection with context manager support
+    """
+    if mock_cursor is None:
+        mock_cursor = create_mock_cursor([])
+
+    mock_conn = MagicMock()
+    mock_conn.cursor.return_value = mock_cursor
+    mock_conn.__enter__ = MagicMock(return_value=mock_conn)
+    mock_conn.__exit__ = MagicMock(return_value=False)
+    return mock_conn
+
+
+def setup_mock_raw_connection(
+    mock_database: MagicMock,
+    mock_connection: MagicMock | None = None,
+) -> MagicMock:
+    """
+    Setup get_raw_connection as a context manager on a mock database.
+
+    Args:
+        mock_database: The database mock to configure
+        mock_connection: Optional connection to yield
+
+    Returns:
+        The configured mock connection
+    """
+    if mock_connection is None:
+        mock_connection = create_mock_connection()
+
+    @contextmanager
+    def _raw_connection(**kwargs):

Review Comment:
   **Suggestion:** The `_raw_connection` context manager is defined with a 
generic `**kwargs` signature which may mask mismatched call semantics and makes 
the mock less explicit; define `_raw_connection` with the same parameters as 
the real `get_raw_connection` (catalog, schema, nullpool, source) so calls with 
positional/keyword args match the production signature and avoid unexpected 
TypeErrors. [possible bug]
   
   **Severity Level:** Critical 🚨
   ```suggestion
       def _raw_connection(
           catalog: str | None = None,
           schema: str | None = None,
           nullpool: bool = True,
           source: Any = None,
       ):
   ```
   <details>
   <summary><b>Why it matters? ⭐ </b></summary>
   
   Making the mocked contextmanager accept the same explicit parameters as the 
real get_raw_connection (catalog, schema, nullpool, source) improves signature 
fidelity and prevents accidental TypeError when callers pass positional/keyword 
args.
   It's a harmless, clarifying improvement that makes the mock behave like the 
real API.
   </details>
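   A fuller sketch of the explicit-signature mock; the parameter list is assumed to mirror the real `get_raw_connection`, so a mismatched call now fails loudly with TypeError instead of being absorbed by `**kwargs`:
   
   ```python
   from __future__ import annotations
   
   from contextlib import contextmanager
   from typing import Any, Iterator
   from unittest.mock import MagicMock
   
   mock_connection = MagicMock()
   
   @contextmanager
   def _raw_connection(
       catalog: str | None = None,
       schema: str | None = None,
       nullpool: bool = True,
       source: Any = None,
   ) -> Iterator[MagicMock]:
       # Yield the configured mock, mirroring the real context manager's shape.
       yield mock_connection
   
   with _raw_connection(catalog="main", schema="public") as conn:
       conn.cursor()  # MagicMock records the call for later assertions
   ```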
   <details>
   <summary><b>Prompt for AI Agent 🤖 </b></summary>
   
   ```mdx
   This is a comment left during a code review.
   
   **Path:** tests/unit_tests/sql/execution/conftest.py
   **Line:** 199:199
   **Comment:**
        *Possible Bug: The `_raw_connection` context manager is defined with a 
generic `**kwargs` signature which may mask mismatched call semantics and makes 
the mock less explicit; define `_raw_connection` with the same parameters as 
the real `get_raw_connection` (catalog, schema, nullpool, source) so calls with 
positional/keyword args match the production signature and avoid unexpected 
TypeErrors.
   
   Validate the correctness of the flagged issue. If correct, How can I resolve 
this? If you propose a fix, implement it and please make it concise.
   ```
   </details>



##########
superset/sql/execution/executor.py:
##########
@@ -0,0 +1,999 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""
+SQL Executor implementation for Database.execute() and execute_async().
+
+This module provides the SQLExecutor class that implements the query execution
+methods defined in superset_core.api.models.Database.
+
+Implementation Features
+-----------------------
+
+Query Preparation (applies to both sync and async):
+- Jinja2 template rendering (via template_params in QueryOptions)
+- SQL mutation via SQL_QUERY_MUTATOR config hook
+- DML permission checking (requires database.allow_dml=True for DML)
+- Disallowed functions checking via DISALLOWED_SQL_FUNCTIONS config
+- Row-level security (RLS) via AST transformation (always applied)
+- Result limit application via SQL_MAX_ROW config
+- Catalog/schema resolution and validation
+
+Synchronous Execution (execute):
+- Multi-statement SQL parsing and execution
+- Progress tracking via Query model
+- Result caching via cache_manager.data_cache
+- Query logging via QUERY_LOGGER config hook
+- Timeout protection via SQLLAB_TIMEOUT config
+- Dry run mode (returns transformed SQL without execution)
+
+Asynchronous Execution (execute_async):
+- Celery task submission for background execution
+- Security validation before submission
+- Query model creation with PENDING status
+- Result caching check (returns cached if available)
+- Background execution with timeout via SQLLAB_ASYNC_TIME_LIMIT_SEC
+- Results stored in results backend for retrieval
+- Handle-based progress tracking and cancellation
+
+See Database.execute() and Database.execute_async() docstrings in
+superset_core.api.models for the public API contract.
+"""
+
+from __future__ import annotations
+
+import logging
+import time
+from datetime import datetime
+from typing import Any, TYPE_CHECKING
+
+from flask import current_app as app, g, has_app_context
+
+from superset import db
+from superset.errors import ErrorLevel, SupersetError, SupersetErrorType
+from superset.exceptions import (
+    SupersetSecurityException,
+    SupersetTimeoutException,
+)
+from superset.extensions import cache_manager
+from superset.sql.parse import SQLScript
+from superset.utils import core as utils
+
+if TYPE_CHECKING:
+    from superset_core.api.types import (
+        AsyncQueryHandle,
+        QueryOptions,
+        QueryResult,
+    )
+
+    from superset.models.core import Database
+    from superset.result_set import SupersetResultSet
+
+logger = logging.getLogger(__name__)
+
+
+def execute_sql_with_cursor(
+    database: Database,
+    cursor: Any,
+    statements: list[str],
+    query: Any,
+    log_query_fn: Any | None = None,
+    check_stopped_fn: Any | None = None,
+    execute_fn: Any | None = None,
+) -> SupersetResultSet | None:
+    """
+    Execute SQL statements with a cursor and return result set.
+
+    This is the shared execution logic used by both sync (SQLExecutor) and
+    async (celery_task) execution paths. It handles multi-statement execution
+    with progress tracking via the Query model.
+
+    :param database: Database model to execute against
+    :param cursor: Database cursor to use for execution
+    :param statements: List of SQL statements to execute
+    :param query: Query model for progress tracking
+    :param log_query_fn: Optional function to log queries, called as fn(sql, schema)
+    :param check_stopped_fn: Optional function to check if query was stopped.
+        Should return True if stopped. Used by async execution for cancellation.
+    :param execute_fn: Optional custom execute function. If not provided, uses
+        database.db_engine_spec.execute(cursor, sql, database). Custom function
+        should accept (cursor, sql) and handle execution.
+    :returns: SupersetResultSet from last statement, or None if stopped
+    """
+    from superset.result_set import SupersetResultSet
+
+    total = len(statements)
+    if total == 0:
+        return None
+
+    rows = None
+    description = None
+
+    for i, statement in enumerate(statements):
+        # Check if query was stopped (async cancellation)
+        if check_stopped_fn and check_stopped_fn():
+            return None
+
+        # Apply SQL mutation
+        stmt_sql = database.mutate_sql_based_on_config(
+            statement,
+            is_split=True,
+        )
+
+        # Log query
+        if log_query_fn:
+            log_query_fn(stmt_sql, query.schema)
+
+        # Execute - use custom function or default
+        if execute_fn:
+            execute_fn(cursor, stmt_sql)
+        else:
+            database.db_engine_spec.execute(cursor, stmt_sql, database)
+
+        # Fetch results from last statement only
+        if i == total - 1:
+            description = cursor.description
+            rows = database.db_engine_spec.fetch_data(cursor)
+        else:
+            cursor.fetchall()
+
+        # Update progress on Query model
+        progress_pct = int(((i + 1) / total) * 100)
+        query.progress = progress_pct
+        query.set_extra_json_key(
+            "progress",
+            f"Running statement {i + 1} of {total}",
+        )
+        db.session.commit()  # pylint: disable=consider-using-transaction
+
+    # Build result set
+    if rows is not None and description is not None:
+        return SupersetResultSet(
+            rows,
+            description,
+            database.db_engine_spec,
+        )
+
+    return None
+
+
+class SQLExecutor:
+    """
+    SQL query executor implementation.
+
+    Implements Database.execute() and execute_async() methods.
+    See superset_core.api.models.Database for the full public API documentation.
+    """
+
+    def __init__(self, database: Database) -> None:
+        """
+        Initialize the executor with a database.
+
+        :param database: Database model instance to execute queries against
+        """
+        self.database = database
+
+    def execute(
+        self,
+        sql: str,
+        options: QueryOptions | None = None,
+    ) -> QueryResult:
+        """
+        Execute SQL synchronously.
+
+        If options.dry_run=True, returns the transformed SQL without execution.
+        All transformations (RLS, templates, limits) are still applied.
+
+        See superset_core.api.models.Database.execute() for full documentation.
+        """
+        from superset_core.api.types import (
+            QueryOptions as QueryOptionsType,
+            QueryResult as QueryResultType,
+            QueryStatus,
+        )
+
+        opts: QueryOptionsType = options or QueryOptionsType()
+        start_time = time.time()
+
+        try:
+            # 1. Prepare SQL (assembly only, no security checks)
+            script, catalog, schema = self._prepare_sql(sql, opts)
+
+            # 2. Security checks
+            self._check_security(script)
+
+            # 3. Get mutation status and format SQL
+            has_mutation = script.has_mutation()
+            final_sql = script.format()
+
+            # DRY RUN: Return transformed SQL without execution
+            if opts.dry_run:
+                execution_time_ms = (time.time() - start_time) * 1000
+                return QueryResultType(
+                    status=QueryStatus.SUCCESS,
+                    data=None,
+                    row_count=0,
+                    query=final_sql,  # Transformed SQL (after RLS, templates, limits)
+                    query_id=None,  # No Query model created
+                    execution_time_ms=execution_time_ms,
+                    is_cached=False,
+                )
+
+            # 4. Check cache
+            cached_result = self._try_get_cached_result(has_mutation, final_sql, opts)
+            if cached_result:
+                return cached_result
+
+            # 5. Create Query model for audit
+            query = self._create_query_record(
+                final_sql, opts, catalog, schema, status="running"
+            )
+
+            # 6. Execute with timeout
+            timeout = opts.timeout_seconds or app.config.get("SQLLAB_TIMEOUT", 30)
+            timeout_msg = f"Query exceeded the {timeout} seconds timeout."
+
+            with utils.timeout(seconds=timeout, error_message=timeout_msg):
+                df = self._execute_statements(
+                    final_sql,
+                    catalog,
+                    schema,
+                    query,
+                )
+
+            execution_time_ms = (time.time() - start_time) * 1000
+
+            # 7. Update query record
+            query.status = "success"
+            query.rows = len(df)
+            query.progress = 100
+            db.session.commit()  # pylint: disable=consider-using-transaction
+
+            result = QueryResultType(
+                status=QueryStatus.SUCCESS,
+                data=df,
+                row_count=len(df),
+                query=final_sql,  # Transformed SQL (after RLS, templates, limits)
+                query_id=query.id,
+                execution_time_ms=execution_time_ms,
+            )
+
+            # 8. Store in cache (if SELECT and caching enabled)
+            if not has_mutation:
+                self._store_in_cache(result, final_sql, opts)
+
+            return result
+
+        except SupersetTimeoutException:
+            return self._create_error_result(
+                QueryStatus.TIMED_OUT,
+                "Query exceeded the timeout limit",
+                sql,
+                start_time,
+            )
+        except SupersetSecurityException as ex:
+            return self._create_error_result(
+                QueryStatus.FAILED, str(ex), sql, start_time
+            )
+        except Exception as ex:
+            error_msg = self.database.db_engine_spec.extract_error_message(ex)
+            return self._create_error_result(
+                QueryStatus.FAILED, error_msg, sql, start_time
+            )
+
+    def execute_async(
+        self,
+        sql: str,
+        options: QueryOptions | None = None,
+    ) -> AsyncQueryHandle:
+        """
+        Execute SQL asynchronously via Celery.
+
+        If options.dry_run=True, returns the transformed SQL as a completed
+        AsyncQueryHandle without submitting to Celery.
+
+        See superset_core.api.models.Database.execute_async() for full documentation.
+        """
+        from superset_core.api.types import (
+            QueryOptions as QueryOptionsType,
+            QueryResult as QueryResultType,
+            QueryStatus,
+        )
+
+        opts: QueryOptionsType = options or QueryOptionsType()
+
+        # 1. Prepare SQL (assembly only, no security checks)
+        script, catalog, schema = self._prepare_sql(sql, opts)
+
+        # 2. Security checks
+        self._check_security(script)
+
+        # 3. Get mutation status and format SQL
+        has_mutation = script.has_mutation()
+        final_sql = script.format()
+
+        # DRY RUN: Return transformed SQL as completed async handle
+        if opts.dry_run:
+            dry_run_result = QueryResultType(
+                status=QueryStatus.SUCCESS,
+                data=None,
+                row_count=0,
+                query=final_sql,  # Transformed SQL (after RLS, templates, limits)
+                query_id=None,
+                execution_time_ms=0,
+                is_cached=False,
+            )
+            return self._create_cached_handle(dry_run_result)
+
+        # 4. Check cache
+        if cached_result := self._try_get_cached_result(has_mutation, final_sql, opts):
+            return self._create_cached_handle(cached_result)
+
+        # 5. Create Query model for audit
+        query = self._create_query_record(
+            final_sql, opts, catalog, schema, status="pending"
+        )
+
+        # 6. Submit to Celery
+        self._submit_query_to_celery(query, final_sql, opts)
+
+        # 7. Create and return handle with bound methods
+        return self._create_async_handle(query.id)
+
+    def _prepare_sql(
+        self,
+        sql: str,
+        opts: QueryOptions,
+    ) -> tuple[SQLScript, str | None, str | None]:
+        """
+        Prepare SQL for execution (no side effects, no security checks).
+
+        This method performs SQL preprocessing:
+        1. Template rendering
+        2. SQL parsing
+        3. Catalog/schema resolution
+        4. RLS application
+        5. Limit application (if not mutation)
+
+        Security checks (disallowed functions, DML permission) are performed
+        by the caller after receiving the prepared script.
+
+        :param sql: Original SQL query
+        :param opts: Query options
+        :returns: Tuple of (prepared SQLScript, catalog, schema)
+        """
+        # 1. Render Jinja2 templates
+        rendered_sql = self._render_sql_template(sql, opts.template_params)
+
+        # 2. Parse SQL with SQLScript
+        script = SQLScript(rendered_sql, self.database.db_engine_spec.engine)
+
+        # 3. Get catalog and schema
+        catalog = opts.catalog or self.database.get_default_catalog()
+        schema = opts.schema or self.database.get_default_schema(catalog)
+
+        # 4. Apply RLS directly to script statements
+        self._apply_rls_to_script(script, catalog, schema)
+
+        # 5. Apply limit only if not a mutation
+        if not script.has_mutation():
+            self._apply_limit_to_script(script, opts)
+
+        return script, catalog, schema
+
+    def _check_security(self, script: SQLScript) -> None:
+        """
+        Perform security checks on prepared SQL script.
+
+        :param script: Prepared SQLScript
+        :raises SupersetSecurityException: If security checks fail
+        """
+        # Check disallowed functions
+        if disallowed := self._check_disallowed_functions(script):
+            raise SupersetSecurityException(
+                SupersetError(
+                    message=f"Disallowed SQL functions: {', '.join(disallowed)}",
+                    error_type=SupersetErrorType.INVALID_SQL_ERROR,
+                    level=ErrorLevel.ERROR,
+                )
+            )
+
+        # Check DML permission
+        if script.has_mutation() and not self.database.allow_dml:
+            raise SupersetSecurityException(
+                SupersetError(
+                    message="DML queries are not allowed on this database",
+                    error_type=SupersetErrorType.DML_NOT_ALLOWED_ERROR,
+                    level=ErrorLevel.ERROR,
+                )
+            )
+
+    def _execute_statements(
+        self,
+        sql: str,
+        catalog: str | None,
+        schema: str | None,
+        query: Any,
+    ) -> Any:
+        """
+        Execute SQL statements with progress tracking.
+
+        Progress is tracked via Query.progress field, matching SQL Lab behavior.
+        Uses the same execution path for both single and multi-statement queries.
+
+        :param sql: Final SQL to execute (with RLS and all transformations applied)
+        :param catalog: Catalog name
+        :param schema: Schema name
+        :param query: Query model for progress tracking
+        :returns: DataFrame with results from last statement
+        """
+        import pandas as pd
+
+        # Parse the final SQL (with RLS applied) to get statements
+        script = SQLScript(sql, self.database.db_engine_spec.engine)
+        statements = script.statements
+
+        # Handle empty script
+        if not statements:
+            return pd.DataFrame()
+
+        # Use consistent execution path for all queries
+        with self.database.get_raw_connection(catalog=catalog, schema=schema) as conn:
+            cursor = conn.cursor()
+            result_set = execute_sql_with_cursor(
+                database=self.database,
+                cursor=cursor,
+                statements=[stmt.format() for stmt in statements],
+                query=query,
+                log_query_fn=self._log_query,
+            )
+
+            if result_set is not None:
+                return result_set.to_pandas_df()
+
+        return pd.DataFrame()
+
+    def _log_query(
+        self,
+        sql: str,
+        schema: str | None,
+    ) -> None:
+        """
+        Log query using QUERY_LOGGER config.
+
+        :param sql: SQL to log
+        :param schema: Schema name
+        """
+        from superset import security_manager
+
+        if log_query := app.config.get("QUERY_LOGGER"):
+            log_query(
+                self.database,
+                sql,
+                schema,
+                __name__,
+                security_manager,
+                {},
+            )
+
+    def _create_error_result(
+        self,
+        status: Any,
+        error_message: str,
+        sql: str,
+        start_time: float,
+    ) -> QueryResult:
+        """
+        Create a QueryResult for error cases.
+
+        :param status: QueryStatus enum value
+        :param error_message: Error message to include
+        :param sql: SQL query (original if error occurred before transformation)
+        :param start_time: Start time for calculating execution duration
+        :returns: QueryResult with error status
+        """
+        from superset_core.api.types import QueryResult as QueryResultType
+
+        return QueryResultType(
+            status=status,
+            error_message=error_message,
+            query=sql,
+            execution_time_ms=(time.time() - start_time) * 1000,
+        )
+
+    def _render_sql_template(
+        self, sql: str, template_params: dict[str, Any] | None
+    ) -> str:
+        """
+        Render Jinja2 template with params.
+
+        :param sql: SQL string potentially containing Jinja2 templates
+        :param template_params: Parameters to pass to the template
+        :returns: Rendered SQL string
+        """
+        if not template_params:

Review Comment:
   **Suggestion:** The template renderer short-circuits when `template_params` is
   an empty dict because `if not template_params:` treats an empty mapping as
   falsy; this prevents rendering templates that rely on a default/empty context.
   Only skip rendering when `template_params` is None. [logic error]
   
   **Severity Level:** Minor ⚠️
   ```suggestion
           if template_params is None:
   ```
   <details>
   <summary><b>Why it matters? ⭐ </b></summary>
   
    An empty dict is falsy, so the current check skips template rendering when
    callers intentionally pass an empty mapping. Only skipping when
    template_params is None keeps the intended distinction between "no rendering
    requested" and "render with empty context".
   <details>
   <summary><b>Prompt for AI Agent 🤖 </b></summary>
   
   ```mdx
   This is a comment left during a code review.
   
   **Path:** superset/sql/execution/executor.py
   **Line:** 528:528
   **Comment:**
     *Logic Error: The template renderer short-circuits when `template_params`
     is an empty dict because `if not template_params:` treats an empty mapping
     as falsy; this prevents rendering templates that rely on a default/empty
     context. Only skip rendering when `template_params` is None.
   
   Validate the correctness of the flagged issue. If correct, how can I resolve
   this? If you propose a fix, implement it and please make it concise.
   ```
   </details>
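   To make the pitfall concrete, here is a minimal standalone sketch using plain
   `jinja2.Template` rather than Superset's template processor (the `render` and
   `render_fixed` helpers are hypothetical):

   ```python
   from jinja2 import Template


   def render(sql: str, params: dict | None) -> str:
       # Buggy check: an empty dict is falsy, so rendering is skipped even
       # when the caller explicitly asked for an empty template context.
       if not params:
           return sql
       return Template(sql).render(**params)


   def render_fixed(sql: str, params: dict | None) -> str:
       # Fixed check: only skip rendering when no mapping was supplied at all.
       if params is None:
           return sql
       return Template(sql).render(**params)


   sql = "SELECT {{ 40 + 2 }}"
   print(render(sql, {}))        # SELECT {{ 40 + 2 }}  (template never rendered)
   print(render_fixed(sql, {}))  # SELECT 42
   ```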



##########
superset/sql/execution/celery_task.py:
##########
@@ -0,0 +1,434 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+Celery task for async SQL execution.
+
+This module provides the Celery task for executing SQL queries asynchronously.
+It is used by SQLExecutor.execute_async() to run queries in the background.
+"""
+
+from __future__ import annotations
+
+import dataclasses
+import logging
+import sys
+import uuid
+from sys import getsizeof
+from typing import Any, TYPE_CHECKING
+
+import msgpack
+from celery.exceptions import SoftTimeLimitExceeded
+from flask import current_app as app, has_app_context
+from flask_babel import gettext as __
+
+from superset import (
+    db,
+    results_backend,
+    security_manager,
+)
+from superset.common.db_query_status import QueryStatus
+from superset.constants import QUERY_CANCEL_KEY
+from superset.errors import ErrorLevel, SupersetError, SupersetErrorType
+from superset.exceptions import (
+    SupersetErrorException,
+    SupersetErrorsException,
+)
+from superset.extensions import celery_app
+from superset.models.sql_lab import Query
+from superset.result_set import SupersetResultSet
+from superset.sql.execution.executor import execute_sql_with_cursor
+from superset.sql.parse import SQLScript
+from superset.sqllab.utils import write_ipc_buffer
+from superset.utils import json
+from superset.utils.core import override_user, zlib_compress
+from superset.utils.dates import now_as_float
+from superset.utils.decorators import stats_timing
+
+if TYPE_CHECKING:
+    pass
+
+logger = logging.getLogger(__name__)
+
+BYTES_IN_MB = 1024 * 1024
+
+
+def _get_query(query_id: int) -> Query:
+    """Get the query by ID."""
+    return db.session.query(Query).filter_by(id=query_id).one()
+
+
+def _handle_query_error(
+    ex: Exception,
+    query: Query,
+    payload: dict[str, Any] | None = None,
+    prefix_message: str = "",
+) -> dict[str, Any]:
+    """Handle error while processing the SQL query."""
+    payload = payload or {}
+    msg = f"{prefix_message} {str(ex)}".strip()
+    query.error_message = msg
+    query.tmp_table_name = None
+    query.status = QueryStatus.FAILED
+
+    if not query.end_time:
+        query.end_time = now_as_float()
+
+    # Extract DB-specific errors
+    if isinstance(ex, SupersetErrorException):
+        errors = [ex.error]
+    elif isinstance(ex, SupersetErrorsException):
+        errors = ex.errors
+    else:
+        errors = query.database.db_engine_spec.extract_errors(
+            str(ex), database_name=query.database.unique_name
+        )
+
+    errors_payload = [dataclasses.asdict(error) for error in errors]
+    if errors:
+        query.set_extra_json_key("errors", errors_payload)
+
+    db.session.commit()  # pylint: disable=consider-using-transaction
+    payload.update({"status": query.status, "error": msg, "errors": errors_payload})
+    if troubleshooting_link := app.config["TROUBLESHOOTING_LINK"]:
+        payload["link"] = troubleshooting_link
+    return payload
+
+
+def _serialize_payload(payload: dict[Any, Any]) -> bytes | str:
+    """Serialize payload for storage based on RESULTS_BACKEND_USE_MSGPACK config."""
+    from superset import results_backend_use_msgpack
+
+    if results_backend_use_msgpack:
+        return msgpack.dumps(payload, default=json.json_iso_dttm_ser, use_bin_type=True)
+    return json.dumps(payload, default=json.json_iso_dttm_ser, ignore_nan=True)
+
+
+def _prepare_statement_blocks(
+    rendered_query: str,
+    db_engine_spec: Any,
+) -> tuple[SQLScript, list[str]]:
+    """
+    Parse SQL and build statement blocks for execution.
+
+    Note: RLS, security checks, and other preprocessing are handled by
+    SQLExecutor before the query reaches this task.
+    """
+    parsed_script = SQLScript(rendered_query, engine=db_engine_spec.engine)
+
+    # Build statement blocks for execution
+    if db_engine_spec.run_multiple_statements_as_one:
+        blocks = [parsed_script.format(comments=db_engine_spec.allows_sql_comments)]
+    else:
+        blocks = [
+            statement.format(comments=db_engine_spec.allows_sql_comments)
+            for statement in parsed_script.statements
+        ]
+
+    return parsed_script, blocks
+
+
+def _finalize_successful_query(
+    query: Query,
+    result_set: SupersetResultSet,
+    payload: dict[str, Any],
+) -> None:
+    """Update query metadata and payload after successful execution."""
+    query.rows = result_set.size
+    query.progress = 100
+    query.set_extra_json_key("progress", None)
+    query.set_extra_json_key("columns", result_set.columns)
+    query.end_time = now_as_float()
+
+    data, columns = _serialize_result_set(result_set)
+
+    payload.update(
+        {
+            "status": QueryStatus.SUCCESS,
+            "data": data,
+            "columns": columns,
+            "query": query.to_dict(),
+        }
+    )
+    payload["query"]["state"] = QueryStatus.SUCCESS
+
+
+def _store_results_in_backend(
+    query: Query,
+    payload: dict[str, Any],
+    database: Any,
+) -> None:
+    """Store query results in the results backend."""
+    key = str(uuid.uuid4())
+    payload["query"]["resultsKey"] = key
+    logger.info(
+        "Query %s: Storing results in results backend, key: %s",
+        str(query.id),
+        key,
+    )
+    stats_logger = app.config["STATS_LOGGER"]
+    with stats_timing("sqllab.query.results_backend_write", stats_logger):
+        with stats_timing(
+            "sqllab.query.results_backend_write_serialization", stats_logger
+        ):
+            serialized_payload = _serialize_payload(payload)
+
+            # Check payload size limit

Review Comment:
   **Suggestion:** `_serialize_payload` can return a `str` when JSON is used, but
   `zlib_compress` expects bytes; the code passes a string to compression and
   measures its size inconsistently. Encode the serialized payload to bytes
   before the size check and compression. [type error]
   
   **Severity Level:** Minor ⚠️
   ```suggestion
            # Ensure payload is bytes for compression and accurate size measurement
               if isinstance(serialized_payload, str):
                   serialized_payload = serialized_payload.encode("utf-8")
   
   ```
   <details>
   <summary><b>Prompt for AI Agent 🤖 </b></summary>
   
   ```mdx
   This is a comment left during a code review.
   
   **Path:** superset/sql/execution/celery_task.py
   **Line:** 189:189
   **Comment:**
     *Type Error: `_serialize_payload` can return a `str` when JSON is used, but
     `zlib_compress` expects bytes; the code passes a string to compression and
     measures its size inconsistently. Encode the serialized payload to bytes
     before the size check and compression.
   
   Validate the correctness of the flagged issue. If correct, how can I resolve
   this? If you propose a fix, implement it and please make it concise.
   ```
   </details>
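   As a rough standalone illustration of the encoding concern, with stdlib `json`
   and `zlib` standing in for Superset's serializers and its `zlib_compress`
   helper (a sketch, not the PR's code):

   ```python
   import json
   import zlib

   payload = {"status": "success", "data": [{"a": 1}]}
   serialized = json.dumps(payload)  # the JSON path yields str, not bytes

   # zlib.compress() raises TypeError on str, and measuring a str inspects the
   # Python object rather than the bytes that will actually be stored.
   if isinstance(serialized, str):
       serialized = serialized.encode("utf-8")

   size_in_bytes = len(serialized)        # size of the stored payload
   compressed = zlib.compress(serialized)
   print(size_in_bytes, len(compressed))
   ```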



##########
superset/sql/execution/executor.py:
##########
@@ -0,0 +1,999 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""
+SQL Executor implementation for Database.execute() and execute_async().
+
+This module provides the SQLExecutor class that implements the query execution
+methods defined in superset_core.api.models.Database.
+
+Implementation Features
+-----------------------
+
+Query Preparation (applies to both sync and async):
+- Jinja2 template rendering (via template_params in QueryOptions)
+- SQL mutation via SQL_QUERY_MUTATOR config hook
+- DML permission checking (requires database.allow_dml=True for DML)
+- Disallowed functions checking via DISALLOWED_SQL_FUNCTIONS config
+- Row-level security (RLS) via AST transformation (always applied)
+- Result limit application via SQL_MAX_ROW config
+- Catalog/schema resolution and validation
+
+Synchronous Execution (execute):
+- Multi-statement SQL parsing and execution
+- Progress tracking via Query model
+- Result caching via cache_manager.data_cache
+- Query logging via QUERY_LOGGER config hook
+- Timeout protection via SQLLAB_TIMEOUT config
+- Dry run mode (returns transformed SQL without execution)
+
+Asynchronous Execution (execute_async):
+- Celery task submission for background execution
+- Security validation before submission
+- Query model creation with PENDING status
+- Result caching check (returns cached if available)
+- Background execution with timeout via SQLLAB_ASYNC_TIME_LIMIT_SEC
+- Results stored in results backend for retrieval
+- Handle-based progress tracking and cancellation
+
+See Database.execute() and Database.execute_async() docstrings in
+superset_core.api.models for the public API contract.
+"""
+
+from __future__ import annotations
+
+import logging
+import time
+from datetime import datetime
+from typing import Any, TYPE_CHECKING
+
+from flask import current_app as app, g, has_app_context
+
+from superset import db
+from superset.errors import ErrorLevel, SupersetError, SupersetErrorType
+from superset.exceptions import (
+    SupersetSecurityException,
+    SupersetTimeoutException,
+)
+from superset.extensions import cache_manager
+from superset.sql.parse import SQLScript
+from superset.utils import core as utils
+
+if TYPE_CHECKING:
+    from superset_core.api.types import (
+        AsyncQueryHandle,
+        QueryOptions,
+        QueryResult,
+    )
+
+    from superset.models.core import Database
+    from superset.result_set import SupersetResultSet
+
+logger = logging.getLogger(__name__)
+
+
+def execute_sql_with_cursor(
+    database: Database,
+    cursor: Any,
+    statements: list[str],
+    query: Any,
+    log_query_fn: Any | None = None,
+    check_stopped_fn: Any | None = None,
+    execute_fn: Any | None = None,
+) -> SupersetResultSet | None:
+    """
+    Execute SQL statements with a cursor and return result set.
+
+    This is the shared execution logic used by both sync (SQLExecutor) and
+    async (celery_task) execution paths. It handles multi-statement execution
+    with progress tracking via the Query model.
+
+    :param database: Database model to execute against
+    :param cursor: Database cursor to use for execution
+    :param statements: List of SQL statements to execute
+    :param query: Query model for progress tracking
+    :param log_query_fn: Optional function to log queries, called as fn(sql, schema)
+    :param check_stopped_fn: Optional function to check if query was stopped.
+        Should return True if stopped. Used by async execution for cancellation.
+    :param execute_fn: Optional custom execute function. If not provided, uses
+        database.db_engine_spec.execute(cursor, sql, database). Custom function
+        should accept (cursor, sql) and handle execution.
+    :returns: SupersetResultSet from last statement, or None if stopped
+    """
+    from superset.result_set import SupersetResultSet
+
+    total = len(statements)
+    if total == 0:
+        return None
+
+    rows = None
+    description = None
+
+    for i, statement in enumerate(statements):
+        # Check if query was stopped (async cancellation)
+        if check_stopped_fn and check_stopped_fn():
+            return None
+
+        # Apply SQL mutation
+        stmt_sql = database.mutate_sql_based_on_config(
+            statement,
+            is_split=True,
+        )
+
+        # Log query
+        if log_query_fn:
+            log_query_fn(stmt_sql, query.schema)
+
+        # Execute - use custom function or default
+        if execute_fn:
+            execute_fn(cursor, stmt_sql)
+        else:
+            database.db_engine_spec.execute(cursor, stmt_sql, database)
+
+        # Fetch results from last statement only
+        if i == total - 1:
+            description = cursor.description
+            rows = database.db_engine_spec.fetch_data(cursor)
+        else:
+            cursor.fetchall()
+
+        # Update progress on Query model
+        progress_pct = int(((i + 1) / total) * 100)
+        query.progress = progress_pct
+        query.set_extra_json_key(
+            "progress",
+            f"Running statement {i + 1} of {total}",
+        )
+        db.session.commit()  # pylint: disable=consider-using-transaction
+
+    # Build result set
+    if rows is not None and description is not None:
+        return SupersetResultSet(
+            rows,
+            description,
+            database.db_engine_spec,
+        )
+
+    return None
+
+
+class SQLExecutor:
+    """
+    SQL query executor implementation.
+
+    Implements Database.execute() and execute_async() methods.
+    See superset_core.api.models.Database for the full public API documentation.
+    """
+
+    def __init__(self, database: Database) -> None:
+        """
+        Initialize the executor with a database.
+
+        :param database: Database model instance to execute queries against
+        """
+        self.database = database
+
+    def execute(
+        self,
+        sql: str,
+        options: QueryOptions | None = None,
+    ) -> QueryResult:
+        """
+        Execute SQL synchronously.
+
+        If options.dry_run=True, returns the transformed SQL without execution.
+        All transformations (RLS, templates, limits) are still applied.
+
+        See superset_core.api.models.Database.execute() for full documentation.
+        """
+        from superset_core.api.types import (
+            QueryOptions as QueryOptionsType,
+            QueryResult as QueryResultType,
+            QueryStatus,
+        )
+
+        opts: QueryOptionsType = options or QueryOptionsType()
+        start_time = time.time()
+
+        try:
+            # 1. Prepare SQL (assembly only, no security checks)
+            script, catalog, schema = self._prepare_sql(sql, opts)
+
+            # 2. Security checks
+            self._check_security(script)
+
+            # 3. Get mutation status and format SQL
+            has_mutation = script.has_mutation()
+            final_sql = script.format()
+
+            # DRY RUN: Return transformed SQL without execution
+            if opts.dry_run:
+                execution_time_ms = (time.time() - start_time) * 1000
+                return QueryResultType(
+                    status=QueryStatus.SUCCESS,
+                    data=None,
+                    row_count=0,
+                    query=final_sql,  # Transformed SQL (after RLS, templates, limits)
+                    query_id=None,  # No Query model created
+                    execution_time_ms=execution_time_ms,
+                    is_cached=False,
+                )
+
+            # 4. Check cache
+            cached_result = self._try_get_cached_result(has_mutation, final_sql, opts)
+            if cached_result:
+                return cached_result
+
+            # 5. Create Query model for audit
+            query = self._create_query_record(
+                final_sql, opts, catalog, schema, status="running"
+            )
+
+            # 6. Execute with timeout
+            timeout = opts.timeout_seconds or app.config.get("SQLLAB_TIMEOUT", 30)
+            timeout_msg = f"Query exceeded the {timeout} seconds timeout."
+
+            with utils.timeout(seconds=timeout, error_message=timeout_msg):
+                df = self._execute_statements(
+                    final_sql,
+                    catalog,
+                    schema,
+                    query,
+                )
+
+            execution_time_ms = (time.time() - start_time) * 1000
+
+            # 7. Update query record
+            query.status = "success"
+            query.rows = len(df)
+            query.progress = 100
+            db.session.commit()  # pylint: disable=consider-using-transaction
+
+            result = QueryResultType(
+                status=QueryStatus.SUCCESS,
+                data=df,
+                row_count=len(df),
+                query=final_sql,  # Transformed SQL (after RLS, templates, limits)
+                query_id=query.id,
+                execution_time_ms=execution_time_ms,
+            )
+
+            # 8. Store in cache (if SELECT and caching enabled)
+            if not has_mutation:
+                self._store_in_cache(result, final_sql, opts)
+
+            return result
+
+        except SupersetTimeoutException:
+            return self._create_error_result(
+                QueryStatus.TIMED_OUT,
+                "Query exceeded the timeout limit",
+                sql,
+                start_time,
+            )
+        except SupersetSecurityException as ex:
+            return self._create_error_result(
+                QueryStatus.FAILED, str(ex), sql, start_time
+            )
+        except Exception as ex:
+            error_msg = self.database.db_engine_spec.extract_error_message(ex)
+            return self._create_error_result(
+                QueryStatus.FAILED, error_msg, sql, start_time
+            )
+
+    def execute_async(
+        self,
+        sql: str,
+        options: QueryOptions | None = None,
+    ) -> AsyncQueryHandle:
+        """
+        Execute SQL asynchronously via Celery.
+
+        If options.dry_run=True, returns the transformed SQL as a completed
+        AsyncQueryHandle without submitting to Celery.
+
+        See superset_core.api.models.Database.execute_async() for full documentation.
+        """
+        from superset_core.api.types import (
+            QueryOptions as QueryOptionsType,
+            QueryResult as QueryResultType,
+            QueryStatus,
+        )
+
+        opts: QueryOptionsType = options or QueryOptionsType()
+
+        # 1. Prepare SQL (assembly only, no security checks)
+        script, catalog, schema = self._prepare_sql(sql, opts)
+
+        # 2. Security checks
+        self._check_security(script)
+
+        # 3. Get mutation status and format SQL
+        has_mutation = script.has_mutation()
+        final_sql = script.format()
+
+        # DRY RUN: Return transformed SQL as completed async handle
+        if opts.dry_run:
+            dry_run_result = QueryResultType(
+                status=QueryStatus.SUCCESS,
+                data=None,
+                row_count=0,
+                query=final_sql,  # Transformed SQL (after RLS, templates, limits)
+                query_id=None,
+                execution_time_ms=0,
+                is_cached=False,
+            )
+            return self._create_cached_handle(dry_run_result)
+
+        # 4. Check cache
+        if cached_result := self._try_get_cached_result(has_mutation, final_sql, opts):
+            return self._create_cached_handle(cached_result)
+
+        # 5. Create Query model for audit
+        query = self._create_query_record(
+            final_sql, opts, catalog, schema, status="pending"
+        )
+
+        # 6. Submit to Celery
+        self._submit_query_to_celery(query, final_sql, opts)
+
+        # 7. Create and return handle with bound methods
+        return self._create_async_handle(query.id)
+
+    def _prepare_sql(
+        self,
+        sql: str,
+        opts: QueryOptions,
+    ) -> tuple[SQLScript, str | None, str | None]:
+        """
+        Prepare SQL for execution (no side effects, no security checks).
+
+        This method performs SQL preprocessing:
+        1. Template rendering
+        2. SQL parsing
+        3. Catalog/schema resolution
+        4. RLS application
+        5. Limit application (if not mutation)
+
+        Security checks (disallowed functions, DML permission) are performed
+        by the caller after receiving the prepared script.
+
+        :param sql: Original SQL query
+        :param opts: Query options
+        :returns: Tuple of (prepared SQLScript, catalog, schema)
+        """
+        # 1. Render Jinja2 templates
+        rendered_sql = self._render_sql_template(sql, opts.template_params)
+
+        # 2. Parse SQL with SQLScript
+        script = SQLScript(rendered_sql, self.database.db_engine_spec.engine)
+
+        # 3. Get catalog and schema
+        catalog = opts.catalog or self.database.get_default_catalog()
+        schema = opts.schema or self.database.get_default_schema(catalog)
+
+        # 4. Apply RLS directly to script statements
+        self._apply_rls_to_script(script, catalog, schema)
+
+        # 5. Apply limit only if not a mutation
+        if not script.has_mutation():
+            self._apply_limit_to_script(script, opts)
+
+        return script, catalog, schema
+
+    def _check_security(self, script: SQLScript) -> None:
+        """
+        Perform security checks on prepared SQL script.
+
+        :param script: Prepared SQLScript
+        :raises SupersetSecurityException: If security checks fail
+        """
+        # Check disallowed functions
+        if disallowed := self._check_disallowed_functions(script):
+            raise SupersetSecurityException(
+                SupersetError(
+                    message=f"Disallowed SQL functions: {', '.join(disallowed)}",
+                    error_type=SupersetErrorType.INVALID_SQL_ERROR,
+                    level=ErrorLevel.ERROR,
+                )
+            )
+
+        # Check DML permission
+        if script.has_mutation() and not self.database.allow_dml:
+            raise SupersetSecurityException(
+                SupersetError(
+                    message="DML queries are not allowed on this database",
+                    error_type=SupersetErrorType.DML_NOT_ALLOWED_ERROR,
+                    level=ErrorLevel.ERROR,
+                )
+            )
+
+    def _execute_statements(
+        self,
+        sql: str,
+        catalog: str | None,
+        schema: str | None,
+        query: Any,
+    ) -> Any:
+        """
+        Execute SQL statements with progress tracking.
+
+        Progress is tracked via Query.progress field, matching SQL Lab behavior.
+        Uses the same execution path for both single and multi-statement queries.
+
+        :param sql: Final SQL to execute (with RLS and all transformations applied)
+        :param catalog: Catalog name
+        :param schema: Schema name
+        :param query: Query model for progress tracking
+        :returns: DataFrame with results from last statement
+        """
+        import pandas as pd
+
+        # Parse the final SQL (with RLS applied) to get statements
+        script = SQLScript(sql, self.database.db_engine_spec.engine)
+        statements = script.statements
+
+        # Handle empty script
+        if not statements:
+            return pd.DataFrame()
+
+        # Use consistent execution path for all queries
+        with self.database.get_raw_connection(catalog=catalog, schema=schema) as conn:
+            cursor = conn.cursor()
+            result_set = execute_sql_with_cursor(
+                database=self.database,
+                cursor=cursor,
+                statements=[stmt.format() for stmt in statements],
+                query=query,
+                log_query_fn=self._log_query,
+            )
+
+            if result_set is not None:
+                return result_set.to_pandas_df()
+
+        return pd.DataFrame()
+
+    def _log_query(
+        self,
+        sql: str,
+        schema: str | None,
+    ) -> None:
+        """
+        Log query using QUERY_LOGGER config.
+
+        :param sql: SQL to log
+        :param schema: Schema name
+        """
+        from superset import security_manager
+
+        if log_query := app.config.get("QUERY_LOGGER"):
+            log_query(
+                self.database,
+                sql,
+                schema,
+                __name__,
+                security_manager,
+                {},
+            )
+
+    def _create_error_result(
+        self,
+        status: Any,
+        error_message: str,
+        sql: str,
+        start_time: float,
+    ) -> QueryResult:
+        """
+        Create a QueryResult for error cases.
+
+        :param status: QueryStatus enum value
+        :param error_message: Error message to include
+        :param sql: SQL query (original if error occurred before transformation)
+        :param start_time: Start time for calculating execution duration
+        :returns: QueryResult with error status
+        """
+        from superset_core.api.types import QueryResult as QueryResultType
+
+        return QueryResultType(
+            status=status,
+            error_message=error_message,
+            query=sql,
+            execution_time_ms=(time.time() - start_time) * 1000,
+        )
+
+    def _render_sql_template(
+        self, sql: str, template_params: dict[str, Any] | None
+    ) -> str:
+        """
+        Render Jinja2 template with params.
+
+        :param sql: SQL string potentially containing Jinja2 templates
+        :param template_params: Parameters to pass to the template
+        :returns: Rendered SQL string
+        """
+        if not template_params:
+            return sql
+
+        from superset.jinja_context import get_template_processor
+
+        tp = get_template_processor(database=self.database)
+        return tp.process_template(sql, **template_params)
+
+    def _apply_limit_to_script(self, script: SQLScript, opts: QueryOptions) -> None:
+        """
+        Apply limit to the last statement in the script in place.
+
+        :param script: SQLScript object to modify
+        :param opts: Query options
+        """
+        # Skip if no limit requested
+        if not opts.limit:
+            return
+
+        sql_max_row = app.config.get("SQL_MAX_ROW")
+        effective_limit = opts.limit
+        if sql_max_row and opts.limit > sql_max_row:
+            effective_limit = sql_max_row
+
+        # Apply limit to last statement only
+        if script.statements:
+            script.statements[-1].set_limit_value(
+                effective_limit,
+                self.database.db_engine_spec.limit_method,
+            )
+
+    def _try_get_cached_result(
+        self,
+        has_mutation: bool,
+        sql: str,
+        opts: QueryOptions,
+    ) -> QueryResult | None:
+        """
+        Try to get a cached result if conditions allow.
+
+        :param has_mutation: Whether the query contains mutations (DML)
+        :param sql: SQL query
+        :param opts: Query options
+        :returns: Cached QueryResult or None
+        """
+        if has_mutation or (opts.cache and opts.cache.force_refresh):
+            return None
+
+        return self._get_from_cache(sql, opts)
+
+    def _check_disallowed_functions(self, script: SQLScript) -> set[str] | None:
+        """
+        Check for disallowed SQL functions.
+
+        :param script: Parsed SQL script
+        :returns: Set of disallowed functions found, or None if none found
+        """
+        disallowed_config = app.config.get("DISALLOWED_SQL_FUNCTIONS", {})
+        engine_name = self.database.db_engine_spec.engine
+
+        # Get disallowed functions for this engine
+        engine_disallowed = disallowed_config.get(engine_name, set())
+        if not engine_disallowed:
+            return None
+
+        # Check each statement for disallowed functions
+        found = set()
+        for statement in script.statements:
+            # Use the statement's AST to check for function calls
+            statement_str = str(statement).upper()
+            for func in engine_disallowed:
+                if func.upper() in statement_str:
+                    found.add(func)
+
+        return found if found else None
+
+    def _apply_rls_to_script(
+        self, script: SQLScript, catalog: str | None, schema: str | None
+    ) -> None:
+        """
+        Apply Row-Level Security to SQLScript statements in place.
+
+        :param script: SQLScript object to modify
+        :param catalog: Catalog name
+        :param schema: Schema name
+        """
+        from superset.utils.rls import apply_rls
+
+        # Apply RLS to each statement in the script
+        for statement in script.statements:
+            apply_rls(self.database, catalog, schema or "", statement)
+
+    def _create_query_record(
+        self,
+        sql: str,
+        opts: QueryOptions,
+        catalog: str | None,
+        schema: str | None,
+        status: str = "running",
+    ) -> Any:
+        """
+        Create Query model for audit/tracking.
+
+        :param sql: SQL to execute
+        :param opts: Query options
+        :param catalog: Catalog name
+        :param schema: Schema name
+        :param status: Initial query status ("running" for sync, "pending" for async)
+        :returns: Query model instance
+        """
+        import uuid
+
+        from superset.models.sql_lab import Query as QueryModel
+
+        user_id = None
+        if has_app_context() and hasattr(g, "user") and g.user:
+            user_id = g.user.get_id()
+
+        # Generate client_id for Query model
+        client_id = uuid.uuid4().hex[:11]
+
+        query = QueryModel(
+            client_id=client_id,
+            database_id=self.database.id,
+            sql=sql,
+            catalog=catalog,
+            schema=schema,
+            user_id=user_id,
+            status=status,
+            limit=opts.limit,
+        )
+        db.session.add(query)
+        db.session.commit()  # pylint: disable=consider-using-transaction
+
+        return query
+
+    def _get_from_cache(self, sql: str, opts: QueryOptions) -> QueryResult | None:
+        """
+        Check results cache for existing result.
+
+        :param sql: SQL query
+        :param opts: Query options
+        :returns: Cached QueryResult if found, None otherwise
+        """
+        from superset_core.api.types import (
+            QueryResult as QueryResultType,
+            QueryStatus,
+        )
+
+        cache_key = self._generate_cache_key(sql, opts)
+
+        if (cached := cache_manager.data_cache.get(cache_key)) is not None:
+            return QueryResultType(
+                status=QueryStatus.SUCCESS,
+                data=cached.get("data"),
+                row_count=cached.get("row_count", 0),
+                query=sql,
+                is_cached=True,
+                execution_time_ms=0,
+            )
+
+        return None
+
+    def _store_in_cache(
+        self, result: QueryResult, sql: str, opts: QueryOptions
+    ) -> None:
+        """
+        Store result in cache.
+
+        :param result: Query result to cache
+        :param sql: SQL query (for cache key)
+        :param opts: Query options
+        """
+        from superset_core.api.types import QueryStatus
+
+        if result.status != QueryStatus.SUCCESS or result.data is None:
+            return
+
+        cache_key = self._generate_cache_key(sql, opts)
+        timeout = (
+            (opts.cache.timeout if opts.cache else None)
+            or self.database.cache_timeout
+            or app.config.get("CACHE_DEFAULT_TIMEOUT", 300)
+        )
+
+        cache_manager.data_cache.set(
+            cache_key,
+            {"data": result.data, "row_count": result.row_count},
+            timeout=timeout,
+        )
+
+    def _generate_cache_key(self, sql: str, opts: QueryOptions) -> str:
+        """
+        Generate cache key for query result.
+
+        :param sql: SQL query
+        :param opts: Query options
+        :returns: Cache key string
+        """
+        import hashlib
+
+        # Include relevant options in the cache key
+        key_parts = [
+            str(self.database.id),
+            sql,
+            opts.catalog or "",
+            opts.schema or "",
+            str(opts.limit or ""),

Review Comment:
   **Suggestion:** The cache key builder uses `str(opts.limit or "")`, which
   converts a valid numeric limit of 0 into the empty string, collapsing distinct
   keys; use an explicit None check so 0 is serialized as "0" and keys remain
   unique. [logic error]
   
   **Severity Level:** Minor ⚠️
   ```suggestion
            str(opts.limit) if getattr(opts, "limit", None) is not None else "",
   ```
   <details>
   <summary><b>Why it matters? ⭐ </b></summary>
   
    The current expression converts a legitimate limit of 0 to the empty string,
    collapsing distinct cache keys and causing incorrect cache hits. Serializing
    opts.limit only when it's not None preserves key uniqueness.
   </details>
   <details>
   <summary><b>Prompt for AI Agent 🤖 </b></summary>
   
   ```mdx
   This is a comment left during a code review.
   
   **Path:** superset/sql/execution/executor.py
   **Line:** 735:735
   **Comment:**
     *Logic Error: The cache key builder uses `str(opts.limit or "")` which
     converts a valid numeric limit of 0 into the empty string, collapsing
     distinct keys; use an explicit None check so 0 is serialized as "0" and keys
     remain unique.
   
   Validate the correctness of the flagged issue. If correct, how can I resolve
   this? If you propose a fix, implement it and please make it concise.
   ```
   </details>
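   A quick standalone check of the collapsing behavior (the `key_part` helper is
   hypothetical, shown only to contrast the two expressions):

   ```python
   def key_part(limit: int | None) -> str:
       # Explicit None check keeps limit=0 distinct from "no limit".
       return str(limit) if limit is not None else ""


   print(repr(str(0 or "")))    # ''  -- a limit of 0 folds into the no-limit key
   print(repr(key_part(0)))     # '0'
   print(repr(key_part(None)))  # ''
   ```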



##########
superset/sql/execution/celery_task.py:
##########
@@ -0,0 +1,434 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+Celery task for async SQL execution.
+
+This module provides the Celery task for executing SQL queries asynchronously.
+It is used by SQLExecutor.execute_async() to run queries in the background.
+"""
+
+from __future__ import annotations
+
+import dataclasses
+import logging
+import sys
+import uuid
+from sys import getsizeof
+from typing import Any, TYPE_CHECKING
+
+import msgpack
+from celery.exceptions import SoftTimeLimitExceeded
+from flask import current_app as app, has_app_context
+from flask_babel import gettext as __
+
+from superset import (
+    db,
+    results_backend,
+    security_manager,
+)
+from superset.common.db_query_status import QueryStatus
+from superset.constants import QUERY_CANCEL_KEY
+from superset.errors import ErrorLevel, SupersetError, SupersetErrorType
+from superset.exceptions import (
+    SupersetErrorException,
+    SupersetErrorsException,
+)
+from superset.extensions import celery_app
+from superset.models.sql_lab import Query
+from superset.result_set import SupersetResultSet
+from superset.sql.execution.executor import execute_sql_with_cursor
+from superset.sql.parse import SQLScript
+from superset.sqllab.utils import write_ipc_buffer
+from superset.utils import json
+from superset.utils.core import override_user, zlib_compress
+from superset.utils.dates import now_as_float
+from superset.utils.decorators import stats_timing
+
+if TYPE_CHECKING:
+    pass
+
+logger = logging.getLogger(__name__)
+
+BYTES_IN_MB = 1024 * 1024
+
+
+def _get_query(query_id: int) -> Query:
+    """Get the query by ID."""
+    return db.session.query(Query).filter_by(id=query_id).one()
+
+
+def _handle_query_error(
+    ex: Exception,
+    query: Query,
+    payload: dict[str, Any] | None = None,
+    prefix_message: str = "",
+) -> dict[str, Any]:
+    """Handle error while processing the SQL query."""
+    payload = payload or {}
+    msg = f"{prefix_message} {str(ex)}".strip()
+    query.error_message = msg
+    query.tmp_table_name = None
+    query.status = QueryStatus.FAILED
+
+    if not query.end_time:
+        query.end_time = now_as_float()
+
+    # Extract DB-specific errors
+    if isinstance(ex, SupersetErrorException):
+        errors = [ex.error]
+    elif isinstance(ex, SupersetErrorsException):
+        errors = ex.errors
+    else:
+        errors = query.database.db_engine_spec.extract_errors(
+            str(ex), database_name=query.database.unique_name
+        )
+
+    errors_payload = [dataclasses.asdict(error) for error in errors]
+    if errors:
+        query.set_extra_json_key("errors", errors_payload)
+
+    db.session.commit()  # pylint: disable=consider-using-transaction
+    payload.update({"status": query.status, "error": msg, "errors": errors_payload})
+    if troubleshooting_link := app.config["TROUBLESHOOTING_LINK"]:
+        payload["link"] = troubleshooting_link
+    return payload
+
+
+def _serialize_payload(payload: dict[Any, Any]) -> bytes | str:
+    """Serialize payload for storage based on RESULTS_BACKEND_USE_MSGPACK config."""
+    from superset import results_backend_use_msgpack
+
+    if results_backend_use_msgpack:
+        return msgpack.dumps(payload, default=json.json_iso_dttm_ser, use_bin_type=True)
+    return json.dumps(payload, default=json.json_iso_dttm_ser, ignore_nan=True)
+
+
+def _prepare_statement_blocks(
+    rendered_query: str,
+    db_engine_spec: Any,
+) -> tuple[SQLScript, list[str]]:
+    """
+    Parse SQL and build statement blocks for execution.
+
+    Note: RLS, security checks, and other preprocessing are handled by
+    SQLExecutor before the query reaches this task.
+    """
+    parsed_script = SQLScript(rendered_query, engine=db_engine_spec.engine)
+
+    # Build statement blocks for execution
+    if db_engine_spec.run_multiple_statements_as_one:
+        blocks = [parsed_script.format(comments=db_engine_spec.allows_sql_comments)]
+    else:
+        blocks = [
+            statement.format(comments=db_engine_spec.allows_sql_comments)
+            for statement in parsed_script.statements
+        ]
+
+    return parsed_script, blocks
+
+
+def _finalize_successful_query(
+    query: Query,
+    result_set: SupersetResultSet,
+    payload: dict[str, Any],
+) -> None:
+    """Update query metadata and payload after successful execution."""
+    query.rows = result_set.size
+    query.progress = 100
+    query.set_extra_json_key("progress", None)
+    query.set_extra_json_key("columns", result_set.columns)
+    query.end_time = now_as_float()
+
+    data, columns = _serialize_result_set(result_set)
+
+    payload.update(
+        {
+            "status": QueryStatus.SUCCESS,
+            "data": data,
+            "columns": columns,
+            "query": query.to_dict(),
+        }
+    )
+    payload["query"]["state"] = QueryStatus.SUCCESS
+
+
+def _store_results_in_backend(
+    query: Query,
+    payload: dict[str, Any],
+    database: Any,
+) -> None:
+    """Store query results in the results backend."""
+    key = str(uuid.uuid4())
+    payload["query"]["resultsKey"] = key
+    logger.info(
+        "Query %s: Storing results in results backend, key: %s",
+        str(query.id),
+        key,
+    )
+    stats_logger = app.config["STATS_LOGGER"]
+    with stats_timing("sqllab.query.results_backend_write", stats_logger):
+        with stats_timing(
+            "sqllab.query.results_backend_write_serialization", stats_logger
+        ):
+            serialized_payload = _serialize_payload(payload)
+
+            # Check payload size limit
+            if sql_lab_payload_max_mb := app.config.get("SQLLAB_PAYLOAD_MAX_MB"):
+                serialized_payload_size = sys.getsizeof(serialized_payload)
+                max_bytes = sql_lab_payload_max_mb * BYTES_IN_MB
+
+                if serialized_payload_size > max_bytes:
+                    logger.info("Result size exceeds the allowed limit.")
+                    raise SupersetErrorException(
+                        SupersetError(
+                            message=(
+                                f"Result size "
+                                f"({serialized_payload_size / BYTES_IN_MB:.2f} MB) "
+                                f"exceeds the allowed limit of "
+                                f"{sql_lab_payload_max_mb} MB."
+                            ),
+                            error_type=SupersetErrorType.RESULT_TOO_LARGE_ERROR,
+                            level=ErrorLevel.ERROR,
+                        )
+                    )
+
+        cache_timeout = database.cache_timeout
+        if cache_timeout is None:
+            cache_timeout = app.config["CACHE_DEFAULT_TIMEOUT"]
+
+        compressed = zlib_compress(serialized_payload)
+        logger.debug("*** serialized payload size: %i", getsizeof(serialized_payload))
+        logger.debug("*** compressed payload size: %i", getsizeof(compressed))
+
+        write_success = results_backend.set(key, compressed, cache_timeout)
+        if not write_success:
+            logger.error(
+                "Query %s: Failed to store results in backend, key: %s",
+                str(query.id),
+                key,
+            )
+            stats_logger.incr("sqllab.results_backend.write_failure")
+            query.results_key = None
+            query.status = QueryStatus.FAILED
+            query.error_message = (
+                "Failed to store query results in the results backend. "
+                "Please try again or contact your administrator."
+            )
+            db.session.commit()  # pylint: disable=consider-using-transaction
+            raise SupersetErrorException(
+                SupersetError(
+                    message=__("Failed to store query results. Please try again."),
+                    error_type=SupersetErrorType.RESULTS_BACKEND_ERROR,
+                    level=ErrorLevel.ERROR,
+                )
+            )
+        else:
+            query.results_key = key
+            logger.info(
+                "Query %s: Successfully stored results in backend, key: %s",
+                str(query.id),
+                key,
+            )
+
+
+def _serialize_result_set(
+    result_set: SupersetResultSet,
+) -> tuple[bytes | list[Any], list[Any]]:
+    """
+    Serialize result set based on RESULTS_BACKEND_USE_MSGPACK config.
+
+    When msgpack is enabled, uses Apache Arrow IPC format for efficiency.
+    Otherwise, falls back to JSON-serializable records.
+
+    :param result_set: Query result set to serialize
+    :returns: Tuple of (serialized_data, columns)
+    """
+    from superset import results_backend_use_msgpack
+    from superset.dataframe import df_to_records
+
+    if results_backend_use_msgpack:
+        if has_app_context():
+            stats_logger = app.config["STATS_LOGGER"]
+            with stats_timing(
+                "sqllab.query.results_backend_pa_serialization", stats_logger
+            ):
+                data: bytes | list[Any] = write_ipc_buffer(
+                    result_set.pa_table
+                ).to_pybytes()
+        else:
+            data = write_ipc_buffer(result_set.pa_table).to_pybytes()
+    else:
+        df = result_set.to_pandas_df()
+        data = df_to_records(df) or []
+
+    return (data, result_set.columns)
+
+
+@celery_app.task(name="query_execution.execute_sql")
+def execute_sql_task(
+    query_id: int,
+    rendered_query: str,
+    username: str | None = None,
+    start_time: float | None = None,
+) -> dict[str, Any] | None:
+    """
+    Execute SQL query asynchronously via Celery.
+
+    This task is used by SQLExecutor.execute_async() to run queries
+    in background workers with full feature support.
+
+    :param query_id: ID of the Query model
+    :param rendered_query: Pre-rendered SQL query to execute
+    :param username: Username for context override
+    :param start_time: Query start time for timing metrics
+    :returns: Query result payload or None
+    """
+    with app.test_request_context():
+        with override_user(security_manager.find_user(username)):
+            try:
+                return _execute_sql_statements(
+                    query_id,
+                    rendered_query,
+                    start_time=start_time,
+                )
+            except Exception as ex:
+                logger.debug("Query %d: %s", query_id, ex)
+                stats_logger = app.config["STATS_LOGGER"]
+                stats_logger.incr("error_sqllab_unhandled")
+                query = _get_query(query_id=query_id)
+                return _handle_query_error(ex, query)
+
+
+def _make_check_stopped_fn(query: Query) -> Any:
+    """Create a function to check if query was stopped."""
+
+    def check_stopped() -> bool:
+        db.session.refresh(query)
+        return query.status == QueryStatus.STOPPED
+
+    return check_stopped
+
+
+def _make_execute_fn(query: Query, db_engine_spec: Any) -> Any:
+    """Create an execute function with stats timing."""
+
+    def execute_with_stats(cursor: Any, sql: str) -> None:
+        query.executed_sql = sql
+        stats_logger = app.config["STATS_LOGGER"]
+        with stats_timing("sqllab.query.time_executing_query", stats_logger):
+            db_engine_spec.execute_with_cursor(cursor, sql, query)
+
+    return execute_with_stats
+
+
+def _make_log_query_fn(database: Any) -> Any:
+    """Create a query logging function."""
+
+    def log_query(sql: str, schema: str | None) -> None:
+        if log_query_fn := app.config.get("QUERY_LOGGER"):
+            log_query_fn(
+                database.sqlalchemy_uri,
+                sql,
+                schema,
+                __name__,
+                security_manager,
+                None,
+            )
+
+    return log_query
+
+
+def _execute_sql_statements(
+    query_id: int,
+    rendered_query: str,
+    start_time: float | None,
+) -> dict[str, Any] | None:
+    """Execute SQL statements and store results."""
+    if start_time:
+        stats_logger = app.config["STATS_LOGGER"]
+        stats_logger.timing("sqllab.query.time_pending", now_as_float() - start_time)
+
+    query = _get_query(query_id=query_id)
+    payload: dict[str, Any] = {"query_id": query_id}
+    database = query.database
+    db_engine_spec = database.db_engine_spec
+    db_engine_spec.patch()
+
+    logger.info("Query %s: Set query to 'running'", str(query_id))
+    query.status = QueryStatus.RUNNING
+    query.start_running_time = now_as_float()
+    db.session.commit()  # pylint: disable=consider-using-transaction
+
+    parsed_script, blocks = _prepare_statement_blocks(rendered_query, db_engine_spec)
+
+    with database.get_raw_connection(
+        catalog=query.catalog,
+        schema=query.schema,
+    ) as conn:
+        cursor = conn.cursor()
+
+        cancel_query_id = db_engine_spec.get_cancel_query_id(cursor, query)
+        if cancel_query_id is not None:
+            query.set_extra_json_key(QUERY_CANCEL_KEY, cancel_query_id)
+            db.session.commit()  # pylint: disable=consider-using-transaction
+
+        try:
+            result_set = execute_sql_with_cursor(
+                database=database,
+                cursor=cursor,
+                statements=blocks,
+                query=query,
+                log_query_fn=_make_log_query_fn(database),
+                check_stopped_fn=_make_check_stopped_fn(query),
+                execute_fn=_make_execute_fn(query, db_engine_spec),
+            )
+        except SoftTimeLimitExceeded as ex:
+            query.status = QueryStatus.TIMED_OUT
+            logger.warning("Query %d: Time limit exceeded", query.id)
+            timeout_sec = app.config["SQLLAB_ASYNC_TIME_LIMIT_SEC"]
+            raise SupersetErrorException(
+                SupersetError(
+                    message=__(
+                        "The query was killed after %(sqllab_timeout)s 
seconds. "
+                        "It might be too complex, or the database might be "
+                        "under heavy load.",
+                        sqllab_timeout=timeout_sec,
+                    ),
+                    error_type=SupersetErrorType.SQLLAB_TIMEOUT_ERROR,
+                    level=ErrorLevel.ERROR,
+                )
+            ) from ex
+
+        # Check if stopped
+        if result_set is None:
+            payload.update({"status": QueryStatus.STOPPED})
+            return payload
+
+        # Commit for mutations
+        if parsed_script.has_mutation() or query.select_as_cta:
+            conn.commit()  # pylint: disable=consider-using-transaction
+
+    _finalize_successful_query(query, result_set, payload)
+
+    if results_backend:
+        _store_results_in_backend(query, payload, database)
+
+    if query.status != QueryStatus.FAILED:
+        query.status = QueryStatus.SUCCESS
+    db.session.commit()  # pylint: disable=consider-using-transaction
+
+    return None

Review Comment:
   **Suggestion:** Logic bug: `_execute_sql_statements` returns `None` on 
success instead of returning the assembled `payload`, so callers (including the 
Celery task) don't receive the query result payload; return `payload` at the 
end. [logic error]
   
   **Severity Level:** Minor ⚠️
   ```suggestion
       return payload
   ```
   <details>
   <summary><b>Prompt for AI Agent 🤖 </b></summary>
   
   ```mdx
   This is a comment left during a code review.
   
   **Path:** superset/sql/execution/celery_task.py
   **Line:** 434:434
   **Comment:**
        *Logic Error: Logic bug: `_execute_sql_statements` returns `None` on 
success instead of returning the assembled `payload`, so callers (including the 
Celery task) don't receive the query result payload; return `payload` at the 
end.
   
   Validate the correctness of the flagged issue. If correct, How can I resolve 
this? If you propose a fix, implement it and please make it concise.
   ```
   </details>
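
   With the suggested fix applied, the tail of `_execute_sql_statements` would read roughly as follows (a sketch based on the diff above, not the committed code; the body above the finalization steps is elided):
   
   ```python
   def _execute_sql_statements(
       query_id: int,
       rendered_query: str,
       start_time: float | None,
   ) -> dict[str, Any] | None:
       ...  # statement preparation and execution exactly as in the diff above

       _finalize_successful_query(query, result_set, payload)

       if results_backend:
           _store_results_in_backend(query, payload, database)

       if query.status != QueryStatus.FAILED:
           query.status = QueryStatus.SUCCESS
       db.session.commit()  # pylint: disable=consider-using-transaction

       # Return the assembled payload (was: return None) so execute_sql_task
       # and other callers actually receive the query results.
       return payload
   ```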



##########
tests/unit_tests/sql/execution/conftest.py:
##########
@@ -0,0 +1,309 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""
+Shared fixtures and helpers for SQL execution tests.
+
+This module provides common mocks, fixtures, and helper functions used across
+test_celery_task.py and test_executor.py to reduce code duplication.
+"""
+
+from contextlib import contextmanager
+from typing import Any
+from unittest.mock import MagicMock
+
+import pandas as pd
+import pytest
+from flask import current_app
+from pytest_mock import MockerFixture
+
+from superset.common.db_query_status import QueryStatus as QueryStatusEnum
+from superset.models.core import Database
+
+# =============================================================================
+# Core Fixtures
+# =============================================================================
+
+
[email protected](autouse=True)
+def mock_db_session(mocker: MockerFixture) -> MagicMock:
+    """Mock database session for all tests to avoid foreign key constraints."""
+    mock_session = MagicMock()
+    mocker.patch("superset.sql.execution.executor.db.session", mock_session)
+    mocker.patch("superset.sql.execution.celery_task.db.session", mock_session)
+    return mock_session
+
+
[email protected]
+def mock_query() -> MagicMock:
+    """Create a mock Query model."""
+    query = MagicMock()
+    query.id = 123
+    query.database_id = 1
+    query.sql = "SELECT * FROM users"
+    query.status = QueryStatusEnum.PENDING
+    query.error_message = None
+    query.progress = 0
+    query.end_time = None
+    query.start_running_time = None
+    query.executed_sql = None
+    query.tmp_table_name = None
+    query.catalog = None
+    query.schema = "public"
+    query.extra = {}
+    query.set_extra_json_key = MagicMock()
+    query.results_key = None
+    query.select_as_cta = False
+    query.rows = 0
+    query.to_dict = MagicMock(return_value={"id": 123})
+    query.database = MagicMock()
+    query.database.db_engine_spec.extract_errors.return_value = []
+    query.database.unique_name = "test_db"
+    query.database.cache_timeout = 300
+    return query
+
+
[email protected]
+def mock_database() -> MagicMock:
+    """Create a mock Database."""
+    database = MagicMock()
+    database.id = 1
+    database.unique_name = "test_db"
+    database.cache_timeout = 300
+    database.sqlalchemy_uri = "postgresql://localhost/test"
+    database.db_engine_spec = MagicMock()
+    database.db_engine_spec.engine = "postgresql"
+    database.db_engine_spec.run_multiple_statements_as_one = False
+    database.db_engine_spec.allows_sql_comments = True
+    database.db_engine_spec.extract_errors = MagicMock(return_value=[])
+    database.db_engine_spec.execute_with_cursor = MagicMock()
+    database.db_engine_spec.get_cancel_query_id = MagicMock(return_value=None)
+    database.db_engine_spec.patch = MagicMock()
+    database.db_engine_spec.fetch_data = MagicMock(return_value=[])
+    return database
+
+
[email protected]
+def mock_result_set() -> MagicMock:
+    """Create a mock SupersetResultSet."""
+    result_set = MagicMock()
+    result_set.size = 2
+    result_set.columns = [{"name": "id"}, {"name": "name"}]
+    result_set.pa_table = MagicMock()
+    result_set.to_pandas_df = MagicMock(
+        return_value=pd.DataFrame({"id": [1, 2], "name": ["Alice", "Bob"]})
+    )
+    return result_set
+
+
[email protected]
+def database() -> Database:
+    """Create a real test database instance."""
+    return Database(
+        id=1,
+        database_name="test_db",
+        sqlalchemy_uri="sqlite://",
+        allow_dml=False,
+    )
+
+
[email protected]
+def database_with_dml() -> Database:
+    """Create a real test database instance with DML allowed."""
+    return Database(
+        id=2,
+        database_name="test_db_dml",
+        sqlalchemy_uri="sqlite://",
+        allow_dml=True,
+    )
+
+
+# =============================================================================
+# Helper Functions for Mock Creation
+# =============================================================================
+
+
+def create_mock_cursor(
+    column_names: list[str],
+    data: list[tuple[Any, ...]] | None = None,
+) -> MagicMock:
+    """
+    Create a mock database cursor with column description.
+
+    Args:
+        column_names: List of column names
+        data: Optional data to return from fetchall()
+
+    Returns:
+        Configured MagicMock cursor
+    """
+    mock_cursor = MagicMock()
+    mock_cursor.description = [(name,) for name in column_names]
+    if data is not None:
+        mock_cursor.fetchall.return_value = data
+    return mock_cursor
+
+
+def create_mock_connection(mock_cursor: MagicMock | None = None) -> MagicMock:
+    """
+    Create a mock database connection.
+
+    Args:
+        mock_cursor: Optional cursor to return from cursor()
+
+    Returns:
+        Configured MagicMock connection with context manager support
+    """
+    if mock_cursor is None:
+        mock_cursor = create_mock_cursor([])
+
+    mock_conn = MagicMock()
+    mock_conn.cursor.return_value = mock_cursor
+    mock_conn.__enter__ = MagicMock(return_value=mock_conn)
+    mock_conn.__exit__ = MagicMock(return_value=False)

Review Comment:
   **Suggestion:** The mock connection lacks a `close()` attribute but 
production code sometimes wraps raw connections with contextlib.closing (which 
calls `close()`); add a `close` MagicMock to the mock connection so 
`closing(mock_conn)` works and tests don't fail when `close()` is invoked. 
[resource leak]
   
   **Severity Level:** Minor ⚠️
   ```suggestion
    # Provide context manager dunder methods and a close() method to match real connection behavior
       mock_conn.__enter__ = MagicMock(return_value=mock_conn)
       mock_conn.__exit__ = MagicMock(return_value=False)
       mock_conn.close = MagicMock()
   ```
   <details>
   <summary><b>Why it matters? ⭐ </b></summary>
   
   The production code sometimes wraps raw connections with contextlib.closing 
which will call close() on the object.
   The test helper currently creates a MagicMock connection that implements 
__enter__/__exit__ but lacks close(), which can lead to AttributeError if 
closing(...) is used in any patched path.
   Adding mock_conn.close = MagicMock() is a small, safe, and defensive change 
that better mirrors real connection behavior and prevents intermittent test 
failures.
   </details>
   <details>
   <summary><b>Prompt for AI Agent 🤖 </b></summary>
   
   ```mdx
   This is a comment left during a code review.
   
   **Path:** tests/unit_tests/sql/execution/conftest.py
   **Line:** 176:177
   **Comment:**
        *Resource Leak: The mock connection lacks a `close()` attribute but 
production code sometimes wraps raw connections with contextlib.closing (which 
calls `close()`); add a `close` MagicMock to the mock connection so 
`closing(mock_conn)` works and tests don't fail when `close()` is invoked.
   
   Validate the correctness of the flagged issue. If correct, How can I resolve 
this? If you propose a fix, implement it and please make it concise.
   ```
   </details>
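
   For illustration, a sketch of the helper with the suggested `close` mock and the `closing(...)` pattern the comment refers to (note: a bare `MagicMock` auto-creates attributes on access, so the explicit assignment mainly documents intent and lets tests assert on close calls; the helper name here is hypothetical):
   
   ```python
   from contextlib import closing
   from unittest.mock import MagicMock

   def create_mock_connection_with_close(mock_cursor=None):  # hypothetical name
       # Same shape as the conftest helper, plus an explicit close() mock so
       # closing(mock_conn) is visibly supported and close calls are assertable.
       mock_conn = MagicMock()
       mock_conn.cursor.return_value = mock_cursor or MagicMock()
       mock_conn.__enter__ = MagicMock(return_value=mock_conn)
       mock_conn.__exit__ = MagicMock(return_value=False)
       mock_conn.close = MagicMock()
       return mock_conn

   # closing() invokes close() on exit, mirroring the production wrapping.
   conn = create_mock_connection_with_close()
   with closing(conn):
       conn.cursor()
   conn.close.assert_called_once()
   ```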



##########
tests/unit_tests/sql/execution/test_executor.py:
##########
@@ -0,0 +1,2096 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""
+Tests for SQLExecutor.
+
+These tests cover the SQL execution API including:
+- Basic execution
+- DML handling
+- Jinja2 template rendering
+- CTAS/CVAS support
+- Security features (RLS, disallowed functions)
+- Result caching
+- Query model persistence
+- Async execution
+"""
+
+from unittest.mock import MagicMock
+
+import msgpack
+import pandas as pd
+import pytest
+from flask import current_app
+from pytest_mock import MockerFixture
+from superset_core.api.types import (
+    CacheOptions,
+    QueryOptions,
+    QueryStatus,
+)
+
+from superset.models.core import Database
+
+# Note: database, database_with_dml, mock_db_session fixtures and
+# mock_query_execution helper are imported from conftest.py
+from .conftest import mock_query_execution
+
+# =============================================================================
+# Basic Execution Tests
+# =============================================================================
+
+
+def test_execute_select_success(
+    mocker: MockerFixture, database: Database, app_context: None
+) -> None:
+    """Test successful SELECT query execution."""
+    mock_query_execution(
+        mocker,
+        database,
+        return_data=[(1, "Alice"), (2, "Bob")],
+        column_names=["id", "name"],
+    )
+    mocker.patch.dict(
+        current_app.config,
+        {
+            "SQL_QUERY_MUTATOR": None,
+            "SQLLAB_TIMEOUT": 30,
+            "SQL_MAX_ROW": None,
+            "QUERY_LOGGER": None,
+        },
+    )
+
+    result = database.execute("SELECT id, name FROM users")
+
+    assert result.status == QueryStatus.SUCCESS
+    assert result.data is not None
+    assert result.row_count == 2
+    assert result.error_message is None
+
+
+def test_execute_with_options(
+    mocker: MockerFixture, database: Database, app_context: None
+) -> None:
+    """Test query execution with custom options."""
+    mock_query_execution(mocker, database, return_data=[(100,)], column_names=["count"])
+    get_raw_conn_mock = mocker.patch.object(
+        database,
+        "get_raw_connection",
+        wraps=database.get_raw_connection,
+    )
+    mocker.patch.dict(
+        current_app.config,
+        {
+            "SQL_QUERY_MUTATOR": None,
+            "SQLLAB_TIMEOUT": 30,
+            "SQL_MAX_ROW": None,
+            "QUERY_LOGGER": None,
+        },
+    )
+
+    options = QueryOptions(catalog="main", schema="public", limit=50)
+    result = database.execute("SELECT COUNT(*) FROM users", options=options)
+
+    assert result.status == QueryStatus.SUCCESS
+    get_raw_conn_mock.assert_called_once()
+    call_kwargs = get_raw_conn_mock.call_args[1]
+    assert call_kwargs["catalog"] == "main"
+    assert call_kwargs["schema"] == "public"
+
+
+def test_execute_records_execution_time(
+    mocker: MockerFixture, database: Database, app_context: None
+) -> None:
+    """Test that execution time is recorded."""
+    mock_query_execution(mocker, database, return_data=[(1,)], column_names=["id"])
+    mocker.patch.dict(
+        current_app.config,
+        {
+            "SQL_QUERY_MUTATOR": None,
+            "SQLLAB_TIMEOUT": 30,
+            "SQL_MAX_ROW": None,
+            "QUERY_LOGGER": None,
+        },
+    )
+
+    result = database.execute("SELECT id FROM users")
+
+    assert result.status == QueryStatus.SUCCESS
+    assert result.execution_time_ms is not None
+    assert result.execution_time_ms >= 0
+
+
+def test_execute_creates_query_record(
+    mocker: MockerFixture,
+    database: Database,
+    app_context: None,
+    mock_db_session: MagicMock,
+) -> None:
+    """Test that execute creates a Query record for audit."""
+    from superset.sql.execution.executor import SQLExecutor
+
+    mock_query_execution(mocker, database, return_data=[(1,)], column_names=["id"])
+    mocker.patch.dict(
+        current_app.config,
+        {
+            "SQL_QUERY_MUTATOR": None,
+            "SQLLAB_TIMEOUT": 30,
+            "SQL_MAX_ROW": None,
+            "QUERY_LOGGER": None,
+        },
+    )
+
+    # Mock _create_query_record to return a mock query with ID
+    mock_query = MagicMock()
+    mock_query.id = 123
+    mock_create_query = mocker.patch.object(
+        SQLExecutor, "_create_query_record", return_value=mock_query
+    )
+
+    result = database.execute("SELECT id FROM users")
+
+    assert result.status == QueryStatus.SUCCESS
+    assert result.query_id == 123
+    mock_create_query.assert_called_once()
+
+
+# =============================================================================
+# DML Handling Tests
+# =============================================================================
+
+
+def test_execute_dml_without_permission(
+    mocker: MockerFixture, database: Database, app_context: None
+) -> None:
+    """Test that DML queries fail when database.allow_dml is False."""
+    mocker.patch.dict(
+        current_app.config,
+        {"SQL_QUERY_MUTATOR": None, "SQLLAB_TIMEOUT": 30},
+    )
+
+    result = database.execute("INSERT INTO users (name) VALUES ('test')")
+
+    assert result.status == QueryStatus.FAILED
+    assert result.error_message is not None
+    assert "DML queries are not allowed" in result.error_message
+
+
+def test_execute_dml_with_permission(
+    mocker: MockerFixture, database_with_dml: Database, app_context: None
+) -> None:
+    """Test that DML queries succeed when database.allow_dml is True."""
+    mock_query_execution(mocker, database_with_dml, return_data=[], column_names=[])
+    mocker.patch.dict(
+        current_app.config,
+        {
+            "SQL_QUERY_MUTATOR": None,
+            "SQLLAB_TIMEOUT": 30,
+            "SQL_MAX_ROW": None,
+            "QUERY_LOGGER": None,
+        },
+    )
+
+    result = database_with_dml.execute("INSERT INTO users (name) VALUES 
('test')")
+
+    assert result.status == QueryStatus.SUCCESS
+
+
+def test_execute_update_without_permission(
+    mocker: MockerFixture, database: Database, app_context: None
+) -> None:
+    """Test that UPDATE queries fail when database.allow_dml is False."""
+    mocker.patch.dict(
+        current_app.config,
+        {"SQL_QUERY_MUTATOR": None, "SQLLAB_TIMEOUT": 30},
+    )
+
+    result = database.execute("UPDATE users SET name = 'test' WHERE id = 1")
+
+    assert result.status == QueryStatus.FAILED
+    assert result.error_message is not None
+    assert "DML queries are not allowed" in result.error_message
+
+
+def test_execute_delete_without_permission(
+    mocker: MockerFixture, database: Database, app_context: None
+) -> None:
+    """Test that DELETE queries fail when database.allow_dml is False."""
+    mocker.patch.dict(
+        current_app.config,
+        {"SQL_QUERY_MUTATOR": None, "SQLLAB_TIMEOUT": 30},
+    )
+
+    result = database.execute("DELETE FROM users WHERE id = 1")
+
+    assert result.status == QueryStatus.FAILED
+    assert result.error_message is not None
+    assert "DML queries are not allowed" in result.error_message
+
+
+# =============================================================================
+# Jinja2 Template Rendering Tests
+# =============================================================================
+
+
+def test_execute_with_template_params(
+    mocker: MockerFixture, database: Database, app_context: None
+) -> None:
+    """Test query execution with Jinja2 template parameters."""
+    mock_query_execution(mocker, database, return_data=[(1,)], column_names=["id"])
+    mocker.patch.dict(
+        current_app.config,
+        {
+            "SQL_QUERY_MUTATOR": None,
+            "SQLLAB_TIMEOUT": 30,
+            "SQL_MAX_ROW": None,
+            "QUERY_LOGGER": None,
+        },
+    )
+
+    # Mock the template processor
+    mock_tp = MagicMock()
+    mock_tp.process_template.return_value = (
+        "SELECT * FROM events WHERE date > '2024-01-01'"
+    )
+    mocker.patch(
+        "superset.jinja_context.get_template_processor",
+        return_value=mock_tp,
+    )
+
+    options = QueryOptions(
+        template_params={"table": "events", "start_date": "2024-01-01"}
+    )
+    result = database.execute(
+        "SELECT * FROM {{ table }} WHERE date > '{{ start_date }}'",
+        options=options,
+    )
+
+    assert result.status == QueryStatus.SUCCESS
+    mock_tp.process_template.assert_called_once()
+
+
+def test_execute_without_template_params_no_rendering(
+    mocker: MockerFixture, database: Database, app_context: None
+) -> None:
+    """Test that template rendering is skipped when no params provided."""
+    mock_query_execution(mocker, database, return_data=[(1,)], column_names=["id"])
+    mocker.patch.dict(
+        current_app.config,
+        {
+            "SQL_QUERY_MUTATOR": None,
+            "SQLLAB_TIMEOUT": 30,
+            "SQL_MAX_ROW": None,
+            "QUERY_LOGGER": None,
+        },
+    )
+
+    mock_get_tp = mocker.patch("superset.jinja_context.get_template_processor")
+
+    result = database.execute("SELECT * FROM users")
+
+    assert result.status == QueryStatus.SUCCESS
+    mock_get_tp.assert_not_called()
+
+
+# =============================================================================
+# Disallowed Functions Tests
+# =============================================================================
+
+
+def test_execute_disallowed_functions(
+    mocker: MockerFixture, database: Database, app_context: None
+) -> None:
+    """Test that disallowed SQL functions are blocked."""
+    mocker.patch.dict(
+        current_app.config,
+        {
+            "SQL_QUERY_MUTATOR": None,
+            "SQLLAB_TIMEOUT": 30,
+            "DISALLOWED_SQL_FUNCTIONS": {"sqlite": {"LOAD_EXTENSION"}},
+        },
+    )
+
+    result = database.execute("SELECT LOAD_EXTENSION('malicious.so')")
+
+    assert result.status == QueryStatus.FAILED
+    assert result.error_message is not None
+    assert "Disallowed SQL functions" in result.error_message
+
+
+def test_execute_allowed_functions(
+    mocker: MockerFixture, database: Database, app_context: None
+) -> None:
+    """Test that allowed SQL functions work normally."""
+    mock_query_execution(mocker, database, return_data=[(5,)], column_names=["count"])
+    mocker.patch.dict(
+        current_app.config,
+        {
+            "SQL_QUERY_MUTATOR": None,
+            "SQLLAB_TIMEOUT": 30,
+            "SQL_MAX_ROW": None,
+            "DISALLOWED_SQL_FUNCTIONS": {"sqlite": {"LOAD_EXTENSION"}},
+            "QUERY_LOGGER": None,
+        },
+    )
+
+    result = database.execute("SELECT COUNT(*) FROM users")
+
+    assert result.status == QueryStatus.SUCCESS
+
+
+# =============================================================================
+# Row-Level Security Tests
+# =============================================================================
+
+
+def test_execute_rls_applied(
+    mocker: MockerFixture, database: Database, app_context: None
+) -> None:
+    """Test that RLS is always applied."""
+    from superset.sql.execution.executor import SQLExecutor
+
+    mock_query_execution(mocker, database, return_data=[(1,)], column_names=["id"])
+    mocker.patch.dict(
+        current_app.config,
+        {
+            "SQL_QUERY_MUTATOR": None,
+            "SQLLAB_TIMEOUT": 30,
+            "SQL_MAX_ROW": None,
+            "QUERY_LOGGER": None,
+        },
+    )
+
+    # Mock _apply_rls_to_script to verify it's always called
+    mock_apply_rls = mocker.patch.object(SQLExecutor, "_apply_rls_to_script")
+
+    result = database.execute("SELECT * FROM users")
+
+    assert result.status == QueryStatus.SUCCESS
+    mock_apply_rls.assert_called()
+
+
+def test_execute_rls_always_applied(
+    mocker: MockerFixture, database: Database, app_context: None
+) -> None:
+    """Test that RLS is always applied."""
+    from superset.sql.execution.executor import SQLExecutor
+
+    mock_query_execution(mocker, database, return_data=[(1,)], column_names=["id"])
+    mocker.patch.dict(
+        current_app.config,
+        {
+            "SQL_QUERY_MUTATOR": None,
+            "SQLLAB_TIMEOUT": 30,
+            "SQL_MAX_ROW": None,
+            "QUERY_LOGGER": None,
+        },
+    )
+
+    # Mock _apply_rls_to_script to verify it's always called
+    mock_apply_rls = mocker.patch.object(SQLExecutor, "_apply_rls_to_script")
+
+    result = database.execute("SELECT * FROM users")
+
+    assert result.status == QueryStatus.SUCCESS
+    mock_apply_rls.assert_called()

Review Comment:
   **Suggestion:** Duplicate test: `test_execute_rls_always_applied` is an 
exact duplicate of `test_execute_rls_applied`; keeping both is redundant and 
increases maintenance burden and test runtime without benefit — remove the 
duplicate test. [code duplication]
   
   **Severity Level:** Minor ⚠️
   ```suggestion
   # Duplicate test removed: `test_execute_rls_always_applied` was identical to
   # `test_execute_rls_applied` and has been removed to avoid redundancy.
   ```
   <details>
   <summary><b>Why it matters? ⭐ </b></summary>
   
   The two RLS tests in the file are functionally identical (same setup, same 
assertions).
   Removing the duplicate reduces test bloat and maintenance burden without 
changing behavior.
   This does not fix a bug but is a valid cleanup.
   </details>
   <details>
   <summary><b>Prompt for AI Agent 🤖 </b></summary>
   
   ```mdx
   This is a comment left during a code review.
   
   **Path:** tests/unit_tests/sql/execution/test_executor.py
   **Line:** 385:408
   **Comment:**
        *Code Duplication: Duplicate test: `test_execute_rls_always_applied` is 
an exact duplicate of `test_execute_rls_applied`; keeping both is redundant and 
increases maintenance burden and test runtime without benefit — remove the 
duplicate test.
   
   Validate the correctness of the flagged issue. If correct, How can I resolve 
this? If you propose a fix, implement it and please make it concise.
   ```
   </details>
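
   If distinct RLS scenarios are wanted later, one option is to parametrize a single test rather than duplicating it (a sketch reusing the fixtures and `mock_query_execution` helper defined in the PR's conftest):
   
   ```python
   import pytest
   from flask import current_app
   from pytest_mock import MockerFixture

   from superset.models.core import Database
   from superset_core.api.types import QueryStatus

   from .conftest import mock_query_execution

   @pytest.mark.parametrize("sql", ["SELECT * FROM users", "SELECT id FROM users"])
   def test_execute_rls_applied(
       mocker: MockerFixture, database: Database, app_context: None, sql: str
   ) -> None:
       """One parametrized RLS test instead of two identical bodies."""
       from superset.sql.execution.executor import SQLExecutor

       mock_query_execution(mocker, database, return_data=[(1,)], column_names=["id"])
       mocker.patch.dict(
           current_app.config,
           {
               "SQL_QUERY_MUTATOR": None,
               "SQLLAB_TIMEOUT": 30,
               "SQL_MAX_ROW": None,
               "QUERY_LOGGER": None,
           },
       )
       mock_apply_rls = mocker.patch.object(SQLExecutor, "_apply_rls_to_script")

       result = database.execute(sql)

       assert result.status == QueryStatus.SUCCESS
       mock_apply_rls.assert_called()
   ```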



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

