codeant-ai-for-open-source[bot] commented on code in PR #37200:
URL: https://github.com/apache/superset/pull/37200#discussion_r2697276242


##########
superset/mcp_service/middleware.py:
##########
@@ -755,3 +755,150 @@ def _filter_response(self, response: Any, object_type: str, user: Any) -> Any:
                 "Unknown response type for field filtering: %s", type(response)
             )
             return response
+
+
+class ResponseSizeGuardMiddleware(Middleware):
+    """
+    Middleware that prevents oversized responses from overwhelming LLM clients.
+
+    When a tool response exceeds the configured token limit, this middleware
+    intercepts it and returns a helpful error message with suggestions for
+    reducing the response size.
+
+    This is critical for protecting LLM clients like Claude Desktop which can
+    crash or become unresponsive when receiving extremely large responses.
+
+    Configuration via MCP_RESPONSE_SIZE_CONFIG in superset_config.py:
+    - enabled: Toggle the guard on/off (default: True)
+    - token_limit: Maximum estimated tokens per response (default: 25,000)
+    - warn_threshold_pct: Log warnings above this % of limit (default: 80%)
+    - excluded_tools: Tools to skip checking
+    """
+
+    def __init__(
+        self,
+        token_limit: int = 25000,
+        warn_threshold_pct: int = 80,
+        excluded_tools: list[str] | None = None,
+    ) -> None:
+        self.token_limit = token_limit
+        self.warn_threshold_pct = warn_threshold_pct
+        self.warn_threshold = int(token_limit * warn_threshold_pct / 100)
+        self.excluded_tools = set(excluded_tools or [])
+
+    async def on_call_tool(
+        self,
+        context: MiddlewareContext,
+        call_next: Callable[[MiddlewareContext], Awaitable[Any]],
+    ) -> Any:
+        """Check response size after tool execution."""
+        tool_name = getattr(context.message, "name", "unknown")
+
+        # Skip excluded tools
+        if tool_name in self.excluded_tools:
+            return await call_next(context)
+
+        # Execute the tool
+        response = await call_next(context)
+
+        # Estimate response token count
+        from superset.mcp_service.utils.token_utils import (
+            estimate_response_tokens,
+            format_size_limit_error,
+        )
+
+        estimated_tokens = estimate_response_tokens(response)
+
+        # Log warning if approaching limit
+        if estimated_tokens > self.warn_threshold:
+            logger.warning(
+                "Response size warning for %s: ~%d tokens (%.0f%% of %d 
limit)",
+                tool_name,
+                estimated_tokens,
+                (estimated_tokens / self.token_limit) * 100,
+                self.token_limit,
+            )
+
+        # Block if over limit
+        if estimated_tokens > self.token_limit:
+            # Extract params for smart suggestions
+            params = getattr(context.message, "params", {}) or {}
+
+            # Log the blocked response
+            logger.error(
+                "Response blocked for %s: ~%d tokens exceeds limit of %d",
+                tool_name,
+                estimated_tokens,
+                self.token_limit,
+            )
+
+            # Log to event logger for monitoring
+            try:
+                user_id = get_user_id()
+                event_logger.log(
+                    user_id=user_id,
+                    action="mcp_response_size_exceeded",
+                    curated_payload={
+                        "tool": tool_name,
+                        "estimated_tokens": estimated_tokens,
+                        "token_limit": self.token_limit,
+                        "params": params,
+                    },
+                )
+            except Exception as log_error:
+                logger.warning("Failed to log size exceeded event: %s", 
log_error)
+
+            # Generate helpful error message with suggestions
+            error_message = format_size_limit_error(
+                tool_name=tool_name,
+                params=params,
+                estimated_tokens=estimated_tokens,
+                token_limit=self.token_limit,
+                response=response,

Review Comment:
   **Suggestion:** The middleware serializes the entire response to estimate 
tokens and then passes the full response into the error formatter; for very 
large responses this double serialization can OOM or exhaust memory/CPU. Wrap 
the estimation in a safe try/except (treat failures as "over limit") and avoid 
passing the full response into the formatter (pass None or a small summary) to 
prevent large-memory operations. [performance]
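   
   For illustration, a bounded estimator along these lines would avoid materializing the full serialized payload at all. This is a minimal sketch assuming the ~3.5 chars/token heuristic referenced in this PR's tests; `estimate_tokens_bounded` is a hypothetical name, not a function in `token_utils`:
   
   ```python
   import json
   
   def estimate_tokens_bounded(response, char_budget: int, chars_per_token: float = 3.5) -> int:
       """Estimate tokens without ever materializing the full serialized payload.
   
       iterencode() yields the JSON output in chunks, so serialization stops
       as soon as the character budget is exceeded. Anything over budget (or
       unserializable) is reported as budget + 1, i.e. "treat as over limit".
       """
       over_limit = int(char_budget / chars_per_token) + 1
       seen = 0
       try:
           for chunk in json.JSONEncoder(default=str).iterencode(response):
               seen += len(chunk)
               if seen > char_budget:
                   return over_limit  # bail out early; no second pass needed
       except (TypeError, ValueError):  # e.g. circular references
           return over_limit  # conservative: block rather than risk an OOM
       return int(seen / chars_per_token)
   
   # Example: a dict that would serialize to ~40k chars is flagged without
   # ever holding the full 40k-char string in memory at once.
   big = {"rows": ["x" * 100 for _ in range(400)]}
   print(estimate_tokens_bounded(big, char_budget=10_000))
   ```
   
   A guard built on this could then pass only a one-line summary (tool name plus estimated size) to the error formatter instead of the full response.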
   
   <details>
   <summary><b>Severity Level:</b> Critical 🚨</summary>
   
   ```mdx
   - ❌ MCP tool calls can OOM on very large responses.
   - ⚠️ Middleware may crash instead of blocking oversized output.
   - ⚠️ Affected tools: list_charts, get_chart_data, list_datasets.
   ```
   </details>
   
   ```suggestion
           # Estimate response token count (guard against huge responses causing OOM)
           from superset.mcp_service.utils.token_utils import (
               estimate_response_tokens,
               format_size_limit_error,
           )
   
           try:
               estimated_tokens = estimate_response_tokens(response)
           except MemoryError as me:
               logger.warning(
                   "MemoryError while estimating tokens for %s: %s", tool_name, 
me
               )
               # Treat as over the limit to be safe without attempting further serialization
               estimated_tokens = self.token_limit + 1
           except Exception as e:
               logger.warning(
                   "Failed to estimate response tokens for %s: %s", tool_name, e
               )
               # Conservative fallback: assume it's over the limit so we block rather than risk OOM
               estimated_tokens = self.token_limit + 1
   
           # Log warning if approaching limit
           if estimated_tokens > self.warn_threshold:
               logger.warning(
                   "Response size warning for %s: ~%d tokens (%.0f%% of %d 
limit)",
                   tool_name,
                   estimated_tokens,
                   (estimated_tokens / self.token_limit) * 100,
                   self.token_limit,
               )
   
           # Block if over limit
           if estimated_tokens > self.token_limit:
               # Extract params for smart suggestions
               params = getattr(context.message, "params", {}) or {}
   
               # Log the blocked response
               logger.error(
                   "Response blocked for %s: ~%d tokens exceeds limit of %d",
                   tool_name,
                   estimated_tokens,
                   self.token_limit,
               )
   
               # Log to event logger for monitoring
               try:
                   user_id = get_user_id()
                   event_logger.log(
                       user_id=user_id,
                       action="mcp_response_size_exceeded",
                       curated_payload={
                           "tool": tool_name,
                           "estimated_tokens": estimated_tokens,
                           "token_limit": self.token_limit,
                           "params": params,
                       },
                   )
               except Exception as log_error:
                   logger.warning("Failed to log size exceeded event: %s", 
log_error)
   
               # Generate helpful error message with suggestions
               # Avoid passing the full `response` (which may be huge) into the formatter.
               error_message = format_size_limit_error(
                   tool_name=tool_name,
                   params=params,
                   estimated_tokens=estimated_tokens,
                   token_limit=self.token_limit,
                   response=None,
   ```
   <details>
   <summary><b>Steps of Reproduction ✅ </b></summary>
   
   ```mdx
   1. Invoke an MCP tool that returns a very large payload (e.g., a tool that 
lists many
   charts). The middleware entry point is 
ResponseSizeGuardMiddleware.on_call_tool at
   superset/mcp_service/middleware.py:789-863. The middleware executes the tool 
and receives
   `response` at superset/mcp_service/middleware.py:801-802.
   
   2. The middleware calls estimate_response_tokens at
   superset/mcp_service/middleware.py:804-810; that function dispatches to
   superset/mcp_service/utils/token_utils.py:59-89 which uses json.dumps(...) 
to convert
   large responses to strings for token estimation.
   
   3. For extremely large responses (for example, listing 100+ charts with big 
JSON fields),
   json.dumps can raise MemoryError or consume excessive memory during 
serialization inside
   token_utils. The current code at superset/mcp_service/middleware.py:810 
directly assigns
   estimated_tokens = estimate_response_tokens(response) with no try/except, so 
that
   MemoryError propagates.
   
   4. If estimation succeeds but the response is huge, the middleware then calls
   format_size_limit_error with the full `response` argument at
   superset/mcp_service/middleware.py:852-858; format_size_limit_error
   (superset/mcp_service/utils/token_utils.py:358-402) may analyze/serialize 
the `response`
   again, causing a second heavy memory operation. This can OOM/crash the 
request handler and
   destabilize the MCP worker.
   ```
   </details>
   <details>
   <summary><b>Prompt for AI Agent 🤖 </b></summary>
   
   ```mdx
   This is a comment left during a code review.
   
   **Path:** superset/mcp_service/middleware.py
   **Line:** 804:857
   **Comment:**
        *Performance: The middleware serializes the entire response to estimate 
tokens and then passes the full response into the error formatter; for very 
large responses this double serialization can OOM or exhaust memory/CPU. Wrap 
the estimation in a safe try/except (treat failures as "over limit") and avoid 
passing the full response into the formatter (pass None or a small summary) to 
prevent large-memory operations.
   
   Validate the correctness of the flagged issue. If correct, How can I resolve 
this? If you propose a fix, implement it and please make it concise.
   ```
   </details>



##########
tests/unit_tests/mcp_service/test_middleware.py:
##########
@@ -0,0 +1,336 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""
+Unit tests for MCP service middleware.
+"""
+
+from unittest.mock import AsyncMock, MagicMock, patch
+
+import pytest
+from fastmcp.exceptions import ToolError
+
+from superset.mcp_service.middleware import (
+    create_response_size_guard_middleware,
+    ResponseSizeGuardMiddleware,
+)
+
+
+class TestResponseSizeGuardMiddleware:
+    """Test ResponseSizeGuardMiddleware class."""
+
+    def test_init_default_values(self):
+        """Should initialize with default values."""
+        middleware = ResponseSizeGuardMiddleware()
+        assert middleware.token_limit == 25000
+        assert middleware.warn_threshold_pct == 80
+        assert middleware.warn_threshold == 20000
+        assert middleware.excluded_tools == set()
+
+    def test_init_custom_values(self):
+        """Should initialize with custom values."""
+        middleware = ResponseSizeGuardMiddleware(
+            token_limit=10000,
+            warn_threshold_pct=70,
+            excluded_tools=["health_check", "get_chart_preview"],
+        )
+        assert middleware.token_limit == 10000
+        assert middleware.warn_threshold_pct == 70
+        assert middleware.warn_threshold == 7000
+        assert middleware.excluded_tools == {"health_check", "get_chart_preview"}
+
+    @pytest.mark.asyncio
+    async def test_allows_small_response(self):
+        """Should allow responses under token limit."""
+        middleware = ResponseSizeGuardMiddleware(token_limit=25000)
+
+        # Create mock context
+        context = MagicMock()
+        context.message.name = "list_charts"
+        context.message.params = {}
+
+        # Create mock call_next that returns small response
+        small_response = {"charts": [{"id": 1, "name": "test"}]}
+        call_next = AsyncMock(return_value=small_response)
+
+        with (
+            patch("superset.mcp_service.middleware.get_user_id", 
return_value=1),
+            patch("superset.mcp_service.middleware.event_logger"),

Review Comment:
   **Suggestion:** Parentheses around multiple context managers are parsed as a
single tuple in the with statement on interpreters older than Python 3.10,
which raises a TypeError because a tuple does not implement the context
manager protocol (Python 3.10+ parses this form as grouped context managers
under PEP 617). If older interpreters must be supported, change the with to
comma-separated context managers or nested with blocks. [possible bug]
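   
   For context, a runnable check of the version dependence (standalone sketch using `contextlib.nullcontext`, unrelated to Superset):
   
   ```python
   from contextlib import nullcontext
   
   # On Python >= 3.10 this parses as two grouped context managers (PEP 617);
   # on older interpreters the parenthesized expression is a tuple, and
   # entering it fails at runtime ("does not support the context manager protocol").
   with (
       nullcontext(),
       nullcontext(),
   ):
       print("entered both")
   
   # Portable spelling that behaves identically on all supported versions:
   with nullcontext(), nullcontext():
       print("entered both")
   ```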
   
   <details>
   <summary><b>Severity Level:</b> Critical 🚨</summary>
   
   ```mdx
   - ❌ Unit test failure blocks CI and PR merges.
   - ⚠️ Test coverage for middleware not executed.
   - ❌ Local developer tests fail when running suite.
   ```
   </details>
   
   ```suggestion
           with patch("superset.mcp_service.middleware.get_user_id", 
return_value=1), \
                patch("superset.mcp_service.middleware.event_logger"):
   ```
   <details>
   <summary><b>Steps of Reproduction ✅ </b></summary>
   
   ```mdx
   1. Run the specific test: pytest
   
tests/unit_tests/mcp_service/test_middleware.py::TestResponseSizeGuardMiddleware::test_allows_small_response
   
      — Test file path: tests/unit_tests/mcp_service/test_middleware.py
   
      — Test function defined around lines 36-78; the with tuple appears at 
lines 70-73.
   
   2. Pytest executes 
TestResponseSizeGuardMiddleware.test_allows_small_response and reaches
   the with statement at line 70:
   
      file tests/unit_tests/mcp_service/test_middleware.py:70 ("with (").
   
   3. Python evaluates the parenthesized expression (patch(...), patch(...)) 
producing a
   tuple,
   
      then attempts to treat that tuple as a context manager and call __enter__ 
on it.
   
   4. Observe runtime failure: TypeError raised with message similar to 
"'tuple' object does
   not support the context manager protocol", causing the test (and CI) to fail 
before
   asserting behavior.
   ```
   </details>
   <details>
   <summary><b>Prompt for AI Agent 🤖 </b></summary>
   
   ```mdx
   This is a comment left during a code review.
   
   **Path:** tests/unit_tests/mcp_service/test_middleware.py
   **Line:** 70:72
   **Comment:**
        *Possible Bug: Incorrect use of parentheses around multiple context 
managers creates a single tuple in the with statement, which will raise a 
TypeError because a tuple does not implement the context manager protocol; 
change the with to use multiple context managers (comma-separated) or nested 
with blocks.
   
   Validate the correctness of the flagged issue. If correct, How can I resolve 
this? If you propose a fix, implement it and please make it concise.
   ```
   </details>



##########
tests/unit_tests/mcp_service/test_middleware.py:
##########
@@ -0,0 +1,336 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""
+Unit tests for MCP service middleware.
+"""
+
+from unittest.mock import AsyncMock, MagicMock, patch
+
+import pytest
+from fastmcp.exceptions import ToolError
+
+from superset.mcp_service.middleware import (
+    create_response_size_guard_middleware,
+    ResponseSizeGuardMiddleware,
+)
+
+
+class TestResponseSizeGuardMiddleware:
+    """Test ResponseSizeGuardMiddleware class."""
+
+    def test_init_default_values(self):
+        """Should initialize with default values."""
+        middleware = ResponseSizeGuardMiddleware()
+        assert middleware.token_limit == 25000
+        assert middleware.warn_threshold_pct == 80
+        assert middleware.warn_threshold == 20000
+        assert middleware.excluded_tools == set()
+
+    def test_init_custom_values(self):
+        """Should initialize with custom values."""
+        middleware = ResponseSizeGuardMiddleware(
+            token_limit=10000,
+            warn_threshold_pct=70,
+            excluded_tools=["health_check", "get_chart_preview"],
+        )
+        assert middleware.token_limit == 10000
+        assert middleware.warn_threshold_pct == 70
+        assert middleware.warn_threshold == 7000
+        assert middleware.excluded_tools == {"health_check", "get_chart_preview"}
+
+    @pytest.mark.asyncio
+    async def test_allows_small_response(self):
+        """Should allow responses under token limit."""
+        middleware = ResponseSizeGuardMiddleware(token_limit=25000)
+
+        # Create mock context
+        context = MagicMock()
+        context.message.name = "list_charts"
+        context.message.params = {}
+
+        # Create mock call_next that returns small response
+        small_response = {"charts": [{"id": 1, "name": "test"}]}
+        call_next = AsyncMock(return_value=small_response)
+
+        with (
+            patch("superset.mcp_service.middleware.get_user_id", 
return_value=1),
+            patch("superset.mcp_service.middleware.event_logger"),
+        ):
+            result = await middleware.on_call_tool(context, call_next)
+
+        assert result == small_response
+        call_next.assert_called_once_with(context)
+
+    @pytest.mark.asyncio
+    async def test_blocks_large_response(self):
+        """Should block responses over token limit."""
+        middleware = ResponseSizeGuardMiddleware(token_limit=100)  # Very low limit
+
+        # Create mock context
+        context = MagicMock()
+        context.message.name = "list_charts"
+        context.message.params = {"page_size": 100}
+
+        # Create large response
+        large_response = {
+            "charts": [{"id": i, "name": f"chart_{i}"} for i in range(1000)]
+        }
+        call_next = AsyncMock(return_value=large_response)
+
+        with (
+            patch("superset.mcp_service.middleware.get_user_id", 
return_value=1),
+            patch("superset.mcp_service.middleware.event_logger"),
+        ):

Review Comment:
   **Suggestion:** The with statement uses a parenthesized group of patch
context managers, which interpreters older than Python 3.10 evaluate as a
tuple (not a valid context manager); if those versions must be supported,
replace it with a comma-separated with (or nested with) so each patch is
entered and exited correctly. [possible bug]
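   
   The error-path assertion pattern the test is after, as a standalone sketch (the `ToolError` here is a local stand-in for `fastmcp.exceptions.ToolError`, and `guarded_call` is hypothetical):
   
   ```python
   import pytest
   
   class ToolError(Exception):
       """Local stand-in for fastmcp.exceptions.ToolError."""
   
   def guarded_call(estimated_tokens: int, limit: int) -> str:
       # Mirrors the middleware's blocking branch.
       if estimated_tokens > limit:
           raise ToolError(f"Response too large: ~{estimated_tokens} tokens (limit {limit})")
       return "ok"
   
   def test_blocks_when_over_limit():
       with pytest.raises(ToolError) as exc_info:
           guarded_call(5000, 100)
       assert "Response too large" in str(exc_info.value)
   ```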
   
   <details>
   <summary><b>Severity Level:</b> Critical 🚨</summary>
   
   ```mdx
   - ❌ Unit test raises TypeError instead of asserting ToolError.
   - ❌ CI job fails on test suite run.
   - ⚠️ Middleware error-path assertions not validated.
   ```
   </details>
   
   ```suggestion
           with patch("superset.mcp_service.middleware.get_user_id", 
return_value=1), \
                patch("superset.mcp_service.middleware.event_logger"):
   ```
   <details>
   <summary><b>Steps of Reproduction ✅ </b></summary>
   
   ```mdx
   1. Run the specific failing test: pytest
   
tests/unit_tests/mcp_service/test_middleware.py::TestResponseSizeGuardMiddleware::test_blocks_large_response
   
      — Test function located around lines 56-106; the parenthesized with is at 
lines 95-100.
   
   2. Execution reaches the outer with at line 95 which uses parentheses:
   tests/unit_tests/mcp_service/test_middleware.py:95 ("with (").
   
   3. Python evaluates the tuple of patch() calls and tries to use the 
resulting tuple as a
   context manager.
   
   4. Expect a TypeError ("'tuple' object does not support the context manager 
protocol")
   before the inner pytest.raises block runs, causing the test to error instead 
of asserting
   ToolError.
   ```
   </details>
   <details>
   <summary><b>Prompt for AI Agent 🤖 </b></summary>
   
   ```mdx
   This is a comment left during a code review.
   
   **Path:** tests/unit_tests/mcp_service/test_middleware.py
   **Line:** 95:98
   **Comment:**
        *Possible Bug: The with statement is using a parenthesized tuple of 
patch context managers (invalid); replace it with a comma-separated with (or 
nested with) so each patch is used as a proper context manager and the test can 
enter/exit them correctly.
   
   Validate the correctness of the flagged issue. If correct, How can I resolve 
this? If you propose a fix, implement it and please make it concise.
   ```
   </details>



##########
tests/unit_tests/mcp_service/test_middleware.py:
##########
@@ -0,0 +1,336 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""
+Unit tests for MCP service middleware.
+"""
+
+from unittest.mock import AsyncMock, MagicMock, patch
+
+import pytest
+from fastmcp.exceptions import ToolError
+
+from superset.mcp_service.middleware import (
+    create_response_size_guard_middleware,
+    ResponseSizeGuardMiddleware,
+)
+
+
+class TestResponseSizeGuardMiddleware:
+    """Test ResponseSizeGuardMiddleware class."""
+
+    def test_init_default_values(self):
+        """Should initialize with default values."""
+        middleware = ResponseSizeGuardMiddleware()
+        assert middleware.token_limit == 25000
+        assert middleware.warn_threshold_pct == 80
+        assert middleware.warn_threshold == 20000
+        assert middleware.excluded_tools == set()
+
+    def test_init_custom_values(self):
+        """Should initialize with custom values."""
+        middleware = ResponseSizeGuardMiddleware(
+            token_limit=10000,
+            warn_threshold_pct=70,
+            excluded_tools=["health_check", "get_chart_preview"],
+        )
+        assert middleware.token_limit == 10000
+        assert middleware.warn_threshold_pct == 70
+        assert middleware.warn_threshold == 7000
+        assert middleware.excluded_tools == {"health_check", "get_chart_preview"}
+
+    @pytest.mark.asyncio
+    async def test_allows_small_response(self):
+        """Should allow responses under token limit."""
+        middleware = ResponseSizeGuardMiddleware(token_limit=25000)
+
+        # Create mock context
+        context = MagicMock()
+        context.message.name = "list_charts"
+        context.message.params = {}
+
+        # Create mock call_next that returns small response
+        small_response = {"charts": [{"id": 1, "name": "test"}]}
+        call_next = AsyncMock(return_value=small_response)
+
+        with (
+            patch("superset.mcp_service.middleware.get_user_id", 
return_value=1),
+            patch("superset.mcp_service.middleware.event_logger"),
+        ):
+            result = await middleware.on_call_tool(context, call_next)
+
+        assert result == small_response
+        call_next.assert_called_once_with(context)
+
+    @pytest.mark.asyncio
+    async def test_blocks_large_response(self):
+        """Should block responses over token limit."""
+        middleware = ResponseSizeGuardMiddleware(token_limit=100)  # Very low limit
+
+        # Create mock context
+        context = MagicMock()
+        context.message.name = "list_charts"
+        context.message.params = {"page_size": 100}
+
+        # Create large response
+        large_response = {
+            "charts": [{"id": i, "name": f"chart_{i}"} for i in range(1000)]
+        }
+        call_next = AsyncMock(return_value=large_response)
+
+        with (
+            patch("superset.mcp_service.middleware.get_user_id", 
return_value=1),
+            patch("superset.mcp_service.middleware.event_logger"),
+        ):
+            with pytest.raises(ToolError) as exc_info:
+                await middleware.on_call_tool(context, call_next)
+
+        # Verify error contains helpful information
+        error_message = str(exc_info.value)
+        assert "Response too large" in error_message
+        assert "limit" in error_message.lower()
+
+    @pytest.mark.asyncio
+    async def test_skips_excluded_tools(self):
+        """Should skip checking for excluded tools."""
+        middleware = ResponseSizeGuardMiddleware(
+            token_limit=100, excluded_tools=["health_check"]
+        )
+
+        # Create mock context for excluded tool
+        context = MagicMock()
+        context.message.name = "health_check"
+        context.message.params = {}
+
+        # Create response that would exceed limit
+        large_response = {"data": "x" * 10000}
+        call_next = AsyncMock(return_value=large_response)
+
+        # Should not raise even though response exceeds limit
+        result = await middleware.on_call_tool(context, call_next)
+        assert result == large_response
+
+    @pytest.mark.asyncio
+    async def test_logs_warning_at_threshold(self):
+        """Should log warning when approaching limit."""
+        middleware = ResponseSizeGuardMiddleware(
+            token_limit=1000, warn_threshold_pct=80
+        )
+
+        context = MagicMock()
+        context.message.name = "list_charts"
+        context.message.params = {}
+
+        # Response at ~85% of limit (should trigger warning but not block)
+        response = {"data": "x" * 2900}  # ~828 tokens at 3.5 chars/token
+        call_next = AsyncMock(return_value=response)
+
+        with (
+            patch("superset.mcp_service.middleware.get_user_id", 
return_value=1),
+            patch("superset.mcp_service.middleware.event_logger"),
+            patch("superset.mcp_service.middleware.logger") as mock_logger,
+        ):

Review Comment:
   **Suggestion:** A three-item parenthesized with is used, which interpreters
older than Python 3.10 evaluate as a tuple and fail on at runtime; if those
versions must be supported, change to a comma-separated with so all three
patches (including the one bound to mock_logger) are entered and the test can
assert on the mocked logger. [possible bug]
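   
   Once the patches are entered, the intended assertion pattern looks like this standalone sketch (`warn_if_large` is hypothetical and only mirrors the middleware's warning branch):
   
   ```python
   import logging
   from unittest.mock import patch
   
   logger = logging.getLogger("size_guard_demo")
   
   def warn_if_large(estimated_tokens: int, limit: int) -> None:
       # Warn above 80% of the limit, as the middleware does.
       if estimated_tokens > int(limit * 0.8):
           logger.warning("Response size warning: ~%d tokens", estimated_tokens)
   
   with patch.object(logger, "warning") as mock_warning:
       warn_if_large(850, 1000)
   
   mock_warning.assert_called_once()
   assert "Response size warning" in mock_warning.call_args.args[0]
   ```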
   
   <details>
   <summary><b>Severity Level:</b> Critical 🚨</summary>
   
   ```mdx
   - ❌ Test errors prevent assertion of logger warnings.
   - ❌ CI pipeline fails running unit tests.
   - ⚠️ Logging behavior remains unverified.
   ```
   </details>
   
   ```suggestion
           with patch("superset.mcp_service.middleware.get_user_id", 
return_value=1), \
                patch("superset.mcp_service.middleware.event_logger"), \
                patch("superset.mcp_service.middleware.logger") as mock_logger:
   ```
   <details>
   <summary><b>Steps of Reproduction ✅ </b></summary>
   
   ```mdx
   1. Run pytest for the warning test: pytest
   
tests/unit_tests/mcp_service/test_middleware.py::TestResponseSizeGuardMiddleware::test_logs_warning_at_threshold
   
      — The test's with-block is around lines 142-147 in
      tests/unit_tests/mcp_service/test_middleware.py.
   
   2. When execution reaches the with at line 142, Python evaluates the 
parenthesized patches
   as a tuple rather than entering each patch context manager.
   
   3. Attempting to use that tuple as a context manager raises TypeError: 
"'tuple' object
   does not support the context manager protocol".
   
   4. As a result, the test errors out and never reaches assertions such as
   mock_logger.warning.assert_called().
   ```
   </details>
   <details>
   <summary><b>Prompt for AI Agent 🤖 </b></summary>
   
   ```mdx
   This is a comment left during a code review.
   
   **Path:** tests/unit_tests/mcp_service/test_middleware.py
   **Line:** 142:146
   **Comment:**
        *Possible Bug: A three-item parenthesized with (creating a tuple) is 
used which will fail at runtime; change to a comma-separated with so the 
patches (including the one assigned as mock_logger) are entered and the test 
can assert on the mocked logger.
   
   Validate the correctness of the flagged issue. If correct, How can I resolve 
this? If you propose a fix, implement it and please make it concise.
   ```
   </details>



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

