codeant-ai-for-open-source[bot] commented on code in PR #37623:
URL: https://github.com/apache/superset/pull/37623#discussion_r2757037232


##########
superset-frontend/src/pages/DatasetList/DatasetList.behavior.test.tsx:
##########
@@ -162,7 +162,7 @@ test('500 error triggers danger toast with error message', async () => {
 
   // Verify toast message contains error keywords
   expect(addDangerToast.mock.calls.length).toBeGreaterThan(0);
-  const toastMessage = String(addDangerToast.mock.calls[0][0]);
+  const toastMessage = String(addDangerToast.mock.calls[0].url);
   expect(

Review Comment:
   **Suggestion:** The 500-error toast test reads `url` from the first `addDangerToast` mock call, but the mock is called with a message string, so `addDangerToast.mock.calls[0].url` is `undefined` and the test will incorrectly assert against the string `"undefined"` instead of the actual toast message. [logic error]
   
   <details>
   <summary><b>Severity Level:</b> Critical 🚨</summary>
   
   ```mdx
   - ❌ Unit test assertion reads wrong value, producing false negatives.
   - ⚠️ DatasetList behavior tests unreliable in CI.
   - ⚠️ Test coverage for error-toasting is ineffective.
   ```
   </details>
   
   ```suggestion
     const toastMessage = String(addDangerToast.mock.calls[0][0]);
   ```
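
   As context for the fix: Jest's `mock.calls` is an array of argument tuples, one tuple per call, so the first argument of the first call lives at `mock.calls[0][0]`; there is no `url` property on the tuple. A minimal standalone sketch (the toast message here is hypothetical, not taken from the PR):

   ```ts
   import { jest, expect, test } from '@jest/globals';

   test('mock.calls stores argument tuples', () => {
     const addDangerToast = jest.fn();
     addDangerToast('An error occurred while fetching datasets');

     // Each entry in mock.calls is the arguments array for one call.
     expect(addDangerToast.mock.calls[0][0]).toBe(
       'An error occurred while fetching datasets',
     );
     // The tuple has no `url` property, so this is undefined:
     expect((addDangerToast.mock.calls[0] as any).url).toBeUndefined();
   });
   ```
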
   <details>
   <summary><b>Steps of Reproduction ✅ </b></summary>
   
   ```mdx
   1. Run the DatasetList behavior tests (npm test or jest) with the PR code.

   2. The test "500 error triggers danger toast with error message" in superset-frontend/src/pages/DatasetList/DatasetList.behavior.test.tsx (the block around lines 140-170; the toast assertion is at the lines shown above) executes and sets addDangerToast = jest.fn().

   3. When the component triggers an error, it calls addDangerToast(message). Jest records that call as addDangerToast.mock.calls[0] === [message, ...].

   4. The test currently reads addDangerToast.mock.calls[0].url (line shown above), which is undefined because mock.calls[0] is an arguments array and has no .url property.

   5. The test converts undefined to "undefined" and performs string checks against that, so the assertion either passes or fails for the wrong reason and never checks the real toast text.

   6. Reproducing locally is immediate: run the single test file (jest DatasetList.behavior.test.tsx) and observe the expectation block for the toast either failing or being meaningless.

   7. The correct access is the first argument via addDangerToast.mock.calls[0][0]; switching to that yields the actual message string and valid keyword checks.
   ```
   </details>
   <details>
   <summary><b>Prompt for AI Agent 🤖 </b></summary>
   
   ```mdx
   This is a comment left during a code review.
   
   **Path:** superset-frontend/src/pages/DatasetList/DatasetList.behavior.test.tsx
   **Line:** 164:166
   **Comment:**
        *Logic Error: The 500-error toast test reads `url` from the first `addDangerToast` mock call, but the mock is called with a message string, so `addDangerToast.mock.calls[0].url` is `undefined` and the test will incorrectly assert against the string `"undefined"` instead of the actual toast message.

   Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and keep it concise.
   ```
   </details>



##########
superset-frontend/src/pages/DatasetList/DatasetList.behavior.test.tsx:
##########
@@ -199,7 +199,7 @@ test('network timeout triggers danger toast', async () => {
 
   // Verify toast message contains timeout/network keywords
   expect(addDangerToast.mock.calls.length).toBeGreaterThan(0);
-  const toastMessage = String(addDangerToast.mock.calls[0][0]);
+  const toastMessage = String(addDangerToast.mock.calls[0].url);
   expect(

Review Comment:
   **Suggestion:** The network-timeout toast test also incorrectly accesses `addDangerToast.mock.calls[0].url`, which is `undefined` because the mock is called with a message string, causing the test to assert against `"undefined"` instead of the actual error text. [logic error]
   
   <details>
   <summary><b>Severity Level:</b> Critical 🚨</summary>
   
   ```mdx
   - ❌ Unit test assertion reads wrong value, producing false negatives.
   - ⚠️ DatasetList behavior tests unreliable in CI.
   - ⚠️ Error handling behavior not properly asserted.
   ```
   </details>
   
   ```suggestion
     const toastMessage = String(addDangerToast.mock.calls[0][0]);
   ```
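
   As an aside, the keyword assertion can avoid manual tuple indexing entirely by using Jest's asymmetric matchers; a hedged alternative, assuming the component calls `addDangerToast(message: string)` exactly as mocked in this test:

   ```ts
   // Equivalent check without reaching into mock.calls: assert the mock
   // was called with a string matching the timeout/network keywords.
   expect(addDangerToast).toHaveBeenCalledWith(
     expect.stringMatching(/timeout|network/i),
   );
   ```
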
   <details>
   <summary><b>Steps of Reproduction ✅ </b></summary>
   
   ```mdx
   1. Run the DatasetList behavior tests (jest), including the "network timeout triggers danger toast" test in superset-frontend/src/pages/DatasetList/DatasetList.behavior.test.tsx (the block around lines ~176-206).

   2. That test creates addDangerToast = jest.fn() and renders the component while fetchMock.get is configured to throw a network timeout error.

   3. When the code under test calls addDangerToast(message), Jest records the call as addDangerToast.mock.calls[0] === [message].

   4. The test reads addDangerToast.mock.calls[0].url (line shown above), which is undefined (the arguments array has no .url property).

   5. Converting undefined to "undefined" makes the keyword checks operate on the wrong string, so the assertion never validates the real toast content.

   6. Running this test locally reproduces the issue immediately; changing the access to addDangerToast.mock.calls[0][0] fixes it and makes the keyword assertions meaningful.
   ```
   </details>
   <details>
   <summary><b>Prompt for AI Agent 🤖 </b></summary>
   
   ```mdx
   This is a comment left during a code review.
   
   **Path:** superset-frontend/src/pages/DatasetList/DatasetList.behavior.test.tsx
   **Line:** 201:203
   **Comment:**
        *Logic Error: The network-timeout toast test also incorrectly accesses `addDangerToast.mock.calls[0].url`, which is `undefined` because the mock is called with a message string, causing the test to assert against `"undefined"` instead of the actual error text.

   Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and keep it concise.
   ```
   </details>



##########
superset-frontend/src/pages/DatasetList/DatasetList.permissions.test.tsx:
##########
@@ -385,7 +385,7 @@ test('user with can_duplicate sees duplicate button only for virtual datasets',
   fetchMock.get(
     API_ENDPOINTS.DATASETS,
     { result: [physicalDataset, virtualDataset], count: 2 },
-    { overwriteRoutes: true },
+
   );

Review Comment:
   **Suggestion:** This test is meant to verify duplicate-button visibility only for a specific pair of physical and virtual datasets, but since `setupMocks()` has already set up a route for `API_ENDPOINTS.DATASETS`, fetch-mock will match that route first and ignore your test-specific `fetchMock.get`, so the API response may include additional datasets or different data than `[physicalDataset, virtualDataset]`, weakening the reliability of the assertion. To tighten the contract and ensure the test runs against only the intended two datasets, remove the existing datasets route before adding the new one. [logic error]
   
   <details>
   <summary><b>Severity Level:</b> Major ⚠️</summary>
   
   ```mdx
   - ⚠️ Duplicate-button visibility assertions unreliable.
   - ⚠️ Affects permission-based UI tests for virtual datasets.
   - ⚠️ Test suite may miss regressions in duplication logic.
   ```
   </details>
   
   ```suggestion
     fetchMock.removeRoute(API_ENDPOINTS.DATASETS);
     fetchMock.get(API_ENDPOINTS.DATASETS, {
       result: [physicalDataset, virtualDataset],
       count: 2,
     });
   ```
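
   If `fetchMock.removeRoute(...)` is not available in the installed fetch-mock version, the same override can be expressed with named routes; this is a hedged sketch assuming fetch-mock v12's `name` route option and `removeRoutes({ names })`, where the route name 'datasets' and the default payload are illustrative (other identifiers come from the test context):

   ```ts
   import fetchMock from 'fetch-mock';

   // In setupMocks(): register the default route under a known name.
   fetchMock.get(API_ENDPOINTS.DATASETS, defaultDatasetsPayload, {
     name: 'datasets',
   });

   // In the test: drop only the default datasets route, then override it.
   fetchMock.removeRoutes({ names: ['datasets'] });
   fetchMock.get(API_ENDPOINTS.DATASETS, {
     result: [physicalDataset, virtualDataset],
     count: 2,
   });
   ```
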
   <details>
   <summary><b>Steps of Reproduction ✅ </b></summary>
   
   ```mdx
   1. Confirm setup: beforeEach() calls setupMocks() (DatasetList.permissions.test.tsx lines 37-39), which sets a default datasets route in DatasetList.testHelpers.tsx (lines 485-488).

   2. The 'user with can_duplicate sees duplicate button only for virtual datasets' test registers a per-test datasets mock at DatasetList.permissions.test.tsx lines 385-389 for [physicalDataset, virtualDataset].

   3. Because the default datasets route was registered earlier (DatasetList.testHelpers.tsx:485-488), fetch-mock matches it first and serves mockDatasets instead of the two-item list registered in the test. The test may therefore observe different rows (more datasets) or different dataset kinds, weakening the assertions about duplicate-button visibility.

   4. Reproduce: run this single test and inspect the rendered rows, or check fetchMock.callHistory to see which dataset response was returned; you'll find the setupMocks() response was used rather than the per-test one.
   ```
   </details>
   <details>
   <summary><b>Prompt for AI Agent 🤖 </b></summary>
   
   ```mdx
   This is a comment left during a code review.
   
   **Path:** superset-frontend/src/pages/DatasetList/DatasetList.permissions.test.tsx
   **Line:** 385:389
   **Comment:**
        *Logic Error: This test is meant to verify duplicate-button visibility only for a specific pair of physical and virtual datasets, but since `setupMocks()` has already set up a route for `API_ENDPOINTS.DATASETS`, fetch-mock will match that route first and ignore your test-specific `fetchMock.get`, so the API response may include additional datasets or different data than `[physicalDataset, virtualDataset]`, weakening the reliability of the assertion. To tighten the contract and ensure the test runs against only the intended two datasets, remove the existing datasets route before adding the new one.

   Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and keep it concise.
   ```
   </details>



##########
superset-frontend/src/pages/DatasetList/DatasetList.test.tsx:
##########
@@ -421,15 +410,15 @@ test('component requires explicit mocks for all API endpoints', async () => {
   await waitForDatasetsPageReady();
 
   // Verify that critical endpoints were called and had mocks available
-  const newDatasetsCalls = fetchMock.calls(API_ENDPOINTS.DATASETS);
-  const newInfoCalls = fetchMock.calls(API_ENDPOINTS.DATASETS_INFO);
+  const newDatasetsCalls = fetchMock.callHistory.calls(API_ENDPOINTS.DATASETS);
+  const newInfoCalls = fetchMock.callHistory.calls(API_ENDPOINTS.DATASETS_INFO);
 
   // These should have been called during render
   expect(newDatasetsCalls.length).toBeGreaterThan(0);
   expect(newInfoCalls.length).toBeGreaterThan(0);
 
   // Verify no unmatched calls (all endpoints were mocked)
-  const unmatchedCalls = fetchMock.calls(false); // false = unmatched only
+  const unmatchedCalls = fetchMock.callHistory.calls(false); // false = unmatched only

Review Comment:
   **Suggestion:** The call `fetchMock.callHistory.calls(false)` no longer returns unmatched calls in the fetch-mock v12 API, so this test will always see zero unmatched calls and fail to detect unmocked endpoints; instead you should explicitly filter the full call history for the special `unmatched` route name, as done elsewhere in the helpers. [logic error]
   
   <details>
   <summary><b>Severity Level:</b> Major ⚠️</summary>
   
   ```mdx
   - ❌ Tests silently ignore unmocked network requests.
   - ⚠️ DatasetList integration tests give false positives.
   - ⚠️ CI test checks may pass incorrectly.
   - ⚠️ Masked regressions in API mocking setup.
   ```
   </details>
   
   ```suggestion
     const allCalls = fetchMock.callHistory.calls();
     const unmatchedCalls = allCalls.filter(
       call => call.route?.config?.name === 'unmatched',
     );
   ```
   <details>
   <summary><b>Steps of Reproduction ✅ </b></summary>
   
   ```mdx
   1. Run the test suite including this test file: execute `npm test -- superset-frontend/src/pages/DatasetList/DatasetList.test.tsx` (the entry point is this test file).

   2. The test at superset-frontend/src/pages/DatasetList/DatasetList.test.tsx:399-423 renders the component and then, at lines 420-422, executes:

      - const unmatchedCalls = fetchMock.callHistory.calls(false);

      - expect(unmatchedCalls.length).toBe(0);

   3. In fetch-mock v12 the API shape changed: calling fetchMock.callHistory.calls(false) returns an empty array (it no longer filters for unmatched calls), so the assertion passes even if there are unmocked requests.

   4. To reproduce the failing detection: modify a DatasetList path to call an unmocked endpoint (or remove a mock in setupMocks) and run this test; the current code will still assert zero unmatched calls at lines 420-422, masking the missing mock.

   5. Verify the behaviour by inspecting call history earlier in the file, where callHistory.calls(API_ENDPOINTS.DATASETS) is used (lines 412-415 and 252-256); callHistory.calls() is already used for filtering by route elsewhere, so replacing the false-flag usage with an explicit filter (the improved code) will accurately detect unmatched route entries.

   6. Note: the suggestion is actionable; it replaces the deprecated/changed false-flag usage with explicit filtering of callHistory.calls() entries by the route name 'unmatched', restoring test correctness.
   ```
   </details>
   <details>
   <summary><b>Prompt for AI Agent 🤖 </b></summary>
   
   ```mdx
   This is a comment left during a code review.
   
   **Path:** superset-frontend/src/pages/DatasetList/DatasetList.test.tsx
   **Line:** 421:421
   **Comment:**
        *Logic Error: The call `fetchMock.callHistory.calls(false)` no longer returns unmatched calls in the fetch-mock v12 API, so this test will always see zero unmatched calls and fail to detect unmocked endpoints; instead you should explicitly filter the full call history for the special `unmatched` route name, as done elsewhere in the helpers.

   Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and keep it concise.
   ```
   </details>



##########
superset-frontend/src/pages/DatasetList/DatasetList.testHelpers.tsx:
##########
@@ -468,11 +449,13 @@ export const setupDuplicateErrorMocks = (statusCode: number) => {
  * @throws If any unmocked endpoints were called or expected endpoints weren't called
  */
 export const assertOnlyExpectedCalls = (expectedEndpoints: string[]) => {
-  const allCalls = fetchMock.calls(true); // Get all calls including unmatched
-  const unmatchedCalls = allCalls.filter(call => call.isUnmatched);
+  const allCalls = fetchMock.callHistory.calls();
+  const unmatchedCalls = allCalls.filter(
+    call => call.route?.config?.name === 'unmatched',
+  );
 

Review Comment:
   **Suggestion:** The unmatched-call detection in `assertOnlyExpectedCalls` is brittle and likely incorrect with the fetch-mock v12 API, because it relies on `call.route?.config?.name === 'unmatched'`; in v12, unmatched calls are exposed via dedicated unmatched-call tracking (e.g. `callHistory.unmatchedCalls()` or an `unmatched` flag on the call), so this filter will typically return an empty array and fail to detect real unmocked network calls, allowing tests to pass even when components hit unmocked endpoints. [logic error]
   
   <details>
   <summary><b>Severity Level:</b> Major ⚠️</summary>
   
   ```mdx
   - ❌ DatasetList tests silently miss unmocked network calls.
   - ⚠️ Test-suite false negatives allow regression slips.
   - ⚠️ CI test stability reduced for DatasetList tests.
   - ⚠️ Other tests using this helper may be affected.
   ```
   </details>
   
   ```suggestion
  // Use fetch-mock's unmatched call tracking instead of relying on a route name
     const unmatchedCalls =
       typeof fetchMock.callHistory.unmatchedCalls === 'function'
         ? fetchMock.callHistory.unmatchedCalls()
         : fetchMock.callHistory
             .calls()
             // Fallback for environments where calls expose an `unmatched` flag
             .filter((call: any) => call.unmatched);
   ```
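
   A possible usage pattern for the defensive lookup above, wrapped as a small shared helper so both `assertOnlyExpectedCalls` and individual tests can reuse it (the helper name is illustrative):

   ```ts
   // Illustrative helper mirroring the defensive lookup suggested above.
   const getUnmatchedCalls = () =>
     typeof fetchMock.callHistory.unmatchedCalls === 'function'
       ? fetchMock.callHistory.unmatchedCalls()
       : fetchMock.callHistory.calls().filter((call: any) => call.unmatched);

   // In a test or in assertOnlyExpectedCalls:
   expect(getUnmatchedCalls()).toHaveLength(0);
   ```
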
   <details>
   <summary><b>Steps of Reproduction ✅ </b></summary>
   
   ```mdx
   1. Run the DatasetList tests that import this helper: open superset-frontend/src/pages/DatasetList/DatasetList.testHelpers.tsx and locate assertOnlyExpectedCalls starting at line 451 (the function body uses fetchMock.callHistory.calls()).

   2. Execute a test that triggers an unmocked API call (for example, any test in DatasetList.*.test.tsx that does not add a matching fetchMock.get/post). The test renders DatasetList via renderDatasetList() in the same file, which mounts the component and exercises the API calls mocked via setupMocks() (see setupMocks earlier in the file). If a new endpoint is called but not added to setupMocks(), it produces an "unmatched" entry in fetch-mock v12 internals.

   3. In the current helper (lines 451-476), assertOnlyExpectedCalls calls fetchMock.callHistory.calls() and filters by call.route?.config?.name === 'unmatched' (line 453 in the PR diff). With fetch-mock v12, route.config.name is not populated for unmatched calls, so the filter returns an empty array even when unmatched network calls occurred.

   4. Observation: the assertion block that should throw (the Error with `Unmocked endpoints called: ...`) is never reached, allowing tests to pass silently despite unmocked network traffic. This reproduces locally by running "npm test -- DatasetList" (or the project's test runner) and adding/removing a mock to verify the helper fails to detect the missing mock.

   5. The improved code uses fetchMock.callHistory.unmatchedCalls() when available (or falls back to a call.unmatched flag), which correctly surfaces unmatched requests on v12. Replacing the brittle route-name check with this approach will make the helper throw as intended when unmocked endpoints are hit.
   ```
   </details>
   <details>
   <summary><b>Prompt for AI Agent 🤖 </b></summary>
   
   ```mdx
   This is a comment left during a code review.
   
   **Path:** superset-frontend/src/pages/DatasetList/DatasetList.testHelpers.tsx
   **Line:** 452:456
   **Comment:**
        *Logic Error: The unmatched-call detection in `assertOnlyExpectedCalls` is brittle and likely incorrect with the fetch-mock v12 API, because it relies on `call.route?.config?.name === 'unmatched'`; in v12, unmatched calls are exposed via dedicated unmatched-call tracking (e.g. `callHistory.unmatchedCalls()` or an `unmatched` flag on the call), so this filter will typically return an empty array and fail to detect real unmocked network calls, allowing tests to pass even when components hit unmocked endpoints.

   Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and keep it concise.
   ```
   </details>



##########
superset-frontend/src/pages/DatasetList/DatasetList.permissions.test.tsx:
##########
@@ -348,7 +348,7 @@ test('export-only user has no Actions column (no write/duplicate permissions)',
   fetchMock.get(
     API_ENDPOINTS.DATASETS,
     { result: [dataset], count: 1 },
-    { overwriteRoutes: true },
+
   );

Review Comment:
   **Suggestion:** In this test, you intend to control the datasets API response to a single dataset for an export-only user, but because `setupMocks()` has already registered a default route for `API_ENDPOINTS.DATASETS`, fetch-mock will satisfy requests with that first route and ignore this per-test `fetchMock.get`, meaning the test may be exercising unintended data and could silently miss regressions if the default mock data changes. Removing the existing datasets route before registering the test-specific one ensures the component sees exactly the mocked payload defined in this test. [logic error]
   
   <details>
   <summary><b>Severity Level:</b> Major ⚠️</summary>
   
   ```mdx
   - ⚠️ Export-only test may exercise wrong dataset fixture.
   - ⚠️ Tests relying on specific dataset count can be unreliable.
   - ⚠️ Affects DatasetList export-only UI assertions.
   ```
   </details>
   
   ```suggestion
     fetchMock.removeRoute(API_ENDPOINTS.DATASETS);
     fetchMock.get(API_ENDPOINTS.DATASETS, {
       result: [dataset],
       count: 1,
     });
   ```
   <details>
   <summary><b>Steps of Reproduction ✅ </b></summary>
   
   ```mdx
   1. beforeEach() in DatasetList.permissions.test.tsx calls setupMocks() (DatasetList.permissions.test.tsx lines 37-39), which registers a default datasets mock in DatasetList.testHelpers.tsx (lines 485-488).

   2. The 'export-only user has no Actions column' test registers a per-test datasets mock at DatasetList.permissions.test.tsx lines 348-352 (fetchMock.get(API_ENDPOINTS.DATASETS, { result: [dataset], count: 1 })).

   3. fetch-mock processes routes in registration order; the route added by setupMocks() (DatasetList.testHelpers.tsx:485-488) is earlier and will match requests, causing the per-test registration (lines 348-352) to be ignored.

   4. Reproduce: run only this test; inspect fetchMock.callHistory or temporarily log which route served the datasets request. You'll observe the default mock from setupMocks() served the request, not the per-test data, so the test may pass while never exercising the intended single-dataset payload.
   ```
   </details>
   <details>
   <summary><b>Prompt for AI Agent 🤖 </b></summary>
   
   ```mdx
   This is a comment left during a code review.
   
   **Path:** superset-frontend/src/pages/DatasetList/DatasetList.permissions.test.tsx
   **Line:** 348:352
   **Comment:**
        *Logic Error: In this test, you intend to control the datasets API response to a single dataset for an export-only user, but because `setupMocks()` has already registered a default route for `API_ENDPOINTS.DATASETS`, fetch-mock will satisfy requests with that first route and ignore this per-test `fetchMock.get`, meaning the test may be exercising unintended data and could silently miss regressions if the default mock data changes. Removing the existing datasets route before registering the test-specific one ensures the component sees exactly the mocked payload defined in this test.

   Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and keep it concise.
   ```
   </details>



##########
superset-frontend/src/pages/DatasetList/DatasetList.permissions.test.tsx:
##########
@@ -309,7 +309,7 @@ test('write user sees edit, delete, and export actions', async () => {
   fetchMock.get(
     API_ENDPOINTS.DATASETS,
     { result: [dataset], count: 1 },
-    { overwriteRoutes: true },
+
   );

Review Comment:
   **Suggestion:** The test-specific mock for the datasets API is added in addition to the default route configured in `setupMocks()`, but because fetch-mock matches routes in registration order, this per-test `fetchMock.get` for the datasets endpoint will never be hit, so the custom `owners` field you set on the dataset is ignored and the test is not actually validating the intended permission behavior. To ensure the test uses the customized dataset, explicitly remove the existing datasets route before registering the new one so that your override is the only matching route. [logic error]
   
   <details>
   <summary><b>Severity Level:</b> Major ⚠️</summary>
   
   ```mdx
   - ⚠️ Tests may not validate owner-based permission behavior.
   - ⚠️ Affected tests: write user permission checks.
   - ⚠️ Test reliability reduced across DatasetList permissions.
   ```
   </details>
   
   ```suggestion
     fetchMock.removeRoute(API_ENDPOINTS.DATASETS);
     fetchMock.get(API_ENDPOINTS.DATASETS, {
       result: [dataset],
       count: 1,
     });
   ```
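
   For the call-history inspection mentioned in the reproduction steps below, a hedged debugging sketch that reuses the `route?.config?.name` shape the helpers already rely on (route naming is an assumption about this suite's setup):

   ```ts
   // Log which registered route served the most recent datasets request.
   const lastDatasetsCall = fetchMock.callHistory
     .calls(API_ENDPOINTS.DATASETS)
     .at(-1);
   console.log('served by route:', lastDatasetsCall?.route?.config?.name);
   ```
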
   <details>
   <summary><b>Steps of Reproduction ✅ </b></summary>
   
   ```mdx
   1. Observe the test setup: beforeEach() in superset-frontend/src/pages/DatasetList/DatasetList.permissions.test.tsx calls setupMocks() (file: DatasetList.permissions.test.tsx, lines 37-39), which registers the default API routes.

   2. Inspect setupMocks() in superset-frontend/src/pages/DatasetList/DatasetList.testHelpers.tsx: it calls fetchMock.removeRoutes() and then registers a default datasets route at API_ENDPOINTS.DATASETS (file: DatasetList.testHelpers.tsx, lines 478-488), specifically:

      - fetchMock.get(API_ENDPOINTS.DATASETS, { result: mockDatasets, count: mockDatasets.length }) (lines 485-488).

   3. In the test 'write user sees edit, delete, and export actions', the test registers a per-test fetchMock.get(API_ENDPOINTS.DATASETS, { result: [dataset], count: 1 }) at DatasetList.permissions.test.tsx lines 309-313.

   4. Because setupMocks() earlier registered a route for API_ENDPOINTS.DATASETS (lines 485-488) and fetch-mock matches routes in registration order, the first matching route (the one from setupMocks) will satisfy requests. The per-test fetchMock.get appended at lines 309-313 will not be used, so the dataset with the modified owners field is ignored and the test does not actually validate the intended owner-based behavior.

   5. Reproducing locally: run the test suite for DatasetList.permissions.test.tsx with the code as-is. Inspect the fetch-mock call history (or add a console log) to confirm the datasets requests were served by the setupMocks() route (DatasetList.testHelpers.tsx:485-488) rather than the per-test route (DatasetList.permissions.test.tsx:309-313). Expect the test to still pass without exercising the intended per-test dataset shape.
   ```
   </details>
   <details>
   <summary><b>Prompt for AI Agent 🤖 </b></summary>
   
   ```mdx
   This is a comment left during a code review.
   
   **Path:** superset-frontend/src/pages/DatasetList/DatasetList.permissions.test.tsx
   **Line:** 309:313
   **Comment:**
        *Logic Error: The test-specific mock for the datasets API is added in addition to the default route configured in `setupMocks()`, but because fetch-mock matches routes in registration order, this per-test `fetchMock.get` for the datasets endpoint will never be hit, so the custom `owners` field you set on the dataset is ignored and the test is not actually validating the intended permission behavior. To ensure the test uses the customized dataset, explicitly remove the existing datasets route before registering the new one so that your override is the only matching route.

   Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and keep it concise.
   ```
   </details>



##########
superset-frontend/src/pages/DatasetList/DatasetList.listview.test.tsx:
##########
@@ -1150,7 +1096,7 @@ test('displays error when initial dataset fetch fails with 403 permission denied
   });
 
   // Verify toast message contains the 403-specific "Access Denied" text
-  const toastMessage = String(mockAddDangerToast.mock.calls[0][0]);
+  const toastMessage = String(mockAddDangerToast.mock.calls[0].url);

Review Comment:
   **Suggestion:** The test for the 403 dataset fetch error incorrectly reads a `url` property from the first call to the danger toast mock, but the toast mock is called with the message string as its first argument, so `mockAddDangerToast.mock.calls[0].url` is `undefined` and the assertion will always fail instead of checking the actual toast message text. [logic error]
   
   <details>
   <summary><b>Severity Level:</b> Major ⚠️</summary>
   
   ```mdx
   - ❌ DatasetList 403 test fails preventing CI pass.
   - ⚠️ Developer feedback loop slowed by failing unit tests.
   - ⚠️ Prevents verifying permission-denied UX in tests.
   ```
   </details>
   
   ```suggestion
     const toastMessage = String(mockAddDangerToast.mock.calls[0][0]);
   ```
   <details>
   <summary><b>Steps of Reproduction ✅ </b></summary>
   
   ```mdx
   1. Open the test file at superset-frontend/src/pages/DatasetList/DatasetList.listview.test.tsx and locate the 403 permission-denied test at line ~1083 (the test that sets fetchMock to return status: 403). The failing assertion is at line 1099, where the code reads mockAddDangerToast.mock.calls[0].url.

   2. Run the test suite (e.g., npm test or the specific Jest test), which executes this test: the component renders, and when the datasets API returns 403 the component calls the injected addDangerToast mock (mockAddDangerToast) with a message string. Verify via the test setup that addDangerToast is mocked as mockAddDangerToast(msg) at lines 41-49.

   3. The test then attempts to read mockAddDangerToast.mock.calls[0].url (line 1099). Since the mock was called with a single string argument, mock.calls[0] is an array like ["Access Denied"], not an object with a .url property. Accessing .url yields undefined; String(undefined) becomes "undefined", so expect(toastMessage).toContain('Access Denied') will fail.

   4. Change the read to access the first call's first argument: mockAddDangerToast.mock.calls[0][0] (the actual message string). This matches how Jest shapes mock call tuples (an array of args). Replacing line 1099 with mock.calls[0][0] will correctly extract the toast message and allow the assertion to verify 'Access Denied'.

   Explanation: the file registers addDangerToast to call mockAddDangerToast(msg) (lines 41-49). Jest stores each call as an args array, so reading .url is incorrect. This is a logic bug in the test assertion, not in the component.
   ```
   </details>
   <details>
   <summary><b>Prompt for AI Agent 🤖 </b></summary>
   
   ```mdx
   This is a comment left during a code review.
   
   **Path:** superset-frontend/src/pages/DatasetList/DatasetList.listview.test.tsx
   **Line:** 1099:1099
   **Comment:**
        *Logic Error: The test for the 403 dataset fetch error incorrectly reads a `url` property from the first call to the danger toast mock, but the toast mock is called with the message string as its first argument, so `mockAddDangerToast.mock.calls[0].url` is `undefined` and the assertion will always fail instead of checking the actual toast message text.

   Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and keep it concise.
   ```
   </details>



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

