codeant-ai-for-open-source[bot] commented on code in PR #34479:
URL: https://github.com/apache/superset/pull/34479#discussion_r2608144722
##########
superset-frontend/src/dashboard/components/URLShortLinkButton/URLShortLinkButton.test.tsx:
##########
@@ -84,9 +84,8 @@ test('creates email anchor', async () => {
});
test('renders error message on short url error', async () => {
- fetchMock.mock(`glob:*/api/v1/dashboard/${DASHBOARD_ID}/permalink`, 500, {
- overwriteRoutes: true,
- });
+ fetchMock.route(`glob:*/api/v1/dashboard/${DASHBOARD_ID}/permalink`, 500);
Review Comment:
**Suggestion:** Using `fetchMock.route` here does not correctly override the
existing `fetchMock.post` handler for the same permalink endpoint, so the POST
request will still receive the success payload instead of a 500 error, meaning
the error-path test will not actually exercise the failure behavior; switching
to `fetchMock.post` with a 500 status matches the existing route and overrides
it as intended. [logic error]
**Severity Level:** Minor ⚠️
```suggestion
fetchMock.post(`glob:*/api/v1/dashboard/${DASHBOARD_ID}/permalink`, 500);
```
<details>
<summary><b>Why it matters? ⭐ </b></summary>
Earlier in this test file the permalink POST is explicitly mocked with:
fetchMock.post(`glob:*/api/v1/dashboard/${DASHBOARD_ID}/permalink`,
PERMALINK_PAYLOAD)
Using fetchMock.route here is ambiguous and may not override the
previously registered POST mock, so the test, which is meant to simulate a 500
response, could still receive the success payload. Replacing the call with
fetchMock.post(..., 500) targets the same HTTP method and clearly overrides the
prior mock, ensuring the error path is actually exercised. This is a functional
test fix, not mere style.
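For context, a minimal sketch of the two registrations involved, assuming the classic method-specific API; the `DASHBOARD_ID` value and `PERMALINK_PAYLOAD` shape below are placeholders for the constants defined in the test file:
```ts
import fetchMock from 'fetch-mock';

const DASHBOARD_ID = 1; // placeholder for the test file's constant
const PERMALINK_PAYLOAD = { key: 'abc123' }; // placeholder payload shape

// Success mock registered earlier in the file:
fetchMock.post(
  `glob:*/api/v1/dashboard/${DASHBOARD_ID}/permalink`,
  PERMALINK_PAYLOAD,
);

// Error-path test: register the 500 through the same method-specific helper so
// it targets the POST the component actually issues. Depending on the installed
// fetch-mock version, replacing the earlier handler may also require an explicit
// overwrite or removing the old route first.
fetchMock.post(`glob:*/api/v1/dashboard/${DASHBOARD_ID}/permalink`, 500);
```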
</details>
<details>
<summary><b>Prompt for AI Agent 🤖 </b></summary>
```mdx
This is a comment left during a code review.
**Path:**
superset-frontend/src/dashboard/components/URLShortLinkButton/URLShortLinkButton.test.tsx
**Line:** 87:87
**Comment:**
*Logic Error: Using `fetchMock.route` here does not correctly override
the existing `fetchMock.post` handler for the same permalink endpoint, so the
POST request will still receive the success payload instead of a 500 error,
meaning the error-path test will not actually exercise the failure behavior;
switching to `fetchMock.post` with a 500 status matches the existing route and
overrides it as intended.
Validate the correctness of the flagged issue. If correct, How can I resolve
this? If you propose a fix, implement it and please make it concise.
```
</details>
##########
superset-frontend/src/explore/components/PropertiesModal/PropertiesModal.test.tsx:
##########
@@ -123,12 +123,12 @@ fetchMock.put('glob:*/api/v1/chart/318', {
owners: [],
slice_name: 'Age distribution of respondents',
},
- },
- sendAsJson: true,
+ }
});
afterAll(() => {
- fetchMock.resetBehavior();
+ fetchMock.removeRoutes();
+ fetchMock.unmockGlobal();
Review Comment:
**Suggestion:** The cleanup function calls `fetchMock.unmockGlobal()`, but
`fetch-mock` does not expose this method, so at runtime this will throw a
`TypeError` (attempting to call an undefined function) and can cause the entire
test file to fail; use the standard `fetchMock.restore()` API to unpatch the
global `fetch` instead. [possible bug]
**Severity Level:** Critical 🚨
```suggestion
fetchMock.restore();
```
<details>
<summary><b>Why it matters? ⭐ </b></summary>
The PR's current test file calls fetchMock.unmockGlobal(), which is not part
of the standard fetch-mock API and will cause a TypeError at runtime (calling
undefined). Replacing it with fetchMock.restore() (or just relying on
removeRoutes plus restore) is the correct cleanup pattern and prevents the test
file from failing. This is a real bugfix, not a cosmetic change.
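A minimal teardown sketch along these lines, assuming the installed fetch-mock still exposes the classic `restore()` API:
```ts
import fetchMock from 'fetch-mock';

afterAll(() => {
  // In the classic API, restore() removes all registered routes, clears call
  // history, and un-patches the global fetch in one call.
  fetchMock.restore();
});
```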
</details>
<details>
<summary><b>Prompt for AI Agent 🤖 </b></summary>
```mdx
This is a comment left during a code review.
**Path:**
superset-frontend/src/explore/components/PropertiesModal/PropertiesModal.test.tsx
**Line:** 131:131
**Comment:**
*Possible Bug: The cleanup function calls `fetchMock.unmockGlobal()`,
but `fetch-mock` does not expose this method, so at runtime this will throw a
`TypeError` (attempting to call an undefined function) and can cause the entire
test file to fail; use the standard `fetchMock.restore()` API to unpatch the
global `fetch` instead.
Validate the correctness of the flagged issue. If correct, How can I resolve
this? If you propose a fix, implement it and please make it concise.
```
</details>
##########
superset-frontend/src/pages/ChartList/ChartList.permissions.test.tsx:
##########
@@ -126,13 +126,9 @@ const renderChartList = (
// Setup API permissions mock
const setupApiPermissions = (permissions: string[]) => {
- fetchMock.get(
- API_ENDPOINTS.CHARTS_INFO,
- {
- permissions,
- },
- { overwriteRoutes: true },
- );
+ fetchMock.get(API_ENDPOINTS.CHARTS_INFO, {
+ permissions,
+ });
Review Comment:
**Suggestion:** The permission-specific mock for the charts info endpoint
never takes effect because `fetch-mock` uses the first matching route, so the
default route configured in `setupMocks` is always used and tests that expect
different permissions will silently receive the default full permissions;
explicitly overriding the existing route fixes this. [logic error]
**Severity Level:** Minor ⚠️
```suggestion
  fetchMock.get(
    API_ENDPOINTS.CHARTS_INFO,
    { permissions },
    { overwriteRoutes: true },
  );
```
<details>
<summary><b>Why it matters? ⭐ </b></summary>
Verified in ChartList.testHelpers.tsx that setupMocks() registers a default
fetchMock.get(API_ENDPOINTS.CHARTS_INFO, { permissions: [...] }) route.
fetch-mock matches the first registered route and the test helper here
appends another route for the same matcher; without explicit overwrite the
original default remains in effect and the per-test permissions can be
silently ignored. fetch-mock supports an { overwriteRoutes: true } option
on registration; adding it fixes a real test-logic bug (tests receiving
incorrect permissions) rather than just a cosmetic change.
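A sketch of the interaction described above, assuming the classic `overwriteRoutes` option; the matcher string and default permission list are illustrative stand-ins for the real constants:
```ts
import fetchMock from 'fetch-mock';

const CHARTS_INFO = 'glob:*/api/v1/chart/_info*'; // illustrative matcher

// Default route registered by setupMocks() with full permissions:
fetchMock.get(CHARTS_INFO, { permissions: ['can_read', 'can_write'] });

// Per-test override: without an explicit overwrite, the earlier route keeps
// matching first and the narrower permissions are silently ignored.
fetchMock.get(
  CHARTS_INFO,
  { permissions: ['can_read'] },
  { overwriteRoutes: true },
);
```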
</details>
<details>
<summary><b>Prompt for AI Agent 🤖 </b></summary>
```mdx
This is a comment left during a code review.
**Path:**
superset-frontend/src/pages/ChartList/ChartList.permissions.test.tsx
**Line:** 129:131
**Comment:**
*Logic Error: The permission-specific mock for the charts info endpoint
never takes effect because `fetch-mock` uses the first matching route, so the
default route configured in `setupMocks` is always used and tests that expect
different permissions will silently receive the default full permissions;
explicitly overriding the existing route fixes this.
Validate the correctness of the flagged issue. If correct, How can I resolve
this? If you propose a fix, implement it and please make it concise.
```
</details>
##########
superset-frontend/src/pages/RowLevelSecurityList/RowLevelSecurityList.test.tsx:
##########
@@ -121,32 +121,25 @@ describe('RuleList RTL', () => {
expect(screen.getByTestId('rls-list-view')).toBeInTheDocument();
});
+
test('fetched data', async () => {
- fetchMock.resetHistory();
+ fetchMock.clearHistory();
await renderAndWait();
const apiCalls = fetchMock.calls(/rowlevelsecurity\/\?q/);
expect(apiCalls).toHaveLength(1);
expect(apiCalls[0][0]).toMatchInlineSnapshot(
`"http://localhost/api/v1/rowlevelsecurity/?q=(order_column:changed_on_delta_humanized,order_direction:desc,page:0,page_size:25)"`,
);
- fetchMock.resetHistory();
+ fetchMock.clearHistory();
});
test('renders add rule button on empty state', async () => {
- fetchMock.get(
- ruleListEndpoint,
- { result: [], count: 0 },
- { overwriteRoutes: true },
- );
+ fetchMock.get(ruleListEndpoint, { result: [], count: 0 });
await renderAndWait();
const emptyAddRuleButton = await screen.findByTestId('add-rule-empty');
expect(emptyAddRuleButton).toBeInTheDocument();
Review Comment:
**Suggestion:** This test temporarily overrides the list endpoint but only
restores it after the assertions, so if the test fails before the final
`fetchMock.get` runs, subsequent tests will keep using the empty-state response
and fail for the wrong reason; moving the restore into a `finally` block
prevents this state leak. [resource leak]
**Severity Level:** Minor ⚠️
```suggestion
    try {
      await renderAndWait();
      const emptyAddRuleButton = await screen.findByTestId('add-rule-empty');
      expect(emptyAddRuleButton).toBeInTheDocument();
    } finally {
      fetchMock.get(ruleListEndpoint, { result: mockRules, count: 2 });
    }
```
<details>
<summary><b>Why it matters? ⭐ </b></summary>
The test replaces the list endpoint mock and restores it at the end of the
test. If an exception occurs before the restoring fetchMock.get runs,
subsequent tests will receive the empty response and fail unpredictably. Moving
the restore into a finally block ensures test isolation and prevents cascading
failures — a practical and necessary test hardening.
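An alternative sketch to an inline try/finally: put the restore in an afterEach hook so it runs even when an assertion throws. `ruleListEndpoint` and `mockRules` are the file's existing fixtures, and the same route-override caveats raised elsewhere in this review still apply:
```ts
afterEach(() => {
  // Re-register the default two-rule response after every test so a failed
  // empty-state test cannot leak its override into later tests.
  fetchMock.get(ruleListEndpoint, { result: mockRules, count: 2 });
});
```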
</details>
<details>
<summary><b>Prompt for AI Agent 🤖 </b></summary>
```mdx
This is a comment left during a code review.
**Path:**
superset-frontend/src/pages/RowLevelSecurityList/RowLevelSecurityList.test.tsx
**Line:** 138:141
**Comment:**
*Resource Leak: This test temporarily overrides the list endpoint but
only restores it after the assertions, so if the test fails before the final
`fetchMock.get` runs, subsequent tests will keep using the empty-state response
and fail for the wrong reason; moving the restore into a `finally` block
prevents this state leak.
Validate the correctness of the flagged issue. If correct, How can I resolve
this? If you propose a fix, implement it and please make it concise.
```
</details>
##########
superset-frontend/src/pages/RowLevelSecurityList/RowLevelSecurityList.test.tsx:
##########
@@ -121,32 +121,25 @@ describe('RuleList RTL', () => {
expect(screen.getByTestId('rls-list-view')).toBeInTheDocument();
});
+
test('fetched data', async () => {
- fetchMock.resetHistory();
+ fetchMock.clearHistory();
await renderAndWait();
const apiCalls = fetchMock.calls(/rowlevelsecurity\/\?q/);
expect(apiCalls).toHaveLength(1);
expect(apiCalls[0][0]).toMatchInlineSnapshot(
`"http://localhost/api/v1/rowlevelsecurity/?q=(order_column:changed_on_delta_humanized,order_direction:desc,page:0,page_size:25)"`,
);
Review Comment:
**Suggestion:** The test clears the fetch-mock call history only at the end,
so if any assertion fails before the final `clearHistory` runs, the shared mock
state will leak into subsequent tests and can cause misleading failures;
wrapping the assertions in a `try`/`finally` ensures cleanup always executes.
[resource leak]
**Severity Level:** Minor ⚠️
```suggestion
    try {
      await renderAndWait();
      const apiCalls = fetchMock.calls(/rowlevelsecurity\/\?q/);
      expect(apiCalls).toHaveLength(1);
      expect(apiCalls[0][0]).toMatchInlineSnapshot(
        `"http://localhost/api/v1/rowlevelsecurity/?q=(order_column:changed_on_delta_humanized,order_direction:desc,page:0,page_size:25)"`,
      );
    } finally {
      fetchMock.clearHistory();
    }
```
<details>
<summary><b>Why it matters? ⭐ </b></summary>
The test currently clears the fetch-mock call history before and after the
assertions, but if an assertion throws before the final clearHistory runs, the
global mock state will leak into subsequent tests. Wrapping the assertions in a
try/finally to guarantee clearHistory() always runs fixes a real test isolation
issue rather than being purely stylistic.
</details>
<details>
<summary><b>Prompt for AI Agent 🤖 </b></summary>
```mdx
This is a comment left during a code review.
**Path:**
superset-frontend/src/pages/RowLevelSecurityList/RowLevelSecurityList.test.tsx
**Line:** 127:132
**Comment:**
*Resource Leak: The test clears the fetch-mock call history only at the
end, so if any assertion fails before the final `clearHistory` runs, the shared
mock state will leak into subsequent tests and can cause misleading failures;
wrapping the assertions in a `try`/`finally` ensures cleanup always executes.
Validate the correctness of the flagged issue. If correct, How can I resolve
this? If you propose a fix, implement it and please make it concise.
```
</details>
##########
superset-frontend/src/components/ListView/ListView.test.jsx:
##########
@@ -133,13 +133,13 @@ const factory = (props = mockedProps) =>
// eslint-disable-next-line no-restricted-globals -- TODO: Migrate from describe blocks
describe('ListView', () => {
beforeEach(() => {
- fetchMock.reset();
+ fetchMock.hardReset();
jest.clearAllMocks();
factory();
});
afterEach(() => {
- fetchMock.reset();
+ fetchMock.hardReset();
Review Comment:
**Suggestion:** Calling `fetchMock.hardReset()` again in `afterEach` also
wipes all globally defined routes, which can break any test logic that assumes
the shared CSRF or other global fetch-mock routes remain configured across the
lifetime of this test file; resetting just the call history is sufficient and
avoids removing required mocks. [logic error]
**Severity Level:** Minor ⚠️
```suggestion
fetchMock.reset();
```
<details>
<summary><b>Why it matters? ⭐ </b></summary>
The afterEach call to fetchMock.hardReset() likewise removes globally
configured fetch-mock routes for the whole test run. That can lead to cascading
failures in later tests that expect the shared routes to remain available.
Switching to fetchMock.reset() preserves route configuration while still
clearing history, addressing a real functional test issue.
</details>
<details>
<summary><b>Prompt for AI Agent 🤖 </b></summary>
```mdx
This is a comment left during a code review.
**Path:** superset-frontend/src/components/ListView/ListView.test.jsx
**Line:** 142:142
**Comment:**
*Logic Error: Calling `fetchMock.hardReset()` again in `afterEach` also
wipes all globally defined routes, which can break any test logic that assumes
the shared CSRF or other global fetch-mock routes remain configured across the
lifetime of this test file; resetting just the call history is sufficient and
avoids removing required mocks.
Validate the correctness of the flagged issue. If correct, How can I resolve
this? If you propose a fix, implement it and please make it concise.
```
</details>
##########
superset-frontend/src/components/Datasource/DatasourceModal/DatasourceModal.test.jsx:
##########
@@ -64,7 +64,7 @@ async function renderAndWait(props = mockedProps) {
}
beforeEach(() => {
- fetchMock.reset();
+ fetchMock.hardReset();
cleanup();
renderAndWait();
Review Comment:
**Suggestion:** The fetch-mock reset is performed before rendering, but the
mock routes are only registered after `renderAndWait` runs; since the component
may perform fetches during render/mount, these calls can occur when no routes
are configured (especially after `hardReset`), causing unmatched fetch errors
under fetch-mock 12. Reorder the setup so that `hardReset` and route
registration happen before `renderAndWait` is invoked. [logic error]
**Severity Level:** Minor ⚠️
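A sketch of the reordered setup, reusing the file's `fetchMock` import, `cleanup`, and `renderAndWait` helpers; the two routes shown are hypothetical placeholders for whatever endpoints the modal actually fetches on mount:
```ts
beforeEach(() => {
  fetchMock.hardReset(); // drop routes and history from the previous test
  cleanup();

  // Register everything the component needs during render/mount *before*
  // rendering, so no request fires while zero routes are configured.
  fetchMock.get('glob:*/api/v1/dataset/*', { result: {} }); // hypothetical route
  fetchMock.post('glob:*/datasource/save/', {}); // hypothetical route

  renderAndWait();
});
```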
##########
superset-frontend/src/components/Datasource/components/DatasourceEditor/tests/DatasourceEditor.test.jsx:
##########
@@ -94,14 +93,10 @@ describe('DatasourceEditor', () => {
// Use a Promise to track when fetchMock is called
const fetchPromise = new Promise(resolve => {
- fetchMock.get(
- DATASOURCE_ENDPOINT,
- url => {
- resolve(url);
- return [];
- },
- { overwriteRoutes: true },
- );
+ fetchMock.get(DATASOURCE_ENDPOINT, url => {
+ resolve(url);
+ return [];
+ });
Review Comment:
**Suggestion:** In the "can sync columns from source" test you first
register a GET handler for the same `DATASOURCE_ENDPOINT` in `beforeEach`, then
register another handler inside the `fetchPromise` without enabling route
overwriting; with fetch-mock this means the earlier handler continues to be
used and the tracking handler is never invoked, so `fetchPromise` may never
resolve and the test will hang or time out instead of asserting on the actual
URL. To fix this, configure the second `fetchMock.get` call with
`overwriteRoutes: true` so that it replaces the generic handler and ensures the
URL-capturing callback runs. [logic error]
**Severity Level:** Minor ⚠️
```suggestion
      fetchMock.get(
        DATASOURCE_ENDPOINT,
        url => {
          resolve(url);
          return [];
        },
        { overwriteRoutes: true },
      );
```
<details>
<summary><b>Why it matters? ⭐ </b></summary>
The test's beforeEach registers a generic handler,
fetchMock.get(DATASOURCE_ENDPOINT, []), so the later handler inside the Promise
will not replace the existing route unless it is overwritten or the earlier
route is removed.
removed. That means the URL-capturing callback may never be invoked and the
Promise may never resolve, making this test flaky or hanging. Adding an
overwrite option or otherwise ensuring the capture handler replaces the generic
one fixes a real logic/test issue rather than being cosmetic.
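If the capture handler never runs, a bare `await fetchPromise` simply hangs until Jest's timeout. A defensive sketch, placed inside the async test, races the promise against a shorter timeout so the failure mode is explicit:
```ts
// fetchPromise is the URL-capturing promise set up in the test above.
const url = await Promise.race([
  fetchPromise,
  new Promise<never>((_, reject) =>
    setTimeout(
      () => reject(new Error('datasource request was never issued')),
      5000,
    ),
  ),
]);
// ...then assert on `url` exactly as the original test does.
```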
</details>
<details>
<summary><b>Prompt for AI Agent 🤖 </b></summary>
```mdx
This is a comment left during a code review.
**Path:**
superset-frontend/src/components/Datasource/components/DatasourceEditor/tests/DatasourceEditor.test.jsx
**Line:** 96:99
**Comment:**
*Logic Error: In the "can sync columns from source" test you first
register a GET handler for the same `DATASOURCE_ENDPOINT` in `beforeEach`, then
register another handler inside the `fetchPromise` without enabling route
overwriting; with fetch-mock this means the earlier handler continues to be
used and the tracking handler is never invoked, so `fetchPromise` may never
resolve and the test will hang or time out instead of asserting on the actual
URL. To fix this, configure the second `fetchMock.get` call with
`overwriteRoutes: true` so that it replaces the generic handler and ensures the
URL-capturing callback runs.
Validate the correctness of the flagged issue. If correct, How can I resolve
this? If you propose a fix, implement it and please make it concise.
```
</details>
##########
superset-frontend/src/components/ListView/ListView.test.jsx:
##########
@@ -133,13 +133,13 @@ const factory = (props = mockedProps) =>
// eslint-disable-next-line no-restricted-globals -- TODO: Migrate from describe blocks
describe('ListView', () => {
beforeEach(() => {
- fetchMock.reset();
+ fetchMock.hardReset();
Review Comment:
**Suggestion:** Using `fetchMock.hardReset()` in the test `beforeEach`
clears not only call history but also all globally configured routes (including
the CSRF mock set up in the shared test shim), so any code under test that
relies on those global fetch-mock routes (like SupersetClient's CSRF token
call) will fail in this test file after the hard reset; use `fetchMock.reset()`
to clear history without removing route configuration. [logic error]
**Severity Level:** Minor ⚠️
```suggestion
fetchMock.reset();
```
<details>
<summary><b>Why it matters? ⭐ </b></summary>
The PR's tests call fetchMock.hardReset() in beforeEach which not only
clears call history but also removes any globally configured routes. Many test
suites (including Superset's) install shared fetch-mock routes such as a CSRF
token endpoint in a global test shim. Removing those routes can cause unrelated
network calls from the component under test to fail and produce brittle tests.
Replacing hardReset() with fetchMock.reset() (which clears call history without
wiping route configuration) fixes a real test breakage risk rather than being a
mere stylistic change.
</details>
<details>
<summary><b>Prompt for AI Agent 🤖 </b></summary>
```mdx
This is a comment left during a code review.
**Path:** superset-frontend/src/components/ListView/ListView.test.jsx
**Line:** 136:136
**Comment:**
*Logic Error: Using `fetchMock.hardReset()` in the test `beforeEach`
clears not only call history but also all globally configured routes (including
the CSRF mock set up in the shared test shim), so any code under test that
relies on those global fetch-mock routes (like SupersetClient's CSRF token
call) will fail in this test file after the hard reset; use `fetchMock.reset()`
to clear history without removing route configuration.
Validate the correctness of the flagged issue. If correct, How can I resolve
this? If you propose a fix, implement it and please make it concise.
```
</details>
##########
superset-frontend/src/pages/DashboardList/DashboardList.test.jsx:
##########
@@ -82,7 +82,9 @@ fetchMock.get(dashboardEndpoint, {
});
global.URL.createObjectURL = jest.fn();
-fetchMock.get('/thumbnail', { body: new Blob(), sendAsJson: false });
+fetchMock.get('/thumbnail', {
+ body: new Blob()
Review Comment:
**Suggestion:** The mock for the thumbnail endpoint omits `sendAsJson:
false`, so `fetch-mock` will treat the `Blob` as JSON, which can cause the
response to be wrapped/serialized incorrectly and break consumers like
`ImageLoader` that expect to call `response.blob()` on a binary image response
in tests after the dependency upgrade. [possible bug]
**Severity Level:** Critical 🚨
```suggestion
body: new Blob(),
sendAsJson: false,
```
<details>
<summary><b>Why it matters? ⭐ </b></summary>
fetch-mock by default may serialize response bodies as JSON for convenience;
when the consumer expects a binary response and calls response.blob() (or code
paths that rely on binary headers/behavior), returning a JSON-serialized blob
can break the test. Adding sendAsJson: false ensures fetch-mock returns a raw
response body so consumers behave the same as a real fetch for binary content.
This directly targets a realistic test breakage and is a focused, low-risk
change.
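One way to sidestep the serialization question entirely, assuming the test environment provides `Response` alongside the `Blob` it already constructs, is to return a real `Response` from a function handler so the body reaches consumers untouched:
```ts
// A function response hands back a fresh Response per request, so its body has
// not been consumed yet and response.blob() yields real binary content.
fetchMock.get('/thumbnail', () => new Response(new Blob(['fake-image-bytes'])));
```
This keeps the intent explicit regardless of how the installed fetch-mock version handles the `sendAsJson` flag.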
</details>
<details>
<summary><b>Prompt for AI Agent 🤖 </b></summary>
```mdx
This is a comment left during a code review.
**Path:** superset-frontend/src/pages/DashboardList/DashboardList.test.jsx
**Line:** 86:86
**Comment:**
*Possible Bug: The mock for the thumbnail endpoint omits `sendAsJson:
false`, so `fetch-mock` will treat the `Blob` as JSON, which can cause the
response to be wrapped/serialized incorrectly and break consumers like
`ImageLoader` that expect to call `response.blob()` on a binary image response
in tests after the dependency upgrade.
Validate the correctness of the flagged issue. If correct, How can I resolve
this? If you propose a fix, implement it and please make it concise.
```
</details>
##########
superset-frontend/src/features/home/DashboardTable.test.tsx:
##########
@@ -118,13 +118,10 @@ describe('DashboardTable', () => {
}),
);
- fetchMock.get(
- 'glob:*/api/v1/dashboard/*',
- {
- result: mockDashboards[0],
- },
- { overwriteRoutes: true },
- ); // Add overwriteRoutes option
+ fetchMock.get('glob:*/api/v1/dashboard/*', {
+ result: mockDashboards[0],
+ }); // Add overwriteRoutes option
Review Comment:
**Suggestion:** The `beforeEach` registers the same
`fetchMock.get('glob:*/api/v1/dashboard/*', ...)` route before every test
without setting `overwriteRoutes`, which with the upgraded fetch-mock version
can cause duplicate route registration errors; pass `{ overwriteRoutes: true }`
to allow redefinition on each run. [possible bug]
**Severity Level:** Critical 🚨
```suggestion
    fetchMock.get(
      'glob:*/api/v1/dashboard/*',
      {
        result: mockDashboards[0],
      },
      { overwriteRoutes: true },
    );
```
<details>
<summary><b>Why it matters? ⭐ </b></summary>
The test's beforeEach registers the same fetch-mock route on every test run.
Recent versions of fetch-mock will throw if the same route is added multiple
times unless overwriteRoutes (or similar) is used. Changing the call to pass {
overwriteRoutes: true } is a small, targeted change that prevents
duplicate-registration errors and makes the tests more robust; it fixes a real
potential test flakiness/CI failure rather than being purely cosmetic.
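An alternative sketch that avoids re-registering the same matcher on every run: define the route once at module scope and only clear call history between tests (`clearHistory` is the fetch-mock 12 method this PR already uses elsewhere; `mockDashboards` comes from the test file):
```ts
// Registered once for the whole file.
fetchMock.get('glob:*/api/v1/dashboard/*', { result: mockDashboards[0] });

beforeEach(() => {
  // Per-test hygiene: drop recorded calls but keep the route definitions.
  fetchMock.clearHistory();
});
```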
</details>
<details>
<summary><b>Prompt for AI Agent 🤖 </b></summary>
```mdx
This is a comment left during a code review.
**Path:** superset-frontend/src/features/home/DashboardTable.test.tsx
**Line:** 121:123
**Comment:**
*Possible Bug: The `beforeEach` registers the same
`fetchMock.get('glob:*/api/v1/dashboard/*', ...)` route before every test
without setting `overwriteRoutes`, which with the upgraded fetch-mock version
can cause duplicate route registration errors; pass `{ overwriteRoutes: true }`
to allow redefinition on each run.
Validate the correctness of the flagged issue. If correct, How can I resolve
this? If you propose a fix, implement it and please make it concise.
```
</details>
##########
superset-frontend/src/dashboard/components/menu/ShareMenuItems/ShareMenuItems.test.tsx:
##########
@@ -190,8 +187,7 @@ test('Click on "Share dashboard by email" and succeed', async () => {
test('Click on "Share dashboard by email" and fail', async () => {
fetchMock.post(
`http://localhost/api/v1/dashboard/${DASHBOARD_ID}/permalink`,
- { status: 404 },
- { overwriteRoutes: true },
+ { status: 404 }
Review Comment:
**Suggestion:** When using fetch-mock v12, defining the same POST route
twice without `overwriteRoutes: true` means the second definition for the error
case may be ignored, so the "fail" test will continue to use the success
handler and never truly exercise the error path. [logic error]
**Severity Level:** Minor ⚠️
```suggestion
{ status: 404 },
{ overwriteRoutes: true }
```
<details>
<summary><b>Why it matters? ⭐ </b></summary>
The PR sets a successful fetchMock.post handler in beforeAll for the same
URL. Fetch-mock's routing behaviour varies by version, but in v12 (and some
other versions) adding a second route for the same method+url does not
automatically replace the first unless overwriteRoutes is enabled or the mock
is reset. As written, the failing test attempts to register a 404 response but
may never override the success handler, causing the test to not exercise the
error path. Adding the overwriteRoutes option (or resetting/restoring between
tests) fixes a real test logic issue rather than being merely stylistic.
</details>
<details>
<summary><b>Prompt for AI Agent 🤖 </b></summary>
```mdx
This is a comment left during a code review.
**Path:**
superset-frontend/src/dashboard/components/menu/ShareMenuItems/ShareMenuItems.test.tsx
**Line:** 190:190
**Comment:**
*Logic Error: When using fetch-mock v12, defining the same POST route
twice without `overwriteRoutes: true` means the second definition for the error
case may be ignored, so the "fail" test will continue to use the success
handler and never truly exercise the error path.
Validate the correctness of the flagged issue. If correct, How can I resolve
this? If you propose a fix, implement it and please make it concise.
```
</details>
##########
superset-frontend/src/pages/RowLevelSecurityList/RowLevelSecurityList.test.tsx:
##########
@@ -214,11 +204,7 @@ describe('RuleList RTL', () => {
const editActionIcon = screen.queryByTestId('edit-alt');
expect(editActionIcon).not.toBeInTheDocument();
Review Comment:
**Suggestion:** The permission test overrides the `_info` endpoint but
restores it only after the expectations, so if rendering or assertions throw,
the default permission mock is never restored and later tests will run with
read-only permissions, causing cascading, hard-to-debug failures; using a
`try`/`finally` ensures the restore always happens. [resource leak]
**Severity Level:** Minor ⚠️
```suggestion
    try {
      await renderAndWait();
      const deleteActionIcon = screen.queryByTestId('rls-list-trash-icon');
      expect(deleteActionIcon).not.toBeInTheDocument();
      const editActionIcon = screen.queryByTestId('edit-alt');
      expect(editActionIcon).not.toBeInTheDocument();
    } finally {
      fetchMock.get(ruleInfoEndpoint, {
        permissions: ['can_read', 'can_write'],
      });
    }
```
<details>
<summary><b>Why it matters? ⭐ </b></summary>
This test temporarily overrides the permissions endpoint and only restores it
at the end. If rendering or an assertion throws, the restoring fetchMock.get
call never runs, and later tests may run with read-only permissions, causing
confusing failures.
Using try/finally to always restore the original permission mock is a correct
and minimal fix for test stability.
</details>
<details>
<summary><b>Prompt for AI Agent 🤖 </b></summary>
```mdx
This is a comment left during a code review.
**Path:**
superset-frontend/src/pages/RowLevelSecurityList/RowLevelSecurityList.test.tsx
**Line:** 198:206
**Comment:**
*Resource Leak: The permission test overrides the `_info` endpoint but
restores it only after the expectations, so if rendering or assertions throw,
the default permission mock is never restored and later tests will run with
read-only permissions, causing cascading, hard-to-debug failures; using a
`try`/`finally` ensures the restore always happens.
Validate the correctness of the flagged issue. If correct, How can I resolve
this? If you propose a fix, implement it and please make it concise.
```
</details>