codeant-ai-for-open-source[bot] commented on code in PR #37625:
URL: https://github.com/apache/superset/pull/37625#discussion_r2761789118
##########
superset-frontend/src/pages/DatasetList/DatasetList.behavior.test.tsx:
##########
@@ -302,24 +288,16 @@ test('clicking duplicate opens modal and submits duplicate request', async () =>
kind: 'virtual',
};
- fetchMock.removeRoute(API_ENDPOINTS.DATASETS);
- fetchMock.get(
- API_ENDPOINTS.DATASETS,
- {
- result: [datasetToDuplicate],
- count: 1,
- },
- { name: API_ENDPOINTS.DATASETS },
- );
+ fetchMock.removeRoutes({ names: [API_ENDPOINTS.DATASETS] });
Review Comment:
**Suggestion:** In the duplicate-dataset test, the same incorrect `{ names:
[...] }` argument is passed to `fetchMock.removeRoutes`, so the original
`DATASETS` route registered by `setupMocks()` may still handle requests,
violating the test's expectation that the list refresh is served only by the
mocked dataset. [logic error]
<details>
<summary><b>Severity Level:</b> Major ⚠️</summary>
```mdx
- ❌ Duplicate-flow test may use wrong dataset list handler.
- ⚠️ False negatives/positives in duplicate behavior tests.
- ⚠️ Test reliability reduced in CI.
```
</details>
```suggestion
fetchMock.removeRoutes(API_ENDPOINTS.DATASETS);
```
<details>
<summary><b>Steps of Reproduction ✅ </b></summary>
```mdx
1. Open the 'clicking duplicate opens modal and submits duplicate request' test in
   superset-frontend/src/pages/DatasetList/DatasetList.behavior.test.tsx; the PR diff shows
   fetchMock.removeRoutes({ names: [API_ENDPOINTS.DATASETS] }); was added at diff line 291.
2. Run the duplicate test: it expects the component to initially fetch a single mocked
   dataset (the test's fetchMock.get for API_ENDPOINTS.DATASETS returning datasetToDuplicate)
   and then to POST to API_ENDPOINTS.DATASET_DUPLICATE. If removeRoutes did not actually
   remove the previously registered route, the initial list may be populated from the
   original setupMocks() handler instead.
3. Observe symptoms: the duplicate modal may not find the expected row, the subsequent POST
   or refresh detection (measured by fetchMock.callHistory.calls(API_ENDPOINTS.DATASETS))
   may not reflect only the test's stubs, and assertions checking the refresh (dataset list
   call-count increase) can give false positives or negatives.
4. Reproduce-check: add an early assertion after rendering that inspects
   fetchMock.callHistory.calls(API_ENDPOINTS.DATASETS) and the response bodies returned to
   the component. Fix by calling fetchMock.removeRoutes(API_ENDPOINTS.DATASETS) so the
   single-dataset stub is guaranteed to be the active route.
```
</details>
<details>
<summary><b>Prompt for AI Agent 🤖 </b></summary>
```mdx
This is a comment left during a code review.
**Path:**
superset-frontend/src/pages/DatasetList/DatasetList.behavior.test.tsx
**Line:** 291:291
**Comment:**
*Logic Error: In the duplicate-dataset test, the same incorrect `{
names: [...] }` argument is passed to `fetchMock.removeRoutes`, so the original
`DATASETS` route from `setupMocks()` may still handle requests and the test's
expectation that the list refresh uses only the mocked dataset can be violated.
Validate the correctness of the flagged issue. If correct, How can I resolve
this? If you propose a fix, implement it and please make it concise.
```
</details>
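The distinction the comment draws — passing a route name versus an options object — can be sketched with a minimal named-route registry. This is a hypothetical model written to illustrate the claimed behavior, not fetch-mock's actual implementation or API:

```typescript
// Minimal sketch of a named-route registry: a remove call that expects the
// route name as a plain string silently removes nothing when handed an
// options object instead. Hypothetical model, not fetch-mock's real code.
type Handler = () => unknown;

class RouteRegistry {
  private routes = new Map<string, Handler>();

  register(name: string, handler: Handler): void {
    this.routes.set(name, handler);
  }

  // Removes only when called with the route's name as a plain string.
  removeByName(name: unknown): boolean {
    if (typeof name !== "string") return false; // wrong argument shape: no-op
    return this.routes.delete(name);
  }

  resolve(name: string): Handler | undefined {
    return this.routes.get(name);
  }
}

const registry = new RouteRegistry();
registry.register("datasets", () => ({ count: 5 })); // setupMocks() default

// Passing an options object where a string is expected removes nothing:
const removedWithObject = registry.removeByName({ names: ["datasets"] });
// Passing the name itself removes the default route:
const removedWithString = registry.removeByName("datasets");

// The test's single-dataset stub is now guaranteed to be the active route.
registry.register("datasets", () => ({ count: 1 }));
```

In the first case the default handler would keep serving the list request, which is exactly the failure mode the review describes.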
##########
superset-frontend/src/SqlLab/actions/sqlLab.test.ts:
##########
@@ -134,11 +144,14 @@ describe('async actions', () => {
const store = mockStore(initialState);
return store.dispatch(actions.saveQuery(query, queryId)).then(() => {
const call = fetchMock.callHistory.calls(saveQueryEndpoint)[0];
- const formData = JSON.parse(call.options.body);
+ const formData = JSON.parse(call.options.body as string);
const mappedQueryToServer = actions.convertQueryToServer(query);
+ // The 'id' field is excluded from the POST payload since it's for new queries
Object.keys(mappedQueryToServer).forEach(key => {
Review Comment:
**Suggestion:** The test comment claims that the `id` field is excluded from
the POST payload for new queries, but the assertion merely skips checking `id`
rather than asserting it is absent. This weakens the contract: a regression
that mistakenly sends `id` to the backend would still pass. Explicitly assert
that `formData.id` is undefined before checking the other fields. [logic error]
<details>
<summary><b>Severity Level:</b> Major ⚠️</summary>
```mdx
- ❌ Backend may receive unexpected id field in saveQuery requests.
- ⚠️ Unit test fails to catch regressions adding id silently.
- ⚠️ Save query API contract integrity weakened.
```
</details>
```suggestion
expect(formData.id).toBeUndefined();
```
<details>
<summary><b>Steps of Reproduction ✅ </b></summary>
```mdx
1. Run the SqlLab unit tests in this repo: execute yarn test (or npm test), which runs jest
   and includes the file superset-frontend/src/SqlLab/actions/sqlLab.test.ts.
2. Observe the test 'posts the correct query object' in
   superset-frontend/src/SqlLab/actions/sqlLab.test.ts exercising
   actions.saveQuery(query, queryId) (see the test block starting ~line 128 and the
   assertion block at lines 150-155).
3. Currently the test parses the POST body (formData) and iterates mappedQueryToServer
   keys but only skips assertions for key === 'id' — it never asserts formData.id is
   absent. If actions.convertQueryToServer or actions.saveQuery mistakenly includes id
   in the POST payload, this test will still pass because no negative assertion exists.
4. To reproduce the regression concretely: modify the implementation under test
   (src/SqlLab/actions/sqlLab.ts) so that the POST payload explicitly includes id (for
   example, include id when building the body), then run the single test file (jest
   superset-frontend/src/SqlLab/actions/sqlLab.test.ts). The current test will not fail,
   demonstrating the weak assertion; replacing the existing snippet with the improved
   code (adding expect(formData.id).toBeUndefined()) will cause the test to fail when
   id is present in the payload.
5. Evidence locations:
   - Test file: superset-frontend/src/SqlLab/actions/sqlLab.test.ts lines ~128-156
     (assertion block - see lines 150-155 in PR).
   - Implementation under test: superset-frontend/src/SqlLab/actions/sqlLab.ts (caller
     of saveQuery/convertQueryToServer) — a change here would trigger the regression
     tested above.
6. Conclusion: the suggestion is reproducible by introducing id into the POST body in
   the action implementation and running the existing test; adding the explicit
   expect(formData.id).toBeUndefined() makes the test catch regressions.
```
</details>
<details>
<summary><b>Prompt for AI Agent 🤖 </b></summary>
```mdx
This is a comment left during a code review.
**Path:** superset-frontend/src/SqlLab/actions/sqlLab.test.ts
**Line:** 151:151
**Comment:**
*Logic Error: The test comment claims that the `id` field is excluded
from the POST payload for new queries, but the current assertion only skips
checking `id` without actually asserting that it is absent; this weakens the
contract and would allow a regression where `id` is mistakenly sent to the
backend without the test failing, so you should explicitly assert that
`formData.id` is undefined before checking the other fields.
Validate the correctness of the flagged issue. If correct, How can I resolve
this? If you propose a fix, implement it and please make it concise.
```
</details>
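The strengthened assertion the comment asks for can be sketched in isolation. The payload objects below are illustrative stand-ins, not Superset's real query shapes:

```typescript
// Sketch of the strengthened check: first assert that `id` is absent from
// the POST payload, then compare the remaining fields while skipping `id`.
// The two objects are hypothetical stand-ins for the real query payloads.
const mappedQueryToServer: Record<string, unknown> = {
  id: 42, // present on the client-side query object
  label: "my query",
  sql: "SELECT 1",
};

const formData: Record<string, unknown> = {
  label: "my query",
  sql: "SELECT 1",
  // no `id` key: new queries must not send one
};

// Negative assertion first: a regression that adds `id` now fails loudly.
const idIsAbsent = formData.id === undefined;

// Then compare every remaining field, skipping `id` as the test already does.
const otherFieldsMatch = Object.keys(mappedQueryToServer)
  .filter(key => key !== "id")
  .every(key => formData[key] === mappedQueryToServer[key]);
```

In the Jest test this pair corresponds to `expect(formData.id).toBeUndefined()` followed by the existing per-key `expect` loop.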
##########
superset-frontend/src/pages/DatasetList/DatasetList.behavior.test.tsx:
##########
@@ -177,12 +173,10 @@ test('500 error triggers danger toast with error message', async () => {
test('network timeout triggers danger toast', async () => {
const addDangerToast = jest.fn();
- fetchMock.removeRoute(API_ENDPOINTS.DATASETS);
- fetchMock.get(
- API_ENDPOINTS.DATASETS,
- { throws: new Error('Network timeout') },
- { name: API_ENDPOINTS.DATASETS },
- );
+ fetchMock.removeRoutes({ names: [API_ENDPOINTS.DATASETS] });
Review Comment:
**Suggestion:** In the delete behavior test, `fetchMock.removeRoutes` is
again called with a `{ names: [...] }` object, which does not match
fetch-mock's `removeRoutes(nameOrMatcher)` signature and therefore fails to
remove the default `DATASETS` route; as a result the custom single-dataset
response might not be used and the test can exercise the wrong data.
[logic error]
<details>
<summary><b>Severity Level:</b> Major ⚠️</summary>
```mdx
- ❌ Delete-flow tests return unexpected dataset lists.
- ⚠️ CI test flakiness for list-related tests.
- ⚠️ Misleading test coverage for delete behavior.
```
</details>
```suggestion
fetchMock.removeRoutes(API_ENDPOINTS.DATASETS);
```
<details>
<summary><b>Steps of Reproduction ✅ </b></summary>
```mdx
1. Inspect the 'clicking delete opens modal with related objects count' test in
   superset-frontend/src/pages/DatasetList/DatasetList.behavior.test.tsx; the PR diff
   added fetchMock.removeRoutes({ names: [API_ENDPOINTS.DATASETS] }); at diff line 176.
2. Run this delete test. setupMocks() earlier registers a DATASETS route; because
   removeRoutes was called with an unsupported options object, that original route may
   persist and continue to serve responses.
3. The test then registers fetchMock.get(API_ENDPOINTS.DATASETS, { result:
   [datasetToDelete], count: 1 }); expecting the single-item dataset response to be used.
   If the original route remains active, the component may receive different dataset
   data and the test can either fail or pass incorrectly depending on the returned data.
4. Verify reproduction by adding an assertion that inspects fetchMock.calls or by
   running the test and logging the HTTP response body the component receives; switching
   to fetchMock.removeRoutes(API_ENDPOINTS.DATASETS) ensures the original route is
   removed and the intended single-dataset stub is used.
```
</details>
<details>
<summary><b>Prompt for AI Agent 🤖 </b></summary>
```mdx
This is a comment left during a code review.
**Path:**
superset-frontend/src/pages/DatasetList/DatasetList.behavior.test.tsx
**Line:** 176:176
**Comment:**
*Logic Error: In the delete behavior test, `fetchMock.removeRoutes` is
again called with a `{ names: [...] }` object, which does not match
fetch-mock's `removeRoutes(nameOrMatcher)` signature and therefore fails to
remove the default `DATASETS` route, meaning the custom single-dataset response
might not be used and the test can exercise the wrong data.
Validate the correctness of the flagged issue. If correct, How can I resolve
this? If you propose a fix, implement it and please make it concise.
```
</details>
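The call-history sanity check suggested in step 4 above can be sketched with a minimal request recorder. The recorder and the endpoint string are hypothetical stand-ins, not fetch-mock's real `callHistory` API:

```typescript
// Sketch of a call-history sanity check: record each request against an
// endpoint and assert on the count before and after a simulated list
// refresh. Hypothetical recorder, not fetch-mock's callHistory API.
const callLog: string[] = [];

function recordCall(endpoint: string): void {
  callLog.push(endpoint);
}

function callsTo(endpoint: string): number {
  return callLog.filter(e => e === endpoint).length;
}

// Illustrative endpoint matcher, not Superset's actual API_ENDPOINTS value.
const DATASETS = "glob:*/api/v1/dataset/*";

recordCall(DATASETS); // initial list load after render
const callsBeforeRefresh = callsTo(DATASETS);

recordCall(DATASETS); // refresh after the delete completes
const callsAfterRefresh = callsTo(DATASETS);
```

An assertion that `callsAfterRefresh === callsBeforeRefresh + 1` pins the refresh to exactly one additional list fetch, which only holds if the intended stub is the sole active route.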
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]