Copilot commented on code in PR #3427:
URL: https://github.com/apache/polaris/pull/3427#discussion_r2684548560
##########
polaris-core/src/main/java/org/apache/polaris/core/persistence/resolver/Resolver.java:
##########
@@ -249,7 +250,43 @@ public ResolverStatus resolveAll() {
do {
status = runResolvePass();
count++;
- } while (status == null && ++count < 1000);
+ } while (status == null && ++count < MAX_RESOLVE_PASSES);
+
+ // assert if status is null
+ this.diagnostics.checkNotNull(status, "cannot_resolve_all_entities");
+
+ // remember the resolver status
+ this.resolverStatus = status;
+
+ // all has been resolved
+ return status;
+ }
+
+ /**
+ * Run the resolution process without resolving the caller principal or any roles.
+ *
+ * <p>This resolves the reference catalog (if specified), any requested top-level entities, and
+ * the requested paths, while preserving the same validation and retry behavior as {@link
+ * #resolveAll()}.
+ *
+ * @return the status of the resolver. If success, all requested paths have been resolved and the
+ * getResolvedXYZ() method can be called. This method returns SUCCESS,
+ * PATH_COULD_NOT_BE_FULLY_RESOLVED, or ENTITY_COULD_NOT_BE_RESOLVED and never returns
+ * CALLER_PRINCIPAL_DOES_NOT_EXIST.
+ */
+ public ResolverStatus resolvePathsOnly() {
+ // can only be called if the resolver has not yet been called
+ this.diagnostics.check(resolverStatus == null, "resolver_called");
+
+ // retry until a pass terminates, or we reached the maximum iteration count. Note that we
+ // should finish normally in no more than few passes so the 1000 limit is really to avoid
+ // spinning forever if there is a bug.
Review Comment:
The comment mentions "1000 limit" but should reference the constant
MAX_RESOLVE_PASSES instead to improve maintainability. Consider changing "the
1000 limit" to "the MAX_RESOLVE_PASSES limit" or just "this limit".
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]