Copilot commented on code in PR #6060:
URL: https://github.com/apache/shenyu/pull/6060#discussion_r2279058660
##########
shenyu-plugin/shenyu-plugin-ai/shenyu-plugin-ai-common/src/main/java/org/apache/shenyu/plugin/ai/common/spring/ai/factory/OpenAiModelFactory.java:
##########
@@ -42,7 +42,7 @@ public ChatModel createAiModel(final AiCommonConfig config) {
.build();
OpenAiChatOptions.Builder model = OpenAiChatOptions.builder().model(config.getModel());
Optional.ofNullable(config.getTemperature()).ifPresent(model::temperature);
- Optional.ofNullable(config.getMaxTokens()).ifPresent(model::maxTokens);
+ Optional.ofNullable(config.getMaxTokens()).ifPresent(model::maxCompletionTokens);
Review Comment:
The change from `maxTokens` to `maxCompletionTokens` targets a newer Spring AI
API. Verify that the Spring AI version pinned in the project dependencies
actually exposes `maxCompletionTokens`, and that all tests pass with this API
change.
```suggestion
Optional.ofNullable(config.getMaxTokens()).ifPresent(model::maxTokens);
```
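As a quick sanity check, a reflection probe can confirm which builder method the resolved Spring AI jar actually exposes without compiling against either API. The class `SpringAiApiProbe` below is an illustrative sketch, not part of this PR; only the fully-qualified builder class name comes from Spring AI itself:

```java
import java.util.Arrays;

// Hypothetical compatibility probe (not part of this PR): reports at runtime
// whether the Spring AI jar on the classpath exposes a given builder method.
public class SpringAiApiProbe {

    // Returns true if the named class is on the classpath and declares a
    // public method with the given name; false if the class is absent.
    static boolean hasBuilderMethod(final String className, final String methodName) {
        try {
            return Arrays.stream(Class.forName(className).getMethods())
                    .anyMatch(m -> m.getName().equals(methodName));
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        final String builder = "org.springframework.ai.openai.OpenAiChatOptions$Builder";
        System.out.println("maxTokens supported: " + hasBuilderMethod(builder, "maxTokens"));
        System.out.println("maxCompletionTokens supported: " + hasBuilderMethod(builder, "maxCompletionTokens"));
    }
}
```

Running this against the project's dependency tree shows immediately whether the suggested revert (or the new method) is the one that compiles and resolves.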
##########
shenyu-common/src/main/java/org/apache/shenyu/common/dto/convert/rule/AiProxyHandle.java:
##########
@@ -218,24 +242,199 @@ public boolean equals(final Object o) {
&& Objects.equals(model, that.model)
&& Objects.equals(temperature, that.temperature)
&& Objects.equals(maxTokens, that.maxTokens)
- && Objects.equals(stream, that.stream);
+ && Objects.equals(stream, that.stream)
+ && Objects.equals(fallbackConfig, that.fallbackConfig);
}
-
+
@Override
public int hashCode() {
-        return Objects.hash(provider, baseUrl, apiKey, model, temperature, maxTokens, stream);
+        return Objects.hash(provider, baseUrl, apiKey, model, temperature, maxTokens, stream, fallbackConfig);
}
-
+
@Override
public String toString() {
return "AiProxyHandle{"
+ "provider='" + provider + '\''
+ ", baseUrl='" + baseUrl + '\''
- + ", apiKey='" + apiKey + '\''
+ + ", apiKey='" + maskApiKey(apiKey) + '\''
+ ", model='" + model + '\''
+ ", temperature=" + temperature
+ ", maxTokens=" + maxTokens
+ ", stream=" + stream
+ + ", fallbackConfig=" + fallbackConfig
+ '}';
}
-}
+
+ public static String maskApiKey(final String apiKey) {
+ if (Objects.isNull(apiKey) || apiKey.length() <= 7) {
+ return apiKey;
+ }
+        return apiKey.substring(0, 3) + "****" + apiKey.substring(apiKey.length() - 4);
Review Comment:
The `maskApiKey` method should handle edge cases more robustly. If the API
key is 7 characters or fewer, it is returned in full and unmasked, which
could expose short API keys. Consider always masking at least some characters
regardless of length.
```suggestion
if (Objects.isNull(apiKey) || apiKey.isEmpty()) {
return apiKey;
}
int len = apiKey.length();
if (len <= 4) {
// Show only the first character, mask the rest
return apiKey.substring(0, 1) + "***";
} else if (len <= 7) {
// Show first and last character, mask the middle
return apiKey.substring(0, 1) + "***" + apiKey.substring(len -
1);
} else {
// Show first 3 and last 4 characters, mask the middle
return apiKey.substring(0, 3) + "****" + apiKey.substring(len -
4);
}
```
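For reference, the suggested masking rules can be exercised standalone. The wrapper class name `ApiKeyMasker` below is a placeholder for illustration, not a class in this PR:

```java
// Standalone sketch of the suggested masking rules (class name is illustrative).
public final class ApiKeyMasker {

    private ApiKeyMasker() {
    }

    public static String maskApiKey(final String apiKey) {
        if (apiKey == null || apiKey.isEmpty()) {
            return apiKey;
        }
        final int len = apiKey.length();
        if (len <= 4) {
            // Show only the first character, mask the rest.
            return apiKey.substring(0, 1) + "***";
        }
        if (len <= 7) {
            // Show first and last character, mask the middle.
            return apiKey.substring(0, 1) + "***" + apiKey.substring(len - 1);
        }
        // Show first 3 and last 4 characters, mask the middle.
        return apiKey.substring(0, 3) + "****" + apiKey.substring(len - 4);
    }

    public static void main(String[] args) {
        System.out.println(maskApiKey("abc"));             // a***
        System.out.println(maskApiKey("abcdefg"));         // a***g
        System.out.println(maskApiKey("sk-1234567890ab")); // sk-****90ab
    }
}
```

With these rules, no key longer than one character is ever echoed in full, which closes the short-key gap the comment describes.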
##########
shenyu-plugin/shenyu-plugin-ai/shenyu-plugin-ai-proxy-enhanced/src/main/java/org/apache/shenyu/plugin/ai/proxy/enhanced/service/AiProxyExecutorService.java:
##########
@@ -0,0 +1,118 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.shenyu.plugin.ai.proxy.enhanced.service;
+
+import org.apache.shenyu.plugin.ai.common.strategy.SimpleModelFallbackStrategy;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.springframework.ai.chat.client.ChatClient;
+import org.springframework.ai.chat.model.ChatResponse;
+import org.springframework.ai.retry.NonTransientAiException;
+import org.springframework.stereotype.Service;
+import reactor.core.publisher.Flux;
+import reactor.core.publisher.Mono;
+import reactor.core.scheduler.Schedulers;
+import reactor.util.retry.Retry;
+
+import java.time.Duration;
+import java.util.Optional;
+
+/**
+ * AI proxy executor service.
+ */
+@Service
+public class AiProxyExecutorService {
+
+    private static final Logger LOG = LoggerFactory.getLogger(AiProxyExecutorService.class);
+
+ /**
+ * Execute the AI call with retry and fallback.
+ *
+ * @param mainClient the main chat client
+ * @param fallbackClientOpt the optional fallback chat client
+ * @param requestBody the request body
+ * @return a Mono containing the ChatResponse
+ */
+    public Mono<ChatResponse> execute(final ChatClient mainClient, final Optional<ChatClient> fallbackClientOpt, final String requestBody) {
+        final Mono<ChatResponse> mainCall = doChatCall(mainClient, requestBody);
+
+        return mainCall
+                .retryWhen(Retry.backoff(3, Duration.ofSeconds(1))
+                        .filter(throwable -> !(throwable instanceof NonTransientAiException))
+                        .onRetryExhaustedThrow((retryBackoffSpec, retrySignal) -> {
+                            LOG.warn("Retries exhausted for AI call after {} attempts.", retrySignal.totalRetries(), retrySignal.failure());
+                            return new NonTransientAiException("Retries exhausted. Triggering fallback.", retrySignal.failure());
+                        }))
+                .onErrorResume(NonTransientAiException.class,
+                        throwable -> handleFallback(throwable, fallbackClientOpt, requestBody));
+    }
+
+    protected Mono<ChatResponse> doChatCall(final ChatClient client, final String requestBody) {
+        return Mono.fromCallable(() -> client.prompt().user(requestBody).call().chatResponse())
+                .subscribeOn(Schedulers.boundedElastic());
+    }
+
+    private Mono<ChatResponse> handleFallback(final Throwable throwable, final Optional<ChatClient> fallbackClientOpt, final String requestBody) {
+        LOG.warn("AI main call failed or retries exhausted, attempting to fallback...", throwable);
+
+        if (fallbackClientOpt.isEmpty()) {
+            return Mono.error(throwable);
+        }
+
+        return SimpleModelFallbackStrategy.INSTANCE.fallback(fallbackClientOpt.get(), requestBody, throwable);
+    }
+
+ /**
+     * Execute the streaming AI call with retry and fallback.
+ *
+ * @param mainClient the main chat client
+ * @param fallbackClientOpt the optional fallback chat client
+ * @param requestBody the request body
+ * @return a Flux containing the ChatResponse
+ */
+    public Flux<ChatResponse> executeStream(final ChatClient mainClient, final Optional<ChatClient> fallbackClientOpt, final String requestBody) {
+        final Flux<ChatResponse> mainStream = doChatStream(mainClient, requestBody);
+
+        return mainStream
+                .retryWhen(Retry.max(1)
+                        .onRetryExhaustedThrow((retryBackoffSpec, retrySignal) -> {
+                            LOG.warn("Retrying stream once failed. Attempts: {}. Triggering fallback.", retrySignal.totalRetries(), retrySignal.failure());
+                            return new NonTransientAiException("Stream failed after 1 retry. Triggering fallback.");
Review Comment:
The stream retry logic uses `Retry.max(1)`, allowing a single immediate
retry, while the non-stream version uses `Retry.backoff(3,
Duration.ofSeconds(1))` with three attempts, exponential backoff, and a
filter that skips retries for `NonTransientAiException`. Consider aligning
the two retry strategies, or documenting why streaming requests need
different retry behavior.
```suggestion
                .retryWhen(Retry.backoff(3, Duration.ofSeconds(1))
                        .filter(throwable -> !(throwable instanceof NonTransientAiException))
                        .onRetryExhaustedThrow((retryBackoffSpec, retrySignal) -> {
                            LOG.warn("Retries exhausted for AI stream call after {} attempts.", retrySignal.totalRetries(), retrySignal.failure());
                            return new NonTransientAiException("Retries exhausted. Triggering fallback.", retrySignal.failure());
```
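One way to keep the two paths aligned is to define the retry parameters in a single place that both `execute` and `executeStream` consult. The plain-JDK sketch below illustrates the intended policy shape without Reactor; the class and method names (`SharedRetryPolicy`, `execute`) are invented for illustration and are not from this PR:

```java
import java.util.function.Predicate;
import java.util.function.Supplier;

// Plain-JDK sketch (no Reactor) of a retry policy shared by the call and
// stream paths: retry transient failures up to a shared attempt budget,
// and fail fast on non-transient ones so the fallback path takes over.
public class SharedRetryPolicy {

    private final int maxRetries;
    private final Predicate<Throwable> transientFailure;

    public SharedRetryPolicy(final int maxRetries, final Predicate<Throwable> transientFailure) {
        this.maxRetries = maxRetries;
        this.transientFailure = transientFailure;
    }

    // Runs the call, retrying only transient failures up to maxRetries times.
    public <T> T execute(final Supplier<T> call) {
        RuntimeException last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return call.get();
            } catch (RuntimeException e) {
                if (!transientFailure.test(e)) {
                    throw e; // non-transient: fail fast so fallback can run
                }
                last = e;
            }
        }
        throw last; // retries exhausted: caller triggers the fallback client
    }
}
```

In the Reactor version, the analogous move is to build one `Retry` spec (attempts, backoff, `NonTransientAiException` filter) as a shared constant and pass it to `retryWhen` in both `execute` and `executeStream`, so a change to the attempt budget applies to both paths at once.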
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]