This is an automated email from the ASF dual-hosted git repository.

yilialin pushed a commit to branch docs-improve-ai-prmopt-template
in repository https://gitbox.apache.org/repos/asf/apisix.git

commit dad8365102c0406fa261b24ddfd3e449fcf7a541
Author: Yilia <[email protected]>
AuthorDate: Tue Jan 13 17:12:45 2026 +0800

    update
---
 docs/en/latest/plugins/ai-prompt-template.md | 219 ++++++++++++++++++++++++---
 docs/zh/latest/plugins/ai-prompt-template.md | 218 +++++++++++++++++++++++---
 2 files changed, 388 insertions(+), 49 deletions(-)

diff --git a/docs/en/latest/plugins/ai-prompt-template.md b/docs/en/latest/plugins/ai-prompt-template.md
index 9ca4e1f70..ad53b9713 100644
--- a/docs/en/latest/plugins/ai-prompt-template.md
+++ b/docs/en/latest/plugins/ai-prompt-template.md
@@ -5,7 +5,7 @@ keywords:
   - API Gateway
   - Plugin
   - ai-prompt-template
-description: This document contains information about the Apache APISIX ai-prompt-template Plugin.
+description: The `ai-prompt-template` plugin supports pre-configuring prompt templates that only accept user inputs in designated template variables, in a "fill in the blank" fashion.
 ---
 
 <!--
@@ -27,28 +27,45 @@ description: This document contains information about the Apache APISIX ai-promp
 #
 -->
 
+<head>
+  <link rel="canonical" href="https://docs.api7.ai/hub/ai-prompt-template" />
+</head>
+
 ## Description
 
-The `ai-prompt-template` plugin simplifies access to LLM providers, such as OpenAI and Anthropic, and their models by predefining the request format
-using a template, which only allows users to pass customized values into template variables.
+The `ai-prompt-template` plugin simplifies access to LLM providers, such as OpenAI and Anthropic, and their models. It pre-configures prompt templates that only accept user inputs in designated template variables, in a "fill in the blank" fashion.
 
 ## Plugin Attributes
 
-| **Field**                             | **Required** | **Type** | **Description** |
-| ------------------------------------- | ------------ | -------- | --------------- |
-| `templates`                           | Yes          | Array    | An array of template objects |
-| `templates.name`                      | Yes          | String   | Name of the template. |
-| `templates.template.model`            | Yes          | String   | Model of the AI Model, for example `gpt-4` or `gpt-3.5`. See your LLM provider API documentation for more available models. |
-| `templates.template.messages.role`    | Yes          | String   | Role of the message (`system`, `user`, `assistant`) |
-| `templates.template.messages.content` | Yes          | String   | Content of the message. |
+| **Field** | **Required** | **Type** | **Description** |
+| :--- | :--- | :--- | :--- |
+| `templates` | Yes | Array | An array of template objects. |
+| `templates.name` | Yes | String | Name of the template. When requesting the route, the request should include the template name that corresponds to the configured template. |
+| `templates.template` | Yes | Object | Template specification. |
+| `templates.template.model` | Yes | String | Name of the AI Model, such as `gpt-4` or `gpt-3.5`. See your LLM provider API documentation for more available models. |
+| `templates.template.messages` | Yes | Array | Template message specification. |
+| `templates.template.messages.role` | Yes | String | Role of the message, such as `system`, `user`, or `assistant`. |
+| `templates.template.messages.content` | Yes | String | Content of the message (prompt). |
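
Conceptually, each `{{variable}}` placeholder in a message's `content` is filled in with the matching field from the request body. The following Python sketch only illustrates this "fill in the blank" behavior; the plugin itself is implemented in Lua inside APISIX, and `render_template` is a hypothetical helper, not part of its API:

```python
import re

# Minimal sketch of the "fill in the blank" substitution described above.
# The real substitution happens in the plugin's Lua code; render_template
# is a hypothetical helper used purely for illustration.
def render_template(template: str, variables: dict) -> str:
    """Replace each {{name}} placeholder with the matching variable value."""
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        template,
    )

print(render_template("Explain {{prompt}}.", {"prompt": "quick sort"}))
# Explain quick sort.
```

Placeholders with no matching request field are left untouched by this sketch.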
 
-## Example usage
+## Examples
 
-Create a route with the `ai-prompt-template` plugin like so:
+The following examples use OpenAI as the upstream service provider. Before proceeding, create an [OpenAI account](https://openai.com) and an [API key](https://openai.com/blog/openai-api). You can optionally save the key to an environment variable as follows:
+
+```shell
+export OPENAI_API_KEY=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   # replace with your API key
+```
+
+If you are working with other LLM providers, refer to the provider's documentation to obtain an API key.
+
+### Configure a Template for Open Questions in Custom Complexity
+
+The following example demonstrates how to use the `ai-prompt-template` plugin to configure a template that answers open questions with a user-specified response complexity.
+
+Create a route to the chat completion endpoint with pre-configured prompt templates as follows:
 
 ```shell
 curl "http://127.0.0.1:9180/apisix/admin/routes/1" -X PUT \
-  -H "X-API-KEY: ${ADMIN_API_KEY}" \
+  -H "X-API-KEY: ${admin_key}" \
   -d '{
     "uri": "/v1/chat/completions",
     "upstream": {
@@ -60,16 +77,27 @@ curl "http://127.0.0.1:9180/apisix/admin/routes/1" -X PUT \
       "pass_host": "node"
     },
     "plugins": {
+      "proxy-rewrite": {
+        "headers": {
+          "set": {
+            "Authorization": "Bearer '"$OPENAI_API_KEY"'"
+          }
+        }
+      },
       "ai-prompt-template": {
         "templates": [
           {
-            "name": "level of detail",
+            "name": "QnA with complexity",
             "template": {
               "model": "gpt-4",
               "messages": [
+                {
+                  "role": "system",
+                  "content": "Answer in {{complexity}}."
+                },
                 {
                   "role": "user",
-                  "content": "Explain about {{ topic }} in {{ level }}."
+                  "content": "Explain {{prompt}}."
                 }
               ]
             }
@@ -80,23 +108,164 @@ curl "http://127.0.0.1:9180/apisix/admin/routes/1" -X PUT \
   }'
 ```
 
+Send a POST request to the route with a sample question and the desired answer complexity in the request body:
-Now send a request:
 
 ```shell
-curl http://127.0.0.1:9080/v1/chat/completions -i -XPOST  -H 'Content-Type: application/json' -d '{
-  "template_name": "level of detail",
-  "topic": "psychology",
-  "level": "brief"
-}' -H "Authorization: Bearer <your token here>"
+curl "http://127.0.0.1:9080/v1/chat/completions" -X POST \
+  -H "Content-Type: application/json" \
+  -H "Host: api.openai.com:443" \
+  -d '{
+    "template_name": "QnA with complexity",
+    "complexity": "brief",
+    "prompt": "quick sort"
+  }'
+```
+
+You should receive a response similar to the following:
+
+```json
+{
+  "choices": [
+    {
+      "finish_reason": "stop",
+      "index": 0,
+      "message": {
+        "content": "Quick sort is a highly efficient sorting algorithm that uses a divide-and-conquer approach to arrange elements in a list or array in order. Here’s a brief explanation:\n\n1. **Choose a Pivot**: Select an element from the list as a 'pivot'. Common methods include choosing the first element, the last element, the middle element, or a random element.\n\n2. **Partitioning**: Rearrange the elements in the list such that all elements less than the pivot are moved before it, [...]
+        "role": "assistant"
+      }
+    }
+  ],
+  "created": 1723194057,
+  "id": "chatcmpl-9uFmTYN4tfwaXZjyOQwcp0t5law4x",
+  "model": "gpt-4o-2024-05-13",
+  "object": "chat.completion",
+  "system_fingerprint": "fp_abc28019ad",
+  "usage": {
+    "completion_tokens": 234,
+    "prompt_tokens": 18,
+    "total_tokens": 252
+  }
+}
+```
+
+### Configure Multiple Templates
+
+The following example demonstrates how to configure multiple templates on the same route. When requesting the route, users can pass custom inputs to different templates by specifying the template name.
+
+This example continues from the [previous example](#configure-a-template-for-open-questions-in-custom-complexity). Update the plugin with another template:
+
+```shell
+curl "http://127.0.0.1:9180/apisix/admin/routes/1" -X PATCH \
+  -H "X-API-KEY: ${admin_key}" \
+  -d '{
+    "uri": "/v1/chat/completions",
+    "upstream": {
+      "type": "roundrobin",
+      "nodes": {
+        "api.openai.com:443": 1
+      },
+      "scheme": "https",
+      "pass_host": "node"
+    },
+    "plugins": {
+      "ai-prompt-template": {
+        "templates": [
+          {
+            "name": "QnA with complexity",
+            "template": {
+              "model": "gpt-4",
+              "messages": [
+                {
+                  "role": "system",
+                  "content": "Answer in {{complexity}}."
+                },
+                {
+                  "role": "user",
+                  "content": "Explain {{prompt}}."
+                }
+              ]
+            }
+          },
+          {
+            "name": "echo",
+            "template": {
+              "model": "gpt-4",
+              "messages": [
+                {
+                  "role": "user",
+                  "content": "Echo {{prompt}}."
+                }
+              ]
+            }
+          }
+        ]
+      }
+    }
+  }'
+```
+
+You should now be able to use both templates through the same route.
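
Dispatch between the configured templates is driven by the `template_name` field in the request body. The following Python sketch illustrates this selection logic; the helper names are hypothetical, and the actual matching is done by the plugin in Lua:

```python
import re

# Hypothetical sketch of how a request is dispatched to one of the configured
# templates by "template_name", then expanded into the upstream request body.
TEMPLATES = {
    "QnA with complexity": {
        "model": "gpt-4",
        "messages": [
            {"role": "system", "content": "Answer in {{complexity}}."},
            {"role": "user", "content": "Explain {{prompt}}."},
        ],
    },
    "echo": {
        "model": "gpt-4",
        "messages": [{"role": "user", "content": "Echo {{prompt}}."}],
    },
}

def build_upstream_body(body: dict) -> dict:
    """Select the template named by the request and fill its placeholders."""
    template = TEMPLATES[body["template_name"]]

    def fill(text: str) -> str:
        return re.sub(
            r"\{\{\s*(\w+)\s*\}\}",
            lambda m: str(body.get(m.group(1), m.group(0))),
            text,
        )

    return {
        "model": template["model"],
        "messages": [
            {"role": m["role"], "content": fill(m["content"])}
            for m in template["messages"]
        ],
    }

print(build_upstream_body({"template_name": "echo", "prompt": "hello APISIX"}))
```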
+
+Send a POST request to the route and use the first template:
+
+```shell
+curl "http://127.0.0.1:9080/v1/chat/completions" -X POST \
+  -H "Content-Type: application/json" \
+  -H "Host: api.openai.com:443" \
+  -d '{
+    "template_name": "QnA with complexity",
+    "complexity": "brief",
+    "prompt": "quick sort"
+  }'
+```
+
+You should receive a response similar to the following:
+
+```json
+{
+  "choices": [
+    {
+      "finish_reason": "stop",
+      "index": 0,
+      "message": {
+        "content": "Quick sort is a highly efficient sorting algorithm that uses a divide-and-conquer approach to arrange elements in a list or array in order. Here’s a brief explanation:\n\n1. **Choose a Pivot**: Select an element from the list as a 'pivot'. Common methods include choosing the first element, the last element, the middle element, or a random element.\n\n2. **Partitioning**: Rearrange the elements in the list such that all elements less than the pivot are moved before it, [...]
+        "role": "assistant"
+      }
+    }
+  ],
+  ...
+}
+```
+
+Send a POST request to the route and use the second template:
+
+```shell
+curl "http://127.0.0.1:9080/v1/chat/completions" -X POST \
+  -H "Content-Type: application/json" \
+  -H "Host: api.openai.com:443" \
+  -d '{
+    # highlight-next-line
+    "template_name": "echo",
+    "prompt": "hello APISIX"
+  }'
 ```
 
-Then the request body will be modified to something like this:
+You should receive a response similar to the following:
 
 ```json
 {
-  "model": "some model",
-  "messages": [
-    { "role": "user", "content": "Explain about psychology in brief." }
-  ]
+  "choices": [
+    {
+      "finish_reason": "stop",
+      "index": 0,
+      "message": {
+        "content": "hello APISIX",
+        "role": "assistant"
+      }
+    }
+  ],
+  ...
 }
 ```
diff --git a/docs/zh/latest/plugins/ai-prompt-template.md b/docs/zh/latest/plugins/ai-prompt-template.md
index fb329280c..009788d3b 100644
--- a/docs/zh/latest/plugins/ai-prompt-template.md
+++ b/docs/zh/latest/plugins/ai-prompt-template.md
@@ -5,7 +5,7 @@ keywords:
   - API 网关
   - Plugin
   - ai-prompt-template
-description: 本文档包含有关 Apache APISIX ai-prompt-template 插件的信息。
+description: "`ai-prompt-template` 插件支持预先配置提示词模板,这些模板仅接受用户在指定的模板变量中输入,采用“填空”的方式。"
 ---
 
 <!--
@@ -27,27 +27,138 @@ description: 本文档包含有关 Apache APISIX ai-prompt-template 插件的信
 #
 -->
 
+<head>
+  <link rel="canonical" href="https://docs.api7.ai/hub/ai-prompt-template" />
+</head>
+
 ## 描述
 
-`ai-prompt-template` 插件通过使用模板预定义请求格式,简化了对 LLM 提供商(如 OpenAI 和 Anthropic)及其模型的访问,只允许用户将自定义值传递到模板变量中。
+`ai-prompt-template` 插件简化了对 OpenAI、Anthropic 等大语言模型提供商及其模型的访问。它预先配置提示词模板,这些模板仅接受用户在指定的模板变量中输入,采用“填空”的方式。
 
 ## 插件属性
 
-| **字段**                              | **必选项** | **类型** | **描述** |
-| ------------------------------------- | ---------- | -------- | -------- |
-| `templates`                           | 是         | Array    | 模板对象数组 |
-| `templates.name`                      | 是         | String   | 模板的名称。 |
-| `templates.template.model`            | 是         | String   | AI 模型的名称,例如 `gpt-4` 或 `gpt-3.5`。有关更多可用模型,请参阅您的 LLM 提供商 API 文档。 |
-| `templates.template.messages.role`    | 是         | String   | 消息的角色(`system`、`user`、`assistant`) |
-| `templates.template.messages.content` | 是         | String   | 消息的内容。 |
+| **字段** | **是否必填** | **类型** | **描述** |
+| :--- | :--- | :--- | :--- |
+| `templates` | 是 | 数组 | 模板对象数组。 |
+| `templates.name` | 是 | 字符串 | 模板的名称。在请求路由时,请求中应包含与所配置模板相对应的模板名称。 |
+| `templates.template` | 是 | 对象 | 模板规范。 |
+| `templates.template.model` | 是 | 字符串 | AI 模型的名称,例如 `gpt-4` 或 `gpt-3.5`。更多可用模型请参阅 LLM 提供商的 API 文档。 |
+| `templates.template.messages` | 是 | 数组 | 模板消息规范。 |
+| `templates.template.messages.role` | 是 | 字符串 | 消息的角色,例如 `system`、`user` 或 `assistant`。 |
+| `templates.template.messages.content` | 是 | 字符串 | 消息(提示词)的内容。 |
 
 ## 使用示例
 
-创建一个带有 `ai-prompt-template` 插件的路由,如下所示:
+以下示例将使用 OpenAI 作为上游服务提供商。开始之前,请先创建一个 [OpenAI 账户](https://openai.com) 和一个 [API 密钥](https://openai.com/blog/openai-api)。你可以选择将密钥保存到环境变量中,如下所示:
+
+```shell
+export OPENAI_API_KEY=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   # 替换为你的 API 密钥
+```
+
+如果你正在使用其他 LLM 提供商,请参阅该提供商的文档以获取 API 密钥。
+
+### 为自定义复杂度的开放式问题配置模板
+
+以下示例演示了如何使用 `ai-prompt-template` 插件配置一个模板,该模板可用于回答开放式问题并接受用户指定的回答复杂度。
+
+创建一个指向聊天补全端点的路由,并配置预定义的提示词模板:
 
 ```shell
 curl "http://127.0.0.1:9180/apisix/admin/routes/1" -X PUT \
-  -H "X-API-KEY: ${ADMIN_API_KEY}" \
+  -H "X-API-KEY: ${admin_key}" \
+  -d '{
+    "uri": "/v1/chat/completions",
+    "upstream": {
+      "type": "roundrobin",
+      "nodes": {
+        "api.openai.com:443": 1
+      },
+      "scheme": "https",
+      "pass_host": "node"
+    },
+    "plugins": {
+      "proxy-rewrite": {
+        "headers": {
+          "set": {
+            "Authorization": "Bearer '"$OPENAI_API_KEY"'"
+          }
+        }
+      },
+      "ai-prompt-template": {
+        "templates": [
+          {
+            "name": "QnA with complexity",
+            "template": {
+              "model": "gpt-4",
+              "messages": [
+                {
+                  "role": "system",
+                  "content": "Answer in {{complexity}}."
+                },
+                {
+                  "role": "user",
+                  "content": "Explain {{prompt}}."
+                }
+              ]
+            }
+          }
+        ]
+      }
+    }
+  }'
+```
+
+向该路由发送一个 POST 请求,在请求体中包含示例问题和期望的回答复杂度:
+
+```shell
+curl "http://127.0.0.1:9080/v1/chat/completions" -X POST \
+  -H "Content-Type: application/json" \
+  -H "Host: api.openai.com:443" \
+  -d '{
+    "template_name": "QnA with complexity",
+    "complexity": "brief",
+    "prompt": "quick sort"
+  }'
+```
+
+你应该会收到类似于以下的响应:
+
+```json
+{
+  "choices": [
+    {
+      "finish_reason": "stop",
+      "index": 0,
+      "message": {
+        "content": "Quick sort is a highly efficient sorting algorithm that uses a divide-and-conquer approach to arrange elements in a list or array in order. Here’s a brief explanation:\n\n1. **Choose a Pivot**: Select an element from the list as a 'pivot'. Common methods include choosing the first element, the last element, the middle element, or a random element.\n\n2. **Partitioning**: Rearrange the elements in the list such that all elements less than the pivot are moved before it, [...]
+        "role": "assistant"
+      }
+    }
+  ],
+  "created": 1723194057,
+  "id": "chatcmpl-9uFmTYN4tfwaXZjyOQwcp0t5law4x",
+  "model": "gpt-4o-2024-05-13",
+  "object": "chat.completion",
+  "system_fingerprint": "fp_abc28019ad",
+  "usage": {
+    "completion_tokens": 234,
+    "prompt_tokens": 18,
+    "total_tokens": 252
+  }
+}
+```
+
+### 配置多个模板
+
+以下示例演示了如何在同一条路由上配置多个模板。请求该路由时,用户将能够通过指定模板名称向不同模板传递自定义输入。
+
+该示例延续自[上一个示例](#为自定义复杂度的开放式问题配置模板)。使用另一个模板更新插件配置:
+
+```shell
+curl "http://127.0.0.1:9180/apisix/admin/routes/1" -X PATCH \
+  -H "X-API-KEY: ${admin_key}" \
   -d '{
     "uri": "/v1/chat/completions",
     "upstream": {
@@ -62,13 +173,29 @@ curl "http://127.0.0.1:9180/apisix/admin/routes/1" -X PUT \
       "ai-prompt-template": {
         "templates": [
           {
-            "name": "详细程度",
+            "name": "QnA with complexity",
+            "template": {
+              "model": "gpt-4",
+              "messages": [
+                {
+                  "role": "system",
+                  "content": "Answer in {{complexity}}."
+                },
+                {
+                  "role": "user",
+                  "content": "Explain {{prompt}}."
+                }
+              ]
+            }
+          },
+          {
+            "name": "echo",
             "template": {
               "model": "gpt-4",
               "messages": [
                 {
                   "role": "user",
-                  "content": "用{{ level }}的方式解释{{ topic }}。"
+                  "content": "Echo {{prompt}}."
                 }
               ]
             }
@@ -79,23 +206,66 @@ curl "http://127.0.0.1:9180/apisix/admin/routes/1" -X PUT \
   }'
 ```
 
-现在发送一个请求:
+现在,你应该能够通过同一条路由使用这两个模板。
+
+向路由发送一个 POST 请求,使用第一个模板:
+
+```shell
+curl "http://127.0.0.1:9080/v1/chat/completions" -X POST \
+  -H "Content-Type: application/json" \
+  -H "Host: api.openai.com:443" \
+  -d '{
+    "template_name": "QnA with complexity",
+    "complexity": "brief",
+    "prompt": "quick sort"
+  }'
+```
+
+你应该会收到类似于以下的响应:
+
+```json
+{
+  "choices": [
+    {
+      "finish_reason": "stop",
+      "index": 0,
+      "message": {
+        "content": "Quick sort is a highly efficient sorting algorithm that uses a divide-and-conquer approach to arrange elements in a list or array in order. Here’s a brief explanation:\n\n1. **Choose a Pivot**: Select an element from the list as a 'pivot'. Common methods include choosing the first element, the last element, the middle element, or a random element.\n\n2. **Partitioning**: Rearrange the elements in the list such that all elements less than the pivot are moved before it, [...]
+        "role": "assistant"
+      }
+    }
+  ],
+  ...
+}
+```
+
+向路由发送一个 POST 请求,使用第二个模板:
 
 ```shell
-curl http://127.0.0.1:9080/v1/chat/completions -i -XPOST  -H 'Content-Type: application/json' -d '{
-  "template_name": "详细程度",
-  "topic": "心理学",
-  "level": "简要"
-}' -H "Authorization: Bearer <your token here>"
+curl "http://127.0.0.1:9080/v1/chat/completions" -X POST \
+  -H "Content-Type: application/json" \
+  -H "Host: api.openai.com:443" \
+  -d '{
+    # highlight-next-line
+    "template_name": "echo",
+    "prompt": "hello APISIX"
+  }'
 ```
 
-然后请求体将被修改为类似这样:
+你应该会收到类似于以下的响应:
 
 ```json
 {
-  "model": "some model",
-  "messages": [
-    { "role": "user", "content": "用简要的方式解释心理学。" }
-  ]
+  "choices": [
+    {
+      "finish_reason": "stop",
+      "index": 0,
+      "message": {
+        "content": "hello APISIX",
+        "role": "assistant"
+      }
+    }
+  ],
+  ...
 }
 ```
