This is an automated email from the ASF dual-hosted git repository.

yilialin pushed a commit to branch docs-improve-ai-prompt-decorator
in repository https://gitbox.apache.org/repos/asf/apisix.git


The following commit(s) were added to refs/heads/docs-improve-ai-prompt-decorator by this push:
     new c2a104039 update
c2a104039 is described below

commit c2a104039b5179dc4cd4432e716918240b00711a
Author: Yilia <[email protected]>
AuthorDate: Fri Jan 16 11:59:51 2026 +0800

    update
---
 docs/en/latest/plugins/ai-prompt-decorator.md | 102 ++++++++++++++++---------
 docs/zh/latest/plugins/ai-prompt-decorator.md | 106 ++++++++++++++++----------
 2 files changed, 130 insertions(+), 78 deletions(-)

diff --git a/docs/en/latest/plugins/ai-prompt-decorator.md b/docs/en/latest/plugins/ai-prompt-decorator.md
index 44ee59e74..7bef8bb3d 100644
--- a/docs/en/latest/plugins/ai-prompt-decorator.md
+++ b/docs/en/latest/plugins/ai-prompt-decorator.md
@@ -5,7 +5,7 @@ keywords:
   - API Gateway
   - Plugin
   - ai-prompt-decorator
-description: This document contains information about the Apache APISIX ai-prompt-decorator Plugin.
+description: The ai-prompt-decorator Plugin decorates user prompts to LLMs by prefixing and appending pre-engineered prompts, streamlining API operation and content generation.
 ---
 
 <!--
@@ -27,83 +27,109 @@ description: This document contains information about the Apache APISIX ai-promp
 #
 -->
 
+<head>
+  <link rel="canonical" href="https://docs.api7.ai/hub/ai-prompt-decorator" />
+</head>
+
 ## Description
 
-The `ai-prompt-decorator` plugin simplifies access to LLM providers, such as OpenAI and Anthropic, and their models by appending or prepending prompts into the request.
+The `ai-prompt-decorator` Plugin simplifies access to LLM providers, such as OpenAI and Anthropic, and their models. It modifies user input prompts by prefixing and appending pre-engineered prompts to set contexts in content generation. This practice helps the model operate within desired guidelines during interactions.
 
 ## Plugin Attributes
 
 | **Field**         | **Required**    | **Type** | **Description**                                      |
 | ----------------- | --------------- | -------- | ---------------------------------------------------- |
-| `prepend`         | Conditionally\* | Array    | An array of prompt objects to be prepended           |
-| `prepend.role`    | Yes             | String   | Role of the message (`system`, `user`, `assistant`)  |
-| `prepend.content` | Yes             | String   | Content of the message. Minimum length: 1            |
-| `append`          | Conditionally\* | Array    | An array of prompt objects to be appended            |
-| `append.role`     | Yes             | String   | Role of the message (`system`, `user`, `assistant`)  |
-| `append.content`  | Yes             | String   | Content of the message. Minimum length: 1            |
+| `prepend`         | Conditionally\* | Array    | An array of prompt objects to be prepended.          |
+| `prepend.role`    | Yes             | String   | Role of the message, such as `system`, `user`, or `assistant`. |
+| `prepend.content` | Yes             | String   | Content of the message (prompt).                     |
+| `append`          | Conditionally\* | Array    | An array of prompt objects to be appended.           |
+| `append.role`     | Yes             | String   | Role of the message, such as `system`, `user`, or `assistant`. |
+| `append.content`  | Yes             | String   | Content of the message (prompt).                     |
 
 \* **Conditionally Required**: At least one of `prepend` or `append` must be provided.
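
The prepend/append behavior described by these attributes can be sketched in a few lines. This is a hypothetical Python illustration of the message-list transformation only; the actual plugin is implemented in Lua inside APISIX:

```python
# Hypothetical sketch of the decoration the plugin applies to the
# OpenAI-style "messages" array; not the plugin's real (Lua) source.
def decorate(messages, prepend=None, append=None):
    """Return a new messages list with extra prompts added around the input."""
    if not prepend and not append:
        raise ValueError("at least one of prepend or append must be provided")
    return list(prepend or []) + list(messages) + list(append or [])

user_input = [{"role": "user", "content": "What is mTLS authentication?"}]
decorated = decorate(
    user_input,
    prepend=[{"role": "system", "content": "Answer briefly and conceptually."}],
    append=[{"role": "user", "content": "End the answer with a simple analogy."}],
)
print([m["role"] for m in decorated])  # ['system', 'user', 'user']
```

Note that the original user message is never replaced, only surrounded, which is why at least one of `prepend` or `append` must be configured for the plugin to have any effect.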
 
-## Example usage
+## Example
+
+The following example uses OpenAI as the upstream service provider. Before proceeding, create an [OpenAI account](https://openai.com) and an [API key](https://openai.com/blog/openai-api). You can optionally save the key to an environment variable as follows:
+
+```shell
+export OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
+```
+
+If you are working with other LLM providers, please refer to the provider's documentation to obtain an API key.
+
+### Prepend and Append Messages
+
+The following example demonstrates how to configure the `ai-prompt-decorator` Plugin to prepend a system message and append a user message to the user input message.
 
-Create a route with the `ai-prompt-decorator` plugin like so:
+Create a Route to the chat completion endpoint with pre-configured prompt templates as follows:
 
 ```shell
 curl "http://127.0.0.1:9180/apisix/admin/routes/1" -X PUT \
-  -H "X-API-KEY: ${ADMIN_API_KEY}" \
+  -H "X-API-KEY: ${admin_key}" \
   -d '{
     "uri": "/v1/chat/completions",
     "plugins": {
+      "ai-proxy": {
+        "provider": "openai",
+        "auth": {
+          "header": {
+            "Authorization": "Bearer '"$OPENAI_API_KEY"'"
+          }
+        }
+      },
       "ai-prompt-decorator": {
         "prepend":[
           {
             "role": "system",
-            "content": "I have exams tomorrow so explain conceptually and briefly"
+            "content": "Answer briefly and conceptually."
           }
         ],
         "append":[
           {
-            "role": "system",
-            "content": "End the response with an analogy."
+            "role": "user",
+            "content": "End the answer with a simple analogy."
           }
         ]
       }
-    },
-    "upstream": {
-      "type": "roundrobin",
-      "nodes": {
-        "api.openai.com:443": 1
-      },
-      "pass_host": "node",
-      "scheme": "https"
     }
   }'
 ```
 
-Now send a request:
+Send a POST request to the Route specifying the model and a sample message in the request body:
 
 ```shell
-curl http://127.0.0.1:9080/v1/chat/completions -i -XPOST  -H 'Content-Type: application/json' -d '{
-  "model": "gpt-4",
-  "messages": [{ "role": "user", "content": "What is TLS Handshake?" }]
-}' -H "Authorization: Bearer <your token here>"
+curl "http://127.0.0.1:9080/v1/chat/completions" -X POST \
+  -H "Content-Type: application/json" \
+  -d '{
+    "model": "gpt-4",
+    "messages": [{ "role": "user", "content": "What is mTLS authentication?" }]
+  }'
 ```
 
-Then the request body will be modified to something like this:
+You should receive a response similar to the following:
 
 ```json
 {
-  "model": "gpt-4",
-  "messages": [
+  "choices": [
     {
-      "role": "system",
-      "content": "I have exams tomorrow so explain conceptually and briefly"
-    },
-    { "role": "user", "content": "What is TLS Handshake?" },
-    {
-      "role": "system",
-      "content": "End the response with an analogy."
+      "finish_reason": "stop",
+      "index": 0,
+      "message": {
+        "content": "Mutual TLS (mTLS) authentication is a security protocol that ensures both the client and server authenticate each other's identity before establishing a connection. This mutual authentication is achieved through the exchange and verification of digital certificates, which are cryptographically signed credentials proving each party's identity. In contrast to standard TLS, where only the server is authenticated, mTLS adds an additional layer of trust by verifying the cl [...]
+        "role": "assistant"
+      }
     }
-  ]
+  ],
+  "created": 1723193502,
+  "id": "chatcmpl-9uFdWDlwKif6biCt9DpG0xgedEamg",
+  "model": "gpt-4o-2024-05-13",
+  "object": "chat.completion",
+  "system_fingerprint": "fp_abc28019ad",
+  "usage": {
+    "completion_tokens": 124,
+    "prompt_tokens": 31,
+    "total_tokens": 155
+  }
 }
 ```
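
A client consuming the response above only needs the standard `chat.completion` fields. The following is a minimal sketch of extracting the assistant's answer and the token accounting; the field names and values are taken from the sample response above (the content string is shortened here):

```python
import json

# Parse a (shortened) chat.completion payload like the sample above
# and pull out the assistant's answer plus the token usage.
raw = """{
  "choices": [
    {"finish_reason": "stop", "index": 0,
     "message": {"role": "assistant", "content": "Mutual TLS (mTLS) ..."}}
  ],
  "model": "gpt-4o-2024-05-13",
  "object": "chat.completion",
  "usage": {"completion_tokens": 124, "prompt_tokens": 31, "total_tokens": 155}
}"""

response = json.loads(raw)
answer = response["choices"][0]["message"]["content"]
usage = response["usage"]

# Sanity check: total tokens is the sum of prompt and completion tokens.
assert usage["prompt_tokens"] + usage["completion_tokens"] == usage["total_tokens"]
print(answer)                  # Mutual TLS (mTLS) ...
print(usage["total_tokens"])   # 155
```

The prompts injected by `ai-prompt-decorator` count toward `prompt_tokens`, so decorating requests increases the billed prompt size accordingly.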
diff --git a/docs/zh/latest/plugins/ai-prompt-decorator.md b/docs/zh/latest/plugins/ai-prompt-decorator.md
index cf78a87ff..7866a4329 100644
--- a/docs/zh/latest/plugins/ai-prompt-decorator.md
+++ b/docs/zh/latest/plugins/ai-prompt-decorator.md
@@ -5,7 +5,7 @@ keywords:
   - API 网关
   - Plugin
   - ai-prompt-decorator
-description: 本文档包含有关 Apache APISIX ai-prompt-decorator 插件的信息。
+description: ai-prompt-decorator 插件通过前缀和后缀附加预先设计的提示词来装饰用户向大语言模型提交的提示,从而简化 API 操作和内容生成流程。
 ---
 
 <!--
@@ -27,83 +27,109 @@ description: 本文档包含有关 Apache APISIX ai-prompt-decorator 插件的
 -->
 
+<head>
+  <link rel="canonical" href="https://docs.api7.ai/hub/ai-prompt-decorator" />
+</head>
+
 ## 描述
 
-`ai-prompt-decorator` 插件通过在请求中追加或前置提示,简化了对 LLM 提供商(如 OpenAI 和 Anthropic)及其模型的访问。
+`ai-prompt-decorator` 插件简化了对 OpenAI、Anthropic 等大语言模型提供商及其模型的访问。它通过在前缀和后缀附加预先设计的提示词来修饰用户输入的提示,从而为内容生成设置上下文。这种做法有助于模型在交互过程中按照期望的指导原则运行。
 
 ## 插件属性
 
-| **字段**          | **必选项**      | **类型** | **描述**                                     |
+| **字段**          | **是否必填**    | **类型** | **描述**                                     |
 | ----------------- | --------------- | -------- | -------------------------------------------- |
-| `prepend`         | 条件必选\*      | Array    | 要前置的提示对象数组                         |
-| `prepend.role`    | 是              | String   | 消息的角色(`system`、`user`、`assistant`)  |
-| `prepend.content` | 是              | String   | 消息的内容。最小长度:1                      |
-| `append`          | 条件必选\*      | Array    | 要追加的提示对象数组                         |
-| `append.role`     | 是              | String   | 消息的角色(`system`、`user`、`assistant`)  |
-| `append.content`  | 是              | String   | 消息的内容。最小长度:1                      |
+| `prepend`         | 条件性必填\*    | Array    | 需要前置添加的提示对象数组。                 |
+| `prepend.role`    | 是              | String   | 消息的角色,例如 `system`、`user` 或 `assistant`。 |
+| `prepend.content` | 是              | String   | 消息的内容(提示词)。                       |
+| `append`          | 条件性必填\*    | Array    | 需要后置添加的提示对象数组。                 |
+| `append.role`     | 是              | String   | 消息的角色,例如 `system`、`user` 或 `assistant`。 |
+| `append.content`  | 是              | String   | 消息的内容(提示词)。                       |
+
+\* **条件性必填**:`prepend` 和 `append` 中至少需要提供一个。
+
+## 示例
+
+以下示例将使用 OpenAI 作为上游服务提供商。在开始前,请先创建一个 [OpenAI 账户](https://openai.com)和 [API 密钥](https://openai.com/blog/openai-api)。你可以选择将密钥保存到环境变量中,如下所示:
+
+```shell
+export OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
+```
+
+如果你使用的是其他大语言模型提供商,请参考其文档获取 API 密钥。
 
-\* **条件必选**:必须提供 `prepend` 或 `append` 中的至少一个。
+### 前置与后置消息
 
-## 使用示例
+以下示例演示了如何配置 `ai-prompt-decorator` 插件,以在用户输入消息前添加系统消息,并在其后附加用户消息。
 
-创建一个带有 `ai-prompt-decorator` 插件的路由,如下所示:
+创建一个路由,指向聊天补全端点,并配置预先设置的提示模板,如下所示:
 
 ```shell
 curl "http://127.0.0.1:9180/apisix/admin/routes/1" -X PUT \
-  -H "X-API-KEY: ${ADMIN_API_KEY}" \
+  -H "X-API-KEY: ${admin_key}" \
   -d '{
     "uri": "/v1/chat/completions",
     "plugins": {
+      "ai-proxy": {
+        "provider": "openai",
+        "auth": {
+          "header": {
+            "Authorization": "Bearer '"$OPENAI_API_KEY"'"
+          }
+        }
+      },
       "ai-prompt-decorator": {
         "prepend":[
           {
             "role": "system",
-            "content": "我明天有考试,所以请简要地从概念上解释"
+            "content": "请简要且概念性地回答。"
           }
         ],
         "append":[
           {
-            "role": "system",
-            "content": "用一个类比来结束回答。"
+            "role": "user",
+            "content": "在回答结尾用一个简单的类比来总结。"
           }
         ]
       }
-    },
-    "upstream": {
-      "type": "roundrobin",
-      "nodes": {
-        "api.openai.com:443": 1
-      },
-      "pass_host": "node",
-      "scheme": "https"
     }
   }'
 ```
 
-现在发送一个请求:
+向该路由发送一个 POST 请求,在请求体中指定模型和一个示例消息:
 
 ```shell
-curl http://127.0.0.1:9080/v1/chat/completions -i -XPOST  -H 'Content-Type: application/json' -d '{
-  "model": "gpt-4",
-  "messages": [{ "role": "user", "content": "什么是 TLS 握手?" }]
-}' -H "Authorization: Bearer <your token here>"
+curl "http://127.0.0.1:9080/v1/chat/completions" -X POST \
+  -H "Content-Type: application/json" \
+  -d '{
+    "model": "gpt-4",
+    "messages": [{ "role": "user", "content": "什么是 mTLS 认证?" }]
+  }'
 ```
 
-然后请求体将被修改为类似这样:
+你应该会收到类似以下的响应:
 
 ```json
 {
-  "model": "gpt-4",
-  "messages": [
+  "choices": [
     {
-      "role": "system",
-      "content": "我明天有考试,所以请简要地从概念上解释"
-    },
-    { "role": "user", "content": "什么是 TLS 握手?" },
-    {
-      "role": "system",
-      "content": "用一个类比来结束回答。"
+      "finish_reason": "stop",
+      "index": 0,
+      "message": {
+        "content": "双向 TLS (mTLS) 认证是一种安全协议,确保客户端和服务器在建立连接前相互验证对方身份。这种双向认证是通过交换和验证数字证书来实现的,这些证书是经过密码学签名的凭证,用于证明各方的身份。与标准 TLS 仅认证服务器不同,mTLS 增加了额外的信任层,也对客户端进行验证,为敏感通信提供更高的安全性。\n\n可以把 mTLS 想象成两位朋友在俱乐部见面时的秘密握手。双方都必须知道这个握手动作才能进入,确保他们在入场前彼此认出并互相信任。",
+        "role": "assistant"
+      }
     }
-  ]
+  ],
+  "created": 1723193502,
+  "id": "chatcmpl-9uFdWDlwKif6biCt9DpG0xgedEamg",
+  "model": "gpt-4o-2024-05-13",
+  "object": "chat.completion",
+  "system_fingerprint": "fp_abc28019ad",
+  "usage": {
+    "completion_tokens": 124,
+    "prompt_tokens": 31,
+    "total_tokens": 155
+  }
 }
 ```
