branch: elpa/gptel
commit c8b9ced9451273689b74fa1abaea5e09c5c6d152
Author: Karthik Chikmagalur <[email protected]>
Commit: Karthik Chikmagalur <[email protected]>

    gptel: Remove "which see" from commentary and README
    
    Remove the use of "which see" in the package commentary, README
    and NEWS.  This usage of q.v. is somewhat formal and archaic, and
    serves mostly to confuse readers. (#1247)
    
    "which see" is still used in some function docstrings for now.
    
    * NEWS:
    * README.org:
    * gptel.el:
---
 NEWS       | 10 +++++-----
 README.org |  6 +++---
 gptel.el   | 22 ++++++++++------------
 3 files changed, 18 insertions(+), 20 deletions(-)

diff --git a/NEWS b/NEWS
index b1be8010ac..a025457fe7 100644
--- a/NEWS
+++ b/NEWS
@@ -390,12 +390,12 @@
   ~claude-opus-4-20250514~.
 
 - Add support for AWS Bedrock models.  You can create an AWS Bedrock
-  gptel backend with ~gptel-make-bedrock~, which see.  Please note:
-  AWS Bedrock support requires Curl 8.9.0 or higher.
+  gptel backend with ~gptel-make-bedrock~.  Please note: AWS Bedrock
+  support requires Curl 8.9.0 or higher.
 
-- You can now create an xAI backend with ~gptel-make-xai~, which see.
-  (xAI was supported before but the model configuration is now handled
-  for you by this function.)
+- You can now create an xAI backend with ~gptel-make-xai~.  (xAI was
+  supported before but the model configuration is now handled for you
+  by this function.)
 
 - Add support for GitHub Copilot Chat.  See the README and
   ~gptel-make-gh-copilot~.  Please note: this is only the chat
diff --git a/README.org b/README.org
index 72c47967fa..8d069ab19b 100644
--- a/README.org
+++ b/README.org
@@ -1657,7 +1657,7 @@ Anywhere in Emacs: Turn on =gptel-highlight-mode=.  See its documentation for cu
 
 In dedicated chat buffers: you can additionally customize =gptel-prompt-prefix-alist= and =gptel-response-prefix-alist=, which are prefixes inserted before the prompt and response.  You can set a different pair for each major-mode.
 
-For more custom formatting: Use =gptel-pre-response-hook= and =gptel-post-response-functions=, which see.
+For more custom formatting, use =gptel-pre-response-hook= and =gptel-post-response-functions=.
 
 #+html: </details>
 #+html: <details><summary>
@@ -1943,7 +1943,7 @@ Other LLM clients for Emacs include
 
 *Libraries*:
 
-- [[https://github.com/ahyatt/llm][llm]]: llm provides a uniform API across language model providers for building LLM clients in Emacs, and is intended as a library for use by package authors.  For similar scripting purposes, gptel provides the command =gptel-request=, which see.
+- [[https://github.com/ahyatt/llm][llm]]: llm provides a uniform API across language model providers for building LLM clients in Emacs, and is intended as a library for use by package authors.  For similar scripting purposes, gptel provides the command =gptel-request=.
 
 *Chat clients*:
 
@@ -2019,7 +2019,7 @@ These differ from full "agentic" use in that the interactions are "one-shot", no
 
 - Possible breakage, see #120: If streaming responses stop working for you after upgrading to v0.5, try reinstalling gptel and deleting its native comp eln cache in =native-comp-eln-load-path=.
 
-- The user option =gptel-host= is deprecated.  If the defaults don't work for you, use =gptel-make-openai= (which see) to customize server settings.
+- The user option =gptel-host= is deprecated.  If the defaults don't work for you, use =gptel-make-openai= to customize server settings.
 
 - =gptel-api-key-from-auth-source= now searches for the API key using the host address for the active LLM backend, /i.e./ "api.openai.com" when using ChatGPT.  You may need to update your =~/.authinfo=.
 
diff --git a/gptel.el b/gptel.el
index 3c5b142e49..cced91d593 100644
--- a/gptel.el
+++ b/gptel.el
@@ -68,22 +68,20 @@
 ;;
 ;; ChatGPT is configured out of the box.  For the other sources:
 ;;
-;; - For Azure: define a gptel-backend with `gptel-make-azure', which see.
-;; - For Gemini: define a gptel-backend with `gptel-make-gemini', which see.
-;; - For Anthropic (Claude): define a gptel-backend with `gptel-make-anthropic',
-;;   which see.
-;; - For AI/ML API, Together.ai, Anyscale, Groq, OpenRouter, DeepSeek, Cerebras or
-;;   Github Models: define a gptel-backend with `gptel-make-openai', which see.
-;; - For PrivateGPT: define a backend with `gptel-make-privategpt', which see.
-;; - For Perplexity: define a backend with `gptel-make-perplexity', which see.
-;; - For Deepseek: define a backend with `gptel-make-deepseek', which see.
-;; - For Kagi: define a gptel-backend with `gptel-make-kagi', which see.
+;; - For Azure: define a gptel-backend with `gptel-make-azure'.
+;; - For Gemini: define a gptel-backend with `gptel-make-gemini'.
+;; - For Anthropic (Claude): define a gptel-backend with `gptel-make-anthropic'.
+;; - For AI/ML API, Together.ai, Anyscale, Groq, OpenRouter, DeepSeek, Cerebras
+;;   or Github Models: define a gptel-backend with `gptel-make-openai'.
+;; - For PrivateGPT: define a backend with `gptel-make-privategpt'.
+;; - For Perplexity: define a backend with `gptel-make-perplexity'.
+;; - For Deepseek: define a backend with `gptel-make-deepseek'.
+;; - For Kagi: define a gptel-backend with `gptel-make-kagi'.
 ;;
 ;; For local models using Ollama, Llama.cpp or GPT4All:
 ;;
 ;; - The model has to be running on an accessible address (or localhost)
-;; - Define a gptel-backend with `gptel-make-ollama' or `gptel-make-gpt4all',
-;;   which see.
+;; - Define a gptel-backend with `gptel-make-ollama' or `gptel-make-gpt4all'.
 ;; - Llama.cpp or Llamafiles: Define a gptel-backend with `gptel-make-openai'.
 ;;
 ;; Consult the package README for examples and more help with configuring

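For reference, the commentary entries touched above describe gptel's backend constructors.  A minimal sketch of the local-model case, following the pattern the commentary documents (the host and model names here are illustrative assumptions, not part of this patch):

```elisp
;; Sketch: define an Ollama backend as the commentary describes.
;; The host is Ollama's conventional default; the model list is a
;; placeholder -- use whatever models your instance actually serves.
(gptel-make-ollama "Ollama"        ; name shown in gptel's menu
  :host "localhost:11434"          ; address the model is running on
  :stream t                        ; stream responses as they arrive
  :models '(mistral:latest))       ; models available to this backend
```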