This is an automated email from the ASF dual-hosted git repository.
vgalaxies pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-hugegraph-doc.git
The following commit(s) were added to refs/heads/master by this push:
new 5e7803ce chore: update version to 1.5.0 (#385)
5e7803ce is described below
commit 5e7803ce9385b69df4f91846c0ef4049926425cc
Author: imbajin <[email protected]>
AuthorDate: Fri Dec 13 00:46:13 2024 +0800
chore: update version to 1.5.0 (#385)
Co-authored-by: VGalaxies <[email protected]>
---
.../changelog/hugegraph-1.3.0-release-notes.md | 2 +-
.../changelog/hugegraph-1.5.0-release-notes.md | 2 +-
content/cn/docs/config/config-authentication.md | 4 +-
content/cn/docs/quickstart/hugegraph-client.md | 2 +-
content/cn/docs/quickstart/hugegraph-hubble.md | 8 ++--
content/cn/docs/quickstart/hugegraph-loader.md | 10 ++---
content/cn/docs/quickstart/hugegraph-server.md | 20 +++++-----
content/en/docs/config/config-authentication.md | 8 ++--
content/en/docs/introduction/README.md | 6 +--
content/en/docs/quickstart/hugegraph-ai.md | 6 +--
content/en/docs/quickstart/hugegraph-client.md | 10 ++---
content/en/docs/quickstart/hugegraph-computer.md | 8 ++--
content/en/docs/quickstart/hugegraph-hubble.md | 24 ++++++------
content/en/docs/quickstart/hugegraph-loader.md | 44 +++++++++++-----------
content/en/docs/quickstart/hugegraph-server.md | 24 ++++++------
15 files changed, 89 insertions(+), 89 deletions(-)
diff --git a/content/cn/docs/changelog/hugegraph-1.3.0-release-notes.md
b/content/cn/docs/changelog/hugegraph-1.3.0-release-notes.md
index 7eedc8c3..aa84047f 100644
--- a/content/cn/docs/changelog/hugegraph-1.3.0-release-notes.md
+++ b/content/cn/docs/changelog/hugegraph-1.3.0-release-notes.md
@@ -9,7 +9,7 @@ weight: 4
1. Prefer Java 11 for the `hugegraph/toolchain/commons` components; this is the last major release in which these modules are compatible with Java 8. (computer only supports Java 11)
2. Also, compared with Java 11, Java 8 loses some **security** guarantees; we recommend Java 11 plus [Auth authentication](/cn/docs/config/config-authentication/) for production or any environment exposed to the public network.
-**1.3.0** is the last version compatible with **Java 8**; Java 11 will be used across the board (except for `client`) once the next 1.5.0 release -- [PD/Store](https://github.com/apache/incubator-hugegraph/issues/2265) -- is merged into the master branch.
+**1.3.0** is the last version compatible with **Java 8**; from 1.5.0 on, Java 11 will be used across the board (except for `client`).
PS: future HugeGraph component versions will evolve toward `Java 11 -> Java 17 -> Java 21`
diff --git a/content/cn/docs/changelog/hugegraph-1.5.0-release-notes.md
b/content/cn/docs/changelog/hugegraph-1.5.0-release-notes.md
index 8a7ca703..1e29e29c 100644
--- a/content/cn/docs/changelog/hugegraph-1.5.0-release-notes.md
+++ b/content/cn/docs/changelog/hugegraph-1.5.0-release-notes.md
@@ -15,6 +15,6 @@ Please check the release details/contributor in each repository:
### Runtime environment / version notes
-1. Compared with **1.3.0**, the `hugegraph` of **1.5.0** only supports Java 11
+1. Compared with **1.3.0**, `hugegraph` **1.5.0** and later only support Java 11
PS: future HugeGraph component versions will evolve toward `Java 11 -> Java 17 -> Java 21`
diff --git a/content/cn/docs/config/config-authentication.md
b/content/cn/docs/config/config-authentication.md
index 7e8f28f4..0f0aa13c 100644
--- a/content/cn/docs/config/config-authentication.md
+++ b/content/cn/docs/config/config-authentication.md
@@ -123,7 +123,7 @@ bin/start-hugegraph.sh
Add the environment variable `PASSWORD=123456` (the password can be set freely) to `docker run` to enable authentication mode:
```bash
-docker run -itd -e PASSWORD=123456 --name=server -p 8080:8080 hugegraph/hugegraph:1.3.0
+docker run -itd -e PASSWORD=123456 --name=server -p 8080:8080 hugegraph/hugegraph:1.5.0
```
#### 2. Use docker-compose
@@ -134,7 +134,7 @@ docker run -itd -e PASSWORD=123456 --name=server -p 8080:8080 hugegraph/hugegrap
version: '3'
services:
server:
- image: hugegraph/hugegraph:1.3.0
+ image: hugegraph/hugegraph:1.5.0
container_name: server
ports:
- 8080:8080
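
As a quick sanity check (a minimal sketch, assuming the server runs on localhost:8080 and the password was set to 123456 as above), you can verify via the version API that authentication is actually enforced:

```bash
# without credentials the request should be rejected (401)
curl -i http://localhost:8080/apis/version
# with the admin credentials set via PASSWORD, the request should succeed
curl -i -u admin:123456 http://localhost:8080/apis/version
```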
diff --git a/content/cn/docs/quickstart/hugegraph-client.md
b/content/cn/docs/quickstart/hugegraph-client.md
index df44ec4c..5934cfdf 100644
--- a/content/cn/docs/quickstart/hugegraph-client.md
+++ b/content/cn/docs/quickstart/hugegraph-client.md
@@ -48,7 +48,7 @@ weight: 4
<groupId>org.apache.hugegraph</groupId>
<artifactId>hugegraph-client</artifactId>
<!-- Update to the latest release version -->
- <version>1.3.0</version>
+ <version>1.5.0</version>
</dependency>
</dependencies>
```
diff --git a/content/cn/docs/quickstart/hugegraph-hubble.md
b/content/cn/docs/quickstart/hugegraph-hubble.md
index 6e022340..e784c488 100644
--- a/content/cn/docs/quickstart/hugegraph-hubble.md
+++ b/content/cn/docs/quickstart/hugegraph-hubble.md
@@ -53,7 +53,7 @@ weight: 3
>
> If hubble and server are in the same docker network, it is **recommended** to use the `container_name` (`server` in the example below) directly as the hostname. Alternatively, the **host IP** can be used as the hostname; in that case the port is the one the host maps to the server
-We can use `docker run -itd --name=hubble -p 8088:8088 hugegraph/hubble:1.2.0` to quickly start [hubble](https://hub.docker.com/r/hugegraph/hubble).
+We can use `docker run -itd --name=hubble -p 8088:8088 hugegraph/hubble:1.5.0` to quickly start [hubble](https://hub.docker.com/r/hugegraph/hubble).
Alternatively, start hubble with docker-compose; if hubble and server are in the same docker network, the server can be accessed via its `container_name` instead of the host IP
@@ -63,13 +63,13 @@ weight: 3
version: '3'
services:
server:
- image: hugegraph/hugegraph:1.3.0
+ image: hugegraph/hugegraph:1.5.0
container_name: server
ports:
- 8080:8080
hubble:
- image: hugegraph/hubble:1.2.0
+ image: hugegraph/hubble:1.5.0
container_name: hubble
ports:
- 8088:8088
@@ -79,7 +79,7 @@ services:
>
> 1. The docker image of `hugegraph-hubble` is a convenience release for quickly testing hubble, not an **official ASF release artifact**. See the [ASF Release Distribution Policy](https://infra.apache.org/release-distribution.html#dockerhub) for more details.
>
-> 2. For **production**, the stable `release tag` (e.g. `1.2.0`) is recommended. The `latest` tag corresponds to the newest code on master.
+> 2. For **production**, the stable `release tag` (e.g. `1.5.0`) is recommended. The `latest` tag corresponds to the newest code on master.
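
If hubble and server are started with plain `docker run` rather than compose, they only share a network when one is created explicitly. A minimal sketch (the network name `graph-net` is an illustrative choice):

```bash
docker network create graph-net
docker run -itd --name=server --network graph-net -p 8080:8080 hugegraph/hugegraph:1.5.0
docker run -itd --name=hubble --network graph-net -p 8088:8088 hugegraph/hubble:1.5.0
# within the network, hubble can reach the server as http://server:8080
```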
#### 2.2 Download the toolchain binary package
diff --git a/content/cn/docs/quickstart/hugegraph-loader.md
b/content/cn/docs/quickstart/hugegraph-loader.md
index d077f6f0..13c61ad3 100644
--- a/content/cn/docs/quickstart/hugegraph-loader.md
+++ b/content/cn/docs/quickstart/hugegraph-loader.md
@@ -31,7 +31,7 @@ HugeGraph-Loader is the data import component of HugeGraph, which can convert data
#### 2.1 Use the Docker image (convenient for **testing**)
-We can deploy the loader service with `docker run -itd --name loader hugegraph/loader:1.3.0`. Data to be loaded can be copied into the loader container either by mounting `-v /path/to/data/file:/loader/file` or via `docker cp`.
+We can deploy the loader service with `docker run -itd --name loader hugegraph/loader:1.5.0`. Data to be loaded can be copied into the loader container either by mounting `-v /path/to/data/file:/loader/file` or via `docker cp`.
Alternatively, start the loader with docker-compose via `docker-compose up -d`; a sample docker-compose.yml is shown below:
@@ -40,19 +40,19 @@ version: '3'
services:
server:
- image: hugegraph/hugegraph:1.3.0
+ image: hugegraph/hugegraph:1.5.0
container_name: server
ports:
- 8080:8080
hubble:
- image: hugegraph/hubble:1.2.0
+ image: hugegraph/hubble:1.5.0
container_name: hubble
ports:
- 8088:8088
loader:
- image: hugegraph/loader:1.3.0
+ image: hugegraph/loader:1.5.0
container_name: loader
# mount your own data here
# volumes:
@@ -66,7 +66,7 @@ services:
>
> 1. The docker image of hugegraph-loader is a convenience release for starting the loader quickly, not an **official release artifact**. See the [ASF Release Distribution Policy](https://infra.apache.org/release-distribution.html#dockerhub) for more details.
>
-> 2. Use a `release tag` (e.g. `1.2.0`) for the stable version. The `latest` tag gives access to the newest features in development.
+> 2. Use a `release tag` (e.g. `1.5.0`) for the stable version. The `latest` tag gives access to the newest features in development.
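
For the `docker cp` route mentioned above, a minimal sketch (assuming a local `./data` directory, the container name `loader` from the compose file, and the `/loader/file` target path used in the mount example):

```bash
# copy a local data file into the running loader container
docker cp ./data/vertex_person.csv loader:/loader/file/vertex_person.csv
```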
#### 2.2 Download the compiled archive
diff --git a/content/cn/docs/quickstart/hugegraph-server.md
b/content/cn/docs/quickstart/hugegraph-server.md
index bd79b6c5..de1541b1 100644
--- a/content/cn/docs/quickstart/hugegraph-server.md
+++ b/content/cn/docs/quickstart/hugegraph-server.md
@@ -39,12 +39,12 @@ The Core module implements the Tinkerpop interface, and the Backend module manages data storage
See the [Docker deployment guide](https://github.com/apache/incubator-hugegraph/blob/master/hugegraph-server/hugegraph-dist/docker/README.md).
-We can use `docker run -itd --name=server -p 8080:8080 hugegraph/hugegraph:1.3.0` to quickly start a `HugeGraph server` with built-in `RocksDB`.
+We can use `docker run -itd --name=server -p 8080:8080 hugegraph/hugegraph:1.5.0` to quickly start a `HugeGraph server` with built-in `RocksDB`.
Options:
1. Use `docker exec -it server bash` to enter the container and perform operations
-2. Use `docker run -itd --name=server -p 8080:8080 -e PRELOAD="true" hugegraph/hugegraph:1.3.0` to preload a **built-in** example graph at startup, which can be verified via the `RESTful API`; see [5.1.1](/cn/docs/quickstart/hugegraph-server/#511-%E5%90%AF%E5%8A%A8-server-%E7%9A%84%E6%97%B6%E5%80%99%E5%88%9B%E5%BB%BA%E7%A4%BA%E4%BE%8B%E5%9B%BE) for detailed steps
+2. Use `docker run -itd --name=server -p 8080:8080 -e PRELOAD="true" hugegraph/hugegraph:1.5.0` to preload a **built-in** example graph at startup, which can be verified via the `RESTful API`; see [5.1.1](/cn/docs/quickstart/hugegraph-server/#511-%E5%90%AF%E5%8A%A8-server-%E7%9A%84%E6%97%B6%E5%80%99%E5%88%9B%E5%BB%BA%E7%A4%BA%E4%BE%8B%E5%9B%BE) for detailed steps
3. Use `-e PASSWORD=123456` to enable authentication mode and set the admin password; see [Config Authentication](/cn/docs/config/config-authentication#使用-docker-时开启鉴权模式) for detailed steps
With Docker Desktop, the options can be set as follows:
@@ -59,7 +59,7 @@ The Core module implements the Tinkerpop interface, and the Backend module manages data storage
version: '3'
services:
server:
- image: hugegraph/hugegraph:1.3.0
+ image: hugegraph/hugegraph:1.5.0
container_name: server
# environment:
# - PRELOAD=true is optional; when true, a built-in example graph is preloaded at startup
@@ -72,12 +72,12 @@ services:
>
> 1. The docker image of hugegraph is a convenience release for starting hugegraph quickly, not an **official release artifact**. See the [ASF Release Distribution Policy](https://infra.apache.org/release-distribution.html#dockerhub) for more details.
>
-> 2. Use a `release tag` (e.g. `1.3.0/1.5.0`) for the stable version. The `latest` tag gives access to the newest features in development.
+> 2. Use a `release tag` (e.g. `1.5.0/1.x.0`) for the stable version. The `latest` tag gives access to the newest features in development.
#### 3.2 Download the tar package
```bash
-# use the latest version, here is 1.3.0 for example
+# use the latest version, here is 1.5.0 for example
wget https://downloads.apache.org/incubator/hugegraph/{version}/apache-hugegraph-incubating-{version}.tar.gz
tar zxf *hugegraph*.tar.gz
```
@@ -104,7 +104,7 @@ mvn package -DskipTests
```bash
......
-[INFO] Reactor Summary for hugegraph 1.3.0:
+[INFO] Reactor Summary for hugegraph 1.5.0:
[INFO]
[INFO] hugegraph .......................................... SUCCESS [ 2.405 s]
[INFO] hugegraph-core ..................................... SUCCESS [ 13.405 s]
@@ -132,8 +132,8 @@ mvn package -DskipTests
HugeGraph-Tools provides a one-click deployment command-line tool for quickly downloading, unpacking, configuring, and starting HugeGraph-Server and HugeGraph-Hubble. The latest HugeGraph-Toolchain already bundles all of these tools; just download and unpack it to get the complete toolkit
```bash
-# download toolchain package, it includes loader + tool + hubble, please check the latest version (here is 1.3.0)
-wget https://downloads.apache.org/incubator/hugegraph/1.3.0/apache-hugegraph-toolchain-incubating-1.3.0.tar.gz
+# download toolchain package, it includes loader + tool + hubble, please check the latest version (here is 1.5.0)
+wget https://downloads.apache.org/incubator/hugegraph/1.5.0/apache-hugegraph-toolchain-incubating-1.5.0.tar.gz
tar zxf *hugegraph-*.tar.gz
# enter the tool's package
cd *hugegraph*/*tool*
@@ -516,7 +516,7 @@ volumes:
1. Use `docker run`
- Use `docker run -itd --name=server -p 8080:8080 -e PRELOAD=true hugegraph/hugegraph:1.3.0`
+ Use `docker run -itd --name=server -p 8080:8080 -e PRELOAD=true hugegraph/hugegraph:1.5.0`
2. Use `docker-compose`
@@ -526,7 +526,7 @@ volumes:
version: '3'
services:
server:
- image: hugegraph/hugegraph:1.3.0
+ image: hugegraph/hugegraph:1.5.0
container_name: server
environment:
- PRELOAD=true
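
To confirm the example graph was actually preloaded (a minimal sketch, assuming the server listens on localhost:8080 and the default graph name `hugegraph`):

```bash
# list a few vertices of the built-in example graph
curl "http://localhost:8080/graphs/hugegraph/graph/vertices?limit=3"
```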
diff --git a/content/en/docs/config/config-authentication.md
b/content/en/docs/config/config-authentication.md
index 512c63df..46b945f0 100644
--- a/content/en/docs/config/config-authentication.md
+++ b/content/en/docs/config/config-authentication.md
@@ -99,9 +99,9 @@ After the authentication configuration completed, enter the
**admin password** o
If deployed based on Docker image or if HugeGraph has already been initialized and needs to be converted to authentication mode, relevant graph data needs to be deleted and HugeGraph needs to be restarted. If there is already business data in the graph,
-it is temporarily **not possible** to directly convert the authentication mode
(version<=1.2.0 )
+it is temporarily **not possible** to directly convert the authentication mode
(version<=1.2.0)
-> Improvements for this feature have been included in the latest release (available in latest docker image), please refer to [PR 2411](https://github.com/apache/incubator-hugegraph/pull/2411). Seamless switching is now available.
+> Improvements for this feature have been included in the latest release (available in the latest docker image), please refer to [PR 2411](https://github.com/apache/incubator-hugegraph/pull/2411). Seamless switching is now available.
```bash
# stop the HugeGraph server first
@@ -130,7 +130,7 @@ The steps are as follows:
To enable authentication mode, add the environment variable `PASSWORD=123456`
(you can freely set the password) in the `docker run` command:
```bash
-docker run -itd -e PASSWORD=123456 --name=server -p 8080:8080 hugegraph/hugegraph:1.3.0
+docker run -itd -e PASSWORD=123456 --name=server -p 8080:8080 hugegraph/hugegraph:1.5.0
```
#### 2. Use docker-compose
@@ -141,7 +141,7 @@ Use `docker-compose` and set the environment variable
`PASSWORD=123456`:
version: '3'
services:
server:
- image: hugegraph/hugegraph:1.2.0
+ image: hugegraph/hugegraph:1.5.0
container_name: server
ports:
- 8080:8080
diff --git a/content/en/docs/introduction/README.md
b/content/en/docs/introduction/README.md
index d9d71b23..8b4bf869 100644
--- a/content/en/docs/introduction/README.md
+++ b/content/en/docs/introduction/README.md
@@ -31,7 +31,7 @@ The functions of this system include but are not limited to:
- Supports batch import of data from multiple data sources (including local
files, HDFS files, MySQL databases, and other data sources), and supports
import of multiple file formats (including TXT, CSV, JSON, and other formats)
- With a visual operation interface, it can be used for operating, analyzing, and displaying graphs, reducing the barrier for users
- Optimized graph interface: shortest path (Shortest Path), K-step connected
subgraph (K-neighbor), K-step to reach the adjacent point (K-out), personalized
recommendation algorithm PersonalRank, etc.
-- Implemented based on Apache TinkerPop3 framework, supports Gremlin graph
query language
+- Implemented based on the Apache TinkerPop3 framework, supports Gremlin graph
query language
- Support attribute graph, attributes can be added to vertices and edges, and
support rich attribute types
- Has independent schema metadata information, has powerful graph modeling
capabilities, and facilitates third-party system integration
- Support multi-vertex ID strategy: support primary key ID, support automatic
ID generation, support user-defined string ID, support user-defined digital ID
@@ -44,8 +44,8 @@ The functions of this system include but are not limited to:
- [HugeGraph-Server](/docs/quickstart/hugegraph-server): HugeGraph-Server is
the core part of the HugeGraph project, containing Core, Backend, API and other
submodules;
- Core: Implements the graph engine, connects to the Backend module
downwards, and supports the API module upwards;
- - Backend: Implements the storage of graph data to the backend, supports
backends including: Memory, Cassandra, ScyllaDB, RocksDB, HBase, MySQL and
PostgreSQL, users can choose one according to the actual situation;
- - API: Built-in REST Server, provides RESTful API to users, and is fully
compatible with Gremlin queries. (Supports distributed storage and computation
pushdown)
+ - Backend: Implements the storage of graph data to the backend, supports
backends including Memory, Cassandra, ScyllaDB, RocksDB, HBase, MySQL and
PostgreSQL, users can choose one according to the actual situation;
+ - API: Built-in REST Server provides RESTful API to users and is fully
compatible with Gremlin queries. (Supports distributed storage and computation
pushdown)
- [HugeGraph-Toolchain](https://github.com/apache/hugegraph-toolchain):
(Toolchain)
- [HugeGraph-Client](/docs/quickstart/hugegraph-client): HugeGraph-Client
provides a RESTful API client for connecting to HugeGraph-Server, currently
only the Java version is implemented, users of other languages can implement it
themselves;
- [HugeGraph-Loader](/docs/quickstart/hugegraph-loader): HugeGraph-Loader is
a data import tool based on HugeGraph-Client, which transforms ordinary text
data into vertices and edges of the graph and inserts them into the graph
database;
diff --git a/content/en/docs/quickstart/hugegraph-ai.md
b/content/en/docs/quickstart/hugegraph-ai.md
index 03bec9b8..f6fbcc97 100644
--- a/content/en/docs/quickstart/hugegraph-ai.md
+++ b/content/en/docs/quickstart/hugegraph-ai.md
@@ -50,7 +50,7 @@ with large models, integration with graph machine learning
components, etc., to
7. After running the web demo, the config file `.env` will be automatically
generated at the path `hugegraph-llm/.env`. Additionally, a prompt-related
configuration file `config_prompt.yaml` will also be generated at the path
`hugegraph-llm/src/hugegraph_llm/resources/demo/config_prompt.yaml`.
- You can modify the content on the web page, and it will be automatically
saved to the configuration file after the corresponding feature is triggered.
You can also modify the file directly without restarting the web application;
simply refresh the page to load your latest changes.
+ You can modify the content on the web page, and it will be automatically
saved to the configuration file after the corresponding feature is triggered.
You can also modify the file directly without restarting the web application;
refresh the page to load your latest changes.
(Optional) To regenerate the config file, you can use `config.generate` with `-u` or `--update`.
```bash
@@ -77,13 +77,13 @@ with large models, integration with graph machine learning
components, etc., to
- Docs:
- text: Build rag index from plain text
- file: Upload file(s) which should be <u>TXT</u> or <u>.docx</u> (Multiple
files can be selected together)
-- [Schema](https://hugegraph.apache.org/docs/clients/restful-api/schema/): (Accept **2 types**)
+- [Schema](https://hugegraph.apache.org/docs/clients/restful-api/schema/): (Accepts **2 types**)
- User-defined Schema (JSON format, follow the
[template](https://github.com/apache/incubator-hugegraph-ai/blob/aff3bbe25fa91c3414947a196131be812c20ef11/hugegraph-llm/src/hugegraph_llm/config/config_data.py#L125)
to modify it)
- Specify the name of the HugeGraph graph instance, it will automatically
get the schema from it (like
**"hugegraph"**)
- Graph extract head: The user-defined prompt of graph extracting
-- If already exist the graph data, you should click "**Rebuild vid Index**" to update the index
+- If the graph data already exists, you should click "**Rebuild vid Index**" to update the index
diff --git a/content/en/docs/quickstart/hugegraph-client.md
b/content/en/docs/quickstart/hugegraph-client.md
index f480a1f2..5506e742 100644
--- a/content/en/docs/quickstart/hugegraph-client.md
+++ b/content/en/docs/quickstart/hugegraph-client.md
@@ -6,7 +6,7 @@ weight: 4
### 1 Overview Of Hugegraph
-[HugeGraph-Client](https://github.com/apache/hugegraph-toolchain) sends HTTP request to HugeGraph-Server to obtain and parse the execution result of Server.
+[HugeGraph-Client](https://github.com/apache/hugegraph-toolchain) sends HTTP requests to HugeGraph-Server to get and parse the execution results of the Server.
We support HugeGraph-Client for Java/Go/[Python](https://github.com/apache/incubator-hugegraph-ai/tree/main/hugegraph-python-client) languages.
You can use [Client-API](/cn/docs/clients/hugegraph-client) to write code to operate HugeGraph, such as adding, deleting, modifying, and querying schema and graph data, or executing gremlin statements.
@@ -14,7 +14,7 @@ You can use [Client-API](/cn/docs/clients/hugegraph-client)
to write code to ope
### 2 What You Need
-- Java 11 (also support Java 8)
+- Java 11 (also supports Java 8)
- Maven 3.5+
### 3 How To Use
@@ -22,7 +22,7 @@ You can use [Client-API](/cn/docs/clients/hugegraph-client)
to write code to ope
The basic steps to use HugeGraph-Client are as follows:
- Build a new Maven project by IDEA or Eclipse
-- Add HugeGraph-Client dependency in pom file;
+- Add the HugeGraph-Client dependency to the pom file;
- Create an object to invoke the interface of HugeGraph-Client
See the complete example in the following section for details.
@@ -34,7 +34,7 @@ See the complete example in the following section for the
detail.
Using IDEA or Eclipse to create the project:
- [Build by
Eclipse](http://www.vogella.com/tutorials/EclipseMaven/article.html)
-- [Build by Intellij
Idea](https://vaadin.com/docs/-/part/framework/getting-started/getting-started-idea.html)
+- [Build by IntelliJ
IDEA](https://vaadin.com/docs/-/part/framework/getting-started/getting-started-idea.html)
#### 4.2 Add Hugegraph-Client Dependency In POM
@@ -44,7 +44,7 @@ Using IDEA or Eclipse to create the project:
<groupId>org.apache.hugegraph</groupId>
<artifactId>hugegraph-client</artifactId>
<!-- Update to the latest release version -->
- <version>1.3.0</version>
+ <version>1.5.0</version>
</dependency>
</dependencies>
```
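
Once the dependency is in place, a quick, optional way to confirm it resolves from the repository (standard Maven commands, nothing HugeGraph-specific):

```bash
# resolve all declared dependencies, including hugegraph-client 1.5.0
mvn dependency:resolve
# compile the project to make sure the client classes are on the classpath
mvn compile
```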
diff --git a/content/en/docs/quickstart/hugegraph-computer.md
b/content/en/docs/quickstart/hugegraph-computer.md
index 1be7fc33..f2a77d4e 100644
--- a/content/en/docs/quickstart/hugegraph-computer.md
+++ b/content/en/docs/quickstart/hugegraph-computer.md
@@ -6,12 +6,12 @@ weight: 6
## 1 HugeGraph-Computer Overview
-The [`HugeGraph-Computer`](https://github.com/apache/incubator-hugegraph-computer) is a distributed graph processing system for HugeGraph (OLAP). It is an implementation of [Pregel](https://kowshik.github.io/JPregel/pregel_paper.pdf). It runs on Kubernetes framework.
+The [`HugeGraph-Computer`](https://github.com/apache/incubator-hugegraph-computer) is a distributed graph processing system for HugeGraph (OLAP). It is an implementation of [Pregel](https://kowshik.github.io/JPregel/pregel_paper.pdf). It runs on the Kubernetes framework.
### Features
- Support distributed MPP graph computing, and integrates with HugeGraph as
graph input/output storage.
-- Based on BSP(Bulk Synchronous Parallel) model, an algorithm performs
computing through multiple parallel iterations, every iteration is a superstep.
+- Based on BSP (Bulk Synchronous Parallel) model, an algorithm performs
computing through multiple parallel iterations, every iteration is a superstep.
- Auto memory management. The framework will never be OOM(Out of Memory) since
it will split some data to disk if it doesn't have enough memory to hold all
the data.
- The part of edges or the messages of super node can be in memory, so you
will never lose it.
- You can load the data from HDFS or HugeGraph, or any other system.
@@ -82,7 +82,7 @@ bin/start-computer.sh -d local -r worker
3.1.5.1 Enable `OLAP` index query for server
-If OLAP index is not enabled, it needs to enable, more reference: [modify-graphs-read-mode](/docs/clients/restful-api/graphs/#634-modify-graphs-read-mode-this-operation-requires-administrator-privileges)
+If the OLAP index is not enabled, it needs to be enabled first. For more, see: [modify-graphs-read-mode](/docs/clients/restful-api/graphs/#634-modify-graphs-read-mode-this-operation-requires-administrator-privileges)
```http
PUT http://localhost:8080/graphs/hugegraph/graph_read_mode
@@ -98,7 +98,7 @@ curl
"http://localhost:8080/graphs/hugegraph/graph/vertices?page&limit=3" | gunz
### 3.2 Run PageRank algorithm in Kubernetes
-> To run algorithm with HugeGraph-Computer you need to deploy HugeGraph-Server first
+> To run an algorithm with HugeGraph-Computer, you need to deploy HugeGraph-Server first
#### 3.2.1 Install HugeGraph-Computer CRD
diff --git a/content/en/docs/quickstart/hugegraph-hubble.md
b/content/en/docs/quickstart/hugegraph-hubble.md
index 2664e8eb..e491eaa5 100644
--- a/content/en/docs/quickstart/hugegraph-hubble.md
+++ b/content/en/docs/quickstart/hugegraph-hubble.md
@@ -28,7 +28,7 @@ The metadata modeling module realizes the construction and
management of graph m
##### Graph Analysis
-By inputting the graph traversal language Gremlin, high-performance general
analysis of graph data can be realized, and functions such as customized
multidimensional path query of vertices can be provided, and three kinds of
graph result display methods are provided, including: graph form, table form,
Json form, and multidimensional display. The data form meets the needs of
various scenarios used by users. It provides functions such as running records
and collection of common statements, [...]
+By inputting the graph traversal language Gremlin, high-performance general
analysis of graph data can be realized, and functions such as customized
multidimensional path query of vertices can be provided, and three kinds of
graph result display methods are provided, including: graph form, table form,
Json form, and multidimensional display. The data form meets the needs of
various scenarios used by users. It provides functions such as running records
and collection of common statements, [...]
##### Task Management
@@ -64,7 +64,7 @@ There are three ways to deploy `hugegraph-hubble`
>
> If `hubble` and `server` are in the same docker network, we **recommend** using the `container_name` (in our example, it is `server`) as the hostname, and `8080` as the port. Or you can use the **host IP** as the hostname, and the port is configured by the host for the server.
-We can use `docker run -itd --name=hubble -p 8088:8088 hugegraph/hubble:1.2.0` to quick start [hubble](https://hub.docker.com/r/hugegraph/hubble).
+We can use `docker run -itd --name=hubble -p 8088:8088 hugegraph/hubble:1.5.0` to quickly start [hubble](https://hub.docker.com/r/hugegraph/hubble).
Alternatively, you can use Docker Compose to start `hubble`. Additionally, if
`hubble` and the graph are in the same Docker network, you can access the graph
using the container name of the graph, eliminating the need for the host
machine's IP address.
@@ -74,13 +74,13 @@ Use `docker-compose up -d`,`docker-compose.yml` is
following:
version: '3'
services:
server:
- image: hugegraph/hugegraph:1.3.0
+ image: hugegraph/hugegraph:1.5.0
container_name: server
ports:
- 8080:8080
hubble:
- image: hugegraph/hubble:1.2.0
+ image: hugegraph/hubble:1.5.0
container_name: hubble
ports:
- 8088:8088
@@ -90,7 +90,7 @@ services:
>
> 1. The docker image of hugegraph-hubble is a convenience release to start
> hugegraph-hubble quickly, but not **official distribution** artifacts. You
> can find more details from [ASF Release Distribution
> Policy](https://infra.apache.org/release-distribution.html#dockerhub).
>
-> 2. Recommand to use `release tag`(like `1.2.0`) for the stable version. Use `latest` tag to experience the newest functions in development.
+> 2. It is recommended to use a `release tag` (like `1.5.0`) for the stable version. Use the `latest` tag to experience the newest features in development.
#### 2.2 Download the Toolchain binary package
@@ -148,7 +148,7 @@ Run `hubble`
bin/start-hubble.sh -d
```
-### 3 Platform Workflow
+### 3 Platform Workflows
The module usage process of the platform is as follows:
@@ -176,7 +176,7 @@ Create graph by filling in the content as follows:
> **Special Note**: If you are starting `hubble` with Docker, and `hubble` and
> the server are on the same host. When configuring the hostname for the graph
> on the Hubble web page, please do not directly set it to
> `localhost/127.0.0.1`. If `hubble` and `server` is in the same docker
> network, we **recommend** using the `container_name` (in our example, it is
> `graph`) as the hostname, and `8080` as the port. Or you can use the **host
> IP** as the hostname, and the port is configured by the h [...]
##### 4.1.2 Graph Access
-Realize the information access of the graph space. After entering, you can perform operations such as multidimensional query analysis, metadata management, data import, and algorithm analysis of the graph.
+Provides access to the graph space. After entering, you can perform operations such as multidimensional query analysis, metadata management, data import, and algorithm analysis on the graph.
<center>
<img src="/docs/images/images-hubble/312图访问.png" alt="image">
@@ -401,7 +401,7 @@ By switching the entrance on the left, flexibly switch the
operation space of mu
##### 4.4.3 Graph Analysis and Processing
-HugeGraph supports Gremlin, a graph traversal query language of Apache TinkerPop3. Gremlin is a general graph database query language. By entering Gremlin statements and clicking execute, you can perform query and analysis operations on graph data, and create and delete vertices/edges. , vertex/edge attribute modification, etc.
+HugeGraph supports Gremlin, the graph traversal query language of Apache TinkerPop3. Gremlin is a general graph database query language. By entering Gremlin statements and clicking execute, you can query and analyze graph data, create and delete vertices/edges, modify vertex/edge attributes, and so on.
After Gremlin query, below is the graph result display area, which provides 3
kinds of graph result display modes: [Graph Mode], [Table Mode], [Json Mode].
@@ -426,11 +426,11 @@ Support zoom, center, full screen, export and other
operations.
##### 4.4.4 Data Details
-Click the vertex/edge entity to view the data details of the vertex/edge,
including: vertex/edge type, vertex ID, attribute and corresponding value,
expand the information display dimension of the graph, and improve the
usability.
+Click the vertex/edge entity to view the data details of the vertex/edge,
including vertex/edge type, vertex ID, attribute and corresponding value,
expand the information display dimension of the graph, and improve the
usability.
##### 4.4.5 Multidimensional Path Query of Graph Results
-In addition to the global query, in-depth customized query and hidden operations can be performed for the vertices in the query result to realize customized mining of graph results.
+In addition to the global query, in-depth customized queries and hide operations can be performed on the vertices in the query result to realize customized mining of graph results.
Right-click a vertex, and the menu entry of the vertex appears, which can be
displayed, inquired, hidden, etc.
- Expand: Click to display the vertices associated with the selected point.
@@ -493,7 +493,7 @@ Left navigation:
- algorithm: OLAP algorithm task
- remove_schema: remove metadata
- rebuild_index: rebuild the index
-2. The list displays the asynchronous task information of the current graph,
including: task ID, task name, task type, creation time, time-consuming,
status, operation, and realizes the management of asynchronous tasks.
+2. The list displays the asynchronous task information of the current graph,
including task ID, task name, task type, creation time, time-consuming, status,
operation, and realizes the management of asynchronous tasks.
3. Support filtering by task type and status
4. Support searching for task ID and task name
5. Asynchronous tasks can be deleted individually or in batches
@@ -525,7 +525,7 @@ Click to view the entry to jump to the task management
list, as follows:
4. View the results
-- The results are displayed in the form of json
+- The results are displayed in the form of JSON
##### 4.5.4 OLAP algorithm tasks
diff --git a/content/en/docs/quickstart/hugegraph-loader.md
b/content/en/docs/quickstart/hugegraph-loader.md
index 36189057..a39e3b5e 100644
--- a/content/en/docs/quickstart/hugegraph-loader.md
+++ b/content/en/docs/quickstart/hugegraph-loader.md
@@ -10,7 +10,7 @@ HugeGraph-Loader is the data import component of HugeGraph,
which can convert da
Currently supported data sources include:
- Local disk file or directory, supports TEXT, CSV and JSON format files,
supports compressed files
-- HDFS file or directory, supports compressed files
+- HDFS file or directory, supporting compressed files
- Mainstream relational databases, such as MySQL, PostgreSQL, Oracle, SQL
Server
Local disk files and HDFS files support resumable uploads.
@@ -29,7 +29,7 @@ There are two ways to get HugeGraph-Loader:
#### 2.1 Use Docker image (Convenient for Test/Dev)
-We can deploy the loader service using `docker run -itd --name loader
hugegraph/loader:1.3.0`. For the data that needs to be loaded, it can be copied
into the loader container either by mounting `-v
/path/to/data/file:/loader/file` or by using `docker cp`.
+We can deploy the loader service using `docker run -itd --name loader
hugegraph/loader:1.5.0`. For the data that needs to be loaded, it can be copied
into the loader container either by mounting `-v
/path/to/data/file:/loader/file` or by using `docker cp`.
Alternatively, to start the loader using docker-compose, the command is
`docker-compose up -d`. An example of the docker-compose.yml is as follows:
@@ -56,7 +56,7 @@ The specific data loading process can be referenced under
[4.5 User Docker to lo
> Note:
> 1. The docker image of hugegraph-loader is a convenience release to start
> hugegraph-loader quickly, but not **official distribution** artifacts. You
> can find more details from [ASF Release Distribution
> Policy](https://infra.apache.org/release-distribution.html#dockerhub).
>
-> 2. Recommand to use `release tag`(like `1.2.0`) for the stable version. Use `latest` tag to experience the newest functions in development.
+> 2. It is recommended to use a `release tag` (like `1.5.0`) for the stable version. Use the `latest` tag to experience the newest features in development.
#### 2.2 Download the compiled archive
@@ -159,7 +159,7 @@ The data sources currently supported by HugeGraph-Loader
include:
The user can specify a local disk file as the data source. If the data is
scattered in multiple files, a certain directory is also supported as the data
source, but multiple directories are not supported as the data source for the
time being.
-For example: my data is scattered in multiple files, part-0, part-1 ...
part-n. To perform the import, it must be ensured that they are placed in one
directory. Then in the loader's mapping file, specify `path` as the directory.
+For example, my data is scattered in multiple files, part-0, part-1 ...
part-n. To perform the import, it must be ensured that they are placed in one
directory. Then in the loader's mapping file, specify `path` as the directory.
Supported file formats include:
@@ -199,11 +199,11 @@ Currently supported compressed file types include: GZIP,
BZ2, XZ, LZMA, SNAPPY_R
###### 3.2.1.3 Mainstream relational database
-The loader also supports some relational databases as data sources, and
currently supports MySQL, PostgreSQL, Oracle and SQL Server.
+The loader also supports some relational databases as data sources, and
currently supports MySQL, PostgreSQL, Oracle, and SQL Server.
However, the requirements for the table structure are relatively strict at
present. If **association query** needs to be done during the import process,
such a table structure is not allowed. The associated query means: after
reading a row of the table, it is found that the value of a certain column
cannot be used directly (such as a foreign key), and you need to do another
query to determine the true value of the column.
-For example: Suppose there are three tables, person, software and created
+For example, suppose there are three tables: person, software, and created
```
// person schema
@@ -274,9 +274,9 @@ The mapping file of the input source is used to describe
how to establish the ma
Specifically, each mapping block contains **an input source** and multiple
**vertex mapping** and **edge mapping** blocks, and the input source block
corresponds to the `local disk file or directory`, ` HDFS file or directory`
and `relational database` are responsible for describing the basic information
of the data source, such as where the data is, what format, what is the
delimiter, etc. The vertex map/edge map is bound to the input source, which
columns of the input source can be sel [...]
-In the simplest terms, each mapping block describes: where is the file to be imported, which type of vertices/edges each line of the file is to be used as, which columns of the file need to be imported, and the corresponding vertices/edges of these columns. what properties etc.
+In the simplest terms, each mapping block describes: where the file to be imported is located, which type of vertices/edges each line of the file should be used as, which columns of the file need to be imported, and which properties of those vertices/edges the columns correspond to, etc.
-> Note: The format of the mapping file before version 0.11.0 and the format after 0.11.0 has changed greatly. For the convenience of expression, the mapping file (format) before 0.11.0 is called version 1.0, and the version after 0.11.0 is version 2.0 . And unless otherwise specified, the "map file" refers to version 2.0.
+> Note: The mapping file format changed greatly between the versions before and after 0.11.0. For convenience of expression, the mapping file (format) before 0.11.0 is called version 1.0, and the one from 0.11.0 onward is version 2.0. Unless otherwise specified, "mapping file" refers to version 2.0.
@@ -310,7 +310,7 @@ In the simplest terms, each mapping block describes: where
is the file to be imp
Two versions of the mapping file are given directly here (the above graph
model and data file are described)
<details>
-<summary>Click to expand/collapse mapping file for version 2.0</summary>
+<summary>Click to expand/collapse the mapping file for version 2.0</summary>
```json
{
@@ -518,7 +518,7 @@ Two versions of the mapping file are given directly here
(the above graph model
<br/>
<details>
-<summary>Click to expand/collapse mapping file for version 1.0</summary>
+<summary>Click to expand/collapse the mapping file for version 1.0</summary>
```json
{
@@ -578,7 +578,7 @@ Two versions of the mapping file are given directly here
(the above graph model
</details>
<br/>
-The 1.0 version of the mapping file is centered on the vertex and edge, and
sets the input source; while the 2.0 version is centered on the input source,
and sets the vertex and edge mapping. Some input sources (such as a file) can
generate both vertices and edges. If you write in the 1.0 format, you need to
write an input block in each of the vertex and edge mapping blocks. The two
input blocks are exactly the same ; and the 2.0 version only needs to write
input once. Therefore, compare [...]
+The 1.0 version of the mapping file is centered on the vertex and edge, and
sets the input source; while the 2.0 version is centered on the input source,
and sets the vertex and edge mapping. Some input sources (such as a file) can
generate both vertices and edges. If you write in the 1.0 format, you need to
write an input block in each of the vertex and edge mapping blocks. The two
input blocks are exactly the same; and the 2.0 version only needs to write
input once. Therefore, compared [...]
In the bin directory of hugegraph-loader-{version}, there is a script tool
`mapping-convert.sh` that can directly convert the mapping file of version 1.0
to version 2.0. The usage is as follows:
@@ -597,7 +597,7 @@ Input sources are currently divided into four categories:
FILE, HDFS, JDBC and K
- id: The id of the input source. This field is used to support some internal
functions. It is not required (it will be automatically generated if it is not
filled in). It is strongly recommended to write it, which is very helpful for
debugging;
- skip: whether to skip the input source, because the JSON file cannot add
comments, if you do not want to import an input source during a certain import,
but do not want to delete the configuration of the input source, you can set it
to true to skip it, the default is false, not required;
- input: input source map block, composite structure
- - type: input source type, file or FILE must be filled;
+ - type: the input source type, file or FILE must be filled;
- path: the path of the local file or directory, the absolute path or the
relative path relative to the mapping file, it is recommended to use the
absolute path, required;
- file_filter: filter files with compound conditions from `path`, compound
structure, currently only supports configuration extensions, represented by
child node `extensions`, the default is "*", which means to keep all files;
- format: the format of the local file, the optional values are CSV,
TEXT and JSON, which must be uppercase and required;
@@ -689,7 +689,7 @@ schema: required
- delimiter: delimiter of the file line, default is comma "," as delimiter,
JSON files do not need to specify, optional;
- charset: encoding charset of the file, default is UTF-8, optional;
- date_format: customized date format, default value is yyyy-MM-dd HH:mm:ss,
optional; if the date is presented in the form of timestamp, this item must be
written as timestamp (fixed);
-- extra_date_formats: a customized list of other date formats, empty by default, optional; each item in the list is an alternate date format to the date_format specified date format;
+- extra_date_formats: a customized list of other date formats, empty by default, optional; each item in the list is an alternate to the date format specified by date_format;
- time_zone: set which time zone the date data is in, default is GMT+8,
optional;
- skipped_line: the line you want to skip, composite structure, currently can
only configure the regular expression of the line to be skipped, described by
the child node regex, the default is not to skip any line, optional;
- early_stop: the record pulled from Kafka broker at a certain time is empty,
stop the task, default is false, only for debugging, optional;
@@ -819,7 +819,7 @@ Sibling `struct-example/load-progress 2019-10-10 12:30:30`.
> Note: The generation of progress files is independent of whether
> --incremental-mode is turned on or not, and a progress file is generated at
> the end of each import.
-If the data file formats are all legal and the import task is stopped by the
user (CTRL + C or kill, kill -9 is not supported), that is to say, if there is
no error record, the next import only needs to be set
+If the data file formats are all legal and the import task is stopped by the
user (CTRL + C or kill, kill -9 is not supported), that is to say, if there is
no error record, the next import only needs to be set to
Continue for the breakpoint.
But if the limit of --max-parse-errors or --max-insert-errors is reached
because too much data is invalid or network abnormality is reached, Loader will
record these original rows that failed to insert into
@@ -827,7 +827,7 @@ In the failed file, after the user modifies the data lines
in the failed file, s
Of course, if there is still a problem with the modified data line, it will be
logged again to the failure file (don't worry about duplicate lines).
Each vertex map or edge map will generate its own failure file when data
insertion fails. The failure file is divided into a parsing failure file
(suffix .parse-error) and an insertion failure file (suffix .insert-error).
-They are stored in the `${struct}/current` directory. For example, there is a vertex mapping person and an edge mapping knows in the mapping file, each of which has some error lines. When the Loader exits, you will see the following files in the `${struct}/current` directory:
+They are stored in the `${struct}/current` directory. For example, if there is a vertex mapping person and an edge mapping knows in the mapping file, each with some error lines, then when the Loader exits you will see the following files in the `${struct}/current` directory:
- person-b4cd32ab.parse-error: Vertex map person parses wrong data
- person-b4cd32ab.insert-error: Vertex map person inserts wrong data
@@ -838,7 +838,7 @@ They are stored in the `${struct}/current` directory. For
example, there is a ve
##### 3.4.3 logs directory file description
-The log and error data during program execution will be written into
hugegraph-loader.log file.
+The log and error data during program execution will be written into the
hugegraph-loader.log file.
##### 3.4.4 Execute command
@@ -892,7 +892,7 @@ Edge file: `example/file/edge_created.json`
#### 4.2 Write schema
<details>
-<summary>Click to expand/collapse schema file:
example/file/schema.groovy</summary>
+<summary>Click to expand/collapse the schema file:
example/file/schema.groovy</summary>
```groovy
schema.propertyKey("name").asText().ifNotExist().create();
@@ -1026,7 +1026,7 @@ If you just want to try out the loader, you can import
the built-in example data
If using custom data, before importing data with the loader, we need to copy
the data into the container.
-First, following the steps in [4.1-4.3](#41-prepare-data), we can prepare the
data and then use `docker cp` to copy the prepared data into the loader
container.
+First, following the steps in [4.1–4.3](#41-prepare-data), we can prepare the
data and then use `docker cp` to copy the prepared data into the loader
container.
Suppose we've prepared the corresponding dataset following the above steps,
stored in the `hugegraph-dataset` folder with the following file structure:
@@ -1055,9 +1055,9 @@ edge_created.json edge_knows.json schema.groovy
struct.json vertex_person.cs
Taking the built-in example dataset as an example, we can use the following
command to load the data.
-If you need to import your custom dataset, you just need to modify the paths for `-f` (data script) and `-s` (schema) configurations.
+If you need to import your custom dataset, you only need to modify the paths for the `-f` (data script) and `-s` (schema) configurations.
-"You can refer to [3.4.1 Parameter description](#341-parameter-description) for the rest of the parameters.
+You can refer to [3.4.1 Parameter description](#341-parameter-description) for the rest of the parameters.
```bash
docker exec -it loader bin/hugegraph-loader.sh -g hugegraph -f example/file/struct.json -s example/file/schema.groovy -h server -p 8080
@@ -1071,7 +1071,7 @@ docker exec -it loader bin/hugegraph-loader.sh -g
hugegraph -f /loader/dataset/s
> If `loader` and `server` are in the same Docker network, you can specify `-h
> {server_container_name}`; otherwise, you need to specify the IP of the
> `server` host (in our example, `server_container_name` is `server`).
-Then we can obverse the result:
+Then we can see the result:
```bash
HugeGraphLoader worked in NORMAL MODE
@@ -1125,7 +1125,7 @@ The results of the execution will be similar to those
shown in [4.5.1](#451-use-
> HugeGraph Toolchain version: toolchain-1.0.0
>
The parameters of `spark-loader` are divided into two parts. Note: Because the
abbreviations of
-these two parameter names have overlapping parts, please use the full name of the parameter.
+these two parameter names have overlapping parts; please use the full name of the parameter.
And there is no need to guarantee the order between the two parameters.
- hugegraph parameters (Reference: [hugegraph-loader parameter
description](https://hugegraph.apache.org/docs/quickstart/hugegraph-loader/#341-parameter-description)
)
- Spark task submission parameters (Reference: [Submitting
Applications](https://spark.apache.org/docs/3.3.0/submitting-applications.html#content))
diff --git a/content/en/docs/quickstart/hugegraph-server.md
b/content/en/docs/quickstart/hugegraph-server.md
index 75d914e3..af585c60 100644
--- a/content/en/docs/quickstart/hugegraph-server.md
+++ b/content/en/docs/quickstart/hugegraph-server.md
@@ -42,11 +42,11 @@ There are four ways to deploy HugeGraph-Server components:
<!-- 3.1 is linked by another place. if change 3.1's title, please check -->
You can refer to [Docker deployment
guide](https://hub.docker.com/r/hugegraph/hugegraph).
-We can use `docker run -itd --name=graph -p 8080:8080 hugegraph/hugegraph:1.3.0` to quickly start an inner `HugeGraph server` with `RocksDB` in background.
+We can use `docker run -itd --name=graph -p 8080:8080 hugegraph/hugegraph:1.5.0` to quickly start a `HugeGraph server` with built-in `RocksDB` in the background.
Optional:
1. use `docker exec -it graph bash` to enter the container to do some
operations.
-2. use `docker run -itd --name=graph -p 8080:8080 -e PRELOAD="true" hugegraph/hugegraph:1.3.0` to start with a **built-in** example graph. We can use `RESTful API` to verify the result. The detailed step can refer to [5.1.7](#517-create-an-example-graph-when-startup)
+2. use `docker run -itd --name=graph -p 8080:8080 -e PRELOAD="true" hugegraph/hugegraph:1.5.0` to start with a **built-in** example graph. We can use the `RESTful API` to verify the result. For detailed steps, refer to [5.1.7](#517-create-an-example-graph-when-startup)
3. use `-e PASSWORD=123456` to enable auth mode and set the password for
admin. You can find more details from [Config
Authentication](/docs/config/config-authentication#Use-docker-to-enble-authentication-mode)
If you use docker desktop, you can set the option like:
@@ -60,7 +60,7 @@ Also, if we want to manage the other Hugegraph related
instances in one file, we
version: '3'
services:
server:
- image: hugegraph/hugegraph:1.3.0
+ image: hugegraph/hugegraph:1.5.0
container_name: server
# environment:
# - PRELOAD=true
@@ -75,13 +75,13 @@ services:
>
> 1. The docker image of hugegraph is a convenience release to start hugegraph
> quickly, but not **official distribution** artifacts. You can find more
> details from [ASF Release Distribution
> Policy](https://infra.apache.org/release-distribution.html#dockerhub).
>
-> 2. Recommend to use `release tag`(like `1.3.0`/`1.5.0`) for the stable version. Use `latest` tag to experience the newest functions in development.
+> 2. It is recommended to use a `release tag` (like `1.5.0`) for the stable version. Use the `latest` tag to experience the newest features in development.
#### 3.2 Download the binary tarball
You could download the binary tarball from the download page of the ASF site like this:
```bash
-# use the latest version, here is 1.3.0 for example
+# use the latest version, here is 1.5.0 for example
wget https://downloads.apache.org/incubator/hugegraph/{version}/apache-hugegraph-incubating-{version}.tar.gz
tar zxf *hugegraph*.tar.gz
@@ -122,7 +122,7 @@ The execution log is as follows:
```bash
......
-[INFO] Reactor Summary for hugegraph 1.3.0:
+[INFO] Reactor Summary for hugegraph 1.5.0:
[INFO]
[INFO] hugegraph .......................................... SUCCESS [ 2.405 s]
[INFO] hugegraph-core ..................................... SUCCESS [ 13.405 s]
@@ -153,8 +153,8 @@ Of course, you should download the tarball of
`HugeGraph-Toolchain` first.
```bash
# download toolchain binary package, it includes loader + tool + hubble
-# please check the latest version (e.g. here is 1.3.0)
-wget https://downloads.apache.org/incubator/hugegraph/1.3.0/apache-hugegraph-toolchain-incubating-1.3.0.tar.gz
+# please check the latest version (e.g. here is 1.5.0)
+wget https://downloads.apache.org/incubator/hugegraph/1.5.0/apache-hugegraph-toolchain-incubating-1.5.0.tar.gz
tar zxf *hugegraph-*.tar.gz
# enter the tool's package
@@ -522,7 +522,7 @@ volumes:
hugegraph-data:
```
-In this yaml file, configuration parameters related to Cassandra need to be
passed as environment variables in the format of `hugegraph.<parameter_name>`.
+In this YAML file, configuration parameters related to Cassandra need to be
passed as environment variables in the format of `hugegraph.<parameter_name>`.
Specifically, in the configuration file `hugegraph.properties`, there are settings like `backend=xxx` and `cassandra.host=xxx`. To configure these settings during the process of passing environment variables, we need to prepend `hugegraph.` to these configurations, like `hugegraph.backend` and `hugegraph.cassandra.host`.
@@ -532,11 +532,11 @@ The rest of the configurations can be referenced under [4
config](#4-config)
##### 5.2.2 Create example graph when starting server
-Set the environment variable `PRELOAD=true` when starting Docker in order to
load data during the execution of the startup script.
+Set the environment variable `PRELOAD=true` when starting Docker to load data
during the execution of the startup script.
1. Use `docker run`
- Use `docker run -itd --name=server -p 8080:8080 -e PRELOAD=true hugegraph/hugegraph:1.3.0`
+ Use `docker run -itd --name=server -p 8080:8080 -e PRELOAD=true hugegraph/hugegraph:1.5.0`
2. Use `docker-compose`
@@ -546,7 +546,7 @@ Set the environment variable `PRELOAD=true` when starting
Docker in order to loa
version: '3'
services:
server:
- image: hugegraph/hugegraph:1.3.0
+ image: hugegraph/hugegraph:1.5.0
container_name: server
environment:
- PRELOAD=true