This is an automated email from the ASF dual-hosted git repository.
ming pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-hugegraph-doc.git
The following commit(s) were added to refs/heads/master by this push:
new 656bcbd6 fix: update all org.apache packages (#287)
656bcbd6 is described below
commit 656bcbd64aab9896ec9cb3e32557b0cc00b5f637
Author: imbajin <[email protected]>
AuthorDate: Tue Sep 19 14:14:13 2023 +0800
fix: update all org.apache packages (#287)
---
content/cn/docs/clients/restful-api/task.md | 6 +-
content/cn/docs/config/config-authentication.md | 26 ++++-----
content/cn/docs/config/config-guide.md | 18 +++---
content/cn/docs/config/config-option.md | 14 ++---
content/cn/docs/guides/custom-plugin.md | 74 ++++++++++++-------------
content/cn/docs/quickstart/hugegraph-spark.md | 12 ++--
content/en/docs/clients/restful-api/task.md | 12 ++--
content/en/docs/config/config-authentication.md | 20 ++++---
content/en/docs/config/config-guide.md | 2 +-
content/en/docs/config/config-option.md | 14 ++---
content/en/docs/guides/custom-plugin.md | 34 ++++++------
content/en/docs/quickstart/hugegraph-spark.md | 12 ++--
12 files changed, 123 insertions(+), 121 deletions(-)
diff --git a/content/cn/docs/clients/restful-api/task.md b/content/cn/docs/clients/restful-api/task.md
index f4905014..7ed90ac9 100644
--- a/content/cn/docs/clients/restful-api/task.md
+++ b/content/cn/docs/clients/restful-api/task.md
@@ -39,7 +39,7 @@ GET http://localhost:8080/graphs/hugegraph/tasks?status=success
"task_retries": 0,
"id": 2,
"task_type": "gremlin",
- "task_callable": "com.baidu.hugegraph.api.job.GremlinAPI$GremlinJob",
+ "task_callable": "org.apache.hugegraph.api.job.GremlinAPI$GremlinJob",
"task_input": "{\"gremlin\":\"hugegraph.traversal().V()\",\"bindings\":{},\"language\":\"gremlin-groovy\",\"aliases\":{\"hugegraph\":\"graph\"}}"
}]
}
@@ -72,7 +72,7 @@ GET http://localhost:8080/graphs/hugegraph/tasks/2
"task_retries": 0,
"id": 2,
"task_type": "gremlin",
- "task_callable": "com.baidu.hugegraph.api.job.GremlinAPI$GremlinJob",
+ "task_callable": "org.apache.hugegraph.api.job.GremlinAPI$GremlinJob",
"task_input": "{\"gremlin\":\"hugegraph.traversal().V()\",\"bindings\":{},\"language\":\"gremlin-groovy\",\"aliases\":{\"hugegraph\":\"graph\"}}"
}
```
@@ -113,7 +113,7 @@ DELETE http://localhost:8080/graphs/hugegraph/tasks/2
PUT http://localhost:8080/graphs/hugegraph/tasks/2?action=cancel
```
-> 请保证在10秒内发送该请求,如果超过10秒发送,任务可能已经执行完成,无法取消。
+> 请保证在 10 秒内发送该请求,如果超过 10 秒发送,任务可能已经执行完成,无法取消。
##### Response Status
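As a quick aside for readers applying the hunk above: the cancel call is time-sensitive. A shell sketch that composes the PUT URL (host, port, and task id come from the doc's example and are assumptions for any real deployment):

```shell
# Compose the cancel URL for the doc's example task (id 2).
# Send it within 10 seconds of task creation; after that the task
# may already have finished and cancellation becomes a no-op.
TASK_ID=2
CANCEL_URL="http://localhost:8080/graphs/hugegraph/tasks/${TASK_ID}?action=cancel"
echo "$CANCEL_URL"
# To actually send it: curl -X PUT "$CANCEL_URL"
```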
diff --git a/content/cn/docs/config/config-authentication.md b/content/cn/docs/config/config-authentication.md
index ee129f15..5c07301b 100644
--- a/content/cn/docs/config/config-authentication.md
+++ b/content/cn/docs/config/config-authentication.md
@@ -7,7 +7,7 @@ weight: 3
### 概述
HugeGraph 为了方便不同用户场景下的鉴权使用,目前内置了两套权限模式:
1. 简单的`ConfigAuthenticator`模式,通过本地配置文件存储用户名和密码 (仅支持单 GraphServer)
-2. 完备的`StandardAuthenticator`模式,支持多用户认证、以及细粒度的权限访问控制,采用基于 “用户-用户组-操作-资源” 的 4 层设计,灵活控制用户角色与权限 (支持多 GraphServer)
+2. 完备的`StandardAuthenticator`模式,支持多用户认证、以及细粒度的权限访问控制,采用基于“用户 - 用户组 - 操作 - 资源”的 4 层设计,灵活控制用户角色与权限 (支持多 GraphServer)
其中 `StandardAuthenticator` 模式的几个核心设计:
- 初始化时创建超级管理员 (`admin`) 用户,后续通过超级管理员创建其它用户,新创建的用户被分配足够权限后,可以创建或管理更多的用户
@@ -33,15 +33,15 @@ GET http://localhost:8080/graphs/hugegraph/schema/vertexlabels
Authorization: Basic admin xxxx
```
-#### StandardAuthenticator模式
+#### StandardAuthenticator 模式
`StandardAuthenticator`模式是通过在数据库后端存储用户信息来支持用户认证和权限控制,该实现基于数据库存储的用户的名称与密码进行认证(密码已被加密),基于用户的角色来细粒度控制用户权限。下面是具体的配置流程(重启服务生效):
在配置文件`gremlin-server.yaml`中配置`authenticator`及其`rest-server`文件路径:
```yaml
authentication: {
- authenticator: com.baidu.hugegraph.auth.StandardAuthenticator,
- authenticationHandler: com.baidu.hugegraph.auth.WsAndHttpBasicAuthHandler,
+ authenticator: org.apache.hugegraph.auth.StandardAuthenticator,
+ authenticationHandler: org.apache.hugegraph.auth.WsAndHttpBasicAuthHandler,
config: {tokens: conf/rest-server.properties}
}
```
@@ -49,11 +49,11 @@ authentication: {
在配置文件`rest-server.properties`中配置`authenticator`及其`graph_store`信息:
```properties
-auth.authenticator=com.baidu.hugegraph.auth.StandardAuthenticator
+auth.authenticator=org.apache.hugegraph.auth.StandardAuthenticator
auth.graph_store=hugegraph
# auth client config
-# 如果是分开部署 GraphServer 和 AuthServer, 还需要指定下面的配置, 地址填写 AuthServer 的 IP:RPC 端口
+# 如果是分开部署 GraphServer 和 AuthServer, 还需要指定下面的配置,地址填写 AuthServer 的 IP:RPC 端口
#auth.remote_url=127.0.0.1:8899,127.0.0.1:8898,127.0.0.1:8897
```
其中,`graph_store`配置项是指使用哪一个图来存储用户信息,如果存在多个图的话,选取任意一个均可。
@@ -61,12 +61,12 @@ auth.graph_store=hugegraph
在配置文件`hugegraph{n}.properties`中配置`gremlin.graph`信息:
```properties
-gremlin.graph=com.baidu.hugegraph.auth.HugeFactoryAuthProxy
+gremlin.graph=org.apache.hugegraph.auth.HugeFactoryAuthProxy
```
然后详细的权限 API 调用和说明请参考 [Authentication-API](/docs/clients/restful-api/auth) 文档
-#### ConfigAuthenticator模式
+#### ConfigAuthenticator 模式
`ConfigAuthenticator`模式是通过预先在配置文件中设置用户信息来支持用户认证,该实现是基于配置好的静态`tokens`来验证用户是否合法。下面是具体的配置流程(重启服务生效):
@@ -74,8 +74,8 @@ gremlin.graph=com.baidu.hugegraph.auth.HugeFactoryAuthProxy
```yaml
authentication: {
- authenticator: com.baidu.hugegraph.auth.ConfigAuthenticator,
- authenticationHandler: com.baidu.hugegraph.auth.WsAndHttpBasicAuthHandler,
+ authenticator: org.apache.hugegraph.auth.ConfigAuthenticator,
+ authenticationHandler: org.apache.hugegraph.auth.WsAndHttpBasicAuthHandler,
config: {tokens: conf/rest-server.properties}
}
```
@@ -83,7 +83,7 @@ authentication: {
在配置文件`rest-server.properties`中配置`authenticator`及其`tokens`信息:
```properties
-auth.authenticator=com.baidu.hugegraph.auth.ConfigAuthenticator
+auth.authenticator=org.apache.hugegraph.auth.ConfigAuthenticator
auth.admin_token=token-value-a
auth.user_tokens=[hugegraph1:token-value-1, hugegraph2:token-value-2]
```
@@ -91,9 +91,9 @@ auth.user_tokens=[hugegraph1:token-value-1, hugegraph2:token-value-2]
在配置文件`hugegraph{n}.properties`中配置`gremlin.graph`信息:
```properties
-gremlin.graph=com.baidu.hugegraph.auth.HugeFactoryAuthProxy
+gremlin.graph=org.apache.hugegraph.auth.HugeFactoryAuthProxy
```
### 自定义用户认证系统
-如果需要支持更加灵活的用户系统,可自定义authenticator进行扩展,自定义authenticator实现接口`com.baidu.hugegraph.auth.HugeAuthenticator`即可,然后修改配置文件中`authenticator`配置项指向该实现。
+如果需要支持更加灵活的用户系统,可自定义 authenticator 进行扩展,自定义 authenticator 实现接口`org.apache.hugegraph.auth.HugeAuthenticator`即可,然后修改配置文件中`authenticator`配置项指向该实现。
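This commit's theme is the mechanical `com.baidu.hugegraph` to `org.apache.hugegraph` package rename shown throughout the hunks above. For anyone with local config files still carrying the old names, a hedged migration sketch (the temp file and the single property below are illustrative, not a real deployment path):

```shell
# Rewrite legacy package names in a throwaway copy of rest-server.properties.
CONF=$(mktemp)
printf 'auth.authenticator=com.baidu.hugegraph.auth.StandardAuthenticator\n' > "$CONF"
sed -i 's/com\.baidu\.hugegraph/org.apache.hugegraph/g' "$CONF"
cat "$CONF"
# -> auth.authenticator=org.apache.hugegraph.auth.StandardAuthenticator
rm -f "$CONF"
```

The same substitution applies to `gremlin-server.yaml` and `hugegraph{n}.properties`; restart the server afterwards for the change to take effect.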
diff --git a/content/cn/docs/config/config-guide.md b/content/cn/docs/config/config-guide.md
index d50f8d32..88de1afc 100644
--- a/content/cn/docs/config/config-guide.md
+++ b/content/cn/docs/config/config-guide.md
@@ -10,10 +10,10 @@ weight: 1
主要的配置文件包括:gremlin-server.yaml、rest-server.properties 和 hugegraph.properties
-HugeGraphServer 内部集成了 GremlinServer 和 RestServer,而 gremlin-server.yaml 和 rest-server.properties 就是用来配置这两个Server的。
+HugeGraphServer 内部集成了 GremlinServer 和 RestServer,而 gremlin-server.yaml 和 rest-server.properties 就是用来配置这两个 Server 的。
-- [GremlinServer](http://tinkerpop.apache.org/docs/3.2.3/reference/#gremlin-server):GremlinServer接受用户的gremlin语句,解析后转而调用Core的代码。
-- RestServer:提供RESTful API,根据不同的HTTP请求,调用对应的Core API,如果用户请求体是gremlin语句,则会转发给GremlinServer,实现对图数据的操作。
+- [GremlinServer](http://tinkerpop.apache.org/docs/3.2.3/reference/#gremlin-server):GremlinServer 接受用户的 gremlin 语句,解析后转而调用 Core 的代码。
+- RestServer:提供 RESTful API,根据不同的 HTTP 请求,调用对应的 Core API,如果用户请求体是 gremlin 语句,则会转发给 GremlinServer,实现对图数据的操作。
下面对这三个配置文件逐一介绍。
@@ -26,7 +26,7 @@ gremlin-server.yaml 文件默认的内容如下:
#host: 127.0.0.1
#port: 8182
-# Gremlin查询中的超时时间(以毫秒为单位)
+# Gremlin 查询中的超时时间(以毫秒为单位)
evaluationTimeout: 30000
channelizer: org.apache.tinkerpop.gremlin.server.channel.WsAndHttpChannelizer
@@ -141,9 +141,9 @@ ssl: {
用户可以通过 [Gremlin-Console](/clients/gremlin-console.html) 快速体验 HugeGraph 的特性,但是不支持大规模数据导入,
推荐使用 HTTP 的通信方式,HugeGraph 的外围组件都是基于 HTTP 实现的;
-默认GremlinServer是服务在 localhost:8182,如果需要修改,配置 host、port 即可
+默认 GremlinServer 是服务在 localhost:8182,如果需要修改,配置 host、port 即可
-- host:部署 GremlinServer 机器的机器名或 IP,目前 HugeGraphServer 不支持分布式部署,且GremlinServer不直接暴露给用户;
+- host:部署 GremlinServer 机器的机器名或 IP,目前 HugeGraphServer 不支持分布式部署,且 GremlinServer 不直接暴露给用户;
- port:部署 GremlinServer 机器的端口;
同时需要在 rest-server.properties 中增加对应的配置项 gremlinserver.url=http://host:port
@@ -183,7 +183,7 @@ hugegraph.properties 是一类文件,因为如果系统存在多个图,则
```properties
# gremlin entrence to create graph
-gremlin.graph=com.baidu.hugegraph.HugeFactory
+gremlin.graph=org.apache.hugegraph.HugeFactory
# cache config
#schema.cache_capacity=100000
@@ -272,13 +272,13 @@ cassandra.password=
- gremlin.graph:GremlinServer 的启动入口,用户不要修改此项;
- backend:使用的后端存储,可选值有 memory、cassandra、scylladb、mysql、hbase、postgresql 和 rocksdb;
-- serializer:主要为内部使用,用于将 schema、vertex 和 edge 序列化到后端,对应的可选值为 text、cassandra、scylladb 和 binary;(注:rocksdb后端值需是binary,其他后端backend与serializer值需保持一致,如hbase后端该值为hbase)
+- serializer:主要为内部使用,用于将 schema、vertex 和 edge 序列化到后端,对应的可选值为 text、cassandra、scylladb 和 binary;(注:rocksdb 后端值需是 binary,其他后端 backend 与 serializer 值需保持一致,如 hbase 后端该值为 hbase)
- store:图存储到后端使用的数据库名,在 cassandra 和 scylladb 中就是 keyspace 名,此项的值与 GremlinServer 和 RestServer 中的图名并无关系,但是出于直观考虑,建议仍然使用相同的名字;
- cassandra.host:backend 为 cassandra 或 scylladb 时此项才有意义,cassandra/scylladb 集群的 seeds;
- cassandra.port:backend 为 cassandra 或 scylladb 时此项才有意义,cassandra/scylladb 集群的 native port;
- rocksdb.data_path:backend 为 rocksdb 时此项才有意义,rocksdb 的数据目录
- rocksdb.wal_path:backend 为 rocksdb 时此项才有意义,rocksdb 的日志目录
-- admin.token: 通过一个token来获取服务器的配置信息,例如:<http://localhost:8080/graphs/hugegraph/conf?token=162f7848-0b6d-4faf-b557-3a0797869c55>
+- admin.token: 通过一个 token 来获取服务器的配置信息,例如:<http://localhost:8080/graphs/hugegraph/conf?token=162f7848-0b6d-4faf-b557-3a0797869c55>
### 5 多图配置
diff --git a/content/cn/docs/config/config-option.md b/content/cn/docs/config/config-option.md
index f20413b5..6481a6bb 100644
--- a/content/cn/docs/config/config-option.md
+++ b/content/cn/docs/config/config-option.md
@@ -15,7 +15,7 @@ weight: 2
| graphs | hugegraph: conf/hugegraph.properties | The map of graphs with name and config file path. |
| scriptEvaluationTimeout | 30000 | The timeout for gremlin script execution(millisecond). |
| channelizer | org.apache.tinkerpop.gremlin.server.channel.HttpChannelizer | Indicates the protocol which the Gremlin Server provides service. |
-| authentication | authenticator: com.baidu.hugegraph.auth.StandardAuthenticator, config: {tokens: conf/rest-server.properties} | The authenticator and config(contains tokens path) of authentication mechanism. |
+| authentication | authenticator: org.apache.hugegraph.auth.StandardAuthenticator, config: {tokens: conf/rest-server.properties} | The authenticator and config(contains tokens path) of authentication mechanism. |
### Rest Server & API 配置项
@@ -41,10 +41,10 @@ weight: 2
| batch.max_vertices_per_batch | 500 | The maximum number of vertices submitted per batch. |
| batch.max_write_ratio | 50 | The maximum thread ratio for batch writing, only take effect if the batch.max_write_threads is 0. |
| batch.max_write_threads | 0 | The maximum threads for batch writing, if the value is 0, the actual value will be set to batch.max_write_ratio * restserver.max_worker_threads. |
-| auth.authenticator | | The class path of authenticator implementation. e.g., com.baidu.hugegraph.auth.StandardAuthenticator, or com.baidu.hugegraph.auth.ConfigAuthenticator. |
-| auth.admin_token | 162f7848-0b6d-4faf-b557-3a0797869c55 | Token for administrator operations, only for com.baidu.hugegraph.auth.ConfigAuthenticator. |
-| auth.graph_store | hugegraph | The name of graph used to store authentication information, like users, only for com.baidu.hugegraph.auth.StandardAuthenticator. |
-| auth.user_tokens | [hugegraph:9fd95c9c-711b-415b-b85f-d4df46ba5c31] | The map of user tokens with name and password, only for com.baidu.hugegraph.auth.ConfigAuthenticator. |
+| auth.authenticator | | The class path of authenticator implementation. e.g., org.apache.hugegraph.auth.StandardAuthenticator, or org.apache.hugegraph.auth.ConfigAuthenticator. |
+| auth.admin_token | 162f7848-0b6d-4faf-b557-3a0797869c55 | Token for administrator operations, only for org.apache.hugegraph.auth.ConfigAuthenticator. |
+| auth.graph_store | hugegraph | The name of graph used to store authentication information, like users, only for org.apache.hugegraph.auth.StandardAuthenticator. |
+| auth.user_tokens | [hugegraph:9fd95c9c-711b-415b-b85f-d4df46ba5c31] | The map of user tokens with name and password, only for org.apache.hugegraph.auth.ConfigAuthenticator. |
| auth.audit_log_rate | 1000.0 | The max rate of audit log output per user, default value is 1000 records per second. |
| auth.cache_capacity | 10240 | The max cache capacity of each auth cache item. |
| auth.cache_expire | 600 | The expiration time in seconds of vertex cache. |
@@ -59,7 +59,7 @@ weight: 2
| config option | default value | description [...]
|---------------------------------------|----------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- [...]
-| gremlin.graph | com.baidu.hugegraph.HugeFactory | Gremlin entrance to create graph. [...]
+| gremlin.graph | org.apache.hugegraph.HugeFactory | Gremlin entrance to create graph. [...]
| backend | rocksdb | The data store type, available values are [memory, rocksdb, cassandra, scylladb, hbase, mysql]. [...]
| serializer | binary | The serializer for backend store, available values are [text, binary, cassandra, hbase, mysql]. [...]
| store | hugegraph | The database name like Cassandra Keyspace. [...]
@@ -274,6 +274,6 @@ weight: 2
其它与 MySQL 后端一致。
-> PostgreSQL 后端的 driver 和 url 应该设置为:
+> PostgreSQL 后端的 driver 和 url 应该设置为:
> - `jdbc.driver=org.postgresql.Driver`
> - `jdbc.url=jdbc:postgresql://localhost:5432/`
diff --git a/content/cn/docs/guides/custom-plugin.md b/content/cn/docs/guides/custom-plugin.md
index 7a74ae71..ce11904e 100644
--- a/content/cn/docs/guides/custom-plugin.md
+++ b/content/cn/docs/guides/custom-plugin.md
@@ -1,14 +1,14 @@
---
-title: "HugeGraph Plugin机制及插件扩展流程"
+title: "HugeGraph Plugin 机制及插件扩展流程"
linkTitle: "HugeGraph Plugin"
weight: 3
---
### 背景
-1. HugeGraph不仅开源开放,而且要做到简单易用,一般用户无需更改源码也能轻松增加插件扩展功能。
-2. HugeGraph支持多种内置存储后端,也允许用户无需更改现有源码的情况下扩展自定义后端。
-3. HugeGraph支持全文检索,全文检索功能涉及到各语言分词,目前已内置8种中文分词器,也允许用户无需更改现有源码的情况下扩展自定义分词器。
+1. HugeGraph 不仅开源开放,而且要做到简单易用,一般用户无需更改源码也能轻松增加插件扩展功能。
+2. HugeGraph 支持多种内置存储后端,也允许用户无需更改现有源码的情况下扩展自定义后端。
+3. HugeGraph 支持全文检索,全文检索功能涉及到各语言分词,目前已内置 8 种中文分词器,也允许用户无需更改现有源码的情况下扩展自定义分词器。
### 可扩展维度
@@ -21,21 +21,21 @@ weight: 3
### 插件实现机制
-1. HugeGraph提供插件接口HugeGraphPlugin,通过Java SPI机制支持插件化
-2. HugeGraph提供了4个扩展项注册函数:`registerOptions()`、`registerBackend()`、`registerSerializer()`、`registerAnalyzer()`
-3. 插件实现者实现相应的Options、Backend、Serializer或Analyzer的接口
-4. 插件实现者实现HugeGraphPlugin接口的`register()`方法,在该方法中注册上述第3点所列的具体实现类,并打成jar包
-5. 插件使用者将jar包放在HugeGraph Server安装目录的`plugins`目录下,修改相关配置项为插件自定义值,重启即可生效
+1. HugeGraph 提供插件接口 HugeGraphPlugin,通过 Java SPI 机制支持插件化
+2. HugeGraph 提供了 4 个扩展项注册函数:`registerOptions()`、`registerBackend()`、`registerSerializer()`、`registerAnalyzer()`
+3. 插件实现者实现相应的 Options、Backend、Serializer 或 Analyzer 的接口
+4. 插件实现者实现 HugeGraphPlugin 接口的`register()`方法,在该方法中注册上述第 3 点所列的具体实现类,并打成 jar 包
+5. 插件使用者将 jar 包放在 HugeGraph Server 安装目录的`plugins`目录下,修改相关配置项为插件自定义值,重启即可生效
### 插件实现流程实例
-#### 1 新建一个maven项目
+#### 1 新建一个 maven 项目
##### 1.1 项目名称取名:hugegraph-plugin-demo
-##### 1.2 添加`hugegraph-core` Jar包依赖
+##### 1.2 添加`hugegraph-core` Jar 包依赖
-maven pom.xml详细内容如下:
+maven pom.xml 详细内容如下:
```xml
<?xml version="1.0" encoding="UTF-8"?>
@@ -45,7 +45,7 @@ maven pom.xml详细内容如下:
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
- <groupId>com.baidu.hugegraph</groupId>
+ <groupId>org.apache.hugegraph</groupId>
<artifactId>hugegraph-plugin-demo</artifactId>
<version>1.0.0</version>
<packaging>jar</packaging>
@@ -54,7 +54,7 @@ maven pom.xml详细内容如下:
<dependencies>
<dependency>
- <groupId>com.baidu.hugegraph</groupId>
+ <groupId>org.apache.hugegraph</groupId>
<artifactId>hugegraph-core</artifactId>
<version>${project.version}</version>
</dependency>
@@ -67,12 +67,12 @@ maven pom.xml详细内容如下:
##### 2.1 扩展自定义后端
-###### 2.1.1 实现接口BackendStoreProvider
+###### 2.1.1 实现接口 BackendStoreProvider
-- 可实现接口:`com.baidu.hugegraph.backend.store.BackendStoreProvider`
-- 或者继承抽象类:`com.baidu.hugegraph.backend.store.AbstractBackendStoreProvider`
+- 可实现接口:`org.apache.hugegraph.backend.store.BackendStoreProvider`
+- 或者继承抽象类:`org.apache.hugegraph.backend.store.AbstractBackendStoreProvider`
-以RocksDB后端RocksDBStoreProvider为例:
+以 RocksDB 后端 RocksDBStoreProvider 为例:
```java
public class RocksDBStoreProvider extends AbstractBackendStoreProvider {
@@ -103,9 +103,9 @@ public class RocksDBStoreProvider extends AbstractBackendStoreProvider {
}
```
-###### 2.1.2 实现接口BackendStore
+###### 2.1.2 实现接口 BackendStore
-BackendStore接口定义如下:
+BackendStore 接口定义如下:
```java
public interface BackendStore {
@@ -150,7 +150,7 @@ public interface BackendStore {
###### 2.1.3 扩展自定义序列化器
-序列化器必须继承抽象类:`com.baidu.hugegraph.backend.serializer.AbstractSerializer`(`implements GraphSerializer, SchemaSerializer`)
+序列化器必须继承抽象类:`org.apache.hugegraph.backend.serializer.AbstractSerializer`(`implements GraphSerializer, SchemaSerializer`)
主要接口的定义如下:
```java
@@ -183,11 +183,11 @@ public interface SchemaSerializer {
增加自定义后端时,可能需要增加新的配置项,实现流程主要包括:
-- 增加配置项容器类,并实现接口`com.baidu.hugegraph.config.OptionHolder`
+- 增加配置项容器类,并实现接口`org.apache.hugegraph.config.OptionHolder`
- 提供单例方法`public static OptionHolder instance()`,并在对象初始化时调用方法`OptionHolder.registerOptions()`
- 增加配置项声明,单值配置项类型为`ConfigOption`、多值配置项类型为`ConfigListOption`
-以RocksDB配置项定义为例:
+以 RocksDB 配置项定义为例:
```java
public class RocksDBOptions extends OptionHolder {
@@ -239,16 +239,16 @@ public class RocksDBOptions extends OptionHolder {
##### 2.2 扩展自定义分词器
-分词器需要实现接口`com.baidu.hugegraph.analyzer.Analyzer`,以实现一个SpaceAnalyzer空格分词器为例。
+分词器需要实现接口`org.apache.hugegraph.analyzer.Analyzer`,以实现一个 SpaceAnalyzer 空格分词器为例。
```java
-package com.baidu.hugegraph.plugin;
+package org.apache.hugegraph.plugin;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
-import com.baidu.hugegraph.analyzer.Analyzer;
+import org.apache.hugegraph.analyzer.Analyzer;
public class SpaceAnalyzer implements Analyzer {
@@ -262,7 +262,7 @@ public class SpaceAnalyzer implements Analyzer {
#### 3. 实现插件接口,并进行注册
插件注册入口为`HugeGraphPlugin.register()`,自定义插件必须实现该接口方法,在其内部注册上述定义好的扩展项。
-接口`com.baidu.hugegraph.plugin.HugeGraphPlugin`定义如下:
+接口`org.apache.hugegraph.plugin.HugeGraphPlugin`定义如下:
```java
public interface HugeGraphPlugin {
@@ -277,7 +277,7 @@ public interface HugeGraphPlugin {
}
```
-并且HugeGraphPlugin提供了4个静态方法用于注册扩展项:
+并且 HugeGraphPlugin 提供了 4 个静态方法用于注册扩展项:
- registerOptions(String name, String classPath):注册配置项
- registerBackend(String name, String classPath):注册后端(BackendStoreProvider)
@@ -285,10 +285,10 @@ public interface HugeGraphPlugin {
- registerAnalyzer(String name, String classPath):注册分词器
-下面以注册SpaceAnalyzer分词器为例:
+下面以注册 SpaceAnalyzer 分词器为例:
```java
-package com.baidu.hugegraph.plugin;
+package org.apache.hugegraph.plugin;
public class DemoPlugin implements HugeGraphPlugin {
@@ -304,13 +304,13 @@ public class DemoPlugin implements HugeGraphPlugin {
}
```
-#### 4. 配置SPI入口
+#### 4. 配置 SPI 入口
-1. 确保services目录存在:hugegraph-plugin-demo/resources/META-INF/services
-2. 在services目录下建立文本文件:com.baidu.hugegraph.plugin.HugeGraphPlugin
-3. 文件内容如下:com.baidu.hugegraph.plugin.DemoPlugin
+1. 确保 services 目录存在:hugegraph-plugin-demo/resources/META-INF/services
+2. 在 services 目录下建立文本文件:org.apache.hugegraph.plugin.HugeGraphPlugin
+3. 文件内容如下:org.apache.hugegraph.plugin.DemoPlugin
-#### 5. 打Jar包
+#### 5. 打 Jar 包
-通过maven打包,在项目目录下执行命令`mvn package`,在target目录下会生成Jar包文件。
-使用时将该Jar包拷到`plugins`目录,重启服务即可生效。
\ No newline at end of file
+通过 maven 打包,在项目目录下执行命令`mvn package`,在 target 目录下会生成 Jar 包文件。
+使用时将该 Jar 包拷到`plugins`目录,重启服务即可生效。
\ No newline at end of file
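The SPI file names renamed in this hunk (步骤 4 of the plugin guide) boil down to one provider-configuration file under `META-INF/services`. A shell sketch using a temporary directory and the guide's `DemoPlugin` example class (both paths are illustrative):

```shell
# Create the SPI provider file the plugin guide describes:
# the file is named after the interface, its content is the implementation.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/resources/META-INF/services"
echo 'org.apache.hugegraph.plugin.DemoPlugin' \
  > "$ROOT/resources/META-INF/services/org.apache.hugegraph.plugin.HugeGraphPlugin"
cat "$ROOT/resources/META-INF/services/org.apache.hugegraph.plugin.HugeGraphPlugin"
# -> org.apache.hugegraph.plugin.DemoPlugin
```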
diff --git a/content/cn/docs/quickstart/hugegraph-spark.md b/content/cn/docs/quickstart/hugegraph-spark.md
index 6a799fcc..e7afc8bb 100644
--- a/content/cn/docs/quickstart/hugegraph-spark.md
+++ b/content/cn/docs/quickstart/hugegraph-spark.md
@@ -7,7 +7,7 @@ weight: 7
> Note: HugeGraph-Spark 已经停止维护, 不再更新, 请转向使用 hugegraph-computer, 感谢理解
-### 1 HugeGraph-Spark概述 (Deprecated)
+### 1 HugeGraph-Spark 概述 (Deprecated)
HugeGraph-Spark 是一个连接 HugeGraph 和 Spark GraphX 的工具,能够读取 HugeGraph 中的数据并转换成 Spark GraphX 的 RDD,然后执行 GraphX 中的各种图算法。
@@ -75,15 +75,15 @@ HugeGraph-Spark 提供了两种添加配置项的方法:
导入 hugegraph 相关类
```scala
-scala> import com.baidu.hugegraph.spark._
-import com.baidu.hugegraph.spark._
+scala> import org.apache.hugegraph.spark._
+import org.apache.hugegraph.spark._
```
初始化 graph 对象(GraphX RDD),并创建 snapshot
```scala
scala> val graph = sc.hugeGraph("hugegraph", "http://localhost:8080")
-org.apache.spark.graphx.Graph[com.baidu.hugegraph.spark.structure.HugeSparkVertex,com.baidu.hugegraph.spark.structure.HugeSparkEdge] = org.apache.spark.graphx.impl.GraphImpl@1418a1bd
+org.apache.spark.graphx.Graph[org.apache.hugegraph.spark.structure.HugeSparkVertex,org.apache.hugegraph.spark.structure.HugeSparkEdge] = org.apache.spark.graphx.impl.GraphImpl@1418a1bd
```
如果已经配置过`spark.hugegraph.server.url`参数,可以省略第二个参数,直接通过`val graph = sc.hugeGraph("hugegraph")`调用即可。
@@ -117,7 +117,7 @@ sc.makeRDD(top10).join(graph.vertices).collect().foreach(println)
##### PageRank
-PageRank的结果仍为一个图,包含`vertices` 与 `edges`。
+PageRank 的结果仍为一个图,包含`vertices` 与 `edges`。
```scala
val ranks = graph.pageRank(0.0001)
@@ -129,4 +129,4 @@ val ranks = graph.pageRank(0.0001)
val top10 = ranks.vertices.top(10)
```
-更多 GraphX 的 API 请参考 [spark graphx官网](http://spark.apache.org/graphx/)。
+更多 GraphX 的 API 请参考 [spark graphx 官网](http://spark.apache.org/graphx/)。
diff --git a/content/en/docs/clients/restful-api/task.md b/content/en/docs/clients/restful-api/task.md
index 2477de2c..099db720 100644
--- a/content/en/docs/clients/restful-api/task.md
+++ b/content/en/docs/clients/restful-api/task.md
@@ -11,7 +11,7 @@ weight: 13
##### Params
- status: the status of asyncTasks
-- limit:the max number of tasks to return
+- limit: the max number of tasks to return
##### Method & Url
@@ -39,7 +39,7 @@ GET http://localhost:8080/graphs/hugegraph/tasks?status=success
"task_retries": 0,
"id": 2,
"task_type": "gremlin",
- "task_callable": "com.baidu.hugegraph.api.job.GremlinAPI$GremlinJob",
+ "task_callable": "org.apache.hugegraph.api.job.GremlinAPI$GremlinJob",
"task_input": "{\"gremlin\":\"hugegraph.traversal().V()\",\"bindings\":{},\"language\":\"gremlin-groovy\",\"aliases\":{\"hugegraph\":\"graph\"}}"
}]
}
@@ -72,12 +72,12 @@ GET http://localhost:8080/graphs/hugegraph/tasks/2
"task_retries": 0,
"id": 2,
"task_type": "gremlin",
- "task_callable": "com.baidu.hugegraph.api.job.GremlinAPI$GremlinJob",
+ "task_callable": "org.apache.hugegraph.api.job.GremlinAPI$GremlinJob",
"task_input": "{\"gremlin\":\"hugegraph.traversal().V()\",\"bindings\":{},\"language\":\"gremlin-groovy\",\"aliases\":{\"hugegraph\":\"graph\"}}"
}
```
-#### 7.1.3 Delete task information of an async task,**won't delete the task itself**
+#### 7.1.3 Delete task information of an async task, **won't delete the task itself**
##### Method & Url
@@ -93,7 +93,7 @@ DELETE http://localhost:8080/graphs/hugegraph/tasks/2
#### 7.1.4 Cancel an async task, **the task should be able to be canceled**
-If you already created an async task via [Gremlin API](../gremlin) as follows:
+If you already created an async task via [Gremlin API](../gremlin) as follows:
```groovy
"for (int i = 0; i < 10; i++) {" +
@@ -112,7 +112,7 @@ If you already created an async task via [Gremlin API](../gremlin) as follows:
```
PUT http://localhost:8080/graphs/hugegraph/tasks/2?action=cancel
```
-> cancel it in 10s. if more than 10s,the task may already be finished, then can't be cancelled.
+> cancel it in 10s. if more than 10s, the task may already be finished, then can't be cancelled.
##### Response Status
diff --git a/content/en/docs/config/config-authentication.md b/content/en/docs/config/config-authentication.md
index 4343f9a0..a9157914 100644
--- a/content/en/docs/config/config-authentication.md
+++ b/content/en/docs/config/config-authentication.md
@@ -40,8 +40,8 @@ Configure the `authenticator` and its `rest-server` file path in the `gremlin-se
```yaml
authentication: {
- authenticator: com.baidu.hugegraph.auth.StandardAuthenticator,
- authenticationHandler: com.baidu.hugegraph.auth.WsAndHttpBasicAuthHandler,
+ authenticator: org.apache.hugegraph.auth.StandardAuthenticator,
+ authenticationHandler: org.apache.hugegraph.auth.WsAndHttpBasicAuthHandler,
config: {tokens: conf/rest-server.properties}
}
```
@@ -49,7 +49,7 @@ authentication: {
Configure the `authenticator` and `graph_store` information in the `rest-server.properties` configuration file:
```properties
-auth.authenticator=com.baidu.hugegraph.auth.StandardAuthenticator
+auth.authenticator=org.apache.hugegraph.auth.StandardAuthenticator
auth.graph_store=hugegraph
# Auth Client Config
@@ -62,7 +62,7 @@ In the above configuration, the `graph_store` option specifies which graph to us
In the `hugegraph{n}.properties` configuration file, configure the `gremlin.graph` information:
```properties
-gremlin.graph=com.baidu.hugegraph.auth.HugeFactoryAuthProxy
+gremlin.graph=org.apache.hugegraph.auth.HugeFactoryAuthProxy
```
For detailed API calls and explanations regarding permissions, please refer to the [Authentication-API](/docs/clients/restful-api/auth) documentation.
@@ -75,8 +75,8 @@ Configure the `authenticator` and its `rest-server` file path in the `gremlin-se
```yaml
authentication: {
- authenticator: com.baidu.hugegraph.auth.ConfigAuthenticator,
- authenticationHandler: com.baidu.hugegraph.auth.WsAndHttpBasicAuthHandler,
+ authenticator: org.apache.hugegraph.auth.ConfigAuthenticator,
+ authenticationHandler: org.apache.hugegraph.auth.WsAndHttpBasicAuthHandler,
config: {tokens: conf/rest-server.properties}
}
```
@@ -84,7 +84,7 @@ authentication: {
Configure the `authenticator` and its `tokens` information in the `rest-server.properties` configuration file:
```properties
-auth.authenticator=com.baidu.hugegraph.auth.ConfigAuthenticator
+auth.authenticator=org.apache.hugegraph.auth.ConfigAuthenticator
auth.admin_token=token-value-a
auth.user_tokens=[hugegraph1:token-value-1, hugegraph2:token-value-2]
```
@@ -92,9 +92,11 @@ auth.user_tokens=[hugegraph1:token-value-1, hugegraph2:token-value-2]
In the `hugegraph{n}.properties` configuration file, configure the `gremlin.graph` information:
```properties
-gremlin.graph=com.baidu.hugegraph.auth.HugeFactoryAuthProxy
+gremlin.graph=org.apache.hugegraph.auth.HugeFactoryAuthProxy
```
### Custom User Authentication System
-If you need to support a more flexible user system, you can customize the authenticator for extension. Simply implement the `com.baidu.hugegraph.auth.HugeAuthenticator` interface with your custom authenticator, and then modify the `authenticator` configuration item in the configuration file to point to your implementation.
+If you need to support a more flexible user system, you can customize the authenticator for extension.
+Simply implement the `org.apache.hugegraph.auth.HugeAuthenticator` interface with your custom authenticator,
+and then modify the `authenticator` configuration item in the configuration file to point to your implementation.
diff --git a/content/en/docs/config/config-guide.md b/content/en/docs/config/config-guide.md
index 8750c34f..3d826fcd 100644
--- a/content/en/docs/config/config-guide.md
+++ b/content/en/docs/config/config-guide.md
@@ -181,7 +181,7 @@ server.role=master
```properties
# gremlin entrence to create graph
-gremlin.graph=com.baidu.hugegraph.HugeFactory
+gremlin.graph=org.apache.hugegraph.HugeFactory
# cache config
#schema.cache_capacity=100000
diff --git a/content/en/docs/config/config-option.md b/content/en/docs/config/config-option.md
index fce4c070..65bf9141 100644
--- a/content/en/docs/config/config-option.md
+++ b/content/en/docs/config/config-option.md
@@ -15,7 +15,7 @@ Corresponding configuration file `gremlin-server.yaml`
| graphs | hugegraph: conf/hugegraph.properties | The map of graphs with name and config file path. |
| scriptEvaluationTimeout | 30000 | The timeout for gremlin script execution(millisecond). |
| channelizer | org.apache.tinkerpop.gremlin.server.channel.HttpChannelizer | Indicates the protocol which the Gremlin Server provides service. |
-| authentication | authenticator: com.baidu.hugegraph.auth.StandardAuthenticator, config: {tokens: conf/rest-server.properties} | The authenticator and config(contains tokens path) of authentication mechanism. |
+| authentication | authenticator: org.apache.hugegraph.auth.StandardAuthenticator, config: {tokens: conf/rest-server.properties} | The authenticator and config(contains tokens path) of authentication mechanism. |
### Rest Server & API Config Options
@@ -41,10 +41,10 @@ Corresponding configuration file `rest-server.properties`
| batch.max_vertices_per_batch | 500 | The maximum number of vertices submitted per batch. |
| batch.max_write_ratio | 50 | The maximum thread ratio for batch writing, only take effect if the batch.max_write_threads is 0. |
| batch.max_write_threads | 0 | The maximum threads for batch writing, if the value is 0, the actual value will be set to batch.max_write_ratio * restserver.max_worker_threads. |
-| auth.authenticator | | The class path of authenticator implementation. e.g., com.baidu.hugegraph.auth.StandardAuthenticator, or com.baidu.hugegraph.auth.ConfigAuthenticator. |
-| auth.admin_token | 162f7848-0b6d-4faf-b557-3a0797869c55 | Token for administrator operations, only for com.baidu.hugegraph.auth.ConfigAuthenticator. |
-| auth.graph_store | hugegraph | The name of graph used to store authentication information, like users, only for com.baidu.hugegraph.auth.StandardAuthenticator. |
-| auth.user_tokens | [hugegraph:9fd95c9c-711b-415b-b85f-d4df46ba5c31] | The map of user tokens with name and password, only for com.baidu.hugegraph.auth.ConfigAuthenticator. |
+| auth.authenticator | | The class path of authenticator implementation. e.g., org.apache.hugegraph.auth.StandardAuthenticator, or org.apache.hugegraph.auth.ConfigAuthenticator. |
+| auth.admin_token | 162f7848-0b6d-4faf-b557-3a0797869c55 | Token for administrator operations, only for org.apache.hugegraph.auth.ConfigAuthenticator. |
+| auth.graph_store | hugegraph | The name of graph used to store authentication information, like users, only for org.apache.hugegraph.auth.StandardAuthenticator. |
+| auth.user_tokens | [hugegraph:9fd95c9c-711b-415b-b85f-d4df46ba5c31] | The map of user tokens with name and password, only for org.apache.hugegraph.auth.ConfigAuthenticator. |
| auth.audit_log_rate | 1000.0 | The max rate of audit log output per user, default value is 1000 records per second. |
| auth.cache_capacity | 10240 | The max cache capacity of each auth cache item. |
| auth.cache_expire | 600 | The expiration time in seconds of vertex cache. |
@@ -55,11 +55,11 @@ Corresponding configuration file `rest-server.properties`
### Basic Config Options
-Basic Config Options and Backend Config Options correspond to configuration files:{graph-name}.properties,such as `hugegraph.properties`
+Basic Config Options and Backend Config Options correspond to configuration files:{graph-name}.properties, such as `hugegraph.properties`
| config option | default value
| description
[...]
|---------------------------------------|----------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- [...]
-| gremlin.graph | com.baidu.hugegraph.HugeFactory | Gremlin entrance to create graph. [...]
+| gremlin.graph | org.apache.hugegraph.HugeFactory | Gremlin entrance to create graph. [...]
| backend | rocksdb | The data store type, available values are [memory, rocksdb, cassandra, scylladb, hbase, mysql]. [...]
| serializer | binary | The serializer for backend store, available values are [text, binary, cassandra, hbase, mysql]. [...]
| store | hugegraph | The database name like Cassandra Keyspace. [...]
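The Basic Config Options shown in the hunk above translate into `{graph-name}.properties` entries like the following. This is a minimal sketch built only from the defaults listed in the table, using the new package name:

```properties
# Sketch of {graph-name}.properties (e.g. hugegraph.properties) from the table's defaults
gremlin.graph=org.apache.hugegraph.HugeFactory
backend=rocksdb
serializer=binary
store=hugegraph
```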
diff --git a/content/en/docs/guides/custom-plugin.md b/content/en/docs/guides/custom-plugin.md
index 4608b550..af3acb14 100644
--- a/content/en/docs/guides/custom-plugin.md
+++ b/content/en/docs/guides/custom-plugin.md
@@ -48,7 +48,7 @@ The details of maven pom.xml are as follows:
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
- <groupId>com.baidu.hugegraph</groupId>
+ <groupId>org.apache.hugegraph</groupId>
<artifactId>hugegraph-plugin-demo</artifactId>
<version>1.0.0</version>
<packaging>jar</packaging>
@@ -57,7 +57,7 @@ The details of maven pom.xml are as follows:
<dependencies>
<dependency>
- <groupId>com.baidu.hugegraph</groupId>
+ <groupId>org.apache.hugegraph</groupId>
<artifactId>hugegraph-core</artifactId>
<version>${project.version}</version>
</dependency>
@@ -72,8 +72,8 @@ The details of maven pom.xml are as follows:
###### 2.1.1 Implement the interface BackendStoreProvider
-- Realizable interfaces: `com.baidu.hugegraph.backend.store.BackendStoreProvider`
-- Or inherit an abstract class:`com.baidu.hugegraph.backend.store.AbstractBackendStoreProvider`
+- Realizable interfaces: `org.apache.hugegraph.backend.store.BackendStoreProvider`
+- Or inherit an abstract class:`org.apache.hugegraph.backend.store.AbstractBackendStoreProvider`
Take the RocksDB backend RocksDBStoreProvider as an example:
@@ -153,7 +153,7 @@ public interface BackendStore {
###### 2.1.3 Extending custom serializers
-The serializer must inherit the abstract class: `com.baidu.hugegraph.backend.serializer.AbstractSerializer`
+The serializer must inherit the abstract class: `org.apache.hugegraph.backend.serializer.AbstractSerializer`
( `implements GraphSerializer, SchemaSerializer`) The main interface is defined as follows:
```java
@@ -186,7 +186,7 @@ public interface SchemaSerializer {
When adding a custom backend, it may be necessary to add new configuration items. The implementation process mainly includes:
-- Add a configuration item container class and implement the interface `com.baidu.hugegraph.config.OptionHolder`
+- Add a configuration item container class and implement the interface `org.apache.hugegraph.config.OptionHolder`
- Provide a singleton method `public static OptionHolder instance()`, and call the method when the object is initialized `OptionHolder.registerOptions()`
- Add configuration item declaration, single-value configuration item type is `ConfigOption`, multi-value configuration item type is `ConfigListOption`
@@ -242,16 +242,16 @@ public class RocksDBOptions extends OptionHolder {
##### 2.2 Extend custom tokenizer
-The tokenizer needs to implement the interface `com.baidu.hugegraph.analyzer.Analyzer`, take implementing a SpaceAnalyzer space tokenizer as an example.
+The tokenizer needs to implement the interface `org.apache.hugegraph.analyzer.Analyzer`, take implementing a SpaceAnalyzer space tokenizer as an example.
```java
-package com.baidu.hugegraph.plugin;
+package org.apache.hugegraph.plugin;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
-import com.baidu.hugegraph.analyzer.Analyzer;
+import org.apache.hugegraph.analyzer.Analyzer;
public class SpaceAnalyzer implements Analyzer {
@@ -265,7 +265,7 @@ public class SpaceAnalyzer implements Analyzer {
#### 3. Implement the plug-in interface and register it
The plug-in registration entry is `HugeGraphPlugin.register()`, the custom plug-in must implement this interface method, and register the extension
-items defined above inside it. The interface `com.baidu.hugegraph.plugin.HugeGraphPlugin` is defined as follows:
+items defined above inside it. The interface `org.apache.hugegraph.plugin.HugeGraphPlugin` is defined as follows:
```java
public interface HugeGraphPlugin {
@@ -282,16 +282,16 @@ public interface HugeGraphPlugin {
And HugeGraphPlugin provides 4 static methods for registering extensions:
-- registerOptions(String name, String classPath):register configuration items
-- registerBackend(String name, String classPath):register backend (BackendStoreProvider)
-- registerSerializer(String name, String classPath):register serializer
-- registerAnalyzer(String name, String classPath):register tokenizer
+- registerOptions(String name, String classPath): register configuration items
+- registerBackend(String name, String classPath): register backend (BackendStoreProvider)
+- registerSerializer(String name, String classPath): register serializer
+- registerAnalyzer(String name, String classPath): register tokenizer
The following is an example of registering the SpaceAnalyzer tokenizer:
```java
-package com.baidu.hugegraph.plugin;
+package org.apache.hugegraph.plugin;
public class DemoPlugin implements HugeGraphPlugin {
@@ -310,8 +310,8 @@ public class DemoPlugin implements HugeGraphPlugin {
#### 4. Configure SPI entry
1. Make sure the services directory exists: hugegraph-plugin-demo/resources/META-INF/services
-2. Create a text file in the services directory: com.baidu.hugegraph.plugin.HugeGraphPlugin
-3. The content of the file is as follows: com.baidu.hugegraph.plugin.DemoPlugin
+2. Create a text file in the services directory: org.apache.hugegraph.plugin.HugeGraphPlugin
+3. The content of the file is as follows: org.apache.hugegraph.plugin.DemoPlugin
#### 5. Make Jar package
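The SpaceAnalyzer tokenizer renamed in the hunks above can be sketched as a self-contained class. Note the `Analyzer` interface below is a hypothetical stand-in so the sketch compiles without the hugegraph-core dependency; the real interface lives in `org.apache.hugegraph.analyzer` and its signature may differ:

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;

// Hypothetical stand-in for org.apache.hugegraph.analyzer.Analyzer,
// used only so this sketch runs without hugegraph-core on the classpath.
interface Analyzer {
    Set<String> segment(String text);
}

// A whitespace tokenizer in the spirit of the SpaceAnalyzer example above:
// splits the input on single spaces and deduplicates the tokens.
public class SpaceAnalyzer implements Analyzer {
    @Override
    public Set<String> segment(String text) {
        // LinkedHashSet keeps insertion order while removing duplicates
        return new LinkedHashSet<>(Arrays.asList(text.split(" ")));
    }

    public static void main(String[] args) {
        System.out.println(new SpaceAnalyzer().segment("hello graph world"));
        // prints [hello, graph, world]
    }
}
```

In the real plug-in this class would be registered via `registerAnalyzer("demo", SpaceAnalyzer.class.getName())` inside `HugeGraphPlugin.register()`, then exposed through the SPI file described in step 4.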
diff --git a/content/en/docs/quickstart/hugegraph-spark.md b/content/en/docs/quickstart/hugegraph-spark.md
index 6f3f8a3f..2184e093 100644
--- a/content/en/docs/quickstart/hugegraph-spark.md
+++ b/content/en/docs/quickstart/hugegraph-spark.md
@@ -5,7 +5,7 @@ draft: true
weight: 8
---
-### 1 HugeGraph-Spark概述 (Deprecated)
+### 1 HugeGraph-Spark 概述 (Deprecated)
HugeGraph-Spark 是一个连接 HugeGraph 和 Spark GraphX 的工具,能够读取 HugeGraph 中的数据并转换成 Spark GraphX 的 RDD,然后执行 GraphX 中的各种图算法。 (WARNING: Deprecated Now! Use HugeGraph-Computer instead)
@@ -73,15 +73,15 @@ HugeGraph-Spark 提供了两种添加配置项的方法:
导入 hugegraph 相关类
```scala
-scala> import com.baidu.hugegraph.spark._
-import com.baidu.hugegraph.spark._
+scala> import org.apache.hugegraph.spark._
+import org.apache.hugegraph.spark._
```
初始化 graph 对象(GraphX RDD),并创建 snapshot
```scala
scala> val graph = sc.hugeGraph("hugegraph", "http://localhost:8080")
-org.apache.spark.graphx.Graph[com.baidu.hugegraph.spark.structure.HugeSparkVertex,com.baidu.hugegraph.spark.structure.HugeSparkEdge] = org.apache.spark.graphx.impl.GraphImpl@1418a1bd
+org.apache.spark.graphx.Graph[org.apache.hugegraph.spark.structure.HugeSparkVertex,org.apache.hugegraph.spark.structure.HugeSparkEdge] = org.apache.spark.graphx.impl.GraphImpl@1418a1bd
```
如果已经配置过`spark.hugegraph.server.url`参数,可以省略第二个参数,直接通过`val graph = sc.hugeGraph("hugegraph")`调用即可。
@@ -115,7 +115,7 @@ sc.makeRDD(top10).join(graph.vertices).collect().foreach(println)
##### PageRank
-PageRank的结果仍为一个图,包含`vertices` 与 `edges`。
+PageRank 的结果仍为一个图,包含`vertices` 与 `edges`。
```scala
val ranks = graph.pageRank(0.0001)
@@ -127,4 +127,4 @@ val ranks = graph.pageRank(0.0001)
val top10 = ranks.vertices.top(10)
```
-更多 GraphX 的 API 请参考 [spark graphx官网](http://spark.apache.org/graphx/)。
+更多 GraphX 的 API 请参考 [spark graphx 官网](http://spark.apache.org/graphx/)。