This is an automated email from the ASF dual-hosted git repository.

jin pushed a commit to branch enhance-release
in repository https://gitbox.apache.org/repos/asf/incubator-hugegraph-doc.git
commit c2fd43ecbd1ae3d50edcb9dce6ddcd7bfb34ad9c Author: imbajin <[email protected]> AuthorDate: Wed Mar 27 17:28:26 2024 +0800 doc: a string of enhancement for the website/sec page --- .github/PULL_REQUEST_TEMPLATE.md | 18 ++++++ README.md | 2 +- config.toml | 18 +++--- content/cn/docs/clients/restful-api/auth.md | 50 ++++++++--------- content/cn/docs/clients/restful-api/graphs.md | 12 ++-- content/cn/docs/config/config-authentication.md | 9 ++- content/cn/docs/config/config-option.md | 4 +- content/cn/docs/introduction/README.md | 64 ++++++++++++---------- content/cn/docs/quickstart/hugegraph-hubble.md | 19 ++++--- content/en/docs/clients/restful-api/graphs.md | 12 ++-- content/en/docs/config/config-authentication.md | 6 +- content/en/docs/config/config-option.md | 4 +- content/en/docs/introduction/README.md | 33 ++++++----- content/en/docs/quickstart/hugegraph-hubble.md | 24 +++++--- content/en/docs/quickstart/hugegraph-server.md | 16 +++--- contribution.md | 29 +++++----- themes/docsy/layouts/partials/community_links.html | 5 +- 17 files changed, 194 insertions(+), 131 deletions(-) diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md new file mode 100644 index 00000000..63344449 --- /dev/null +++ b/.github/PULL_REQUEST_TEMPLATE.md @@ -0,0 +1,18 @@ +<!-- + Thank you very much for contributing to Apache HugeGraph, we are happy that you want to help us improve it! + + Some tips for you: + 1. If this is your first time to submit PR, please read the + [contributing guidelines](https://github.com/apache/incubator-hugegraph-doc/blob/master/contribution.md) + + 2. If a PR fix/close an issue, type the message "close xxx" below (Remember to update both EN & CN doc) + + 3. Build the website locally after you finish the PR, and check if the changes are correct, THX~ +--> + +## Purpose of the PR + +- close #xxx <!-- or "fix #xxx", "link #xxx" --> + +<!-- Better to paste the screenshot diff here, "xxx" is the ID-link of related issue, e.g: #1024 --> + diff --git a/README.md b/README.md index 3911385a..47120790 100644 --- a/README.md +++ b/README.md @@ -1,4 +1,4 @@ -## Introduction of HugeGraph +## Build/Test/Contribute to website Please visit the [contribution doc](./contribution.md) to get start, include theme/website description & settings~ diff --git a/config.toml b/config.toml index 7f05e278..dc2370ff 100644 --- a/config.toml +++ b/config.toml @@ -250,21 +250,21 @@ enable = false desc = "Unleash your ideas through file configuration" [[params.links.user]] name ="WeChat" - url = "https://github.com/apache/incubator-hugegraph#contact-us" + url = "https://github.com/apache/hugegraph#contact-us" icon = "fa fa-comments" desc = "Follow us on WeChat to get the latest news" [[params.links.developer]] name = "GitHub" - url = "https://github.com/apache/incubator-hugegraph" + url = "https://github.com/apache/hugegraph" icon = "fab fa-github" - desc = "Development takes place here!" 
-#[[params.links.developer]] -# name = "Slack" -# url = "https://example.org/slack" -# icon = "fab fa-slack" -# desc = "Chat with other project developers" + desc = "Development takes place here~" +[[params.links.developer]] + name = "Security mailing list" + url = "mailto:[email protected]" + icon = "fab fa-slack" + desc = "Report SEC problems" [[params.links.developer]] name = "Developer mailing list" url = "../docs/contribution-guidelines/subscribe/" icon = "fa fa-envelope" - desc = "Discuss development issues around the project" + desc = "Discuss community issues around the project" diff --git a/content/cn/docs/clients/restful-api/auth.md b/content/cn/docs/clients/restful-api/auth.md index 2064a202..cee6c309 100644 --- a/content/cn/docs/clients/restful-api/auth.md +++ b/content/cn/docs/clients/restful-api/auth.md @@ -6,13 +6,13 @@ weight: 16 ### 10.1 用户认证与权限控制 -> 开启权限及相关配置请先参考 [权限配置](/docs/config/config-authentication/) 文档 +> 开启权限及相关配置请先参考 [权限配置](/cn/docs/config/config-authentication/) 文档 ##### 用户认证与权限控制概述: -HugeGraph支持多用户认证、以及细粒度的权限访问控制,采用基于“用户-用户组-操作-资源”的4层设计,灵活控制用户角色与权限。 -资源描述了图数据库中的数据,比如符合某一类条件的顶点,每一个资源包括type、label、properties三个要素,共有18种type、 -任意label、任意properties的组合形成的资源,一个资源的内部条件是且关系,多个资源之间的条件是或关系。用户可以属于一个或多个用户组, -每个用户组可以拥有对任意个资源的操作权限,操作类型包括:读、写、删除、执行等种类。 HugeGraph支持动态创建用户、用户组、资源, +HugeGraph 支持多用户认证、以及细粒度的权限访问控制,采用基于“用户 - 用户组 - 操作 - 资源”的 4 层设计,灵活控制用户角色与权限。 +资源描述了图数据库中的数据,比如符合某一类条件的顶点,每一个资源包括 type、label、properties 三个要素,共有 18 种 type、 +任意 label、任意 properties 的组合形成的资源,一个资源的内部条件是且关系,多个资源之间的条件是或关系。用户可以属于一个或多个用户组, +每个用户组可以拥有对任意个资源的操作权限,操作类型包括:读、写、删除、执行等种类。HugeGraph 支持动态创建用户、用户组、资源, 支持动态分配或取消权限。初始化数据库时超级管理员用户被创建,后续可通过超级管理员创建各类角色用户,新创建的用户如果被分配足够权限后,可以由其创建或管理更多的用户。 ##### 举例说明: @@ -21,7 +21,7 @@ city: Beijing}) 描述:用户'boss'拥有对'graph1'图中北京人的读权限。 ##### 接口说明: -用户认证与权限控制接口包括5类:UserAPI、GroupAPI、TargetAPI、BelongAPI、AccessAPI。 +用户认证与权限控制接口包括 5 类:UserAPI、GroupAPI、TargetAPI、BelongAPI、AccessAPI。 ### 10.2 用户(User)API 用户接口包括:创建用户,删除用户,修改用户,和查询用户相关信息接口。 @@ -114,7 +114,7 @@ PUT http://localhost:8080/graphs/hugegraph/auth/users/-63:test ``` ##### Request Body -修改user_name、user_password和user_phone +修改 user_name、user_password 和 user_phone ```json { "user_name": "test", @@ -330,7 +330,7 @@ PUT http://localhost:8080/graphs/hugegraph/auth/groups/-69:grant ``` ##### Request Body -修改group_description +修改 group_description ```json { "group_name": "grant", @@ -424,8 +424,8 @@ GET http://localhost:8080/graphs/hugegraph/auth/groups/-69:all ``` ### 10.4 资源(Target)API -资源描述了图数据库中的数据,比如符合某一类条件的顶点,每一个资源包括type、label、properties三个要素,共有18种type、 -任意label、任意properties的组合形成的资源,一个资源的内部条件是且关系,多个资源之间的条件是或关系。 +资源描述了图数据库中的数据,比如符合某一类条件的顶点,每一个资源包括 type、label、properties 三个要素,共有 18 种 type、 +任意 label、任意 properties 的组合形成的资源,一个资源的内部条件是且关系,多个资源之间的条件是或关系。 资源接口包括:资源的创建、删除、修改和查询。 #### 10.4.1 创建资源 @@ -434,17 +434,17 @@ GET http://localhost:8080/graphs/hugegraph/auth/groups/-69:all - target_name: 资源名称 - target_graph: 资源图 - target_url: 资源地址 -- target_resources: 资源定义(列表) +- target_resources: 资源定义 (列表) -target_resources可以包括多个target_resource,以列表的形式存储。 -每个target_resource包含: -- type:可选值 VERTEX, EDGE等, 可填ALL,则表示可以是顶点或边; +target_resources 可以包括多个 target_resource,以列表的形式存储。 +每个 target_resource 包含: +- type:可选值 VERTEX, EDGE 等,可填 ALL,则表示可以是顶点或边; - label:可选值,⼀个顶点或边类型的名称,可填*,则表示任意类型; -- properties:map类型,可包含多个属性的键值对,必须匹配所有属性值,属性值⽀持填条件范围(age: - P.gte(18)),properties如果为null表示任意属性均可,如果属性名和属性值均为‘*ʼ也表示任意属性均可。 +- properties:map 类型,可包含多个属性的键值对,必须匹配所有属性值,属性值⽀持填条件范围(age: + P.gte(18)),properties 如果为 null 表示任意属性均可,如果属性名和属性值均为‘*ʼ也表示任意属性均可。 如精细资源:"target_resources": 
[{"type":"VERTEX","label":"person","properties":{"city":"Beijing","age":"P.gte(20)"}}]** -资源定义含义:类型是'person'的顶点,且城市属性是'Beijing',年龄属性大于等于20。 +资源定义含义:类型是'person'的顶点,且城市属性是'Beijing',年龄属性大于等于 20。 ##### Request Body @@ -533,7 +533,7 @@ PUT http://localhost:8080/graphs/hugegraph/auth/targets/-77:gremlin ``` ##### Request Body -修改资源定义中的type +修改资源定义中的 type ```json { "target_name": "gremlin", @@ -757,7 +757,7 @@ PUT http://localhost:8080/graphs/hugegraph/auth/belongs/S-63:boss>-82>>S-69:gran ``` ##### Request Body -修改belong_description +修改 belong_description ```json { "belong_description": "update test" @@ -852,10 +852,10 @@ GET http://localhost:8080/graphs/hugegraph/auth/belongs/S-63:boss>-82>>S-69:all ``` ### 10.6 赋权(Access)API -给用户组赋予资源的权限,主要包含:读操作(READ)、写操作(WRITE)、删除操作(DELETE)、执行操作(EXECUTE)等。 +给用户组赋予资源的权限,主要包含:读操作 (READ)、写操作 (WRITE)、删除操作 (DELETE)、执行操作 (EXECUTE) 等。 赋权接口包括:赋权的创建、删除、修改和查询。 -#### 10.6.1 创建赋权(用户组赋予资源的权限) +#### 10.6.1 创建赋权 (用户组赋予资源的权限) ##### Params @@ -865,10 +865,10 @@ GET http://localhost:8080/graphs/hugegraph/auth/belongs/S-63:boss>-82>>S-69:all - access_description: 赋权描述 access_permission: -- READ:读操作,所有的查询,包括查询Schema、查顶点/边,查询顶点和边的数量VERTEX_AGGR/EDGE_AGGR,也包括读图的状态STATUS、变量VAR、任务TASK等; -- WRITE:写操作,所有的创建、更新操作,包括给Schema增加property key,给顶点增加或更新属性等; +- READ:读操作,所有的查询,包括查询 Schema、查顶点/边,查询顶点和边的数量 VERTEX_AGGR/EDGE_AGGR,也包括读图的状态 STATUS、变量 VAR、任务 TASK 等; +- WRITE:写操作,所有的创建、更新操作,包括给 Schema 增加 property key,给顶点增加或更新属性等; - DELETE:删除操作,包括删除元数据、删除顶点/边; -- EXECUTE:执⾏操作,包括执⾏Gremlin语句、执⾏Task、执⾏metadata函数; +- EXECUTE:执⾏操作,包括执⾏ Gremlin 语句、执⾏ Task、执⾏ metadata 函数; ##### Request Body @@ -945,7 +945,7 @@ PUT http://localhost:8080/graphs/hugegraph/auth/accesses/S-69:all>-88>12>S-77:al ``` ##### Request Body -修改access_description +修改 access_description ```json { "access_description": "test" diff --git a/content/cn/docs/clients/restful-api/graphs.md b/content/cn/docs/clients/restful-api/graphs.md index 176c37ee..07327e6f 100644 --- a/content/cn/docs/clients/restful-api/graphs.md +++ b/content/cn/docs/clients/restful-api/graphs.md @@ -93,9 +93,10 @@ gremlin.graph=org.apache.hugegraph.auth.HugeFactoryAuthProxy backend=rocksdb serializer=binary store=hugegraph_clone -rocksdb.data_path=./hg2 -rocksdb.wal_path=./hg2 +rocksdb.data_path=./rks-data-xx +rocksdb.wal_path=./rks-data-xx ``` +> Note: 存储路径不能与现有图相同(使用不同的目录) ##### Response Status @@ -117,7 +118,7 @@ rocksdb.wal_path=./hg2 ##### Method & Url ``` -POST http://localhost:8080/graphs/hugegraph2 +POST http://localhost:8080/graphs/hugegraph-xx ``` ##### Request Body @@ -127,9 +128,10 @@ gremlin.graph=org.apache.hugegraph.auth.HugeFactoryAuthProxy backend=rocksdb serializer=binary store=hugegraph2 -rocksdb.data_path=./hg2 -rocksdb.wal_path=./hg2 +rocksdb.data_path=./rks-data-xx +rocksdb.wal_path=./rks-data-xx ``` +> Note: 存储路径不能与现有图相同(使用不同的目录) ##### Response Status diff --git a/content/cn/docs/config/config-authentication.md b/content/cn/docs/config/config-authentication.md index a76de93f..beb56cbc 100644 --- a/content/cn/docs/config/config-authentication.md +++ b/content/cn/docs/config/config-authentication.md @@ -5,7 +5,9 @@ weight: 3 --- ### 概述 -HugeGraph 为了方便不同用户场景下的鉴权使用,目前内置了完备的`StandardAuthenticator`权限模式,支持多用户认证、以及细粒度的权限访问控制,采用基于“用户 - 用户组 - 操作 - 资源”的 4 层设计,灵活控制用户角色与权限 (支持多 GraphServer) + +HugeGraph 为了方便不同用户场景下的鉴权使用,目前内置了完备的`StandardAuthenticator`权限模式,支持多用户认证、 +以及细粒度的权限访问控制,采用基于“用户 - 用户组 - 操作 - 资源”的 4 层设计,灵活控制用户角色与权限 (支持多 GraphServer) `StandardAuthenticator` 模式的几个核心设计: - 初始化时创建超级管理员 (`admin`) 用户,后续通过超级管理员创建其它用户,新创建的用户被分配足够权限后,可以创建或管理更多的用户 @@ -22,7 +24,10 @@ 
user(name=xx) -belong-> group(name=xx) -access(read)-> target(graph=graph1, reso ### 配置用户认证 -HugeGraph 默认**不启用**用户认证功能,需通过修改配置文件来启用该功能。内置实现了`StandardAuthenticator`模式,该模式支持多用户认证与细粒度权限控制。此外,开发者可以自定义实现`HugeAuthenticator`接口来对接自身的权限系统。 +HugeGraph 目前默认**未启用**用户认证功能,需通过修改配置文件来启用该功能。(Note: 如果在生产环境/外网使用, +请使用 **Java11** 版本 + 开启权限避免安全相关隐患) + +目前已内置实现了`StandardAuthenticator`模式,该模式支持多用户认证与细粒度权限控制。此外,开发者可以自定义实现`HugeAuthenticator`接口来对接自身的权限系统。 用户认证方式均采用 [HTTP Basic Authentication](https://zh.wikipedia.org/wiki/HTTP%E5%9F%BA%E6%9C%AC%E8%AE%A4%E8%AF%81) ,简单说就是在发送 HTTP 请求时在 `Authentication` 设置选择 `Basic` 然后输入对应的用户名和密码,对应 HTTP 明文如下所示 : diff --git a/content/cn/docs/config/config-option.md b/content/cn/docs/config/config-option.md index a18be14b..9aaa8bec 100644 --- a/content/cn/docs/config/config-option.md +++ b/content/cn/docs/config/config-option.md @@ -181,8 +181,8 @@ weight: 2 | backend | | Must be set to `rocksdb`. [...] | serializer | | Must be set to `binary`. [...] | rocksdb.data_disks | [] | The optimized disks for storing data of RocksDB. The format of each element: `STORE/TABLE: /path/disk`.Allowed keys are [g/vertex, g/edge_out, g/edge_in, g/vertex_label_index, g/edge_label_index, g/range_int_index, g/range_float_index, g/range_long_index, g/range_double_index, g/secondary_index, g/search [...] -| rocksdb.data_path | rocksdb-data | The path for storing data of RocksDB. [...] -| rocksdb.wal_path | rocksdb-data | The path for storing WAL of RocksDB. [...] +| rocksdb.data_path | rocksdb-data/data | The path for storing data of RocksDB. [...] +| rocksdb.wal_path | rocksdb-data/wal | The path for storing WAL of RocksDB. [...] | rocksdb.allow_mmap_reads | false | Allow the OS to mmap file for reading sst tables. [...] | rocksdb.allow_mmap_writes | false | Allow the OS to mmap file for writing. [...] | rocksdb.block_cache_capacity | 8388608 | The amount of block cache in bytes that will be used by RocksDB, 0 means no block cache. [...] 
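For reference, a minimal sketch of how the pieces documented above fit together — HTTP Basic Authentication plus the create-graph request from the graphs.md section, with RocksDB paths that do not collide with an existing graph. It assumes a local HugeGraph-Server on port 8080 with `StandardAuthenticator` enabled; the `admin` password, the graph name `hugegraph_demo`, the `./rks-data-demo` directory and the `text/plain` content type are illustrative assumptions, not values taken from this commit.

```bash
# List graphs using HTTP Basic Authentication (curl builds the Authorization header from -u);
# credentials below are placeholders.
curl -s -u admin:pa55word "http://localhost:8080/graphs"

# Create a new graph; its rocksdb.data_path / rocksdb.wal_path must point to directories
# that no existing graph already uses (see the note in graphs.md above).
curl -s -u admin:pa55word -X POST "http://localhost:8080/graphs/hugegraph_demo" \
     -H "Content-Type: text/plain" \
     --data-binary @- <<'EOF'
gremlin.graph=org.apache.hugegraph.auth.HugeFactoryAuthProxy
backend=rocksdb
serializer=binary
store=hugegraph_demo
rocksdb.data_path=./rks-data-demo
rocksdb.wal_path=./rks-data-demo
EOF
```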
diff --git a/content/cn/docs/introduction/README.md b/content/cn/docs/introduction/README.md index 20200689..5a89a1b5 100644 --- a/content/cn/docs/introduction/README.md +++ b/content/cn/docs/introduction/README.md @@ -6,59 +6,65 @@ weight: 1 ### Summary -HugeGraph是一款易用、高效、通用的开源图数据库系统(Graph Database,[GitHub项目地址](https://github.com/apache/hugegraph)), +Apache HugeGraph 是一款易用、高效、通用的开源图数据库系统(Graph Database,[GitHub 项目地址](https://github.com/apache/hugegraph)), 实现了[Apache TinkerPop3](https://tinkerpop.apache.org)框架及完全兼容[Gremlin](https://tinkerpop.apache.org/gremlin.html)查询语言, -具备完善的工具链组件,助力用户轻松构建基于图数据库之上的应用和产品。HugeGraph支持百亿以上的顶点和边快速导入,并提供毫秒级的关联关系查询能力(OLTP), +具备完善的工具链组件,助力用户轻松构建基于图数据库之上的应用和产品。HugeGraph 支持百亿以上的顶点和边快速导入,并提供毫秒级的关联关系查询能力(OLTP), 并支持大规模分布式图分析(OLAP)。 -HugeGraph典型应用场景包括深度关系探索、关联分析、路径搜索、特征抽取、数据聚类、社区检测、 -知识图谱等,适用业务领域有如网络安全、电信诈骗、金融风控、广告推荐、社交网络和智能机器人等。 +HugeGraph 典型应用场景包括深度关系探索、关联分析、路径搜索、特征抽取、数据聚类、社区检测、知识图谱等, +适用业务领域有如网络安全、电信诈骗、金融风控、广告推荐、社交网络和智能机器人等。 本系统的主要应用场景是解决反欺诈、威胁情报、黑产打击等业务的图数据存储和建模分析需求,在此基础上逐步扩展及支持了更多的通用图应用。 ### Features -HugeGraph支持在线及离线环境下的图操作,支持批量导入数据,支持高效的复杂关联关系分析,并且能够与大数据平台无缝集成。 -HugeGraph支持多用户并行操作,用户可输入Gremlin查询语句,并及时得到图查询结果,也可在用户程序中调用HugeGraph API进行图分析或查询。 +HugeGraph 支持在线及离线环境下的图操作,支持批量导入数据,支持高效的复杂关联关系分析,并且能够与大数据平台无缝集成。 +HugeGraph 支持多用户并行操作,用户可输入 Gremlin 查询语句,并及时得到图查询结果,也可在用户程序中调用 HugeGraph API 进行图分析或查询。 本系统具备如下特点: -- 易用:HugeGraph支持Gremlin图查询语言与RESTful API,同时提供图检索常用接口,具备功能齐全的周边工具,轻松实现基于图的各种查询分析运算。 -- 高效:HugeGraph在图存储和图计算方面做了深度优化,提供多种批量导入工具,轻松完成百亿级数据快速导入,通过优化过的查询达到图检索的毫秒级响应。支持数千用户并发的在线实时操作。 -- 通用:HugeGraph支持Apache Gremlin标准图查询语言和Property Graph标准图建模方法,支持基于图的OLTP和OLAP方案。集成Apache Hadoop及Apache Spark大数据平台。 +- 易用:HugeGraph 支持 Gremlin 图查询语言与 RESTful API,同时提供图检索常用接口,具备功能齐全的周边工具,轻松实现基于图的各种查询分析运算。 +- 高效:HugeGraph 在图存储和图计算方面做了深度优化,提供多种批量导入工具,轻松完成百亿级数据快速导入,通过优化过的查询达到图检索的毫秒级响应。支持数千用户并发的在线实时操作。 +- 通用:HugeGraph 支持 Apache Gremlin 标准图查询语言和 Property Graph 标准图建模方法,支持基于图的 OLTP 和 OLAP 方案。集成 Apache Hadoop 及 Apache Spark 大数据平台。 - 可扩展:支持分布式存储、数据多副本及横向扩容,内置多种后端存储引擎,也可插件式轻松扩展后端存储引擎。 -- 开放:HugeGraph代码开源(Apache 2 License),客户可自主修改定制,选择性回馈开源社区。 +- 开放:HugeGraph 代码开源(Apache 2 License),客户可自主修改定制,选择性回馈开源社区。 本系统的功能包括但不限于: -- 支持从多数据源批量导入数据(包括本地文件、HDFS文件、MySQL数据库等数据源),支持多种文件格式导入(包括TXT、CSV、JSON等格式) +- 支持从多数据源批量导入数据 (包括本地文件、HDFS 文件、MySQL 数据库等数据源),支持多种文件格式导入 (包括 TXT、CSV、JSON 等格式) - 具备可视化操作界面,可用于操作、分析及展示图,降低用户使用门槛 -- 优化的图接口:最短路径(Shortest Path)、K步连通子图(K-neighbor)、K步到达邻接点(K-out)、个性化推荐算法PersonalRank等 -- 基于Apache TinkerPop3框架实现,支持Gremlin图查询语言 +- 优化的图接口:最短路径 (Shortest Path)、K 步连通子图 (K-neighbor)、K 步到达邻接点 (K-out)、个性化推荐算法 PersonalRank 等 +- 基于 Apache TinkerPop3 框架实现,支持 Gremlin 图查询语言 - 支持属性图,顶点和边均可添加属性,支持丰富的属性类型 -- 具备独立的Schema元数据信息,拥有强大的图建模能力,方便第三方系统集成 -- 支持多顶点ID策略:支持主键ID、支持自动生成ID、支持用户自定义字符串ID、支持用户自定义数字ID +- 具备独立的 Schema 元数据信息,拥有强大的图建模能力,方便第三方系统集成 +- 支持多顶点 ID 策略:支持主键 ID、支持自动生成 ID、支持用户自定义字符串 ID、支持用户自定义数字 ID - 可以对边和顶点的属性建立索引,支持精确查询、范围查询、全文检索 -- 存储系统采用插件方式,支持RocksDB、Cassandra、ScyllaDB、HBase、MySQL、PostgreSQL、Palo以及InMemory等 -- 与Hadoop、Spark GraphX等大数据系统集成,支持Bulk Load操作 -- 支持高可用HA、数据多副本、备份恢复、监控等 +- 存储系统采用插件方式,支持 RocksDB(单机/集群)、Cassandra、ScyllaDB、HBase、MySQL、PostgreSQL、Palo 以及 Memory 等 +- 与 HDFS、Spark/Flink、GraphX 等大数据系统集成,支持 BulkLoad 操作导入海量数据 +- 支持高可用 HA、数据多副本、备份恢复、监控、分布式 Trace 等 ### Modules -- [HugeGraph-Server](/docs/quickstart/hugegraph-server): HugeGraph-Server是HugeGraph项目的核心部分,包含Core、Backend、API等子模块; - - Core:图引擎实现,向下连接Backend模块,向上支持API模块; - - Backend:实现将图数据存储到后端,支持的后端包括:Memory、Cassandra、ScyllaDB、RocksDB、HBase、MySQL及PostgreSQL,用户根据实际情况选择一种即可; - - API:内置REST Server,向用户提供RESTful API,同时完全兼容Gremlin查询。 -- 
[HugeGraph-Client](/docs/quickstart/hugegraph-client):HugeGraph-Client提供了RESTful API的客户端,用于连接HugeGraph-Server,目前仅实现Java版,其他语言用户可自行实现; -- [HugeGraph-Loader](/docs/quickstart/hugegraph-loader):HugeGraph-Loader是基于HugeGraph-Client的数据导入工具,将普通文本数据转化为图形的顶点和边并插入图形数据库中; -- [HugeGraph-Computer](/docs/quickstart/hugegraph-computer):HugeGraph-Computer 是分布式图处理系统 (OLAP). 它是 [Pregel](https://kowshik.github.io/JPregel/pregel_paper.pdf) 的一个实现. 它可以运行在 Kubernetes 上; -- [HugeGraph-Hubble](/docs/quickstart/hugegraph-hubble):HugeGraph-Hubble是HugeGraph的Web可视化管理平台,一站式可视化分析平台,平台涵盖了从数据建模,到数据快速导入,再到数据的在线、离线分析、以及图的统一管理的全过程; -- [HugeGraph-Tools](/docs/quickstart/hugegraph-tools):HugeGraph-Tools是HugeGraph的部署和管理工具,包括管理图、备份/恢复、Gremlin执行等功能。 +- [HugeGraph-Server](/cn/docs/quickstart/hugegraph-server): HugeGraph-Server 是 HugeGraph 项目的核心部分,包含 Core、Backend、API 等子模块; + - Core:图引擎实现,向下连接 Backend 模块,向上支持 API 模块; + - Backend:实现将图数据存储到后端,支持的后端包括:Memory、Cassandra、ScyllaDB、RocksDB、HBase、MySQL 及 PostgreSQL,用户根据实际情况选择一种即可; + - API:内置 REST Server,向用户提供 RESTful API,同时完全兼容 Gremlin 查询。(支持分布式存储和计算下推) +- [HugeGraph-Toolchain](https://github.com/apache/hugegraph-toolchain): (工具链) + - [HugeGraph-Client](/cn/docs/quickstart/hugegraph-client):HugeGraph-Client 提供了 RESTful API 的客户端,用于连接 HugeGraph-Server,目前仅实现 Java 版,其他语言用户可自行实现; + - [HugeGraph-Loader](/cn/docs/quickstart/hugegraph-loader):HugeGraph-Loader 是基于 HugeGraph-Client 的数据导入工具,将普通文本数据转化为图形的顶点和边并插入图形数据库中; + - [HugeGraph-Hubble](/cn/docs/quickstart/hugegraph-hubble):HugeGraph-Hubble 是 HugeGraph 的 Web +可视化管理平台,一站式可视化分析平台,平台涵盖了从数据建模,到数据快速导入,再到数据的在线、离线分析、以及图的统一管理的全过程; + - [HugeGraph-Tools](/cn/docs/quickstart/hugegraph-tools):HugeGraph-Tools 是 HugeGraph 的部署和管理工具,包括管理图、备份/恢复、Gremlin 执行等功能。 +- [HugeGraph-Computer](/cn/docs/quickstart/hugegraph-computer):HugeGraph-Computer 是分布式图处理系统 (OLAP). + 它是 [Pregel](https://kowshik.github.io/JPregel/pregel_paper.pdf) 的一个实现。它可以运行在 Kubernetes/Yarn + 等集群上,支持超大规模图计算。 +- [HugeGraph-AI(Beta)](/cn/docs/quickstart/hugegraph-ai):HugeGraph-AI 是 HugeGraph 独立的 AI + 组件,提供了图神经网络的训练和推理功能,LLM/Graph RAG 结合/Python-Client 等相关组件,持续更新 ing。 ### Contact Us - [GitHub Issues](https://github.com/apache/incubator-hugegraph/issues): 使用途中出现问题或提供功能性建议,可通过此反馈 (推荐) -- 邮件反馈: [[email protected]](mailto:[email protected]) ([邮箱订阅方式](https://hugegraph.apache.org/docs/contribution-guidelines/subscribe/)) -- 微信公众号: Apache HugeGraph, 欢迎扫描下方二维码加入我们! +- 邮件反馈:[[email protected]](mailto:[email protected]) ([邮箱订阅方式](https://hugegraph.apache.org/docs/contribution-guidelines/subscribe/)) +- 微信公众号:Apache HugeGraph, 欢迎扫描下方二维码加入我们! 
<img src="https://github.com/apache/incubator-hugegraph-doc/blob/master/assets/images/wechat.png?raw=true" alt="QR png" width="300"/> diff --git a/content/cn/docs/quickstart/hugegraph-hubble.md b/content/cn/docs/quickstart/hugegraph-hubble.md index 2143083f..f2f1c4ba 100644 --- a/content/cn/docs/quickstart/hugegraph-hubble.md +++ b/content/cn/docs/quickstart/hugegraph-hubble.md @@ -20,10 +20,6 @@ HugeGraph 是一款面向分析型,支持批量操作的图数据库系统, 元数据建模模块通过创建属性库,顶点类型,边类型,索引类型,实现图模型的构建与管理,平台提供两种模式,列表模式和图模式,可实时展示元数据模型,更加直观。同时还提供了跨图的元数据复用功能,省去相同元数据繁琐的重复创建过程,极大地提升建模效率,增强易用性。 -##### 数据导入 - -数据导入是将用户的业务数据转化为图的顶点和边并插入图数据库中,平台提供了向导式的可视化导入模块,通过创建导入任务,实现导入任务的管理及多个导入任务的并行运行,提高导入效能。进入导入任务后,只需跟随平台步骤提示,按需上传文件,填写内容,就可轻松实现图数据的导入过程,同时支持断点续传,错误重试机制等,降低导入成本,提升效率。 - ##### 图分析 通过输入图遍历语言 Gremlin 可实现图数据的高性能通用分析,并提供顶点的定制化多维路径查询等功能,提供 3 种图结果展示方式,包括:图形式、表格形式、Json 形式,多维度展示数据形态,满足用户使用的多种场景需求。提供运行记录及常用语句收藏等功能,实现图操作的可追溯,以及查询输入的复用共享,快捷高效。支持图数据的导出,导出格式为 Json 格式。 @@ -32,6 +28,14 @@ HugeGraph 是一款面向分析型,支持批量操作的图数据库系统, 对于需要遍历全图的 Gremlin 任务,索引的创建与重建等耗时较长的异步任务,平台提供相应的任务管理功能,实现异步任务的统一的管理与结果查看。 +##### 数据导入 (BETA) + +> 注: 数据导入功能目前适合初步试用,正式数据导入请使用 [hugegraph-loader](/cn/docs/quickstart/hugegraph-loader), 性能/稳定性/功能全面许多 + +数据导入是将用户的业务数据转化为图的顶点和边并插入图数据库中,平台提供了向导式的可视化导入模块,通过创建导入任务, +实现导入任务的管理及多个导入任务的并行运行,提高导入效能。进入导入任务后,只需跟随平台步骤提示,按需上传文件,填写内容, +就可轻松实现图数据的导入过程,同时支持断点续传,错误重试机制等,降低导入成本,提升效率。 + ### 2 部署 有三种方式可以部署`hugegraph-hubble` @@ -70,7 +74,7 @@ services: > 注意: > -> 1. `hugegraph-hubble` 的 docker 镜像是一个便捷发布版本,用于快速测试试用 hubble,并非**ASF官方发布物料包的方式**。你可以从 [ASF Release Distribution Policy](https://infra.apache.org/release-distribution.html#dockerhub) 中得到更多细节。 +> 1. `hugegraph-hubble` 的 docker 镜像是一个便捷发布版本,用于快速测试试用 hubble,并非**ASF 官方发布物料包的方式**。你可以从 [ASF Release Distribution Policy](https://infra.apache.org/release-distribution.html#dockerhub) 中得到更多细节。 > > 2. 
**生产环境**推荐使用 `release tag`(如 `1.2.0`) 稳定版。使用 `latest` tag 默认对应 master > 最新代码。 @@ -138,8 +142,9 @@ git clone https://github.com/apache/hugegraph-toolchain.git cd incubator-hugegraph-toolchain sudo pip install -r hugegraph-hubble/hubble-dist/assembly/travis/requirements.txt mvn install -pl hugegraph-client,hugegraph-loader -am -Dmaven.javadoc.skip=true -DskipTests -ntp + cd hugegraph-hubble -mvn -e compile package -Dmaven.javadoc.skip=true -Dmaven.test.skip=true -ntp +mvn -e package -Dmaven.javadoc.skip=true -Dmaven.test.skip=true -ntp cd apache-hugegraph-hubble-incubating* ``` @@ -308,7 +313,7 @@ bin/start-hubble.sh -d #### 4.3 数据导入 -> **注意**:目前推荐使用 [hugegraph-loader](/cn/docs/quickstart/hugegraph-loader) 进行正式数据导入, hubble 内置的导入用来做**测试**和**简单上手** +> **注意**:目前推荐使用 [hugegraph-loader](/cn/docs/quickstart/hugegraph-loader) 进行正式数据导入,hubble 内置的导入用来做**测试**和**简单上手** 数据导入的使用流程如下: diff --git a/content/en/docs/clients/restful-api/graphs.md b/content/en/docs/clients/restful-api/graphs.md index 913a1d49..ac2508cb 100644 --- a/content/en/docs/clients/restful-api/graphs.md +++ b/content/en/docs/clients/restful-api/graphs.md @@ -99,10 +99,12 @@ gremlin.graph=org.apache.hugegraph.auth.HugeFactoryAuthProxy backend=rocksdb serializer=binary store=hugegraph_clone -rocksdb.data_path=./hg2 -rocksdb.wal_path=./hg2 +rocksdb.data_path=./rks-data-xx +rocksdb.wal_path=./rks-data-xx ``` +> Note: the data/wal_path can't be the same as the existing graph (use separate directories) + ##### Response Status ```javascript @@ -133,10 +135,12 @@ gremlin.graph=org.apache.hugegraph.auth.HugeFactoryAuthProxy backend=rocksdb serializer=binary store=hugegraph2 -rocksdb.data_path=./hg2 -rocksdb.wal_path=./hg2 +rocksdb.data_path=./rks-data-xx +rocksdb.wal_path=./rks-data-xx ``` +> Note: the data/wal_path can't be the same as the existing graph (use separate directories) + ##### Response Status ```javascript diff --git a/content/en/docs/config/config-authentication.md b/content/en/docs/config/config-authentication.md index cc7b3a1a..64aed02a 100644 --- a/content/en/docs/config/config-authentication.md +++ b/content/en/docs/config/config-authentication.md @@ -24,7 +24,11 @@ user(name=xx) -belong-> group(name=xx) -access(read)-> target(graph=graph1, reso ### Configure User Authentication -By default, HugeGraph does **not enable** user authentication. You need to modify the configuration file to enable this feature. HugeGraph provides built-in authentication mode: `StandardAuthenticator`. This mode supports multi-user authentication and fine-grained permission control. Additionally, developers can implement their own `HugeAuthenticator` interface to integrate with their existing authentication systems. +By default, HugeGraph does **not enable** user authentication, and it needs to be enabled by +modifying the configuration file (Note: If used in a production environment or over the internet, +please use a **Java11** version and enable **auth-system** to avoid security risks.)" + +You need to modify the configuration file to enable this feature. HugeGraph provides built-in authentication mode: `StandardAuthenticator`. This mode supports multi-user authentication and fine-grained permission control. Additionally, developers can implement their own `HugeAuthenticator` interface to integrate with their existing authentication systems. HugeGraph authentication modes adopt [HTTP Basic Authentication](https://en.wikipedia.org/wiki/Basic_access_authentication). 
In simple terms, when sending an HTTP request, you need to set the `Authentication` header to `Basic` and provide the corresponding username and password. The corresponding HTTP plaintext format is as follows: diff --git a/content/en/docs/config/config-option.md b/content/en/docs/config/config-option.md index 65bf9141..2b986108 100644 --- a/content/en/docs/config/config-option.md +++ b/content/en/docs/config/config-option.md @@ -181,8 +181,8 @@ Other options are consistent with the Cassandra backend. | backend | | Must be set to `rocksdb`. [...] | serializer | | Must be set to `binary`. [...] | rocksdb.data_disks | [] | The optimized disks for storing data of RocksDB. The format of each element: `STORE/TABLE: /path/disk`.Allowed keys are [g/vertex, g/edge_out, g/edge_in, g/vertex_label_index, g/edge_label_index, g/range_int_index, g/range_float_index, g/range_long_index, g/range_double_index, g/secondary_index, g/search [...] -| rocksdb.data_path | rocksdb-data | The path for storing data of RocksDB. [...] -| rocksdb.wal_path | rocksdb-data | The path for storing WAL of RocksDB. [...] +| rocksdb.data_path | rocksdb-data/data | The path for storing data of RocksDB. [...] +| rocksdb.wal_path | rocksdb-data/wal | The path for storing WAL of RocksDB. [...] | rocksdb.allow_mmap_reads | false | Allow the OS to mmap file for reading sst tables. [...] | rocksdb.allow_mmap_writes | false | Allow the OS to mmap file for writing. [...] | rocksdb.block_cache_capacity | 8388608 | The amount of block cache in bytes that will be used by RocksDB, 0 means no block cache. [...] diff --git a/content/en/docs/introduction/README.md b/content/en/docs/introduction/README.md index 05d12d0f..c8aabaed 100644 --- a/content/en/docs/introduction/README.md +++ b/content/en/docs/introduction/README.md @@ -6,8 +6,8 @@ weight: 1 ### Summary -HugeGraph is an easy-to-use, efficient, general-purpose open source graph database system(Graph Database, [GitHub project address](https://github.com/hugegraph/hugegraph)), -implemented the [Apache TinkerPop3](https://tinkerpop.apache.org) framework and is fully compatible with the [Gremlin](https://tinkerpop.apache.org/gremlin.html) query language, +Apache HugeGraph is an easy-to-use, efficient, general-purpose open source graph database system +(Graph Database, [GitHub project address](https://github.com/hugegraph/hugegraph)), implemented the [Apache TinkerPop3](https://tinkerpop.apache.org) framework and is fully compatible with the [Gremlin](https://tinkerpop.apache.org/gremlin.html) query language, With complete toolchain components, it helps users easily build applications and products based on graph databases. HugeGraph supports fast import of more than 10 billion vertices and edges, and provides millisecond-level relational query capability (OLTP). It supports large-scale distributed graph computing (OLAP). @@ -36,21 +36,26 @@ The functions of this system include but are not limited to: - Has independent schema metadata information, has powerful graph modeling capabilities, and facilitates third-party system integration - Support multi-vertex ID strategy: support primary key ID, support automatic ID generation, support user-defined string ID, support user-defined digital ID - The attributes of edges and vertices can be indexed to support precise query, range query, and full-text search -- The storage system adopts plug-in mode, supporting RocksDB, Cassandra, ScyllaDB, HBase, MySQL, PostgreSQL, Palo, and InMemory, etc. 
-- Integrate with big data systems such as Hadoop and Spark GraphX, and support Bulk Load operations -- Support high availability HA, multiple copies of data, backup recovery, monitoring, etc. +- The storage system adopts a plug-in method, supporting RocksDB (standalone/cluster), Cassandra, ScyllaDB, HBase, MySQL, PostgreSQL, Palo and Memory, etc. +- Integrated with big data systems such as HDFS, Spark/Flink, GraphX, etc., supports BulkLoad operation to import massive data. +- Supports HA(high availability), multiple data replicas, backup and recovery, monitoring, distributed Trace, etc. ### Modules -- [HugeGraph-Server](/docs/quickstart/hugegraph-server): HugeGraph-Server is the core part of the HugeGraph project, including submodules such as Core, Backend, and API; - - Core: Graph engine implementation, connecting the Backend module downward and supporting the API module upward; - - Backend: Realize the storage of graph data to the backend. The supported backends include: Memory, Cassandra, ScyllaDB, RocksDB, HBase, MySQL, and PostgreSQL. Users can choose one according to the actual situation; - - API: Built-in REST Server, provides RESTful API to users, and is fully compatible with Gremlin query. -- [HugeGraph-Client](/docs/quickstart/hugegraph-client): HugeGraph-Client provides a RESTful API client for connecting to HugeGraph-Server. Currently, only Java version is implemented. Users of other languages can implement it by themselves; -- [HugeGraph-Loader](/docs/quickstart/hugegraph-loader): HugeGraph-Loader is a data import tool based on HugeGraph-Client, which converts ordinary text data into graph vertices and edges and inserts them into graph database; -- [HugeGraph-Computer](/docs/quickstart/hugegraph-computer): HugeGraph-Computer is a distributed graph processing system for HugeGraph (OLAP). It is an implementation of [Pregel](https://kowshik.github.io/JPregel/pregel_paper.pdf). It runs on the Kubernetes framework; -- [HugeGraph-Hubble](/docs/quickstart/hugegraph-hubble): HugeGraph-Hubble is HugeGraph's web visualization management platform, a one-stop visual analysis platform. The platform covers the whole process from data modeling, to rapid data import, to online and offline analysis of data, and unified management of graphs; -- [HugeGraph-Tools](/docs/quickstart/hugegraph-tools): HugeGraph-Tools is HugeGraph's deployment and management tools, including functions such as managing graphs, backup/restore, Gremlin execution, etc. +- [HugeGraph-Server](/docs/quickstart/hugegraph-server): HugeGraph-Server is the core part of the HugeGraph project, containing Core, Backend, API and other submodules; + - Core: Implements the graph engine, connects to the Backend module downwards, and supports the API module upwards; + - Backend: Implements the storage of graph data to the backend, supports backends including: Memory, Cassandra, ScyllaDB, RocksDB, HBase, MySQL and PostgreSQL, users can choose one according to the actual situation; + - API: Built-in REST Server, provides RESTful API to users, and is fully compatible with Gremlin queries. 
(Supports distributed storage and computation pushdown) +- [HugeGraph-Toolchain](https://github.com/apache/hugegraph-toolchain): (Toolchain) + - [HugeGraph-Client](/docs/quickstart/hugegraph-client): HugeGraph-Client provides a RESTful API client for connecting to HugeGraph-Server, currently only the Java version is implemented, users of other languages can implement it themselves; + - [HugeGraph-Loader](/docs/quickstart/hugegraph-loader): HugeGraph-Loader is a data import tool based on HugeGraph-Client, which transforms ordinary text data into vertices and edges of the graph and inserts them into the graph database; + - [HugeGraph-Hubble](/docs/quickstart/hugegraph-hubble): HugeGraph-Hubble is HugeGraph's Web +visualization management platform, a one-stop visualization analysis platform, the platform covers the whole process from data modeling, to fast data import, to online and offline analysis of data, and unified management of the graph; + - [HugeGraph-Tools](/docs/quickstart/hugegraph-tools): HugeGraph-Tools is HugeGraph's deployment and management tool, including graph management, backup/recovery, Gremlin execution and other functions. +- [HugeGraph-Computer](/docs/quickstart/hugegraph-computer): HugeGraph-Computer is a distributed graph processing system (OLAP). + It is an implementation of [Pregel](https://kowshik.github.io/JPregel/pregel_paper.pdf). It can run on clusters such as Kubernetes/Yarn, and supports large-scale graph computing. +- [HugeGraph-AI(Beta)](/docs/quickstart/hugegraph-ai): HugeGraph-AI is HugeGraph's independent AI + component, providing training and inference functions of graph neural networks, LLM/Graph RAG combination/Python-Client and other related components, continuously updating. ### Contact Us diff --git a/content/en/docs/quickstart/hugegraph-hubble.md b/content/en/docs/quickstart/hugegraph-hubble.md index 97d6ffc3..a86b9e60 100644 --- a/content/en/docs/quickstart/hugegraph-hubble.md +++ b/content/en/docs/quickstart/hugegraph-hubble.md @@ -20,10 +20,6 @@ The graph management module realizes the unified management of multiple graphs a The metadata modeling module realizes the construction and management of graph models by creating attribute libraries, vertex types, edge types, and index types. The platform provides two modes, list mode and graph mode, which can display the metadata model in real time, which is more intuitive. At the same time, it also provides a metadata reuse function across graphs, which saves the tedious and repetitive creation process of the same metadata, greatly improves modeling efficiency and [...] -##### Data Import - -Data import is to convert the user's business data into the vertices and edges of the graph and insert it into the graph database. The platform provides a wizard-style visual import module. By creating import tasks, the management of import tasks and the parallel operation of multiple import tasks are realized. Improve import performance. After entering the import task, you only need to follow the platform step prompts, upload files as needed, and fill in the content to easily implement [...] - ##### Graph Analysis By inputting the graph traversal language Gremlin, high-performance general analysis of graph data can be realized, and functions such as customized multidimensional path query of vertices can be provided, and three kinds of graph result display methods are provided, including: graph form, table form, Json form, and multidimensional display. 
The data form meets the needs of various scenarios used by users. It provides functions such as running records and collection of common statements, [...] @@ -32,6 +28,20 @@ By inputting the graph traversal language Gremlin, high-performance general anal For Gremlin tasks that need to traverse the whole graph, index creation and reconstruction and other time-consuming asynchronous tasks, the platform provides corresponding task management functions to achieve unified management and result viewing of asynchronous tasks. +##### Data Import + +> "Note: The data import function is currently suitable for preliminary use. For formal data import, +> please use [hugegraph-loader](/docs/quickstart/hugegraph-loader), which has much better performance, stability, and functionality." + +Data import is to convert the user's business data into the vertices and edges of the graph and +insert it into the graph database. The platform provides a wizard-style visual import module. +By creating import tasks, the management of import tasks and the parallel operation of multiple +import tasks are realized. Improve import performance. After entering the import task, you only +need to follow the platform step prompts, upload files as needed, and fill in the content to easily +implement the import process of graph data. At the same time, it supports breakpoint resuming, +error retry mechanism, etc., which reduces import costs and improves efficiency. + + ### 2 Deploy There are three ways to deplot `hugegraph-hubble` @@ -207,14 +217,14 @@ Left navigation: 1. Fill in or select the attribute name, data type, and cardinality to complete the creation of the attribute. 2. Created attributes can be used as attributes of vertex type and edge type. -List mode: +List mode: <center> <img src="/docs/images/images-hubble/3221属性创建.png" alt="image"> </center> -Graph mode: +Graph mode: <center> <img src="/docs/images/images-hubble/3221属性创建2.png" alt="image"> @@ -359,7 +369,7 @@ Fill in the settings map: </center> -Mapping list: +Mapping list: <center> <img src="/docs/images/images-hubble/334设置映射2.png" alt="image"> diff --git a/content/en/docs/quickstart/hugegraph-server.md b/content/en/docs/quickstart/hugegraph-server.md index c3cbf04c..13a402bb 100644 --- a/content/en/docs/quickstart/hugegraph-server.md +++ b/content/en/docs/quickstart/hugegraph-server.md @@ -16,7 +16,7 @@ The Core Module is an implementation of the Tinkerpop interface; The Backend mod #### 2.1 Install Java 11 (JDK 11) -Consider use Java 11 to run `HugeGraph-Server` (also compatible with Java 8 now), and configure by yourself. +Consider using Java 11 to run `HugeGraph-Server` (also compatible with Java 8 now), and configure by yourself. **Be sure to execute the `java -version` command to check the jdk version before reading** @@ -31,7 +31,7 @@ There are four ways to deploy HugeGraph-Server components: #### 3.1 Use Docker container (Convenient for Test/Dev) -<!-- 3.1 is linked by other place. if change 3.1's title, please check --> +<!-- 3.1 is linked by another place. if change 3.1's title, please check --> You can refer to [Docker deployment guide](https://hub.docker.com/r/hugegraph/hugegraph). We can use `docker run -itd --name=graph -p 8080:8080 hugegraph/hugegraph` to quickly start an inner `HugeGraph server` with `RocksDB` in background. @@ -171,7 +171,7 @@ for detailed configuration introduction, please refer to [configuration document #### 5.1 Use a startup script to startup -The startup is divided into "first startup" and "non-first startup". 
This distinction is because the back-end database needs to be initialized before the first startup, and then the service is started. +The startup is divided into "first startup" and "non-first startup." This distinction is because the back-end database needs to be initialized before the first startup, and then the service is started. after the service is stopped artificially, or when the service needs to be started again for other reasons, because the backend database is persistent, you can start the service directly. When HugeGraphServer starts, it will connect to the backend storage and try to check the version number of the backend storage. If the backend is not initialized or the backend has been initialized but the version does not match (old version data), HugeGraphServer will fail to start and give an error message. @@ -181,7 +181,7 @@ If you need to access HugeGraphServer externally, please modify the `restserver. Since the configuration (hugegraph.properties) and startup steps required by various backends are slightly different, the following will introduce the configuration and startup of each backend one by one. -If you want to use HugeGraph authentication mode, you should follow the [Server Authentication Configuration](https://hugegraph.apache.org/docs/config/config-authentication/) configuration before you start Server later. +If you want to use HugeGraph authentication mode, you should follow the [Server Authentication Configuration](https://hugegraph.apache.org/docs/config/config-authentication/) before you start Server later. ##### 5.1.1 Memory @@ -457,14 +457,14 @@ This indicates the successful creation of the sample graph. In [3.3 Use Docker container](#33-use-docker-container), we have introduced how to use docker to deploy `hugegraph-server`. `server` can also preload an example graph by setting the parameter. -##### 5.2.1 Use Cassandra as the storage +##### 5.2.1 Uses Cassandra as storage <details> <summary> Click to expand/collapse Cassandra configuration and startup methods</summary> When using Docker, we can use Cassandra as the backend storage. We highly recommend using docker-compose directly to manage both the server and Cassandra. -The sample `docker-compose.yml` can be obtained on [github](https://github.com/apache/incubator-hugegraph/blob/master/hugegraph-server/hugegraph-dist/docker/example/docker-compose-cassandra.yml), and you can start it with `docker-compose up -d`. (If using Cassandra 4.0 as the backend storage, it takes approximately two minutes to initialize. Please be patient.) +The sample `docker-compose.yml` can be obtained on [GitHub](https://github.com/apache/incubator-hugegraph/blob/master/hugegraph-server/hugegraph-dist/docker/example/docker-compose-cassandra.yml), and you can start it with `docker-compose up -d`. (If using Cassandra 4.0 as the backend storage, it takes approximately two minutes to initialize. Please be patient.) ```yaml version: "3" @@ -558,11 +558,11 @@ And use the RESTful API to request `HugeGraphServer` and get the following resul This indicates the successful creation of the sample graph. -### 6 Access server +### 6. Access server #### 6.1 Service startup status check -Use `jps` to see service process +Use `jps` to see a service process ```bash jps diff --git a/contribution.md b/contribution.md index 6353d587..9b59ab13 100644 --- a/contribution.md +++ b/contribution.md @@ -1,18 +1,20 @@ -# How to help us (如何帮助) -1. 参考后续文档, 在本地 3 步快速构建官网环境, 启动起来看下目前效果 -2. 检查目前官网的 UI/内容/图标等是否合理 / 美观, 然后阅读 `docsy` 文档了解如何修改 -3. 
根据文档, 以及样例网站源码, 修改我们的网站 (或者提供**中/英文**翻译, 这个基本是 markdown 文档) -4. 先 fork 仓库, 然后基于 `master` 创建一个**新的**分支, 修改完成后提交 PR ✅ (请在 PR 内**截图**对比一下修改**前后**的效果 & 简要说明, 感谢) +# How to help us (如何参与) -Refer: 不熟悉 **github-pr** 流程的同学, 可以参考[贡献流程](https://github.com/apache/incubator-hugegraph/blob/master/CONTRIBUTING.md)文档, 最简单的方式是下 [github 桌面](https://desktop.github.com/)应用, 会简单方便许多~ +1. 在本地 3 步快速构建官网环境,启动起来看下目前效果 (Auto reload) +2. 先 fork 仓库,然后基于 `master` 创建一个**新的**分支,修改完成后提交 PR ✅ (请在 PR 内**截图**对比一下修改**前后**的效果 & 简要说明,感谢) +3. 新增/修改网站/文档 (提供**中/英文**页面翻译,基本为 `markdown` 格式) + +Refer: 不熟悉 **github-pr** 流程的同学, 可参考[贡献流程](https://github.com/apache/incubator-hugegraph/blob/master/CONTRIBUTING.md)文档, 推荐使用 [github desktop](https://desktop.github.com/) 应用, 会简单方便许多~ **PS:** 可以参考其他官网的[源码](https://www.docsy.dev/docs/examples), 方便快速了解 docsy 主题结构. -# How to install the website (hugo) +# How to start the website locally (hugo) -Only 3 steps u can easily to get start~ +Only **3 steps** u can easily to get start~ -U should ensure NPM & Hugo binary [download url](https://github.com/gohugoio/hugo/releases) before start, hugo binary must end with "**extended**" suffix, and we don't need install go env, just download hugo binary is fine +U should ensure NPM & Hugo binary [download url](https://github.com/gohugoio/hugo/releases) before start, +hugo binary must end with "**extended**" suffix, and we don't need to install go env, +just download hugo binary is fine (Note: the Hugo version can't be **too high**, try downgrade if failed) ```bash # 0. install npm & hugo if you don't have it @@ -24,12 +26,12 @@ wget https://github.com/gohugoio/hugo/releases/download/v0.95.0/hugo_extended_0. wget https://github.com/gohugoio/hugo/releases/download/v0.95.0/hugo_extended_0.95.0_Linux-64bit.tar.gz # 解压后 hugo 是单二进制文件, 可直接使用, 或推荐放 /usr/bin 及环境变量下. -sudo install hugo /usr/bin # 如果 mac 提示没有权限, 你可以直接使用它, 也可以 mv hugo /usr/bin 代替 +sudo install hugo /usr/bin # 如果 mac 提示没有权限, 可以 sudo mv hugo /usr/local/bin # 1. download website's source code git clone https://github.com/apache/hugegraph-doc.git website -# if download slowly or failed, try the proxy url +# (Optional) if download slowly or failed, try the proxy url git clone https://api.mtr.pub/apache/hugegraph-doc.git website # or https://github.do/https://github.com/apache/hugegraph-doc.git # 2. install npm dependencies in project root dir @@ -52,7 +54,7 @@ You can find detailed **theme instructions** in the [Docsy user guide - Content 1. `config.toml` in the **root dir** is global config 2. `config.toml` in the `./themes/docsy` is theme config 3. 
`content` dir contains multi-language contents (docs/index-html/blog/about/bg-image), it's the most important dir - - `content/en` represent english site, we do need translate the `doc` in it (可先用 google 翻译, 紧急) + - `content/en` represent english site, we do need to translate the `doc` in it (可先用 Google/GPT 翻译) - `content/cn` represent chinese site (需要汉化其中英文部分) We can see some [example website](https://www.docsy.dev/docs/examples/) & refer to their GitHub **source code** to reduce time to design @@ -67,7 +69,8 @@ If you run into the following error: ``` ➜ hugo server -Error: Error building site: TOCSS: failed to transform "scss/main.scss" (text/x-scss): resource "scss/scss/main.scss_9fadf33d895a46083cdd64396b57ef68" not found in file cache +Error: Error building site: TOCSS: failed to transform "scss/main.scss" (text/x-scss): +resource "scss/scss/main.scss_9fadf33d895a46083cdd64396b57ef68" not found in file cache ``` This error occurs if you have not installed the extended version of [Hugo](https://github.com/gohugoio/hugo/releases). diff --git a/themes/docsy/layouts/partials/community_links.html b/themes/docsy/layouts/partials/community_links.html index 601044d4..25301798 100644 --- a/themes/docsy/layouts/partials/community_links.html +++ b/themes/docsy/layouts/partials/community_links.html @@ -2,12 +2,13 @@ <section class="row td-box td-box--4 td-box--gradient td-box--height-auto linkbox"> <div class="col-xs-12 col-sm-6 col-md-6 col-lg-6"> -<h2>Learn and Connect</h2> +<h2>Learn and Connect</h2> <p>Using or want to use {{ .Site.Title }}? Find out more here: {{ with index $links "user"}} {{ template "community-links-list" . }} {{ end }} </div> + <div class="col-xs-12 col-sm-6 col-md-6 col-lg-6"> <h2>Develop and Contribute</h2> <p>If you want to get more involved by contributing to {{ .Site.Title }}, join us here: @@ -15,7 +16,7 @@ {{ template "community-links-list" . }} {{ end }} -<p>If you want to report security problems with HugeGraph,please contact us with <a href="../docs/guides/security/">security email address</a>. +<p>More SEC related process could refer <a href="../docs/guides/security/">sec-policy</a>. </p> <p>You can find out how to contribute to these docs in our <a href="../docs/contribution-guidelines/">Contribution Guidelines</a>. </div>
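As a companion to the contribution.md steps above, a minimal sketch of a local preview run. The dependency-install command (`npm install`), the final `hugo server` step and the version check are assumptions inferred from the doc, not commands copied verbatim from this commit.

```bash
# Hugo must be the "extended" build, and not newer than the docsy theme supports.
hugo version | grep -i extended

# 1. fetch the website source
git clone https://github.com/apache/hugegraph-doc.git website
cd website

# 2. install npm dependencies in the project root (assumed to be a plain install)
npm install

# 3. build and serve locally, then open http://localhost:1313 to check the pages,
#    e.g. the footer links rendered by themes/docsy/layouts/partials/community_links.html
hugo server
```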
