dosubot[bot] commented on issue #2787:
URL: 
https://github.com/apache/incubator-hugegraph/issues/2787#issuecomment-2954562130

   <!-- Answer -->
   For a distributed deployment of HugeGraph (RocksDB backend) across 192.168.1.10, 192.168.1.11, and 192.168.1.12, the recommended procedure and configuration are as follows:
   
   ---
   
   **1. Environment preparation**  
   Install JDK 11+ on every server; 16 GB or more of RAM is recommended, and SSDs are preferred. Download the HugeGraph 1.5.0 release package.
   
   ---
   
   **2. Configuration files (required on every server; contents are almost identical, only the ports and data/wal paths differ)**
   
   Taking the three machines as an example, assume the REST/Gremlin/RPC ports are:
   
   - 192.168.1.10:8081/8181/8091
   - 192.168.1.11:8082/8182/8092
   - 192.168.1.12:8083/8183/8093
   
   **conf/rest-server.properties** (change the ports on each machine; keep the graphs path identical):
   
   ```properties
   restserver.url=http://0.0.0.0:808X   # X=1/2/3
   graphs=./conf/graphs
   rpc.server_host=192.168.1.1X         # X=0/1/2
   rpc.server_port=809X                 # X=1/2/3
   rpc.remote_url=192.168.1.10:8091,192.168.1.11:8092,192.168.1.12:8093
   ```
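   For example, filling in the `X` placeholders above, the file on 192.168.1.10 would read:

   ```properties
   restserver.url=http://0.0.0.0:8081
   graphs=./conf/graphs
   rpc.server_host=192.168.1.10
   rpc.server_port=8091
   rpc.remote_url=192.168.1.10:8091,192.168.1.11:8092,192.168.1.12:8093
   ```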
   
   **conf/graphs/hugegraph.properties** (identical on every node; only the data/wal paths may differ):
   
   ```properties
   backend=rocksdb
   serializer=binary
   store=hugegraph
   rocksdb.data_path=./data
   rocksdb.wal_path=./wal
   raft.group_peers=192.168.1.10:8091,192.168.1.11:8092,192.168.1.12:8093
   raft.election_timeout=10000
   raft.rpc_connect_timeout=5000
   raft.rpc_timeout=60
   rocksdb.write_buffer_size=134217728
   rocksdb.max_write_buffer_number=6
   rocksdb.block_cache_size=8589934592
   ```
   Note: parameters such as raft.mode, raft.endpoint, and raft.election_timeout_ms are obsolete and should no longer be set.
   
   ---
   
   **3. Startup procedure**
   
   - Run `bin/init-store.sh` once on every node to initialize the store (make sure the data directory contains no lock file and is not held by another process).
   - Start the service with `bin/start-hugegraph.sh`.
   - In Docker, set the JVM heap via `JAVA_OPTS` (e.g. `-Xmx32g -Xms32g`); with docker-compose, mount each node's own configuration files and use the `hugegraph.` prefix for environment variables.
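   The per-node sequence above can be sketched as a loop. The remote command is commented out and purely illustrative; it assumes ssh access and that the release is unpacked as `hugegraph-1.5.0` on every host:

   ```shell
   # walk over the three nodes; on each one, initialize the store once,
   # then start the server (real command shown as a comment)
   for node in 192.168.1.10 192.168.1.11 192.168.1.12; do
     echo "init + start on $node"
     # ssh "$node" 'cd hugegraph-1.5.0 && bin/init-store.sh && bin/start-hugegraph.sh'
   done
   ```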
   
   ---
   
   **4. Common problems and troubleshooting**
   
   - Parameters such as rocksdb.write_buffer_size and rocksdb.block_cache_size must be given as byte counts (e.g. 134217728); writing `128MB` raises a type error.
   - Common causes of data failing to synchronize: inconsistent configuration, port conflicts, nodes missing from raft.group_peers, conflicting data/wal paths, no network connectivity between nodes, or nodes that were never initialized.
   - On lock errors, make sure no process is holding the lock; if necessary, delete the lock file manually (with no service running). [Reference](https://github.com/apache/incubator-hugegraph/issues/2762)
   - When deploying multiple instances, each instance's data_path/wal_path must be unique; sharing the same directory causes lock conflicts. [Reference](https://github.com/apache/incubator-hugegraph/issues/2766)
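   The byte counts required by the rocksdb.* size parameters can be derived with plain shell arithmetic:

   ```shell
   # rocksdb.* sizes must be raw byte counts, not "128MB"-style strings
   echo $((128 * 1024 * 1024))        # 128 MB -> 134217728
   echo $((8 * 1024 * 1024 * 1024))   # 8 GB   -> 8589934592
   ```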
   
   ---
   
   **5. Important notes**
   
   - graphs/hugegraph.properties must be identical on all nodes, raft.group_peers must include every node, the nodes must be reachable from one another over the network, and the data/wal paths must be writable.
   - Data synchronization relies on the raft protocol: only after the leader node commits a write do the followers replicate it, so wait for raft log replication to complete.
   - For RocksDB errors such as "SST file ahead of WALs", back up the data first and then try repairing it with the ldb repair tool. [Reference](https://github.com/apache/incubator-hugegraph/issues/2762)
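   One way to verify the "identical on all nodes" requirement is to collect a checksum of hugegraph.properties from each node (e.g. via `ssh $node md5sum conf/graphs/hugegraph.properties`) and confirm they all match. A minimal sketch of the comparison, with placeholder checksums standing in for the collected values:

   ```shell
   # placeholder checksums collected from the three nodes; if the set of
   # distinct values has size 1, the config files are identical
   sums="d41d8c d41d8c d41d8c"
   if [ "$(printf '%s\n' $sums | sort -u | wc -l)" -eq 1 ]; then
     echo "configs identical"
   else
     echo "configs differ"
   fi
   ```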
   
   ---
   
   If you hit a specific error or have special requirements, please share the logs so the issue can be pinpointed further.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

