justbejyk opened a new issue, #2298:
URL: https://github.com/apache/incubator-hugegraph/issues/2298

   ### Problem Type
   
   server status (startup / runtime exception)
   
   ### Before submit
   
   - [X] I have confirmed and searched the existing [Issues](https://github.com/apache/hugegraph/issues) and [FAQ](https://hugegraph.apache.org/docs/guides/faq/) and found no similar or duplicate problem
   
   ### Environment
   
   - Server Version: 1.0.0 (Apache Release Version)
   - Backend: RocksDB x nodes,  SSD 
   - OS: xx CPUs, xx G RAM, Ubuntu 2x.x / CentOS 7.x 
   - Data Size:  xx vertices, xx edges
   
   
   ### Your Question
   
   When running with Docker and the RocksDB backend, the server starts normally until the RocksDB data directory is bind-mounted; as soon as the directory is mounted, startup fails with the log below.
   The command used (version 1.0.0):
   docker run -itd --name graph -v `pwd`/graph-data:/hugegraph/rocksdb-data -p 8085:8080 hugegraph/hugegraph
   
   2023-08-26 08:37:32 [main] [INFO] o.a.h.s.RestServer - RestServer starting...
   2023-08-26 08:37:48 [main] [INFO] o.a.h.u.ConfigUtil - Scanning option 
'graphs' directory './conf/graphs'
   2023-08-26 08:37:52 [db-open-1] [INFO] o.a.h.b.s.r.RocksDBStore - Opening 
RocksDB with data path: rocksdb-data/m
   2023-08-26 08:37:53 [db-open-1] [INFO] o.a.h.b.s.r.RocksDBStore - Failed to 
open RocksDB 'rocksdb-data/m' with database 'hugegraph', try to init CF later
   2023-08-26 08:37:53 [main] [INFO] o.a.h.b.c.CacheManager - Init RamCache for 
'schema-id-hugegraph' with capacity 10000
   2023-08-26 08:37:53 [main] [INFO] o.a.h.b.c.CacheManager - Init RamCache for 
'schema-name-hugegraph' with capacity 10000
   2023-08-26 08:37:53 [db-open-1] [INFO] o.a.h.b.s.r.RocksDBStore - Opening 
RocksDB with data path: rocksdb-data/s
   2023-08-26 08:38:04 [db-open-1] [INFO] o.a.h.b.s.r.RocksDBStore - Opening 
RocksDB with data path: rocksdb-data/g
   2023-08-26 08:38:04 [main] [INFO] o.c.o.l.Uns - OHC using JNA OS native 
malloc/free
   2023-08-26 08:38:05 [main] [INFO] o.a.h.b.c.CacheManager - Init LevelCache 
for 'vertex-hugegraph' with capacity 10000:10000000
   2023-08-26 08:38:05 [main] [INFO] o.a.h.b.c.CacheManager - Init LevelCache 
for 'edge-hugegraph' with capacity 1000:1000000
   2023-08-26 08:38:05 [main] [INFO] o.a.h.b.c.CacheManager - Init RamCache for 
'users-hugegraph' with capacity 10240
   2023-08-26 08:38:05 [main] [INFO] o.a.h.b.c.CacheManager - Init RamCache for 
'users_pwd-hugegraph' with capacity 10240
   2023-08-26 08:38:05 [main] [INFO] o.a.h.b.c.CacheManager - Init RamCache for 
'token-hugegraph' with capacity 10240
   2023-08-26 08:38:05 [main] [INFO] o.a.h.c.GraphManager - Graph 'hugegraph' 
was successfully configured via './conf/graphs/hugegraph.properties'
   2023-08-26 08:38:05 [main] [INFO] o.a.h.r.RpcServer - RpcServer config is 
empty, skip starting RpcServer
   2023-08-26 08:38:07 [main] [INFO] o.a.h.c.GraphManager - Check backend 
version
   2023-08-26 08:38:07 [main] [INFO] o.a.h.StandardHugeGraph - Close graph 
standardhugegraph[hugegraph]
   2023-08-26 08:38:07 [main] [INFO] o.a.h.r.RpcServer - RpcServer stop on port 
8091
   2023-08-26 08:38:07 [main] [ERROR] o.a.h.d.HugeGraphServer - HugeRestServer 
start error: 
   org.apache.hugegraph.backend.BackendException: The backend store of 
'hugegraph' has not been initialized
           at 
org.apache.hugegraph.core.GraphManager.checkBackendVersionOrExit(GraphManager.java:448)
 ~[hugegraph-api-1.0.0.jar:0.69.0.0]
           at 
org.apache.hugegraph.core.GraphManager.init(GraphManager.java:124) 
~[hugegraph-api-1.0.0.jar:0.69.0.0]
           at 
org.apache.hugegraph.server.ApplicationConfig$GraphManagerFactory$1.onEvent(ApplicationConfig.java:129)
 ~[hugegraph-api-1.0.0.jar:0.69.0.0]
           at 
org.glassfish.jersey.server.internal.monitoring.CompositeApplicationEventListener.onEvent(CompositeApplicationEventListener.java:49)
 ~[jersey-server-3.0.3.jar:?]
           at 
org.glassfish.jersey.server.internal.monitoring.MonitoringContainerListener.onStartup(MonitoringContainerListener.java:56)
 ~[jersey-server-3.0.3.jar:?]
           at 
org.glassfish.jersey.server.ApplicationHandler.onStartup(ApplicationHandler.java:711)
 ~[jersey-server-3.0.3.jar:?]
           at 
org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpContainer.start(GrizzlyHttpContainer.java:330)
 ~[jersey-container-grizzly2-http-3.0.3.jar:?]
           at 
org.glassfish.grizzly.http.server.HttpHandlerChain.start(HttpHandlerChain.java:376)
 ~[grizzly-http-server-3.0.1.jar:3.0.1]
           at 
org.glassfish.grizzly.http.server.HttpServer.setupHttpHandler(HttpServer.java:268)
 ~[grizzly-http-server-3.0.1.jar:3.0.1]
           at 
org.glassfish.grizzly.http.server.HttpServer.start(HttpServer.java:245) 
~[grizzly-http-server-3.0.1.jar:3.0.1]
           at org.apache.hugegraph.server.RestServer.start(RestServer.java:71) 
~[hugegraph-api-1.0.0.jar:0.69.0.0]
           at org.apache.hugegraph.server.RestServer.start(RestServer.java:178) 
~[hugegraph-api-1.0.0.jar:0.69.0.0]
           at 
org.apache.hugegraph.dist.HugeRestServer.start(HugeRestServer.java:32) 
~[hugegraph-dist-1.0.0.jar:1.0.0]
           at 
org.apache.hugegraph.dist.HugeGraphServer.<init>(HugeGraphServer.java:60) 
~[hugegraph-dist-1.0.0.jar:1.0.0]
           at 
org.apache.hugegraph.dist.HugeGraphServer.main(HugeGraphServer.java:120) 
~[hugegraph-dist-1.0.0.jar:1.0.0]
   2023-08-26 08:38:07 [main] [INFO] o.a.h.HugeFactory - HugeFactory shutdown
   2023-08-26 08:38:09 [SOFA-RPC-ShutdownHook] [WARN] 
c.a.s.r.c.RpcRuntimeContext - SOFA RPC Framework catch JVM shutdown event, Run 
shutdown hook now.
   2023-08-26 08:38:09 [hugegraph-shutdown] [INFO] o.a.h.HugeFactory - 
HugeGraph is shutting down
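   
   A possible workaround, offered only as a sketch (the in-container paths are assumptions inferred from the mount path in the command above and the relative `rocksdb-data/*` paths in the log, and it has not been verified against the 1.0.0 image): bind-mounting an empty host directory over the container's `rocksdb-data` directory hides whatever store the image had already initialized, which would explain the "backend store of 'hugegraph' has not been initialized" error. Two ways to get an initialized store onto the mounted volume:
   
   ```bash
   # Option 1 (assumption): seed the host directory with the store the image
   # already contains, then mount that pre-initialized copy.
   docker run -d --name graph-seed hugegraph/hugegraph          # throwaway container, name is arbitrary
   docker cp graph-seed:/hugegraph/rocksdb-data ./graph-data    # source path inside the image is assumed
   docker rm -f graph-seed
   docker run -itd --name graph \
     -v "$(pwd)"/graph-data:/hugegraph/rocksdb-data \
     -p 8085:8080 hugegraph/hugegraph
   
   # Option 2 (assumption): keep the empty mount and run the standard init script
   # inside the container, then restart it. This assumes the container stays up
   # after the failed REST server start and that the working directory is the
   # HugeGraph install dir (the relative log paths suggest it is).
   docker exec -it graph bash -c "./bin/init-store.sh"
   docker restart graph
   ```
   
   If neither matches the official image layout, checking ownership and write permissions on the mounted host directory (RocksDB must be able to create its files there) is also worth ruling out.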
   
   ### Vertex/Edge example
   
   _No response_
   
   ### Schema [VertexLabel, EdgeLabel, IndexLabel]
   
   _No response_

