haohao0103 opened a new issue, #2578: URL: https://github.com/apache/incubator-hugegraph/issues/2578
### Bug Type (问题类型)

None

### Before submit

- [X] 我已经确认现有的 [Issues](https://github.com/apache/hugegraph/issues) 与 [FAQ](https://hugegraph.apache.org/docs/guides/faq/) 中没有相同 / 重复问题 (I have confirmed and searched that there are no similar problems in the historical issue and documents)

### Environment (环境信息)

- Server Version: 1.5.0 (Apache Release Version)
- Backend: RocksDB, 5 nodes, SSD

### Expected & Actual behavior (期望与实际表现)

When the memory leak occurs in the graph server during data writing, the distribution of object counts in the JVM is as follows:

```
jmap -histo:live 51680 | head -n 10

 num     #instances         #bytes  class name (module)
-------------------------------------------------------
   1:     284880553    13509899520  [B ([email protected])
   2:     284703909     9110525088  java.lang.String ([email protected])
   3:     283905229     6813725496  org.apache.hugegraph.backend.id.IdGenerator$StringId
   4:        567813     2284841352  [Lorg.apache.hugegraph.backend.id.Id;
   5:       1384040      182210368  [Ljava.lang.Object; ([email protected])
   6:       2270975       90839000  java.util.concurrent.ConcurrentLinkedDeque$Node ([email protected])
   7:       1191421       76250944  java.util.LinkedHashMap$Entry ([email protected])
```

The issue was eventually traced to CachedGraphTransaction, which clears the edge cache when vertices are written. When a large number of vertices are written, the commitMutation2Backend() method triggers this.notifyChanges(Cache.ACTION_INVALIDED, HugeType.VERTEX, vertexIds). The resulting events back up in the single-threaded thread pool inside EventHub, and each queued task holds on to its vertexIds payload, which causes the memory leak (see the sketch at the end of this issue).

### Vertex/Edge example (问题点 / 边数据举例)

_No response_

### Schema [VertexLabel, EdgeLabel, IndexLabel] (元数据结构)

_No response_
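To make the suspected mechanism concrete, here is a minimal, self-contained sketch (plain JDK code, not HugeGraph's EventHub or CachedGraphTransaction) of the failure mode described above: a fast producer submits cache-invalidation events to a single worker thread backed by an unbounded queue, the worker cannot keep up, and every queued task keeps its vertex-id batch reachable, so the heap grows with the backlog. The class name `EventBacklogDemo`, the String ids, the batch sizes, and the 1 ms consumer delay are all illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class EventBacklogDemo {

    public static void main(String[] args) throws Exception {
        // Single worker thread with an unbounded queue, similar in shape to
        // EventHub's single-threaded notification executor.
        ThreadPoolExecutor notifier = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());

        for (int batch = 0; batch < 200_000; batch++) {
            // Each write batch produces a batch of "vertex ids"
            // (hypothetical String payload standing in for Id[] / StringId).
            List<String> vertexIds = new ArrayList<>();
            for (int i = 0; i < 500; i++) {
                vertexIds.add("v-" + batch + "-" + i);
            }

            // Fast producer: submit an "invalidate edge cache" event.
            // The slow consumer (simulated by the sleep) cannot keep up, so
            // queued tasks, and the id lists they capture, accumulate.
            notifier.submit(() -> {
                try {
                    Thread.sleep(1);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                return vertexIds.size(); // pretend to invalidate cache entries
            });

            if (batch % 10_000 == 0) {
                long usedMb = (Runtime.getRuntime().totalMemory()
                        - Runtime.getRuntime().freeMemory()) >> 20;
                System.out.printf("batch=%d queued=%d heapUsedMB=%d%n",
                        batch, notifier.getQueue().size(), usedMb);
            }
        }

        notifier.shutdown();
        notifier.awaitTermination(10, TimeUnit.MINUTES);
    }
}
```

Run with a small heap (for example `-Xmx1g`) and the printed queue size and used heap climb together until the process runs out of memory, which matches the histogram above being dominated by byte[], String, StringId, and Id[] instances (assumption: in the real server the retained payload is the Id arrays passed to notifyChanges rather than these demo strings).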
