dosubot[bot] commented on issue #2474:
URL:
https://github.com/apache/incubator-hugegraph/issues/2474#issuecomment-1984938765
Based on the context provided, it sounds like disk space is not being freed even though you set the TTL for vertices and edges to 86400000 ms (24 hours) and stopped write operations several days ago. This is likely due to how RocksDB, the backend storage engine used by Apache HugeGraph, handles deletions and compaction.
RocksDB does not remove data from disk immediately when a delete occurs or when data expires. Instead, it marks the data as deleted (writing a tombstone), and the space is only reclaimed during a later compaction. This is why the disk space was not released even after writes had been stopped for several days.
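To make the mark-then-reclaim behavior concrete, here is a simplified, stdlib-only sketch (not RocksDB's actual implementation): a delete only records a tombstone entry, and the space is reclaimed only when `compact()` rewrites the store without the tombstoned entries.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of LSM-style delete semantics (NOT RocksDB code):
// a delete only records a tombstone marker; the entry's space is
// reclaimed when compact() rewrites the store without tombstones.
public class TombstoneStore {
    private static final String TOMBSTONE = "\u0000TOMBSTONE";
    private final Map<String, String> data = new HashMap<>();

    public void put(String key, String value) {
        data.put(key, value);
    }

    public void delete(String key) {
        // Mark as deleted; the entry is logically gone but still
        // occupies space until compaction runs.
        data.put(key, TOMBSTONE);
    }

    public int entriesOnDisk() {
        return data.size(); // tombstoned entries still count
    }

    public void compact() {
        // Physically drop tombstoned entries, reclaiming space.
        data.values().removeIf(TOMBSTONE::equals);
    }

    public static void main(String[] args) {
        TombstoneStore store = new TombstoneStore();
        store.put("v1", "vertex-1");
        store.put("v2", "vertex-2");
        store.delete("v1");
        System.out.println(store.entriesOnDisk()); // still 2
        store.compact();
        System.out.println(store.entriesOnDisk()); // now 1
    }
}
```

The same principle is why stopping writes alone does not shrink the data directory: nothing rewrites the existing files until compaction runs.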
In the provided context, a `registerMetaHandler` call registers a handler for "compact" operations, which indicates that Apache HugeGraph supports triggering compaction manually through its meta-handler mechanism. Triggering compaction manually is what actually removes from disk the data that has been marked deleted or has expired under a TTL, and so is what reclaims the space.
Here's the relevant part of the code that demonstrates this behavior:
```java
this.registerMetaHandler("compact", (session, meta, args) -> {
    RocksDBMetrics metrics = new RocksDBMetrics(dbsGet.get(), session);
    return metrics.compact();
});
```
Therefore, to address the issue of disk space not being cleared, you may need to manually trigger a compaction in Apache HugeGraph by invoking the "compact" meta handler shown in the snippet above. That should reclaim the space still occupied by logically deleted or expired data.
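As a rough illustration of driving that compaction from a client, the sketch below builds an HTTP request with the JDK's `java.net.http` client. Note that the endpoint path used here is purely a hypothetical placeholder, not a documented HugeGraph API; check the REST API reference for your HugeGraph version for the actual route and method.

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Hypothetical sketch: triggering a backend compaction over HTTP.
// The endpoint path below is an ASSUMPTION for illustration only --
// consult your HugeGraph version's REST API docs for the real route.
public class CompactRequestBuilder {
    public static HttpRequest build(String host, int port, String graph) {
        String url = String.format("http://%s:%d/apis/graphs/%s/compact",
                                   host, port, graph);
        return HttpRequest.newBuilder()
                          .uri(URI.create(url))
                          .PUT(HttpRequest.BodyPublishers.noBody())
                          .build();
    }

    public static void main(String[] args) {
        // Build (but do not send) the request, so this runs offline.
        HttpRequest req = build("localhost", 8080, "hugegraph");
        System.out.println(req.method() + " " + req.uri());
    }
}
```

If you have direct access to the RocksDB data files instead, RocksDB's own Java API also exposes a `compactRange()` method on an open database handle, which forces a full compaction.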
As for the TTL (Time to Live) setting in Apache HugeGraph (incubating) Release 1.2.0, expired vertices and edges are deleted automatically, as the `expiredElementIds` method in the provided code shows. The method compares each element's expiration time against the current time, collects the expired elements into a set, and then removes them from the `elementIds` set, effectively deleting them from the index. No manual intervention is required; deletion is handled automatically based on the TTL settings and the current time.
Here's the relevant part of the code that demonstrates this behavior:
```java
public Set<IdWithExpiredTime> expiredElementIds() {
    long now = this.graph.now();
    Set<IdWithExpiredTime> expired = InsertionOrderUtil.newSet();
    for (IdWithExpiredTime id : this.elementIds) {
        if (0L < id.expiredTime && id.expiredTime < now) {
            expired.add(id);
        }
    }
    this.elementIds.removeAll(expired);
    return expired;
}
```
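The same expiry check can be exercised in isolation with a minimal, self-contained stand-in for `IdWithExpiredTime` (a hypothetical record for illustration, not the real HugeGraph class). Note that an `expiredTime` of 0 means "no TTL", so such elements never expire:

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Self-contained sketch of the expiredElementIds() logic above,
// using a minimal stand-in record instead of HugeGraph's class.
public class TtlExpiry {
    record IdWithTtl(String id, long expiredTime) {}

    // Collect ids whose expiry timestamp is set (> 0) and already
    // in the past, then drop them from the live set.
    static Set<IdWithTtl> expire(Set<IdWithTtl> elementIds, long now) {
        Set<IdWithTtl> expired = new LinkedHashSet<>();
        for (IdWithTtl id : elementIds) {
            if (0L < id.expiredTime() && id.expiredTime() < now) {
                expired.add(id);
            }
        }
        elementIds.removeAll(expired);
        return expired;
    }

    public static void main(String[] args) {
        Set<IdWithTtl> ids = new LinkedHashSet<>();
        ids.add(new IdWithTtl("v1", 100L)); // expired at now = 500
        ids.add(new IdWithTtl("v2", 0L));   // no TTL, never expires
        ids.add(new IdWithTtl("v3", 900L)); // still alive at now = 500
        Set<IdWithTtl> expired = expire(ids, 500L);
        System.out.println(expired.size() + " expired, " + ids.size() + " remain");
    }
}
```

Keep in mind this removes entries from the in-memory index set; as discussed above, the disk space they occupied in RocksDB is only reclaimed once compaction runs.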
I hope this clarifies why the disk space is not being cleared, and how to manually trigger compaction to reclaim it, in Apache HugeGraph (incubating) Release 1.2.0 deployed with Docker and the RocksDB backend. If you have any further questions, feel free to ask.
For more information, you can refer to the following sources:
- [hugegraph-server/hugegraph-rocksdb/src/main/java/org/apache/hugegraph/backend/store/rocksdb/RocksDBStore.java](https://github.com/apache/incubator-hugegraph/blob/master/hugegraph-server/hugegraph-rocksdb/src/main/java/org/apache/hugegraph/backend/store/rocksdb/RocksDBStore.java)
- [hugegraph-server/hugegraph-core/src/main/java/org/apache/hugegraph/structure/HugeIndex.java](https://github.com/apache/incubator-hugegraph/blob/master/hugegraph-server/hugegraph-core/src/main/java/org/apache/hugegraph/structure/HugeIndex.java)
You can also refer to the similar issue
[here](https://github.com/apache/incubator-hugegraph/issues/2243) for more
context.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.