[ https://issues.apache.org/jira/browse/HDDS-2355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Aravindan Vijayan reopened HDDS-2355:
-------------------------------------

Reopening for changing Resolution.

> OM double buffer flush termination with RocksDB error
> -----------------------------------------------------
>
>                 Key: HDDS-2355
>                 URL: https://issues.apache.org/jira/browse/HDDS-2355
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>            Reporter: Bharat Viswanadham
>            Assignee: Aravindan Vijayan
>            Priority: Blocker
>             Fix For: 0.5.0
>
>
> om_1 | java.io.IOException: Unable to write the batch.
> om_1 |     at org.apache.hadoop.hdds.utils.db.RDBBatchOperation.commit(RDBBatchOperation.java:48)
> om_1 |     at org.apache.hadoop.hdds.utils.db.RDBStore.commitBatchOperation(RDBStore.java:240)
> om_1 |     at org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.flushTransactions(OzoneManagerDoubleBuffer.java:146)
> om_1 |     at java.base/java.lang.Thread.run(Thread.java:834)
> om_1 | Caused by: org.rocksdb.RocksDBException: WritePrepared/WriteUnprepared txn tag when write_after_commit_ is enabled (in default WriteCommitted mode). If it is not due to corruption, the WAL must be emptied before changing the WritePolicy.
> om_1 |     at org.rocksdb.RocksDB.write0(Native Method)
> om_1 |     at org.rocksdb.RocksDB.write(RocksDB.java:1421)
> om_1 |     at org.apache.hadoop.hdds.utils.db.RDBBatchOperation.commit(RDBBatchOperation.java:46)
>
> In a few of my test runs I see this error and the OM is terminated.
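For context, here is a minimal sketch of the failure path the stack trace describes: a RocksDB WriteBatch commit that throws RocksDBException and gets wrapped in an IOException ("Unable to write the batch."), which is what terminates the double buffer flush thread. This is not the actual RDBBatchOperation code; the class name, DB path, and key/value are made up for illustration, and only the standard RocksDB Java API calls are real.

{code:java}
import java.io.IOException;

import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;
import org.rocksdb.WriteBatch;
import org.rocksdb.WriteOptions;

/**
 * Illustrative sketch only (not Ozone code): commit a WriteBatch and wrap
 * any RocksDBException in an IOException, mirroring the exception chain in
 * the reported stack trace.
 */
public class BatchCommitSketch {
  public static void main(String[] args) throws IOException {
    RocksDB.loadLibrary();
    try (Options options = new Options().setCreateIfMissing(true);
         RocksDB db = RocksDB.open(options, "/tmp/sketch-db");   // path is hypothetical
         WriteBatch batch = new WriteBatch();
         WriteOptions writeOptions = new WriteOptions()) {

      // Stage a mutation in the batch, as the double buffer does for each
      // flushed transaction.
      batch.put("key".getBytes(), "value".getBytes());

      // This is the call that fails in the report: RocksDB rejects the batch
      // because its WritePrepared/WriteUnprepared transaction tag does not
      // match the DB's default WriteCommitted write policy.
      db.write(writeOptions, batch);
    } catch (RocksDBException e) {
      // Rethrown the same way the trace shows, which ends the flush thread.
      throw new IOException("Unable to write the batch.", e);
    }
  }
}
{code}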