[ 
https://issues.apache.org/jira/browse/HBASE-22539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16896046#comment-16896046
 ] 

binlijin edited comment on HBASE-22539 at 7/30/19 11:42 AM:
------------------------------------------------------------

[~wchevreuil] What I am saying about Durability.ASYNC_WAL is that it is not related to 
AsyncFSWALProvider/FSHLogProvider. 
A user can set DURABILITY on the HTableDescriptor:
alter 'table', METHOD => 'table_att', DURABILITY => 'ASYNC_WAL'
{TABLE_ATTRIBUTES => {DURABILITY => 'ASYNC_WAL'}}
or, per mutation, call Put.setDurability(Durability.ASYNC_WAL) to use it.

{code}
/**
 * Enum describing the durability guarantees for tables and {@link Mutation}s
 * Note that the items must be sorted in order of increasing durability
 */
@InterfaceAudience.Public
public enum Durability {
  /* Developer note: Do not rename the enum field names. They are serialized in HTableDescriptor */
  /**
   * If this is for tables durability, use HBase's global default value (SYNC_WAL).
   * Otherwise, if this is for mutation, use the table's default setting to determine durability.
   * This must remain the first option.
   */
  USE_DEFAULT,
  /**
   * Do not write the Mutation to the WAL
   */
  SKIP_WAL,
  /**
   * Write the Mutation to the WAL asynchronously
   */
  ASYNC_WAL,
  /**
   * Write the Mutation to the WAL synchronously.
   * The data is flushed to the filesystem implementation, but not necessarily to disk.
   * For HDFS this will flush the data to the designated number of DataNodes.
   * See <a href="https://issues.apache.org/jira/browse/HADOOP-6313">HADOOP-6313</a>
   */
  SYNC_WAL,
  /**
   * Write the Mutation to the WAL synchronously and force the entries to disk.
   * See <a href="https://issues.apache.org/jira/browse/HADOOP-6313">HADOOP-6313</a>
   */
  FSYNC_WAL
}
{code}
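Since the Javadoc above requires the constants to stay sorted in order of increasing durability, code can compare two levels by ordinal. A minimal stand-alone sketch of that property (the enum body is reproduced here so it compiles without the hbase-client jar, and the {{stronger}} helper is illustrative, not HBase API):

```java
public class DurabilityDemo {
    // Copy of the Durability constants from the HBase source above,
    // declared in order of increasing durability.
    enum Durability {
        USE_DEFAULT, // resolve against the table (or global) default
        SKIP_WAL,    // no WAL write at all
        ASYNC_WAL,   // WAL write happens asynchronously
        SYNC_WAL,    // WAL write is flushed to the filesystem
        FSYNC_WAL    // WAL write is forced to disk
    }

    // Because the constants are sorted, the stronger of two settings
    // can be picked by comparing ordinals.
    static Durability stronger(Durability a, Durability b) {
        return a.ordinal() >= b.ordinal() ? a : b;
    }

    public static void main(String[] args) {
        System.out.println(stronger(Durability.ASYNC_WAL, Durability.SYNC_WAL)); // SYNC_WAL
        System.out.println(Durability.SKIP_WAL.ordinal() < Durability.ASYNC_WAL.ordinal()); // true
    }
}
```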




> Potential WAL corruption due to early DBBs re-use. 
> ---------------------------------------------------
>
>                 Key: HBASE-22539
>                 URL: https://issues.apache.org/jira/browse/HBASE-22539
>             Project: HBase
>          Issue Type: Bug
>          Components: rpc, wal
>    Affects Versions: 2.1.1
>            Reporter: Wellington Chevreuil
>            Assignee: Wellington Chevreuil
>            Priority: Blocker
>
> Summary
> We had been chasing a WAL corruption issue reported on one of our customers 
> deployments running release 2.1.1 (CDH 6.1.0). After providing a custom 
> modified jar with the extra sanity checks implemented by HBASE-21401 applied 
> on some code points, plus additional debugging messages, we believe it is 
> related to DirectByteBuffer usage, and Unsafe copy from offheap memory to 
> on-heap array triggered 
> [here|https://github.com/apache/hbase/blob/branch-2.1/hbase-common/src/main/java/org/apache/hadoop/hbase/util/ByteBufferUtils.java#L1157],
>  such as when writing into a non ByteBufferWriter type, as done 
> [here|https://github.com/apache/hbase/blob/branch-2.1/hbase-common/src/main/java/org/apache/hadoop/hbase/io/ByteBufferWriterOutputStream.java#L84].
> More details in the following comment.
>  
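
The hazard the report describes, a pooled DirectByteBuffer being recycled before its contents have been fully copied on-heap, can be sketched in plain Java. This is not the HBase code path; the buffer, the simulated "pool re-use", and the {{snapshot}} helper are all illustrative:

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class EarlyReuseDemo {
    // Snapshot the buffer's readable bytes onto the heap without disturbing
    // its position, loosely mimicking an offheap-to-onheap copy.
    static byte[] snapshot(ByteBuffer buf) {
        byte[] out = new byte[buf.remaining()];
        buf.duplicate().get(out);
        return out;
    }

    public static void main(String[] args) {
        ByteBuffer pooled = ByteBuffer.allocateDirect(4);
        pooled.put(new byte[] {1, 2, 3, 4}).flip();

        byte[] early = snapshot(pooled); // copied while the data is still valid

        // Simulated early re-use: the pool hands the same buffer to another writer.
        pooled.clear();
        pooled.put(new byte[] {9, 9, 9, 9}).flip();

        byte[] late = snapshot(pooled);  // a reader that waited now sees the new bytes

        System.out.println(Arrays.toString(early)); // [1, 2, 3, 4]
        System.out.println(Arrays.toString(late));  // [9, 9, 9, 9]
    }
}
```

A reader holding a reference into the direct buffer after it returns to the pool observes whatever the next writer put there, which is the corruption pattern being chased here.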



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)
