[
https://issues.apache.org/jira/browse/HBASE-2315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12844873#action_12844873
]
Flavio Paiva Junqueira commented on HBASE-2315:
-----------------------------------------------
Ryan: I don't have much to add to what Ben said in his comment. I just wanted
to mention that in the current patch, I have added a configuration property to
set the number of replicas for each write:
{noformat}
hbase.wal.bk.quorumsize
{noformat}
The default is 2.
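For reference, a property like this would be set in hbase-site.xml the same way as any other HBase option. A minimal sketch — the property name is the one from the patch, and the value shown is just the stated default made explicit:

```xml
<!-- hbase-site.xml: number of replicas (quorum size) for each
     BookKeeper WAL write. 2 is the default from the patch. -->
<property>
  <name>hbase.wal.bk.quorumsize</name>
  <value>2</value>
</property>
```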
Andrew: As we reduce the number of bytes in each write, the per-byte overhead
increases, so batching small writes into appends on the order of 1 kB would
make more efficient use of the BK client. Achieving 1M ops/s over 100 nodes
(or larger values if you will) depends on the length of the writes, the
replication factor, and the amount of bandwidth (both I/O and network) you
have available. In our observations, a single client can produce more than
10k appends/s of 1-4 kB each, so in your example it is just a matter of
provisioning your system appropriately with respect to disk drives and
network.
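To make the provisioning point concrete, here is a back-of-the-envelope calculation for the 1M ops/s over 100 nodes example. The 1 kB entry size and quorum of 2 are illustrative assumptions (the default from the patch), not measurements:

```python
# Rough sizing for the 1M-appends/s-over-100-nodes example.
# Assumptions: 1 kB per append, quorum size 2 (the patch default).
def required_node_throughput(total_ops, nodes, entry_bytes, quorum):
    """Return (appends/s per node, replicated MB/s per node)."""
    ops_per_node = total_ops / nodes
    # Each append is written quorum-many times across bookies.
    mb_per_node = ops_per_node * entry_bytes * quorum / (1024 * 1024)
    return ops_per_node, mb_per_node

ops, mb = required_node_throughput(1_000_000, 100, 1024, 2)
# 10,000 appends/s per node, well within the 10k+ appends/s observed
# for a single client, and about 19.5 MB/s of replicated log traffic.
```

Under these assumptions the per-node append rate is exactly the 10k appends/s figure quoted above; the disk and network budget scales linearly with entry size and quorum size.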
> BookKeeper for write-ahead logging
> ----------------------------------
>
> Key: HBASE-2315
> URL: https://issues.apache.org/jira/browse/HBASE-2315
> Project: Hadoop HBase
> Issue Type: New Feature
> Components: regionserver
> Reporter: Flavio Paiva Junqueira
> Attachments: HBASE-2315.patch, zookeeper-dev-bookkeeper.jar
>
>
> BookKeeper, a contrib of the ZooKeeper project, is a fault tolerant and high
> throughput write-ahead logging service. This issue provides an implementation
> of write-ahead logging for hbase using BookKeeper. Apart from expected
> throughput improvements, BookKeeper also has stronger durability guarantees
> compared to the implementation currently used by hbase.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.