[jira] [Updated] (HBASE-5699) Run with > 1 WAL in HRegionServer
[ https://issues.apache.org/jira/browse/HBASE-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-5699: -

Release Note:
HBase's write-ahead log (WAL) can now be configured to use multiple HDFS pipelines in parallel, providing better write throughput for clusters that can use additional disks. By default, HBase will still use only a single HDFS-based WAL. To run with multiple WALs, set the hbase-site.xml property "hbase.wal.provider" to the value "multiwal". To return to having HBase determine which WAL implementation to use, either remove the property altogether or set it to "defaultProvider". Altering the WAL provider used by a particular RegionServer requires restarting that instance. RegionServers using the original WAL implementation and those using the "multiwal" implementation can each handle recovery of either set of WALs, so a zero-downtime configuration update is possible through a rolling restart.

This issue introduces the following configurations:
* hbase.wal.regiongrouping.numgroups: how many provider instances the 'multiwal' should create. Default is two.
* hbase.wal.regiongrouping.strategy: the strategy used to decide which provider instance an edit should go to. Default is 'identity', currently the only built-in option.
* hbase.wal.regiongrouping.delegate: the type of provider the multiwal creates. Default is the default from WALFactory.

was:
HBase's write-ahead log (WAL) can now be configured to use multiple HDFS pipelines in parallel, providing better write throughput for clusters that can use additional disks. By default, HBase will still use only a single HDFS-based WAL. To run with multiple WALs, set the hbase-site.xml property "hbase.wal.provider" to the value "multiwal". To return to having HBase determine which WAL implementation to use, either remove the property altogether or set it to "defaultProvider".
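The provider switch described in the release note can be sketched as an hbase-site.xml fragment; the property names come from the note itself, and the non-provider values shown are just its stated defaults, so they only need to appear if you want to change them:

```xml
<!-- hbase-site.xml sketch: enable the multiwal provider on this RegionServer. -->
<property>
  <name>hbase.wal.provider</name>
  <!-- remove this property or set it to "defaultProvider" to revert -->
  <value>multiwal</value>
</property>
<property>
  <!-- how many provider instances the multiwal should create (default: 2) -->
  <name>hbase.wal.regiongrouping.numgroups</name>
  <value>2</value>
</property>
<property>
  <!-- strategy for picking which provider instance an edit goes to
       (default: identity, currently the only built-in option) -->
  <name>hbase.wal.regiongrouping.strategy</name>
  <value>identity</value>
</property>
```

A rolling restart of the RegionServers picks up the change, since either WAL implementation can recover the other's logs as noted above.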
Altering the WAL provider used by a particular RegionServer requires restarting that instance. RegionServers using the original WAL implementation and those using the "multiwal" implementation can each handle recovery of either set of WALs, so a zero-downtime configuration update is possible through a rolling restart. > Run with > 1 WAL in HRegionServer > - > > Key: HBASE-5699 > URL: https://issues.apache.org/jira/browse/HBASE-5699 > Project: HBase > Issue Type: Improvement > Components: Performance, wal >Reporter: binlijin >Assignee: Sean Busbey >Priority: Critical > Fix For: 1.0.0, 1.1.0 > > Attachments: HBASE-5699.3.patch.txt, HBASE-5699.4.patch.txt, > HBASE-5699_#workers_vs_MiB_per_s_1x1col_512Bval_wal_count_1,2,4.tiff, > HBASE-5699_disabled_and_regular_#workers_vs_MiB_per_s_1x1col_512Bval_wal_count_1,2,4.tiff, > HBASE-5699_write_iops_multiwal-1_1_to_200_threads.tiff, > HBASE-5699_write_iops_multiwal-2_10,50,120,190,260,330,400_threads.tiff, > HBASE-5699_write_iops_multiwal-4_10,50,120,190,260,330,400_threads.tiff, > HBASE-5699_write_iops_multiwal-6_10,50,120,190,260,330,400_threads.tiff, > HBASE-5699_write_iops_upstream_1_to_200_threads.tiff, PerfHbase.txt, > hbase-5699_multiwal_400-threads_stats_sync_heavy.txt, > hbase-5699_total_throughput_sync_heavy.txt, > results-hbase5699-upstream.txt.bz2, results-hbase5699-wals-1.txt.bz2, > results-updated-hbase5699-wals-2.txt.bz2, > results-updated-hbase5699-wals-4.txt.bz2, > results-updated-hbase5699-wals-6.txt.bz2 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-5699) Run with > 1 WAL in HRegionServer
[ https://issues.apache.org/jira/browse/HBASE-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-5699: - Fix Version/s: 1.0.0 Run with 1 WAL in HRegionServer - Key: HBASE-5699 URL: https://issues.apache.org/jira/browse/HBASE-5699 Project: HBase Issue Type: Improvement Components: Performance, wal Reporter: binlijin Assignee: Sean Busbey Priority: Critical Fix For: 1.0.0, 2.0.0, 1.1.0 Attachments: HBASE-5699.3.patch.txt, HBASE-5699.4.patch.txt, HBASE-5699_#workers_vs_MiB_per_s_1x1col_512Bval_wal_count_1,2,4.tiff, HBASE-5699_disabled_and_regular_#workers_vs_MiB_per_s_1x1col_512Bval_wal_count_1,2,4.tiff, HBASE-5699_write_iops_multiwal-1_1_to_200_threads.tiff, HBASE-5699_write_iops_multiwal-2_10,50,120,190,260,330,400_threads.tiff, HBASE-5699_write_iops_multiwal-4_10,50,120,190,260,330,400_threads.tiff, HBASE-5699_write_iops_multiwal-6_10,50,120,190,260,330,400_threads.tiff, HBASE-5699_write_iops_upstream_1_to_200_threads.tiff, PerfHbase.txt, hbase-5699_multiwal_400-threads_stats_sync_heavy.txt, hbase-5699_total_throughput_sync_heavy.txt, results-hbase5699-upstream.txt.bz2, results-hbase5699-wals-1.txt.bz2, results-updated-hbase5699-wals-2.txt.bz2, results-updated-hbase5699-wals-4.txt.bz2, results-updated-hbase5699-wals-6.txt.bz2 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-5699) Run with > 1 WAL in HRegionServer
[ https://issues.apache.org/jira/browse/HBASE-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HBASE-5699: ---

Release Note:
HBase's write-ahead log (WAL) can now be configured to use multiple HDFS pipelines in parallel, providing better write throughput for clusters that can use additional disks. By default, HBase will still use only a single HDFS-based WAL. To run with multiple WALs, set the hbase-site.xml property hbase.wal.provider to the value multiwal. To return to having HBase determine which WAL implementation to use, either remove the property altogether or set it to defaultProvider. Altering the WAL provider used by a particular RegionServer requires restarting that instance. RegionServers using the original WAL implementation and those using the multiwal implementation can each handle recovery of either set of WALs, so a zero-downtime configuration update is possible through a rolling restart.

Here's a proposed release note. Let me know if anyone thinks there's a big gap.
Run with 1 WAL in HRegionServer - Key: HBASE-5699 URL: https://issues.apache.org/jira/browse/HBASE-5699 Project: HBase Issue Type: Improvement Components: Performance, wal Reporter: binlijin Assignee: Sean Busbey Priority: Critical Fix For: 2.0.0, 1.1.0 Attachments: HBASE-5699.3.patch.txt, HBASE-5699.4.patch.txt, HBASE-5699_#workers_vs_MiB_per_s_1x1col_512Bval_wal_count_1,2,4.tiff, HBASE-5699_disabled_and_regular_#workers_vs_MiB_per_s_1x1col_512Bval_wal_count_1,2,4.tiff, HBASE-5699_write_iops_multiwal-1_1_to_200_threads.tiff, HBASE-5699_write_iops_multiwal-2_10,50,120,190,260,330,400_threads.tiff, HBASE-5699_write_iops_multiwal-4_10,50,120,190,260,330,400_threads.tiff, HBASE-5699_write_iops_multiwal-6_10,50,120,190,260,330,400_threads.tiff, HBASE-5699_write_iops_upstream_1_to_200_threads.tiff, PerfHbase.txt, hbase-5699_multiwal_400-threads_stats_sync_heavy.txt, hbase-5699_total_throughput_sync_heavy.txt, results-hbase5699-upstream.txt.bz2, results-hbase5699-wals-1.txt.bz2, results-updated-hbase5699-wals-2.txt.bz2, results-updated-hbase5699-wals-4.txt.bz2, results-updated-hbase5699-wals-6.txt.bz2 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-5699) Run with > 1 WAL in HRegionServer
[ https://issues.apache.org/jira/browse/HBASE-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HBASE-5699: --- Resolution: Fixed Fix Version/s: 1.1.0 2.0.0 Status: Resolved (was: Patch Available) Pushed to branch-1 and master. If it looks like there'll be a RC1 for 1.0 we can revisit pushing to branch-1.0. Run with 1 WAL in HRegionServer - Key: HBASE-5699 URL: https://issues.apache.org/jira/browse/HBASE-5699 Project: HBase Issue Type: Improvement Components: Performance, wal Reporter: binlijin Assignee: Sean Busbey Priority: Critical Fix For: 2.0.0, 1.1.0 Attachments: HBASE-5699.3.patch.txt, HBASE-5699.4.patch.txt, HBASE-5699_#workers_vs_MiB_per_s_1x1col_512Bval_wal_count_1,2,4.tiff, HBASE-5699_disabled_and_regular_#workers_vs_MiB_per_s_1x1col_512Bval_wal_count_1,2,4.tiff, HBASE-5699_write_iops_multiwal-1_1_to_200_threads.tiff, HBASE-5699_write_iops_multiwal-2_10,50,120,190,260,330,400_threads.tiff, HBASE-5699_write_iops_multiwal-4_10,50,120,190,260,330,400_threads.tiff, HBASE-5699_write_iops_multiwal-6_10,50,120,190,260,330,400_threads.tiff, HBASE-5699_write_iops_upstream_1_to_200_threads.tiff, PerfHbase.txt, hbase-5699_multiwal_400-threads_stats_sync_heavy.txt, hbase-5699_total_throughput_sync_heavy.txt, results-hbase5699-upstream.txt.bz2, results-hbase5699-wals-1.txt.bz2, results-updated-hbase5699-wals-2.txt.bz2, results-updated-hbase5699-wals-4.txt.bz2, results-updated-hbase5699-wals-6.txt.bz2 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-5699) Run with > 1 WAL in HRegionServer
[ https://issues.apache.org/jira/browse/HBASE-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HBASE-5699: ---

Release Note:
HBase's write-ahead log (WAL) can now be configured to use multiple HDFS pipelines in parallel, providing better write throughput for clusters that can use additional disks. By default, HBase will still use only a single HDFS-based WAL. To run with multiple WALs, set the hbase-site.xml property hbase.wal.provider to the value multiwal. To return to having HBase determine which WAL implementation to use, either remove the property altogether or set it to defaultProvider. Altering the WAL provider used by a particular RegionServer requires restarting that instance. RegionServers using the original WAL implementation and those using the multiwal implementation can each handle recovery of either set of WALs, so a zero-downtime configuration update is possible through a rolling restart.

was:
HBase's write-ahead log (WAL) can now be configured to use multiple HDFS pipelines in parallel, providing better write throughput for clusters that can use additional disks. By default, HBase will still use only a single HDFS-based WAL. To run with multiple WALs, set the hbase-site.xml property hbase.wal.provider to the value multiwal. To return to having HBase determine which WAL implementation to use, either remove the property altogether or set it to defaultProvider. Altering the WAL provider used by a particular RegionServer requires restarting that instance. RegionServers using the original WAL implementation and those using the multiwal implementation can each handle recovery of either set of WALs, so a zero-downtime configuration update is possible through a rolling restart.
Run with 1 WAL in HRegionServer - Key: HBASE-5699 URL: https://issues.apache.org/jira/browse/HBASE-5699 Project: HBase Issue Type: Improvement Components: Performance, wal Reporter: binlijin Assignee: Sean Busbey Priority: Critical Fix For: 2.0.0, 1.1.0 Attachments: HBASE-5699.3.patch.txt, HBASE-5699.4.patch.txt, HBASE-5699_#workers_vs_MiB_per_s_1x1col_512Bval_wal_count_1,2,4.tiff, HBASE-5699_disabled_and_regular_#workers_vs_MiB_per_s_1x1col_512Bval_wal_count_1,2,4.tiff, HBASE-5699_write_iops_multiwal-1_1_to_200_threads.tiff, HBASE-5699_write_iops_multiwal-2_10,50,120,190,260,330,400_threads.tiff, HBASE-5699_write_iops_multiwal-4_10,50,120,190,260,330,400_threads.tiff, HBASE-5699_write_iops_multiwal-6_10,50,120,190,260,330,400_threads.tiff, HBASE-5699_write_iops_upstream_1_to_200_threads.tiff, PerfHbase.txt, hbase-5699_multiwal_400-threads_stats_sync_heavy.txt, hbase-5699_total_throughput_sync_heavy.txt, results-hbase5699-upstream.txt.bz2, results-hbase5699-wals-1.txt.bz2, results-updated-hbase5699-wals-2.txt.bz2, results-updated-hbase5699-wals-4.txt.bz2, results-updated-hbase5699-wals-6.txt.bz2 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-5699) Run with > 1 WAL in HRegionServer
[ https://issues.apache.org/jira/browse/HBASE-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HBASE-5699: --- Attachment: HBASE-5699_disabled_and_regular_#workers_vs_MiB_per_s_1x1col_512Bval_wal_count_1,2,4.tiff

Attaching a plot that includes running the same tests with the delegate wals as DisabledWALProvider, named HBASE-5699_disabled_and_regular_#workers_vs_MiB_per_s_1x1col_512Bval_wal_count_1,2,4. This should show the limit from context switching and such in the test itself. The DisabledWALProvider doesn't include any of the overhead from the ringbuffer or sync grouping. There are very few data points for the new test cases, so I didn't include any stddev bars; I just used the average for the whole run. All of them were so short that they probably didn't have time to get into steady state. (The disabled wals are at the top; the previous runs with datanode writes are at the bottom.)

Run with > 1 WAL in HRegionServer - Key: HBASE-5699 URL: https://issues.apache.org/jira/browse/HBASE-5699 Project: HBase Issue Type: Improvement Components: Performance, wal Reporter: binlijin Assignee: Sean Busbey Priority: Critical Attachments: HBASE-5699.3.patch.txt, HBASE-5699.4.patch.txt, HBASE-5699_#workers_vs_MiB_per_s_1x1col_512Bval_wal_count_1,2,4.tiff, HBASE-5699_disabled_and_regular_#workers_vs_MiB_per_s_1x1col_512Bval_wal_count_1,2,4.tiff, HBASE-5699_write_iops_multiwal-1_1_to_200_threads.tiff, HBASE-5699_write_iops_multiwal-2_10,50,120,190,260,330,400_threads.tiff, HBASE-5699_write_iops_multiwal-4_10,50,120,190,260,330,400_threads.tiff, HBASE-5699_write_iops_multiwal-6_10,50,120,190,260,330,400_threads.tiff, HBASE-5699_write_iops_upstream_1_to_200_threads.tiff, PerfHbase.txt, hbase-5699_multiwal_400-threads_stats_sync_heavy.txt, hbase-5699_total_throughput_sync_heavy.txt, results-hbase5699-upstream.txt.bz2, results-hbase5699-wals-1.txt.bz2, results-updated-hbase5699-wals-2.txt.bz2, results-updated-hbase5699-wals-4.txt.bz2,
results-updated-hbase5699-wals-6.txt.bz2 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-5699) Run with > 1 WAL in HRegionServer
[ https://issues.apache.org/jira/browse/HBASE-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HBASE-5699: --- Attachment: HBASE-5699.4.patch.txt Run with 1 WAL in HRegionServer - Key: HBASE-5699 URL: https://issues.apache.org/jira/browse/HBASE-5699 Project: HBase Issue Type: Improvement Components: Performance, wal Reporter: binlijin Assignee: Sean Busbey Priority: Critical Attachments: HBASE-5699.3.patch.txt, HBASE-5699.4.patch.txt, HBASE-5699_write_iops_multiwal-1_1_to_200_threads.tiff, HBASE-5699_write_iops_multiwal-2_10,50,120,190,260,330,400_threads.tiff, HBASE-5699_write_iops_multiwal-4_10,50,120,190,260,330,400_threads.tiff, HBASE-5699_write_iops_multiwal-6_10,50,120,190,260,330,400_threads.tiff, HBASE-5699_write_iops_upstream_1_to_200_threads.tiff, PerfHbase.txt, hbase-5699_multiwal_400-threads_stats_sync_heavy.txt, hbase-5699_total_throughput_sync_heavy.txt, results-hbase5699-upstream.txt.bz2, results-hbase5699-wals-1.txt.bz2, results-updated-hbase5699-wals-2.txt.bz2, results-updated-hbase5699-wals-4.txt.bz2, results-updated-hbase5699-wals-6.txt.bz2 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-5699) Run with > 1 WAL in HRegionServer
[ https://issues.apache.org/jira/browse/HBASE-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HBASE-5699: --- Status: Patch Available (was: Open) Run with 1 WAL in HRegionServer - Key: HBASE-5699 URL: https://issues.apache.org/jira/browse/HBASE-5699 Project: HBase Issue Type: Improvement Components: Performance, wal Reporter: binlijin Assignee: Sean Busbey Priority: Critical Attachments: HBASE-5699.3.patch.txt, HBASE-5699.4.patch.txt, HBASE-5699_write_iops_multiwal-1_1_to_200_threads.tiff, HBASE-5699_write_iops_multiwal-2_10,50,120,190,260,330,400_threads.tiff, HBASE-5699_write_iops_multiwal-4_10,50,120,190,260,330,400_threads.tiff, HBASE-5699_write_iops_multiwal-6_10,50,120,190,260,330,400_threads.tiff, HBASE-5699_write_iops_upstream_1_to_200_threads.tiff, PerfHbase.txt, hbase-5699_multiwal_400-threads_stats_sync_heavy.txt, hbase-5699_total_throughput_sync_heavy.txt, results-hbase5699-upstream.txt.bz2, results-hbase5699-wals-1.txt.bz2, results-updated-hbase5699-wals-2.txt.bz2, results-updated-hbase5699-wals-4.txt.bz2, results-updated-hbase5699-wals-6.txt.bz2 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-5699) Run with > 1 WAL in HRegionServer
[ https://issues.apache.org/jira/browse/HBASE-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HBASE-5699: --- Attachment: HBASE-5699_#workers_vs_MiB_per_s_1x1col_512Bval_wal_count_1,2,4.tiff

Here's a chart with # workers on the x-axis and MiB/s on the y-axis, named HBASE-5699_#workers_vs_MiB_per_s_1x1col_512Bval_wal_count_1,2,4.tiff. The MiB/s is the rate for a 30-second sample. The dark blue, green, and red lines are for a single wal, 2 wals, and 4 wals respectively. Each one has a lighter shaded line above and below marking 1 std dev in each direction. This is with the default block size of 128MiB for wals and a single cf:cq combo with a 512B value in each edit. I've got numbers for some other block sizes finishing up this morning; should have more charts tomorrow.

Run with > 1 WAL in HRegionServer - Key: HBASE-5699 URL: https://issues.apache.org/jira/browse/HBASE-5699 Project: HBase Issue Type: Improvement Components: Performance, wal Reporter: binlijin Assignee: Sean Busbey Priority: Critical Attachments: HBASE-5699.3.patch.txt, HBASE-5699.4.patch.txt, HBASE-5699_#workers_vs_MiB_per_s_1x1col_512Bval_wal_count_1,2,4.tiff, HBASE-5699_write_iops_multiwal-1_1_to_200_threads.tiff, HBASE-5699_write_iops_multiwal-2_10,50,120,190,260,330,400_threads.tiff, HBASE-5699_write_iops_multiwal-4_10,50,120,190,260,330,400_threads.tiff, HBASE-5699_write_iops_multiwal-6_10,50,120,190,260,330,400_threads.tiff, HBASE-5699_write_iops_upstream_1_to_200_threads.tiff, PerfHbase.txt, hbase-5699_multiwal_400-threads_stats_sync_heavy.txt, hbase-5699_total_throughput_sync_heavy.txt, results-hbase5699-upstream.txt.bz2, results-hbase5699-wals-1.txt.bz2, results-updated-hbase5699-wals-2.txt.bz2, results-updated-hbase5699-wals-4.txt.bz2, results-updated-hbase5699-wals-6.txt.bz2 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-5699) Run with > 1 WAL in HRegionServer
[ https://issues.apache.org/jira/browse/HBASE-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HBASE-5699: --- Status: Open (was: Patch Available) Run with 1 WAL in HRegionServer - Key: HBASE-5699 URL: https://issues.apache.org/jira/browse/HBASE-5699 Project: HBase Issue Type: Improvement Components: Performance, wal Reporter: binlijin Assignee: Sean Busbey Priority: Critical Attachments: HBASE-5699.3.patch.txt, PerfHbase.txt -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-5699) Run with > 1 WAL in HRegionServer
[ https://issues.apache.org/jira/browse/HBASE-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HBASE-5699: --- Attachment: HBASE-5699.3.patch.txt adding current RB version for QA run. Run with 1 WAL in HRegionServer - Key: HBASE-5699 URL: https://issues.apache.org/jira/browse/HBASE-5699 Project: HBase Issue Type: Improvement Components: Performance, wal Reporter: binlijin Assignee: Sean Busbey Priority: Critical Attachments: HBASE-5699.3.patch.txt, PerfHbase.txt -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-5699) Run with > 1 WAL in HRegionServer
[ https://issues.apache.org/jira/browse/HBASE-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HBASE-5699: --- Attachment: HBASE-5699_write_iops_multiwal-6_10,50,120,190,260,330,400_threads.tiff HBASE-5699_write_iops_multiwal-4_10,50,120,190,260,330,400_threads.tiff HBASE-5699_write_iops_multiwal-2_10,50,120,190,260,330,400_threads.tiff HBASE-5699_write_iops_multiwal-1_1_to_200_threads.tiff HBASE-5699_write_iops_upstream_1_to_200_threads.tiff results-hbase5699-upstream.txt.bz2 results-hbase5699-wals-1.txt.bz2 results-updated-hbase5699-wals-2.txt.bz2 results-updated-hbase5699-wals-4.txt.bz2 results-updated-hbase5699-wals-6.txt.bz2 hbase-5699_total_throughput_sync_heavy.txt hbase-5699_multiwal_400-threads_stats_sync_heavy.txt

Attaching results from some initial testing using WALPerformanceEvaluation with a sync-heavy workload. If anyone has other measurements they'd like to see, or a different workload expressed (perhaps to look at limits for pushing bytes given larger edits with fewer syncs), please let me know.

h5. Overview

The command used was

{code}
bin/hbase org.apache.hadoop.hbase.wal.WALPerformanceEvaluation -threads ${threads} -regions $(((threads+1)/2)) -roll 10 -iterations 100 -verify
{code}

with the number of threads varied and the number of regions at ceil(threads/2). The default is sync-per-write. The test rig is a physical cluster with 4 data nodes and a total of 20 data disks (5 per node). The test client was run on a separate, non-loaded host. HDFS 2.5.0-cdh5.2.0.

* The {{.bz2}} files are the complete logs from the described group of runs.
* The upstream and wals-1 data is from prior to HBASE-12655, so run metrics other than the final benchmark results aren't comparable to later runs.
* There's an image showing total write iops across the cluster for each of the sets of test runs.
* hbase-5699_total_throughput_sync_heavy.txt has the final benchmark log from each of the runs, so you can quickly look across successive runs.
* hbase-5699_multiwal_400-threads_stats_sync_heavy.txt has the run metrics from just the final test of the multiwal options.

h5. upstream vs multiwal-1

If you look at the two charts HBASE-5699_write_iops_upstream_1_to_200_threads.tiff and HBASE-5699_write_iops_multiwal-1_1_to_200_threads.tiff, they behave roughly the same modulo noise. (The only difference between the two happens during region set up, which shouldn't be reflected here.) They also level off in their ability to push more through the pipeline at near the iops limit for the 3 disks in the single pipeline.

h5. increasing number of pipelines

If you look at each of the HBASE-5699_write_iops_multiwal-X_10,50,120,190,260,330,400_threads.tiff charts, as we ramp up the number of writers we manage to push more overall activity through the cluster. It's not a linear gain, because splitting out the pipelines means we do more syncs overall, since fewer of them get obviated by our sync grouping. In this test, expanding from 2 to 4 or 6 pipelines didn't provide much benefit, because even at up to 400 concurrent sync-heavy writers we are only just reaching the maximum number of iops that can be done with 2 pipelines.
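The sweep behind those charts can be sketched as a dry run that just prints each invocation; the thread counts are the ones in the attachment names, and the regions arithmetic reproduces the ceil(threads/2) rule from the quoted command. This is a sketch, not the actual harness used for the runs:

```shell
#!/usr/bin/env sh
# Dry-run sketch of the benchmark sweep: print each WALPerformanceEvaluation
# invocation rather than executing it, so the region math is visible without
# a cluster. Thread counts match the multiwal-2/4/6 attachment names.
for threads in 10 50 120 190 260 330 400; do
  regions=$(( (threads + 1) / 2 ))  # integer form of ceil(threads/2)
  echo "bin/hbase org.apache.hadoop.hbase.wal.WALPerformanceEvaluation" \
       "-threads ${threads} -regions ${regions} -roll 10 -iterations 100 -verify"
done
```

To run it for real, drop the echo and execute it from an HBase install directory against the cluster under test.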
Run with 1 WAL in HRegionServer - Key: HBASE-5699 URL: https://issues.apache.org/jira/browse/HBASE-5699 Project: HBase Issue Type: Improvement Components: Performance, wal Reporter: binlijin Assignee: Sean Busbey Priority: Critical Attachments: HBASE-5699.3.patch.txt, HBASE-5699_write_iops_multiwal-1_1_to_200_threads.tiff, HBASE-5699_write_iops_multiwal-2_10,50,120,190,260,330,400_threads.tiff, HBASE-5699_write_iops_multiwal-4_10,50,120,190,260,330,400_threads.tiff, HBASE-5699_write_iops_multiwal-6_10,50,120,190,260,330,400_threads.tiff, HBASE-5699_write_iops_upstream_1_to_200_threads.tiff, PerfHbase.txt, hbase-5699_multiwal_400-threads_stats_sync_heavy.txt, hbase-5699_total_throughput_sync_heavy.txt, results-hbase5699-upstream.txt.bz2, results-hbase5699-wals-1.txt.bz2, results-updated-hbase5699-wals-2.txt.bz2, results-updated-hbase5699-wals-4.txt.bz2, results-updated-hbase5699-wals-6.txt.bz2 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-5699) Run with > 1 WAL in HRegionServer
[ https://issues.apache.org/jira/browse/HBASE-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HBASE-5699: --- Status: Patch Available (was: Open) since the set of links is getting shortened here's an extra link to the [reviewboard (depends on the changes from HBASE-10378)|https://reviews.apache.org/r/28055/] Run with 1 WAL in HRegionServer - Key: HBASE-5699 URL: https://issues.apache.org/jira/browse/HBASE-5699 Project: HBase Issue Type: Improvement Components: Performance, wal Reporter: binlijin Assignee: Sean Busbey Priority: Critical Attachments: PerfHbase.txt -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-5699) Run with > 1 WAL in HRegionServer
[ https://issues.apache.org/jira/browse/HBASE-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anoop Sam John updated HBASE-5699: -- Component/s: wal Run with 1 WAL in HRegionServer - Key: HBASE-5699 URL: https://issues.apache.org/jira/browse/HBASE-5699 Project: HBase Issue Type: Improvement Components: Performance, wal Reporter: binlijin Assignee: Li Pi Priority: Critical Attachments: PerfHbase.txt -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-5699) Run with > 1 WAL in HRegionServer
[ https://issues.apache.org/jira/browse/HBASE-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-5699: - Component/s: Performance Priority: Critical (was: Major) Run with 1 WAL in HRegionServer - Key: HBASE-5699 URL: https://issues.apache.org/jira/browse/HBASE-5699 Project: HBase Issue Type: Improvement Components: Performance Reporter: binlijin Assignee: Li Pi Priority: Critical Attachments: PerfHbase.txt -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-5699) Run with > 1 WAL in HRegionServer
[ https://issues.apache.org/jira/browse/HBASE-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan updated HBASE-5699: -- Attachment: PerfHbase.txt Perf results. @Ted The file attached also has the latency results. Run using LoadTestTool. Sorry for being little late. Patch will upload later. Run with 1 WAL in HRegionServer - Key: HBASE-5699 URL: https://issues.apache.org/jira/browse/HBASE-5699 Project: HBase Issue Type: Improvement Reporter: binlijin Assignee: Li Pi Attachments: PerfHbase.txt -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-5699) Run with > 1 WAL in HRegionServer
[ https://issues.apache.org/jira/browse/HBASE-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhihong Yu updated HBASE-5699: -- Comment: was deleted (was: This seems interesting. I'll take a look at doing this.) Run with 1 WAL in HRegionServer - Key: HBASE-5699 URL: https://issues.apache.org/jira/browse/HBASE-5699 Project: HBase Issue Type: Improvement Reporter: binlijin Assignee: Li Pi -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-5699) Run with > 1 WAL in HRegionServer
[ https://issues.apache.org/jira/browse/HBASE-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-5699: - Summary: Run with > 1 WAL in HRegionServer (was: Should we use muti HLog or Writer in HLog in a HRegionServer)

Yes. This topic comes up from time to time. It would be nice to try it out. It is possible to stand up the WAL subsystem on its own, so you could experiment with having HLog output to > 1 WAL. A bunch of us would be interested in what you learn.

Run with > 1 WAL in HRegionServer - Key: HBASE-5699 URL: https://issues.apache.org/jira/browse/HBASE-5699 Project: HBase Issue Type: Improvement Reporter: binlijin -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira