[jira] [Created] (HBASE-12932) Change the interface annotation of HConnection in 0.98 from Public/Stable to Public/Evolving
Andrew Purtell created HBASE-12932: -- Summary: Change the interface annotation of HConnection in 0.98 from Public/Stable to Public/Evolving Key: HBASE-12932 URL: https://issues.apache.org/jira/browse/HBASE-12932 Project: HBase Issue Type: Task Reporter: Andrew Purtell Assignee: Andrew Purtell See the tail of HBASE-12859. Lars wants to add methods to HConnection. I suggest not. Enis then says: {quote} We do not differentiate or explicitly document this, but my understanding of most of our InterfaceAudience.Public is for consumption, not for extending or implementing interfaces (except for some coprocessor cases). So I think we should be free to add new methods in Admin, Connection, etc. in minor versions. I would say that in patch versions we should not do such changes. Maybe we can provide base classes as a convenience, but still with no guarantees. {quote} to which I reply: {quote} Shrug. So the Public/Stable annotation just means "keep this interface around, don't worry about changes that will break an implementor"? Then what would Public/Evolving (or Unstable) mean as the difference? Do we have what the annotations mean documented somewhere? This point didn't come up in a review when I backported client pushback, so now we have StatisticsHConnection. If it's ok to add methods to HConnection I would like to sink the 0.98.10 RC that has this change and fix it with a new patch/issue. {quote} I will open a subtask of this issue to nuke StatisticsHConnection. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-12933) [0.98] Fold StatisticsHConnection into HConnection
Andrew Purtell created HBASE-12933: -- Summary: [0.98] Fold StatisticsHConnection into HConnection Key: HBASE-12933 URL: https://issues.apache.org/jira/browse/HBASE-12933 Project: HBase Issue Type: Sub-task Reporter: Andrew Purtell Assignee: Andrew Purtell See parent issue. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-12935) Does any one consider the performance of HBase on SSD?
[ https://issues.apache.org/jira/browse/HBASE-12935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-12935. Resolution: Duplicate Resolving this as a duplicate of HBASE-6572 and its subtasks > Does any one consider the performance of HBase on SSD? > --- > > Key: HBASE-12935 > URL: https://issues.apache.org/jira/browse/HBASE-12935 > Project: HBase > Issue Type: Improvement >Reporter: Liang Lee > > Some features of HBase don't match the characteristics of SSDs. For example, > compaction is harmful to SSD life span. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-12938) Upgrade HTrace to a recent supportable incubating version
Andrew Purtell created HBASE-12938: -- Summary: Upgrade HTrace to a recent supportable incubating version Key: HBASE-12938 URL: https://issues.apache.org/jira/browse/HBASE-12938 Project: HBase Issue Type: Bug Reporter: Andrew Purtell Fix For: 0.98.11 I filed this as a bug because 0.98 still has an old HTrace (using the org.cloudera.htrace package); since its introduction, HTrace first moved to org.htrace and then became an incubating project. The version we reference in 0.98 is of little to no use going forward. Unfortunately we must make a disruptive change, although it looks to be mostly fixing up imports: we expose no HTrace classes to HBase configuration, and where we extend HTrace classes in our code, those HBase classes are in hbase-server and not tagged for public consumption. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-12972) Region, a supportable public/evolving subset of HRegion
Andrew Purtell created HBASE-12972: -- Summary: Region, a supportable public/evolving subset of HRegion Key: HBASE-12972 URL: https://issues.apache.org/jira/browse/HBASE-12972 Project: HBase Issue Type: New Feature Reporter: Andrew Purtell On HBASE-12566, [~lhofhansl] proposed: {quote} Maybe we can have a {{Region}} interface that is to {{HRegion}} what {{Store}} is to {{HStore}}. {{Store}} is marked {{@InterfaceAudience.Private}} but used in some coprocessor hooks. {quote} For example, coprocessors currently have to reach into HRegion in order to participate in row and region locking protocols. This is one area where the functionality is legitimate for coprocessors but not for users, so an in-between interface makes sense. In addition we should promote {{Store}}'s interface audience to LimitedPrivate(COPROC). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
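A hedged sketch of the shape such an interface could take. All names here are illustrative stand-ins, not the eventual HBase API: a narrow `Region` interface exposes only what coprocessors legitimately need (row locking, in this example), while the concrete class keeps everything else off the interface.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical narrow interface a coprocessor hook would receive.
interface Region {
    // Row locking is one protocol coprocessors legitimately participate in.
    AutoCloseable getRowLock(byte[] row) throws InterruptedException;
}

// Stand-in for HRegion: implements the narrow interface, keeps the rest internal.
class FullRegion implements Region {
    private final ConcurrentHashMap<String, ReentrantLock> rowLocks = new ConcurrentHashMap<>();

    @Override
    public AutoCloseable getRowLock(byte[] row) throws InterruptedException {
        ReentrantLock lock = rowLocks.computeIfAbsent(new String(row), k -> new ReentrantLock());
        lock.lockInterruptibly();
        return lock::unlock; // caller releases via try-with-resources
    }

    // Internal machinery that stays off the Region interface.
    void internalFlush() { /* not visible through Region */ }
}
```

A coprocessor hook typed against `Region` can then take a row lock without seeing `internalFlush()`-style internals, which is the point of the in-between interface.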
[jira] [Created] (HBASE-12973) RegionCoprocessorEnvironment should provide HRegionInfo directly
Andrew Purtell created HBASE-12973: -- Summary: RegionCoprocessorEnvironment should provide HRegionInfo directly Key: HBASE-12973 URL: https://issues.apache.org/jira/browse/HBASE-12973 Project: HBase Issue Type: Improvement Reporter: Andrew Purtell Priority: Minor A coprocessor must go through RegionCoprocessorEnvironment#getRegion in order to retrieve the HRegionInfo for its associated region. It should be possible to get HRegionInfo directly from RegionCoprocessorEnvironment. (Or Region, see HBASE-12972) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
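The convenience being asked for can be sketched with stubs (these are illustrative classes, not the real HBase types): today's path reaches through the region object, the proposal hands the info back directly.

```java
// Illustrative stand-ins only; the real HBase classes differ.
class RegionInfoStub {
    private final String regionName;
    RegionInfoStub(String regionName) { this.regionName = regionName; }
    String getRegionNameAsString() { return regionName; }
}

class RegionStub {
    private final RegionInfoStub info;
    RegionStub(RegionInfoStub info) { this.info = info; }
    RegionInfoStub getRegionInfo() { return info; }
}

class CoprocessorEnvStub {
    private final RegionStub region;
    CoprocessorEnvStub(RegionStub region) { this.region = region; }

    // Today's path: reach through the region object.
    RegionStub getRegion() { return region; }

    // Proposed convenience: hand back the info directly.
    RegionInfoStub getRegionInfo() { return region.getRegionInfo(); }
}
```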
[jira] [Reopened] (HBASE-12961) Negative values in read and write region server metrics
[ https://issues.apache.org/jira/browse/HBASE-12961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell reopened HBASE-12961: > Negative values in read and write region server metrics > > > Key: HBASE-12961 > URL: https://issues.apache.org/jira/browse/HBASE-12961 > Project: HBase > Issue Type: Bug > Components: regionserver >Affects Versions: 2.0.0 >Reporter: Victoria >Assignee: Victoria >Priority: Minor > Fix For: 2.0.0, 1.1.0, 0.98.11 > > Attachments: HBASE-12961-2.0.0-v1.patch, HBASE-12961-v1.patch > > > The HMaster web UI shows the read/write requests per region server. They are > currently displayed using 32-bit integers, so if the servers are up > for a long time the values can wrap around and show as negative. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-12961) Negative values in read and write region server metrics
[ https://issues.apache.org/jira/browse/HBASE-12961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-12961. Resolution: Fixed Ok, let me track that down. > Negative values in read and write region server metrics > > > Key: HBASE-12961 > URL: https://issues.apache.org/jira/browse/HBASE-12961 > Project: HBase > Issue Type: Bug > Components: regionserver >Affects Versions: 2.0.0 >Reporter: Victoria >Assignee: Victoria >Priority: Minor > Fix For: 2.0.0, 1.1.0, 0.98.11 > > Attachments: HBASE-12961-2.0.0-v1.patch, HBASE-12961-v1.patch > > > The HMaster web UI shows the read/write requests per region server. They are > currently displayed using 32-bit integers, so if the servers are up > for a long time the values can wrap around and show as negative. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
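The overflow described in HBASE-12961 is easy to reproduce in isolation. The counters themselves are 64-bit; only the display truncates. A minimal illustration (not the actual UI code):

```java
// The bug in miniature: a 64-bit request counter rendered through a
// 32-bit int wraps negative once it exceeds Integer.MAX_VALUE (~2.1 billion).
class RequestCounterDemo {
    static int displayAsInt(long totalRequests) {
        return (int) totalRequests; // what a 32-bit display effectively does
    }

    static long displayAsLong(long totalRequests) {
        return totalRequests; // the fix: keep 64 bits end to end
    }

    public static void main(String[] args) {
        long lifetimeRequests = 3_000_000_000L; // plausible for a long-lived server
        System.out.println(displayAsInt(lifetimeRequests));  // negative, wrapped
        System.out.println(displayAsLong(lifetimeRequests)); // correct
    }
}
```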
[jira] [Resolved] (HBASE-12530) TestStatusResource can fail if run in parallel with other tests
[ https://issues.apache.org/jira/browse/HBASE-12530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-12530. Resolution: Cannot Reproduce Fix Version/s: (was: 0.98.11) (was: 2.0.0) Haven't seen this in a while > TestStatusResource can fail if run in parallel with other tests > --- > > Key: HBASE-12530 > URL: https://issues.apache.org/jira/browse/HBASE-12530 > Project: HBase > Issue Type: Bug > Components: test >Reporter: Andrew Purtell >Priority: Trivial > Labels: newbie > > TestStatusResource can fail if run in parallel with other tests, fix this. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-12977) normalize handlerCount to keep handlers distributed evenly among callQueues
[ https://issues.apache.org/jira/browse/HBASE-12977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-12977. Resolution: Not a Problem Assignee: (was: hongyu bi) Resolving as Not A Problem, please reopen if you disagree with the outcome > normalize handlerCount to keep handlers distributed evenly among callQueues > > > Key: HBASE-12977 > URL: https://issues.apache.org/jira/browse/HBASE-12977 > Project: HBase > Issue Type: Improvement >Reporter: hongyu bi >Priority: Minor > Attachments: HBASE-12977-v0.patch > > > If multiple callQueues are enabled, handlers may not be distributed evenly among > the queues, which means the queues' capacities are not the same. Should we make > the handler distribution even? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
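The unevenness the reporter describes is a simple consequence of round-robin assignment when the handler count is not a multiple of the queue count. A sketch (names and the "normalization" strategy are illustrative, not the HBase RPC scheduler code):

```java
// 10 handlers over 3 queues splits 4/3/3 -- uneven unless handlerCount
// is a multiple of numQueues.
class HandlerDistribution {
    static int[] distribute(int handlerCount, int numQueues) {
        int[] perQueue = new int[numQueues];
        for (int i = 0; i < handlerCount; i++) {
            perQueue[i % numQueues]++; // round-robin assignment
        }
        return perQueue;
    }

    // One possible "normalization": round handlerCount down to a multiple
    // of numQueues so every queue gets the same number of handlers.
    static int normalize(int handlerCount, int numQueues) {
        return (handlerCount / numQueues) * numQueues;
    }
}
```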
[jira] [Reopened] (HBASE-12914) Mark public features that require HFilev3 Unstable in 0.98, warn in upgrade section
[ https://issues.apache.org/jira/browse/HBASE-12914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell reopened HBASE-12914: Patch application broke both 0.98 builds with compile errors. Please check compile before commit, it doesn't take long. I will fix the problem this time. > Mark public features that require HFilev3 Unstable in 0.98, warn in upgrade > section > --- > > Key: HBASE-12914 > URL: https://issues.apache.org/jira/browse/HBASE-12914 > Project: HBase > Issue Type: Bug > Components: API, documentation >Affects Versions: 0.98.6, 0.98.7, 0.98.8, 0.98.9 >Reporter: Sean Busbey >Assignee: ramkrishna.s.vasudevan >Priority: Critical > Fix For: 0.98.11 > > Attachments: HBASE-12914-0.98.patch, HBASE-12914-branch-1.patch, > HBASE-12914.patch > > > There are several features in 0.98 that require enabling HFilev3 support. > Some of those features include new extendable components that are marked > IA.Public. > Current practice has been to treat these features as experimental. This has > included pushing non-compatible changes to branch-1 as the API got worked out > through use in 0.98. > * Update all of the IA.Public classes involved to make sure they are > IS.Unstable in 0.98. > * Update the ref guide section on upgrading from 0.98 -> 1.0 to make folks > aware of these changes. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Reopened] (HBASE-12958) SSH doing hbase:meta get but hbase:meta not assigned
[ https://issues.apache.org/jira/browse/HBASE-12958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell reopened HBASE-12958: Patch application broke both 0.98 builds with compile errors. Please check compile before commit, it doesn't take long. I will fix the problem this time. > SSH doing hbase:meta get but hbase:meta not assigned > > > Key: HBASE-12958 > URL: https://issues.apache.org/jira/browse/HBASE-12958 > Project: HBase > Issue Type: Bug >Affects Versions: 1.0.0 >Reporter: stack >Assignee: stack > Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11 > > Attachments: 12958.txt, 12958v2.txt, 12958v2.txt > > > All master threads are blocked waiting on this call to return: > {code} > "MASTER_SERVER_OPERATIONS-c2020:16020-2" #189 prio=5 os_prio=0 > tid=0x7f4b0408b000 nid=0x7821 in Object.wait() [0x7f4ada24d000] >java.lang.Thread.State: TIMED_WAITING (on object monitor) > at java.lang.Object.wait(Native Method) > at > org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:168) > - locked <0x00041c374f50> (a > java.util.concurrent.atomic.AtomicBoolean) > at org.apache.hadoop.hbase.client.HTable.get(HTable.java:881) > at > org.apache.hadoop.hbase.MetaTableAccessor.get(MetaTableAccessor.java:208) > at > org.apache.hadoop.hbase.MetaTableAccessor.getRegionLocation(MetaTableAccessor.java:250) > at > org.apache.hadoop.hbase.MetaTableAccessor.getRegion(MetaTableAccessor.java:225) > at > org.apache.hadoop.hbase.master.RegionStates.serverOffline(RegionStates.java:634) > - locked <0x00041c1f0d80> (a > org.apache.hadoop.hbase.master.RegionStates) > at > org.apache.hadoop.hbase.master.AssignmentManager.processServerShutdown(AssignmentManager.java:3298) > at > org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:226) > at > org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > {code} > Master is stuck trying to find hbase:meta on the server that just crashed and > that we just recovered: > Mon Feb 02 23:00:02 PST 2015, null, java.net.SocketTimeoutException: > callTimeout=6, callDuration=68181: row '' on table 'hbase:meta' at > region=hbase:meta,,1.1588230740, > hostname=c2022.halxg.cloudera.com,16020,1422944918568, seqNum=0 > Will add more detail in a sec. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-12914) Mark public features that require HFilev3 Unstable in 0.98, warn in upgrade section
[ https://issues.apache.org/jira/browse/HBASE-12914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-12914. Resolution: Fixed My bad, the Jenkins changelog led me astray; the issue is the HBASE-12958 commit. I am going to go make the above comment over there. > Mark public features that require HFilev3 Unstable in 0.98, warn in upgrade > section > --- > > Key: HBASE-12914 > URL: https://issues.apache.org/jira/browse/HBASE-12914 > Project: HBase > Issue Type: Bug > Components: API, documentation >Affects Versions: 0.98.6, 0.98.7, 0.98.8, 0.98.9 >Reporter: Sean Busbey >Assignee: ramkrishna.s.vasudevan >Priority: Critical > Fix For: 0.98.11 > > Attachments: HBASE-12914-0.98.patch, HBASE-12914-branch-1.patch, > HBASE-12914.patch > > > There are several features in 0.98 that require enabling HFilev3 support. > Some of those features include new extendable components that are marked > IA.Public. > Current practice has been to treat these features as experimental. This has > included pushing non-compatible changes to branch-1 as the API got worked out > through use in 0.98. > * Update all of the IA.Public classes involved to make sure they are > IS.Unstable in 0.98. > * Update the ref guide section on upgrading from 0.98 -> 1.0 to make folks > aware of these changes. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-12958) SSH doing hbase:meta get but hbase:meta not assigned
[ https://issues.apache.org/jira/browse/HBASE-12958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-12958. Resolution: Fixed Addendum pushed to 0.98 > SSH doing hbase:meta get but hbase:meta not assigned > > > Key: HBASE-12958 > URL: https://issues.apache.org/jira/browse/HBASE-12958 > Project: HBase > Issue Type: Bug >Affects Versions: 1.0.0 >Reporter: stack >Assignee: stack > Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11 > > Attachments: 12958.txt, 12958v2.txt, 12958v2.txt, > HBASE-12958-0.98-addendum.patch > > > All master threads are blocked waiting on this call to return: > {code} > "MASTER_SERVER_OPERATIONS-c2020:16020-2" #189 prio=5 os_prio=0 > tid=0x7f4b0408b000 nid=0x7821 in Object.wait() [0x7f4ada24d000] >java.lang.Thread.State: TIMED_WAITING (on object monitor) > at java.lang.Object.wait(Native Method) > at > org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:168) > - locked <0x00041c374f50> (a > java.util.concurrent.atomic.AtomicBoolean) > at org.apache.hadoop.hbase.client.HTable.get(HTable.java:881) > at > org.apache.hadoop.hbase.MetaTableAccessor.get(MetaTableAccessor.java:208) > at > org.apache.hadoop.hbase.MetaTableAccessor.getRegionLocation(MetaTableAccessor.java:250) > at > org.apache.hadoop.hbase.MetaTableAccessor.getRegion(MetaTableAccessor.java:225) > at > org.apache.hadoop.hbase.master.RegionStates.serverOffline(RegionStates.java:634) > - locked <0x00041c1f0d80> (a > org.apache.hadoop.hbase.master.RegionStates) > at > org.apache.hadoop.hbase.master.AssignmentManager.processServerShutdown(AssignmentManager.java:3298) > at > org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:226) > at > org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > {code} > Master is stuck trying to find hbase:meta on the server that just crashed and > that we just recovered: > Mon Feb 02 23:00:02 PST 2015, null, java.net.SocketTimeoutException: > callTimeout=6, callDuration=68181: row '' on table 'hbase:meta' at > region=hbase:meta,,1.1588230740, > hostname=c2022.halxg.cloudera.com,16020,1422944918568, seqNum=0 > Will add more detail in a sec. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-12979) Use setters instead of return values for handing back statistics from HRegion methods
Andrew Purtell created HBASE-12979: -- Summary: Use setters instead of return values for handing back statistics from HRegion methods Key: HBASE-12979 URL: https://issues.apache.org/jira/browse/HBASE-12979 Project: HBase Issue Type: Improvement Affects Versions: 0.98.10 Reporter: Andrew Purtell Assignee: Andrew Purtell In HBASE-5162 (and backports such as HBASE-12729) we modified some HRegion methods to return statistics for consumption by callers. The statistics are ultimately passed back to the client as load feedback. [~lhofhansl] thinks returning this information is a weird mix of concerns. This also produced a difficult-to-anticipate binary compatibility issue with Phoenix. There was no compile-time issue because the code of course was not structured to assign from a method returning void, yet the method signature changed, so the JVM cannot resolve it if older Phoenix binaries are installed into a 0.98.10 release. Let's change the HRegion methods back to returning 'void' and use setters instead. Officially we don't support use of HRegion (HBASE-12566) but we do not need to go out of our way to break things (smile), so I would also like to make a patch release containing just this change to help out our sister project. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
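The proposed setter style can be sketched as follows. Method and field names here are hypothetical: the statistics travel through a mutable holder argument, so the method descriptor stays `void` (the JVM descriptor includes the return type, which is why changing `void` to a return value breaks binary linkage for callers compiled against the old signature even though their source still compiles).

```java
// Hedged sketch of the setter style; not the actual HRegion API.
class RegionLoadStats {
    private int memstoreLoadPercent;
    void setMemstoreLoadPercent(int pct) { this.memstoreLoadPercent = pct; }
    int getMemstoreLoadPercent() { return memstoreLoadPercent; }
}

class RegionSketch {
    // Before: void batchMutate(...)
    // After HBASE-5162-style change: SomeStats batchMutate(...)
    //   -> new descriptor, so a caller compiled against the void version
    //      fails at link time with NoSuchMethodError.
    // Proposal: keep void and fill a caller-supplied holder instead.
    void batchMutate(RegionLoadStats statsOut /*, mutations... */) {
        // ... apply the mutations ...
        statsOut.setMemstoreLoadPercent(37); // illustrative value
    }
}
```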
[jira] [Created] (HBASE-12986) Compaction pressure based client pushback
Andrew Purtell created HBASE-12986: -- Summary: Compaction pressure based client pushback Key: HBASE-12986 URL: https://issues.apache.org/jira/browse/HBASE-12986 Project: HBase Issue Type: Improvement Reporter: Andrew Purtell HBASE-8329 recently introduced on all branches {{double RegionServerServices#getCompactionPressure()}}, which returns a value greater than or equal to 0.0, and any value greater than 1.0 means we have exceeded the store file limit on some stores. It could be reasonable to send this value along in server load statistics (clamping max at 1.0), and consider it as an additional term in the ExponentialClientBackoffPolicy. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
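The clamp-and-combine idea can be sketched like so. The exponential shape and the weight constant are illustrative, not the actual ExponentialClientBackoffPolicy code; only the clamp at 1.0 comes from the issue text.

```java
// Hedged sketch: clamp compaction pressure to [0.0, 1.0] and fold it in
// as another pressure term feeding an exponential backoff.
class BackoffSketch {
    static double clamp(double compactionPressure) {
        return Math.min(1.0, Math.max(0.0, compactionPressure));
    }

    static long backoffMillis(double memstorePressure, double compactionPressure,
                              long maxBackoffMillis) {
        // Take the worst of the pressure signals, then ramp exponentially.
        double pressure = Math.max(memstorePressure, clamp(compactionPressure));
        double scale = (Math.exp(pressure * 4.0) - 1.0) / (Math.exp(4.0) - 1.0);
        return (long) (scale * maxBackoffMillis);
    }
}
```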
[jira] [Created] (HBASE-12995) Document that HConnection#getTable methods do not throw TableNotFoundException since 0.98.1
Andrew Purtell created HBASE-12995: -- Summary: Document that HConnection#getTable methods do not throw TableNotFoundException since 0.98.1 Key: HBASE-12995 URL: https://issues.apache.org/jira/browse/HBASE-12995 Project: HBase Issue Type: Task Affects Versions: 0.98.1 Reporter: Andrew Purtell Assignee: Andrew Purtell Priority: Minor Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11 [~jamestaylor] mentioned that Phoenix recently discovered that at some point the {{HConnection#getTable}} "lightweight table reference" methods stopped throwing TableNotFoundException. It used to be (in 0.94 and 0.96) that all APIs that construct HTables would check if the table is locatable and throw exceptions if not. Now, such exceptions will only be thrown at the time of the first operation submitted using the table reference, should a problem be detected then. We did a bisect and it seems this was changed in the 0.98.1 release by HBASE-10080. Since the change has now shipped in ten 0.98 releases in total, we should just document the change in the javadoc of the HConnection class (Connection in branch-1+). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
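A toy model of the behavior change may make the new semantics concrete. These are stub classes, not HBase's API: the reference is handed out unchecked, and the table-not-found failure only surfaces at the first operation.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative model only: getTable() no longer checks existence eagerly.
class MiniConnection {
    private final Map<String, String> catalog = new HashMap<>();

    void createTable(String name) { catalog.put(name, "server-1"); }

    MiniTable getTable(String name) {
        return new MiniTable(this, name); // post-0.98.1: no lookup here
    }

    String locate(String name) {
        String location = catalog.get(name);
        if (location == null) {
            throw new IllegalStateException("TableNotFound: " + name);
        }
        return location;
    }
}

class MiniTable {
    private final MiniConnection connection;
    private final String name;

    MiniTable(MiniConnection connection, String name) {
        this.connection = connection;
        this.name = name;
    }

    String get(String row) {
        connection.locate(name); // first operation does the real lookup
        return "value@" + row;
    }
}
```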
[jira] [Created] (HBASE-13000) Backport HBASE-11240 (Print hdfs pipeline when hlog's sync is slow) to 0.98
Andrew Purtell created HBASE-13000: -- Summary: Backport HBASE-11240 (Print hdfs pipeline when hlog's sync is slow) to 0.98 Key: HBASE-13000 URL: https://issues.apache.org/jira/browse/HBASE-13000 Project: HBase Issue Type: Task Reporter: Andrew Purtell Assignee: Andrew Purtell Priority: Minor Fix For: 0.98.11 It would be useful to know about abnormal datanodes in a 0.98 install too. Implement for 0.98, incorporating addendums. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-13005) TestDeleteTableHandler failing in 0.98 hadoop 1 builds
Andrew Purtell created HBASE-13005: -- Summary: TestDeleteTableHandler failing in 0.98 hadoop 1 builds Key: HBASE-13005 URL: https://issues.apache.org/jira/browse/HBASE-13005 Project: HBase Issue Type: Bug Reporter: Andrew Purtell Assignee: Andrew Purtell Priority: Minor Fix For: 0.98.11 Stabilize the test or revert the change containing it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-12866) TestHFilePerformance is broken
[ https://issues.apache.org/jira/browse/HBASE-12866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-12866. Tags: HBASE-9910 Resolution: Duplicate Fix Version/s: (was: 0.98.11) (was: 1.1.0) (was: 1.0.1) (was: 2.0.0) Assignee: (was: Andrew Purtell) Resolving as dup of HBASE-9910 > TestHFilePerformance is broken > -- > > Key: HBASE-12866 > URL: https://issues.apache.org/jira/browse/HBASE-12866 > Project: HBase > Issue Type: Bug > Components: test >Affects Versions: 0.98.8 > Environment: Command > bin/hbase org.apache.hadoop.hbase.io.hfile.TestHFilePerformance > Failure observed for test with the following options: Read HFile with > codecName: none cipherName: aes >Reporter: Vikas Vishwakarma >Priority: Minor > > Command > bin/hbase org.apache.hadoop.hbase.io.hfile.TestHFilePerformance > File Type: HFile > Writing HFile with codecName: none cipherName: aes > 2015-01-15 16:54:51 Started timing. > /home/vvishwakarma/vikas/projects/hbase-src-0.98.8/hbase-0.98.8/target/test-data/03d50949-0185-4fac-b072-957d2da6ae0e/TestHFilePerformance/HFile.Performance > HFile write method: > 2015-01-15 16:54:52 Stopped timing. > 2015-01-15 16:54:52 Data written: > 2015-01-15 16:54:52 rate = 66MB/s > 2015-01-15 16:54:52 total = 5220B > 2015-01-15 16:54:52 File written: > 2015-01-15 16:54:52 rate = 66MB/s > 2015-01-15 16:54:52 total = 52303530B > +++ > Reading file of type: HFile > Input file size: 52303530 > 2015-01-15 16:54:52 Started timing.
> 2015-01-15 16:54:52,680 ERROR [main] util.AbstractHBaseTool: Error running > command-line tool > org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem reading HFile > Trailer from file > /home/vvishwakarma/vikas/projects/hbase-src-0.98.8/hbase-0.98.8/target/test-data/03d50949-0185-4fac-b072-957d2da6ae0e/TestHFilePerformance/HFile.Performance > at > org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:463) > at > org.apache.hadoop.hbase.io.hfile.HFile.createReaderFromStream(HFile.java:517) > at > org.apache.hadoop.hbase.io.hfile.TestHFilePerformance.timeReading(TestHFilePerformance.java:272) > at > org.apache.hadoop.hbase.io.hfile.TestHFilePerformance.testRunComparisons(TestHFilePerformance.java:390) > at > org.apache.hadoop.hbase.io.hfile.TestHFilePerformance.doWork(TestHFilePerformance.java:447) > at > org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:112) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at > org.apache.hadoop.hbase.io.hfile.TestHFilePerformance.main(TestHFilePerformance.java:452) > Caused by: java.io.IOException: Using no compression but > onDiskSizeWithoutHeader=134, uncompressedSizeWithoutHeader=113, > numChecksumbytes=4 > at > org.apache.hadoop.hbase.io.hfile.HFileBlock.assumeUncompressed(HFileBlock.java:561) > at > org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1589) > at > org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1408) > at > org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlock(HFileBlock.java:1248) > at > org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlockWithBlockType(HFileBlock.java:1256) > at > org.apache.hadoop.hbase.io.hfile.HFileReaderV2.<init>(HFileReaderV2.java:146) > at > org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:451) > ... 7 more -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-13019) Fork 0.98 docs after HBASE-12585 ?
Andrew Purtell created HBASE-13019: -- Summary: Fork 0.98 docs after HBASE-12585 ? Key: HBASE-13019 URL: https://issues.apache.org/jira/browse/HBASE-13019 Project: HBase Issue Type: Task Reporter: Andrew Purtell Should we fork the 0.98 docs like we have with 0.94 after HBASE-12585 ? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-13020) Add 'patchprocess/*' to RAT excludes on all branches
Andrew Purtell created HBASE-13020: -- Summary: Add 'patchprocess/*' to RAT excludes on all branches Key: HBASE-13020 URL: https://issues.apache.org/jira/browse/HBASE-13020 Project: HBase Issue Type: Task Reporter: Andrew Purtell Assignee: Andrew Purtell On HBASE-13005 a precommit build for 0.98 failed because the RAT check failed because patchprocess/ is not excluded. https://builds.apache.org/job/PreCommit-HBASE-Build/12784//artifact/patchprocess/patchReleaseAuditWarnings.txt: {quote} {noformat} 42 Unknown Licenses *** Unapproved licenses: patchprocess/patchFindbugsWarningshbase-annotations.xml patchprocess/newPatchFindbugsWarningshbase-annotations.html patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html patchprocess/newPatchFindbugsWarningshbase-client.html patchprocess/newPatchFindbugsWarningshbase-rest.html patchprocess/newPatchFindbugsWarningshbase-server.xml patchprocess/newPatchFindbugsWarningshbase-protocol.html patchprocess/patchFindbugsWarningshbase-client.xml patchprocess/newPatchFindbugsWarningshbase-thrift.xml patchprocess/patchJavadocWarnings.txt patchprocess/patchFindbugsWarningshbase-thrift.xml patchprocess/newPatchFindbugsWarningshbase-rest.xml patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html patchprocess/patchFindbugsWarningshbase-hadoop-compat.xml patchprocess/newPatchFindbugsWarningshbase-annotations.xml patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html patchprocess/patchFindbugsWarningshbase-examples.xml patchprocess/trunkCheckstyle.xml patchprocess/checkstyle-aggregate.html patchprocess/patchFindbugsWarningshbase-protocol.xml patchprocess/patchProtocErrors.txt patchprocess/newPatchFindbugsWarningshbase-common.html patchprocess/patch patchprocess/newPatchFindbugsWarningshbase-thrift.html patchprocess/patchJavacWarnings.txt patchprocess/newPatchFindbugsWarningshbase-common.xml patchprocess/newPatchFindbugsWarningshbase-examples.xml patchprocess/newPatchFindbugsWarningshbase-prefix-tree.xml 
patchprocess/newPatchFindbugsWarningshbase-client.xml patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.xml patchprocess/newPatchFindbugsWarningshbase-server.html patchprocess/trunkJavacWarnings.txt patchprocess/patchFindbugsWarningshbase-rest.xml patchprocess/patchCheckstyle.xml patchprocess/jira patchprocess/patchFindbugsWarningshbase-server.xml patchprocess/newPatchFindbugsWarningshbase-protocol.xml patchprocess/patchFindbugsWarningshbase-prefix-tree.xml patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.xml patchprocess/patchFindbugsWarningshbase-hadoop2-compat.xml patchprocess/newPatchFindbugsWarningshbase-examples.html patchprocess/patchFindbugsWarningshbase-common.xml {noformat} {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-13020) Add 'patchprocess/*' to RAT excludes on all branches
[ https://issues.apache.org/jira/browse/HBASE-13020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-13020. Resolution: Fixed Fix Version/s: 0.98.11 0.94.27 Only relevant for 0.98 and 0.94. I pushed trivial POM changes. Tested them first. > Add 'patchprocess/*' to RAT excludes on all branches > > > Key: HBASE-13020 > URL: https://issues.apache.org/jira/browse/HBASE-13020 > Project: HBase > Issue Type: Task >Reporter: Andrew Purtell >Assignee: Andrew Purtell > Fix For: 0.94.27, 0.98.11 > > Attachments: HBASE-13020-0.94.patch, HBASE-13020-0.98.patch > > > On HBASE-13005 a precommit build for 0.98 failed because the RAT check failed > because patchprocess/ is not excluded. > https://builds.apache.org/job/PreCommit-HBASE-Build/12784//artifact/patchprocess/patchReleaseAuditWarnings.txt: > {quote} > {noformat} > 42 Unknown Licenses > *** > Unapproved licenses: > patchprocess/patchFindbugsWarningshbase-annotations.xml > patchprocess/newPatchFindbugsWarningshbase-annotations.html > patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html > patchprocess/newPatchFindbugsWarningshbase-client.html > patchprocess/newPatchFindbugsWarningshbase-rest.html > patchprocess/newPatchFindbugsWarningshbase-server.xml > patchprocess/newPatchFindbugsWarningshbase-protocol.html > patchprocess/patchFindbugsWarningshbase-client.xml > patchprocess/newPatchFindbugsWarningshbase-thrift.xml > patchprocess/patchJavadocWarnings.txt > patchprocess/patchFindbugsWarningshbase-thrift.xml > patchprocess/newPatchFindbugsWarningshbase-rest.xml > patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html > patchprocess/patchFindbugsWarningshbase-hadoop-compat.xml > patchprocess/newPatchFindbugsWarningshbase-annotations.xml > patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html > patchprocess/patchFindbugsWarningshbase-examples.xml > patchprocess/trunkCheckstyle.xml > patchprocess/checkstyle-aggregate.html > 
patchprocess/patchFindbugsWarningshbase-protocol.xml > patchprocess/patchProtocErrors.txt > patchprocess/newPatchFindbugsWarningshbase-common.html > patchprocess/patch > patchprocess/newPatchFindbugsWarningshbase-thrift.html > patchprocess/patchJavacWarnings.txt > patchprocess/newPatchFindbugsWarningshbase-common.xml > patchprocess/newPatchFindbugsWarningshbase-examples.xml > patchprocess/newPatchFindbugsWarningshbase-prefix-tree.xml > patchprocess/newPatchFindbugsWarningshbase-client.xml > patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.xml > patchprocess/newPatchFindbugsWarningshbase-server.html > patchprocess/trunkJavacWarnings.txt > patchprocess/patchFindbugsWarningshbase-rest.xml > patchprocess/patchCheckstyle.xml > patchprocess/jira > patchprocess/patchFindbugsWarningshbase-server.xml > patchprocess/newPatchFindbugsWarningshbase-protocol.xml > patchprocess/patchFindbugsWarningshbase-prefix-tree.xml > patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.xml > patchprocess/patchFindbugsWarningshbase-hadoop2-compat.xml > patchprocess/newPatchFindbugsWarningshbase-examples.html > patchprocess/patchFindbugsWarningshbase-common.xml > {noformat} > {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
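The "trivial POM changes" amount to a RAT exclusion. A sketch of the kind of stanza involved (the surrounding plugin configuration is an assumption; the actual 0.98/0.94 pom layout may differ):

```xml
<!-- Illustrative apache-rat-plugin configuration; the exclude pattern
     is the substance of the fix. -->
<plugin>
  <groupId>org.apache.rat</groupId>
  <artifactId>apache-rat-plugin</artifactId>
  <configuration>
    <excludes>
      <exclude>**/patchprocess/**</exclude>
    </excludes>
  </configuration>
</plugin>
```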
[jira] [Reopened] (HBASE-9531) a command line (hbase shell) interface to retreive the replication metrics and show replication lag
[ https://issues.apache.org/jira/browse/HBASE-9531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell reopened HBASE-9531: --- I need an addendum _again_, sigh, because of the forgotten Hadoop 1 build. Simple fix, coming right up. {noformat} [ERROR] /home/jenkins/jenkins-slave/workspace/HBase-0.98-on-Hadoop-1.1/hbase-hadoop1-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationGlobalSourceSource.java:[24,7] error: MetricsReplicationGlobalSourceSource is not abstract and does not override abstract method getLastShippedAge() in MetricsReplicationSourceSource [ERROR] /home/jenkins/jenkins-slave/workspace/HBase-0.98-on-Hadoop-1.1/hbase-hadoop1-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationSourceSourceImpl.java:[24,7] error: MetricsReplicationSourceSourceImpl is not abstract and does not override abstract method getLastShippedAge() in MetricsReplicationSourceSource [ERROR] /home/jenkins/jenkins-slave/workspace/HBase-0.98-on-Hadoop-1.1/hbase-hadoop1-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationSinkSourceImpl.java:[24,7] error: MetricsReplicationSinkSourceImpl is not abstract and does not override abstract method getLastAppliedOpAge() in MetricsReplicationSinkSource {noformat} > a command line (hbase shell) interface to retreive the replication metrics > and show replication lag > --- > > Key: HBASE-9531 > URL: https://issues.apache.org/jira/browse/HBASE-9531 > Project: HBase > Issue Type: New Feature > Components: Replication >Affects Versions: 0.99.0 >Reporter: Demai Ni >Assignee: Ashish Singhi > Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11 > > Attachments: HBASE-9531-master-v1.patch, HBASE-9531-master-v1.patch, > HBASE-9531-master-v1.patch, HBASE-9531-master-v2.patch, > HBASE-9531-master-v3.patch, HBASE-9531-master-v4.patch, > HBASE-9531-trunk-v0.patch, HBASE-9531-trunk-v0.patch, HBASE-9531-v1.patch, > HBASE-9531-v2.patch, 
HBASE-9531-v3-0.98.patch, HBASE-9531-v3-branch-1.patch, > HBASE-9531-v3.patch, HBASE-9531.patch > > > This jira is to provide a command line (hbase shell) interface to retreive > the replication metrics info such as:ageOfLastShippedOp, > timeStampsOfLastShippedOp, sizeOfLogQueue ageOfLastAppliedOp, and > timeStampsOfLastAppliedOp. And also to provide a point of time info of the > lag of replication(source only) > Understand that hbase is using Hadoop > metrics(http://hbase.apache.org/metrics.html), which is a common way to > monitor metric info. This Jira is to serve as a light-weight client > interface, comparing to a completed(certainly better, but heavier)GUI > monitoring package. I made the code works on 0.94.9 now, and like to use this > jira to get opinions about whether the feature is valuable to other > users/workshop. If so, I will build a trunk patch. > All inputs are greatly appreciated. Thank you! > The overall design is to reuse the existing logic which supports hbase shell > command 'status', and invent a new module, called ReplicationLoad. In > HRegionServer.buildServerLoad() , use the local replication service objects > to get their loads which could be wrapped in a ReplicationLoad object and > then simply pass it to the ServerLoad. In ReplicationSourceMetrics and > ReplicationSinkMetrics, a few getters and setters will be created, and ask > Replication to build a "ReplicationLoad". 
(many thanks to Jean-Daniel for > his kindly suggestions through dev email list) > the replication lag will be calculated for source only, and use this formula: > {code:title=Replication lag|borderStyle=solid} > if sizeOfLogQueue != 0 then max(ageOfLastShippedOp, (current time - > timeStampsOfLastShippedOp)) //err on the large side > else if (current time - timeStampsOfLastShippedOp) < 2* > ageOfLastShippedOp then lag = ageOfLastShippedOp // last shipped happen > recently > else lag = 0 // last shipped may happens last night, so NO real lag > although ageOfLastShippedOp is non-zero > {code} > External will look something like: > {code:title=status 'replication'|borderStyle=solid} > hbase(main):001:0> status 'replication' > version 0.94.9 > 3 live servers > hdtest017.svl.ibm.com: > SOURCE:PeerID=1, ageOfLastShippedOp=14, sizeOfLogQueue=0, > timeStampsOfLastShippedOp=Wed Sep 04 14:49:48 PDT 2013 > SINK :AgeOfLastAppliedOp=0, TimeStampsOfLastAppliedOp=Wed Sep 04 > 14:48:48 PDT 2013 > hdtest018.svl.ibm.com: > SOURCE:PeerID=1, ageOfLastShippedOp=0, sizeOfLogQueue=0, > timeStampsOfLastShippedOp=Wed Sep 04 14:48:48 PDT 2013 > SINK :AgeOfLastAppliedOp=14, TimeStampsOfLastAppliedOp=Wed Sep 04 > 14:50:59 PDT 2013
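The lag formula quoted above can be written out as a small standalone method. This is a minimal sketch of the heuristic only; the class and method names are hypothetical and not HBase API:

```java
// Minimal sketch of the replication-lag heuristic described above.
// Class and method names are hypothetical; this is not HBase source.
public class ReplicationLag {
    /**
     * @param ageOfLastShippedOp       age (ms) of the last shipped edit
     * @param timeStampOfLastShippedOp wall-clock time (ms) of the last ship
     * @param sizeOfLogQueue           number of WALs still queued
     * @param now                      current wall-clock time (ms)
     */
    public static long compute(long ageOfLastShippedOp, long timeStampOfLastShippedOp,
                               int sizeOfLogQueue, long now) {
        long sinceLastShip = now - timeStampOfLastShippedOp;
        if (sizeOfLogQueue != 0) {
            // queue not drained: err on the large side
            return Math.max(ageOfLastShippedOp, sinceLastShip);
        } else if (sinceLastShip < 2 * ageOfLastShippedOp) {
            // last shipped op happened recently
            return ageOfLastShippedOp;
        } else {
            // last ship was long ago and nothing is queued: no real lag,
            // even though ageOfLastShippedOp is non-zero
            return 0;
        }
    }
}
```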
[jira] [Resolved] (HBASE-9531) a command line (hbase shell) interface to retreive the replication metrics and show replication lag
[ https://issues.apache.org/jira/browse/HBASE-9531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-9531. --- Resolution: Fixed Fixed 0.98 Hadoop 1 build with addendum > a command line (hbase shell) interface to retreive the replication metrics > and show replication lag > --- > > Key: HBASE-9531 > URL: https://issues.apache.org/jira/browse/HBASE-9531 > Project: HBase > Issue Type: New Feature > Components: Replication >Affects Versions: 0.99.0 >Reporter: Demai Ni >Assignee: Ashish Singhi > Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11 > > Attachments: HBASE-9531-0.98-addendum.patch, > HBASE-9531-master-v1.patch, HBASE-9531-master-v1.patch, > HBASE-9531-master-v1.patch, HBASE-9531-master-v2.patch, > HBASE-9531-master-v3.patch, HBASE-9531-master-v4.patch, > HBASE-9531-trunk-v0.patch, HBASE-9531-trunk-v0.patch, HBASE-9531-v1.patch, > HBASE-9531-v2.patch, HBASE-9531-v3-0.98.patch, HBASE-9531-v3-branch-1.patch, > HBASE-9531-v3.patch, HBASE-9531.patch > > > This jira is to provide a command line (hbase shell) interface to retreive > the replication metrics info such as:ageOfLastShippedOp, > timeStampsOfLastShippedOp, sizeOfLogQueue ageOfLastAppliedOp, and > timeStampsOfLastAppliedOp. And also to provide a point of time info of the > lag of replication(source only) > Understand that hbase is using Hadoop > metrics(http://hbase.apache.org/metrics.html), which is a common way to > monitor metric info. This Jira is to serve as a light-weight client > interface, comparing to a completed(certainly better, but heavier)GUI > monitoring package. I made the code works on 0.94.9 now, and like to use this > jira to get opinions about whether the feature is valuable to other > users/workshop. If so, I will build a trunk patch. > All inputs are greatly appreciated. Thank you! > The overall design is to reuse the existing logic which supports hbase shell > command 'status', and invent a new module, called ReplicationLoad. 
In > HRegionServer.buildServerLoad() , use the local replication service objects > to get their loads which could be wrapped in a ReplicationLoad object and > then simply pass it to the ServerLoad. In ReplicationSourceMetrics and > ReplicationSinkMetrics, a few getters and setters will be created, and ask > Replication to build a "ReplicationLoad". (many thanks to Jean-Daniel for > his kindly suggestions through dev email list) > the replication lag will be calculated for source only, and use this formula: > {code:title=Replication lag|borderStyle=solid} > if sizeOfLogQueue != 0 then max(ageOfLastShippedOp, (current time - > timeStampsOfLastShippedOp)) //err on the large side > else if (current time - timeStampsOfLastShippedOp) < 2* > ageOfLastShippedOp then lag = ageOfLastShippedOp // last shipped happen > recently > else lag = 0 // last shipped may happens last night, so NO real lag > although ageOfLastShippedOp is non-zero > {code} > External will look something like: > {code:title=status 'replication'|borderStyle=solid} > hbase(main):001:0> status 'replication' > version 0.94.9 > 3 live servers > hdtest017.svl.ibm.com: > SOURCE:PeerID=1, ageOfLastShippedOp=14, sizeOfLogQueue=0, > timeStampsOfLastShippedOp=Wed Sep 04 14:49:48 PDT 2013 > SINK :AgeOfLastAppliedOp=0, TimeStampsOfLastAppliedOp=Wed Sep 04 > 14:48:48 PDT 2013 > hdtest018.svl.ibm.com: > SOURCE:PeerID=1, ageOfLastShippedOp=0, sizeOfLogQueue=0, > timeStampsOfLastShippedOp=Wed Sep 04 14:48:48 PDT 2013 > SINK :AgeOfLastAppliedOp=14, TimeStampsOfLastAppliedOp=Wed Sep 04 > 14:50:59 PDT 2013 > hdtest015.svl.ibm.com: > SOURCE:PeerID=1, ageOfLastShippedOp=0, sizeOfLogQueue=0, > timeStampsOfLastShippedOp=Wed Sep 04 14:48:48 PDT 2013 > SINK :AgeOfLastAppliedOp=0, TimeStampsOfLastAppliedOp=Wed Sep 04 > 14:48:48 PDT 2013 > hbase(main):002:0> status 'replication','source' > version 0.94.9 > 3 live servers > hdtest017.svl.ibm.com: > SOURCE:PeerID=1, ageOfLastShippedOp=14, sizeOfLogQueue=0, > 
timeStampsOfLastShippedOp=Wed Sep 04 14:49:48 PDT 2013 > hdtest018.svl.ibm.com: > SOURCE:PeerID=1, ageOfLastShippedOp=0, sizeOfLogQueue=0, > timeStampsOfLastShippedOp=Wed Sep 04 14:48:48 PDT 2013 > hdtest015.svl.ibm.com: > SOURCE:PeerID=1, ageOfLastShippedOp=0, sizeOfLogQueue=0, > timeStampsOfLastShippedOp=Wed Sep 04 14:48:48 PDT 2013 > hbase(main):003:0> status 'replication','sink' > version 0.94.9 > 3 live servers > hdtest017.svl.ibm.com: > SINK :AgeOfLastAppliedOp=0, TimeStampsOfLastAppliedOp=Wed Sep 04 > 14:48:48 PDT 2013 > hdtest018.svl.ibm.com: > SINK :AgeOfLastAppliedOp=14, TimeStampsOfLastAp
[jira] [Created] (HBASE-13044) Configuration option for disabling coprocessor loading
Andrew Purtell created HBASE-13044: -- Summary: Configuration option for disabling coprocessor loading Key: HBASE-13044 URL: https://issues.apache.org/jira/browse/HBASE-13044 Project: HBase Issue Type: Improvement Reporter: Andrew Purtell Assignee: Andrew Purtell Priority: Minor Fix For: 1.1.1, 2.0.0, 1.0.1, 0.98.11 Some users would like complete assurance that coprocessors cannot be loaded. Add a configuration option that prevents coprocessors from ever being loaded by ignoring any load directives found in the site file or table metadata. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
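A kill switch along these lines would live in the site file. A minimal sketch of what such a setting could look like in hbase-site.xml; the property name is an assumption based on this proposal, not a guarantee of what shipped:

```xml
<!-- Hypothetical site-file kill switch for coprocessor loading.
     The property name is an assumption based on this proposal. -->
<property>
  <name>hbase.coprocessor.enabled</name>
  <value>false</value>
</property>
```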
[jira] [Created] (HBASE-13081) Branch precommit builds are not updating to branch head before patch application
Andrew Purtell created HBASE-13081: -- Summary: Branch precommit builds are not updating to branch head before patch application Key: HBASE-13081 URL: https://issues.apache.org/jira/browse/HBASE-13081 Project: HBase Issue Type: Bug Reporter: Andrew Purtell See for example https://builds.apache.org/job/PreCommit-HBASE-Build/12922//console {noformat} git checkout 0.98 Previous HEAD position was 03d8918... HBASE-13069 Thrift Http Server returns an error code of 500 instead of 401 when authentication fails (Srikanth Srungarapu) Switched to branch '0.98' Your branch is behind 'origin/0.98' by 48 commits, and can be fast-forwarded. (use "git pull" to update your local branch) git status On branch 0.98 Your branch is behind 'origin/0.98' by 48 commits, and can be fast-forwarded. (use "git pull" to update your local branch) Untracked files: (use "git add ..." to include in what will be committed) patchprocess/ nothing added to commit but untracked files present (use "git add" to track) {noformat} Because the local tree is 48 commits behind the head of the 0.98 branch, the contributor's patch based on the head of 0.98 branch cannot cleanly apply. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-13090) Progress heartbeats for long running scanners
Andrew Purtell created HBASE-13090: -- Summary: Progress heartbeats for long running scanners Key: HBASE-13090 URL: https://issues.apache.org/jira/browse/HBASE-13090 Project: HBase Issue Type: New Feature Reporter: Andrew Purtell It can be necessary to set very long timeouts for clients that issue scans over large regions when all data in the region is filtered out. This is a usability concern because it can be hard to identify what worst case timeout to use until scans are occasionally/intermittently failing in production, depending on variable scan criteria. It would be better if the client-server scan protocol can send back periodic progress heartbeats to clients as long as server scanners are alive and making progress. This is related but orthogonal to streaming scan (HBASE-13071). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-13096) NPE from SecureWALCellCodec$EncryptedKvEncoder#write when using WAL encryption and Phoenix secondary indexes
Andrew Purtell created HBASE-13096: -- Summary: NPE from SecureWALCellCodec$EncryptedKvEncoder#write when using WAL encryption and Phoenix secondary indexes Key: HBASE-13096 URL: https://issues.apache.org/jira/browse/HBASE-13096 Project: HBase Issue Type: Bug Affects Versions: 0.98.6 Reporter: Andrew Purtell On user@phoenix Dhavi Rami reported: {quote} I tried using phoenix in hBase with Transparent Encryption of Data At Rest enabled ( AES encryption) Works fine for a table with primary key column. But it doesn't work if I create Secondary index on that tables.I tried to dig deep into the problem and found WAL file encryption throws exception when I have Global Secondary Index created on my mutable table. Following is the error I was getting on one of the region server. {noformat} 2015-02-20 10:44:48,768 ERROR org.apache.hadoop.hbase.regionserver.wal.FSHLog: UNEXPECTED java.lang.NullPointerException at org.apache.hadoop.hbase.util.Bytes.toInt(Bytes.java:767) at org.apache.hadoop.hbase.util.Bytes.toInt(Bytes.java:754) at org.apache.hadoop.hbase.KeyValue.getKeyLength(KeyValue.java:1253) at org.apache.hadoop.hbase.regionserver.wal.SecureWALCellCodec$EncryptedKvEncoder.write(SecureWALCellCodec.java:194) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.append(ProtobufLogWriter.java:117) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$AsyncWriter.run(FSHLog.java:1137) at java.lang.Thread.run(Thread.java:745) 2015-02-20 10:44:48,776 INFO org.apache.hadoop.hbase.regionserver.wal.FSHLog: regionserver60020-WAL.AsyncWriter exiting {noformat} I had to disable WAL encryption, and it started working fine with secondary Index. So Hfile encryption works with secondary index but WAL encryption doesn't work. {quote} Parking this here for later investigation. For now I'm going to assume this is something in SecureWALCellCodec that needs looking at, but if it turns out to be a Phoenix indexer issue I will move this JIRA there. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Reopened] (HBASE-13086) Show ZK root node on Master WebUI
[ https://issues.apache.org/jira/browse/HBASE-13086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell reopened HBASE-13086: This change may have introduced a regression in the 0.98 builds. See https://builds.apache.org/job/HBase-0.98/871/ and https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/829/ {noformat} java.lang.NullPointerException: null at org.apache.hadoop.hbase.tmpl.master.MasterStatusTmplImpl.renderNoFlush(MasterStatusTmplImpl.java:360) at org.apache.hadoop.hbase.tmpl.master.MasterStatusTmpl.renderNoFlush(MasterStatusTmpl.java:390) at org.apache.hadoop.hbase.tmpl.master.MasterStatusTmpl.render(MasterStatusTmpl.java:380) at org.apache.hadoop.hbase.master.TestMasterStatusServlet.testStatusTemplateWithServers(TestMasterStatusServlet.java:146) {noformat} We can fix it with an addendum, or revert and try again. > Show ZK root node on Master WebUI > - > > Key: HBASE-13086 > URL: https://issues.apache.org/jira/browse/HBASE-13086 > Project: HBase > Issue Type: Improvement > Components: master >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Minor > Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11 > > Attachments: 13068.jpg, HBASE-13068.00.patch > > > Currently we show a well-formed ZK quorum on the master webUI but not the > root node. Root node can be changed based on deployment, so we should list it > here explicitly. This information is helpful for folks playing around with > phoenix. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-13106) Ensure endpoint-only table coprocessors can be dynamically loaded
Andrew Purtell created HBASE-13106: -- Summary: Ensure endpoint-only table coprocessors can be dynamically loaded Key: HBASE-13106 URL: https://issues.apache.org/jira/browse/HBASE-13106 Project: HBase Issue Type: Test Reporter: Andrew Purtell Priority: Trivial Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11 I came across the blog post http://www.3pillarglobal.com/insights/hbase-coprocessors and this interesting bit: {quote} This means that you can load both Observer and Endpoint Coprocessor statically using the following Method of HTableDescriptor: {noformat} addCoprocessor(String className, org.apache.hadoop.fs.Path jarFilePath, int priority, Map kvs) throws IOException {noformat} In my case, the above method worked fine for Observer Coprocessor *but didn’t work for Endpoint Coprocessor, causing the table to become unavailable and finally I had to restart my HBase*. The same Endpoint Coprocessor worked fine when loaded statically. Use the above method for Endpoint Coprocessor with caution. {quote} To check this I wrote a test, attached. It passes, all seems ok. Guessing the complaint was due to user error, probably jar placement/path issues. Let's still commit the test. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Reopened] (HBASE-13058) Hbase shell command 'scan' for non existent table shows unnecessary info for one unrelated existent table.
[ https://issues.apache.org/jira/browse/HBASE-13058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell reopened HBASE-13058: Reopened due to revert > Hbase shell command 'scan' for non existent table shows unnecessary info for > one unrelated existent table. > -- > > Key: HBASE-13058 > URL: https://issues.apache.org/jira/browse/HBASE-13058 > Project: HBase > Issue Type: Bug > Components: Client >Reporter: Abhishek Kumar >Assignee: Abhishek Kumar >Priority: Trivial > Fix For: 2.0.0, 1.1.0 > > Attachments: 0001-HBASE-13058-Error-messages-in-scan-table.patch, > 0001-HBASE-13058-shell-unknown-table-message-update.patch > > > When scanning for a non existent table in hbase shell, error message in shell > sometimes(based on META table content) displays completely unrelated table > info , which seems to be unnecessary and inconsistent with other error > messages: > {noformat} > hbase(main):016:0> scan 'noTable' > ROW COLUMN+CELL > ERROR: Unknown table Table 'noTable' was not found, got: hbase:namespace.! > - > hbase(main):017:0> scan '01_noTable' > ROW COLUMN+CELL > ERROR: Unknown table 01_noTable! > -- > {noformat} > Its happening when doing a META table scan (to locate input table ) and > scanner stops at row of another table (beyond which table can not exist) in > ConnectionManager.locateRegionInMeta: > {noformat} > private RegionLocations locateRegionInMeta(TableName tableName, byte[] row, >boolean useCache, boolean retry, int replicaId) throws > IOException { > . > > // possible we got a region of a different table... > if (!regionInfo.getTable().equals(tableName)) { > throw new TableNotFoundException( > "Table '" + tableName + "' was not found, got: " + > regionInfo.getTable() + "."); > } > ... > ... > {noformat} > Here, we can simply put a debug message(if required) and just throw the > TableNotFoundException(tableName) with only tableName instead of with > scanner positioned row. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-13110) TestCoprocessorEndpoint hangs on trunk
Andrew Purtell created HBASE-13110: -- Summary: TestCoprocessorEndpoint hangs on trunk Key: HBASE-13110 URL: https://issues.apache.org/jira/browse/HBASE-13110 Project: HBase Issue Type: Bug Affects Versions: 2.0.0 Reporter: Andrew Purtell TestCoprocessorEndpoint hangs with repeated RPC retries (RpcRetryingCallerImpl.callWithRetries) after the ProtobufCoprocessorService throws the test exception. Looks like a change on trunk has broken TestCoprocessorEndpoint. jstack of interest: {noformat} "main" prio=5 tid=0x7f87eb003000 nid=0x1303 in Object.wait() [0x000105173000] java.lang.Thread.State: TIMED_WAITING (on object monitor) at java.lang.Object.wait(Native Method) - waiting on <0x0007c91aedf8> (a java.util.concurrent.atomic.AtomicBoolean) at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:162) - locked <0x0007c91aedf8> (a java.util.concurrent.atomic.AtomicBoolean) at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel.callExecService(RegionCoprocessorRpcChannel.java: 95) at org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callBlockingMethod(CoprocessorRpcChannel.java:73) at org.apache.hadoop.hbase.ipc.protobuf.generated.TestRpcServiceProtos$TestProtobufRpcProto$BlockingStub.error(TestRpcServiceProtos.java:378) at org.apache.hadoop.hbase.coprocessor.TestCoprocessorEndpoint.testCoprocessorError(TestCoprocessorEndpoint.java:308) {noformat} Tail of the log has entries like: {noformat} 2015-02-25 18:50:03,659 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56093] ipc.CallRunner(110): B.defaultRpcServer.handler=3,queue=0,port=56093: callId: 75 service: ClientService methodName: ExecService size: 141 connection: 10.3.31.30:56149 java.io.IOException: Test exception at org.apache.hadoop.hbase.coprocessor.ProtobufCoprocessorService.error(ProtobufCoprocessorService.java:64) at org.apache.hadoop.hbase.ipc.protobuf.generated.TestRpcServiceProtos$TestProtobufRpcProto.callMethod(TestRpcServiceProtos.java:210) at 
org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:6883) at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1696) at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1678) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31309) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2038) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107) at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130) at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107) at java.lang.Thread.run(Thread.java:745) {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Reopened] (HBASE-12795) Backport HBASE-12429 (Add port to ClusterManager's actions) to 0.98
[ https://issues.apache.org/jira/browse/HBASE-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell reopened HBASE-12795: Reopening for an addendum to fix a Phoenix (test) build issue: {noformat} [ERROR] /home/apurtell/src/phoenix/phoenix-core/src/test/java/org/apache/phoenix/hbase/index/write/TestWALRecoveryCaching.java:[320,16] no suitable method found for startRegionServer(java.lang.String) method org.apache.hadoop.hbase.MiniHBaseCluster.startRegionServer() is not applicable (actual and formal argument lists differ in length) method org.apache.hadoop.hbase.MiniHBaseCluster.startRegionServer(java.lang.String,int) is not applicable (actual and formal argument lists differ in length) method org.apache.hadoop.hbase.HBaseCluster.startRegionServer(java.lang.String,int) is not applicable (actual and formal argument lists differ in length) [ERROR] /home/apurtell/src/phoenix/phoenix-core/src/test/java/org/apache/phoenix/hbase/index/write/TestWALRecoveryCaching.java:[322,16] method waitForRegionServerToStart in class org.apache.hadoop.hbase.HBaseCluster cannot be applied to given types; required: java.lang.String,int,long found: java.lang.String,long reason: actual and formal argument lists differ in length {noformat} We can discuss what to do long term about test code used by downstream projects, like move it to src/main/ etc. but for now I will fix this as a courtesy. > Backport HBASE-12429 (Add port to ClusterManager's actions) to 0.98 > --- > > Key: HBASE-12795 > URL: https://issues.apache.org/jira/browse/HBASE-12795 > Project: HBase > Issue Type: Task >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Minor > Fix For: 0.98.11 > > Attachments: HBASE-12795-0.98.patch > > > As of HBASE-12371 we are following along with improvements in the integration > test module. Evaluate HBASE-12429 (Add port to ClusterManager's actions) for > backport to 0.98. 
This improves testing with chaos to support testing on a > cluster with multiple regionservers running on a host. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-12795) Backport HBASE-12429 (Add port to ClusterManager's actions) to 0.98
[ https://issues.apache.org/jira/browse/HBASE-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-12795. Resolution: Fixed > Backport HBASE-12429 (Add port to ClusterManager's actions) to 0.98 > --- > > Key: HBASE-12795 > URL: https://issues.apache.org/jira/browse/HBASE-12795 > Project: HBase > Issue Type: Task >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Minor > Fix For: 0.98.11 > > Attachments: HBASE-12795-0.98-addendum.patch, HBASE-12795-0.98.patch > > > As of HBASE-12371 we are following along with improvements in the integration > test module. Evaluate HBASE-12429 (Add port to ClusterManager's actions) for > backport to 0.98. This improves testing with chaos to support testing on a > cluster with multiple regionservers running on a host. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-13143) TestCacheOnWrite is flaky in 0.98 builds and needs a diet
Andrew Purtell created HBASE-13143: -- Summary: TestCacheOnWrite is flaky in 0.98 builds and needs a diet Key: HBASE-13143 URL: https://issues.apache.org/jira/browse/HBASE-13143 Project: HBase Issue Type: Bug Affects Versions: 0.98.11 Reporter: Andrew Purtell Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.12 TestCacheOnWrite passes locally but has been flaking in 0.98 builds on Jenkins, most recently https://builds.apache.org/job/HBase-0.98/878/ The test takes a long time to execute (338.492 sec) and is resource intensive (216 tests). Neither of these characteristics endears it to Jenkins. When I ran this unit test on a MacBook, after a minute the fan speed was so fast I thought it would take flight. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-13144) Avoid writing files to disk in unit tests
Andrew Purtell created HBASE-13144: -- Summary: Avoid writing files to disk in unit tests Key: HBASE-13144 URL: https://issues.apache.org/jira/browse/HBASE-13144 Project: HBase Issue Type: Test Reporter: Andrew Purtell We have a number of unit tests that read and write relatively small files, but do it often and run under a timeout clock. If the build or test host is virtual or IO contended or both then the test can become flaky. Even if the host is up to the IO pressure the test would run a lot faster if we were not asking the OS to persist anything. Consider both: - If built against Hadoop 2.6+, spin up miniclusters that keep all blocks in memory-only storage - Write a simple Hadoop filesystem implementation (or find one) that keeps all state in memory and use in lieu of LocalFileSystem for unit tests. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-12969) Parameter Validation is not there for shell script, local-master-backup.sh and local-regionservers.sh
[ https://issues.apache.org/jira/browse/HBASE-12969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-12969. Resolution: Fixed Fix Version/s: 0.98.12 1.1.0 1.0.1 Hadoop Flags: Reviewed > Parameter Validation is not there for shell script, local-master-backup.sh > and local-regionservers.sh > - > > Key: HBASE-12969 > URL: https://issues.apache.org/jira/browse/HBASE-12969 > Project: HBase > Issue Type: Bug > Components: scripts >Affects Versions: 0.98.9 >Reporter: Y. SREENIVASULU REDDY >Assignee: Y. SREENIVASULU REDDY >Priority: Minor > Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.12 > > Attachments: HBASE-12969.patch > > > while executing local-regionservers.sh or local-master-backup.sh in > $HBASE_HOME/bin > if parameter is non numeric value then scripts are throwing failures. > we need to handle the validation also for those scripts. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
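Parameter validation of this sort can be sketched in POSIX shell. The helper name below is hypothetical, not taken from the actual patch:

```shell
#!/bin/sh
# Sketch of numeric-argument validation such scripts could perform
# before using the value as an instance offset. The helper name
# is_positive_int is hypothetical, not taken from the actual patch.
is_positive_int() {
  case "$1" in
    ''|*[!0-9]*) return 1 ;;  # empty or contains a non-digit
    *) return 0 ;;
  esac
}

# Examples: "2" is accepted, "two" is rejected
is_positive_int "2" && echo "2: ok"
is_positive_int "two" || echo "two: rejected"
```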
[jira] [Reopened] (HBASE-13109) Make better SEEK vs SKIP decisions during scanning
[ https://issues.apache.org/jira/browse/HBASE-13109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell reopened HBASE-13109: > Make better SEEK vs SKIP decisions during scanning > -- > > Key: HBASE-13109 > URL: https://issues.apache.org/jira/browse/HBASE-13109 > Project: HBase > Issue Type: Improvement >Reporter: Lars Hofhansl >Assignee: Lars Hofhansl > Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.12 > > Attachments: 13109-0.98-v4.txt, 13109-trunk-v2.txt, > 13109-trunk-v3.txt, 13109-trunk-v4.txt, 13109-trunk-v5.txt, 13109-trunk.txt, > nextIndexKVChange_new.patch > > > I'm re-purposing this issue to add a heuristic as to when to SEEK and when to > SKIP Cells. This has come up in various issues, and I think I have a way to > finally fix this now. HBASE-9778, HBASE-12311, and friends are related. > --- Old description --- > This is a continuation of HBASE-9778. > We've seen a scenario of a very slow scan over a region using a timerange > that happens to fall after the ts of any Cell in the region. > Turns out we spend a lot of time seeking. > Tested with a 5 column table, and the scan is 5x faster when the timerange > falls before all Cells' ts. > We can use the lookahead hint introduced in HBASE-9778 to do opportunistic > SKIPing before we actually seek. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-13234) Improve the obviousness of the download link on hbase.apache.org
Andrew Purtell created HBASE-13234: -- Summary: Improve the obviousness of the download link on hbase.apache.org Key: HBASE-13234 URL: https://issues.apache.org/jira/browse/HBASE-13234 Project: HBase Issue Type: Task Reporter: Andrew Purtell Assignee: Andrew Purtell Priority: Minor Update the hbase.apache.org homepage to include a very obvious section describing how a user can "Download HBase Software Here" with a link. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-13234) Improve the obviousness of the download link on hbase.apache.org
[ https://issues.apache.org/jira/browse/HBASE-13234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-13234. Resolution: Fixed Hadoop Flags: Reviewed Pushed site source change to master. Regenerated site using 'mvn site' and committed to SVN. > Improve the obviousness of the download link on hbase.apache.org > > > Key: HBASE-13234 > URL: https://issues.apache.org/jira/browse/HBASE-13234 > Project: HBase > Issue Type: Task > Components: documentation >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Minor > Fix For: 2.0.0 > > Attachments: HBASE-13234.patch, screenshot.png > > > Update the hbase.apache.org homepage to include a very obvious section > describing how a user can "Download HBase Software Here" with a link. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-13237) Improve trademark marks on the hbase.apache.org homepage
Andrew Purtell created HBASE-13237: -- Summary: Improve trademark marks on the hbase.apache.org homepage Key: HBASE-13237 URL: https://issues.apache.org/jira/browse/HBASE-13237 Project: HBase Issue Type: Task Components: documentation Reporter: Andrew Purtell Assignee: Andrew Purtell Priority: Minor Fix For: 2.0.0 Ensure trademark marks are next to first and prominent uses of "HBase" on the hbase.apache.org homepage -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-13238) Time out locks and abort if HDFS is wedged
Andrew Purtell created HBASE-13238: -- Summary: Time out locks and abort if HDFS is wedged Key: HBASE-13238 URL: https://issues.apache.org/jira/browse/HBASE-13238 Project: HBase Issue Type: Brainstorming Reporter: Andrew Purtell This is a brainstorming issue on the topic of timing out locks and aborting if HDFS is wedged. We had a minor production incident where a region was unable to close after 24 hours. The CloseRegionHandler was waiting for a write lock on the ReentrantReadWriteLock we take in HRegion#doClose. There were outstanding read locks. Three other threads were stuck in scanning, all blocked on the same DFSInputStream. Two were blocked in DFSInputStream#getFileLength, the third was waiting in epoll from SocketIOWithTimeout$SelectorPool#select with apparent infinite timeout from PacketReceiver#readChannelFully. This is similar to other issues we have seen before, in the context of a region wanting to finish a compaction but being unable to, due to some HDFS issue causing the reader to become extremely slow if not wedged. The Hadoop version was 2.3 (specifically 2.3 CDH 5.0.1), and we are planning to upgrade, but [~lhofhansl] and I were discussing the issue in general and wonder if we should not be timing out locks such as the ReentrantReadWriteLock, and if so, abort the regionserver. In this case that would have caused recovery and reassignment of the region in question and we would not have had a prolonged availability problem.
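The timed-lock idea under discussion can be sketched with java.util.concurrent. This is a minimal illustration of the approach only; the timeout value and the abort hook are hypothetical, not HBase code:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of the idea under discussion: bound the wait for the close
// lock and escalate instead of blocking forever. The timeout and the
// abort hook are hypothetical, not HBase source.
public class TimedCloseLock {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    /** Returns true if the close lock was acquired within the timeout. */
    public boolean tryCloseWithTimeout(long timeout, TimeUnit unit)
            throws InterruptedException {
        if (lock.writeLock().tryLock(timeout, unit)) {
            try {
                // ... perform the region close under the write lock ...
                return true;
            } finally {
                lock.writeLock().unlock();
            }
        }
        // Timed out: readers (e.g. scanners stuck on a wedged
        // DFSInputStream) still hold the lock. At this point a server
        // could abort, triggering recovery and reassignment of the
        // region, rather than remain unavailable indefinitely.
        return false;
    }
}
```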
[jira] [Resolved] (HBASE-13109) Make better SEEK vs SKIP decisions during scanning
[ https://issues.apache.org/jira/browse/HBASE-13109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-13109. Resolution: Fixed Re-resolving. Follow up in PHOENIX-1731 > Make better SEEK vs SKIP decisions during scanning > -- > > Key: HBASE-13109 > URL: https://issues.apache.org/jira/browse/HBASE-13109 > Project: HBase > Issue Type: Improvement >Reporter: Lars Hofhansl >Assignee: Lars Hofhansl > Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.12 > > Attachments: 13109-0.98-v4.txt, 13109-trunk-v2.txt, > 13109-trunk-v3.txt, 13109-trunk-v4.txt, 13109-trunk-v5.txt, 13109-trunk.txt, > nextIndexKVChange_new.patch > > > I'm re-purposing this issue to add a heuristic as to when to SEEK and when to > SKIP Cells. This has come up in various issues, and I think I have a way to > finally fix this now. HBASE-9778, HBASE-12311, and friends are related. > --- Old description --- > This is a continuation of HBASE-9778. > We've seen a scenario of a very slow scan over a region using a timerange > that happens to fall after the ts of any Cell in the region. > Turns out we spend a lot of time seeking. > Tested with a 5 column table, and the scan is 5x faster when the timerange > falls before all Cells' ts. > We can use the lookahead hint introduced in HBASE-9778 to do opportunistic > SKIPing before we actually seek. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-13267) Deprecate or remove isFileDeletable from SnapshotHFileCleaner
Andrew Purtell created HBASE-13267: -- Summary: Deprecate or remove isFileDeletable from SnapshotHFileCleaner Key: HBASE-13267 URL: https://issues.apache.org/jira/browse/HBASE-13267 Project: HBase Issue Type: Task Reporter: Andrew Purtell Priority: Minor Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.12 The isFileDeletable method in SnapshotHFileCleaner became vestigial after HBASE-12627; let's remove it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-13268) Backport the HBASE-7781 security test updates to use the MiniKDC
Andrew Purtell created HBASE-13268: -- Summary: Backport the HBASE-7781 security test updates to use the MiniKDC Key: HBASE-13268 URL: https://issues.apache.org/jira/browse/HBASE-13268 Project: HBase Issue Type: Task Reporter: Andrew Purtell Assignee: Andrew Purtell Fix For: 0.98.12 Consider backport of the security test updates to use the MiniKDC that are subtasks of HBASE-7781. Would be good to improve test coverage of security code in 0.98 branch, as long as none of the following apply: - The changes are a PITA to backport - The changes break a compatibility requirement - The changes introduce test instability Investigate -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-13269) Limit result array preallocation to avoid OOME with large scan caching values
Andrew Purtell created HBASE-13269: -- Summary: Limit result array preallocation to avoid OOME with large scan caching values Key: HBASE-13269 URL: https://issues.apache.org/jira/browse/HBASE-13269 Project: HBase Issue Type: Bug Reporter: Andrew Purtell Assignee: Andrew Purtell Fix For: 1.0.1, 0.98.12 Scan#setCaching(Integer.MAX_VALUE) will likely terminate the regionserver with an OOME due to preallocation of the result array according to this parameter. We should limit the preallocation to some sane value. Definitely affects 0.98 (fix needed to HRegionServer) and 1.0.x (fix needed to RsRPCServices), not sure about later versions. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
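The fix described above amounts to clamping the client-supplied caching value before using it as an initial array capacity. A hedged sketch of such a clamp — the cap value, class, and helper name are assumptions, not the actual patch:

```java
public class ResultPrealloc {
    // Assumed cap; the real fix would pick a sane server-side constant
    // or make it configurable.
    static final int MAX_PREALLOCATION = 1000;

    // Clamp the client-requested caching value so that
    // Scan#setCaching(Integer.MAX_VALUE) cannot drive a huge up-front
    // allocation (and OOME) on the regionserver. Negative values are
    // treated as zero.
    static int safeInitialCapacity(int caching) {
        return Math.min(Math.max(caching, 0), MAX_PREALLOCATION);
    }
}
```

Usage would be `new ArrayList<Result>(ResultPrealloc.safeInitialCapacity(caching))`: the list still grows to hold however many results actually arrive, but the preallocation is bounded.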
[jira] [Created] (HBASE-13279) Add src/main/asciidoc/asciidoctor.css to RAT exclusion list in POM
Andrew Purtell created HBASE-13279: -- Summary: Add src/main/asciidoc/asciidoctor.css to RAT exclusion list in POM Key: HBASE-13279 URL: https://issues.apache.org/jira/browse/HBASE-13279 Project: HBase Issue Type: Bug Components: documentation Reporter: Andrew Purtell Assignee: Andrew Purtell Priority: Minor Fix For: 2.0.0 Attachments: 0001-Add-src-main-asciidoc-asciidoctor.css-to-RAT-exclusi.patch After copying back the latest doc updates from trunk to 0.98 branch for a release, the release audit failed due to src/main/asciidoc/asciidoctor.css, which is MIT licensed but only by reference. Exclude it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
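For reference, a RAT exclusion of this kind is typically expressed in the POM's apache-rat-plugin configuration roughly as follows. This is a hedged sketch; the actual HBase POM may structure the plugin section differently:

```xml
<plugin>
  <groupId>org.apache.rat</groupId>
  <artifactId>apache-rat-plugin</artifactId>
  <configuration>
    <excludes>
      <!-- MIT licensed, but only by reference, so RAT flags it -->
      <exclude>src/main/asciidoc/asciidoctor.css</exclude>
    </excludes>
  </configuration>
</plugin>
```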
[jira] [Reopened] (HBASE-11544) [Ergonomics] hbase.client.scanner.caching is dogged and will try to return batch even if it means OOME
[ https://issues.apache.org/jira/browse/HBASE-11544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell reopened HBASE-11544: > [Ergonomics] hbase.client.scanner.caching is dogged and will try to return > batch even if it means OOME > -- > > Key: HBASE-11544 > URL: https://issues.apache.org/jira/browse/HBASE-11544 > Project: HBase > Issue Type: Bug >Reporter: stack >Assignee: Jonathan Lawlor >Priority: Critical > Fix For: 2.0.0, 1.1.0 > > Attachments: HBASE-11544-branch_1_0-v1.patch, > HBASE-11544-branch_1_0-v2.patch, HBASE-11544-v1.patch, HBASE-11544-v2.patch, > HBASE-11544-v3.patch, HBASE-11544-v4.patch, HBASE-11544-v5.patch, > HBASE-11544-v6.patch, HBASE-11544-v6.patch, HBASE-11544-v6.patch, > HBASE-11544-v7.patch, HBASE-11544-v8-branch-1.patch, HBASE-11544-v8.patch, > gc.j.png, hits.j.png, mean.png, net.j.png > > > Running some tests, I set hbase.client.scanner.caching=1000. Dataset has > large cells. I kept OOME'ing. > Serverside, we should measure how much we've accumulated and return to the > client whatever we've gathered once we pass out a certain size threshold > rather than keep accumulating till we OOME. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
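The serverside behavior the report asks for — stop accumulating once a size threshold is crossed, regardless of the requested batch count — can be sketched as below. The threshold constant and all names are assumptions for illustration, not HBase's actual implementation:

```java
import java.util.ArrayList;
import java.util.List;

public class SizeBoundedBatch {
    // Assumed threshold; the real server would use a configurable
    // max-result-size limit rather than a hardcoded constant.
    static final long MAX_BATCH_BYTES = 2L * 1024 * 1024;

    // Accumulate rows until either the requested caching count or the
    // byte threshold is reached, whichever comes first. With large cells
    // the byte bound kicks in long before a huge caching value would.
    static List<byte[]> gather(List<byte[]> source, int caching) {
        List<byte[]> batch = new ArrayList<>();
        long accumulated = 0;
        for (byte[] row : source) {
            if (batch.size() >= caching || accumulated >= MAX_BATCH_BYTES) {
                break;
            }
            batch.add(row);
            accumulated += row.length;
        }
        return batch;
    }
}
```

The client then issues another next() call for the rest, so a size-bounded partial batch is transparent apart from extra round trips.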
[jira] [Created] (HBASE-13325) Protocol Buffers 2.5 no longer available for download
Andrew Purtell created HBASE-13325: -- Summary: Protocol Buffers 2.5 no longer available for download Key: HBASE-13325 URL: https://issues.apache.org/jira/browse/HBASE-13325 Project: HBase Issue Type: Bug Reporter: Andrew Purtell Same as HADOOP-11738 {quote} Google recently switched off Google Code. They transferred the Protocol Buffers project to GitHub, and binaries are available from [Google's developer page|https://developers.google.com/protocol-buffers/docs/downloads]. However, only the most recent version is available. We use version 2.5 to be compatible with Hadoop. That version isn't available for download. {quote} Let the fun begin -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Reopened] (HBASE-13248) Make HConnectionImplementation top-level class.
[ https://issues.apache.org/jira/browse/HBASE-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell reopened HBASE-13248: This change as committed broke the trunk build: {noformat} [ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.2:compile (default-compile) on project hbase-client: Compilation failure: Compilation failure: [ERROR] /home/apurtell/src/hbase/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionManager.java:[62,36] cannot find symbol [ERROR] symbol: class ConnectionImplementation [ERROR] location: class org.apache.hadoop.hbase.client.ConnectionManager [ERROR] /home/apurtell/src/hbase/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ZooKeeperKeepAliveConnection.java:[42,5] cannot find symbol [ERROR] symbol: class ConnectionImplementation [ERROR] location: class org.apache.hadoop.hbase.client.ZooKeeperKeepAliveConnection [ERROR] /home/apurtell/src/hbase/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ZooKeeperRegistry.java:[40,3] cannot find symbol [ERROR] symbol: class ConnectionImplementation [ERROR] location: class org.apache.hadoop.hbase.client.ZooKeeperRegistry [ERROR] /home/apurtell/src/hbase/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionUtils.java:[149,45] cannot find symbol [ERROR] symbol: class ConnectionImplementation [ERROR] location: class org.apache.hadoop.hbase.client.ConnectionUtils [ERROR] /home/apurtell/src/hbase/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionManager.java:[73,62] cannot find symbol [ERROR] symbol: class ConnectionImplementation [ERROR] location: class org.apache.hadoop.hbase.client.ConnectionManager [ERROR] /home/apurtell/src/hbase/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionManager.java:[77,37] cannot find symbol [ERROR] symbol: class ConnectionImplementation [ERROR] 
/home/apurtell/src/hbase/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionManager.java:[122,7] cannot find symbol [ERROR] symbol: class ConnectionImplementation [ERROR] location: class org.apache.hadoop.hbase.client.ConnectionManager [ERROR] /home/apurtell/src/hbase/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionManager.java:[124,23] cannot find symbol [ERROR] symbol: class ConnectionImplementation [ERROR] location: class org.apache.hadoop.hbase.client.ConnectionManager [ERROR] /home/apurtell/src/hbase/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionManager.java:[128,23] cannot find symbol [ERROR] symbol: class ConnectionImplementation [ERROR] location: class org.apache.hadoop.hbase.client.ConnectionManager [ERROR] /home/apurtell/src/hbase/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionManager.java:[280,34] cannot find symbol [ERROR] symbol: class ConnectionImplementation [ERROR] location: class org.apache.hadoop.hbase.client.ConnectionManager [ERROR] /home/apurtell/src/hbase/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionManager.java:[295,7] cannot find symbol [ERROR] symbol: class ConnectionImplementation [ERROR] location: class org.apache.hadoop.hbase.client.ConnectionManager [ERROR] /home/apurtell/src/hbase/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ZooKeeperKeepAliveConnection.java:[49,9] cannot find symbol [ERROR] symbol: class ConnectionImplementation [ERROR] location: class org.apache.hadoop.hbase.client.ZooKeeperKeepAliveConnection [ERROR] /home/apurtell/src/hbase/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ZooKeeperRegistry.java:[44,33] cannot find symbol [ERROR] symbol: class ConnectionImplementation [ERROR] location: class org.apache.hadoop.hbase.client.ZooKeeperRegistry [ERROR] /home/apurtell/src/hbase/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ZooKeeperRegistry.java:[47,17] cannot find symbol [ERROR] symbol: 
class ConnectionImplementation [ERROR] location: class org.apache.hadoop.hbase.client.ZooKeeperRegistry [ERROR] /home/apurtell/src/hbase/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionUtils.java:[85,12] cannot find symbol [ERROR] symbol: variable ConnectionImplementation [ERROR] location: class org.apache.hadoop.hbase.client.ConnectionUtils [ERROR] /home/apurtell/src/hbase/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionUtils.java:[155,5] method does not override or implement a method from a supertype [ERROR] /home/apurtell/src/hbase/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionFactory.java:[224,7] cannot find symbol [ERROR] symbol: class ConnectionImplementation [ERROR] location: class org.apache.hadoop.hbase.client.ConnectionFactory [ERROR] /home/apurtell/src/hbase/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java:[2564,19] cannot find symbol [ERROR] symbol: class Con
[jira] [Resolved] (HBASE-13248) Make HConnectionImplementation top-level class.
[ https://issues.apache.org/jira/browse/HBASE-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-13248. Resolution: Fixed Re-resolving. My last git fetch missed the addendum. I see now a fix has been committed. Sorry for the noise. > Make HConnectionImplementation top-level class. > --- > > Key: HBASE-13248 > URL: https://issues.apache.org/jira/browse/HBASE-13248 > Project: HBase > Issue Type: Sub-task > Components: API >Affects Versions: 2.0.0 >Reporter: Mikhail Antonov >Assignee: Mikhail Antonov > Fix For: 2.0.0 > > Attachments: HBASE-13248-v2.patch, HBASE-13248-v2.patch, > HBASE-13248.patch > > > To separate concerns inside ConnectionManager. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-13336) Consistent rules for security meta table protections
Andrew Purtell created HBASE-13336: -- Summary: Consistent rules for security meta table protections Key: HBASE-13336 URL: https://issues.apache.org/jira/browse/HBASE-13336 Project: HBase Issue Type: Improvement Reporter: Andrew Purtell The AccessController and VisibilityController do different things regarding protecting their meta tables. The AC allows schema changes and disable/enable if the user has permission. The VC unconditionally disallows all admin actions. Generally, bad things will happen if these meta tables are damaged, disabled, or dropped. The likely outcome is random frequent (or constant) server side op failures with nasty stack traces. We should have consistent and sensible rules for protecting security meta tables. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-13340) Include LimitedPrivate interfaces in the API compatibility report
Andrew Purtell created HBASE-13340: -- Summary: Include LimitedPrivate interfaces in the API compatibility report Key: HBASE-13340 URL: https://issues.apache.org/jira/browse/HBASE-13340 Project: HBase Issue Type: Improvement Reporter: Andrew Purtell Assignee: Andrew Purtell Priority: Minor Fix For: 2.0.0 The API compatibility checker script added in HBASE-12808 passes a file containing annotations to the JavaACC tool. When JavaACC is invoked with that option it will filter out all interfaces that do not have that annotation. Currently only Public interfaces are checked. We should add LimitedPrivate to the annotation list, otherwise we will miss changes that impact coprocessors and other users of those interfaces. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-13341) Add option to disable filtering on interface annotations for the API compatibility report
Andrew Purtell created HBASE-13341: -- Summary: Add option to disable filtering on interface annotations for the API compatibility report Key: HBASE-13341 URL: https://issues.apache.org/jira/browse/HBASE-13341 Project: HBase Issue Type: Improvement Reporter: Andrew Purtell Assignee: Andrew Purtell Priority: Minor Fix For: 2.0.0 The API compatibility checker script added in HBASE-12808 passes a file containing annotations to the JavaACC tool. When JavaACC is invoked with that option it will filter out all interfaces that do not have that annotation. We should add a command line option to the compatibility checker which turns off this filtering in case we want to look at the impact of changes to all interfaces, even private ones. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-13372) Unit tests for SplitTransaction and RegionMergeTransaction listeners
Andrew Purtell created HBASE-13372: -- Summary: Unit tests for SplitTransaction and RegionMergeTransaction listeners Key: HBASE-13372 URL: https://issues.apache.org/jira/browse/HBASE-13372 Project: HBase Issue Type: Sub-task Affects Versions: 2.0.0, 1.1.0 Reporter: Andrew Purtell Fix For: 2.0.0, 1.1.0 We have new Listener interfaces in SplitTransaction and RegionMergeTransaction. There are no use cases for these yet, nor unit tests. We should have unit tests for these that do something just a bit nontrivial so as to provide a useful example. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-13380) Cherry pick the HBASE-12808 compatibility checker tool back to 0.98+
Andrew Purtell created HBASE-13380: -- Summary: Cherry pick the HBASE-12808 compatibility checker tool back to 0.98+ Key: HBASE-13380 URL: https://issues.apache.org/jira/browse/HBASE-13380 Project: HBase Issue Type: Task Reporter: Andrew Purtell Assignee: Andrew Purtell Priority: Minor Fix For: 1.1.0, 1.0.2, 0.98.12 The compatibility checker tool added to dev-support by HBASE-12808 can be cleanly cherry picked, in my experience, because it's a self contained change, so let's do this to every active branch that has a dev-support directory so RMs don't have to grab it from master for every release candidate. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-13383) TestRegionServerObserver.testCoprocessorHooksInRegionsMerge zombie after HBASE-12975
Andrew Purtell created HBASE-13383: -- Summary: TestRegionServerObserver.testCoprocessorHooksInRegionsMerge zombie after HBASE-12975 Key: HBASE-13383 URL: https://issues.apache.org/jira/browse/HBASE-13383 Project: HBase Issue Type: Bug Affects Versions: 2.0.0 Reporter: Andrew Purtell Assignee: Andrew Purtell Fix For: 2.0.0 Stuck here: {noformat} "main" prio=10 tid=0x7f3ff4008000 nid=0x6183 waiting on condition [0x7f3ffa49e000] java.lang.Thread.State: TIMED_WAITING (sleeping) at java.lang.Thread.sleep(Native Method) at org.apache.hadoop.hbase.coprocessor.TestRegionServerObserver.testCoprocessorHooksInRegionsMerge(TestRegionServerObserver.java:100) {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-13384) Fix Javadoc warnings introduced by HBASE-12972
Andrew Purtell created HBASE-13384: -- Summary: Fix Javadoc warnings introduced by HBASE-12972 Key: HBASE-13384 URL: https://issues.apache.org/jira/browse/HBASE-13384 Project: HBase Issue Type: Bug Affects Versions: 2.0.0 Reporter: Andrew Purtell Assignee: Andrew Purtell Priority: Trivial Fix For: 2.0.0 Missed these new Javadoc warnings introduced by HBASE-12972 on master: {noformat} [WARNING] Javadoc Warnings [WARNING] /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Region.java:596: warning - Tag @link: reference not found: HConstants#LATEST_TIMESTAMP [WARNING] /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Region.java:596: warning - Tag @link: reference not found: HConstants#LATEST_TIMESTAMP [WARNING] /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Region.java:596: warning - Tag @link: reference not found: HConstants#LATEST_TIMESTAMP [WARNING] /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Region.java:596: warning - Tag @link: reference not found: HConstants#LATEST_TIMESTAMP [WARNING] /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Region.java:596: warning - Tag @link: reference not found: HConstants#LATEST_TIMESTAMP {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-13384) Fix Javadoc warnings introduced by HBASE-12972
[ https://issues.apache.org/jira/browse/HBASE-13384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-13384. Resolution: Fixed Fix Version/s: 1.1.0 > Fix Javadoc warnings introduced by HBASE-12972 > -- > > Key: HBASE-13384 > URL: https://issues.apache.org/jira/browse/HBASE-13384 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0 >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Trivial > Fix For: 2.0.0, 1.1.0 > > Attachments: HBASE-13384.patch > > > Missed these new Javadoc warnings introduced by HBASE-12972 on master: > {noformat} > [WARNING] Javadoc Warnings > [WARNING] > /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Region.java:596: > warning - Tag @link: reference not found: HConstants#LATEST_TIMESTAMP > [WARNING] > /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Region.java:596: > warning - Tag @link: reference not found: HConstants#LATEST_TIMESTAMP > [WARNING] > /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Region.java:596: > warning - Tag @link: reference not found: HConstants#LATEST_TIMESTAMP > [WARNING] > /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Region.java:596: > warning - Tag @link: reference not found: HConstants#LATEST_TIMESTAMP > [WARNING] > /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Region.java:596: > warning - Tag @link: reference not found: HConstants#LATEST_TIMESTAMP > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-13380) Cherry pick the HBASE-12808 compatibility checker tool back to 0.98+
[ https://issues.apache.org/jira/browse/HBASE-13380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-13380. Resolution: Fixed I cherry picked back HBASE-12808 (15a4738), HBASE-13340 (797573e), and HBASE-13341 (1632cd9) to branch-1.0, branch-1, and 0.98. > Cherry pick the HBASE-12808 compatibility checker tool back to 0.98+ > > > Key: HBASE-13380 > URL: https://issues.apache.org/jira/browse/HBASE-13380 > Project: HBase > Issue Type: Task >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Minor > Fix For: 1.1.0, 0.98.13, 1.0.2 > > > The compatibility checker tool added to dev-support by HBASE-12808 can be > cleanly cherry picked, in my experience, because it's a self contained > change, so let's do this to every active branch that has a dev-support > directory so RMs don't have to grab it from master for every release > candidate. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-13390) Document the implications of installing security coprocessors with hbase.security.authorization as false
Andrew Purtell created HBASE-13390: -- Summary: Document the implications of installing security coprocessors with hbase.security.authorization as false Key: HBASE-13390 URL: https://issues.apache.org/jira/browse/HBASE-13390 Project: HBase Issue Type: Sub-task Components: documentation Reporter: Andrew Purtell Assignee: Andrew Purtell Fix For: 2.0.0 Add a new section to the security section of the online manual documenting the implications of installing security coprocessors with hbase.security.authorization set to false in site configuration. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-13391) TestRegionObserverInterface frequently failing on branch-1
Andrew Purtell created HBASE-13391: -- Summary: TestRegionObserverInterface frequently failing on branch-1 Key: HBASE-13391 URL: https://issues.apache.org/jira/browse/HBASE-13391 Project: HBase Issue Type: Bug Reporter: Andrew Purtell TestRegionObserverInterface is frequently failing on branch-1 . Example: {noformat} java.lang.AssertionError: Result of org.apache.hadoop.hbase.coprocessor.SimpleRegionObserver$Legacy.getCtPreWALRestore is expected to be 1, while we get 0 at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.assertTrue(Assert.java:41) at org.apache.hadoop.hbase.coprocessor.TestRegionObserverInterface.verifyMethodResult(TestRegionObserverInterface.java:751) at org.apache.hadoop.hbase.coprocessor.TestRegionObserverInterface.testLegacyRecovery(TestRegionObserverInterface.java:685) {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-13110) TestCoprocessorEndpoint hangs on trunk
[ https://issues.apache.org/jira/browse/HBASE-13110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-13110. Resolution: Cannot Reproduce No longer reproducible on trunk > TestCoprocessorEndpoint hangs on trunk > -- > > Key: HBASE-13110 > URL: https://issues.apache.org/jira/browse/HBASE-13110 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0 >Reporter: Andrew Purtell > > TestCoprocessorEndpoint hangs with repeated RPC retries > (RpcRetryingCallerImpl.callWithRetries) after the ProtobufCoprocessorService > throws the test exception. Looks like a change on trunk has broken > TestCoprocessorEndpoint. > jstack of interest: > {noformat} > "main" prio=5 tid=0x7f87eb003000 nid=0x1303 in Object.wait() > [0x000105173000] >java.lang.Thread.State: TIMED_WAITING (on object monitor) > at java.lang.Object.wait(Native Method) > - waiting on <0x0007c91aedf8> (a > java.util.concurrent.atomic.AtomicBoolean) > at > org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:162) > - locked <0x0007c91aedf8> (a > java.util.concurrent.atomic.AtomicBoolean) > at > org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel.callExecService(RegionCoprocessorRpcChannel.java: > 95) > at > org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callBlockingMethod(CoprocessorRpcChannel.java:73) > at > org.apache.hadoop.hbase.ipc.protobuf.generated.TestRpcServiceProtos$TestProtobufRpcProto$BlockingStub.error(TestRpcServiceProtos.java:378) > at > org.apache.hadoop.hbase.coprocessor.TestCoprocessorEndpoint.testCoprocessorError(TestCoprocessorEndpoint.java:308) > {noformat} > Tail of the log has entries like: > {noformat} > 2015-02-25 18:50:03,659 DEBUG > [B.defaultRpcServer.handler=3,queue=0,port=56093] ipc.CallRunner(110): > B.defaultRpcServer.handler=3,queue=0,port=56093: callId: 75 service: > ClientService methodName: ExecService size: 141 connection: 10.3.31.30:56149 > java.io.IOException: Test exception > at 
> org.apache.hadoop.hbase.coprocessor.ProtobufCoprocessorService.error(ProtobufCoprocessorService.java:64) > at > org.apache.hadoop.hbase.ipc.protobuf.generated.TestRpcServiceProtos$TestProtobufRpcProto.callMethod(TestRpcServiceProtos.java:210) > at > org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:6883) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1696) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1678) > at > org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31309) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2038) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107) > at > org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130) > at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107) > at java.lang.Thread.run(Thread.java:745) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-13399) HBase Snapshot export to S3 fails with Content-MD5 errors.
[ https://issues.apache.org/jira/browse/HBASE-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-13399. Resolution: Duplicate > HBase Snapshot export to S3 fails with Content-MD5 errors. > -- > > Key: HBASE-13399 > URL: https://issues.apache.org/jira/browse/HBASE-13399 > Project: HBase > Issue Type: Bug > Components: Filesystem Integration, hadoop2 >Affects Versions: 0.98.0 > Environment: CentOS 6.5, Hortonworks Data Platform 2.1.2, Hadoop 2.4.0 >Reporter: Joseph Reid > > We're running into issues exporting snapshots of large tables to Amazon S3. > The snapshot completes successfully, but the snapshot export job runs into > errors with jets3t when we attempt to export to S3. > Error snippet, from job log: > {code} > 2015-04-03 16:59:16,425 INFO [main] mapreduce.Job: Task Id : > attempt_1426532296228_55454_m_08_1, Status : FAILED > Error: org.apache.hadoop.fs.s3.S3Exception: > org.jets3t.service.S3ServiceException: S3 Error Message. -- ResponseCode: > 400, ResponseStatus: Bad Request, XML Error Message: encoding="UTF-8"?>BadDigestThe Content-MD5 you > specified did not match what we > received.CWiSsgzVAJyzPy2oT8u4Ag==2DIsv6jZJ8FuGtalOO8SPA==CA325C738970C313tnE+O1zPZovaQWMhCuM4lkX0h/wN9173FQ7omxZzLb6eH0OCHASyan+mb8WBJkNn > at > org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.handleS3ServiceException(Jets3tNativeFileSystemStore.java:405) > at > org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.storeFile(Jets3tNativeFileSystemStore.java:115) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) > at > 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) > at org.apache.hadoop.fs.s3native.$Proxy19.storeFile(Unknown Source) > at > org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsOutputStream.close(NativeS3FileSystem.java:221) > at > org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:70) > at > org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:103) > at > org.apache.hadoop.hbase.snapshot.ExportSnapshot$ExportMapper.copyFile(ExportSnapshot.java:200) > at > org.apache.hadoop.hbase.snapshot.ExportSnapshot$ExportMapper.map(ExportSnapshot.java:140) > at > org.apache.hadoop.hbase.snapshot.ExportSnapshot$ExportMapper.map(ExportSnapshot.java:89) > at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145) > at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764) > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340) > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1557) > at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162) > Caused by: org.jets3t.service.S3ServiceException: S3 Error Message. -- > ResponseCode: 400, ResponseStatus: Bad Request, XML Error Message: version="1.0" encoding="UTF-8"?>BadDigestThe > Content-MD5 you specified did not match what we > received.CWiSsgzVAJyzPy2oT8u4Ag==2DIsv6jZJ8FuGtalOO8SPA==CA325C738970C313tnE+O1zPZovaQWMhCuM4lkX0h/wN9173FQ7omxZzLb6eH0OCHASyan+mb8WBJkNn > at org.jets3t.service.S3Service.putObject(S3Service.java:2267) > at > org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.storeFile(Jets3tNativeFileSystemStore.java:113) > ... 
21 more > 2015-04-03 17:03:50,613 INFO [main] mapreduce.Job: Task Id : > attempt_1426532296228_55454_m_10_1, Status : FAILED > AttemptID:attempt_1426532296228_55454_m_10_1 Timed out after 300 secs > {code} > We've verified that exports to other clusters from these same snapshots work > fine. Thus the issue appears to lie within the snapshot export utility, > jets3t, and S3. > "The Content-MD5 you specified did not match what we received" seems to > indicate that the snapshot changed between when the upload started and the > error. Can that be? > Related to: > [Discussion on jets3t user > group|https://groups.google.com/forum/#!topic/jets3t-users/Bg2qh7OdE2U] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-13409) Add categories to uncategorized tests
Andrew Purtell created HBASE-13409: -- Summary: Add categories to uncategorized tests Key: HBASE-13409 URL: https://issues.apache.org/jira/browse/HBASE-13409 Project: HBase Issue Type: Bug Affects Versions: 2.0.0, 1.1.0 Reporter: Andrew Purtell Assignee: Andrew Purtell Priority: Trivial Fix For: 2.0.0, 1.1.0 A couple tests without categories were flagged recently by TestCheckTestClasses in a precommit build. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-4706) HBASE-4120 Create shell or portal tool for user to manage priority and group
[ https://issues.apache.org/jira/browse/HBASE-4706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-4706. --- Resolution: Later Assignee: (was: Liu Jia) Archiving stale subtask of issue with resolution of 'Later'. We can reopen if someone wants to refresh it > HBASE-4120 Create shell or portal tool for user to manage priority and group > > > Key: HBASE-4706 > URL: https://issues.apache.org/jira/browse/HBASE-4706 > Project: HBase > Issue Type: Sub-task > Components: IPC/RPC, shell >Affects Versions: 0.92.0 >Reporter: Liu Jia > Attachments: TablePriorityJamon.patch, TablePriorityShell_v1.patch > > Original Estimate: 504h > Remaining Estimate: 504h > > Add a tool for user to manage the functions provided by HBase-4120 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-5170) The web tools of region server groups
[ https://issues.apache.org/jira/browse/HBASE-5170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-5170. --- Resolution: Later Archiving stale subtask of issue with resolution of 'Later'. We can reopen if someone wants to refresh it > The web tools of region server groups > - > > Key: HBASE-5170 > URL: https://issues.apache.org/jira/browse/HBASE-5170 > Project: HBase > Issue Type: Sub-task > Components: master, regionserver >Reporter: Liu Jia > Attachments: GroupOfRegionServerWebTool.patch > > > The web pages which allow users to perform some group management operations > including add/delete group, move > in/out servers,change table's group attribute ,balance groups, balance tables. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-2993) Refactor servers to use a common lifecycle interface
[ https://issues.apache.org/jira/browse/HBASE-2993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-2993. --- Resolution: Incomplete Assignee: (was: Todd Lipcon) > Refactor servers to use a common lifecycle interface > > > Key: HBASE-2993 > URL: https://issues.apache.org/jira/browse/HBASE-2993 > Project: HBase > Issue Type: Improvement > Components: master, regionserver >Affects Versions: 0.90.0 >Reporter: Todd Lipcon > > In current trunk, the region server is a Runnable and the Master is a thread. > We have all kinds of weird wrappers like JVMClusterUtil to try to work around > this. It would be nice if they both implemented the same interface - > LocalHBaseCluster and the MiniCluster would be a lot easier to understand as > well, and we could share some more code between them. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-2715) Concise error message when attempting to connect to ZK on the wrong port
[ https://issues.apache.org/jira/browse/HBASE-2715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-2715. --- Resolution: Incomplete > Concise error message when attempting to connect to ZK on the wrong port > > > Key: HBASE-2715 > URL: https://issues.apache.org/jira/browse/HBASE-2715 > Project: HBase > Issue Type: Improvement > Components: Zookeeper >Reporter: Jeff Hammerbacher > > If we try to connect to ZK running on the wrong port, we generate a lot of > spew: http://gist.github.com/434943. It would be good to catch this case and > suggest to the user why things may have gone wrong (e.g. you've tried to > connect to the wrong port for ZK) and how to check if ZK is actually running > on that port (e.g. {{echo 'ruok' | nc localhost 2181}}) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
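The liveness check suggested above ({{echo 'ruok' | nc localhost 2181}}) maps directly onto ZooKeeper's four-letter-word protocol. A minimal illustrative helper, not HBase code (host and port defaults are assumptions):

```python
import socket

def zk_ruok(host="localhost", port=2181, timeout=2.0):
    """Send ZooKeeper's 'ruok' four-letter command; True iff the server answers 'imok'."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(b"ruok")
            return s.recv(16).strip() == b"imok"
    except OSError:
        # Covers connection refused, DNS failure, and timeouts alike --
        # exactly the cases where a concise error message would help.
        return False
```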
[jira] [Resolved] (HBASE-2709) Allow split type (distributed,master only) to be configurable
[ https://issues.apache.org/jira/browse/HBASE-2709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-2709. --- Resolution: Incomplete > Allow split type (distributed,master only) to be configurable > - > > Key: HBASE-2709 > URL: https://issues.apache.org/jira/browse/HBASE-2709 > Project: HBase > Issue Type: Sub-task >Reporter: Alex Newman > Original Estimate: 2h > Time Spent: 2h > Remaining Estimate: 0h > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-3910) acid-semantics.html - clarify some of the concepts
[ https://issues.apache.org/jira/browse/HBASE-3910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-3910. --- Resolution: Incomplete Assignee: (was: Doug Meil) > acid-semantics.html - clarify some of the concepts > -- > > Key: HBASE-3910 > URL: https://issues.apache.org/jira/browse/HBASE-3910 > Project: HBase > Issue Type: Improvement > Components: documentation > Environment: Any. >Reporter: Doug Meil >Priority: Minor > Labels: documentation > > Inspired from HBASE-3903 regarding the acid-semantics page. > 2) IMHO, the documentation at http://hbase.apache.org/acid-semantics.html has > some weak points that need clarification, for example: > (a) Visibility: When a client receives a "success" response for any > mutation, that mutation is immediately visible to both that client and any > client with whom it later communicates through side channels. > Here, what is a side channel exactly? > (b) Durability: All reasonable failure scenarios will not affect any > of the guarantees of this document. > Here, what is a reasonable failure scenario? > Thanks, > Tallat -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-3439) Increment operations should support setting a TimeRange per family
[ https://issues.apache.org/jira/browse/HBASE-3439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-3439. --- Resolution: Not A Problem Assignee: (was: Jonathan Gray) > Increment operations should support setting a TimeRange per family > -- > > Key: HBASE-3439 > URL: https://issues.apache.org/jira/browse/HBASE-3439 > Project: HBase > Issue Type: New Feature >Affects Versions: 0.90.0 >Reporter: Jonathan Gray > Attachments: HBASE-3439-v1.patch, HBASE-3439-v2.patch > > > An optimization was added to Increment operations that allowed you to specify > a TimeRange for the operation. We have a case where some families in a row > are "hourly" counters but others are "lifetime" counters. We need to be able > to specify different TimeRanges for each family. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
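The per-family TimeRange idea above can be modeled as a map from family to an allowed half-open [min, max) interval that filters cells before they are read. A toy sketch of the semantics, not the HBase Increment API:

```python
def visible_cells(cells, family_ranges):
    """cells: list of (family, qualifier, timestamp, value) tuples.
    family_ranges: family -> (min_ts, max_ts) half-open interval; families
    without an entry are unrestricted."""
    out = []
    for family, qualifier, ts, value in cells:
        lo, hi = family_ranges.get(family, (0, float("inf")))
        if lo <= ts < hi:
            out.append((family, qualifier, ts, value))
    return out
```

This captures the use case in the report: an "hourly" family gets a narrow range while a "lifetime" family stays unrestricted, within one operation.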
[jira] [Resolved] (HBASE-2704) Cleanup HColumnDescriptor
[ https://issues.apache.org/jira/browse/HBASE-2704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-2704. --- Resolution: Incomplete > Cleanup HColumnDescriptor > - > > Key: HBASE-2704 > URL: https://issues.apache.org/jira/browse/HBASE-2704 > Project: HBase > Issue Type: Improvement > Components: Client >Reporter: Jonathan Gray >Priority: Critical > > HColumnDescriptor is very old. The class comment has stale information that > no longer applies (that you cannot modify a column, er family, without > deleting it and recreating it). It is also called HColumnDescriptor rather > than HFamilyDescriptor and the related methods all say "column" instead of > "family" like those in HBaseAdmin. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-3148) Duplicate check table name in HBaseAdmin's createTable method
[ https://issues.apache.org/jira/browse/HBASE-3148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-3148. --- Resolution: Not A Problem > Duplicate check table name in HBaseAdmin's createTable method > - > > Key: HBASE-3148 > URL: https://issues.apache.org/jira/browse/HBASE-3148 > Project: HBase > Issue Type: Improvement > Components: Client >Reporter: Jeff Zhang > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-2804) [replication] Support ICVs in a master-master setup
[ https://issues.apache.org/jira/browse/HBASE-2804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-2804. --- Resolution: Not A Problem > [replication] Support ICVs in a master-master setup > --- > > Key: HBASE-2804 > URL: https://issues.apache.org/jira/browse/HBASE-2804 > Project: HBase > Issue Type: New Feature > Components: Replication >Reporter: Jean-Daniel Cryans > > Currently an ICV ends up as a Put in the HLogs, which ReplicationSource ships > to ReplicationSink that in turn only recreates the Put and not the ICV > itself. This means that in a master-master replication setup where the same > counters are implemented on both side, the Puts will actually overwrite each > other. > We need to find a way to support this use case. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
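The asymmetry described above — shipping the Put that an increment produced rather than the increment itself — is easy to see with two counter replicas. A toy model, not ReplicationSink code:

```python
def apply_put(replica, key, value):
    # Replaying the materialized Put overwrites whatever the peer already has.
    replica[key] = value

def apply_increment(replica, key, delta):
    # Replaying the delta commutes, so concurrent increments both survive.
    replica[key] = replica.get(key, 0) + delta
```

If both masters increment the same counter by 1, shipping the resulting Puts (each valued 1) leaves both sides at 1 — one increment is lost — whereas shipping the deltas converges both sides to 2.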
[jira] [Resolved] (HBASE-2368) BulkPut - Writable class compatible with TableRecordWriter for bulk puts agnostic of region server mapping at Mapper/Combiner level
[ https://issues.apache.org/jira/browse/HBASE-2368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-2368. --- Resolution: Incomplete > BulkPut - Writable class compatible with TableRecordWriter for bulk puts > agnostic of region server mapping at Mapper/Combiner level > > > Key: HBASE-2368 > URL: https://issues.apache.org/jira/browse/HBASE-2368 > Project: HBase > Issue Type: Improvement > Components: Client >Reporter: Karthik K > Attachments: HBASE-2368.patch > > > TableRecordWriter currently accepts only a put/delete as writables. Some > mapper processes might want to consolidate the 'put's and insert them in > bulk. Useful in combiners / mappers - to send across a bunch of puts from one > stage to another, while maintaining a very similar region-server-mapping > agnostic api at respective levels. > New type - BulkPut (Writable) introduced that is just a consolidation of > Puts. Eventually, the TableRecordWriter bulk inserts the puts together into > the hbase eco-system. > Patch made against trunk only. But since it does not break any backward > compatibility, it can be a useful addition to the branch as well. > Let me know your comments on the same. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-3471) parallelize flush(), split() and compact() in HBaseAdmin
[ https://issues.apache.org/jira/browse/HBASE-3471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-3471. --- Resolution: Incomplete Incomplete, and we have been / will be tackling parallelization of this stuff with Procedure or Procedure v2. > parallelize flush(), split() and compact() in HBaseAdmin > > > Key: HBASE-3471 > URL: https://issues.apache.org/jira/browse/HBASE-3471 > Project: HBase > Issue Type: Improvement > Components: master >Affects Versions: 0.90.0 >Reporter: Ted Yu > > HBaseAdmin.flush() uses a loop to go over List<Pair<HRegionInfo, HServerAddress>> > Executor service can be used for higher parallelism > Same goes with split() and compact() -- This message was sent by Atlassian JIRA (v6.3.4#6332)
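The executor-based parallelism requested above is easy to sketch; this is illustrative only (HBase ultimately tackled it with Procedure v2, per the resolution), with `flush_one` standing in for the per-region admin RPC:

```python
from concurrent.futures import ThreadPoolExecutor

def flush_all(regions, flush_one, workers=8):
    """Issue the per-region flush calls concurrently instead of in a serial
    loop; any RPC error surfaces when the results are collected."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(flush_one, regions))
```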
[jira] [Resolved] (HBASE-3526) Tables with TTL should be able to prune memstore w/o flushing
[ https://issues.apache.org/jira/browse/HBASE-3526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-3526. --- Resolution: Not A Problem > Tables with TTL should be able to prune memstore w/o flushing > - > > Key: HBASE-3526 > URL: https://issues.apache.org/jira/browse/HBASE-3526 > Project: HBase > Issue Type: Improvement > Components: regionserver >Affects Versions: 0.90.0 >Reporter: ryan rawson > > If you have a table with TTL, the memstore will grow until it hits flush > size, at which point the flush code will prune the KVs going to hfile. If you > have a small TTL, it may not be necessary to flush, since pruning data in > memory would ensure that we never grow too big. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
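The in-memory pruning proposed above amounts to dropping expired cells without writing an hfile. A toy sketch of the idea, not the memstore implementation:

```python
def prune_expired(memstore, ttl_ms, now_ms):
    """memstore: list of (key, write_ts_ms, value) cells. Keep only cells
    still inside the TTL window, reclaiming memory without a flush."""
    cutoff = now_ms - ttl_ms
    return [kv for kv in memstore if kv[1] >= cutoff]
```

With a small TTL, periodic pruning like this could keep the memstore bounded so the flush-size trigger is never reached.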
[jira] [Resolved] (HBASE-3415) When scanners have readers updated we should use original file selection algorithm rather than include all files
[ https://issues.apache.org/jira/browse/HBASE-3415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-3415. --- Resolution: Incomplete > When scanners have readers updated we should use original file selection > algorithm rather than include all files > > > Key: HBASE-3415 > URL: https://issues.apache.org/jira/browse/HBASE-3415 > Project: HBase > Issue Type: Bug > Components: regionserver >Affects Versions: 0.90.0 >Reporter: Jonathan Gray > Attachments: HBASE-3415-v1.patch > > > Currently when a {{StoreScanner}} is instantiated we use a {{getScanner(scan, > columns)}} call that looks at things like bloom filters and memstore only > flags. But when we get a changed readers notification, we use > {{getScanner()}} which just grabs everything. > We should always use the original file selection algorithm. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-2809) Accounting of ReplicationSource's memory usage
[ https://issues.apache.org/jira/browse/HBASE-2809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-2809. --- Resolution: Invalid Assignee: (was: Jean-Daniel Cryans) > Accounting of ReplicationSource's memory usage > -- > > Key: HBASE-2809 > URL: https://issues.apache.org/jira/browse/HBASE-2809 > Project: HBase > Issue Type: Improvement >Reporter: Jean-Daniel Cryans > > A lot of data is going through the ReplicationSources, we need to take it > into our general accounting. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-1667) hbase-daemon.sh stop master should only stop the master, not the cluster
[ https://issues.apache.org/jira/browse/HBASE-1667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-1667. --- Resolution: Not A Problem > hbase-daemon.sh stop master should only stop the master, not the cluster > > > Key: HBASE-1667 > URL: https://issues.apache.org/jira/browse/HBASE-1667 > Project: HBase > Issue Type: Improvement > Components: master, scripts >Affects Versions: 0.20.0 >Reporter: Rong-En Fan > > 0.20 supports multi masters. However, > bin/hbase-daemon.sh stop master > on backup masters will bring the whole cluster down. > Per rolling upgrade wiki that stack pointed out, kill -9 for backup master is > the only way to go currently. > I think it's better to make some sort of magic that we can use something like > bin/hbase-daemon.sh stop master > to properly stop either the backup master or the whole cluster. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-3302) To prevent possible overrun of block cache with CacheOnWrite, add safeguard so we reject blocks if completely full
[ https://issues.apache.org/jira/browse/HBASE-3302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-3302. --- Resolution: Incomplete Assignee: (was: Jonathan Gray) > To prevent possible overrun of block cache with CacheOnWrite, add safeguard > so we reject blocks if completely full > -- > > Key: HBASE-3302 > URL: https://issues.apache.org/jira/browse/HBASE-3302 > Project: HBase > Issue Type: Improvement > Components: io, regionserver >Reporter: Jonathan Gray > > With the aggressive caching when CacheOnWrite is turned on, and given the > current LRU architecture, there's potential (though low probability) we could > overrun the block cache capacity by caching faster than we can evict. > Currently the block cache triggers eviction at 85% capacity. If somehow we > attempt to cache a block but the cache is at 100% capacity, we should reject > caching of that block. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
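The safeguard described above can be sketched as a size check before insertion: eviction still triggers at 85% capacity, but a block that would push usage past 100% is simply rejected. A toy model, not LruBlockCache:

```python
class BoundedCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0
        self.blocks = {}  # key -> size, in insertion order (stand-in for LRU order)

    def cache_block(self, key, size):
        if self.used + size > self.capacity:
            return False  # reject rather than overrun the cache
        self.blocks[key] = size
        self.used += size
        while self.used > 0.85 * self.capacity:  # normal eviction threshold
            old_key, old_size = next(iter(self.blocks.items()))
            del self.blocks[old_key]
            self.used -= old_size
        return True
```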
[jira] [Resolved] (HBASE-1837) Fix results contract (If row has no results, return null, if Result has no results return null or empty Sets and Arrays?)
[ https://issues.apache.org/jira/browse/HBASE-1837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-1837. --- Resolution: Incomplete > Fix results contract (If row has no results, return null, if Result has no > results return null or empty Sets and Arrays?) > - > > Key: HBASE-1837 > URL: https://issues.apache.org/jira/browse/HBASE-1837 > Project: HBase > Issue Type: Task >Reporter: stack > > Make sure we are consistent regards results contract. As jgray says: > {code} > 17:47 < jgray> decisions are things like, if the result is empty do we return > nulls or do we return empty >lists/0-length arrays > 17:47 < jgray> if result is empty, do we return null for row? > 17:47 < jgray> and if row is the null row, we then return zero-length byte[0] > 17:48 < St^Ack_> So, if row is empty, we return null (I believe) > 17:48 < jgray> yes > 17:49 < St^Ack_> If you have a result, up to this, if empty, it would not > return null stuff. > 17:49 < jgray> no it did return null stuff > 17:49 < jgray> at least many of them did > 17:49 < St^Ack_> oh.. ok. > 17:49 < jgray> but then my result delayed deserialization broke that on one > case > 17:49 < St^Ack_> I thought I'd added it w/ 1836? > 17:49 < jgray> yeah u fixed what i broke, i think > 17:50 < jgray> but we should nail down the contract, specify what it is in > javadoc, and add unit tests to verify such > ... > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
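The contract being nailed down in the IRC excerpt can be stated as one small invariant. A hypothetical sketch of the convention, not HBase's Result API:

```python
def get_row(table, row):
    """Convention under discussion: return None when the row does not exist
    at all; return an empty list (never None) when the row exists but no
    cells matched. Callers then need exactly one null check."""
    if row not in table:
        return None
    return list(table[row])
```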
[jira] [Resolved] (HBASE-1764) Change hfile scanner seek to find wanted key or the one AFTER rather than BEFORE as it currently does
[ https://issues.apache.org/jira/browse/HBASE-1764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-1764. --- Resolution: Not A Problem > Change hfile scanner seek to find wanted key or the one AFTER rather than > BEFORE as it currently does > - > > Key: HBASE-1764 > URL: https://issues.apache.org/jira/browse/HBASE-1764 > Project: HBase > Issue Type: Improvement >Reporter: stack > > Working on new getClosestAtOrBefore in HBASE-1761, the way hfile scanner > works where it gets the asked for key or the one just before makes for our > doing more work than we should. See the code in Store where we want to get > to first element in row. We have to go to the row before and then walk > forward skipping the item that is in row before. > Consider flipping catalog tables to be regions by end-key rather than > start-key at same time (RE: conversation had on the saturday night at the > SUSF hackathon). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
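The requested seek semantics — land on the wanted key or the first key after it, instead of the one just before — is a lower-bound search. A sketch over a sorted key list, not the HFile scanner itself:

```python
import bisect

def seek_at_or_after(keys, wanted):
    """keys must be sorted. Returns the index of `wanted` if present, else
    the index of the first key greater than it (len(keys) when past the end),
    so no skip-the-previous-item walk is needed."""
    return bisect.bisect_left(keys, wanted)
```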
[jira] [Resolved] (HBASE-3479) javadoc: Filters/Scan/Get should be explicit about the requirement to add columns you are filtering on
[ https://issues.apache.org/jira/browse/HBASE-3479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-3479. --- Resolution: Invalid > javadoc: Filters/Scan/Get should be explicit about the requirement to add > columns you are filtering on > -- > > Key: HBASE-3479 > URL: https://issues.apache.org/jira/browse/HBASE-3479 > Project: HBase > Issue Type: Bug >Affects Versions: 0.90.0 >Reporter: ryan rawson > > improve our javadoc! -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-3306) Add better cache hit ratio statistics for LRU
[ https://issues.apache.org/jira/browse/HBASE-3306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-3306. --- Resolution: Incomplete Assignee: (was: Jonathan Gray) > Add better cache hit ratio statistics for LRU > - > > Key: HBASE-3306 > URL: https://issues.apache.org/jira/browse/HBASE-3306 > Project: HBase > Issue Type: Improvement > Components: io, regionserver >Reporter: Jonathan Gray > Attachments: HBASE-3306-v1.patch > > > Currently the hit ratio is a lifetime ratio. We should have some kind of > rolling window stats, like ratio over the past hour. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
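A rolling-window ratio like the one requested above can be kept with a bounded deque of recent lookups. An illustrative sketch, not the HBase metrics code:

```python
from collections import deque

class RollingHitRatio:
    """Hit ratio over the last `window` lookups rather than over the
    cache's lifetime."""
    def __init__(self, window=1000):
        self.events = deque(maxlen=window)  # True = hit, False = miss

    def record(self, hit):
        self.events.append(bool(hit))

    def ratio(self):
        return sum(self.events) / len(self.events) if self.events else 0.0
```

A time-based variant (ratio over the past hour) would bucket counts by timestamp instead, but the bounded-window idea is the same.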
[jira] [Resolved] (HBASE-3289) EvictOnClose should be disabled when closing parent of split
[ https://issues.apache.org/jira/browse/HBASE-3289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-3289. --- Resolution: Incomplete > EvictOnClose should be disabled when closing parent of split > > > Key: HBASE-3289 > URL: https://issues.apache.org/jira/browse/HBASE-3289 > Project: HBase > Issue Type: Improvement > Components: io, regionserver >Reporter: Jonathan Gray > > EvictOnClose is now on by default. It doesn't make sense to evictOnClose > when we close the parent files during a split. In that case, we will always > open the files back up with Half readers on the same server immediately after > closing them. > Since we will do full reads of these files when we compact on the other side > of the split, we should not evict the blocks when we close the initial parent. > When both daughters of the original parent files have closed those files, > then they should be evicted. This part is a bit tricky. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-2507) HTable#flushCommits() - event notification handler
[ https://issues.apache.org/jira/browse/HBASE-2507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-2507. --- Resolution: Incomplete > HTable#flushCommits() - event notification handler > --- > > Key: HBASE-2507 > URL: https://issues.apache.org/jira/browse/HBASE-2507 > Project: HBase > Issue Type: Improvement > Components: Coprocessors >Reporter: Karthik K > Labels: moved_from_0_20_5 > Attachments: HBASE-2507.patch > > > Event notification handler code when flushing commits on the client side. By > default is null. > New Class - CommitEventHandler , HTableCommitEvent . > notification data - preSize, postSize . -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-3875) Its not good that our test depends on the network and a working DNS
[ https://issues.apache.org/jira/browse/HBASE-3875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-3875. --- Resolution: Later > Its not good that our test depends on the network and a working DNS > --- > > Key: HBASE-3875 > URL: https://issues.apache.org/jira/browse/HBASE-3875 > Project: HBase > Issue Type: Improvement >Affects Versions: 0.90.2 >Reporter: gaojinchao >Priority: Minor > > These cases depend on the network; they need improvement. > TestClockSkewDetection > testBadOriginalRootLocation > testScanner > TestCatalogTracker -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-1530) Optimize changed readers notification to only update what has changed rather than rebuild entire KV heap
[ https://issues.apache.org/jira/browse/HBASE-1530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-1530. --- Resolution: Not A Problem > Optimize changed readers notification to only update what has changed rather > than rebuild entire KV heap > > > Key: HBASE-1530 > URL: https://issues.apache.org/jira/browse/HBASE-1530 > Project: HBase > Issue Type: Improvement > Components: regionserver >Affects Versions: 0.20.0 >Reporter: Jonathan Gray > > As discussed in HBASE-1207 and to build on what was implemented in > HBASE-1503, we should modify the existing KeyValueHeap in place by reusing > any open scanners possible. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-2847) Put added after a delete is overshadowed if its timestamp is older than that of the tombstone
[ https://issues.apache.org/jira/browse/HBASE-2847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-2847. --- Resolution: Incomplete Incomplete here. Rehashed several times in subsequent issues. > Put added after a delete is overshadowed if its timestamp is older than > that of the tombstone > -- > > Key: HBASE-2847 > URL: https://issues.apache.org/jira/browse/HBASE-2847 > Project: HBase > Issue Type: Bug >Reporter: stack > > If we delete a row and then at a later time add to the row a cell that has a > timestamp that is older than the delete, the addition will not be seen; the > tombstone will prevent the newer addition being returned. > IMO, this is non-intuitive. We should fix. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
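The overshadowing behavior is easiest to see as a rule over timestamps: a tombstone masks any put on the same row with an equal or older timestamp, regardless of when the put actually arrived. A toy model of the semantics being questioned, not region-server code:

```python
def visible_puts(puts, tombstones):
    """puts, tombstones: lists of (row, timestamp). A put is hidden by any
    tombstone on the same row whose timestamp is >= the put's, no matter the
    wall-clock order in which the operations arrived."""
    return [(row, ts) for row, ts in puts
            if not any(r == row and dts >= ts for r, dts in tombstones)]
```

So a put written after the delete in wall-clock time, but carrying timestamp 5, is still masked by a tombstone at timestamp 7 — the non-intuitive case the report objects to.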
[jira] [Resolved] (HBASE-3980) assembly should not be bound to package lifecycle
[ https://issues.apache.org/jira/browse/HBASE-3980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-3980. --- Resolution: Incomplete > assembly should not be bound to package lifecycle > - > > Key: HBASE-3980 > URL: https://issues.apache.org/jira/browse/HBASE-3980 > Project: HBase > Issue Type: Improvement > Components: build >Reporter: Alejandro Abdelnur > > The current binding slows down significantly tasks like 'mvn install' when > you just want to push hbase JAR to the local maven cache. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-4103) master.jsp - add region-metrics to region-table
[ https://issues.apache.org/jira/browse/HBASE-4103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-4103. --- Resolution: Invalid > master.jsp - add region-metrics to region-table > --- > > Key: HBASE-4103 > URL: https://issues.apache.org/jira/browse/HBASE-4103 > Project: HBase > Issue Type: Improvement >Reporter: Doug Meil >Priority: Minor > > It would be nice, if possible, to get the region-server metrics in another > column in the RegionServer table at the bottom of master.jsp. For instance, > seeing the compaction-queues across all the RegionServers, etc., would be > useful. > Granted, frameworks like OpenTSDB can probably do this all out of the box, > but having a simple cross-cluster view that is built into HBase would be > helpful. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-3954) Pluggable block index
[ https://issues.apache.org/jira/browse/HBASE-3954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-3954. --- Resolution: Later > Pluggable block index > - > > Key: HBASE-3954 > URL: https://issues.apache.org/jira/browse/HBASE-3954 > Project: HBase > Issue Type: Improvement >Reporter: Jason Rutherglen > Attachments: HBASE-3954.patch, HBASE-3954.patch, HBASE-3954.patch > > > Make a pluggable block index system. The default implementation will be the > current one. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-4066) Provide a separate configuration for .META. major compaction periods
[ https://issues.apache.org/jira/browse/HBASE-4066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-4066. --- Resolution: Not A Problem > Provide a separate configuration for .META. major compaction periods > > > Key: HBASE-4066 > URL: https://issues.apache.org/jira/browse/HBASE-4066 > Project: HBase > Issue Type: Improvement > Components: regionserver >Affects Versions: 0.90.0 >Reporter: Harsh J > > Right now, major compaction interval settings affects ALL the tables > including .META. and -ROOT- as well. > It would be a good addition to let .META. compaction intervals be managed > separately with its own configuration so it isn't a hassle having to do it > manually like the rest of the tables. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-2805) [shell] Add support for 'x = get "TABLENAME", ...'
[ https://issues.apache.org/jira/browse/HBASE-2805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-2805. --- Resolution: Fixed This was done elsewhere IIRC > [shell] Add support for 'x = get "TABLENAME", ...' > -- > > Key: HBASE-2805 > URL: https://issues.apache.org/jira/browse/HBASE-2805 > Project: HBase > Issue Type: Improvement >Reporter: stack > > In the shell, if you do a get, it emits the content on STDOUT. It'd be > better if this behavior only happened if you did not supply an 'x = ' prefix. > In this latter case, x would hold the Result returned by the get. This > kinda behavior should come across as natural enough. For example if you > fire up the python interpreter, if no variable supplied to catch results, > then content is emitted on STDOUT. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-4133) Enhance hbck with snapshot and diff capability
[ https://issues.apache.org/jira/browse/HBASE-4133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-4133. --- Resolution: Invalid Not clear how what's described in the OP could be useful. Resolving as invalid. > Enhance hbck with snapshot and diff capability > -- > > Key: HBASE-4133 > URL: https://issues.apache.org/jira/browse/HBASE-4133 > Project: HBase > Issue Type: Improvement >Reporter: Ted Yu > > One desirable feature for hbck is snapshot. > hbck found 650 inconsistencies for 0.90.3 in our staging cluster. I upgraded > to 0.90.4 > It would be nice if I can take snapshot of the inconsistencies so that I can > detect new problems (delta) after we run 0.90.4 for some time using the diff > capability. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-4123) Add support for BT locality groups
[ https://issues.apache.org/jira/browse/HBASE-4123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-4123. --- Resolution: Invalid Super light on detail. Really should mark as Invalid, so... > Add support for BT locality groups > -- > > Key: HBASE-4123 > URL: https://issues.apache.org/jira/browse/HBASE-4123 > Project: HBase > Issue Type: New Feature >Reporter: stack > > BT has locality groups. HBase doesn't. The tail of HBASE-4119 describes > some of what locality groups are. Todd then dumps out advantages of locality > groups. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-4154) LoadIncrementalHFiles.doBulkLoad should validate the data being loaded
[ https://issues.apache.org/jira/browse/HBASE-4154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-4154. --- Resolution: Fixed We do have an option to do some limited validation of bulk loaded files in recent code > LoadIncrementalHFiles.doBulkLoad should validate the data being loaded > -- > > Key: HBASE-4154 > URL: https://issues.apache.org/jira/browse/HBASE-4154 > Project: HBase > Issue Type: Improvement > Components: mapreduce >Affects Versions: 0.90.3 >Reporter: David Capwell > > LoadIncrementalHFiles.doBulkLoad currently checks if the HDFS Path matches > the table you are uploading to but it doesn't validate that the contents of > the data belong to the table. > This can be an issue if the HFile contains multiple families or invalid > families. -- This message was sent by Atlassian JIRA (v6.3.4#6332)