[jira] [Created] (HBASE-15039) HMaster and RegionServers should try to refresh token keys from zk when face InvalidToken.
Yong Zhang created HBASE-15039:
----------------------------------

Summary: HMaster and RegionServers should try to refresh token keys from zk when face InvalidToken.
Key: HBASE-15039
URL: https://issues.apache.org/jira/browse/HBASE-15039
Project: HBase
Issue Type: Bug
Reporter: Yong Zhang
Assignee: Yong Zhang

One of the HMaster and RegionServers is the token key master and the others are key slaves; the key master writes keys to ZooKeeper and the key slaves read them. If there is any disconnection between a key slave and ZooKeeper, that HMaster or RegionServer may miss new token keys, and clients that use token authentication will get an InvalidToken exception.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
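The refresh-and-retry behavior the issue asks for can be sketched in plain Java. This is a hypothetical reduction, not HBase's actual secret-manager code: the static map stands in for the znode contents written by the key master, the instance map for a slave's possibly stale local cache, and all class and method names are invented for illustration.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: a key slave that, on an unknown key id, re-reads the key set from
// the coordination store (standing in for ZooKeeper) before rejecting the
// token, instead of immediately failing with InvalidToken.
public class KeyRefreshSketch {
    // Stand-in for the znode contents published by the key master.
    static final Map<Integer, String> zkKeys = new HashMap<>();
    // The slave's possibly stale local cache.
    final Map<Integer, String> localKeys = new HashMap<>();

    String keyForToken(int keyId) {
        String key = localKeys.get(keyId);
        if (key == null) {
            // Cache miss: refresh once from the store before failing. A slave
            // that missed a ZooKeeper watch event recovers here.
            localKeys.putAll(zkKeys);
            key = localKeys.get(keyId);
        }
        if (key == null) {
            throw new IllegalStateException("InvalidToken: unknown key id " + keyId);
        }
        return key;
    }

    public static void main(String[] args) {
        zkKeys.put(7, "secret-7");                       // key master publishes a new key
        KeyRefreshSketch slave = new KeyRefreshSketch(); // slave never saw the update
        System.out.println(slave.keyForToken(7));        // refresh succeeds
    }
}
```

The point of the sketch is the ordering: the slave treats a cache miss as "possibly stale" rather than "definitely invalid", which is exactly the gap the issue describes for a slave that lost its ZooKeeper connection.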
Re: Build failed in Jenkins: HBase-1.3 » latest1.7,Hadoop #466
kalashnikov:hbase.git.commit stack$ python ./dev-support/findHangingTests.py https://builds.apache.org/job/HBase-1.3/jdk=latest1.7,label=Hadoop/466/consoleText
Fetching https://builds.apache.org/job/HBase-1.3/jdk=latest1.7,label=Hadoop/466/consoleText
Building remotely on H4 (Mapreduce zookeeper Hadoop Pig falcon Hdfs) in workspace /home/jenkins/jenkins-slave/workspace/HBase-1.3/jdk/latest1.7/label/Hadoop
Printing hanging tests
Hanging test : org.apache.hadoop.hbase.procedure2.TestProcedureRecovery
Printing Failing tests

Did something change in TestProcedureRecovery to make it hang?
St.Ack

On Wed, Dec 23, 2015 at 8:45 PM, Apache Jenkins Server <jenk...@builds.apache.org> wrote:
> See <https://builds.apache.org/job/HBase-1.3/jdk=latest1.7,label=Hadoop/466/changes>
>
> Changes:
>
> [anoopsamjohn] HBASE-14940 Make our unsafe based ops more safe.
>
> --
> [...truncated 28934 lines...]
> Running org.apache.hadoop.hbase.util.TestBytes
> Tests run: 31, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.524 sec - in org.apache.hadoop.hbase.util.TestBytes
> Running org.apache.hadoop.hbase.util.TestKeyLocker
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.374 sec - in org.apache.hadoop.hbase.util.TestKeyLocker
> Running org.apache.hadoop.hbase.util.TestCounter
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.018 sec - in org.apache.hadoop.hbase.util.TestCounter
> Running org.apache.hadoop.hbase.util.TestDrainBarrier
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.353 sec - in org.apache.hadoop.hbase.util.TestDrainBarrier
> Running org.apache.hadoop.hbase.util.TestByteRangeWithKVSerialization
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.393 sec - in org.apache.hadoop.hbase.util.TestByteRangeWithKVSerialization
> Running org.apache.hadoop.hbase.util.TestOrder
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.121 sec - in org.apache.hadoop.hbase.util.TestOrder
> Running org.apache.hadoop.hbase.TestClassFinder
> Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.48 sec - in org.apache.hadoop.hbase.TestClassFinder
> Running org.apache.hadoop.hbase.TestCellUtil
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.52 sec - in org.apache.hadoop.hbase.TestCellUtil
> Running org.apache.hadoop.hbase.zookeeper.TestZKConfig
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.283 sec - in org.apache.hadoop.hbase.zookeeper.TestZKConfig
> Running org.apache.hadoop.hbase.TestChoreService
> Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.313 sec - in org.apache.hadoop.hbase.TestChoreService
> Running org.apache.hadoop.hbase.TestCellComparator
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.13 sec - in org.apache.hadoop.hbase.TestCellComparator
> Running org.apache.hadoop.hbase.TestTimeout
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.24 sec - in org.apache.hadoop.hbase.TestTimeout
> Running org.apache.hadoop.hbase.codec.TestCellCodec
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.302 sec - in org.apache.hadoop.hbase.codec.TestCellCodec
> Running org.apache.hadoop.hbase.codec.TestCellCodecWithTags
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.183 sec - in org.apache.hadoop.hbase.codec.TestCellCodecWithTags
> Running org.apache.hadoop.hbase.codec.TestKeyValueCodec
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.316 sec - in org.apache.hadoop.hbase.codec.TestKeyValueCodec
> Running org.apache.hadoop.hbase.codec.TestKeyValueCodecWithTags
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.193 sec - in org.apache.hadoop.hbase.codec.TestKeyValueCodecWithTags
> Running org.apache.hadoop.hbase.TestCompoundConfiguration
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.892 sec - in org.apache.hadoop.hbase.TestCompoundConfiguration
> Running org.apache.hadoop.hbase.types.TestFixedLengthWrapper
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.231 sec - in org.apache.hadoop.hbase.types.TestFixedLengthWrapper
> Running org.apache.hadoop.hbase.types.TestCopyOnWriteMaps
> Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.452 sec - in org.apache.hadoop.hbase.types.TestCopyOnWriteMaps
> Running org.apache.hadoop.hbase.types.TestStruct
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.138 sec - in org.apache.hadoop.hbase.types.TestStruct
> Running org.apache.hadoop.hbase.types.TestOrderedBlob
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.099 sec - in org.apache.hadoop.hbase.types.TestOrderedBlob
> Running org.apache.hadoop.hbase.types.TestRawString
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.189 sec - in org.apache.hadoop.hbase.types.TestRawString
> Running or
[jira] [Created] (HBASE-15038) ExportSnapshot should support separate configurations for source and destination clusters
Gary Helmling created HBASE-15038:
----------------------------------

Summary: ExportSnapshot should support separate configurations for source and destination clusters
Key: HBASE-15038
URL: https://issues.apache.org/jira/browse/HBASE-15038
Project: HBase
Issue Type: Improvement
Components: mapreduce, snapshots
Reporter: Gary Helmling
Assignee: Gary Helmling

Currently ExportSnapshot uses a single Configuration instance for both the source and destination FileSystem instances. It should allow overriding properties for each filesystem connection separately.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
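One common way to get per-cluster settings out of a single base configuration is prefix-stripping overrides. The sketch below shows that idea in plain Java with a Map standing in for Hadoop's Configuration; the "snapshot.src." and "snapshot.dst." prefixes are invented for illustration and are not the option names the issue may have settled on.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: derive a per-cluster configuration from a shared base by letting
// prefixed keys override the unprefixed ones, so the source and destination
// FileSystems can be configured independently.
public class PerClusterConfSketch {
    static Map<String, String> resolve(Map<String, String> base, String prefix) {
        Map<String, String> conf = new HashMap<>(base);
        for (Map.Entry<String, String> e : base.entrySet()) {
            if (e.getKey().startsWith(prefix)) {
                // e.g. "snapshot.src.fs.defaultFS" overrides "fs.defaultFS"
                conf.put(e.getKey().substring(prefix.length()), e.getValue());
            }
        }
        return conf;
    }

    public static void main(String[] args) {
        Map<String, String> base = new HashMap<>();
        base.put("fs.defaultFS", "hdfs://shared:8020");
        base.put("snapshot.src.fs.defaultFS", "hdfs://source:8020");
        base.put("snapshot.dst.fs.defaultFS", "hdfs://dest:8020");
        // Each FileSystem would be opened with its own resolved view:
        System.out.println(resolve(base, "snapshot.src.").get("fs.defaultFS")); // hdfs://source:8020
        System.out.println(resolve(base, "snapshot.dst.").get("fs.defaultFS")); // hdfs://dest:8020
    }
}
```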
[jira] [Reopened] (HBASE-15018) Inconsistent way of handling TimeoutException in the rpc client implementations
[ https://issues.apache.org/jira/browse/HBASE-15018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack reopened HBASE-15018:
---------------------------

I pushed this but it seems to cause the following failures: https://builds.apache.org/view/H-L/view/HBase/job/HBase-1.2/468/ Backing it out for now.

> Inconsistent way of handling TimeoutException in the rpc client implementations
> -------------------------------------------------------------------------------
>
>                 Key: HBASE-15018
>                 URL: https://issues.apache.org/jira/browse/HBASE-15018
>             Project: HBase
>          Issue Type: Bug
>          Components: Client, IPC/RPC
>    Affects Versions: 2.0.0, 1.1.0, 1.2.0
>            Reporter: Ashish Singhi
>            Assignee: Ashish Singhi
>             Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3
>
>         Attachments: HBASE-15018.patch
>
>
> If there is any rpc timeout when using RpcClientImpl then we wrap the exception in an IOE and throw it:
> {noformat}
> 2015-11-16 04:05:24,935 WARN [main-EventThread.replicationSource,peer2] regionserver.HBaseInterClusterReplicationEndpoint: Can't replicate because of a local or network error:
> java.io.IOException: Call to host-XX:16040 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=510, waitTime=180001, operationTimeout=18 expired.
>   at org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1271)
>   at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1239)
>   at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
>   at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
>   at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:25690)
>   at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:77)
>   at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:322)
>   at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:308)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=510, waitTime=180001, operationTimeout=18 expired.
>   at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:70)
>   at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1213)
>   ... 10 more
> {noformat}
> But that isn't the case with AsyncRpcClient: there we don't wrap, and throw CallTimeoutException as it is.
> {noformat}
> 2015-12-21 14:27:33,093 WARN [RS_OPEN_REGION-host-XX:16201-0.replicationSource.host-XX%2C16201%2C1450687255593,1] regionserver.HBaseInterClusterReplicationEndpoint: Can't replicate because of a local or network error:
> org.apache.hadoop.hbase.ipc.CallTimeoutException: callId=2, method=ReplicateWALEntry, rpcTimeout=60, param {TODO: class org.apache.hadoop.hbase.protobuf.generated.AdminProtos$ReplicateWALEntryRequest}
>   at org.apache.hadoop.hbase.ipc.AsyncRpcClient.call(AsyncRpcClient.java:257)
>   at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:217)
>   at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:295)
>   at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:23707)
>   at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:73)
>   at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:387)
>   at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:370)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> I think we should have the same behavior across both implementations.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
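The consistency fix the issue asks for amounts to routing both client implementations through one wrapping policy. The sketch below shows that shape in self-contained Java; CallTimeoutException and wrapException here are simplified stand-ins, not the real org.apache.hadoop.hbase.ipc types, and how the actual patch unified the two paths is not specified in this mail.

```java
import java.io.IOException;

// Sketch: a single wrapping policy so that callers of either rpc client
// always see an IOException whose cause is the timeout, instead of one
// client wrapping and the other throwing the raw CallTimeoutException.
public class WrapTimeoutSketch {
    // Stand-in for org.apache.hadoop.hbase.ipc.CallTimeoutException.
    static class CallTimeoutException extends IOException {
        CallTimeoutException(String msg) { super(msg); }
    }

    // Both (hypothetical) client call paths would funnel through this.
    static IOException wrapException(String addr, Exception e) {
        if (e instanceof CallTimeoutException) {
            // Preserve the timeout as the cause so callers can still
            // distinguish it, while the thrown type stays uniform.
            return new IOException("Call to " + addr + " failed on local exception: " + e, e);
        }
        return e instanceof IOException ? (IOException) e : new IOException(e);
    }

    public static void main(String[] args) {
        IOException wrapped = wrapException("host-XX:16040",
            new CallTimeoutException("Call id=510 expired"));
        System.out.println(wrapped.getMessage());
        System.out.println(wrapped.getCause().getClass().getSimpleName());
    }
}
```

Callers that need the timeout specifically can still walk getCause(), which is why wrapping is the safer direction to unify toward than unwrapping.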
[jira] [Created] (HBASE-15037) CopyTable and VerifyReplication - Option to specify batch size, versions
Ramana Uppala created HBASE-15037:
----------------------------------

Summary: CopyTable and VerifyReplication - Option to specify batch size, versions
Key: HBASE-15037
URL: https://issues.apache.org/jira/browse/HBASE-15037
Project: HBase
Issue Type: Improvement
Components: Replication
Affects Versions: 0.98.16.1
Reporter: Ramana Uppala
Priority: Minor

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Created] (HBASE-15036) Update HBase Spark documentation to include bulk load with thin records
Ted Malaska created HBASE-15036:
----------------------------------

Summary: Update HBase Spark documentation to include bulk load with thin records
Key: HBASE-15036
URL: https://issues.apache.org/jira/browse/HBASE-15036
Project: HBase
Issue Type: New Feature
Reporter: Ted Malaska
Assignee: Ted Malaska

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Created] (HBASE-15035) bulkloading hfiles with tags that require splits does not preserve tags
Jonathan Hsieh created HBASE-15035:
----------------------------------

Summary: bulkloading hfiles with tags that require splits does not preserve tags
Key: HBASE-15035
URL: https://issues.apache.org/jira/browse/HBASE-15035
Project: HBase
Issue Type: Bug
Components: HFile
Affects Versions: 1.1.0, 1.0.0, 2.0.0
Reporter: Jonathan Hsieh
Priority: Blocker

When an hfile is created with cell tags present and is bulk loaded into hbase, the tags are present when it loads into a single region. If the bulk-load hfile spans multiple regions, bulk load automatically splits the original hfile into a set of split hfiles, one for each region that the original covers. Since 0.98, tags are not copied into the newly created split hfiles (the default for "includeTags" of the HFileContextBuilder [1] is uninitialized, which defaults to false). This means acls, ttls, mob pointers, and other tag-stored values will not be bulk loaded in.

[1] https://github.com/apache/hbase/blob/master/hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContextBuilder.java#L40

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
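The failure mode described above reduces to a builder whose boolean silently defaults to false. The sketch below is a hypothetical miniature of that pattern, not the real HFile writer: ContextBuilder and splitPreservesTags are invented stand-ins mirroring the HFileContextBuilder default the issue points at.

```java
// Sketch: why an uninitialized builder flag drops tags during bulk-load
// splits, and the shape of the fix (explicitly carrying the source file's
// tags setting into the context for each newly written split hfile).
public class SplitTagsSketch {
    static class ContextBuilder {
        // Uninitialized in the real builder, i.e. effectively false: any
        // writer built without setting it will silently drop tags.
        private boolean includesTags = false;

        ContextBuilder withIncludesTags(boolean b) { includesTags = b; return this; }
        boolean build() { return includesTags; }
    }

    // The fix: the split-writing code must propagate whether the source
    // hfile carried tags, instead of relying on the builder default.
    static boolean splitPreservesTags(boolean sourceHasTags) {
        return new ContextBuilder().withIncludesTags(sourceHasTags).build();
    }

    public static void main(String[] args) {
        System.out.println(new ContextBuilder().build()); // the bug: defaults to false
        System.out.println(splitPreservesTags(true));     // the fix: setting propagated
    }
}
```

This is why the bug only bites when a load spans multiple regions: the single-region path hands the original hfile through untouched, while the split path rebuilds each half with a fresh, default-configured context.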
Successful: HBase Generate Website
Build status: Successful

If successful, the website and docs have been generated. If failed, skip to the bottom of this email.

Use the following commands to download the patch and apply it to a clean branch based on origin/asf-site. If you prefer to keep the hbase-site repo around permanently, you can skip the clone step.

git clone https://git-wip-us.apache.org/repos/asf/hbase-site.git
cd hbase-site
wget -O- https://builds.apache.org/job/hbase_generate_website/78/artifact/website.patch.zip | funzip > 1af98f255132ef6716a1f6ba1d8d71a36ea38840.patch
git fetch
git checkout -b asf-site-1af98f255132ef6716a1f6ba1d8d71a36ea38840 origin/asf-site
git am 1af98f255132ef6716a1f6ba1d8d71a36ea38840.patch

At this point, you can preview the changes by opening index.html or any of the other HTML pages in your local asf-site-1af98f255132ef6716a1f6ba1d8d71a36ea38840 branch, and you can review the differences by running:

git diff origin/asf-site

There are lots of spurious changes, such as timestamps and CSS styles in tables. To see a list of files that have been added, deleted, renamed, changed type, or are otherwise interesting, use the following command:

git diff --name-status --diff-filter=ADCRTXUB origin/asf-site

To see only files that had 10 or more lines changed:

git diff --stat origin/asf-site | grep -Ev "\|\s+\ [1-9]\ [\+-]+$"

When you are satisfied, publish your changes to origin/asf-site using this command:

git push origin asf-site-1af98f255132ef6716a1f6ba1d8d71a36ea38840:asf-site

Changes take a couple of minutes to be propagated. You can then remove your asf-site-1af98f255132ef6716a1f6ba1d8d71a36ea38840 branch:

git checkout asf-site && git branch -d asf-site-1af98f255132ef6716a1f6ba1d8d71a36ea38840

If failed, see https://builds.apache.org/job/hbase_generate_website/78/console
[jira] [Created] (HBASE-15034) IntegrationTestDDLMasterFailover does not clean created namespaces
Samir Ahmic created HBASE-15034:
----------------------------------

Summary: IntegrationTestDDLMasterFailover does not clean created namespaces
Key: HBASE-15034
URL: https://issues.apache.org/jira/browse/HBASE-15034
Project: HBase
Issue Type: Bug
Components: integration tests
Affects Versions: 2.0.0
Reporter: Samir Ahmic
Assignee: Samir Ahmic
Priority: Minor

I was running this test recently and noticed that after every run there are new namespaces created by the test that are not cleaned up when the test finishes.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)