[jira] [Updated] (HBASE-13833) LoadIncrementalHFile.doBulkLoad(Path,HTable) doesn't handle unmanaged connections when using SecureBulkLoad

2015-12-17 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-13833:

Component/s: tooling

> LoadIncrementalHFile.doBulkLoad(Path,HTable) doesn't handle unmanaged 
> connections when using SecureBulkLoad
> ---
>
> Key: HBASE-13833
> URL: https://issues.apache.org/jira/browse/HBASE-13833
> Project: HBase
>  Issue Type: Bug
>  Components: tooling
>Affects Versions: 1.1.0.1
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Fix For: 2.0.0, 1.0.2, 1.2.0, 1.1.1
>
> Attachments: HBASE-13833.00.branch-1.1.patch, 
> HBASE-13833.01.branch-1.1.patch, HBASE-13833.02.branch-1.0.patch, 
> HBASE-13833.02.branch-1.1.patch, HBASE-13833.02.branch-1.patch, 
> HBASE-13833.03.branch-1.0.patch, HBASE-13833.03.branch-1.1.patch, 
> HBASE-13833.03.branch-1.patch, HBASE-13833.03.master.patch
>
>
> Seems HBASE-13328 wasn't quite sufficient.
> {noformat}
> 2015-06-02 05:49:23,578|beaver.machine|INFO|2828|7140|MainThread|15/06/02 
> 05:49:23 WARN mapreduce.LoadIncrementalHFiles: Skipping non-directory 
> hdfs://dal-pqc1:8020/tmp/192f21dd-cc89-4354-8ba1-78d1f228e7c7/LARGE_TABLE/_SUCCESS
> 2015-06-02 05:49:23,720|beaver.machine|INFO|2828|7140|MainThread|15/06/02 
> 05:49:23 INFO hfile.CacheConfig: CacheConfig:disabled
> 2015-06-02 05:49:23,859|beaver.machine|INFO|2828|7140|MainThread|15/06/02 
> 05:49:23 INFO mapreduce.LoadIncrementalHFiles: Trying to load 
> hfile=hdfs://dal-pqc1:8020/tmp/192f21dd-cc89-4354-8ba1-78d1f228e7c7/LARGE_TABLE/0/00870fd0a7544373b32b6f1e976bf47f
>  first=\x80\x00\x00\x00 last=\x80LK?
> 2015-06-02 05:50:32,028|beaver.machine|INFO|2828|7140|MainThread|15/06/02 
> 05:50:32 INFO client.RpcRetryingCaller: Call exception, tries=10, retries=35, 
> started=68154 ms ago, cancelled=false, msg=row '' on table 'LARGE_TABLE' at 
> region=LARGE_TABLE,,1433222865285.e01e02483f30a060d3f7abb1846ea029., 
> hostname=dal-pqc5,16020,1433222547221, seqNum=2
> 2015-06-02 05:50:52,128|beaver.machine|INFO|2828|7140|MainThread|15/06/02 
> 05:50:52 INFO client.RpcRetryingCaller: Call exception, tries=11, retries=35, 
> started=88255 ms ago, cancelled=false, msg=row '' on table 'LARGE_TABLE' at 
> region=LARGE_TABLE,,1433222865285.e01e02483f30a060d3f7abb1846ea029., 
> hostname=dal-pqc5,16020,1433222547221, seqNum=2
> ...
> ...
> 2015-06-02 05:01:56,121|beaver.machine|INFO|7800|2276|MainThread|15/06/02 
> 05:01:56 ERROR mapreduce.CsvBulkLoadTool: Import job on table=LARGE_TABLE 
> failed due to exception.
> 2015-06-02 
> 05:01:56,121|beaver.machine|INFO|7800|2276|MainThread|java.io.IOException: 
> BulkLoad encountered an unrecoverable problem
> 2015-06-02 05:01:56,121|beaver.machine|INFO|7800|2276|MainThread|at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.bulkLoadPhase(LoadIncrementalHFiles.java:474)
> 2015-06-02 05:01:56,122|beaver.machine|INFO|7800|2276|MainThread|at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.doBulkLoad(LoadIncrementalHFiles.java:405)
> 2015-06-02 05:01:56,122|beaver.machine|INFO|7800|2276|MainThread|at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.doBulkLoad(LoadIncrementalHFiles.java:300)
> 2015-06-02 05:01:56,122|beaver.machine|INFO|7800|2276|MainThread|at 
> org.apache.phoenix.mapreduce.CsvBulkLoadTool$TableLoader.call(CsvBulkLoadTool.java:517)
> 2015-06-02 05:01:56,122|beaver.machine|INFO|7800|2276|MainThread|at 
> org.apache.phoenix.mapreduce.CsvBulkLoadTool$TableLoader.call(CsvBulkLoadTool.java:466)
> 2015-06-02 05:01:56,122|beaver.machine|INFO|7800|2276|MainThread|at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 2015-06-02 05:01:56,122|beaver.machine|INFO|7800|2276|MainThread|at 
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:172)
> 2015-06-02 05:01:56,122|beaver.machine|INFO|7800|2276|MainThread|at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 2015-06-02 05:01:56,124|beaver.machine|INFO|7800|2276|MainThread|at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 2015-06-02 05:01:56,124|beaver.machine|INFO|7800|2276|MainThread|at 
> java.lang.Thread.run(Thread.java:745)
> ...
> ...
> ...
> 2015-06-02 05:58:34,993|beaver.machine|INFO|2828|7140|MainThread|Caused by: 
> org.apache.hadoop.hbase.client.NeedUnmanagedConnectionException: The 
> connection has to be unmanaged.
> 2015-06-02 05:58:34,993|beaver.machine|INFO|2828|7140|MainThread|at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getTable(ConnectionManager.java:724)
> 2015-06-02 05:58:34,994|beaver.machine|INFO|2828|7140|MainThread|at 
> 
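
For context, a minimal sketch of the call pattern that trips the exception 
above, next to the unmanaged-connection form that avoids it. It is written 
against the 1.1-era client API; the table name and staging directory are 
hypothetical, and the cast leans on the 1.x client returning an {{HTable}} 
from {{Connection.getTable}}, as the stack trace shows it does.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;

public class BulkLoadConnectionSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Path hfofDir = new Path("/tmp/bulkload/LARGE_TABLE"); // hypothetical staging dir

    // Fails under SecureBulkLoad: the deprecated HTable(conf, name)
    // constructor hands doBulkLoad a *managed* connection, and the secure
    // path later calls getTable() on it, which throws
    // NeedUnmanagedConnectionException (the stack trace above).
    //
    //   new LoadIncrementalHFiles(conf).doBulkLoad(
    //       hfofDir, new HTable(conf, TableName.valueOf("LARGE_TABLE")));

    // Works: derive the HTable from an explicit, unmanaged Connection that
    // the caller owns and closes.
    try (Connection conn = ConnectionFactory.createConnection(conf);
         HTable table = (HTable) conn.getTable(TableName.valueOf("LARGE_TABLE"))) {
      new LoadIncrementalHFiles(conf).doBulkLoad(hfofDir, table);
    }
  }
}
{code}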

[jira] [Updated] (HBASE-13833) LoadIncrementalHFile.doBulkLoad(Path,HTable) doesn't handle unmanaged connections when using SecureBulkLoad

2015-06-15 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-13833:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Pushed. Thanks for the review.


[jira] [Updated] (HBASE-13833) LoadIncrementalHFile.doBulkLoad(Path,HTable) doesn't handle unmanaged connections when using SecureBulkLoad

2015-06-14 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-13833:
-
Attachment: HBASE-13833.03.master.patch
HBASE-13833.03.branch-1.patch
HBASE-13833.03.branch-1.1.patch
HBASE-13833.03.branch-1.0.patch

bq. Should we also close this?

Yes, we should also close the {{Admin}} instance.

Here's a new round of patches.
- *master* just close the {{Admin}} and {{RegionLocator}} when we're done with 
them (a minimal sketch follows this comment). No test change, because the 
managed/unmanaged connection distinction doesn't exist here.
- *branch-1,branch-1.1* same patch as the existing branch-1.1 one, but updates 
the test to check both managed and unmanaged connections. This 
{{TestSecureLoadIncrementalHFiles}} fails without the associated changes in 
{{LoadIncrementalHFiles}}. I'll revert the existing patch on branch-1.1 and 
commit this version back-ported from branch-1, hopefully making the changes 
simpler to read.
- *branch-1.0* Same as branch-1 patch, except applied to the un-refactored 
logic therein.
- *0.98* nothing applicable as far as I can tell.

On all above branches, both {{TestLoadIncrementalHFiles}} and 
{{TestSecureLoadIncrementalHFiles}} pass.
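
To make the master-branch change concrete, here is a minimal sketch, assuming 
the four-argument {{doBulkLoad(Path, Admin, Table, RegionLocator)}} overload 
and a hypothetical table name and staging directory; try-with-resources closes 
the {{Admin}} and {{RegionLocator}} (plus the {{Table}} and {{Connection}}) 
once the load completes:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;

public class ClosedResourcesBulkLoad {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    TableName name = TableName.valueOf("LARGE_TABLE"); // hypothetical
    // Each resource below is closed in reverse order when the block exits,
    // so nothing leaks even if the bulk load throws.
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin();
         Table table = conn.getTable(name);
         RegionLocator locator = conn.getRegionLocator(name)) {
      new LoadIncrementalHFiles(conf)
          .doBulkLoad(new Path("/tmp/bulkload/LARGE_TABLE"), admin, table, locator);
    }
  }
}
{code}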


[jira] [Updated] (HBASE-13833) LoadIncrementalHFile.doBulkLoad(Path,HTable) doesn't handle unmanaged connections when using SecureBulkLoad

2015-06-14 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-13833:
-
Fix Version/s: 1.2.0
   1.0.2
   2.0.0


[jira] [Updated] (HBASE-13833) LoadIncrementalHFile.doBulkLoad(Path,HTable) doesn't handle unmanaged connections when using SecureBulkLoad

2015-06-11 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-13833:
-
Attachment: HBASE-13833.02.branch-1.patch


[jira] [Updated] (HBASE-13833) LoadIncrementalHFile.doBulkLoad(Path,HTable) doesn't handle unmanaged connections when using SecureBulkLoad

2015-06-11 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-13833:
-
Attachment: HBASE-13833.02.branch-1.0.patch

Here's a patch for branch-1.0. The original does not apply cleanly, so maybe 
[~enis] wants to take a look? Test{Secure,}LoadIncrementalHFiles passes 
locally for me.


[jira] [Updated] (HBASE-13833) LoadIncrementalHFile.doBulkLoad(Path,HTable) doesn't handle unmanaged connections when using SecureBulkLoad

2015-06-03 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-13833:
-
Attachment: HBASE-13833.02.branch-1.1.patch

Might as well use the same connection for the table and region locator 
instances we pass down, too.
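
A minimal sketch of that idea, under the assumption that the caller's 
unmanaged connection is in scope as {{conn}}; the helper name is illustrative, 
not the committed patch:

{code:java}
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.Table;

final class SameConnectionHandles {
  private SameConnectionHandles() {}

  /** Both handles come from the single caller-held Connection. */
  static void loadWith(Connection conn, TableName name) throws Exception {
    // No second connection is created for either handle; everything the
    // bulk load passes down shares the caller's connection.
    try (Table table = conn.getTable(name);
         RegionLocator locator = conn.getRegionLocator(name)) {
      // ... hand table and locator to the bulk-load phase ...
    }
  }
}
{code}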


[jira] [Updated] (HBASE-13833) LoadIncrementalHFile.doBulkLoad(Path,HTable) doesn't handle unmanaged connections when using SecureBulkLoad

2015-06-03 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-13833:
-
Attachment: HBASE-13833.00.branch-1.1.patch

Attaching a patch for branch-1.1. Still testing some combinations.


[jira] [Updated] (HBASE-13833) LoadIncrementalHFile.doBulkLoad(Path,HTable) doesn't handle unmanaged connections when using SecureBulkLoad

2015-06-03 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-13833:
-
Attachment: HBASE-13833.01.branch-1.1.patch

This version is careful to clean up the connection after itself. Hat-tip to 
[~enis].

Also fix the log line.
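
For reference, a minimal sketch of that cleanup shape, assuming the load had 
to create its own {{Connection}} because it was handed a managed {{HTable}}; 
the method name is hypothetical:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

final class OwnedConnectionCleanup {
  private OwnedConnectionCleanup() {}

  static void bulkLoadWithOwnedConnection(Configuration conf) throws Exception {
    // We created this connection ourselves, so we must close it ourselves;
    // a caller-supplied connection would be left open instead.
    Connection conn = ConnectionFactory.createConnection(conf);
    try {
      // ... run the secure bulk load against conn ...
    } finally {
      conn.close();
    }
  }
}
{code}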


[jira] [Updated] (HBASE-13833) LoadIncrementalHFile.doBulkLoad(Path,HTable) doesn't handle unmanaged connections when using SecureBulkLoad

2015-06-03 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-13833:
-
Fix Version/s: (was: 1.2.0)
   (was: 2.0.0)
   Status: Patch Available  (was: Open)

Not sure about other fix versions, need to check.
