[ 
https://issues.apache.org/jira/browse/HBASE-13833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-13833:
---------------------------------
    Attachment: HBASE-13833.02.branch-1.patch

> LoadIncrementalHFiles.doBulkLoad(Path,HTable) doesn't handle unmanaged 
> connections when using SecureBulkLoad
> -----------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-13833
>                 URL: https://issues.apache.org/jira/browse/HBASE-13833
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 1.1.0.1
>            Reporter: Nick Dimiduk
>            Assignee: Nick Dimiduk
>             Fix For: 1.1.1
>
>         Attachments: HBASE-13833.00.branch-1.1.patch, 
> HBASE-13833.01.branch-1.1.patch, HBASE-13833.02.branch-1.1.patch, 
> HBASE-13833.02.branch-1.patch
>
>
> It seems HBASE-13328 wasn't quite sufficient; with SecureBulkLoad enabled, 
> the bulk load still trips over a managed connection:
> {noformat}
> 2015-06-02 05:49:23,578|beaver.machine|INFO|2828|7140|MainThread|15/06/02 
> 05:49:23 WARN mapreduce.LoadIncrementalHFiles: Skipping non-directory 
> hdfs://dal-pqc1:8020/tmp/192f21dd-cc89-4354-8ba1-78d1f228e7c7/LARGE_TABLE/_SUCCESS
> 2015-06-02 05:49:23,720|beaver.machine|INFO|2828|7140|MainThread|15/06/02 
> 05:49:23 INFO hfile.CacheConfig: CacheConfig:disabled
> 2015-06-02 05:49:23,859|beaver.machine|INFO|2828|7140|MainThread|15/06/02 
> 05:49:23 INFO mapreduce.LoadIncrementalHFiles: Trying to load 
> hfile=hdfs://dal-pqc1:8020/tmp/192f21dd-cc89-4354-8ba1-78d1f228e7c7/LARGE_TABLE/0/00870fd0a7544373b32b6f1e976bf47f
>  first=\x80\x00\x00\x00 last=\x80LK?
> 2015-06-02 05:50:32,028|beaver.machine|INFO|2828|7140|MainThread|15/06/02 
> 05:50:32 INFO client.RpcRetryingCaller: Call exception, tries=10, retries=35, 
> started=68154 ms ago, cancelled=false, msg=row '' on table 'LARGE_TABLE' at 
> region=LARGE_TABLE,,1433222865285.e01e02483f30a060d3f7abb1846ea029., 
> hostname=dal-pqc5,16020,1433222547221, seqNum=2
> 2015-06-02 05:50:52,128|beaver.machine|INFO|2828|7140|MainThread|15/06/02 
> 05:50:52 INFO client.RpcRetryingCaller: Call exception, tries=11, retries=35, 
> started=88255 ms ago, cancelled=false, msg=row '' on table 'LARGE_TABLE' at 
> region=LARGE_TABLE,,1433222865285.e01e02483f30a060d3f7abb1846ea029., 
> hostname=dal-pqc5,16020,1433222547221, seqNum=2
> ...
> ...
> 2015-06-02 05:01:56,121|beaver.machine|INFO|7800|2276|MainThread|15/06/02 
> 05:01:56 ERROR mapreduce.CsvBulkLoadTool: Import job on table=LARGE_TABLE 
> failed due to exception.
> 2015-06-02 
> 05:01:56,121|beaver.machine|INFO|7800|2276|MainThread|java.io.IOException: 
> BulkLoad encountered an unrecoverable problem
> 2015-06-02 05:01:56,121|beaver.machine|INFO|7800|2276|MainThread|at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.bulkLoadPhase(LoadIncrementalHFiles.java:474)
> 2015-06-02 05:01:56,122|beaver.machine|INFO|7800|2276|MainThread|at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.doBulkLoad(LoadIncrementalHFiles.java:405)
> 2015-06-02 05:01:56,122|beaver.machine|INFO|7800|2276|MainThread|at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.doBulkLoad(LoadIncrementalHFiles.java:300)
> 2015-06-02 05:01:56,122|beaver.machine|INFO|7800|2276|MainThread|at 
> org.apache.phoenix.mapreduce.CsvBulkLoadTool$TableLoader.call(CsvBulkLoadTool.java:517)
> 2015-06-02 05:01:56,122|beaver.machine|INFO|7800|2276|MainThread|at 
> org.apache.phoenix.mapreduce.CsvBulkLoadTool$TableLoader.call(CsvBulkLoadTool.java:466)
> 2015-06-02 05:01:56,122|beaver.machine|INFO|7800|2276|MainThread|at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 2015-06-02 05:01:56,122|beaver.machine|INFO|7800|2276|MainThread|at 
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:172)
> 2015-06-02 05:01:56,122|beaver.machine|INFO|7800|2276|MainThread|at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 2015-06-02 05:01:56,124|beaver.machine|INFO|7800|2276|MainThread|at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 2015-06-02 05:01:56,124|beaver.machine|INFO|7800|2276|MainThread|at 
> java.lang.Thread.run(Thread.java:745)
> ...
> ...
> ...
> 2015-06-02 05:58:34,993|beaver.machine|INFO|2828|7140|MainThread|Caused by: 
> org.apache.hadoop.hbase.client.NeedUnmanagedConnectionException: The 
> connection has to be unmanaged.
> 2015-06-02 05:58:34,993|beaver.machine|INFO|2828|7140|MainThread|at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getTable(ConnectionManager.java:724)
> 2015-06-02 05:58:34,994|beaver.machine|INFO|2828|7140|MainThread|at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getTable(ConnectionManager.java:708)
> 2015-06-02 05:58:34,994|beaver.machine|INFO|2828|7140|MainThread|at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getTable(ConnectionManager.java:542)
> 2015-06-02 05:58:34,994|beaver.machine|INFO|2828|7140|MainThread|at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$4.call(LoadIncrementalHFiles.java:733)
> 2015-06-02 05:58:34,994|beaver.machine|INFO|2828|7140|MainThread|at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$4.call(LoadIncrementalHFiles.java:720)
> 2015-06-02 05:58:34,994|beaver.machine|INFO|2828|7140|MainThread|at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
> 2015-06-02 05:58:34,994|beaver.machine|INFO|2828|7140|MainThread|... 7 more
> 2015-06-02 05:58:35,000|beaver.machine|INFO|2828|7140|MainThread|15/06/02 
> 05:58:35 INFO client.ConnectionManager$HConnectionImplementation: Closing 
> zookeeper sessionid=0x14db2ae02800050
> 2015-06-02 05:58:35,006|beaver.machine|INFO|2828|7140|MainThread|15/06/02 
> 05:58:35 INFO zookeeper.ZooKeeper: Session: 0x14db2ae02800050 closed
> 2015-06-02 05:58:35,007|beaver.machine|INFO|2828|7140|MainThread|15/06/02 
> 05:58:35 INFO zookeeper.ClientCnxn: EventThread shut down
> {noformat}
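>
> For context, a minimal sketch of the two calling patterns follows. It is 
> illustrative only (the class name, driver, and HFile directory are 
> hypothetical; LARGE_TABLE is taken from the log above), not the actual 
> Phoenix code. The first pattern mirrors what the trace shows CsvBulkLoadTool 
> doing: it hands doBulkLoad(Path, HTable) an HTable riding on a managed 
> connection, and with SecureBulkLoad enabled the internal getTable() call is 
> rejected with NeedUnmanagedConnectionException. The second is the 
> caller-side workaround: drive everything from an explicitly created, 
> unmanaged Connection and use the (Admin, Table, RegionLocator) overload.
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.TableName;
> import org.apache.hadoop.hbase.client.Admin;
> import org.apache.hadoop.hbase.client.Connection;
> import org.apache.hadoop.hbase.client.ConnectionFactory;
> import org.apache.hadoop.hbase.client.HTable;
> import org.apache.hadoop.hbase.client.RegionLocator;
> import org.apache.hadoop.hbase.client.Table;
> import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;
>
> public class BulkLoadConnections {
>
>   static final TableName TABLE = TableName.valueOf("LARGE_TABLE");
>   static final Path HFILE_DIR = new Path("/tmp/hfiles/LARGE_TABLE"); // hypothetical
>
>   // Failing pattern: the deprecated HTable(conf, ...) constructor rides on a
>   // *managed* connection. With SecureBulkLoad enabled, doBulkLoad(Path, HTable)
>   // ends up calling getTable() on that connection, which managed connections
>   // reject with NeedUnmanagedConnectionException (the trace above).
>   static void managedPattern(Configuration conf) throws Exception {
>     try (HTable table = new HTable(conf, TABLE)) {
>       new LoadIncrementalHFiles(conf).doBulkLoad(HFILE_DIR, table);
>     }
>   }
>
>   // Caller-side workaround: build Admin/Table/RegionLocator from an explicit,
>   // unmanaged Connection, so no managed connection is ever consulted.
>   static void unmanagedPattern(Configuration conf) throws Exception {
>     try (Connection conn = ConnectionFactory.createConnection(conf);
>          Admin admin = conn.getAdmin();
>          Table table = conn.getTable(TABLE);
>          RegionLocator locator = conn.getRegionLocator(TABLE)) {
>       new LoadIncrementalHFiles(conf).doBulkLoad(HFILE_DIR, admin, table, locator);
>     }
>   }
>
>   public static void main(String[] args) throws Exception {
>     unmanagedPattern(HBaseConfiguration.create());
>   }
> }
> {code}
> The workaround only sidesteps the problem from the caller's side; the 
> attached patches are meant to make doBulkLoad(Path, HTable) tolerate a 
> managed connection itself.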



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
