[ https://issues.apache.org/jira/browse/NIFI-2873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Franco updated NIFI-2873:
-------------------------
    Description: 
This is the same issue that previously affected Spark:
https://github.com/Jianfeng-chs/spark/commit/9f2b2bf001262215742be418f24d5093c92ff10f

We are experiencing this issue consistently when trying to use PutHiveStreaming, 
and it would likely also affect PutHiveQL.

The fix is identical, namely preloading the Hadoop configuration during the 
processor setup phase. A pull request is forthcoming.
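For context, the preload approach can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the actual patch: the HIVE_CONFIGURATION_RESOURCES property descriptor, the hiveConfig field, and the choice of configuration key are all hypothetical; the real change will be in the pull request.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.nifi.annotation.lifecycle.OnScheduled;
import org.apache.nifi.processor.ProcessContext;

// Sketch only: eagerly load the Hadoop/Hive configuration while the processor
// is being scheduled, before any worker thread touches HDFS.
@OnScheduled
public void setup(final ProcessContext context) {
    final Configuration conf = new Configuration();
    // A "Hive Configuration Resources"-style property pointing at
    // core-site.xml / hdfs-site.xml / hive-site.xml; the descriptor
    // name below is illustrative.
    final String resources =
            context.getProperty(HIVE_CONFIGURATION_RESOURCES).getValue();
    if (resources != null) {
        for (final String resource : resources.split(",")) {
            conf.addResource(new Path(resource.trim()));
        }
    }
    // Reading any key forces Configuration's lazy resource parsing to run now,
    // so the HA nameservice mappings (dfs.nameservices and friends) are
    // registered before the first FileSystem lookup tries to resolve a
    // logical nameservice such as "tdcdv2" as if it were a hostname.
    conf.get("dfs.nameservices");
    this.hiveConfig = conf;
}
```

Without the eager load, the stack trace below shows Hadoop falling through NameNodeProxies.createNonHAProxy and handing the nameservice name to SecurityUtil.buildTokenService, which fails with UnknownHostException.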


---------------------------
2016-10-06 16:07:59,225 ERROR [Timer-Driven Process Thread-9] o.a.n.processors.hive.PutHiveStreaming
java.lang.IllegalArgumentException: java.net.UnknownHostException: tdcdv2
        at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:374) ~[hadoop-common-2.6.2.jar:na]
        at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:310) ~[hadoop-hdfs-2.6.2.jar:na]
        at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176) ~[hadoop-hdfs-2.6.2.jar:na]
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:668) ~[hadoop-hdfs-2.6.2.jar:na]
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:604) ~[hadoop-hdfs-2.6.2.jar:na]
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:148) ~[hadoop-hdfs-2.6.2.jar:na]
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2596) ~[hadoop-common-2.6.2.jar:na]
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91) ~[hadoop-common-2.6.2.jar:na]
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630) ~[hadoop-common-2.6.2.jar:na]
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612) ~[hadoop-common-2.6.2.jar:na]
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370) ~[hadoop-common-2.6.2.jar:na]
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296) ~[hadoop-common-2.6.2.jar:na]
        at org.apache.hadoop.hive.ql.io.orc.OrcRecordUpdater.<init>(OrcRecordUpdater.java:221) ~[hive-exec-1.2.1.jar:1.2.1]
        at org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat.getRecordUpdater(OrcOutputFormat.java:292) ~[hive-exec-1.2.1.jar:1.2.1]
        at org.apache.hive.hcatalog.streaming.AbstractRecordWriter.createRecordUpdater(AbstractRecordWriter.java:141) ~[hive-hcatalog-streaming-1.2.1.jar:1.2.1]
        at org.apache.hive.hcatalog.streaming.AbstractRecordWriter.newBatch(AbstractRecordWriter.java:121) ~[hive-hcatalog-streaming-1.2.1.jar:1.2.1]
        at org.apache.hive.hcatalog.streaming.StrictJsonWriter.newBatch(StrictJsonWriter.java:37) ~[hive-hcatalog-streaming-1.2.1.jar:1.2.1]
        at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.<init>(HiveEndPoint.java:509) ~[hive-hcatalog-streaming-1.2.1.jar:1.2.1]
        at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.<init>(HiveEndPoint.java:461) ~[hive-hcatalog-streaming-1.2.1.jar:1.2.1]
        at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.fetchTransactionBatchImpl(HiveEndPoint.java:345) ~[hive-hcatalog-streaming-1.2.1.jar:1.2.1]
        at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.fetchTransactionBatch(HiveEndPoint.java:325) ~[hive-hcatalog-streaming-1.2.1.jar:1.2.1]
        at org.apache.nifi.util.hive.HiveWriter.lambda$nextTxnBatch$1(HiveWriter.java:250) ~[nifi-hive-processors-1.0.0.jar:1.0.0]

  was:
This is the same issue that previously affected Spark:
https://github.com/Jianfeng-chs/spark/commit/9f2b2bf001262215742be418f24d5093c92ff10f

We are experiencing this issue consistently when trying to use PutHiveStreaming 
and likely would affect PutHiveQL.

The fix is identical namely preloading the Hadoop configuration during the 
processor setup phase. Pull request forthcoming.


> Nifi throws UnknownHostException with HA NameNode
> -------------------------------------------------
>
>                 Key: NIFI-2873
>                 URL: https://issues.apache.org/jira/browse/NIFI-2873
>             Project: Apache NiFi
>          Issue Type: Bug
>          Components: Extensions
>    Affects Versions: 1.0.0
>            Reporter: Franco
>             Fix For: 1.1.0
>
>
> This is the same issue that previously affected Spark:
> https://github.com/Jianfeng-chs/spark/commit/9f2b2bf001262215742be418f24d5093c92ff10f
> We are experiencing this issue consistently when trying to use 
> PutHiveStreaming, and it would likely also affect PutHiveQL.
> The fix is identical, namely preloading the Hadoop configuration during the 
> processor setup phase. A pull request is forthcoming.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
