Hello,

I think what's happening is that AbstractHadoopProcessor has a method
checkHdfsUriForTimeout() which gets the URI of the file system by calling
FileSystem.getDefaultUri(config), which in your case returns hdfs://mas:8020,
and then it tries to make a socket connection to that host and port, which
doesn't work because "mas" is a logical nameservice rather than a resolvable
host name.
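For what it's worth, the check effectively builds a socket address from the host/port of the default URI and tries to connect. Since a logical nameservice name isn't a real DNS entry, resolution fails before any connect can happen, which matches the UnresolvedAddressException in your stack trace. A minimal JDK-only sketch (this is not the actual NiFi code; the ".invalid" host is just a guaranteed-unresolvable stand-in for a logical name):

```java
import java.net.InetSocketAddress;

public class NameserviceCheckSketch {

    // Sketch: the timeout check ultimately needs a resolvable host/port
    // pair. An HA nameservice name is a logical label, not a DNS entry,
    // so address resolution fails up front.
    static boolean resolvable(String host, int port) {
        return !new InetSocketAddress(host, port).isUnresolved();
    }

    public static void main(String[] args) {
        // A real host name resolves...
        System.out.println(resolvable("localhost", 8020));
        // ...while a logical-nameservice-style name does not
        // (".invalid" is reserved and never resolves in DNS).
        System.out.println(resolvable("mas.invalid", 8020));
    }
}
```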

For reference, the code is here:
https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/AbstractHadoopProcessor.java#L340

I'm not familiar with using name services, but do you know if there is a
way to ask the HDFS client for the URI that gets evaluated against the name
service? Basically, something other than FileSystem.getDefaultUri(config)?
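For context, my rough understanding is that the HA-aware client turns the logical name into real namenode addresses via a lookup over the dfs.* keys from hdfs-site.xml. This is only a sketch of that lookup, with a plain Map standing in for Hadoop's Configuration object; namenodeAddresses() is a hypothetical helper, not a real client API:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class HaLookupSketch {

    // Hypothetical helper: resolve a logical nameservice to its namenode
    // RPC addresses using the same property naming scheme as hdfs-site.xml.
    static List<String> namenodeAddresses(Map<String, String> conf, String nameservice) {
        List<String> addrs = new ArrayList<>();
        String nnIds = conf.get("dfs.ha.namenodes." + nameservice);
        if (nnIds == null) {
            return addrs; // not a configured HA nameservice
        }
        for (String id : nnIds.split(",")) {
            String addr = conf.get("dfs.namenode.rpc-address." + nameservice + "." + id.trim());
            if (addr != null) {
                addrs.add(addr);
            }
        }
        return addrs;
    }

    // The properties from the hdfs-site.xml quoted in this thread.
    static Map<String, String> sampleConf() {
        Map<String, String> conf = new LinkedHashMap<>();
        conf.put("dfs.nameservices", "mas");
        conf.put("dfs.ha.namenodes.mas", "dcm1-vz2,dcm1-vz3");
        conf.put("dfs.namenode.rpc-address.mas.dcm1-vz2", "dcm1-vz2:8020");
        conf.put("dfs.namenode.rpc-address.mas.dcm1-vz3", "dcm1-vz3:8020");
        return conf;
    }

    public static void main(String[] args) {
        // prints [dcm1-vz2:8020, dcm1-vz3:8020]
        System.out.println(namenodeAddresses(sampleConf(), "mas"));
    }
}
```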

Thanks,

Bryan


On Fri, Dec 2, 2016 at 5:25 AM, Provenzano Nicolas <
[email protected]> wrote:

> Hello all,
>
>
>
> I configured a PutHDFS processor as follows:
>
> Hadoop Configuration Resources: /etc/hadoop/conf/core-site.xml,/etc/hadoop/conf/hdfs-site.xml
>
> The core site file contains:
>
>
>   <property>
>     <name>fs.defaultFS</name>
>     <value>hdfs://mas:8020/</value>
>   </property>
>
> While the hdfs site file contains:
>
>
>   <property>
>     <name>dfs.nameservices</name>
>     <value>mas</value>
>   </property>
>
>   <property>
>     <name>dfs.ha.namenodes.mas</name>
>     <value>dcm1-vz2,dcm1-vz3</value>
>   </property>
>
>   <property>
>     <name>dfs.namenode.rpc-address.mas.dcm1-vz2</name>
>     <value>dcm1-vz2:8020</value>
>   </property>
>
>   <property>
>     <name>dfs.namenode.rpc-address.mas.dcm1-vz3</name>
>     <value>dcm1-vz3:8020</value>
>   </property>
>
> However, when I start the processor, the following error occurs:
>
>
> Caused by: java.nio.channels.UnresolvedAddressException: null
>         at sun.nio.ch.Net.checkAddress(Net.java:101) ~[na:1.8.0_72]
>         at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:622) ~[na:1.8.0_72]
>         at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192) ~[na:na]
>         at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530) ~[na:na]
>         at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494) ~[na:na]
>         at org.apache.nifi.processors.hadoop.AbstractHadoopProcessor.checkHdfsUriForTimeout(AbstractHadoopProcessor.java:348) ~[na:na]
>         at org.apache.nifi.processors.hadoop.AbstractHadoopProcessor.resetHDFSResources(AbstractHadoopProcessor.java:270) ~[na:na]
>         at org.apache.nifi.processors.hadoop.AbstractHadoopProcessor.abstractOnScheduled(AbstractHadoopProcessor.java:213) ~[na:na]
>         at org.apache.nifi.processors.hadoop.PutHDFS.onScheduled(PutHDFS.java:181) ~[na:na]
>
> If I change fs.defaultFS to the host name of the name node, it works!
>
> So I was wondering whether the use of an HDFS nameservice is actually
> supported in NiFi processors? If it is, how should the processor be
> configured?
>
>
>
> Thanks in advance,
>
>
>
> BR
>
>
>
> Nicolas
>
>
>
