Howdy,

{In my mind, this isn't really a dev question; maybe I need to be
separately subscribed to some other list as well as dev?  In any event,
this bounced when I sent it to us...@nifi.apache.org, so I figured I'd try
what had worked before.}

I'm trying to do pretty much what it says on the tin.

My NiFi is running on a local (to me) VM.
The EMR cluster is off in the Amazon cloud somewhere.
I grabbed the hdfs-site and core-site files from the namenode.
Since my NiFi is external to the cluster, I changed the hostname in those
files to the externally addressable hostname of the namenode.
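
For concreteness, the edit I made was along these lines — a minimal sketch,
where the hostname and port are placeholders (EMR's actual fs.defaultFS value
may differ, so treat this as an assumption, not my exact file):

```xml
<!-- core-site.xml, copied to the NiFi side; only fs.defaultFS shown.
     The hostname below is a hypothetical stand-in for the namenode's
     externally addressable name; 8020 is the HDFS default RPC port. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode.example.com:8020</value>
  </property>
</configuration>
```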

But, when I start the PutHDFS processor, it almost immediately reports that
it was unable to initialize the Processor because the connection was
refused.  (N.B. even in the log file, it doesn't say what connection
[IP or port or anything] it was attempting to make, nor why it
failed.)  Fiddling with telnet (inside the cluster) is starting to suggest
to me that maybe the HDFS ports are only bound to the internal
addresses, and that remotely accessing HDFS might be a difficult path
forward.
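
In case it helps anyone reproduce, roughly what I've been poking at — a sketch,
assuming the default RPC port 8020 and placeholder hostnames (not my real ones):

```shell
# On the EMR master node: see which local address the NameNode RPC port
# is bound to. If the "Local Address" column shows an internal IP
# (e.g. 172.31.x.x:8020) rather than 0.0.0.0:8020, the port is not
# reachable from outside the VPC at all.
sudo netstat -tlnp | grep 8020

# Confirm the actual port/URI EMR is using, rather than assuming 8020:
hdfs getconf -confKey fs.defaultFS

# From the NiFi VM: test raw TCP reachability of the external hostname.
nc -vz namenode.example.com 8020
```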

But, I feel like using NiFi to write into HDFS is pretty much the origin
story for the project, so, there's probably a good way to do this, right?
Anybody got a suggestion?

thx,

mew
