PutHDFS uses the HDFS FileSystem API to copy data to HDFS. There is no requirement 
that HDFS and NiFi run on the same machine. The PutHDFS processor asks for the 
locations of core-site.xml and hdfs-site.xml, which tell the FileSystem API 
where to copy the data. These configuration files need to be accessible on the 
host running NiFi. When these configuration paths are not set, the HDFS 
FileSystem API will likely default to localhost.
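As a rough sketch, a core-site.xml that points the HDFS client at a remote cluster might look like the following (namenode.example.com:8020 is a placeholder; substitute your actual NameNode address and port):

```xml
<configuration>
  <!-- fs.defaultFS tells the HDFS client which NameNode to connect to.
       "namenode.example.com:8020" is a placeholder hostname/port. -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode.example.com:8020</value>
  </property>
</configuration>
```

In practice you would copy core-site.xml and hdfs-site.xml from the cluster to the NiFi host and reference their local paths in the processor's Hadoop Configuration Resources property.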

--
Arpit

On Mar 28, 2016, at 2:49 PM, Thad Guidry 
<thadgui...@gmail.com> wrote:

If I want to run a local NiFi instance that uses a PutHDFS processor to flow to 
a remote HDFS... how is this accomplished? It seems as though PutHDFS is 
expecting both a NiFi service and an HDFS service running on the same machine? 
What if I don't want to run NiFi on my Hadoop cluster?

Thad
+ThadGuidry<https://www.google.com/+ThadGuidry>
