Sorry for the probably very fundamental nature of these questions, but I have
been reading material online all day and still am not sure.

I am messing with Camel with HDFS, but my questions generally relate to
using Camel to integrate any two (or more) systems that are remote (not on
same machine, not running in same JVM). Assume that I am not using an ESB,
because for my larger goal, an ESB is not an option. I need to get disparate
systems to work with one another with no ESB.

Here's a simplified scenario.  I have a hadoop cluster and I have a server
with a directory of files (dir name is "in"). As a PoC, I want to transfer
those files to HDFS on my cluster and back to my server.

To transfer the file to HDFS on the cluster, I assume I would use the
Camel-hdfs component like this:

from("file://in").to("hdfs://<IP-ADDR-OF-MY-HADOOP-NAMENODE>:9000/tmp/test"); 
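To make sure I understand where that route would live, here is my guess at a full standalone program around it. This is only a sketch of what I think is needed, assuming camel-core and camel-hdfs are on the classpath; "namenode-host" is just a placeholder for my namenode's address, and the class name is made up:

```java
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class FileToHdfsRoute {
    public static void main(String[] args) throws Exception {
        // Create a Camel context and register the file -> HDFS route in it
        CamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // "namenode-host" is a placeholder for the namenode's address
                from("file://in")
                    .to("hdfs://namenode-host:9000/tmp/test");
            }
        });
        context.start();
        Thread.sleep(10000); // let the route poll the "in" directory for a while
        context.stop();
    }
}
```

Is that roughly the shape of it, i.e. the route runs in a plain JVM on my server, and only the URI points at the remote cluster?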

First question: I was confused initially (maybe I still am!) about how Camel
would know *where* to transfer the file. My assumption is that I identify
the target by putting the IP address of the remote system into the endpoint
URI. Is this the general pattern?

Second question: What is that URI in the to() supposed to be? Is it the URL
of a web service -- in this case, of HDFS's HTTP API? Or does Camel provide
some web-addressable wrapper around services that are local to a machine?

Third question: To get remote systems working together using Camel,
generally do I have to install something Camel-related on each participating
machine? If so, what? For example (a wild stab), would I have to put the
camel-hdfs component jar on the classpath? Or something else entirely?
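Continuing that wild stab: on the machine that runs the route, is it just a matter of declaring the component as a dependency? Assuming a Maven build (the version number below is only an example, not something I've verified), I'd guess something like:

```xml
<!-- wild guess: are these the only Camel artifacts the routing machine needs? -->
<dependency>
  <groupId>org.apache.camel</groupId>
  <artifactId>camel-core</artifactId>
  <version>2.13.0</version>
</dependency>
<dependency>
  <groupId>org.apache.camel</groupId>
  <artifactId>camel-hdfs</artifactId>
  <version>2.13.0</version>
</dependency>
```

And, crucially: does the Hadoop cluster itself need anything Camel-related installed, or does it just see ordinary HDFS client traffic?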

Thanks for wading into my confusion.

--
View this message in context: 
http://camel.465427.n5.nabble.com/Super-basic-questions-tp5746261.html
Sent from the Camel - Users mailing list archive at Nabble.com.