On 06/03/2014 18:55, Matei Zaharia wrote:
For the native libraries, you can use an existing Hadoop build and just put them on the path. For linking to Hadoop, Spark pulls it in through Maven, but you can run "mvn install" locally on your version of Hadoop to install it into your local Maven cache, and then configure Spark to use that version. Spark never builds Hadoop itself; it just downloads it through Maven.
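A minimal sketch of that workflow, assuming a 2014-era Spark Maven build (the source paths and the custom version string below are placeholders, not from the message; `hadoop.version` is the property Spark's Maven build uses to select its Hadoop dependency):

```shell
# 1. Build your modified Hadoop and install its artifacts into the
#    local Maven cache (~/.m2/repository):
cd /path/to/hadoop-source        # hypothetical checkout location
mvn install -DskipTests

# 2. Build Spark against that locally installed Hadoop by overriding
#    the Hadoop version property (version string is a placeholder):
cd /path/to/spark
mvn package -DskipTests -Dhadoop.version=2.4.0-custom

# 3. At runtime, put the existing Hadoop native libraries on the
#    library path so Spark can load them:
export LD_LIBRARY_PATH=/path/to/hadoop/lib/native:$LD_LIBRARY_PATH
```

Because step 1 populates the local Maven cache, step 2 resolves the custom Hadoop artifacts from there instead of downloading them from a remote repository.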
OK, thanks for the pointers.

-- 
Alan Burlison