On 06/03/2014 17:44, Matei Zaharia wrote:
> Is it an error, or just a warning? In any case, you need to get those libraries from a build of Hadoop for your platform. Then add them to the SPARK_LIBRARY_PATH environment variable in conf/spark-env.sh, or to your -Djava.library.path if launching an application separately.
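For concreteness, a minimal sketch of what that might look like, assuming the native libraries ended up in /opt/hadoop/lib/native (an assumption; substitute whatever path your Hadoop build actually produced):

    # conf/spark-env.sh -- sketch; the path below is an assumption,
    # point it at your platform's native Hadoop libraries
    export SPARK_LIBRARY_PATH=/opt/hadoop/lib/native

    # Or, when launching an application separately:
    #   java -Djava.library.path=/opt/hadoop/lib/native ...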
OK, thanks. Is it possible to build Spark against an existing Hadoop build tree, or does Spark insist on building its own copy of Hadoop? The instructions at https://spark.incubator.apache.org/docs/latest/ seem to suggest that it always builds its own Hadoop version.
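For reference: as far as I know Spark doesn't compile Hadoop at all; it pulls the Hadoop client jars from Maven for whatever version you select at build time. Under the 0.9-era docs that selection looked roughly like this (the version number here is only an example):

    # Select the Hadoop version Spark links against at build time
    SPARK_HADOOP_VERSION=2.2.0 sbt/sbt assembly

So a separate Hadoop build tree shouldn't be needed for the Spark build itself, only for the native libraries at run time.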
I may also have to fiddle with Hadoop to get it to build on Solaris if the instructions at http://www.oracle.com/technetwork/articles/servers-storage-admin/sol-howto-native-hadoop-s11-1946524.html are still relevant.
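If it does come to rebuilding Hadoop, the stock native-library build on other platforms is roughly this, per Hadoop's BUILDING.txt (whether it works unmodified on Solaris is exactly what that article covers):

    # Build Hadoop with the native (libhadoop) libraries included.
    # Needs a native toolchain plus cmake and zlib headers.
    mvn package -Pdist,native -DskipTests -Dtar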
> These libraries just speed up some compression codecs BTW, so it should be fine to run without them too.
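A quick way to confirm which native codecs actually loaded, assuming your Hadoop build is recent enough to ship the checknative subcommand:

    # Lists each native library (zlib, snappy, lz4, ...) and whether
    # Hadoop managed to load it
    hadoop checknative -a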
Yes, it works as-is, but I have a need for speed :-)

Thanks,

--
Alan Burlison