Is it an error, or just a warning? In any case, you need to get those libraries 
from a build of Hadoop for your platform. Then add them to the 
SPARK_LIBRARY_PATH environment variable in conf/spark-env.sh, or to your 
-Djava.library.path if launching an application separately.
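For example, the change to conf/spark-env.sh might look like this — a minimal sketch, assuming your platform build of the Hadoop native libraries landed in /opt/hadoop/lib/native (adjust the path to wherever your build put libhadoop.so):

```shell
# conf/spark-env.sh
# Point Spark at the Hadoop native libraries (path is an example; use your own build location).
export SPARK_LIBRARY_PATH=/opt/hadoop/lib/native
```

Or, when launching an application directly, pass the equivalent JVM flag, e.g. `java -Djava.library.path=/opt/hadoop/lib/native ...`.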

These libraries just speed up some compression codecs BTW, so it should be fine 
to run without them too.

Matei

On Mar 6, 2014, at 9:04 AM, Alan Burlison <alan.burli...@oracle.com> wrote:

> Hi,
> 
> I've successfully built 0.9.0-incubating on Solaris using sbt, following the 
> instructions at http://spark.incubator.apache.org/docs/latest/ and it seems 
> to work OK. However, when I start it up I get an error about missing Hadoop 
> native libraries. I can't find any mention of how to build the native 
> components in the instructions, how is that done?
> 
> Thanks,
> 
> -- 
> Alan Burlison
> --
