On 25 Mar 2015, at 21:54, roni <roni.epi...@gmail.com> wrote:

Is there any way that I can install the new one and remove the previous version?
I installed Spark 1.3 on my EC2 master and set the Spark home to the new one.
But when I start the spark-shell I get -
 java.lang.UnsatisfiedLinkError: org.apache.hadoop.security.JniBasedUnixGroupsMapping.anchorNative()V
     at org.apache.hadoop.security.JniBasedUnixGroupsMapping.anchorNative(Native Method)

Is there no way to upgrade without creating a new cluster?
Thanks
Roni

This isn't a Spark version problem as such; it's a Hadoop version mismatch.

If you see this, it means the Hadoop JARs shipping with Spark 1.3 are trying to 
bind to a native method implemented in libhadoop.so, but that method isn't 
present in the native library installed on your cluster.
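
If you want to confirm that, a quick check on the master node is to compare the 
Hadoop version the cluster is running against the native library it loads. 
Something along these lines should do it (the checknative subcommand only exists 
in newer Hadoop 2.x releases, and the library location depends on how spark-ec2 
laid things out):

  # what Hadoop version is the cluster actually running?
  hadoop version

  # which native libraries does that Hadoop load? (Hadoop 2.4+ only)
  hadoop checknative -a

  # where does the native library live? location varies by install
  find / -name 'libhadoop.so*' 2>/dev/null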

Possible fixes

1. Simplest: find out which version of Hadoop is running in the EC2 cluster, 
and get a build of Spark 1.3 compiled against that version. If you can't find 
one, it's easy enough to check out the 1.3.0 release from github/ASF git and 
build it yourself (there's a sample build command at the end of this mail).

2. Upgrade the underlying Hadoop cluster to the version your Spark 1.3 build expects.

3. Find where libhadoop.so lives on your VMs and overwrite it with the 
libhadoop.so from the Hadoop version used in the build of Spark 1.3 (sketched 
below), relying on the Hadoop team's intent to keep updated native binaries 
backwards compatible across branch-2 releases (i.e. they only add functions, 
never remove or rename them).

#3 is an ugly hack which may work immediately, but once you get into the game of 
mixing artifacts from different Hadoop releases you are on a slippery slope 
towards an unmaintainable Hadoop cluster.
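
If you do try #3 anyway, it comes down to copying the newer native library over 
the old one on every node. Roughly like this; the paths here are purely 
illustrative, use whatever the find above turned up, and keep a backup:

  # back up the existing native library, then drop in the newer build
  cp /usr/lib/hadoop/lib/native/libhadoop.so /usr/lib/hadoop/lib/native/libhadoop.so.bak
  cp /path/to/newer/libhadoop.so /usr/lib/hadoop/lib/native/libhadoop.so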

I'd go for tactic #1 first
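
For reference, building 1.3.0 against a specific Hadoop version is a single 
Maven invocation from the source checkout. Roughly like this, substituting the 
version your cluster reports (the available -Phadoop-* profiles are listed in 
the Spark building docs):

  # from the spark-1.3.0 source directory
  build/mvn -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests clean package

  # or, to get a deployable tarball
  ./make-distribution.sh --tgz -Phadoop-2.4 -Dhadoop.version=2.4.0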
