Not sure how great a solution this is, but I thought I'd go ahead and post
it in case anyone else can benefit from it.
I ended up copying my native libraries to HDFS under
/native-libraries/<arch>/, where <arch> is either "Linux-i386-32" or
"Linux-amd64-64". Then I used this code in my Mapper's configure().
By this, I assume you mean $HADOOP_HOME/lib/native/.
Yes and no. The code I want to call is a JNI wrapper around a legacy C
shared library. So, I have the legacy shared library (libFoo.so) and a Java
class Foo.java which contains native methods (these native methods are
implemented in a separate JNI wrapper library built on top of libFoo.so).
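Concretely, the setup being described is something like this sketch (only
libFoo.so is named in the thread; the wrapper library's name, FooWrapper, is
a placeholder):

public class Foo {
  static {
    // Load the legacy library first so it is already mapped into the
    // process when the dynamic linker resolves the wrapper's dependency
    // on it, then load the JNI wrapper that implements the native methods.
    System.loadLibrary("Foo");        // legacy libFoo.so
    System.loadLibrary("FooWrapper"); // placeholder name for the JNI wrapper .so
  }

  // Declared here, implemented natively in the wrapper library.
  public native int doWork(int input);
}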
Would it work if you package your native library under the directory
lib/native/<arch>/...?
On Jul 10, 2009, at 12:46 PM, Todd Lipcon wrote:
Hi Stuart,
Hadoop itself doesn't have any nice way of dealing with this that I know of.
I think your best bet is to do something like:
String dataModel = System.getProperty("sun.arch.data.model");
if ("32".equals(dataModel)) {
  System.loadLibrary("mylib_32bit");   // resolves libmylib_32bit.so on Linux
} else if ("64".equals(dataModel)) {
  System.loadLibrary("mylib_64bit");   // resolves libmylib_64bit.so on Linux
}
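One thing to keep in mind with that approach: System.loadLibrary() searches
the task JVM's java.library.path, so both variants (libmylib_32bit.so and
libmylib_64bit.so on Linux) would need to be somewhere on that path on every
node, e.g. alongside Hadoop's own native libraries or in the task's working
directory.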
My Hadoop cluster is a combination of i386-32bit and amd64-64bit machines.
I have some native code that I need to execute from my mapper. I have
different native libraries for the different architectures.
How can I accomplish this? I've looked at using -files or DistributedCache
to push the native libraries out to the nodes.