Re: jni files

2010-07-09 Thread Hemanth Yamijala
Hi,

Possibly another silly question, but can you cross check if the
versions of Hadoop on the client and the server are the same ?

Thanks
hemanth

On Thu, Jul 8, 2010 at 10:57 PM, Allen Wittenauer
awittena...@linkedin.com wrote:

 On Jul 8, 2010, at 1:08 AM, amit kumar verma wrote:

     DistributedCache.addCacheFile(hdfs://*
     /192.168.0.153:50075*/libraries/mylib.so.1#mylib.so, conf);

 Do you actually have asterisks in this?  If so, that's the problem.




Re: jni files

2010-07-09 Thread Hemanth Yamijala
Amit,

On Fri, Jul 9, 2010 at 2:39 PM, amit kumar verma v.a...@verchaska.com wrote:
  Hi Hemanth,

 The versions are the same, as I copied it to all the client machines.

 I think I got a solution. As I read more about Hadoop and JNI, I learned
 that I need to copy the JNI files to
 HADOOP_INSTALLATION_DIR/lib/native/Linux-xxx-xxx. I thought my Linux machine
 was Linux-i386-32, but then I found that the org.apache.hadoop.util.PlatformName
 class gives you your machine type; mine is Linux-amd64-64, and as soon as I
 copied the JNI files to this directory the errors stopped appearing.

 Though the full code is still not running, as I developed the application using
 the java.io.File class, and I am still thinking about how to change it so that
 it can access HDFS. Do I need to change all my file handling with respect to
 HDFS and rewrite it using the Hadoop FS API, or ... ?!


To access files from HDFS, you should use the Hadoop FileSystem API.
Please take a look at the Javadoc and also a tutorial such as this:
http://developer.yahoo.com/hadoop/tutorial/module2.html#programmatically
for more information.
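
As a rough illustration (the HDFS path below is just a made-up placeholder, and
fs.default.name is assumed to be set in core-site.xml), reading a file through
the FileSystem API looks roughly like this:

import java.io.BufferedReader;
import java.io.InputStreamReader;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsReadSketch {
  public static void main(String[] args) throws Exception {
    // fs.default.name is normally picked up from core-site.xml on the classpath
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // Hypothetical HDFS path, just for illustration
    Path path = new Path("/user/amit/input.txt");
    BufferedReader reader =
        new BufferedReader(new InputStreamReader(fs.open(path)));
    try {
      String line;
      while ((line = reader.readLine()) != null) {
        System.out.println(line);
      }
    } finally {
      reader.close();
    }
  }
}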

 It would be great if someone could advise on this.



 Thanks,
 Amit Kumar Verma
 Verchaska Infotech Pvt. Ltd.



 On 07/09/2010 02:04 PM, Hemanth Yamijala wrote:

 Hi,

 Possibly another silly question, but can you cross check if the
 versions of Hadoop on the client and the server are the same ?

 Thanks
 hemanth

 On Thu, Jul 8, 2010 at 10:57 PM, Allen Wittenauer
 awittena...@linkedin.com  wrote:

 On Jul 8, 2010, at 1:08 AM, amit kumar verma wrote:

     DistributedCache.addCacheFile(hdfs://*
     /192.168.0.153:50075*/libraries/mylib.so.1#mylib.so, conf);

 Do you actually have asterisks in this?  If so, that's the problem.





Re: jni files

2010-07-09 Thread amit kumar verma

 Hi  Hemanth,

Yeah, I have gone through the API documentation and there is no issue in
accessing files from HDFS, but my concern is about the application code that
was already developed without Hadoop. What I mean is, I developed the
application before I knew about Hadoop, but now I need to run it in a grid
environment, so I am looking at Hadoop.

So now the question is: how can I make the same code work with HDFS? Do I
need to change my code and use the Hadoop API to access HDFS? If that is
the case, then the change will be major; or is there some way the default
java.io.File can be integrated with HDFS?

Did you get the issue?

Thanks,
Amit Kumar Verma
Verchaska Infotech Pvt. Ltd.



On 07/09/2010 02:47 PM, Hemanth Yamijala wrote:

Amit,

On Fri, Jul 9, 2010 at 2:39 PM, amit kumar verma v.a...@verchaska.com wrote:

  Hi Hemanth,

The versions are the same, as I copied it to all the client machines.

I think I got a solution. As I read more about Hadoop and JNI, I learned
that I need to copy the JNI files to
HADOOP_INSTALLATION_DIR/lib/native/Linux-xxx-xxx. I thought my Linux machine
was Linux-i386-32, but then I found that the org.apache.hadoop.util.PlatformName
class gives you your machine type; mine is Linux-amd64-64, and as soon as I
copied the JNI files to this directory the errors stopped appearing.

Though the full code is still not running, as I developed the application using
the java.io.File class, and I am still thinking about how to change it so that
it can access HDFS. Do I need to change all my file handling with respect to
HDFS and rewrite it using the Hadoop FS API, or ... ?!


To access files from HDFS, you should use the Hadoop FileSystem API.
Please take a look at the Javadoc and also a tutorial such as this:
http://developer.yahoo.com/hadoop/tutorial/module2.html#programmatically
for more information.


It would be great if someone could advise on this.



Thanks,
Amit Kumar Verma
Verchaska Infotech Pvt. Ltd.



On 07/09/2010 02:04 PM, Hemanth Yamijala wrote:

Hi,

Possibly another silly question, but can you cross check if the
versions of Hadoop on the client and the server are the same ?

Thanks
hemanth

On Thu, Jul 8, 2010 at 10:57 PM, Allen Wittenauer
awittena...@linkedin.com wrote:

On Jul 8, 2010, at 1:08 AM, amit kumar verma wrote:


 DistributedCache.addCacheFile(hdfs://*
 /192.168.0.153:50075*/libraries/mylib.so.1#mylib.so, conf);

Do you actually have asterisks in this?  If so, that's the problem.




Re: jni files

2010-07-09 Thread Hemanth Yamijala
Amit,

On Fri, Jul 9, 2010 at 3:00 PM, amit kumar verma v.a...@verchaska.com wrote:
  Hi  Hemanth,

 Yeah, I have gone through the API documentation and there is no issue in
 accessing files from HDFS, but my concern is about the application code that
 was already developed without Hadoop. What I mean is, I developed the
 application before I knew about Hadoop, but now I need to run it in a grid
 environment, so I am looking at Hadoop.

 So now the question is: how can I make the same code work with HDFS? Do I
 need to change my code and use the Hadoop API to access HDFS? If that is
 the case, then the change will be major; or is there some way the default
 java.io.File can be integrated with HDFS?

 Did you get the issue?


Yes, I think I do. Unfortunately, AFAIK, there's no easy way out. If
your application previously used the Java I/O File APIs, those calls need to
be migrated to the Hadoop FS API. If you are moving from a
non-distributed application to Hadoop for a reason (such as handling
scale), the investment will be well worth the effort, IMHO.
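
As a rough sketch of what such a migration involves (the method names and
paths here are invented for illustration), a java.io.File-based method like
openLocal() below would be rewritten against the FileSystem API as in
openOnHdfs():

import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class MigrationSketch {

  // Before: local-only I/O through java.io.File
  static InputStream openLocal(String pathname) throws Exception {
    return new FileInputStream(new File(pathname));
  }

  // After: the same operation through the Hadoop FileSystem API, which can
  // talk to HDFS (or the local file system) depending on the configured
  // fs.default.name
  static InputStream openOnHdfs(String pathname) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    return fs.open(new Path(pathname));
  }
}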

 Thanks,
 Amit Kumar Verma
 Verchaska Infotech Pvt. Ltd.



 On 07/09/2010 02:47 PM, Hemanth Yamijala wrote:

 Amit,

 On Fri, Jul 9, 2010 at 2:39 PM, amit kumar verma v.a...@verchaska.com
  wrote:

  Hi Hemanth,

 The versions are the same, as I copied it to all the client machines.

 I think I got a solution. As I read more about Hadoop and JNI, I learned
 that I need to copy the JNI files to
 HADOOP_INSTALLATION_DIR/lib/native/Linux-xxx-xxx. I thought my Linux machine
 was Linux-i386-32, but then I found that the org.apache.hadoop.util.PlatformName
 class gives you your machine type; mine is Linux-amd64-64, and as soon as I
 copied the JNI files to this directory the errors stopped appearing.

 Though the full code is still not running, as I developed the application
 using the java.io.File class, and I am still thinking about how to change it
 so that it can access HDFS. Do I need to change all my file handling with
 respect to HDFS and rewrite it using the Hadoop FS API, or ... ?!

 To access files from HDFS, you should use the Hadoop FileSystem API.
 Please take a look at the Javadoc and also a tutorial such as this:
 http://developer.yahoo.com/hadoop/tutorial/module2.html#programmatically
 for more information.

 It would be great if someone could advise on this.



 Thanks,
 Amit Kumar Verma
 Verchaska Infotech Pvt. Ltd.



 On 07/09/2010 02:04 PM, Hemanth Yamijala wrote:

 Hi,

 Possibly another silly question, but can you cross check if the
 versions of Hadoop on the client and the server are the same ?

 Thanks
 hemanth

 On Thu, Jul 8, 2010 at 10:57 PM, Allen Wittenauer
 awittena...@linkedin.com    wrote:

 On Jul 8, 2010, at 1:08 AM, amit kumar verma wrote:

     DistributedCache.addCacheFile(hdfs://*
     /192.168.0.153:50075*/libraries/mylib.so.1#mylib.so, conf);

 Do you actually have asterisks in this?  If so, that's the problem.





Re: jni files

2010-07-09 Thread Allen Wittenauer

On Jul 9, 2010, at 2:09 AM, amit kumar verma wrote:
 I think I got a solution. As I read more about Hadoop and JNI, I learned that
 I need to copy the JNI files to HADOOP_INSTALLATION_DIR/lib/native/Linux-xxx-xxx.

lib/native/xxx are for the native compression libraries.  They are not for 
user-level map reduce code.
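
For a user-level JNI library shipped through the DistributedCache as in the
steps quoted earlier in the thread, a sketch of the task-side loading might
look like the following (it assumes the #mylib.so fragment on the cache URI
produced a symlink named mylib.so in the task's working directory):

import java.io.File;
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class NativeLibMapper extends Mapper<LongWritable, Text, Text, Text> {

  @Override
  protected void setup(Context context) throws IOException, InterruptedException {
    // The #mylib.so fragment on the cache file URI is assumed to have created
    // a symlink named mylib.so in the task's working directory.
    // System.load() takes an absolute path; System.loadLibrary() would instead
    // search java.library.path for lib<name>.so.
    System.load(new File("mylib.so").getAbsolutePath());
  }

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    // ... call into the native code here ...
  }
}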




jni files

2010-07-08 Thread amit kumar verma

 Hi,

I developed a project which uses some native JNI files
(liblemur_jni.so). Earlier I used to run the application jar with
-Djava.library.path=/PATH_TO_JNI_FILES, but I am not able to do the same with
the ./hadoop jar command.


I followed 
http://hadoop.apache.org/common/docs/r0.18.3/native_libraries.html


  1. First copy the library to the HDFS.
 bin/hadoop fs -copyFromLocal mylib.so.1 /libraries/mylib.so.1
  2. The job launching program should contain the following:
 DistributedCache.createSymlink(conf);
 DistributedCache.addCacheFile(hdfs://*
 /192.168.0.153:50075*/libraries/mylib.so.1#mylib.so, conf);
  3. The map/reduce task can contain:
 System.loadLibrary(mylib.so);

 but I am getting this error:

Exception in thread "main" java.io.IOException: Call to
/192.168.0.153:50075 failed on local exception: java.io.EOFException

at org.apache.hadoop.ipc.Client.wrapException(Client.java:775)
at org.apache.hadoop.ipc.Client.call(Client.java:743)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
at $Proxy1.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
at 
org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)

at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)

at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
at 
org.apache.hadoop.filecache.DistributedCache.getTimestamp(DistributedCache.java:506)
at 
org.apache.hadoop.mapred.JobClient.configureCommandLineOptions(JobClient.java:640)
at 
org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:761)

at org.apache.hadoop.mapreduce.Job.submit(Job.java:432)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:447)
at com.i4dweb.trobo.grid.WordCountNew.main(WordCountNew.java:49)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)

at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:375)
at 
org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:508)

at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)


Please advise.

--
Thanks,
Amit Kumar Verma
Verchaska Infotech Pvt. Ltd.




Re: jni files

2010-07-08 Thread Allen Wittenauer

On Jul 8, 2010, at 1:08 AM, amit kumar verma wrote:

 DistributedCache.addCacheFile(hdfs://*
 /192.168.0.153:50075*/libraries/mylib.so.1#mylib.so, conf);

Do you actually have asterisks in this?  If so, that's the problem.



Re: jni files

2010-07-08 Thread amit kumar verma

 Hi,

No, there is no asterisk there; it came in because I tried to make this *bold*.
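
For reference, without any markup a minimal sketch of the job-launching calls
looks roughly like this (the namenode host and port below are placeholders,
not necessarily the right values for this cluster):

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.filecache.DistributedCache;

public class CacheSetupSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // Ask the framework to create a symlink named mylib.so in each task's
    // working directory, pointing at the cached copy of mylib.so.1
    DistributedCache.createSymlink(conf);
    DistributedCache.addCacheFile(
        new URI("hdfs://namenode-host:9000/libraries/mylib.so.1#mylib.so"), conf);

    // ... then set up and submit the job with this conf ...
  }
}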


Thanks,
Amit Kumar Verma
Verchaska Infotech Pvt. Ltd.



On 07/08/2010 10:57 PM, Allen Wittenauer wrote:

On Jul 8, 2010, at 1:08 AM, amit kumar verma wrote:


 DistributedCache.addCacheFile(hdfs://*
 /192.168.0.153:50075*/libraries/mylib.so.1#mylib.so, conf);

Do you actually have asterisks in this?  If so, that's the problem.