RE: No FileSystem for scheme: hdfs when using hadoop-2.8.0 jars

2017-07-31 Thread omprakash
Hi Surendra, thanks a lot for the help. After adding this jar the error is gone. Regards, Om Prakash

Re: How to write a Job for importing Files from an external Rest API into Hadoop

2017-07-31 Thread Ralph Soika
Hi Ravi, thanks a lot for your response and the code example! I think this will help me a lot to get started. I am glad to see that my idea is not too exotic. I will report back if I can adapt the solution to my problem. Best regards, Ralph

Re: Shuffle buffer size in presence of small partitions

2017-07-31 Thread Ravi Prakash
Hi Robert! I'm sorry, I do not have a Windows box and probably don't understand the shuffle process well enough. Could you please create a JIRA in the mapreduce project if you would like this fixed upstream? https://issues.apache.org/jira/secure/RapidBoard.jspa?rapidView=116=MAPREDUCE Thanks, Ravi

Re: How to write a Job for importing Files from an external Rest API into Hadoop

2017-07-31 Thread Ravi Prakash
Hi Ralph! Although not totally similar to your use case, DistCp may be the closest thing to what you want. https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java . The client builds a file list, and then submits an MR job to copy
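The DistCp shape Ravi points at (the client builds a listing first, then workers copy each listed entry) can be sketched without Hadoop at all. This is only the pattern, not DistCp itself: plain `java.nio` stands in for HDFS, and a sequential loop stands in for the MR job; all paths and names here are placeholders.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.ArrayList;
import java.util.List;

// Sketch of the DistCp-style two-phase copy: phase 1 builds a file listing
// on the client, phase 2 (in DistCp, an MR job; here, a plain loop) copies
// each listed entry to the target directory.
public class TwoPhaseCopy {
    // Phase 1: scan the source directory and record every regular file.
    static List<Path> buildListing(Path source) throws IOException {
        List<Path> listing = new ArrayList<>();
        try (DirectoryStream<Path> ds = Files.newDirectoryStream(source)) {
            for (Path p : ds) {
                if (Files.isRegularFile(p)) listing.add(p);
            }
        }
        return listing;
    }

    // Phase 2: copy each listed file; in DistCp, each mapper handles a slice
    // of the listing in parallel.
    static void copyListing(List<Path> listing, Path target) throws IOException {
        Files.createDirectories(target);
        for (Path p : listing) {
            Files.copy(p, target.resolve(p.getFileName()),
                       StandardCopyOption.REPLACE_EXISTING);
        }
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempDirectory("src");
        Path dst = Files.createTempDirectory("dst");
        Files.write(src.resolve("a.txt"), "hello".getBytes());
        copyListing(buildListing(src), dst);
        System.out.println(Files.exists(dst.resolve("a.txt"))); // true
    }
}
```

For Ralph's REST-import use case, phase 1 would query the REST API for the list of document IDs instead of walking a directory, and phase 2 would fetch each document and write it to HDFS.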

Could not find or load main class org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer

2017-07-31 Thread liang
I enabled the HDFS/YARN security mode and tried my project. However, it always reports the following issue: "Could not find or load main class org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer" 2017-07-28 20:33:11,089 INFO

Shuffle buffer size in presence of small partitions

2017-07-31 Thread Robert Schmidtke
Hi all, I just ran into an issue, which likely resulted from my not very intelligent configuration, but nonetheless I'd like to share this with the community. This is all on Hadoop 2.7.3. In my setup, each reducer roughly fetched 65K from each mapper's spill file. I disabled transferTo during
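For readers hitting the same behaviour: in Hadoop 2.x the transferTo toggle Robert mentions, and the buffer used on the fallback copy path, are shuffle-handler properties. The names below are taken from the 2.7.x `mapred-default.xml`; verify them against your own version before relying on them.

```xml
<!-- mapred-site.xml: disable zero-copy transferTo in the shuffle handler,
     forcing the buffered-copy path instead (default is true) -->
<property>
  <name>mapreduce.shuffle.transferTo.allowed</name>
  <value>false</value>
</property>
<!-- Buffer size (bytes) used when transferTo is disabled; default 128 KB -->
<property>
  <name>mapreduce.shuffle.transfer.buffer.size</name>
  <value>131072</value>
</property>
```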

RE: No FileSystem for scheme: hdfs when using hadoop-2.8.0 jars

2017-07-31 Thread surendra lilhore
Hi Omprakash, I feel hadoop-hdfs-client-2.8.0.jar is missing from your classpath. After the 2.7.2 release, the org.apache.hadoop.hdfs.DistributedFileSystem class was moved from hadoop-hdfs-x.x.x.jar to hadoop-hdfs-client-x.x.x.jar. Regards, Surendra
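Assuming a Maven build (the thread does not say which build tool is used), the jar Surendra names can be pulled in with a dependency along these lines, matching the 2.8.0 cluster version from this thread:

```xml
<!-- hadoop-hdfs-client carries org.apache.hadoop.hdfs.DistributedFileSystem
     in 2.8.0; without it, "No FileSystem for scheme: hdfs" is raised -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-hdfs-client</artifactId>
  <version>2.8.0</version>
</dependency>
```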

RE: No FileSystem for scheme: hdfs when using hadoop-2.8.0 jars

2017-07-31 Thread omprakash
Hi, I am executing the client from Eclipse on my dev machine; the Hadoop cluster is a remote machine. I have added the required jars (including hadoop-hdfs-2.8.0.jar) to the project's classpath, but the issue is still there. I removed them and added them again, but no success. Regards, Omprakash

RE: No FileSystem for scheme: hdfs when using hadoop-2.8.0 jars

2017-07-31 Thread Brahma Reddy Battula
Looks like the jar (hadoop-hdfs-2.8.0.jar) is missing from the classpath. Please check the client classpath. Might be there are no permissions, or this jar was missed while copying? Reference: org.apache.hadoop.fs.FileSystem#getFileSystemClass: if (clazz == null) { throw new
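Brahma's reference is to the scheme-to-class lookup inside FileSystem. As a rough illustration of why a missing jar produces exactly this message, the lookup amounts to the sketch below. This is a simplified stand-in, not the real Hadoop source: Hadoop actually discovers implementations via a ServiceLoader over the classpath plus `fs.<scheme>.impl` configuration keys, which is modelled here as a plain map.

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of FileSystem#getFileSystemClass: a scheme resolves to
// an implementation class only if some jar on the classpath registered one;
// otherwise the lookup fails with the message seen in this thread.
public class SchemeLookup {
    private static final Map<String, String> REGISTERED = new HashMap<>();

    // Stand-in for a jar on the classpath registering its FileSystem impl.
    public static void register(String scheme, String className) {
        REGISTERED.put(scheme, className);
    }

    public static String getFileSystemClass(String scheme) throws IOException {
        String clazz = REGISTERED.get(scheme);
        if (clazz == null) {
            throw new IOException("No FileSystem for scheme: " + scheme);
        }
        return clazz;
    }

    public static void main(String[] args) throws IOException {
        // With the hdfs client jar absent, nothing has registered "hdfs":
        try {
            getFileSystemClass("hdfs");
        } catch (IOException e) {
            System.out.println(e.getMessage()); // No FileSystem for scheme: hdfs
        }
        // Once the jar is on the classpath, the lookup succeeds:
        register("hdfs", "org.apache.hadoop.hdfs.DistributedFileSystem");
        System.out.println(getFileSystemClass("hdfs"));
    }
}
```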

No FileSystem for scheme: hdfs when using hadoop-2.8.0 jars

2017-07-31 Thread omprakash
Hi all, I have moved my Hadoop 2.7.0 cluster to version 2.8.0. I have a client application that uses HDFS to get and store files, but after replacing the 2.7.0 jars with the new jars (version 2.8.0) I am facing the exception below: Exception in thread "main" java.io.IOException: No FileSystem for