Re: Namenode shutdown due to long GC Pauses

2016-02-25 Thread Namikaze Minato
This happened to us. Our namenodes are on a virtual machine, and reducing the number of replication locations of the journal node to 1 (it's backed by a safe raid array anyway) solved the problem. Regards, LLoyd On 25 February 2016 at 06:39, Gokulakannan M (Engineering - Data Platform) wrote:

unsubscribe

2016-02-25 Thread Ram

Re: Namenode shutdown due to long GC Pauses

2016-02-25 Thread Sandeep Nemuri
You may need to tune your GC settings. On Thu, Feb 25, 2016 at 3:04 PM, Namikaze Minato wrote: > This happened to us. Our namenodes are on a virtual machine, and > reducing the number of replication locations of the journal node to > 1 (it's backed by a safe raid array anyway) solved the p

Re: Namenode shutdown due to long GC Pauses

2016-02-25 Thread bappa kon
Which garbage collector are you currently using in your env? Can you share the JVM parameters? If you are using CMS and have already optimized your parameters, then you can probably look at the G1 garbage collector. First you should look at the GC stats and pattern to find out the cause of the long GC. Reg

Re: Namenode shutdown due to long GC Pauses

2016-02-25 Thread Gokulakannan M (Engineering - Data Platform)
Hi Jitendra, Trying to find the pattern, but one thing observed is that the metric *RpcDetailedActivity.GetServerDefaultsNumOps* is pretty high (around 14 million) when the long pause happened. The G1 garbage collector is used already. These are the main JVM parameters: -XX:+UseG1GC -XX:ParallelGCThreads
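A quick way to watch that counter over time is the NameNode's /jmx servlet. Below is a minimal Java sketch of polling it; the host name, the default HTTP port 50070, and the RpcDetailedActivityForPort8020 bean name are assumptions to check against your own cluster, not values from this thread.

    // Hypothetical monitoring helper: fetch the NameNode RPC-detail metrics as JSON
    // so spikes in GetServerDefaultsNumOps can be correlated with GC pauses.
    import java.io.InputStream;
    import java.net.URL;
    import java.util.Scanner;

    public class NameNodeRpcMetrics {
        public static void main(String[] args) throws Exception {
            // "namenode-host", port 50070 and the bean name are assumptions.
            String url = "http://namenode-host:50070/jmx"
                    + "?qry=Hadoop:service=NameNode,name=RpcDetailedActivityForPort8020";
            try (InputStream in = new URL(url).openStream();
                 Scanner s = new Scanner(in, "UTF-8").useDelimiter("\\A")) {
                // The servlet returns JSON; look for GetServerDefaultsNumOps in the output.
                System.out.println(s.hasNext() ? s.next() : "");
            }
        }
    }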

Hadoop Installation and startup.

2016-02-25 Thread Vinodh Nagaraj
Hi All, I installed Hadoop, then formatted it. I tried to execute start-dfs.cmd, but I got an error. [image: Inline image 1] Thanks & Regards, Vinodh.N

Re: Hadoop Installation and startup.

2016-02-25 Thread Mallanagouda Patil
Hi, Did you set the environment variables properly? PATH and HADOOP_HOME? Thanks Mallan On Feb 25, 2016 4:40 PM, "Vinodh Nagaraj" wrote: > Hi All, > > I installed Hadoop, then formatted it. > > I tried to execute start-dfs.cmd, but I got an error. > > [image: Inline image 1] > > > Thanks & Regards, > Vinodh.N

Click in Application manager gives Error 500

2016-02-25 Thread Roberto Gonzalez
When I click on the application manager link in the web GUI I get an error 500 (Hadoop 2.7.2): HTTP ERROR 500 Problem accessing /proxy/application_1456416539194_0001/. Reason: Connection to http://computer85:8088 refused Caused by: org.apache.http.conn.HttpHostConnectException: Connectio

(Classpath?) Problem with MiniDFSCluster and HBase jars

2016-02-25 Thread Micha
Hi, I ran into classpath problems while trying to use MiniDFSCluster: My code uses the Hadoop (2.7.1) and HBase (1.1.2) APIs, and my classpath includes hadoop/share/hadoop/*.jar and hbase/lib/*.jar, so nothing should be missing. But I got "class not found" errors in the code using MiniDFSCluster. The

Re: (Classpath?) Problem with MiniDFSCluster and HBase jars

2016-02-25 Thread Ted Yu
You can include the following command line parameter when you build from 1.1.2 source: -Dhadoop-two.version=2.7.1 FYI On Thu, Feb 25, 2016 at 8:37 AM, Micha wrote: > Hi, > > I ran into classpath problems while trying to use MiniDFSCluster: > > My code uses the hadoop (2.7.1) and hbase (1.1.2)
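For context, MiniDFSCluster ships in the hadoop-hdfs tests artifact (hadoop-hdfs-2.7.1-tests.jar) rather than the main hadoop-hdfs jar, which is another common reason for "class not found" here. A minimal sketch of the API being exercised, assuming that tests jar is on the classpath along with the usual Hadoop and HBase jars:

    // Minimal smoke test for an in-process HDFS cluster.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    public class MiniDfsSmokeTest {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Start a single-datanode cluster inside this JVM.
            MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
                    .numDataNodes(1)
                    .build();
            try {
                FileSystem fs = cluster.getFileSystem();
                fs.mkdirs(new Path("/tmp/smoke"));
                System.out.println("MiniDFSCluster is up at " + fs.getUri());
            } finally {
                cluster.shutdown();
            }
        }
    }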

YARN control on external Hadoop Streaming

2016-02-25 Thread Prabhu Joseph
Hi All, A Hadoop Streaming job which runs on YARN triggers separate external processes, which can take more memory / CPU. Just want to check if there is any way we can control the memory and CPU resources of the external processes through YARN. Thanks, Prabhu Joseph

libhdfs force close hdfsFile

2016-02-25 Thread Ken Huang
Hi, Does anyone know how to close an hdfsFile while the connection between the hdfsClient and the NameNode is lost? Thanks Ken Huang

Re: Setting up a Hadoop Multi Node Cluster in Windows

2016-02-25 Thread karthi keyan
Hi, For Windows you can look into https://wiki.apache.org/hadoop/Hadoop2OnWindows. By building it via Maven you will be able to get the batch executables (.cmd) working on Windows. Hope you tried this approach to set up a pseudo node cluster. Then follow the steps at http://hadoop.apache.org/doc

YARN REST API to submit a job

2016-02-25 Thread sudeep mishra
Hi, I am trying to submit a Spark job to YARN 2.7.1. The format of the request is as below. Post Data url - http://:8088/ws/v1/cluster/apps header Content-Type: application/json request body: {
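For reference, submission via the RM REST API is a two-step flow: POST to /ws/v1/cluster/apps/new-application to obtain an application-id, then POST the submission context to /ws/v1/cluster/apps. Below is a minimal Java sketch of that flow; the ResourceManager host "rmhost", the placeholder command, and the exact JSON fields shown are assumptions to verify against the ResourceManager REST API documentation for 2.7.1.

    // Hedged sketch of submitting an application through the RM REST endpoints.
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import java.util.Scanner;

    public class YarnRestSubmit {
        static String post(String url, String body) throws Exception {
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "application/json");
            conn.setDoOutput(true);
            try (OutputStream out = conn.getOutputStream()) {
                out.write(body.getBytes(StandardCharsets.UTF_8));
            }
            int code = conn.getResponseCode();
            InputStream in = code < 400 ? conn.getInputStream() : conn.getErrorStream();
            if (in == null) {
                return "HTTP " + code;
            }
            try (Scanner s = new Scanner(in, "UTF-8").useDelimiter("\\A")) {
                return "HTTP " + code + ": " + (s.hasNext() ? s.next() : "");
            }
        }

        public static void main(String[] args) throws Exception {
            String rm = "http://rmhost:8088"; // assumed ResourceManager address

            // Step 1: request a fresh application-id (empty POST body).
            System.out.println(post(rm + "/ws/v1/cluster/apps/new-application", ""));

            // Step 2: submit, using the application-id returned by step 1.
            String submission = "{"
                    + "\"application-id\":\"application_0000000000000_0001\"," // id from step 1
                    + "\"application-name\":\"rest-submitted-app\","
                    + "\"application-type\":\"YARN\","
                    + "\"am-container-spec\":{\"commands\":{\"command\":\"sleep 60\"}},"
                    + "\"resource\":{\"memory\":1024,\"vCores\":1}"
                    + "}";
            System.out.println(post(rm + "/ws/v1/cluster/apps", submission));
        }
    }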