Re: How to stop Oozie flows using CM

2015-03-16 Thread Tsuyoshi Ozawa
I think it would be better to send your question to Cloudera's mailing list.

Thanks,
- Tsuyoshi

On Tue, Mar 17, 2015 at 11:44 AM, kumar jayapal wrote:
> Hello,
>
> May I know how to stop Oozie flows in CDH5 using CM?
>
> Can you please give me a link where I can learn more about it?
>
> Thanks
> Jap

Re: Can't find map or reduce logs when a job ends.

2015-03-16 Thread twinkle sachdeva
Hi,

Please try the following:

yarn logs -applicationId application_1426267324367_0005

Thanks,

On Tue, Mar 17, 2015 at 8:23 AM, Ranadip Chatterjee wrote:
> Is the job history server up and running on the right host and port? If
> so, please check the job history server logs. A common reason is fo
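For reference, the command in the reply pulls the aggregated container logs for a finished application; this is a sketch using the application id from the thread (log aggregation must be enabled on the cluster, i.e. yarn.log-aggregation-enable=true, and your application id will differ):

```shell
# Fetch all aggregated container logs for a finished application
yarn logs -applicationId application_1426267324367_0005

# Optionally redirect to a file for searching
yarn logs -applicationId application_1426267324367_0005 > app_0005_logs.txt
```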

RE: YARN ignores host-specific resource requests

2015-03-16 Thread Naganarasimha G R (Naga)
Hi Gaurav,

If you are using the Capacity Scheduler, then try setting yarn.scheduler.capacity.node-locality-delay to the size of the cluster. By default it is set to -1, which makes the scheduler assign the container to non-local nodes.

Regards,
Naga

From: Gaurav Gupta [g
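The property in the reply lives in capacity-scheduler.xml. A minimal sketch, assuming a 40-node cluster (the value 40 is only an illustration of "size of cluster"):

```xml
<!-- capacity-scheduler.xml: number of missed scheduling opportunities
     to tolerate before relaxing from node-local to rack-local.
     Setting it near the cluster size favors node-local placement. -->
<property>
  <name>yarn.scheduler.capacity.node-locality-delay</name>
  <value>40</value>
</property>
```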

Re: Can't find map or reduce logs when a job ends.

2015-03-16 Thread Ranadip Chatterjee
Is the job history server up and running on the right host and port? If so, please check the job history server logs. A common reason is for the owner of the job history server to not have read permission on the logs, or for the map reduce process owners to not have write permission in the job history l
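The permission check suggested above can be done from the command line. This is a sketch using common default locations; the actual paths come from your yarn.nodemanager.remote-app-log-dir and mapreduce.jobhistory.done-dir settings and vary by distribution:

```shell
# Inspect ownership and permissions on the aggregated-log and
# job-history directories (paths are common defaults, not universal)
hdfs dfs -ls -d /tmp/logs
hdfs dfs -ls -d /user/history/done /user/history/done_intermediate
```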

Got java.lang.NullPointerException on two nodes when I restart the cluster

2015-03-16 Thread SP
Hi,

Did anyone get this error before? Please help.

ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: x.com:50010:DataXceiver error processing WRITE_BLOCK operation src: /:39000 dst: /xx:50010
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datano

How to stop Oozie flows using CM

2015-03-16 Thread kumar jayapal
Hello,

May I know how to stop Oozie flows in CDH5 using CM? Can you please give me a link where I can learn more about it?

Thanks
Jap
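Besides whatever CM's web UI offers, a running Oozie workflow can be killed with the Oozie CLI. A sketch with placeholder values (the Oozie server URL and the workflow job id must come from your own cluster):

```shell
# Kill a running Oozie workflow job.
# Server URL and job id below are placeholders, not real values.
oozie job -oozie http://oozie-host:11000/oozie \
    -kill 0000001-150316123456789-oozie-oozi-W

# List running workflows to find the job id in the first place
oozie jobs -oozie http://oozie-host:11000/oozie -filter status=RUNNING
```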

YARN ignores host-specific resource requests

2015-03-16 Thread Gaurav Gupta
Hi,

I am trying to allocate containers on a particular host, but I don't get back the resources. I am setting relaxedLocality to false and rack to null. Any pointers?

Thanks
Gaurav

HDFS Block Bad Response Error

2015-03-16 Thread Shipper, Jay [USA]
On a Hadoop 2.4.0 cluster, I have a job running that's encountering the following warnings in one of its map tasks (IPs changed, but otherwise, this is verbatim):

---
2015-03-16 06:59:37,994 WARN [ResponseProcessor for block BP-437460642-10.0.0.1-1391018641114:blk_1084609656_11045296] org.apac

How do open source operating systems rate in terms of support for big data and the cloud?

2015-03-16 Thread hsdcl...@163.com
Hi everyone, what do you think of open source operating systems in terms of their support for big data and the cloud?

hsdcl...@163.com

Re: Snappy Configuration in Hadoop2.5.2

2015-03-16 Thread donhoff_h
Hi, Azuryy

Thanks very much for your help!

-- Original --
From: "Azuryy Yu"
Send time: Monday, Mar 16, 2015 4:01 PM
To: "user@hadoop.apache.org"
Subject: Re: Snappy Configuration in Hadoop2.5.2

yes. please add in the yarn-site.xml: yarn.nodemana

Re: Prune out data to a specific reduce task

2015-03-16 Thread Azuryy Yu
Hi,

Can you set only one reduce task? Why did you want to set up two reduce tasks with only one doing the work?

On Mon, Mar 16, 2015 at 9:04 AM, Drake민영근 wrote:
> Hi,
>
> If you write a custom partitioner, just call it to confirm which
> partition the key maps to.
>
> You can get the number of reducers from
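The single-reducer suggestion above can be applied at submit time without code changes. A sketch with placeholder jar and class names, assuming the driver uses ToolRunner/GenericOptionsParser so that -D options are honored:

```shell
# Run the job with a single reduce task so every key lands in one reducer.
# Jar name, driver class, and paths are placeholders.
hadoop jar myjob.jar MyDriver -D mapreduce.job.reduces=1 input/ output/
```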

Re: Snappy Configuration in Hadoop2.5.2

2015-03-16 Thread Azuryy Yu
yes. please add in the yarn-site.xml:

yarn.nodemanager.admin-env
MALLOC_ARENA_MAX=$MALLOC_ARENA_MAX,LD_LIBRARY_PATH=$HADOOP_COMMON_HOME/lib/native:/opt/snappy/lib:$LD_LIBRARY_PATH

yarn.app.mapreduce.am.admin.user.env
LD_LIBRARY_PATH=$HADOOP_COMMON_HOME/lib/native:/opt/snapp
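After adding the environment settings above and restarting the NodeManagers, the native library loading can be verified from any cluster node:

```shell
# Report which native Hadoop libraries (zlib, snappy, lz4, ...) are
# found and loadable; snappy should show "true" with its library path
hadoop checknative -a
```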