I think it would be better to send your question to Cloudera's mailing list.
Thanks,
- Tsuyoshi
On Tue, Mar 17, 2015 at 11:44 AM, kumar jayapal wrote:
> Hello,
>
>
> May I know how to stop Oozie workflows in CDH5 using Cloudera Manager (CM)?
>
> Can you please give me a link where I can learn more about it?
>
> thanks
> Jap
Hi,
Please try the following:
yarn logs -applicationId application_1426267324367_0005
Thanks,
On Tue, Mar 17, 2015 at 8:23 AM, Ranadip Chatterjee
wrote:
> Is the job history server up and running on the right host and port?
> If so, please check the job history server logs. A common reason is for
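Expanding slightly on the `yarn logs` command above (a sketch: `-appOwner` is only needed when the application was submitted by a different user, and log aggregation must be enabled on the cluster for this to return anything):

```shell
yarn logs -applicationId application_1426267324367_0005

# If the application was submitted by another user:
yarn logs -applicationId application_1426267324367_0005 -appOwner <username>
```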
Hi Gaurav,
If you are using the Capacity Scheduler, then try setting
yarn.scheduler.capacity.node-locality-delay to roughly the size of the cluster.
By default it is set to -1, which makes the scheduler assign containers to
non-local nodes.
Regards,
Naga
From: Gaurav Gupta [g
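Naga's suggestion above can be sketched as a capacity-scheduler.xml entry (the value 40 is only an illustration; per the advice above it should roughly match the number of nodes in the cluster):

```xml
<!-- capacity-scheduler.xml -->
<property>
  <name>yarn.scheduler.capacity.node-locality-delay</name>
  <!-- Number of missed scheduling opportunities to tolerate while waiting
       for a node-local container; set to roughly the cluster's node count. -->
  <value>40</value>
</property>
```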
Is the job history server up and running on the right host and port? If so,
please check the job history server logs. A common reason is for the owner
of the job history server to not have read permission on the logs, or for
the map reduce process owners to not have write permission in the job history l
Hi,
Did anyone get this error before? Please help.
ERROR org.apache.hadoop.hdfs.server.datanode.DataNode:
x.com:50010:DataXceiver
error processing WRITE_BLOCK operation src: /:39000 dst:
/xx:50010
java.lang.NullPointerException
at
org.apache.hadoop.hdfs.server.datano
Hello,
May I know how to stop Oozie workflows in CDH5 using Cloudera Manager (CM)?
Can you please give me a link where I can learn more about it?
thanks
Jap
Hi,
I am trying to allocate containers on a particular host, but I don't get
the resources back.
I am setting relaxLocality to false and the rack to null.
Any pointers?
Thanks
Gaurav
On a Hadoop 2.4.0 cluster, I have a job running that's encountering the
following warnings in one of its map tasks (IPs changed, but otherwise, this is
verbatim):
---
2015-03-16 06:59:37,994 WARN [ResponseProcessor for block
BP-437460642-10.0.0.1-1391018641114:blk_1084609656_11045296]
org.apac
Hi everyone, what do you think of open source operating systems in terms of
their support for big data and cloud computing?
hsdcl...@163.com
Hi, Azuryy
Thanks very much for your help!
-- Original --
From: "Azuryy Yu";
Sent: Monday, Mar 16, 2015 4:01 PM
To: "user@hadoop.apache.org";
Subject: Re: Snappy Configuration in Hadoop2.5.2
Yes. Please add this to yarn-site.xml:
yarn.nodemana
Hi,
Can you set only one reduce task? Why did you want to set up two reduce tasks
when only one does the work?
On Mon, Mar 16, 2015 at 9:04 AM, Drake민영근 wrote:
> Hi,
>
> If you wrote a custom partitioner, just call it to confirm which
> partition each key maps to.
>
> You can get the number of reducers from
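Drake's point above can be checked offline: Hadoop's default HashPartitioner maps a key to `(hashCode & Integer.MAX_VALUE) % numReduceTasks`. A minimal sketch of that arithmetic (plain String keys for illustration; Hadoop's Text type has its own hashCode, so this shows the mechanism rather than exact partitions for Text keys):

```java
public class PartitionCheck {
    // Same arithmetic as Hadoop's default HashPartitioner:
    // (hash & Integer.MAX_VALUE) % numReduceTasks. The mask clears the
    // sign bit so negative hash codes still map to a valid partition.
    static int getPartition(Object key, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }

    public static void main(String[] args) {
        // With two reducers, see which partition each key lands in.
        for (String key : new String[] {"alpha", "beta", "gamma"}) {
            System.out.println(key + " -> partition " + getPartition(key, 2));
        }
    }
}
```

If both reducers should receive data but only one does, running the job's actual keys through this check quickly shows whether they all collide into the same partition.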
Yes. Please add this to yarn-site.xml:
<property>
  <name>yarn.nodemanager.admin-env</name>
  <value>MALLOC_ARENA_MAX=$MALLOC_ARENA_MAX,LD_LIBRARY_PATH=$HADOOP_COMMON_HOME/lib/native:/opt/snappy/lib:$LD_LIBRARY_PATH</value>
</property>
<property>
  <name>yarn.app.mapreduce.am.admin.user.env</name>
  <value>LD_LIBRARY_PATH=$HADOOP_COMMON_HOME/lib/native:/opt/snapp</value>
</property>