Thanks for the extra information!
For reference, the code that I was running was basically the code found
here: https://github.com/hortonworks/simple-yarn-app. The issue that I
was having occurred when running from within IntelliJ (i.e. without the
classpath entries that 'hadoop classpath' adds being resolved). I've
filed a JIRA for this:
https://issues.apache.org/jira/browse/YARN-1998
On Tue, Apr 29, 2014 at 6:46 PM, Azuryy Yu wrote:
> Hi,
> how to change the time zone of startTime and finishTime on the yarn web ui?
> I cannot find the code, I just found render() returns a long type fiel
Hello,
I am having an issue with partitioning data between mappers and reducers when
the key is numeric. When I switch it to a one-character string it works fine,
but I have more than 26 keys, so I am looking for an alternative way.
My data looks like:
10 \t comment10 \t data
20 \t comment20 \t data
30 \t com
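For context when debugging which reducer a key lands in: Hadoop's default HashPartitioner assigns a key to a reducer via `(key.hashCode() & Integer.MAX_VALUE) % numReduceTasks`. A minimal pure-Java sketch of that logic (using String.hashCode as a stand-in; note that Hadoop's Text and IntWritable compute their own hash codes, so actual assignments in a job may differ):

```java
public class PartitionDemo {
    // Mimics the arithmetic of Hadoop's HashPartitioner.getPartition:
    // (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks
    static int partition(String key, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }

    public static void main(String[] args) {
        int reducers = 4;
        for (String key : new String[]{"10", "20", "30", "40"}) {
            System.out.println(key + " -> reducer " + partition(key, reducers));
        }
    }
}
```

If the default distribution is skewed for your keys, the usual remedy is a custom Partitioner registered via Job.setPartitionerClass.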
Hi,
I am running an MR job with AvroMultipleOutputs on Hadoop 2.3.0, and I am
facing the following issue. What could be the problem?
1) The job gets stuck at reduce 100%, and fails with a Lease Exception.
2) Observed that every time, out of 100 reducers, only 3 of them are failing.
3) I verified no other process is
Hi,
Just change the fs.defaultFS property in core-site.xml to connect to the
logical name:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://MYCLUSTER:8020</value>
  <final>true</final>
</property>

The HDFS client will then know which NameNode it has to connect to.
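For completeness, the client-side HA settings that make the logical name resolvable usually look something like this in hdfs-site.xml (hostnames and ports here are illustrative; the nameservice must match the one in fs.defaultFS):

```xml
<property>
  <name>dfs.nameservices</name>
  <value>MYCLUSTER</value>
</property>
<property>
  <name>dfs.ha.namenodes.MYCLUSTER</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.MYCLUSTER.nn1</name>
  <value>namenode1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.MYCLUSTER.nn2</name>
  <value>namenode2.example.com:8020</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.MYCLUSTER</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```

With these in place the client tries each NameNode and sticks with whichever is active, so a failover needs no client-side change.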
Hope it helps,
Aitor
On 29/04/14 16:07, sam liu wrote:
Hi Bry
This is great info for me. Thanks Oleg! I will take a look. Hope it can
also fit in our production environment.
Best Regards,
Bo
On Tue, Apr 29, 2014 at 3:38 AM, Oleg Zhurakousky <
oleg.zhurakou...@gmail.com> wrote:
> Yes there is. You can provide your own implementation of
> org.apache.hadoop.
Hi Bryan,
Thanks for your detailed response!
- 'you use a logical name for your "group of namenodes"': in your case, it
should be 'MYCLUSTER'
- 'provide a means for the client to handle connecting to the currently
active one': *Could you please give an example?*
2014-04-29 21:57 GMT+08:00 Bryan
If you are using the QJM HA solution, the IP addresses of the namenodes
should not change. Instead your clients should be connecting using the
proper HA configurations. That is, you use a logical name for your "group
of namenodes", and provide a means for the client to handle connecting to
the cu
Hi Experts,
For example, at the beginning the application accesses the NameNode using
the IP of the active NameNode (IP: 9.123.22.1). However, after a failover,
the IP of the active NameNode changes to 9.123.22.2, which is the IP of the
previous standby NameNode. In this case, must the application update the
NameNode IP?
Hi Eric,
IMHO you do have a solution by increasing the xcievers count in
hdfs-site.xml, but this might give you a performance hit:

<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
</property>

To get a better understanding of how xcievers work, go through this link:
http://blog.cloudera.com/blog/2012/03/hbase-hadoop-xceivers/
Hi Experts,
I am decommissioning one of my nodes from the cluster. All the blocks get
replicated properly to the other nodes to maintain the replication factor
except one. I get the following exception for the block:
*Source Datanode (One being decommissioned):*
2014-04-29 07:08:31,619 WARN
org.
Hi,
How can I change the time zone of startTime and finishTime on the YARN web
UI? I cannot find the code; I just found that render() returns a long type
field, but it shows the GMT time zone format.
How can I change it to the local time zone? Thanks.
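Those long values are epoch milliseconds, which the UI then renders in GMT. If you just need to read one in your local zone, a minimal Java (8+) sketch; the zone and the sample timestamp below are illustrative, not taken from the UI:

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

public class LocalTimeDemo {
    // Format epoch milliseconds (as returned by render()) in a chosen zone.
    static String toLocal(long epochMillis, String zone) {
        return DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss")
                .withZone(ZoneId.of(zone))
                .format(Instant.ofEpochMilli(epochMillis));
    }

    public static void main(String[] args) {
        long startTime = 1398765432000L; // illustrative startTime value
        System.out.println(toLocal(startTime, "Asia/Shanghai"));
    }
}
```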
Yes, there is. You can provide your own implementation of
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor and
configure it as the 'yarn.nodemanager.container-executor.class' property.
There you can bypass Shell and create your own way of invoking processes.
Obviously it only makes se
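In yarn-site.xml that property would look roughly like this (the class name here is a hypothetical example, not a real class):

```xml
<property>
  <name>yarn.nodemanager.container-executor.class</name>
  <value>com.example.MyContainerExecutor</value>
</property>
```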
Another question:
Can I set an expiration time for /tmp, or configure YARN/MapReduce to remove
expired tmp files periodically?
Thanks,
Jack
2014-04-29 16:56 GMT+08:00 Meng QingPing :
> Thanks for all replies.
>
> The files in /tmp most are generated by hadoop jobs. Can set the
> yarn/mapreduce to specify o
Thanks for all the replies.
Most of the files in /tmp are generated by Hadoop jobs. Can I configure
YARN/MapReduce to specify a replication factor of one for tmp files?
Thanks,
Jack
2014-04-29 16:40 GMT+08:00 sudhakara st :
> Hello Nitin,
>
> HDFS replication factor is always associated with file level. When your
> c
Hello Nitin,
The HDFS replication factor is always associated with the file level. When
you copy or create a file in any directory, it is set to the default number
of replicas, but you can specify your own replication factor when creating
or copying files:
hadoop fs -D dfs.replication=2 -put foo.txt fsput
and in java
File
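The Java snippet above is cut off; a minimal sketch of what it presumably showed, using the FileSystem API (assumes a reachable cluster; the path is illustrative):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SetReplicationDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.setInt("dfs.replication", 2); // default for files created via this conf
        FileSystem fs = FileSystem.get(conf);
        // Or change an existing file's replication factor directly:
        fs.setReplication(new Path("/user/jack/foo.txt"), (short) 2);
        fs.close();
    }
}
```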
Hi Amir,
I'm also working on how to disable the checksum; it's a known issue in
Apache Hadoop. You can find more details in the following Apache JIRAs:
https://issues.apache.org/jira/browse/HADOOP-9114
https://issues.apache.org/jira/browse/HDFS-5761
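As a client-side workaround (not a fix for the JIRAs above), the shell can skip CRC verification when reading a file out of HDFS; paths here are illustrative:

```shell
# -ignoreCrc skips checksum verification on the copy
hadoop fs -get -ignoreCrc /user/amir/data.bin /tmp/data.bin
```

The Java API exposes the same switch via FileSystem.setVerifyChecksum(false) before opening the file.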
On Mon, Apr 28, 2014 at 8:22 PM, Amir Hellers