You don't need to, if the wiki page is correct.
Best Regards,
Raymond Liu
From: ch huang [mailto:justlo...@gmail.com]
Sent: Tuesday, October 29, 2013 12:01 PM
To: user@hadoop.apache.org
Subject: if i configed NN HA,should i still need start backup node?
ATT
Hi
I am playing with YARN 2.2, trying to port some code from the pre-beta API
to the stable API. However, both the wiki doc and the API doc for 2.2.0 seem to still
stick with the old API. Though I could find some help from
Hi
I have set up a Hadoop 2.2.0 HA cluster following:
http://hadoop.apache.org/docs/r2.2.0/hadoop-yarn/hadoop-yarn-site/HDFSHighAvailabilityWithQJM.html#Configuration_details
And I can check both the active and standby namenodes through the web interface.
However, it seems that the logical name could
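For reference, a minimal sketch of the logical-nameservice wiring from that guide. This is an illustrative fragment, not the full HA configuration; names like "mycluster", "nn1", and "nn2" are placeholders you'd substitute with your own:

```xml
<!-- hdfs-site.xml: define the logical nameservice and its two namenodes -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<!-- clients need this to resolve the logical name to whichever NN is active -->
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>

<!-- core-site.xml: clients address the logical name, not a host:port -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://mycluster</value>
</property>
```

If the failover proxy provider for the nameservice is missing, clients typically cannot resolve the logical name even though both namenode web UIs look healthy.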
Encountered a similar issue with the NN HA URL.
Have you made it work?
Best Regards,
Raymond Liu
-Original Message-
From: Siddharth Tiwari [mailto:siddharth.tiw...@live.com]
Sent: Friday, October 18, 2013 5:17 PM
To: user@hadoop.apache.org
Subject: Using Hbase with NN HA
Hi team,
Can Hbase be
Hmm, my bad. The NameserviceID was not in sync in one of the properties.
After fixing it, it works.
Best Regards,
Raymond Liu
-Original Message-
From: Liu, Raymond [mailto:raymond@intel.com]
Sent: Thursday, October 24, 2013 3:03 PM
To: user@hadoop.apache.org
Subject: How to use Hadoop2 HA's
/MAPREDUCE-3193.
You can give the job an input dir which doesn't have nested dirs, or you can
make use of the old FileInputFormat API to read files recursively in the
subdirs.
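If your release includes the MAPREDUCE-3193 fix, there is also a configuration switch that makes the new-API FileInputFormat recurse into subdirectories. A sketch of the property as it appears in Hadoop 2.x (check the exact name against your version's docs):

```xml
<property>
  <name>mapreduce.input.fileinputformat.input.dir.recursive</name>
  <value>true</value>
</property>
```

The same flag can be set programmatically on the job's Configuration instead of in mapred-site.xml.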
Thanks
Devaraj k
-Original Message-
From: Liu, Raymond [mailto:raymond@intel.com]
Sent: 12 July 2013 12
Hi
I just started to try out Hadoop 2.0; I am using the 2.0.5-alpha package
and followed
http://hadoop.apache.org/docs/r2.0.5-alpha/hadoop-project-dist/hadoop-common/ClusterSetup.html
to set up a cluster in non-secure mode. HDFS works fine with the client tools.
But when I run the wordcount example, there
, the BlockSender.sendChunks will read and
send data in 64KB units?
Is that true? And if so, wouldn't it explain why reads through the datanode are
faster, since it reads data in a bigger block size?
Best Regards,
Raymond Liu
-Original Message-
From: Liu, Raymond [mailto:raymond@intel.com
in 64KB units?
Is that true? And if so, wouldn't it explain why reads through the datanode
are faster, since it reads data in a bigger block size?
Best Regards,
Raymond Liu
-Original Message-
From: Liu, Raymond [mailto:raymond@intel.com]
Sent: Saturday, February 16
Hi
I tried to use short-circuit read to improve my HBase cluster MR scan
performance.
I have the following settings in hdfs-site.xml:
dfs.client.read.shortcircuit set to true
dfs.block.local-path-access.user set to the MR job runner.
The cluster is 1+4 nodes
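For concreteness, the two properties above as an hdfs-site.xml fragment ("mruser" is a placeholder for whatever user actually runs the MR jobs):

```xml
<property>
  <name>dfs.client.read.shortcircuit</name>
  <value>true</value>
</property>
<property>
  <!-- the user(s) allowed to open block files directly on the datanode -->
  <name>dfs.block.local-path-access.user</name>
  <value>mruser</value>
</property>
```

With the legacy short-circuit mechanism these need to be set on the datanodes as well as on the client side, since the datanode must authorize the listed user.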
,
did you enable the security feature in your cluster? There'll be no obvious
benefit to be found if so.
Regards,
Liang
___
From: Liu, Raymond [raymond@intel.com]
Sent: 2013-02-16 11:10
To: user@hadoop.apache.org
Subject: why my test result on dfs short
will
be attempted but will begin to fail.
On Sat, Feb 16, 2013 at 8:40 AM, Liu, Raymond raymond@intel.com
wrote:
Hi
I tried to use short-circuit read to improve my HBase cluster MR
scan performance.
I have the following setting in hdfs-site.xml
for file
This would confirm that short-circuit read is happening.
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Feb 15, 2013, at 9:53 PM, Liu, Raymond raymond@intel.com wrote:
Hi Harsh
Yes, I did set both of these, though not in hbase-site.xml but in hdfs-site.xml.
And I have
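One possibility worth checking here (an assumption on my part, not confirmed in the thread): the HBase region server builds its DFS client from its own configuration, so the client-side property may also need to be visible to HBase, e.g. duplicated in hbase-site.xml or on HBase's classpath:

```xml
<!-- hbase-site.xml: make HBase's embedded DFS client request short-circuit reads -->
<property>
  <name>dfs.client.read.shortcircuit</name>
  <value>true</value>
</property>
```

Setting it only in hdfs-site.xml helps processes that load that file, but not necessarily daemons that read a different configuration.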
that reads through the datanode will be
faster? Since it reads data in a bigger block size.
Best Regards,
Raymond Liu
-Original Message-
From: Liu, Raymond [mailto:raymond@intel.com]
Sent: Saturday, February 16, 2013 2:23 PM
To: user@hadoop.apache.org
Subject: RE: why my test result on dfs
try a pattern that matches these
and you should have it.
The X kind of files are what MR produces on HDFS as regular outputs -
these aren't intermediate.
On Fri, Aug 10, 2012 at 8:52 AM, Liu, Raymond raymond@intel.com
wrote:
Hi
I am trying to access the intermediate file
Alright, finally managed to get the intermediate files.
The pattern should be .*_m_.* instead of .*_m_*... stupid me.
If you try to get everything, use .* for the pattern. ;)
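The difference between the two patterns is easy to demonstrate: in `.*_m_*` the final `*` binds only to the last underscore, so the pattern matches names that end in "_m" plus optional underscores, while `.*_m_.*` matches "_m_" anywhere. A quick check in Python (the file name below is an illustrative attempt-style name, not taken from the thread):

```python
import re

map_file = "attempt_201307120001_0001_m_000000_0"

good = re.compile(r".*_m_.*")  # "_m_" anywhere in the name
bad = re.compile(r".*_m_*")    # must END with "_m" plus zero or more "_"

assert good.fullmatch(map_file) is not None
assert bad.fullmatch(map_file) is None        # trailing "000000_0" breaks it
assert bad.fullmatch("spill_m_") is not None  # the kind of name it would match
print("patterns behave as described")
```

The same quantifier-binding rule applies to Java's regex engine, so the behavior carries over to Hadoop's path filters.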
Best Regards,
Raymond Liu
-Original Message-
From: Liu, Raymond [mailto:raymond@intel.com]
Sent