will
be appreciated.
--
Bing Jiang
2014-12-01 1:25 GMT+08:00 Srinivas Chamarthi srinivas.chamar...@gmail.com:
Hi,
I have built hadoop from trunk and am executing an MR program. But for some
reason, I am not seeing any user logs on my Windows box. Can someone tell
me what I should do to enable logging for the jobs?
I am actually
hi, Ashish
I have seen a similar issue before, and reported it:
https://issues.apache.org/jira/browse/MAPREDUCE-5782
There is a workaround in that jira.
-Bing
2014-11-30 4:07 GMT+08:00 Ashish Kumar9 ashis...@in.ibm.com:
Hi ,
I am facing an issue when I run the teragen / terasort benchmark
for any help.
--
Bing Jiang
weibo: http://weibo.com/jiangbinglover
BLOG: www.binospace.com
BLOG: http://blog.sina.com.cn/jiangbinglover
Focus on distributed computing, HDFS/HBase
syslog in the logs.
- How to find the total amount of shuffling traffic?
Thanks & Regards,
Abdul Navaz
Research Assistant
University of Houston Main Campus, Houston TX
Ph: 281-685-0388
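The shuffle-traffic question above can usually be answered from the per-job counters. A sketch (assuming the standard TaskCounter group; verify the group and counter names against your Hadoop version):

```shell
# Total bytes fetched by all reducers during the shuffle phase of one job
mapred job -counter <job-id> org.apache.hadoop.mapreduce.TaskCounter REDUCE_SHUFFLE_BYTES
```

Summing this counter over the jobs of interest gives an approximation of the cluster's shuffle volume.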
By the way, mapreduce.framework.name can be set to yarn or yarn-tez. It
makes a difference.
2014-09-02 8:24 GMT+08:00 jay vyas jayunit100.apa...@gmail.com:
Yes. As an example of running a MapReduce job followed by a Tez job, you can
see our last post on this
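For reference, the framework setting mentioned above lives in mapred-site.xml. A minimal sketch (yarn-tez additionally requires Tez to be installed and configured on the cluster):

```xml
<property>
  <name>mapreduce.framework.name</name>
  <!-- "yarn" runs MR jobs natively; "yarn-tez" routes them through Tez -->
  <value>yarn</value>
</property>
```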
I remember there is a maximum container memory config. It is a
ResourceManager-side config.
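The ResourceManager-side cap presumably being referred to is yarn.scheduler.maximum-allocation-mb in yarn-site.xml. A sketch (the value shown is only an example):

```xml
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <!-- example: no single container request may exceed 8 GB -->
  <value>8192</value>
</property>
```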
2014-05-23 13:58 GMT+08:00 ch huang justlo...@gmail.com:
hi, mailing list:
I want to know whether this option still causes a limitation in YARN.
(MultiInputFormat.java:88)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.initNextRecordReader(PigRecordReader.java:256)
I have copied the native lib folder accordingly, but am still facing the issue.
Are there any changes to make in the hadoop conf or lib folder?
Please help.
$LocalizerRunner.run(ResourceLocalizationService.java:861)
.Failing this attempt.. Failing the application.
/hadoop/logs/userlogs/application_1388722003823_0001/container_1388722003823_0001_01_62/stderr
I tried some configurations, but it still failed. Can anyone do me a favor? Thanks!
nodes from the cluster.
Thanks,
Manickam P
hi, Jerry.
I think you are worrying about the volume of the mapreduce local files, but
could you give us more details about your app?
On Aug 20, 2013 6:09 AM, Jerry Lam chiling...@gmail.com wrote:
Hi Hadoop users and developers,
I have a use case where I need to produce a large sequence file of 1 TB
and then access each row in the sequence file. In
practice, this is simply a MapFile.
Any idea on how to resolve this dilemma would be greatly appreciated.
Jerry
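For what it's worth, a MapFile keeps a sorted SequenceFile plus an index file, so single-row lookups avoid a full scan. A minimal sketch using the classic MapFile API (assumes hadoop-common on the classpath; the path and the Text key/value types here are made up for illustration):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.MapFile;
import org.apache.hadoop.io.Text;

public class MapFileSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path dir = new Path("/tmp/example.map"); // hypothetical location

        // Write: keys MUST be appended in sorted order
        MapFile.Writer writer = new MapFile.Writer(conf, fs, dir.toString(),
                Text.class, Text.class);
        writer.append(new Text("row-0001"), new Text("value-1"));
        writer.append(new Text("row-0002"), new Text("value-2"));
        writer.close();

        // Read: binary search in the index, then seek into the data file
        MapFile.Reader reader = new MapFile.Reader(fs, dir.toString(), conf);
        Text value = new Text();
        reader.get(new Text("row-0002"), value);
        reader.close();
    }
}
```

The trade-off is that writes must arrive in key order, which usually means sorting (e.g. with an MR job) before building the MapFile.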
opened, and after I restarted the RS gracefully, the problem
was resolved. OK, my question is:
under which conditions will the RS ignore the file handle?
Any ideas would be appreciated.
Thanks!
)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1701)
format one namenode, and launch the other namenode using: bin/hdfs namenode
-bootstrapStandby
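In other words, a sketch of the usual HA sequence (nameservice and host details omitted; run each command on the node indicated):

```shell
# On the first namenode only: format once
bin/hdfs namenode -format

# Start the first namenode, then on the SECOND namenode:
# copy over the formatted metadata instead of formatting again
bin/hdfs namenode -bootstrapStandby
```

Formatting both namenodes independently would give them different metadata, which is why only one format plus a bootstrap is used.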
2013/7/26 ch huang justlo...@gmail.com
hi, all:
I use Quorum-based storage for namenode HA. My question is: do I need to format
both namenodes, or just one?
/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java#L147
Hope this helps,
Chris Nauroth
Hortonworks
http://hortonworks.com/
On Wed, Jul 3, 2013 at 6:21 AM, Bing Jiang jiangbinglo...@gmail.comwrote:
hi,all.
Using hadoop-2.0.5-alpha.
and I meet a problem that I should
(BPServiceActor.java:664)
at java.lang.Thread.run(Thread.java:662)
Does this prove that one datanode is required to attach to only one
namespace?
Any views on this would be appreciated.
Regards~
.com:8485/cluster2/value
/property
On Thu, Jul 4, 2013 at 2:46 PM, Bing Jiang jiangbinglo...@gmail.comwrote:
hi, Chris.
I have traced the source code, and I found that this issue comes from
sbin/start-dfs.sh,
SHARED_EDITS_DIR=$($HADOOP_PREFIX/bin/hdfs getconf -confKey
and
namenodes should use the same clusterID.
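For context, the key being read by that getconf call is presumably dfs.namenode.shared.edits.dir, which with QJM looks like this sketch in hdfs-site.xml (the journalnode hosts and the nameservice name are placeholders):

```xml
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <!-- all namenodes of one nameservice must point at the same journal -->
  <value>qjournal://jn1:8485;jn2:8485;jn3:8485/mycluster</value>
</property>
```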
On Thu, Jul 4, 2013 at 3:12 PM, Bing Jiang jiangbinglo...@gmail.comwrote:
Hi, all
We try to use hadoop-2.0.5-alpha, using two namespaces: one is for the hbase
cluster, and the other is for common use. At the same time, we use
the Quorum Journal policy for HA
~
--
Bing Jiang
Tel:(86)134-2619-1361
weibo: http://weibo.com/jiangbinglover
BLOG: http://blog.sina.com.cn/jiangbinglover
National Research Center for Intelligent Computing Systems
Institute of Computing Technology
Graduate University of Chinese Academy of Sciences
specify this kind of list using the application master
)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:636)
i.e. containers are only launching on the host where the application
master runs, while the other nodes always remain free.
Any ideas?
+HostName, but I don't know the format.
Thanks.
/container_1325062142731_0006_01_01.pid
2011-12-29 15:49:16,307 DEBUG org.apache.hadoop.yarn.event.AsyncDispatcher:
Dispatching the event
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.event.ContainerLocalizationCleanupEvent.EventType:
CLEANUP_CONTAINER_RESOURCES
I know Hadoop YARN can support MapReduce jobs well, but I have not found a DAG
model task. Can you give me some demonstration that I missed, and point
out how to build my own programming models on Hadoop YARN?
thanks
mahadev