Hi all,
I checked the NN metrics and found that TransactionsSinceLastCheckpoint
is a negative number. Why would it be negative? After the last
checkpoint, if no transactions have happened the number should be zero, and if
transactions have happened it should be positive. Why?
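(As a side note for anyone digging into this, the raw value can be read from the
NameNode's JMX servlet; the host and port below are placeholders for your own
NameNode web address, and the metric should appear under the FSNamesystem bean:

curl 'http://namenode-host:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem'
)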
Hi all,
What does TotalLoad mean? I have 4 DNs, and the option
dfs.datanode.max.transfer.threads is 4096, but when I check this metric
the value is 4597, which is greater than
dfs.datanode.max.transfer.threads. Why?
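(A hedged reading, not an authoritative answer: dfs.datanode.max.transfer.threads
is a per-DataNode limit, while TotalLoad as reported by the NameNode appears to be
the transfer/xceiver count summed across all DataNodes. On that reading,
4597 / 4 DNs ≈ 1149 concurrent transfers per node, still well under 4096.)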
When I use Hadoop security, I must use jsvc to start the datanode. Why must
jsvc be used to start the datanode? What are the advantages of doing that?
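For context, a minimal sketch of the secure-DataNode settings that usually go
with jsvc (the user name and jsvc path below are placeholders, not taken from
any particular setup):

# in hadoop-env.sh: start the DataNode as root via jsvc so it can bind
# privileged ports, then drop to this unprivileged user
export HADOOP_SECURE_DN_USER=hdfs
export JSVC_HOME=/usr/lib/bigtop-utils    # wherever jsvc is installed

In hdfs-site.xml, dfs.datanode.address and dfs.datanode.http.address are then
pointed at ports below 1024 (e.g. 0.0.0.0:1004 and 0.0.0.0:1006), which is why a
privileged launcher such as jsvc is needed in the first place.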
Thanks,
LiuLei
Hi,
Can someone say anything regarding the following problem?
Thanks in advance.
Regards..
Salman.
Salman Toor, PhD
salman.t...@it.uu.se
On Nov 15, 2013, at 6:54 PM, Salman Toor wrote:
Hi,
I am trying to run my C++ code using hadoop-1.2.1. I am using pipes. The code
requires to
Hi,
Please recommend a good Maven repo for compiling the Hadoop source code.
It complains that it cannot find jdbm:bundle:2.0.0:m15 while compiling trunk.
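In case it helps, a hedged sketch of forcing Maven to re-resolve the dependency
from the default repositories (the checkout directory name is hypothetical; the
flags are standard Maven options):

cd hadoop-trunk                       # hypothetical checkout directory
mvn clean install -U -DskipTests     # -U re-checks remote repos instead of trusting a stale local cache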
thanks.
Hi All
Having some problems with MapReduce running in pseudo-distributed mode. I
am running version 1.2.1 on Linux. I have (a shell sketch of these steps follows the list):
1. created $JAVA_HOME and $HADOOP_HOME and added the respective bin directories
to the path;
2. Formatted the dfs;
3. executed start-dfs.sh and start-mapred.sh.
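A rough shell sketch of those steps on 1.2.1 (the paths are placeholders; adjust
them to your own install locations):

# placeholders -- point these at your own JDK and Hadoop install
export JAVA_HOME=/usr/lib/jvm/java-6-sun
export HADOOP_HOME=/opt/hadoop-1.2.1
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin
# format HDFS, then start the HDFS and MapReduce daemons
hadoop namenode -format
start-dfs.sh
start-mapred.sh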
Executing jps
Hi,
I just started writing applications on top of YARN, and I have a few
questions I would love to get some opinions on.
What needs to be done: I have a bunch of files, and some intensive
computations need to be done on each of them separately and individually.
The computation intensity is decided
Thanks Roman.
Our app won't be able to impose any particular vendor's distribution upon our
users. We grew dependent on base apache distribution packages w/ the 64bit
builds... so is https://issues.apache.org/jira/browse/HADOOP-9911 going to be
rejected? In other words, are there no plans to
Which platform did you perform the build on?
I was able to build trunk on Mac.
I found the following dependency in dependency tree output:
[INFO] +-
org.apache.directory.server:apacheds-jdbm-partition:jar:2.0.0-M15:compile
[INFO] | \-
It's possible that you're seeing a different problem now. I know of at
least one bug that can cause the JobTracker to hang for extensive periods
of time. It's caused by holding a lock while writing user history to HDFS.
MAPREDUCE-5606 tracks this bug, and it actually might be a duplicate of
On Mon, Nov 18, 2013 at 9:06 AM, Pastrana, Rodrigo (RIS-BCT)
rodrigo.pastr...@lexisnexis.com wrote:
Thanks Roman.
Our app won't be able to impose any particular vendor's distribution upon our
users.
We grew dependent on base apache distribution packages w/ the 64bit builds...
so is
Hi All,
Please let me know how to analyze Facebook data using Hadoop. I would like
to download some regional users' data and analyze it.
Thanks,
Sanjeevv
Kishore,
Also, please specify if you are using managed or unmanaged AMs (the numbers
I've mentioned before are using unmanaged AMs).
thx
On Sun, Nov 17, 2013 at 11:16 AM, Vinod Kumar Vavilapalli
vino...@hortonworks.com wrote:
It is just creating a connection to RM and shouldn't take that
Ted,
I am on Linux.
On 2013-11-19 1:30 AM, Ted Yu yuzhih...@gmail.com wrote:
Which platform did you perform the build on?
I was able to build trunk on Mac.
I found the following dependency in dependency tree output:
[INFO] +-
We can definitely build them, but I'm not sure if our legal dep would allow us
to distribute 3rd party libs (even though we're open source as well)...
If bigtop packages support the latest distros, and provide 2.x GA then the
above discussion goes away =)
I'll take a look at bigtop again,
Compilation on Linux passed for me:
[hortonzy@kiyo hadoop]$ uname -a
Linux core.net 2.6.32-220.23.1.el6.20120713.x86_64 #1 SMP Fri Jul 13
11:40:51 CDT 2012 x86_64 x86_64 x86_64 GNU/Linux
[hortonzy@kiyo hadoop]$ mvn -version
Apache Maven 3.0.3 (r1075438; 2011-02-28 17:31:09+)
Cheers
On Mon,
Hi,
How can I find the NIC capacity of all the name nodes in a cluster?
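One hedged way to check this from the OS on each NameNode host (the interface
name eth0 is a placeholder) is:

ethtool eth0 | grep -i speed
cat /sys/class/net/eth0/speed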
Thanks
-Gaurav
I need help. No matter what I do I can't seem to get Hadoop to find my custom
partitioner.
Here is the command I am running:
../bin/hadoop jar ../contrib/streaming/hadoop-streaming-1.2.1.jar \
-libjars ./NumericPartitioner.jar \
-input /input -output /output/keys -mapper map_threeJoin.py
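For comparison, a hedged sketch of how a custom partitioner is usually wired
into a streaming job; the class name below is a guess at your fully-qualified
partitioner class, and in 1.x the jar often also has to be visible to the
submitting JVM via HADOOP_CLASSPATH:

export HADOOP_CLASSPATH=./NumericPartitioner.jar   # so the client can load the class at submit time
../bin/hadoop jar ../contrib/streaming/hadoop-streaming-1.2.1.jar \
-libjars ./NumericPartitioner.jar \
-partitioner NumericPartitioner \
-input /input -output /output/keys -mapper map_threeJoin.py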
Hi
It does not work.
I cannot find the yarn.scheduler.capacity.resource-calculator property
in
hadoop-2.2.0/share/doc/hadoop/hadoop-yarn/hadoop-yarn-common/yarn-default.xml.
Is it the right property?
Could anyone give me any suggestion about the exception?
2013/11/15 Rob Blah
<property>
  <name>yarn.scheduler.capacity.resource-calculator</name>
  <value>org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator</value>
  <description>
    The ResourceCalculator implementation to be used to compare
    Resources in the scheduler.
    The default i.e.
Hi Omkar,
Thanks, it works. Earlier I had just copied all the configuration files from
2.0.5-alpha into my 2.2.0 conf and only changed yarn-site.xml.
Now, after setting the property in capacity-scheduler.xml, it goes well.
regards
2013/11/19 Omkar Joshi ojo...@hortonworks.com
Thanks Ted.
I missed the new pom.xml; fixed it now.
On 2013-11-19 3:09 AM, Ted Yu yuzhih...@gmail.com wrote:
Compilation on Linux passed for me:
[hortonzy@kiyo hadoop]$ uname -a
Linux core.net 2.6.32-220.23.1.el6.20120713.x86_64 #1 SMP Fri Jul 13
11:40:51 CDT 2012 x86_64 x86_64 x86_64
It seems like no one has encountered this problem. OK, I will keep trying to solve
this issue.
Regards..
Salman.
On Nov 18, 2013, at 10:47 AM, Salman Toor wrote:
Hi,
Can someone say anything regarding the following problem?
Thanks in advance.
Regards..
Salman.
Salman Toor, PhD
Hi,
I was reading an online blog about how RPC requests from HDFS
clients get processed: a listener object on the NameNode accepts
connections, after which the RPC reader threads read those requests and add them to
a call queue. Then the worker threads kick in and do the following:
first