Hi,
I am using Hadoop and HBase in pseudo-distributed mode, with Hadoop version 1.1.2 and HBase version 0.94.7.
I am receiving the following error messages in the datanode log.
hadoop-hadoop-datanode-woody.log:2013-10-24 10:55:37,579 WARN
org.apache.hadoop.hdfs.server.datanode.DataNode:
1. File and Ganglia are the only bundled sinks, though there are
socket/JSON (for Chukwa) and Graphite sink patches in the works.
2. Hadoop metrics (and metrics2) is mostly designed for system/process
metrics, which means you'll need to attach jconsole to your map/reduce task
processes to see
Hi
I have set up a Hadoop 2.2.0 HA cluster following:
http://hadoop.apache.org/docs/r2.2.0/hadoop-yarn/hadoop-yarn-site/HDFSHighAvailabilityWithQJM.html#Configuration_details
And I can check both the active and standby namenode with WEB interface.
However, it seems that the logical name could
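For reference, the logical name is defined in hdfs-site.xml roughly as below. This is a minimal sketch; `mycluster`, the host names, and the ports are placeholders, not values from this thread:

```xml
<!-- Sketch of the nameservice logical-name wiring; all names are placeholders. -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>machine1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>machine2.example.com:8020</value>
</property>
<!-- Clients then point fs.defaultFS in core-site.xml at hdfs://mycluster -->
```

Note that the nameservice ID must be spelled identically in every property that embeds it, on every node.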
Encounter Similar issue with NN HA URL
Have you made it work?
Best Regards,
Raymond Liu
-Original Message-
From: Siddharth Tiwari [mailto:siddharth.tiw...@live.com]
Sent: Friday, October 18, 2013 5:17 PM
To: user@hadoop.apache.org
Subject: Using Hbase with NN HA
Hi team,
Can Hbase be
Hi,
How about checking the value of mapreduce.map.java.opts? Are your JVMs
actually launched with the heap size you assume?
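For context, the map-task heap is set in mapred-site.xml; a minimal sketch (the 1024 MB figure is only an example, and on Hadoop 2.x the container limit mapreduce.map.memory.mb should stay consistent with it):

```xml
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1024m</value> <!-- JVM heap actually passed to each map task -->
</property>
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>1536</value> <!-- container limit; should comfortably exceed -Xmx -->
</property>
```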
On Thu, Oct 24, 2013 at 11:31 AM, Manu Zhang owenzhang1...@gmail.com wrote:
Just confirmed the problem still exists even though the mapred-site.xml files
on all nodes have the same configuration
Hmm, my bad. The NameserviceID was not in sync in one of the properties.
After the fix, it works.
Best Regards,
Raymond Liu
-Original Message-
From: Liu, Raymond [mailto:raymond@intel.com]
Sent: Thursday, October 24, 2013 3:03 PM
To: user@hadoop.apache.org
Subject: How to use Hadoop2 HA's
Thank you very much for replying and sorry for posting on the wrong list
Best,
--
Nan Zhu
School of Computer Science,
McGill University
On Thursday, October 24, 2013 at 1:06 AM, Jun Ping Du wrote:
Move to @user alias.
- Original Message -
From: Jun Ping Du j...@vmware.com
Hi,
I think it is not worth trying to install Hadoop on Windows, because Hadoop
is tightly integrated with Linux and there is little support for Windows
2013/10/23 chris bidwell chris.bidw...@oracle.com
Is there any documentation or instructions on installing Hadoop 2.2.0 on
Microsoft Windows?
Disclosure: I do not work for Hortonworks, I just use their product. Please
do not bash me.
Angelo,
that's not entirely correct now.
Hortonworks has done a tremendous amount of work to port Hadoop to Windows
as well.
Here is their press release:
In the release notes I could see that support for running Hadoop on
Microsoft Windows is included.
Can somebody tell me what those features are?
- Are these new features added to support easy Windows installation?
- Is there any inbuilt MSFT .Net support
It was my understanding that Hortonworks depended on Cygwin (UNIX emulation
on Windows) for most of the Bigtop family of tools - Hadoop core,
MapReduce, etc. - so you will probably make all your configuration files
in Windows, since XML is agnostic, and can develop in Windows, since JARs
and
No Cygwin is involved... We have the installation via MSI clearly documented
on our site; it requires Python, Visual C++, the JDK, and the .NET framework.
This work was done in conjunction with Microsoft, who is not in the business
of supporting Cygwin. Hadoop is Java, and Java runs on Windows.
On Thu, Oct 24, 2013 at
Sorry for my ignorance... I wasn't bashing you, Nitin, heh. Thank you very
much for your post, Adam.
I'm going to look at your work.
2013/10/24 DSuiter RDX dsui...@rdx.com
Very cool Adam! Thanks for the clarification, and great work for you guys
porting it over to native running.
*Devin Suiter*
I am using Hadoop 1.0.3 on Amazon EMR. I have a map/reduce job configured
like this:
private static final String TEMP_PATH_PREFIX =
System.getProperty("java.io.tmpdir") + "/dmp_processor_tmp";
...
private Job setupProcessorJobS3() throws IOException, DataGrinderException {
String inputFiles =
Viswanathan,
What version of Hadoop are you using? What is the change?
On Wednesday, October 23, 2013 2:20 PM, Viswanathan J
jayamviswanat...@gmail.com wrote:
Hi guys,
If I update (a very small change) the hadoop-core mapred class for one of the
OOME patches and compile the jar, and if I deploy
Oops.. forgot the code:
http://pastebin.com/7XnyVnkv
On Thu, Oct 24, 2013 at 10:54 AM, jamal sasha jamalsha...@gmail.com wrote:
Hi,
I am trying to join two datasets.. One of which is json..
I am relying on json-simple library to parse that json..
I am trying to use libjars.. So far .. for
Hi,
I am trying to join two datasets.. One of which is json..
I am relying on json-simple library to parse that json..
I am trying to use libjars.. So far.. for simple data processing.. the
approach has worked.. but now I am getting the following error:
Exception in thread "main"
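For what it's worth, the usual -libjars invocation looks like the sketch below. The jar name and paths are assumptions, and -libjars is only honored if the driver goes through ToolRunner/GenericOptionsParser:

```shell
# Assumed paths; adjust to your build. -libjars only works when the driver
# implements Tool and is launched via ToolRunner/GenericOptionsParser.
LIBJARS=/opt/lib/json-simple-1.1.1.jar
export HADOOP_CLASSPATH=$LIBJARS   # also makes the jar visible to the driver JVM
# The actual submission would then be (commented out here):
# hadoop jar myjob.jar my.pkg.JoinDriver -libjars $LIBJARS /in/a /in/b /out
echo "$HADOOP_CLASSPATH"
```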
Hi Rico!
What was the command line you used to build?
On Wednesday, October 23, 2013 11:44 PM, codepeak gcodep...@gmail.com wrote:
Hi all,
I have a problem when compiling Hadoop 2.2.0. Apache only offers a 32-bit
distribution, but I need 64-bit, so I have to compile it myself.
My
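For reference, the native 64-bit build is normally done from the source root with Maven; a sketch, assuming protobuf 2.5, CMake, and a C toolchain are installed per Hadoop's BUILDING.txt:

```shell
# Build command per Hadoop's BUILDING.txt; run inside hadoop-2.2.0-src.
BUILD_CMD="mvn package -Pdist,native -DskipTests -Dtar"
# $BUILD_CMD          # uncomment to build; the tarball lands in hadoop-dist/target/
echo "$BUILD_CMD"
```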
Hi folks. Is there a way to make YARN more forgiving with last
modification times? The following exception in
org.apache.hadoop.yarn.util.FSDownload:
"changed on src filesystem (expected " + resource.getTimestamp() +
", was " + sStat.getModificationTime());
I realize that time should be the same,
It seems like you may want to look into Amazon's EMR (elastic mapreduce),
which does much of what you are trying to do. It's worth taking a look at
since you're already storing your data in S3 and using EC2 for your
cluster(s).
On Thu, Oct 24, 2013 at 5:07 PM, Nan Zhu zhunans...@gmail.com
The scenario is: I run a MapReduce job on cluster A (all source data is in
cluster A) but I want the output of the job to go to cluster B. Is that possible?
If yes, please let me know how to do it.
Here are some notes of my mapreduce job:
1. the data source is an HBase table
2. It only has a mapper, no
BCC'ing user@hadoop.
This is a question for the ambari mailing list.
-- Hitesh
On Oct 24, 2013, at 3:36 PM, Jain, Prem wrote:
Folks,
Trying to install the newly released Hadoop 2.0 using Ambari. I am able to
install Ambari, but when I try to install Hadoop 2.0 on the rest of the cluster,
The servers have been very busy since the release. You probably just need
to try again.
On Oct 24, 2013 6:37 PM, Jain, Prem premanshu.j...@netapp.com wrote:
Folks,
Trying to install the newly released Hadoop 2.0 using Ambari. I am able to
install Ambari, but when I try to install
My mapreduce.map.java.opts is 1024MB
Thanks,
Manu
On Thu, Oct 24, 2013 at 3:11 PM, Tsuyoshi OZAWA ozawa.tsuyo...@gmail.com wrote:
Hi,
How about checking the value of mapreduce.map.java.opts? Are your JVMs
actually launched with the heap size you assume?
On Thu, Oct 24, 2013 at 11:31 AM, Manu Zhang
I think I have the fix for this. I'll check when I get home.
Clay McDonald
Sent from my iPhone
On Oct 24, 2013, at 7:36 PM, Hitesh Shah hit...@apache.org wrote:
BCC'ing user@hadoop.
This is a question for the ambari mailing list.
-- Hitesh
On Oct 24, 2013, at 3:36 PM, Jain, Prem
Just specify the output location using the URI to another cluster. As long as
the network is accessible, you should be fine.
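A minimal sketch of what that means (host and port are the ones mentioned in the thread, the directory is an example; with Hadoop on the classpath you would hand the resulting string to FileOutputFormat.setOutputPath as a Path):

```java
// Sketch: build a fully-qualified output URI pointing at the remote cluster.
public class RemoteOutputUri {
    // Joins the remote namenode URI and an output directory on that cluster.
    static String remoteOutputPath(String namenode, String dir) {
        return namenode + dir;
    }

    public static void main(String[] args) {
        // Host/port from the thread; directory name is an assumption.
        String outputPath =
            remoteOutputPath("hdfs://machine.domain:8080", "/tmp/myfolder/job-output");
        // With Hadoop client jars available you would then call:
        // FileOutputFormat.setOutputPath(job, new Path(outputPath));
        System.out.println(outputPath); // prints hdfs://machine.domain:8080/tmp/myfolder/job-output
    }
}
```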
Yong
Date: Thu, 24 Oct 2013 15:28:27 -0700
From: myx...@yahoo.com
Subject: Mapreduce outputs to a different cluster?
To: user@hadoop.apache.org
The scenario is: I run
Hi,
I've been running Terasort on Hadoop 2.1.0-beta. I have a 6-node cluster, 5 of
which run a Node Manager and all of which have a Data Node. I don't understand
why I get bad performance in most cases and why in some cases the performance is
good (10 GB Terasort with 2 reducers).
* When I run 10,
Thanks Shahab Yong. If cluster B (in which I want to dump output) has url
hdfs://machine.domain:8080 and data folder /tmp/myfolder, what should I
specify as the output path for MR job?
Thanks
On Thursday, October 24, 2013 5:31 PM, java8964 java8964 java8...@hotmail.com
wrote:
Just
Hi all.
I have a MapReduce program with two jobs. The second job's key and value come
from the first job's output, but I think the second map does not get the result
from the first job. In other words, I think my second job did not read the
output of my first job. What should I do?
here is the code:
public
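The usual shape of a working two-job chain is to run the jobs strictly in sequence and feed job 1's output directory to job 2 as input. A sketch follows; class names and paths are assumptions, it targets the Hadoop 2.x API (Job.getInstance), and it won't compile without the Hadoop client jars:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class TwoJobChain {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path intermediate = new Path("/tmp/job1-output"); // assumed path

        Job job1 = Job.getInstance(conf, "first job");
        FileInputFormat.addInputPath(job1, new Path(args[0]));
        FileOutputFormat.setOutputPath(job1, intermediate);
        // Block until job 1 finishes; otherwise job 2 can start before its input exists.
        if (!job1.waitForCompletion(true)) System.exit(1);

        Job job2 = Job.getInstance(conf, "second job");
        // Job 2's mapper input key/value types must match what job 1 wrote; one
        // common choice is SequenceFile output on job 1 and SequenceFile input
        // here, which preserves the key/value classes exactly.
        FileInputFormat.addInputPath(job2, intermediate);
        FileOutputFormat.setOutputPath(job2, new Path(args[1]));
        System.exit(job2.waitForCompletion(true) ? 0 : 1);
    }
}
```

If the second map sees nothing, the usual culprits are submitting both jobs without waiting on the first, or an input format on job 2 that does not match job 1's output format.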