Hi,
I am writing a simple MapReduce job in Java. How can I emit multiple key-value
pairs as output from a single mapper? If there is any preferred link for that,
please let me know. I tried using a HashMap for it, but ran into some errors.
Can an object of OutputCollector emit multiple key-value pairs?
Just call context.write() multiple times.
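For example, a minimal sketch (the tokenizing of the input line is just for
illustration; any condition can decide how many pairs a single map() call
writes):

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Emits one key/value pair per token of the line: nothing stops a
    // single map() call from writing to the context as many times as needed.
    public class MultiEmitMapper
        extends Mapper<LongWritable, Text, Text, IntWritable> {

      private final Text word = new Text();
      private static final IntWritable ONE = new IntWritable(1);

      @Override
      protected void map(LongWritable key, Text value, Context context)
          throws IOException, InterruptedException {
        for (String token : value.toString().split("\\s+")) {
          if (token.isEmpty()) {
            continue;
          }
          word.set(token);
          context.write(word, ONE);  // called once per token -> multiple pairs per record
        }
      }
    }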
Regards,
Dhaval
From: "Shrivastava, Himnshu (GE Global Research, Non-GE)"
To: "user@hadoop.apache.org"
Sent: Wednesday, 18 June 2014 8:44 AM
Subject: Query map reduce
Hi,
I am writing a simple MapReduce job
Call context.write() to emit keys/values based on your conditions.
Raj K Singh
http://in.linkedin.com/in/rajkrrsingh
http://www.rajkrrsingh.blogspot.com
Mobile Tel: +91 (0)9899821370
On Wed, Jun 18, 2014 at 6:14 PM, Shrivastava, Himnshu (GE Global Resear
BTW, my Hadoop is 2.2.0
2014-06-18 23:19 GMT+08:00 Vincent,Wei :
>
> Hi Hadoopers
>
> I am using libhdfs in a C/C++ application, but I found some issues that may
> result in a JVM crash. My platform is Ubuntu 12.04 64-bit and the JDK is 1.7.0.
>
> I found that in some simple applications in my platform libhdf
Hi,
We have a large data set originally stored in MS SQL, and for intensive data
aggregation we’re currently using Vertica. The thing is, the data is very
large, and sometimes a very complex “select” or “insert” query may need as
long as 10 minutes to return the correct results. (t
Hi all,
I have a Hadoop cluster with 4 DataNode+NodeManager machines and 1
NameNode+ResourceManager machine. I'm launching an MR job (identity mapper and
identity reducer) with the relevant memory settings set to appropriate
values:
mapreduce.[map|reduce].memory.mb, JAVA_CHILD_OPTS, map sort buffer,
reduce b
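(For reference, a minimal sketch of how settings like these can be supplied
per job from the driver code; the values below are purely illustrative, not
the ones actually used on the cluster described above:)

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class IdentityJobDriver {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Container sizes for map and reduce tasks, in MB (illustrative values).
        conf.set("mapreduce.map.memory.mb", "2048");
        conf.set("mapreduce.reduce.memory.mb", "4096");
        // JVM heap for the task attempts, kept below the container size.
        conf.set("mapreduce.map.java.opts", "-Xmx1638m");
        conf.set("mapreduce.reduce.java.opts", "-Xmx3276m");
        // Map-side sort buffer, in MB.
        conf.set("mapreduce.task.io.sort.mb", "512");

        Job job = Job.getInstance(conf, "identity");
        // ... set mapper/reducer classes and input/output paths,
        // then submit with job.waitForCompletion(true)
      }
    }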
Hi,
We are trying to understand the Quorum Journal protocol (HDFS-3077).
We came across a scenario in which the active namenode was terminated and the
standby namenode took over as the new active namenode, but we could not
understand why the active namenode got terminated in the first place.
Scenario:
We have 3 node
I have an old Hadoop install that I'm looking to update to Hadoop 2. In the
old setup, I have a /conf/masters file that specifies the
secondary namenode.
Looking through the Hadoop 2 documentation, I can't find any mention of a
"masters" file, or how to set up a secondary namenode.
Any help in the
Does Hadoop MapReduce code compiled against 1.2 work with YARN?
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-core</artifactId>
      <version>1.2.1</version>
    </dependency>
Hi~ I want to subscribe to the list.
Try Impala or HAWQ (
http://www.gopivotal.com/sites/default/files/Hawq_WP_042313_FINAL.pdf); in
my opinion, the best choice for SQL-on-Hadoop.
On Wed, Jun 18, 2014 at 11:26 AM, Fengjiao Jiang
wrote:
> Hi,
>
> We have a large data set originally stored on MS SQL and for intensive
> data aggregatio
I think you've done it!
From: YangShine [mailto:shin...@outlook.com]
Sent: Wednesday, June 18, 2014 8:56 AM
To: user@hadoop.apache.org
Subject: Subscribe to List
Hi~ I want to subscribe to the list.
I installed YARN on a single node, and now when I try to run hadoop fs I get:
Error: Could not find or load main class FsShell
It appears to be a HADOOP_CLASSPATH issue and I am wondering how I can
build the classpath? Should I find all the jars in HADOOP_HOME?
I installed Hadoop and now when I try to run "hadoop fs" I get this error.
I am using 64-bit OpenJDK on a CentOS virtual machine. I am also listing my
environment variables and one specific message I get when running the
ResourceManager.
Environment variables:
export JAVA_HOME=/usr/lib/jvm/j
It depends on which group of APIs your application is using. Please refer
to this doc for details:
http://hadoop.apache.org/docs/r2.4.0/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduce_Compatibility_Hadoop1_Hadoop2.html
On Thu, Jun 19, 2014 at 2:24 AM, Mohit Anchlia
wrote:
> Does
Hi Mohit,
Did you set up HADOOP_INSTALL?
For me, I did the following:
export HADOOP_INSTALL=/usr/local/hadoop-2.4.0
Bo
On Wed, Jun 18, 2014 at 4:27 PM, Mohit Anchlia
wrote:
> I installed Yarn on a single node and now when I try to run hadoop fs I
> get :
>
> Error: Could not find or load main
No, I didn't set that up. Do you know which script it's used in? I have
HADOOP_HOME set up.
On Wed, Jun 18, 2014 at 8:01 PM, bo yang wrote:
> Hi Mohit,
>
> Did you set up HADOOP_INSTALL?
>
> For me, I did the following:
>
> export HADOOP_INSTALL=/usr/local/hadoop-2.4.0
>
>
> Bo
>
>
> On Wed, Jun 18, 2014
You can add the line to your bash profile (.bash_profile):
export HADOOP_INSTALL=/usr/local/hadoop-2.4.0
On Wed, Jun 18, 2014 at 8:18 PM, Mohit Anchlia
wrote:
> No, I didn't set that up. Do you know which script it's used in? I have
> HADOOP_HOME set up.
>
>
> On Wed, Jun 18, 2014 at 8:01 PM, bo y
Just wanted to be more clear.
Now, when the namenode on n1 tried to finalize the in-progress log segment
(upon instruction from the standby namenode on n2, after the edit log rollover
time had passed), the namenode process on n1 got terminated (*because it could
not get a quorum of responses*).
Thanks,
Giridhar.
On W
Hi,
I want to find the median of a set of numbers; there are repetitions as well.
How can I write a reduce function for this? I can get the count of each number.
Is a cumulative frequency required for it, or can it be found in some other,
simpler way?
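For context, the rough shape of the reducer I have in mind (a minimal sketch,
assuming every number is emitted under a single constant key so that one
reducer sees them all, and that the whole list fits in memory):

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    import org.apache.hadoop.io.DoubleWritable;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;

    // Hypothetical reducer: all numbers arrive under one constant key, so this
    // single reduce() call sees the entire data set, repetitions included.
    public class MedianReducer
        extends Reducer<Text, IntWritable, NullWritable, DoubleWritable> {

      @Override
      protected void reduce(Text key, Iterable<IntWritable> values, Context context)
          throws IOException, InterruptedException {
        List<Integer> numbers = new ArrayList<>();
        for (IntWritable v : values) {
          numbers.add(v.get());                  // keep one entry per occurrence
        }
        Collections.sort(numbers);

        int n = numbers.size();
        double median = (n % 2 == 1)
            ? numbers.get(n / 2)
            : (numbers.get(n / 2 - 1) + numbers.get(n / 2)) / 2.0;
        context.write(NullWritable.get(), new DoubleWritable(median));
      }
    }

If only the per-number counts are available, the same walk can be done over the
cumulative frequency (summing counts in sorted key order until half the total
is passed), which avoids materializing every repetition.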
Regards,
Hi,
I built Hadoop-2.3.0 from source based on the Wiki page,
http://wiki.apache.org/hadoop/HowToSetupYourDevelopmentEnvironment
I have YARN running on top of HDFS:
#hdfs namenode -format
#hdfs namenode
#hdfs datanode
#yarn resourcemanager
#yarn nodemanager
Everything runs smoothly. As the
@Zeshen Wu, thanks for the response.
I still don't understand how HDFS reduces the time to write and read a
file, compared to a traditional file read/write mechanism.
For example, if I am writing a file using the default configurations,
Hadoop internally has to write each block to 3 datanodes.