Hey guys!
Any suggestions?
-- Forwarded message --
From: praveenesh kumar praveen...@gmail.com
Date: Wed, Jun 1, 2011 at 2:48 PM
Subject: Datanode is taking time to start; "Error register
getProtocolVersion" in namenode
To: common-user@hadoop.apache.org
Hello Hadoop
Check the firewall of the Ubuntu box.
2011/6/2 jagaran das jagaran_...@yahoo.co.in
ufw
From: MilleBii mille...@gmail.com
To: common-user@hadoop.apache.org
Sent: Wed, 1 June, 2011 3:37:23 PM
Subject: Re: Adding first datanode isn't working
OK found my issue.
Dear all,
I ran several map-reduce jobs in a Hadoop cluster of 4 nodes.
Now I want one map-reduce job to run after another one completes.
For example, to clarify my point: suppose a wordcount is run on a Gutenberg
file in HDFS, and after completion
11/06/02 15:14:35 WARN mapred.JobClient: No job jar file
Oozie's workflow feature may be exactly what you're looking for. It
can also do much more than just chain jobs.
Check out additional features at: http://yahoo.github.com/oozie/
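For instance, here is a minimal workflow.xml sketch; the action names, the
input/output paths, and the ${jobTracker}/${nameNode} parameters are
placeholders, not anything from your setup. It chains two map-reduce actions
so that the second one starts only when the first succeeds:

<workflow-app name="chained-mr" xmlns="uri:oozie:workflow:0.1">
  <start to="first-mr"/>

  <action name="first-mr">
    <map-reduce>
      <job-tracker>${jobTracker}</job-tracker>
      <name-node>${nameNode}</name-node>
      <configuration>
        <property>
          <name>mapred.input.dir</name>
          <value>/user/demo/input</value>
        </property>
        <property>
          <name>mapred.output.dir</name>
          <value>/user/demo/intermediate</value>
        </property>
      </configuration>
    </map-reduce>
    <!-- the next action runs only if this one succeeds -->
    <ok to="second-mr"/>
    <error to="fail"/>
  </action>

  <action name="second-mr">
    <map-reduce>
      <job-tracker>${jobTracker}</job-tracker>
      <name-node>${nameNode}</name-node>
      <configuration>
        <property>
          <name>mapred.input.dir</name>
          <value>/user/demo/intermediate</value>
        </property>
        <property>
          <name>mapred.output.dir</name>
          <value>/user/demo/output</value>
        </property>
      </configuration>
    </map-reduce>
    <ok to="end"/>
    <error to="fail"/>
  </action>

  <kill name="fail">
    <message>A map-reduce action failed</message>
  </kill>
  <end name="end"/>
</workflow-app>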
On Thu, Jun 2, 2011 at 4:48 PM, Adarsh Sharma adarsh.sha...@orkash.com wrote:
Dear all,
I ran several map-reduce
OK. Is this also valid for jobs run through Hadoop Pipes?
Thanks
Harsh J wrote:
Oozie's workflow feature may be exactly what you're looking for. It
can also do much more than just chain jobs.
Check out additional features at: http://yahoo.github.com/oozie/
On Thu, Jun 2, 2011 at 4:48 PM,
Dear all,
How do I run two map-reduce jobs one after the other in a single Hadoop
map-reduce program? Not cascading, and not two jobs running at the same
time, but two different map-reduce jobs run sequentially, so that the second
job starts only once the first has finished (see the driver sketch below).
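A minimal driver sketch of that sequencing, against the 0.20 "new"
(org.apache.hadoop.mapreduce) API; the identity Mapper/Reducer and the
three path arguments are placeholders for real job classes and paths:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ChainedJobsDriver {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // First job: the identity Mapper/Reducer here stand in for your real
    // classes (e.g. a wordcount mapper and reducer).
    Job first = new Job(conf, "first-job");
    first.setJarByClass(ChainedJobsDriver.class);
    first.setMapperClass(Mapper.class);
    first.setReducerClass(Reducer.class);
    first.setOutputKeyClass(LongWritable.class);
    first.setOutputValueClass(Text.class);
    FileInputFormat.addInputPath(first, new Path(args[0]));
    FileOutputFormat.setOutputPath(first, new Path(args[1]));

    // waitForCompletion blocks until the job finishes, so nothing below
    // runs before the first job is done.
    if (!first.waitForCompletion(true)) {
      System.exit(1); // stop the chain if the first job fails
    }

    // Second job: consumes the first job's output directory.
    Job second = new Job(conf, "second-job");
    second.setJarByClass(ChainedJobsDriver.class);
    second.setMapperClass(Mapper.class);
    second.setReducerClass(Reducer.class);
    second.setOutputKeyClass(LongWritable.class);
    second.setOutputValueClass(Text.class);
    FileInputFormat.addInputPath(second, new Path(args[1]));
    FileOutputFormat.setOutputPath(second, new Path(args[2]));
    System.exit(second.waitForCompletion(true) ? 0 : 1);
  }
}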
Yes, I believe Oozie does have Pipes and Streaming action helpers as well.
On Thu, Jun 2, 2011 at 5:05 PM, Adarsh Sharma adarsh.sha...@orkash.com wrote:
OK. Is this also valid for jobs run through Hadoop Pipes?
Thanks
Harsh J wrote:
Oozie's workflow feature may be exactly what you're
Thanks a lot, I will let you know after some work on it.
:-)
Harsh J wrote:
Yes, I believe Oozie does have Pipes and Streaming action helpers as well.
On Thu, Jun 2, 2011 at 5:05 PM, Adarsh Sharma adarsh.sha...@orkash.com wrote:
OK. Is this also valid for jobs run through Hadoop Pipes?
Hi all,
Are there instructions anywhere on how to change the default ports
of Hadoop and HDFS? My main interest is in the default port 8020.
Thanks,
George
--
On Thu, 02 Jun 2011 17:23:08 +0300, George Kousiouris
gkous...@mail.ntua.gr wrote:
Are there instructions anywhere on how to change the default ports
of Hadoop and HDFS? My main interest is in the default port 8020.
I think this is part of fs.default.name. You would go into core-site.xml
and change the port in that URI.
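For example (the hostname and port below are placeholders): the NameNode RPC
port is taken from the fs.default.name URI, so changing the port there moves
the service off 8020:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master.example.com:8090</value>
  </property>
</configuration>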
Hi,
thanks for the reply. We tried this with a port that is definitely open
(from a firewall point of view), but it still did not work. Maybe something
else is needed in addition?
Thanks,
George
On 6/2/2011 5:32 PM, John Armstrong wrote:
On Thu, 02 Jun 2011 17:23:08 +0300, George Kousiouris
Hi George,
You have to set HADOOP_SSH_OPTS in *hadoop-env.sh*; these options are
passed to ssh by the start/stop scripts, so use this when your nodes run
sshd on a non-default port. For example:
export HADOOP_SSH_OPTS="-p 8221 -o ConnectTimeout=1 -o SendEnv=HADOOP_CONF_DIR"
On Thu, Jun 2, 2011 at 7:48 AM, George Kousiouris gkous...@mail.ntua.grwrote:
Hi,
thanks for the reply. We tried this with a
George,
Could you additionally describe the issues you're facing right now w.r.t. ports?
If you need a fuller port-number reference, we have a blog post
you can get them from:
http://www.cloudera.com/blog/2009/08/hadoop-default-ports-quick-reference/
On Thu, Jun 2, 2011 at 8:18 PM, George
Hi,
the main issue is that we are trying to connect a datanode from a remote
domain to the master node, which is behind a firewall that does not have
port 8020 open. We changed the port in core-site.xml to 8090 and restarted.
But when we try to start HDFS, it seems that all services are started,
George,
Could you ensure that your 8090'd NameNode starts up fine? If yes, can
you also telnet to that host:port from your remote node? That
should tell you whether your firewall is at issue again.
On Thu, Jun 2, 2011 at 10:17 PM, George Kousiouris
gkous...@mail.ntua.gr wrote:
Hi,
the main issue
Hi Matei,
Thanks for your feedback. I am trying to verify/debug whether the failures
are actually due to speculative execution. I will send an update once I have
more info on this.
-Shrinivas
On Thu, Jun 2, 2011 at 12:40 AM, Matei Zaharia ma...@eecs.berkeley.eduwrote:
Usually the number of
btw, the /lezhao/gov2 service from the root directory
(/bos/usr0/htdocs/lezhao/gov2) works fine.
Le
On 6/2/2011 1:00 PM, George Kousiouris wrote:
Hi,
it seems to be working fine, and the telnet also works...:-(
On 6/2/2011 7:56 PM, Harsh J wrote:
George,
Could you ensure that your 8090'd
Hi,
we have made some progress with this. One of the problems was that the
master alias in /etc/hosts was mapped to localhost, so the daemons were
binding to the loopback interface. We changed core-site.xml to use the
public IP directly.
BR,
George
On 6/2/2011 8:00 PM, George Kousiouris wrote:
Hi,
it seems to be working fine, and the telnet
George,
Good to know. That'd indeed have led to a wrong bind/listen. Are you
out of DFS troubles now?
On Thu, Jun 2, 2011 at 11:39 PM, George Kousiouris
gkous...@mail.ntua.gr wrote:
Hi,
we have made some progress with this. One of the problems was that the
master alias in /etc/hosts was mapped to
Hi,
well, sort of; we have some other networking issues now that prevent us
from actually testing the port change.
I will let you know as soon as all is tested.
Thanks to everyone for their help,
George
On 6/2/2011 9:12 PM, Harsh J wrote:
George,
Good to know. That'd indeed have
Hi,
Does anyone know whether SequenceFile.Reader.next(key) actually avoids
reading the value into memory?
next: http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/io/SequenceFile.Reader.html#next%28org.apache.hadoop.io.Writable%29
Hi,
Sorry to bother you again, but could someone please give me advice on
how to set up hadoop with a fixed LineRecordReader?
(If I need to check out from SVN, how do I get those three subprojects
combined and the bash scripts working? I couldn't find a doc for this yet.)
Kind regards,
Claus
I found the reason: I was using different versions of Hadoop on the server
side and the client. It seems CDH is not compatible with the
Apache release version.
On Fri, Jun 3, 2011 at 1:12 AM, Tanping Wang tanp...@yahoo-inc.com wrote:
Jeff,
I would first double check if port 9000 is your service
Hi John, thanks for the reply. But I'm not asking about the key memory
allocation here. I'm just asking what the difference is between
next(key, value) and next(key). Is the latter still reading the value of
the record to reach the next key, or does it read the key and then use the
recordSize
Hello guys,
I have just installed HBase on my Hadoop cluster.
HMaster, HRegionServer, and HQuorumPeer are all working fine, as I can see
these processes running through jps.
Is there any way to know which regionservers are running correctly and which
are not? I mean, is there some kind of HBase web UI or any other way
Actually, I checked the source code of Reader, and it turns out it reads the
value into a buffer but only returns the key to the user :( How is this
different from:
Text value = new Text(); // some concrete Writable; Writable itself is an interface
reader.next(key, value);
Both are using the same object across multiple reads. I was hoping next(key)
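For reference, a minimal sketch of the two read patterns on that
SequenceFile.Reader API; the Text/IntWritable types, the path argument, and
the key filter are assumptions. Consistent with what you saw in the source,
next(key) still pulls the raw record bytes into an internal buffer, but it
skips deserializing the value until you ask for it with getCurrentValue():

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class SeqFileReadDemo {
  // Form 1: next(key, value) deserializes both the key and the value
  // into the objects passed in.
  static void readWithValues(FileSystem fs, Path path, Configuration conf)
      throws IOException {
    SequenceFile.Reader reader = new SequenceFile.Reader(fs, path, conf);
    try {
      Text key = new Text();
      IntWritable value = new IntWritable();
      while (reader.next(key, value)) {
        System.out.println(key + "\t" + value);
      }
    } finally {
      reader.close();
    }
  }

  // Form 2: next(key) fills in only the key; the value's raw bytes are
  // still buffered, but deserialization is deferred until
  // getCurrentValue() is called.
  static void readKeysOnly(FileSystem fs, Path path, Configuration conf)
      throws IOException {
    SequenceFile.Reader reader = new SequenceFile.Reader(fs, path, conf);
    try {
      Text key = new Text();
      IntWritable value = new IntWritable();
      while (reader.next(key)) {
        if (key.toString().startsWith("interesting")) { // hypothetical filter
          reader.getCurrentValue(value); // deserialize the value on demand
          System.out.println(key + "\t" + value);
        }
      }
    } finally {
      reader.close();
    }
  }

  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path path = new Path(args[0]); // assumed SequenceFile<Text, IntWritable>
    readWithValues(fs, path, conf);
    readKeysOnly(fs, path, conf);
  }
}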