You can try from this page:
https://ccp.cloudera.com/display/CDH4DOC/CDH4+Installation
You have all the links you need!
*Fabio Pitzolu*
Consultant - BI Infrastructure
Mob. +39 3356033776
Telefono 02 87157239
Fax. 02 93664786
*Gruppo Consulenza Innovazione - http://www.gr-ci.com*
2012/8/30
Hi Jilani,
It seems like a firewall issue. You will need to open the appropriate ports or
disable the firewall on the machine on which you are running the service.
HTH,
Anil
On Thu, Aug 30, 2012 at 7:46 AM, Jilani Shaik jilani2...@gmail.com wrote:
I am able to connect to HBase using Java client if the
Anil,
I already disabled the Linux firewall using the iptables service.
Thank You,
Jilani
On Thu, Aug 30, 2012 at 8:35 PM, anil gupta anilgupt...@gmail.com wrote:
Hi Jilani,
It seems like a firewall issue. You will need to open appropriate ports or
disable the firewall on the machine you are
Then it might be an issue with port binding. Try telnetting to the port on
which HBase listens, from localhost as well as from a remote machine.
Also, run the netstat command and check the service's bindings.
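Concretely, the checks above look something like this; the host and the port list are assumptions based on the default ports discussed later in this thread:

```shell
#!/usr/bin/env bash
# Probe the usual ZooKeeper/HDFS/HBase ports (host and ports are assumptions).
host=localhost
for p in 2181 9000 60010 60030; do
  if (exec 3<>"/dev/tcp/$host/$p") 2>/dev/null; then
    echo "port $p: open"
  else
    echo "port $p: closed"
  fi
done
# On the server itself, see which process is bound to each port:
netstat -alnp 2>/dev/null | grep -E ':(2181|9000|60010|60030) ' || true
```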
~Anil
On Thu, Aug 30, 2012 at 8:20 AM, Jilani Shaik jilani2...@gmail.com wrote:
Hi Anil,
Please see the below commands which I executed and their respective outputs,
on HBase and Hadoop box
[rtit@localhost conf]$ netstat -alnp | grep 2181
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp0
Hi Bertrand,
No, I do not observe the same when I run using cat | map. I can see the
output in STDOUT when I run my program.
I do not have any reducer. In my command, I provide
-D mapred.reduce.tasks=0. So, I expect the output of the mapper to be
written directly to HDFS.
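That expectation can be sanity-checked locally: with -D mapred.reduce.tasks=0 the mapper's stdout is exactly what gets written out, so a local cat | mapper pipeline should match the job's output. A toy version (the file name and the tr-based mapper are assumptions, standing in for the real script):

```shell
# Map-only equivalence check: simulate `cat input | mapper` with a toy mapper.
printf 'hello\nworld\n' > /tmp/streaming_input.txt
cat /tmp/streaming_input.txt | tr 'a-z' 'A-Z'
# prints:
# HELLO
# WORLD
```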
Your suspicion
Please find the below conf files from both the hadoop and hbase conf.
core-site.xml:
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/rtit/hadoop/tmp</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://192.265.47.222:9000</value>
  </property>
</configuration>
This is interesting. I changed my command to:
-mapper cat $1 | $GHU_HOME/test2.py $2 \
and it is producing output to HDFS. But the output is not what I expected and
is not the same as when I do cat | map on Linux. It is producing
part-0, part-1 and part-2. I expected only one output file
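Multiple part files are expected: a map-only job writes one part file per map task, so three map tasks give three outputs. They can be merged afterwards; locally that is just a cat over the part files (hadoop fs -getmerge does the equivalent against HDFS). File names below mirror HDFS part-file naming and are illustrative:

```shell
# Merge per-task part files into a single output file.
dir=$(mktemp -d)
printf 'one\n'   > "$dir/part-00000"
printf 'two\n'   > "$dir/part-00001"
printf 'three\n' > "$dir/part-00002"
cat "$dir"/part-* > "$dir/merged.out"
wc -l < "$dir/merged.out"   # three lines total
```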
On Thu, Aug 30, 2012 at 12:17 PM, Jilani Shaik jilani2...@gmail.com wrote:
telnet is working for 60010, 60030 and 9000 from both the local and remote
boxes.
Then the HBase daemons are not running or, as Anil is suggesting, the
connectivity between machines needs fixing (it looks like all binds
In addition to Stack's suggestion, use DNS names instead of IP address in
configuration of Hadoop and HBase. It's a bad idea to use DNS names in
configuration. Check your DNS configuration file.
By sudo I meant: run the netstat command like this: sudo netstat
-alnp. sudo is used to run a
Sorry, I messed up my last email. Please ignore that.
In addition to Stack's suggestion, use DNS names instead of IP addresses in
the configuration of Hadoop and HBase. It's a bad idea to use IP addresses in
the configuration of Hadoop/HBase. Check your DNS configuration file also.
By sudo i meant that run
Hi,
Do both input files contain data that needs to be processed by the
mapper in the same fashion? In that case, you could just put the
input files under a directory in HDFS and provide that as input. The
-input option does accept a directory as argument.
Otherwise, can you please explain a
-- Forwarded message --
From: Visioner Sadak visioner.sa...@gmail.com
Date: Thu, Aug 30, 2012 at 2:02 PM
Subject: Integrating hadoop with java UI application deployed on tomcat
To: u...@hadoop.apache.org
Hi,
I have a WAR which is deployed on a Tomcat server. The WAR contains some
I changed the mapred.job.reuse.jvm.num.tasks property from -1 to 1 using the
option -D mapred.job.reuse.jvm.num.tasks=1 when firing the terasort job, and
it worked. Not sure why.
This means that the hadoop log dir permissions aren't an issue.
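For reference, a sketch of passing that flag; the jar path is an assumption, and the -D generic option must come before the job's own arguments:

```shell
# Run TeraSort with JVM reuse disabled (num.tasks=1 means one task per JVM).
hadoop jar /usr/lib/hadoop-0.20/hadoop-examples.jar terasort \
  -D mapred.job.reuse.jvm.num.tasks=1 \
  /terasort/input /terasort/output
```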
From: Vinod Kumar Vavilapalli
Hi,
I ran TestDFSIO in my Hadoop cluster:
hadoop jar /usr/lib/hadoop-0.20/hadoop-test.jar TestDFSIO -write -nrFiles
100 -fileSize 10240
The report generated is:
12/08/30 01:31:34 INFO fs.TestDFSIO: - TestDFSIO - : write
12/08/30 01:31:34 INFO fs.TestDFSIO: Date time: Thu
Hello list,
Does hadoop-1.0.3 support WholeFileInputFormat? I am not able to find
it in the distribution. Thank you.
Regards,
Mohammad Tariq
Hi,
I have a WAR which is deployed on a Tomcat server. The WAR contains some
Java classes which upload files. Will I be able to upload directly into
Hadoop? I am using the below code in one of my Java classes:
Configuration hadoopConf = new Configuration();
// get the default associated file system
Ok, but as I said before, how do I achieve the same result without
clustering, just linear? A join on the same data-set basically,
calculating the distance as I go.
On Tue, Aug 28, 2012 at 11:07 PM, Ted Dunning tdunn...@maprtech.com wrote:
I don't mean that.
I mean that a k-means
You might need to put the apache commons configuration library jar in
web-inf/lib to clear this error.
On Thu, Aug 30, 2012 at 4:32 AM, Visioner Sadak visioner.sa...@gmail.comwrote:
Hi,
I have a WAR which is deployed on a Tomcat server. The WAR contains some
Java classes which upload files,
Tried putting it, still same error.
On Thu, Aug 30, 2012 at 3:06 PM, John Hancock jhancock1...@gmail.comwrote:
You might need to put the apache commons configuration library jar in
web-inf/lib to clear this error.
On Thu, Aug 30, 2012 at 4:32 AM, Visioner Sadak
Hi,
is there anybody who knows more about this issue? It has recently been
marked here:
https://issues.apache.org/jira/browse/MAPREDUCE-5
I really want to do something about it, if I knew how... I tried so many
different setup parameters and JVM options and nothing did the trick...
Hi,
The error is talking about hadoop configuration. So probably you need to
put the hadoop core jar in the lib folder. That said, there might be other
dependencies you might need as well. But you can try it out once.
Thanks
hemanth
On Thu, Aug 30, 2012 at 3:53 PM, Visioner Sadak
Hi All,
The formula is actually:
Throughput = (size * 1000) / (time * MEGA)
           = (1073741824000 * 1000) / (184793950 * 1048576)
           = 5.54130695296031
And the time is the summation of all the Exec
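The arithmetic checks out; a quick reproduction, taking size as the total bytes written, time as the summed task time in milliseconds, and MEGA = 1048576, so the result is in MB/s:

```shell
# Recompute TestDFSIO's reported throughput: (size * 1000) / (time * MEGA).
awk 'BEGIN {
  size = 1073741824000; time = 184793950; MEGA = 1048576
  printf "%.5f\n", (size * 1000) / (time * MEGA)
}'
# prints 5.54131
```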
I was just reading about this in the Hadoop definitive guide last night.
They need to be the same version. You can try hftp between versions.
Regards
Dano
On Aug 28, 2012 9:44 AM, Tao z...@outlook.com wrote:
Hi, all
I use distcp to copy data from Hadoop 1.0.3 to Hadoop 2.0.1.
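A sketch of the hftp approach; the hostnames and ports are assumptions (50070 being the usual NameNode HTTP port), and distcp should be run on the destination cluster:

```shell
# Cross-version copy: read over the version-independent HFTP interface from
# the 1.0.3 cluster, write into the 2.0.1 cluster's HDFS.
hadoop distcp hftp://old-nn:50070/src/path hdfs://new-nn:9000/dest/path
```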
Robin, moving this to the MapR forum at answers.mapr.com. Your question
is posted at
http://answers.mapr.com/questions/3480/mapr-hive-job-hangs
thanks,
Srivas.
On Thu, Aug 30, 2012 at 6:37 AM, Robin Verlangen ro...@us2.nl wrote:
Hi there,
We're experimenting with MapR and Hive. We started
Thank you Srivas, found the forum a couple of minutes ago myself.
With kind regards,
Robin Verlangen
*Software engineer*
*
*
W http://www.robinverlangen.nl
E ro...@us2.nl
I am able to connect to HBase using Java client if the client is on the
same box where Hadoop and HBase are installed. If the client is on other
box either windows or linux, I am getting the error as below
org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection for
client
Please
On 30 August 2012 13:54, Visioner Sadak visioner.sa...@gmail.com wrote:
Thanks a ton guys for your help. I used hadoop-core-1.0.3.jar and
commons-lang-2.1.jar to get rid of the class-not-found error. Now I am
getting this error; is this because I am using my app and Hadoop on Windows?
Mohammad,
There never has been a WholeFileInputFormat in upstream Hadoop AFAIK.
It is an example in Tom White's book.
On Thu, Aug 30, 2012 at 1:49 PM, Mohammad Tariq donta...@gmail.com wrote:
Hello list,
Does hadoop-1.0.3 support WholeFileInputFormat? I am not able to find
it in the
Thank you for your reply.
I really need this feature, because it would speed up execution of my use case
a lot.
Could you give me a hint where to look for a good starting
point for an implementation?
Hi Eduard,
This isn't impossible, just unavailable at the moment. See
Hi,
When running a job with more reducers than containers available in the
cluster all reducers get scheduled, leaving no containers available
for the mappers to be scheduled. The result is starvation and the job
never finishes. Is this to be considered a bug or is it expected
behavior? The
FYI: the jstack output for the hung job:
Full thread dump Java HotSpot(TM) 64-Bit Server VM (20.8-b03-424 mixed
mode):
RMI TCP Accept-0 daemon prio=9 tid=7f833f1ba800 nid=0x109b9 runnable
[109b8f000]
java.lang.Thread.State: RUNNABLE
at java.net.PlainSocketImpl.socketAccept(Native
The first scenario is expected behavior. And yes, you should limit the number
of reducers.
Serge
On 8/30/12 10:41 AM, Vasco Visser vasco.vis...@gmail.com wrote:
Hi,
When running a job with more reducers than containers available in the
cluster all reducers get scheduled, leaving no containers
Can you also try to run telnet and netstat for ports 60030 and 60010? I
don't see ports 60030 and 60010 in the output of netstat. Did you configure
some other ports for the HBase Master?
2181 is the port of ZooKeeper.
On Thu, Aug 30, 2012 at 8:45 AM, Jilani Shaik jilani2...@gmail.com wrote:
Hi
If possible, try to run netstat as sudo.
On Thu, Aug 30, 2012 at 11:21 AM, anil gupta anilgupt...@gmail.com wrote:
Can you also try to run telnet and netstat for ports 60030 and 60010? I
don't see ports 60030 and 60010 in the output of netstat. Did you configure
some other ports for the HBase
Vinod, thanks for the reply.
On Thu, Aug 30, 2012 at 8:19 PM, Vinod Kumar Vavilapalli
vino...@hortonworks.com wrote:
Since you mentioned containers, I assume you are using hadoop 2.0.*. Replies
inline.
0.23.1 with Pig 0.10.0 on top.
When running a job with more reducers than containers
Hi Stone,
Do you have any flags in hadoop-env.sh indicating preference of IPv4 over IPv6?
On Thu, Aug 30, 2012 at 11:26 PM, Stone stones@gmail.com wrote:
FYI: the jstack output for the hung job:
Full thread dump Java HotSpot(TM) 64-Bit Server VM (20.8-b03-424 mixed
mode):
RMI TCP
but the problem is that my code gets executed with the warning, but the file
is not copied to HDFS. Actually I am trying to copy a file from local to HDFS:
Configuration hadoopConf = new Configuration();
// get the default associated file system
FileSystem fs = FileSystem.get(hadoopConf);
Do I have to do some Tomcat configuration settings?
On Fri, Aug 31, 2012 at 1:08 AM, Visioner Sadak visioner.sa...@gmail.comwrote:
but the problem is that my code gets executed with the warning, but the file
is not copied to HDFS. Actually I am trying to copy a file from local to
HDFS.
I don't know off-hand. I don't understand the importance of your
constraint either.
On Thu, Aug 30, 2012 at 5:21 AM, dexter morgan dextermorga...@gmail.comwrote:
Ok, but as i said before, how do i achieve the same result with out
clustering , just linear. Join on the same data-set basically?
FYI: the starvation issue is a known bug
(https://issues.apache.org/jira/browse/MAPREDUCE-4299).
Still interested in answers to the questions regarding the scheduling
though. If anyone can share some info on that it is much appreciated.
regards, Vasco
Unsubscribe
2012/8/31 Vasco Visser vasco.vis...@gmail.com
FYI: the starvation issue is a known bug
(https://issues.apache.org/jira/browse/MAPREDUCE-4299).
Still interested in answers to the questions regarding the scheduling
though. If anyone can share some info on that it is much
My name is ko.
Please let me join the mailing list.
Thank you very much.
--
kosen...@datahotel.co.jp
The mapred.local.dir setting points to local directories on the file systems
of the slave nodes. In pseudo-distributed mode, this would be your own
machine. If you've specified any configuration for it, it should be in your
mapred-site.xml. If not, it defaults to the value of
${hadoop.tmp.dir}/mapred/local. The default
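For example, an explicit override would look like this in mapred-site.xml (the path is illustrative):

```xml
<property>
  <name>mapred.local.dir</name>
  <value>/home/user/hadoop/mapred/local</value>
</property>
```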
You are allowed to join
-Original Message-
From: 黄 光川 [mailto:kosen...@datahotel.co.jp]
Sent: 31 August 2012 08:48
To: user@hadoop.apache.org
Subject: Please let me join the mailing list.
My name is ko.
Please let me join the mailing list.
Thank you very much.