Error observed in Task Logs
A few of the task attempts fail with the following error message:
2013-09-03 08:47:07,670 INFO org.apache.hadoop.mapred.Merger: Merging
1000 sorted segments
2013-09-03 08:47:07,852 INFO org.apache.hadoop.mapred.Merger: Down to
the last merge-pass, with 994 segments
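For context: the merge fan-in in those log lines is capped by the io.sort.factor
setting, which controls how many sorted segments the Merger combines per pass.
A hedged sketch of tuning it in mapred-site.xml; the value 100 is only an example:

<property>
  <name>io.sort.factor</name>
  <value>100</value>
</property>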
Hello Hadoopers,
I am trying to configure HttpFS. Below are my
configurations in
*httpfs-site.xml*:
<property>
  <name>httpfs.fsAccess.conf:fs.default.name</name>
  <value>hdfs://132.168.0.10:8020</value>
</property>
*and in core-site.xml*
<property>
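For reference, the HttpFS server setup also needs proxyuser entries in the
NameNode's core-site.xml. A minimal sketch, assuming the HttpFS service runs as
user httpfs on host httpfs-host.example.com (both placeholders):

<property>
  <name>hadoop.proxyuser.httpfs.hosts</name>
  <value>httpfs-host.example.com</value>
</property>
<property>
  <name>hadoop.proxyuser.httpfs.groups</name>
  <value>*</value>
</property>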
something is wrong with your name-resolution. If you look at the error
message, it says you are trying to connect to 127.0.0.1 instead of the
remote host.
-André
On Tue, Sep 3, 2013 at 12:05 PM, Visioner Sadak
visioner.sa...@gmail.com wrote:
Hello Hadoopers,
I am
yeah, should I change my /etc/hosts? :)
On Tue, Sep 3, 2013 at 3:59 PM, Andre Kelpe ake...@concurrentinc.com wrote:
something is wrong with your name-resolution. If you look at the error
message, it says you are trying to connect to 127.0.0.1 instead of the
remote host.
-André
On Tue, Sep 3,
I removed the 127.0.0.1 references from my /etc/hosts; now it's throwing
{"RemoteException":{"message":"Call From redsigma1.local/132.168.0.10
to localhost:8020 failed on connection exception:
java.net.ConnectException: Connection refused; For more details see:
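A minimal sketch of an /etc/hosts layout consistent with the error above,
assuming redsigma1.local is the machine at 132.168.0.10:

127.0.0.1      localhost
132.168.0.10   redsigma1.local redsigma1

The "Call From redsigma1.local/132.168.0.10 to localhost:8020" wording also
suggests some config value, e.g. fs.default.name, still points at
hdfs://localhost:8020 instead of the real hostname.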
Hello Hadoopers,
I need to search files in distributed database storage. I need to
develop a front-end GUI and an algorithm for searching multimedia files, using
different search-exploration techniques. Shall I use HBase for this? Can
you suggest something for implementing this?
--
Thanks
Going through Cloudera Search functionality would be useful for you. Have a
look at Solr as well; you can integrate it with Hadoop.
On 03/09/2013 9:39 PM, Devika Rankhambe rankhambedev...@gmail.com wrote:
Hello Hadoopers,
I need to search files in distributed database storage. I need to
Can you please share your /etc/hosts file?
Thanks
Jitendra
On Tue, Sep 3, 2013 at 4:53 PM, Visioner Sadak visioner.sa...@gmail.com wrote:
I removed the 127.0.0.1 references from my /etc/hosts; now it's throwing
{"RemoteException":{"message":"Call From redsigma1.local/132.168.0.10 to
localhost:8020
Hi,
I am trying to run Contrail on Hadoop and it starts OK, but after some time
it throws an error: some Java class can't convert a string to a float because
it contains a comma.
I don't think the problem is in the Contrail code. Maybe it's my locale or
timezone settings? Has anyone run into this before?
$
Thank you Shekhar, Harsh. I will follow and try to implement your
suggestions. I appreciate the help.
On Sat, Aug 31, 2013 at 11:12 AM, Harsh J ha...@cloudera.com wrote:
Your cluster is using HDFS HA, and therefore requires a few more
configs than just fs.defaultFS, etc.
You need to use
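For anyone searching later, the client-side HDFS HA configs usually look
roughly like the sketch below; "mycluster", "nn1host" and "nn2host" are
placeholders:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://mycluster</value>
</property>
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>nn1host:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>nn2host:8020</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>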
This is usually a String.format() problem: the developer was
using an English locale and was not aware that
String.format is locale-dependent.
Try this:
export LANG=en_EN.UTF-8
your hadoop command
- André
On Tue, Sep 3, 2013 at 3:20 PM, Felipe Gutierrez
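A small self-contained illustration of the locale dependence André describes;
the class name is made up:

import java.util.Locale;

public class LocaleFormatDemo {
    public static void main(String[] args) {
        double v = 3.14;
        // In a German-style locale, %f uses a comma as the decimal separator...
        String german = String.format(Locale.GERMANY, "%f", v); // "3,140000"
        // ...and parsing that back fails:
        // Float.parseFloat(german) -> NumberFormatException
        // Pinning the locale keeps the decimal point:
        String plain = String.format(Locale.ROOT, "%f", v);     // "3.140000"
        System.out.println(german + " / " + plain);
        System.out.println(Float.parseFloat(plain));            // 3.14
    }
}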
I don't know anything about Contrail; I believe it is a better idea
to ask on their mailing list for help:
http://sourceforge.net/p/contrail-bio/mailman/?source=navbar
- André
On Tue, Sep 3, 2013 at 5:27 PM, Felipe Gutierrez
felipe.o.gutier...@gmail.com wrote:
I configured the languages but the
I configured the languages but the error persists.
I wrote:
$ export LANG=en_EN.UTF-8
$ export LANGUAGE=en_US:en
$ locale
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_MESSAGES to default locale: No such file or directory
locale: Cannot set LC_ALL to
--
Best regards,
Abhijith
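A side note on the "Cannot set LC_* to default locale" errors: en_EN is not a
valid locale name (en_US or en_GB are), and the errors usually mean the
requested locale has not been generated on the machine. On Debian/Ubuntu-style
systems the usual fix is something like:

$ sudo locale-gen en_US.UTF-8
$ export LANG=en_US.UTF-8
$ locale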
I just noticed the job status for MR jobs tends to show 0's in the Map and
Reduce columns but actually shows the totals correctly.
I am not sure exactly when this started happening, but this cluster was
upgraded from Hadoop 1.0.4 to 1.1.2 and now to 1.2.1. It definitely worked
fine on 1.0.4, but
Hi,
I reported this issue in MAPREDUCE-5376
(https://issues.apache.org/jira/browse/MAPREDUCE-5376) and attached a
patch.
But it is not fixed in the current release.
Thanks,
Shinichi
(2013/09/03 11:20), Robert Dyer wrote:
I just noticed the job status for MR jobs tends to show 0's in the Map
Is there a current version of this chart, or this information?
https://blogs.apache.org/bigtop/entry/all_you_wanted_to_know
http://2.bp.blogspot.com/-pHJR7XSCTlM/TzS-3-oGS4I/AB4/9M7OUrDapro/s640/hadoop-vers.png
We've observed this internally too.
Shinichi, tx for the patch. Will follow up on JIRA to get it committed.
Thanks,
+Vinod
On Sep 3, 2013, at 11:35 AM, Shinichi Yamashita wrote:
Hi,
I reported this issue in MAPREDUCE-5376
(https://issues.apache.org/jira/browse/MAPREDUCE-5376) and attached
AFAIK, no. I will ask Cos about it.
Kim
On Tue, Sep 3, 2013 at 11:55 AM, Lance Norskog goks...@gmail.com wrote:
Is there a current version of this chart, or this information?
https://blogs.apache.org/bigtop/entry/all_you_wanted_to_know
Keep in mind that there are 2 flavors of Hadoop: the older one without HA
and the new one with it. Anyway, have you seen the following?
http://wiki.apache.org/hadoop/NameNodeFailover
http://www.youtube.com/watch?v=Ln1GMkQvP9w
Thanks all for the JIRA links.
It appears this was 'fixed' in an older JIRA... but looking at the source
in my 1.2.1 webapps directory, it definitely is not fixed!
I'll comment on the new JIRA.
On Tue, Sep 3, 2013 at 2:13 PM, Vinod Kumar Vavilapalli
vino...@apache.org wrote:
We've observed
Rahul, are you talking about the rack-awareness script?
I did go through rack awareness. Here are the problems with rack awareness
w.r.t. my (given) business requirement:
1. Hadoop, by default, places two copies on the same rack and one copy on some
other rack. This would work as long as we have two data
Just starting with Hadoop and HBase, and I couldn't find specific answers
in the official documentation (unless I've missed the obvious).
Assuming I have three Hadoop servers: h1, h2 and h3, with h1 being a
master+slave - what is the recovery scenario if the master server, h1,
died and is beyond repair
Hello Tomasz,
Just to add,
Although it says *masters*, the */conf/masters* actually specifies the
machine where *SecondaryNameNode* will run. Master daemons run on the
machine where you execute the start scripts. If you need to change the
master machine, you must make appropriate changes in the
Here is the updated one,
http://drcos.boudnik.org/2013/02/one-more-hadoop-in-family.html
On Tue, Sep 3, 2013 at 1:07 PM, Kim Chew kchew...@gmail.com wrote:
AFAIK, no. I will ask Cos about it.
Kim
On Tue, Sep 3, 2013 at 11:55 AM, Lance Norskog goks...@gmail.com wrote:
Is there a current
On Wed, Sep 4, 2013 at 12:31 AM, Mounir E. Bsaibes
m...@linux.vnet.ibm.com wrote:
Hello,
I'm considering using Hadoop for a shared cluster. To set up the system
quickly, I was planning to defer Kerberos use.
Is there any way other than Kerberos to forbid a user from using the
hadoop.job.ugi parameter to impersonate another user?
thanks
Mt
Adam's response makes more sense to me: offline-replicate generated data
from one cluster to another across data centers.
Not sure if a configurable block placement policy is
supported in Hadoop. If yes, then along with rack awareness you
should be able to achieve the same.
Under-replicated blocks are also consistent from a consumer's point of view.
Care to explain the relation of weak consistency to Hadoop?
Thanks,
Rahul
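On the configurable-block-placement question: HDFS made the placement policy
pluggable (HDFS-385). A hedged sketch of the NameNode-side config; the property
name is from the 1.x line and may differ in other versions, and the class is a
made-up placeholder:

<property>
  <name>dfs.block.replicator.classname</name>
  <value>com.example.MyBlockPlacementPolicy</value>
</property>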
On Wed, Sep 4, 2013 at 9:56 AM, Rahul Bhattacharjee rahul.rec@gmail.com
wrote:
Adam's response makes more sense to me to offline replicate
Please send a mail to
user-unsubscr...@hadoop.apache.org
to unsubscribe.
Thanks
Devaraj k
From: berty...@gmail.com [mailto:berty...@gmail.com] On Behalf Of Bert Yuan
Sent: 04 September 2013 07:43
To: user@hadoop.apache.org
Subject: unsubscribe
On
No.
There are several ways a user can impersonate another user if you do
not use strong authentication (i.e., Kerberos). Some ways are
implemented with that intention, for users who don't want any
authentication.
On Wed, Sep 4, 2013 at 1:39 AM, fred th meltr...@gmail.com wrote:
Hello,
I'm
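To make the risk concrete, a minimal sketch of how a client can assert an
arbitrary identity when the cluster runs with simple authentication; the
username and path are invented:

import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class ImpersonationSketch {
    public static void main(String[] args) throws Exception {
        // With simple auth, the NameNode trusts whatever identity the client claims.
        UserGroupInformation ugi = UserGroupInformation.createRemoteUser("someoneelse");
        ugi.doAs(new PrivilegedExceptionAction<Void>() {
            public Void run() throws Exception {
                FileSystem fs = FileSystem.get(new Configuration());
                // Everything inside run() executes as "someoneelse".
                for (FileStatus s : fs.listStatus(new Path("/user/someoneelse"))) {
                    System.out.println(s.getPath());
                }
                return null;
            }
        });
    }
}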