On Wed, May 9, 2012 at 10:52 PM, Raj Vishwanathan wrote:
> The picture is either too small or too pixelated for my eyes :-)
>
There should be a zoom option in the top right of the page that allows you
to view it full size
>
> Can you log in to the box and send the output of top? If the system is
>
Hi
Yes - that was indeed the problem...
I cleaned up the Java installations on all the nodes, did a clean reinstall of Sun
jdk1.6.0_23, and the problem is gone.
Many thanks and regards!
Fourie
On 05/09/2012 05:47 PM, Harsh J wrote:
You may be hitting https://issues.apache.org/jira/browse/HDFS-1115?
Have you ensured Sun JDK is the only JDK available in the machines and
your services aren't using OpenJDK accidentally?
Try setting a lower value for mapred.job.shuffle.input.buffer.percent.
The reducer uses it to decide whether to use the in-memory shuffle.
The default value is 0.7, meaning 70% of the memory is used as the shuffle
buffer.
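For example, something like this on the job configuration should do it (just a sketch for the old mapred API; the 0.2 value and the MyJob class are only placeholders):

import org.apache.hadoop.mapred.JobConf;

JobConf conf = new JobConf(MyJob.class);
// default is 0.7, i.e. 70% of the reducer heap is reserved for the in-memory shuffle
conf.setFloat("mapred.job.shuffle.input.buffer.percent", 0.2f);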
On Thu, May 10, 2012 at 2:50 AM, Yang wrote:
> it seems that if I put too many rec
Jay,
Yep, even the local mode does read mapred-site.xml, and a few of the
properties supplied in it apply to LocalJobRunner as well.
Yes, the per-job override will not work for raising limits, as that
wouldn't make sense as a 'limit' then (although Android has a
different sense of that, I've noticed:
http:
Can you share your job details (or a sample reducer code) and also
share your exact error?
If you are holding reducer-provided values/keys in memory in your
implementation, it can easily cause an OOME if not handled properly.
The reducer by itself does read the values off a sorted file on the
disk
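For what it's worth, a reducer shaped like the sketch below stays within memory because it only keeps a running aggregate instead of buffering the values (old mapred API; the Text/LongWritable types and the summing are just an illustration of the pattern):

import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class SumReducer extends MapReduceBase
    implements Reducer<Text, LongWritable, Text, LongWritable> {
  public void reduce(Text key, Iterator<LongWritable> values,
      OutputCollector<Text, LongWritable> output, Reporter reporter) throws IOException {
    long sum = 0;
    while (values.hasNext()) {
      // values are streamed off the sorted on-disk file one at a time;
      // copying them into a List here is what usually blows the heap
      sum += values.next().get();
    }
    output.collect(key, new LongWritable(sum));
  }
}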
Hi Subbu!
Thanks so much for this tip. Strangely, it doesn't seem to work for me ...
I still get the checksum error (though it appears to happen later on in the
job).
Has this workaround always worked for you? I also tried using the
setMaxMapperFailurePercentage() and setMaxReducerFailurePercentage()
Forgot to add that Hadoop distribution is cdh3u3 ...
Thanks
-- Alex
On Wed, May 9, 2012 at 1:58 PM, Alex Levin wrote:
> Hi,
>
> I have an issue with the secondary namenode crashing due to a simple move
> operation.
> Appreciate any ideas on the resolution ...
>
> Details below:
> I was moving
The picture is either too small or too pixelated for my eyes :-)
Can you log in to the box and send the output of top? If the system is
unresponsive, it has to be something more than an unbalanced HDFS cluster,
methinks.
Raj
>
> From: Darrell Taylor
>To: common-u
I would wait for that number to go down to 0.
That could be a reason for your CPU utilization.
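An easy way to keep an eye on it is to rerun hadoop fsck / every few minutes and check the "Under-replicated blocks" line in the summary.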
Regards,
Serge
On 5/9/12 2:27 PM, "Darrell Taylor" wrote:
>On Wed, May 9, 2012 at 10:00 PM, Serge Blazhiyevskyy <
>serge.blazhiyevs...@nice.com> wrote:
>
>> Looks like you have some under replicated bloc
On Wed, May 9, 2012 at 10:23 PM, Raj Vishwanathan wrote:
> When you say 'load', what do you mean? CPU load or something else?
>
I mean it in the Unix sense of load average, i.e. top would show a load of
(currently) 376.
Looking at Ganglia stats for the box it's not CPU load as such, the graphs
sho
On Wed, May 9, 2012 at 10:00 PM, Serge Blazhiyevskyy <
serge.blazhiyevs...@nice.com> wrote:
> Looks like you have some under-replicated blocks. Does that number
> decrease if you fsck multiple times?
>
Yes, since my last post it's now down to 353
Status: HEALTHY
Total size: 246983628437
When you say 'load', what do you mean? CPU load or something else?
Raj
>
> From: Darrell Taylor
>To: common-user@hadoop.apache.org
>Sent: Wednesday, May 9, 2012 9:52 AM
>Subject: High load on datanode startup
>
>Hi,
>
>I wonder if someone could give some point
Ahhh, I now know the answer. My solution:
1) Get a simple mapred config file and remove all parameters but the one I
need to set for my local mode.
2) Put the mapred site config .xml on my classpath (sample below).
3) Run my application.
The JobConf I assume SHOULD NOT work because this parameter is specifical
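For anyone searching later, the kind of stripped-down mapred-site.xml described in step 1 would look roughly like this (500 is just an example value for the counter limit):

<?xml version="1.0"?>
<configuration>
  <property>
    <name>mapreduce.job.counters.limit</name>
    <value>500</value>
  </property>
</configuration>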
You should be able to set that param on the JobConf object.
Regards,
Serge
On 5/9/12 1:09 PM, "Jay Vyas" wrote:
>Hi guys : I need to set a cluster configuration parameter (specifically,
>the "mapreduce.job.counters.limit")
> Easy ... right ?
>
>Well one problem : I'm running hadoop in l
Looks like you have some under-replicated blocks. Does that number
decrease if you fsck multiple times?
Regards,
Serge
On 5/9/12 12:23 PM, "Darrell Taylor" wrote:
>On Wed, May 9, 2012 at 6:04 PM, Serge Blazhiyevskyy <
>serge.blazhiyevs...@nice.com> wrote:
>
>>
>> What does the response from fsck
Hi,
I have an issue with the secondary namenode crashing due to a simple move
operation.
Appreciate any ideas on the resolution ...
Details below:
I was moving old backups to a separate folder, exact command:
sudo -u hdfs hadoop fs -mv /hbase-bak /backup/
and shortly after the command sec
Hi guys: I need to set a cluster configuration parameter (specifically,
the "mapreduce.job.counters.limit").
Easy ... right?
Well, one problem: I'm running Hadoop in local mode!
So how can I simulate this parameter so that my local mode allows me to use
non-default cluster configr
On Wed, May 9, 2012 at 6:04 PM, Serge Blazhiyevskyy <
serge.blazhiyevs...@nice.com> wrote:
>
> What does the response from fsck look like?
>
>
[snip lots of stuff about under-replicated blocks]
..Status: HEALTHY
Total size: 246858876262 B (Total open files size: 372 B)
Total dirs: 14914
It seems that if I put too many records into the same mapper output
key, all these records are grouped under one key on one reducer,
and then the reducer runs out of memory.
But the reducer interface is:
public void reduce(K key, Iterator<V> values,
                   OutputCollector<K, V> output, Reporter reporter) throws IOException
What does the response from fsck look like?
hadoop fsck /
It might be the case that some of the blocks are misreplicated
Serge
Hadoopway.blogspot.com
On 5/9/12 9:58 AM, "Darrell Taylor" wrote:
>On Wed, May 9, 2012 at 5:56 PM, Serge Blazhiyevskyy <
>serge.blazhiyevs...@nice.com> wrote:
>
On Wed, May 9, 2012 at 5:56 PM, Serge Blazhiyevskyy <
serge.blazhiyevs...@nice.com> wrote:
> Take a look at your data distribution for that cluster. Maybe, it is
> unbalanced.
>
>
> Run balancer, if it is…
>
The cluster is balanced; I ran the balancer yesterday. Oddly enough, the
problem started afte
Take a look at your data distribution for that cluster. Maybe, it is
unbalanced.
Run balancer, if it is…
Regards,
Serge
hadoopway.blogspot.com
On 5/9/12 9:52 AM, "Darrell Taylor" wrote:
>Hi,
>
>I wonder if someone could give some pointers with a problem I'm having?
>
>I have a 7 machine c
Hi,
I wonder if someone could give some pointers with a problem I'm having?
I have a 7-machine cluster set up for testing and we have been pouring data
into it for a week without issue, have learnt several things along the way
and solved all the problems up to now by searching online, but now I'm s
Thanks - I'll check!
Regards!
Fourie
On 05/09/2012 05:47 PM, Harsh J wrote:
You may be hitting https://issues.apache.org/jira/browse/HDFS-1115?
Have you ensured Sun JDK is the only JDK available in the machines and
your services aren't using OpenJDK accidentally?
On Wed, May 9, 2012 at 8:44
You may be hitting https://issues.apache.org/jira/browse/HDFS-1115?
Have you ensured Sun JDK is the only JDK available in the machines and
your services aren't using OpenJDK accidentally?
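A quick way to verify on each node is to run java -version and, on distros that use alternatives, update-alternatives --display java, and confirm that the java the daemons pick up really is the Sun JDK.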
On Wed, May 9, 2012 at 8:44 PM, Fourie Joubert wrote:
> Hi
>
> I am running Hadoop-1.0.1 with Sun jdk1.6.0_23
Hello!
I have built Hadoop for ppc64 (IBM Power), based on the branch-1 source, and
I would like to contribute a document to the Wiki on how to do that.
Where would be the appropriate place to add it?
Just for more information, there are a couple of tricks needed for
building it. They are relat
Hi
I am running Hadoop-1.0.1 with Sun jdk1.6.0_23.
My system is a head node with 14 compute blades.
When trying to start Hadoop, I get the following message in the logs for
each data node:
2012-05-09 16:53:35,548 ERROR
org.apache.hadoop.hdfs.server.datanode.DataNode:
DatanodeRegistration(1
Yes, I am the one that said I would look into releasing the Yahoo documentation.
Thanks for reminding me. I have been a bit distracted by some
"restructuring" that has happened recently, but I will get on that.
--Bobby
On 5/8/12 12:30 PM, "Arun C Murthy" wrote:
Thanks for offering to provid
That's pretty cool. We have a German LinkedIn / Xing group too:
http://goo.gl/N8pCF
cheers,
Alex
--
Alexander Lorenz
http://mapredit.blogspot.com
German Hadoop LinkedIn Group: http://goo.gl/N8pCF
On May 9, 2012, at 1:17 PM, Jean-Pierre Koenig wrote:
> Hello Hadoop Users!
>
> To promote knowle
$DuplicationException: Invalid input, there are duplicated files in the
sources: hftp://ub13:50070/tmp/Rtmp1BU9Kb/file6abc6ccb6551/_logs/history,
hftp://ub13:50070/tmp/Rtmp3yCJhu/file1ca96d9331/_logs/history
Any idea what the problem is here?
They are different files; how are they conflicting?
Tha
Hello Hadoop Users!
To promote knowledge exchange amongst developers, many "Hadoop User
Groups" have been founded around the world. In German-speaking
countries there are currently only two of these groups, located in
Berlin and Munich, and until now nothing in Switzerland. This gap will
now be fi
Hi Ali,
I also faced this error when I ran the jobs, either locally or on a cluster.
I was able to solve this problem by removing the .crc file created in the
input folder for the job.
Please check that there is no .crc file in the input.
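(If in doubt, a recursive listing of the job's input directory, e.g. hadoop fs -lsr <input-dir>, should show any stray .crc files.)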
I hope this solves the problem.
Thanks,
Subbu
On Wed, May