[
https://issues.apache.org/jira/browse/HADOOP-1159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12485427
]
Devaraj Das commented on HADOOP-1159:
-
We saw the NPE coming from the call to mapOutputIn.read( ) in the
MapOut
Nige, would it be possible for you to verify this since you also
know how to reproduce the issue?
Sure, but I may not get to it for a while.
On Mar 29, 2007, at 3:02 AM, Tahir Hashmi (JIRA) wrote:
[ https://issues.apache.org/jira/browse/HADOOP-1011?
page=com.atlassian.jira.plugin.syst
[
https://issues.apache.org/jira/browse/HADOOP-1184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12485390
]
Raghu Angadi commented on HADOOP-1184:
--
> It also removes method filterDecommissionedNodes() because it was not
Hairong Kuang wrote:
Two main reasons caused the performance decrease:
1. NNBench sets the block size to be 1. Although it generates a file with
only 1 byte, the file's checksum file has 16 bytes (12 bytes header
plus 4 bytes checksums). Without the checksum file, only 1 block needs to be
g
[
https://issues.apache.org/jira/browse/HADOOP-1178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12485387
]
Hadoop QA commented on HADOOP-1178:
---
+1, because
http://issues.apache.org/jira/secure/attachment/12354544/namenod
Two main reasons caused the performance decrease:
1. NNBench sets the block size to be 1. Although it generates a file with
only 1 byte, the file's checksum file has 16 bytes (12 bytes header
plus 4 bytes checksums). Without the checksum file, only 1 block needs to be
generated. With the chec
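The overhead described above can be sketched with a small calculation. This is a hypothetical illustration, not Hadoop's actual code: the 12-byte header and 4-byte checksum figures come from the comment above, and the 512-byte bytes-per-checksum chunk size is an assumed default.

```java
public class ChecksumOverhead {
    // Figures taken from the comment above (assumptions, not Hadoop source):
    static final int HEADER_BYTES = 12; // checksum file header
    static final int CRC_BYTES = 4;     // one CRC32 value per data chunk

    // Size of the companion checksum file for a data file of dataBytes,
    // checksummed in chunks of bytesPerChecksum.
    static long checksumFileSize(long dataBytes, int bytesPerChecksum) {
        long chunks = (dataBytes + bytesPerChecksum - 1) / bytesPerChecksum;
        return HEADER_BYTES + chunks * CRC_BYTES;
    }

    public static void main(String[] args) {
        // A 1-byte file still needs a 12-byte header plus one 4-byte
        // checksum: a 16-byte checksum file for 1 byte of data.
        System.out.println(checksumFileSize(1, 512)); // prints 16
    }
}
```

This is why NNBench with 1-byte files roughly doubles the block count once checksum files are counted: each tiny data file drags a 16-byte checksum file along with it.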
[
https://issues.apache.org/jira/browse/HADOOP-1178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
dhruba borthakur updated HADOOP-1178:
-
Status: Patch Available (was: Open)
> NullPointer Exception in org.apache.hadoop.dfs.Na
[
https://issues.apache.org/jira/browse/HADOOP-1179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12485370
]
Koji Noguchi commented on HADOOP-1179:
--
For all the nodes with org.mortbay.jetty.servlet.ServletHandler:
java.
[
https://issues.apache.org/jira/browse/HADOOP-1184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
dhruba borthakur updated HADOOP-1184:
-
Attachment: decommissionOneReplica.patch
This patch enhances TestDecommission to test th
Decommission fails if a block that needs replication has only one replica
-
Key: HADOOP-1184
URL: https://issues.apache.org/jira/browse/HADOOP-1184
Project: Hadoop
Issue
Nigel Daley wrote:
So shouldn't fixing this test to conform to the new model in HADOOP-1134
be the concern of the patch for HADOOP-1134? As it stands, I can't run
NNBench at scale without using a raw file system, which is what this
patch is intended to allow. HADOOP-928 caused this test to use
[
https://issues.apache.org/jira/browse/HADOOP-1183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Nigel Daley updated HADOOP-1183:
Fix Version/s: 0.12.3
Affects Version/s: 0.12.2
> MapTask completion not recorded properly
[
https://issues.apache.org/jira/browse/HADOOP-1159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12485353
]
Tom White commented on HADOOP-1159:
---
I understand that the NPE is logged fully, but NPEs typically indicate a
pro
[
https://issues.apache.org/jira/browse/HADOOP-1177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Tom White updated HADOOP-1177:
--
Resolution: Fixed
Status: Resolved (was: Patch Available)
I've just committed this. Thanks De
[
https://issues.apache.org/jira/browse/HADOOP-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12485349
]
Sameer Paranjpye commented on HADOOP-1134:
--
> Is the upgrade the time to detect corrupt blocks? Won't these
[
https://issues.apache.org/jira/browse/HADOOP-1159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12485348
]
Devaraj Das commented on HADOOP-1159:
-
By the way, just to point out, in both cases, the entire exception stack
[
https://issues.apache.org/jira/browse/HADOOP-1159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12485347
]
Devaraj Das commented on HADOOP-1159:
-
Well, the NPE is caught and logged. Since the doGet method doesn't explic
[
https://issues.apache.org/jira/browse/HADOOP-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12485346
]
Sameer Paranjpye commented on HADOOP-1134:
--
> That's not the universal experience. Many if not most of the
[
https://issues.apache.org/jira/browse/HADOOP-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12485345
]
Doug Cutting commented on HADOOP-1134:
--
> The client would need a way to go from a block id to a .crc file via
[
https://issues.apache.org/jira/browse/HADOOP-1183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Devaraj Das updated HADOOP-1183:
Attachment: 1183.new.patch
This patch does a slightly better handling of failed maps. It records o
[
https://issues.apache.org/jira/browse/HADOOP-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12485341
]
Doug Cutting commented on HADOOP-1134:
--
> Yes, but in each of those 100 real data corruptions data can be salva
[
https://issues.apache.org/jira/browse/HADOOP-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12485339
]
Sameer Paranjpye commented on HADOOP-1134:
--
> Why wouldn't the map tasks run on a node where the block is l
[
https://issues.apache.org/jira/browse/HADOOP-1159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12485337
]
Tom White commented on HADOOP-1159:
---
Catching NPEs is generally considered bad form since it hides the problem. In
[
https://issues.apache.org/jira/browse/HADOOP-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12485328
]
Sameer Paranjpye commented on HADOOP-1134:
--
> Okay, I see your point. If we import only a single replica of
[
https://issues.apache.org/jira/browse/HADOOP-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12485326
]
Doug Cutting commented on HADOOP-1134:
--
> it is potentially much slower since validating with Map/Reduce could
[
https://issues.apache.org/jira/browse/HADOOP-1110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12485325
]
Tom White commented on HADOOP-1110:
---
Sorry - FWIW I was just about to commit this one yesterday, when I realised
[
https://issues.apache.org/jira/browse/HADOOP-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12485321
]
Sameer Paranjpye commented on HADOOP-1134:
--
As Konstantin suggests, using a client program to perform valid
Nigel Daley wrote:
As you realized below, the test was using raw methods before
HADOOP-928. I don't understand your reference to "undocumented" and
"unsupported", but I'm not sure it matters.
The 'raw' methods were only intended to be used by FileSystem
implementations.
One of the design g
[
https://issues.apache.org/jira/browse/HADOOP-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12485317
]
Doug Cutting commented on HADOOP-1134:
--
> When the HDFS client encounters a checksum error, it doesn't know whe
On Mar 29, 2007, at 12:07 PM, Doug Cutting wrote:
Nigel Daley wrote:
So shouldn't fixing this test to conform to the new model in
HADOOP-1134 be the concern of the patch for HADOOP-1134?
Yes, but, as it stands, this patch would silently stop working
correctly once HADOOP-1134 is committed
[
https://issues.apache.org/jira/browse/HADOOP-958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12485309
]
Tom White commented on HADOOP-958:
--
On 29/03/07, Doug Cutting <[EMAIL PROTECTED]> wrote:
> I think for this issue,
[
https://issues.apache.org/jira/browse/HADOOP-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12485308
]
Sameer Paranjpye commented on HADOOP-1134:
--
> The same way we do today: we don't.
When the HDFS client en
[
https://issues.apache.org/jira/browse/HADOOP-1110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12485306
]
Doug Cutting commented on HADOOP-1110:
--
> This little patch has been available for a week. Is there something I
Nigel Daley wrote:
So shouldn't fixing this test to conform to the new model in HADOOP-1134
be the concern of the patch for HADOOP-1134?
Yes, but, as it stands, this patch would silently stop working correctly
once HADOOP-1134 is committed. It should instead be written in a more
robust way,
[
https://issues.apache.org/jira/browse/HADOOP-1159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12485300
]
Hadoop QA commented on HADOOP-1159:
---
+1, because
http://issues.apache.org/jira/secure/attachment/12354541/1159-me
[
https://issues.apache.org/jira/browse/HADOOP-1161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12485298
]
Doug Cutting commented on HADOOP-1161:
--
> I wonder why we were waiting to merge patches meant for 0.13.0 until
So shouldn't fixing this test to conform to the new model in
HADOOP-1134 be the concern of the patch for HADOOP-1134? As it
stands, I can't run NNBench at scale without using a raw file system,
which is what this patch is intended to allow. HADOOP-928 caused
this test to use a ChecksumFile
[
https://issues.apache.org/jira/browse/HADOOP-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12485292
]
Doug Cutting commented on HADOOP-1134:
--
> Even if we *can* get the old CRC data, how do we know that it is not
[
https://issues.apache.org/jira/browse/HADOOP-1178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
dhruba borthakur updated HADOOP-1178:
-
Attachment: (was: namenodestart.patch)
> NullPointer Exception in org.apache.hadoop.
[
https://issues.apache.org/jira/browse/HADOOP-1178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
dhruba borthakur updated HADOOP-1178:
-
Attachment: namenodestart2.patch
This removes an unnecessary newline introduced by the la
[
https://issues.apache.org/jira/browse/HADOOP-1093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
dhruba borthakur updated HADOOP-1093:
-
Attachment: nyr2.patch
This patch does the following:
1. The client does not send block
[
https://issues.apache.org/jira/browse/HADOOP-1159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Devaraj Das updated HADOOP-1159:
Status: Patch Available (was: Open)
> Reducers hang when map output file has a checksum error
> -
[
https://issues.apache.org/jira/browse/HADOOP-1159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Devaraj Das updated HADOOP-1159:
Attachment: 1159-merge.patch
This patch merges the two patches (1159.patch and h1159-2.patch).
>
[
https://issues.apache.org/jira/browse/HADOOP-1180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Doug Cutting updated HADOOP-1180:
-
Status: Open (was: Patch Available)
-1 This patch may be rendered obsolete by HADOOP-1134. And
Code review:
- You have a spurious newline in Server.doStop()
Other than that, it looks good.
On Mar 29, 2007, at 11:13 AM, dhruba borthakur (JIRA) wrote:
[ https://issues.apache.org/jira/browse/HADOOP-1178?
page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
dhruba b
[
https://issues.apache.org/jira/browse/HADOOP-1093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
dhruba borthakur updated HADOOP-1093:
-
Attachment: (was: notyetreplicated.patch)
> NNBench generates millions of NotReplica
Nigel Daley wrote:
Just a caveat: when there is conditional code like this
based on the configured logging level, then testing
should be done with different logging levels to
verify that there are no side effects or exceptions
(such as NPE) generated by these conditional code
blocks. FWIW, this
Thanks. There is a isDebugEnabled() also. We should convert many of our
debug statements into this. I will do as I submit patches.
Raghu.
Dhruba Borthakur wrote:
There are portions of code in the Namenode:
if (NameNode.stateChangeLog.isInfoEnabled()) {
...
}
-----Original Message-----
From:
Raghu Angadi wrote:
if ( debugEnabled ) {
NameNode.stateChangeLog.debug( ...
}
Yes, this is sometimes warranted. I've also seen it abused, with, e.g.,
guards placed around all log statements. I suggest the following
guidelines:
Guards should only be used:
1. in inner-loop code where pe
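The guard pattern this thread is discussing can be sketched as follows. This is a minimal standalone illustration using java.util.logging so it runs without Hadoop's dependencies; Hadoop itself uses commons-logging, where the equivalent check is isDebugEnabled()/isInfoEnabled(). The block name and replica count are made up.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class LogGuardDemo {
    private static final Logger LOG = Logger.getLogger("LogGuardDemo");

    public static void main(String[] args) {
        String blockName = "blk_1234"; // hypothetical block name
        int curReplicas = 1;           // hypothetical replica count

        // Unguarded: the argument string is concatenated (allocating
        // temporaries) even when the debug level is disabled -- the
        // cost being discussed in this thread.
        LOG.fine("block " + blockName + " has only " + curReplicas + " replicas");

        // Guarded: when the level is off, the check short-circuits and
        // the concatenation never happens. Worth it in inner loops;
        // wrapping every log statement this way is the abuse noted above.
        if (LOG.isLoggable(Level.FINE)) {
            LOG.fine("block " + blockName + " has only " + curReplicas + " replicas");
        }
    }
}
```

The guard only pays for itself when building the message is expensive relative to the level check, which is why the guideline above restricts it to hot paths rather than applying it everywhere.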
[
https://issues.apache.org/jira/browse/HADOOP-1178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
dhruba borthakur updated HADOOP-1178:
-
Attachment: namenodestart.patch
Close listener socket connection. Also, interrupt all RP
Just a caveat: when there is conditional code like this
based on the configured logging level, then testing
should be done with different logging levels to
verify that there are no side effects or exceptions
(such as NPE) generated by these conditional code
blocks. FWIW, this is not something I c
[
https://issues.apache.org/jira/browse/HADOOP-1178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
dhruba borthakur updated HADOOP-1178:
-
Attachment: (was: namenodestart.patch)
> NullPointer Exception in org.apache.hadoop.
There are portions of code in the Namenode:
if (NameNode.stateChangeLog.isInfoEnabled()) {
...
}
-----Original Message-----
From: Raghu Angadi [mailto:[EMAIL PROTECTED]]
Sent: Thursday, March 29, 2007 11:00 AM
To: hadoop-dev@lucene.apache.org
Subject: Cost of debug statements.
Is there a way to
Is there a way to test to if debug is enabled before invoking a
statement like :
NameNode.stateChangeLog.debug(
"BLOCK* NameSystem.UnderReplicationBlock.add:"
+ block.getBlockName()
+ " has only "+curReplicas
[
https://issues.apache.org/jira/browse/HADOOP-1159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Nigel Daley updated HADOOP-1159:
Status: Open (was: Patch Available)
patches need to be merged into 1 patch
> Reducers hang when
[
https://issues.apache.org/jira/browse/HADOOP-1172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12485269
]
Doug Cutting commented on HADOOP-1172:
--
+0 If the logging disk is full, then the node is not useful. Optimizin
[
https://issues.apache.org/jira/browse/HADOOP-1182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
dhruba borthakur updated HADOOP-1182:
-
Component/s: (was: mapred)
dfs
Summary: DFS Scalability issu
[
https://issues.apache.org/jira/browse/HADOOP-1182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12485267
]
dhruba borthakur commented on HADOOP-1182:
--
It would help us a lot if you can monitor the CPU usage on the
Nigel Daley wrote:
Should we try to do all code format changes in a series of contiguous
patches?
This is more than a format change, but I see what you mean.
This should minimize the amount of pain while diff'ing source
files across revisions. Also, should these format changes be the last
p
[
https://issues.apache.org/jira/browse/HADOOP-1178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Nigel Daley updated HADOOP-1178:
Fix Version/s: (was: 0.12.2)
0.13.0
Affects Version/s: 0.13.0
>
[
https://issues.apache.org/jira/browse/HADOOP-1179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12485262
]
Doug Cutting commented on HADOOP-1179:
--
Note that running out of file handles can cause OutOfMemoryExceptions,
[
https://issues.apache.org/jira/browse/HADOOP-1110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12485239
]
David Bowen commented on HADOOP-1110:
-
This little patch has been available for a week. Is there something I n
[
https://issues.apache.org/jira/browse/HADOOP-1123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Tom White updated HADOOP-1123:
--
Resolution: Fixed
Status: Resolved (was: Patch Available)
The new patch fixed the NPE I was s
[
https://issues.apache.org/jira/browse/HADOOP-1183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Devaraj Das updated HADOOP-1183:
Attachment: 1183.patch
Retries of map output fetches might overwrite the new events received from the
[
https://issues.apache.org/jira/browse/HADOOP-1179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12485201
]
Runping Qi commented on HADOOP-1179:
This issue is related to HADOOP-1158 but not a dup.
Two things need to be
[
https://issues.apache.org/jira/browse/HADOOP-1183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Devaraj Das updated HADOOP-1183:
Description: A couple of reducers were continuously trying to fetch map
outputs from a lost tasktr
[
https://issues.apache.org/jira/browse/HADOOP-1179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Devaraj Das updated HADOOP-1179:
Attachment: 1179.patch
I have attached a patch containing the part to do with closing index file a
[
https://issues.apache.org/jira/browse/HADOOP-1177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12485174
]
Hadoop QA commented on HADOOP-1177:
---
+1, because http://issues.apache.org/jira/secure/attachment/12354511/1177.pat
[
https://issues.apache.org/jira/browse/HADOOP-1177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Devaraj Das updated HADOOP-1177:
Attachment: 1177.patch
Good catch, Owen. Patch attached.
> Lack of logging of exceptions in MapO
[
https://issues.apache.org/jira/browse/HADOOP-1177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Devaraj Das updated HADOOP-1177:
Status: Patch Available (was: Open)
> Lack of logging of exceptions in MapOutputLocation.getFile
MapTask completion event lost
-
Key: HADOOP-1183
URL: https://issues.apache.org/jira/browse/HADOOP-1183
Project: Hadoop
Issue Type: Bug
Components: mapred
Reporter: Devaraj Das
Assig
[
https://issues.apache.org/jira/browse/HADOOP-1011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Tahir Hashmi updated HADOOP-1011:
-
Attachment: 1011.patch
Here's a patch that probably fixes the issue. I don't have enough
infras