[jira] [Updated] (HADOOP-11836) Update release note in index.md.vm

2015-05-21 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-11836:
---
Summary: Update release note in index.md.vm  (was: Update index.md.vm)

> Update release note in index.md.vm
> --
>
> Key: HADOOP-11836
> URL: https://issues.apache.org/jira/browse/HADOOP-11836
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.6.0, 2.7.0
>Reporter: Akira AJISAKA
>Assignee: J.Andreina
>
> http://hadoop.apache.org/docs/current/ still shows the new features of Hadoop 
> 2.5. The document should be updated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12018) smart-apply-patch.sh fails if the patch edits CR+LF files and is created by 'git diff --no-prefix'

2015-05-21 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555663#comment-14555663
 ] 

Akira AJISAKA commented on HADOOP-12018:


I'm thinking there are two possible ways to fix this issue.
# Fix smart-apply-patch.sh.
# Document that patches should not be created with "git diff --no-prefix".
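
For context, the detection difference comes down to the path prefixes in the 
diff headers. Below is a minimal sketch of a prefix-based check; a function of 
this name exists in trunk's smart-apply-patch.sh, but the grep pattern here is 
my illustration, not the script's exact code:
{code}
# Sketch: treat a patch as git-formatted only when its headers carry the
# a/ prefix that plain "git diff" emits.
is_git_diff_with_prefix() {
  grep -q '^diff --git a/' "$1"
}

# "git diff" emits:              diff --git a/foo.sh b/foo.sh
# "git diff --no-prefix" emits:  diff --git foo.sh foo.sh
# so a --no-prefix patch falls through to the plain "patch" code path.
{code}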

> smart-apply-patch.sh fails if the patch edits CR+LF files and is created by 
> 'git diff --no-prefix'
> --
>
> Key: HADOOP-12018
> URL: https://issues.apache.org/jira/browse/HADOOP-12018
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira AJISAKA
>Priority: Minor
>
> If the patch edits a file that includes CR+LF line endings and was created by 
> "git diff --no-prefix", smart-apply-patch.sh fails to apply it. 
> smart-apply-patch.sh checks whether the patch was created by "git diff" or 
> "patch"; however, if a patch was created by "git diff --no-prefix", 
> smart-apply-patch.sh detects it as created by the "patch" command. That's why 
> https://builds.apache.org/job/PreCommit-HADOOP-Build/6800/console fails.
> A workaround is to use plain "git diff" to create the patch when a file that 
> includes CR+LF is edited.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12014) hadoop-config.cmd displays a wrong error message

2015-05-21 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555642#comment-14555642
 ] 

Akira AJISAKA commented on HADOOP-12014:


Filed HADOOP-12018 to fix the build failure.

> hadoop-config.cmd displays a wrong error message
> 
>
> Key: HADOOP-12014
> URL: https://issues.apache.org/jira/browse/HADOOP-12014
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.2.0
>Reporter: Kengo Seki
>Assignee: Kengo Seki
>Priority: Minor
>  Labels: newbie
> Fix For: 2.8.0
>
> Attachments: HADOOP-12014.001.patch
>
>
> If an incorrect value is set for %JAVA_HOME%, hadoop-config.cmd displays an 
> error message as follows.
> {code}
>    echo Error: JAVA_HOME is incorrectly set.
>    echo        Please update %HADOOP_HOME%\conf\hadoop-env.cmd
> {code}
> But the default configuration directory has moved to etc/hadoop, so it should 
> be %HADOOP_CONF_DIR%\hadoop-env.cmd.
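
For reference, a minimal sketch of what the corrected lines could look like, 
assuming the fix simply swaps in the variable named by the description (the 
actual change is whatever HADOOP-12014.001.patch does):
{code}
@rem Sketch only: point users at the current default config location,
@rem which lives under %HADOOP_CONF_DIR% rather than %HADOOP_HOME%\conf.
echo Error: JAVA_HOME is incorrectly set.
echo        Please update %HADOOP_CONF_DIR%\hadoop-env.cmd
{code}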



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12018) smart-apply-patch.sh fails if the patch edits CR+LF files and is created by 'git diff --no-prefix'

2015-05-21 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-12018:
---
Description: 
If the patch edits a file that includes CR+LF line endings and was created by 
"git diff --no-prefix", smart-apply-patch.sh fails to apply it. 
smart-apply-patch.sh checks whether the patch was created by "git diff" or 
"patch"; however, if a patch was created by "git diff --no-prefix", 
smart-apply-patch.sh detects it as created by the "patch" command. That's why 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6800/console fails.
A workaround is to use plain "git diff" to create the patch when a file that 
includes CR+LF is edited.

  was:
If the patch edits a file that includes CR+LF line endings and was created by 
"git diff --no-prefix", smart-apply-patch.sh fails to apply it. This is why 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6800/console fails.
smart-apply-patch.sh checks whether the patch was created by "git diff" or 
"patch"; however, if a patch was created by "git diff --no-prefix", 
smart-apply-patch.sh detects it as created by the "patch" command.
A workaround is to use plain "git diff" to create the patch when a file that 
includes CR+LF is edited.


> smart-apply-patch.sh fails if the patch edits CR+LF files and is created by 
> 'git diff --no-prefix'
> --
>
> Key: HADOOP-12018
> URL: https://issues.apache.org/jira/browse/HADOOP-12018
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira AJISAKA
>Priority: Minor
>
> If the patch edits a file that includes CR+LF line endings and was created by 
> "git diff --no-prefix", smart-apply-patch.sh fails to apply it. 
> smart-apply-patch.sh checks whether the patch was created by "git diff" or 
> "patch"; however, if a patch was created by "git diff --no-prefix", 
> smart-apply-patch.sh detects it as created by the "patch" command. That's why 
> https://builds.apache.org/job/PreCommit-HADOOP-Build/6800/console fails.
> A workaround is to use plain "git diff" to create the patch when a file that 
> includes CR+LF is edited.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12018) smart-apply-patch.sh fails if the patch edits CR+LF files and is created by 'git diff --no-prefix'

2015-05-21 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-12018:
---
Description: 
If the patch edits a file that includes CR+LF line endings and was created by 
"git diff --no-prefix", smart-apply-patch.sh fails to apply it. This is why 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6800/console fails.
smart-apply-patch.sh checks whether the patch was created by "git diff" or 
"patch"; however, if a patch was created by "git diff --no-prefix", 
smart-apply-patch.sh detects it as created by the "patch" command.
A workaround is to use plain "git diff" to create the patch when a file that 
includes CR+LF is edited.

> smart-apply-patch.sh fails if the patch edits CR+LF files and is created by 
> 'git diff --no-prefix'
> --
>
> Key: HADOOP-12018
> URL: https://issues.apache.org/jira/browse/HADOOP-12018
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira AJISAKA
>Priority: Minor
>
> If the patch edits a file that includes CR+LF line endings and was created by 
> "git diff --no-prefix", smart-apply-patch.sh fails to apply it. This is why 
> https://builds.apache.org/job/PreCommit-HADOOP-Build/6800/console fails.
> smart-apply-patch.sh checks whether the patch was created by "git diff" or 
> "patch"; however, if a patch was created by "git diff --no-prefix", 
> smart-apply-patch.sh detects it as created by the "patch" command.
> A workaround is to use plain "git diff" to create the patch when a file that 
> includes CR+LF is edited.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12018) smart-apply-patch.sh fails if the patch edits CR+LF files and is created by 'git diff --no-prefix'

2015-05-21 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HADOOP-12018:
--

 Summary: smart-apply-patch.sh fails if the patch edits CR+LF files 
and is created by 'git diff --no-prefix'
 Key: HADOOP-12018
 URL: https://issues.apache.org/jira/browse/HADOOP-12018
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Akira AJISAKA
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12014) hadoop-config.cmd displays a wrong error message

2015-05-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1499#comment-1499
 ] 

Hudson commented on HADOOP-12014:
-

FAILURE: Integrated in Hadoop-trunk-Commit #7889 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7889/])
HADOOP-12014. hadoop-config.cmd displays a wrong error message. Contributed by 
Kengo Seki. (aajisaka: rev c7fea088f7b6c44e4e04bde19dc839975d8ac8ba)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.cmd


> hadoop-config.cmd displays a wrong error message
> 
>
> Key: HADOOP-12014
> URL: https://issues.apache.org/jira/browse/HADOOP-12014
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.2.0
>Reporter: Kengo Seki
>Assignee: Kengo Seki
>Priority: Minor
>  Labels: newbie
> Fix For: 2.8.0
>
> Attachments: HADOOP-12014.001.patch
>
>
> If an incorrect value is set for %JAVA_HOME%, hadoop-config.cmd displays an 
> error message as follows.
> {code}
>    echo Error: JAVA_HOME is incorrectly set.
>    echo        Please update %HADOOP_HOME%\conf\hadoop-env.cmd
> {code}
> But the default configuration directory has moved to etc/hadoop, so it should 
> be %HADOOP_CONF_DIR%\hadoop-env.cmd.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12014) hadoop-config.cmd displays a wrong error message

2015-05-21 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-12014:
---
Affects Version/s: 2.2.0

> hadoop-config.cmd displays a wrong error message
> 
>
> Key: HADOOP-12014
> URL: https://issues.apache.org/jira/browse/HADOOP-12014
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.2.0
>Reporter: Kengo Seki
>Assignee: Kengo Seki
>Priority: Minor
>  Labels: newbie
> Fix For: 2.8.0
>
> Attachments: HADOOP-12014.001.patch
>
>
> If an incorrect value is set for %JAVA_HOME%, hadoop-config.cmd displays an 
> error message as follows.
> {code}
>    echo Error: JAVA_HOME is incorrectly set.
>    echo        Please update %HADOOP_HOME%\conf\hadoop-env.cmd
> {code}
> But the default configuration directory has moved to etc/hadoop, so it should 
> be %HADOOP_CONF_DIR%\hadoop-env.cmd.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12014) hadoop-config.cmd displays a wrong error message

2015-05-21 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-12014:
---
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-2. Thanks [~sekikn] for the contribution.

> hadoop-config.cmd displays a wrong error message
> 
>
> Key: HADOOP-12014
> URL: https://issues.apache.org/jira/browse/HADOOP-12014
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.2.0
>Reporter: Kengo Seki
>Assignee: Kengo Seki
>Priority: Minor
>  Labels: newbie
> Fix For: 2.8.0
>
> Attachments: HADOOP-12014.001.patch
>
>
> If an incorrect value is set for %JAVA_HOME%, hadoop-config.cmd displays an 
> error message as follows.
> {code}
>    echo Error: JAVA_HOME is incorrectly set.
>    echo        Please update %HADOOP_HOME%\conf\hadoop-env.cmd
> {code}
> But the default configuration directory has moved to etc/hadoop, so it should 
> be %HADOOP_CONF_DIR%\hadoop-env.cmd.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12014) hadoop-config.cmd displays a wrong error message

2015-05-21 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1477#comment-1477
 ] 

Akira AJISAKA commented on HADOOP-12014:


I remembered why the Jenkins build fails: 
https://issues.apache.org/jira/browse/HADOOP-7984?focusedCommentId=14146354&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14146354

I'll commit this shortly. Thanks Kengo.

> hadoop-config.cmd displays a wrong error message
> 
>
> Key: HADOOP-12014
> URL: https://issues.apache.org/jira/browse/HADOOP-12014
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Kengo Seki
>Assignee: Kengo Seki
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-12014.001.patch
>
>
> If an incorrect value is set for %JAVA_HOME%, hadoop-config.cmd displays an 
> error message as follows.
> {code}
>    echo Error: JAVA_HOME is incorrectly set.
>    echo        Please update %HADOOP_HOME%\conf\hadoop-env.cmd
> {code}
> But the default configuration directory has moved to etc/hadoop, so it should 
> be %HADOOP_CONF_DIR%\hadoop-env.cmd.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12014) hadoop-config.cmd displays a wrong error message

2015-05-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1475#comment-1475
 ] 

Hadoop QA commented on HADOOP-12014:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12734576/HADOOP-12014.001.patch 
|
| Optional Tests |  |
| git revision | trunk / cf2b569 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6800/console |


This message was automatically generated.

> hadoop-config.cmd displays a wrong error message
> 
>
> Key: HADOOP-12014
> URL: https://issues.apache.org/jira/browse/HADOOP-12014
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Kengo Seki
>Assignee: Kengo Seki
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-12014.001.patch
>
>
> If an incorrect value is set for %JAVA_HOME%, hadoop-config.cmd displays an 
> error message as follows.
> {code}
>    echo Error: JAVA_HOME is incorrectly set.
>    echo        Please update %HADOOP_HOME%\conf\hadoop-env.cmd
> {code}
> But the default configuration directory has moved to etc/hadoop, so it should 
> be %HADOOP_CONF_DIR%\hadoop-env.cmd.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12014) hadoop-config.cmd displays a wrong error message

2015-05-21 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1476#comment-1476
 ] 

Akira AJISAKA commented on HADOOP-12014:


Started Jenkins build: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6800/

> hadoop-config.cmd displays a wrong error message
> 
>
> Key: HADOOP-12014
> URL: https://issues.apache.org/jira/browse/HADOOP-12014
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Kengo Seki
>Assignee: Kengo Seki
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-12014.001.patch
>
>
> If an incorrect value is set for %JAVA_HOME%, hadoop-config.cmd displays an 
> error message as follows.
> {code}
>    echo Error: JAVA_HOME is incorrectly set.
>    echo        Please update %HADOOP_HOME%\conf\hadoop-env.cmd
> {code}
> But the default configuration directory has moved to etc/hadoop, so it should 
> be %HADOOP_CONF_DIR%\hadoop-env.cmd.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12014) hadoop-config.cmd displays a wrong error message

2015-05-21 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-12014:
---
Priority: Minor  (was: Trivial)
Target Version/s: 2.8.0
Hadoop Flags: Reviewed

Looks good to me, +1.

> hadoop-config.cmd displays a wrong error message
> 
>
> Key: HADOOP-12014
> URL: https://issues.apache.org/jira/browse/HADOOP-12014
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Kengo Seki
>Assignee: Kengo Seki
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-12014.001.patch
>
>
> If an incorrect value is set for %JAVA_HOME%, hadoop-config.cmd displays an 
> error message as follows.
> {code}
>    echo Error: JAVA_HOME is incorrectly set.
>    echo        Please update %HADOOP_HOME%\conf\hadoop-env.cmd
> {code}
> But the default configuration directory has moved to etc/hadoop, so it should 
> be %HADOOP_CONF_DIR%\hadoop-env.cmd.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12013) Generate fixed data to perform erasure coder test

2015-05-21 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1463#comment-1463
 ] 

Zhe Zhang commented on HADOOP-12013:


Thanks for the clarification. I see, so the purpose is for easier debugging.

It would still be faster to use bit-wise operations, but I'm OK with the 
current way of wrapping around 256 since it's only for testing.

+1 on the patch.

> Generate fixed data to perform erasure coder test
> -
>
> Key: HADOOP-12013
> URL: https://issues.apache.org/jira/browse/HADOOP-12013
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: HDFS-7285
>
> Attachments: HADOOP-12013-HDFS-7285-v1.patch
>
>
> While working on native erasure coders, it was found useful to allow 
> generating and using fixed data to test raw erasure coders, to ease the 
> debugging of some coding issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11847) Enhance raw coder allowing to read least required inputs in decoding

2015-05-21 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1414#comment-1414
 ] 

Yi Liu commented on HADOOP-11847:
-

{quote}
Sorry, I missed explaining why the code is like that. The thinking was that 
it's rarely the first unit that's erased, so in most cases just checking 
inputs\[0\] will return the wanted result, avoiding entering the loop.
{quote}
If the first element is not null, a single loop would also return immediately. 
When would it ever actually loop?

{quote}
How about simply having maxInvalidUnits = numParityUnits? The benefit is that 
we don't have to re-allocate the shared buffers for different erasures.
{quote}
We don't need to allocate {{numParityUnits}} buffers; the outputs should have 
at least one, right? Maybe more than one. I don't think we have to re-allocate 
the shared buffers for different erasures. If the buffers are not enough, we 
allocate new ones and add them to the shared pool; that's typical behavior.

{quote}
We don't have or use chunkSize now. Please note the check is:
{quote}
Right, we don't need to use chunkSize now. I think 
{{bytesArrayBuffers\[0\].length < dataLen}} is OK.
{{ensureBytesArrayBuffer}} and {{ensureDirectBuffers}} need to be renamed and 
rewritten per the above comments.

{quote}
Would you check again? Thanks.
{quote}
{code}
for (int i = 0; i < adjustedByteArrayOutputsParameter.length; i++) {
  adjustedByteArrayOutputsParameter[i] =
  resetBuffer(bytesArrayBuffers[bufferIdx++], 0, dataLen);
  adjustedOutputOffsets[i] = 0; // Always 0 for such temp output
}

int outputIdx = 0;
for (int i = 0; i < erasedIndexes.length; i++, outputIdx++) {
  for (int j = 0; j < erasedOrNotToReadIndexes.length; j++) {
// If this index is one requested by the caller via erasedIndexes, then
// we use the passed output buffer to avoid copying data thereafter.
if (erasedIndexes[i] == erasedOrNotToReadIndexes[j]) {
  adjustedByteArrayOutputsParameter[j] =
  resetBuffer(outputs[outputIdx], 0, dataLen);
  adjustedOutputOffsets[j] = outputOffsets[outputIdx];
}
  }
}
{code}
You call {{resetBuffer}} parityNum + erasedIndexes.length times in total; is 
that intended?


> Enhance raw coder allowing to read least required inputs in decoding
> 
>
> Key: HADOOP-11847
> URL: https://issues.apache.org/jira/browse/HADOOP-11847
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11847-HDFS-7285-v3.patch, 
> HADOOP-11847-HDFS-7285-v4.patch, HADOOP-11847-HDFS-7285-v5.patch, 
> HADOOP-11847-HDFS-7285-v6.patch, HADOOP-11847-v1.patch, HADOOP-11847-v2.patch
>
>
> This is to enhance the raw erasure coder to allow reading only the least 
> required inputs while decoding. It will also refine and document the relevant 
> APIs for better understanding and usage. Using the least required inputs may 
> add computation overhead but will possibly outperform overall, since less 
> network traffic and disk IO are involved.
> This is something planned to do, but I just got reminded by [~zhz]'s question 
> raised in HDFS-7678, also copied here:
> bq.Kai Zheng I have a question about decoding: in a (6+3) schema, if block #2 
> is missing, and I want to repair it with blocks 0, 1, 3, 4, 5, 8, how should 
> I construct the inputs to RawErasureDecoder#decode?
> With this work, hopefully the answer to the above question would be obvious.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11847) Enhance raw coder allowing to read least required inputs in decoding

2015-05-21 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555483#comment-14555483
 ] 

Kai Zheng commented on HADOOP-11847:


Thanks for more review and comment!
bq. For findFirstValidInput, there is still one comment not addressed:
Sorry, I missed explaining why the code is like that. The thinking was that 
it's rarely the first unit that's erased, so in most cases just checking 
{{inputs\[0\]}} will return the wanted result, avoiding entering the loop.
bq. Do we need maxInvalidUnits * 2 for bytesArrayBuffers and directBuffers, 
since we don't need additional buffers for the inputs? The correct size should 
be ...
Good catch! How about simply having {{maxInvalidUnits = numParityUnits}}? The 
benefit is that we don't have to re-allocate the shared buffers for different 
erasures.
bq. The shared buffer size should always be the chunk size; otherwise the 
buffers can't be shared, since the dataLen may be different.
We don't have or use chunkSize now. Please note the check is:
{code}
+if (bytesArrayBuffers == null || bytesArrayBuffers[0].length < dataLen) {
+  /**
+   * Create this set of buffers on demand, which is only needed at the 
first
+   * time running into this, using bytes array.
+   */
{code}
bq. We should check that erasedOrNotToReadIndexes contains erasedIndexes.
Good point. The check would avoid bad usage with mismatched inputs and 
erasedIndexes.
bq. We just need one loop...
Hmm, I'm not sure. We should place the output buffers from the caller in the 
correct positions. For example:
Assuming 6+3, recovering d0, not-to-read=\[p1, d3\], outputs = \[d0\]. Then 
adjustedByteArrayOutputsParameter should be: 
\[p1,d0,s1(d3)\], where s* means shared buffer. 

Would you check again? Thanks.

> Enhance raw coder allowing to read least required inputs in decoding
> 
>
> Key: HADOOP-11847
> URL: https://issues.apache.org/jira/browse/HADOOP-11847
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11847-HDFS-7285-v3.patch, 
> HADOOP-11847-HDFS-7285-v4.patch, HADOOP-11847-HDFS-7285-v5.patch, 
> HADOOP-11847-HDFS-7285-v6.patch, HADOOP-11847-v1.patch, HADOOP-11847-v2.patch
>
>
> This is to enhance the raw erasure coder to allow reading only the least 
> required inputs while decoding. It will also refine and document the relevant 
> APIs for better understanding and usage. Using the least required inputs may 
> add computation overhead but will possibly outperform overall, since less 
> network traffic and disk IO are involved.
> This is something planned to do, but I just got reminded by [~zhz]'s question 
> raised in HDFS-7678, also copied here:
> bq.Kai Zheng I have a question about decoding: in a (6+3) schema, if block #2 
> is missing, and I want to repair it with blocks 0, 1, 3, 4, 5, 8, how should 
> I construct the inputs to RawErasureDecoder#decode?
> With this work, hopefully the answer to the above question would be obvious.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9922) hadoop windows native build will fail in 32 bit machine

2015-05-21 Thread Rohan Kulkarni (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555453#comment-14555453
 ] 

Rohan Kulkarni commented on HADOOP-9922:


Yes HADOOP-9922-002.patch  works for 2.5.2 , thanks so much 'Kiran' and 'Vinay' 
, atleast this will get me started now .Really thanks . @Admin of this post 
please feel free to delete my attachment or comment if it digresses from the 
basic issue . thanks , 

> hadoop windows native build will fail in 32 bit machine
> ---
>
> Key: HADOOP-9922
> URL: https://issues.apache.org/jira/browse/HADOOP-9922
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, native
>Affects Versions: 3.0.0, 2.1.1-beta
>Reporter: Vinayakumar B
>Assignee: Kiran Kumar M R
> Fix For: 2.7.0
>
> Attachments: HADOOP-9922-002.patch, HADOOP-9922-003.patch, 
> HADOOP-9922-004.patch, HADOOP-9922-005.patch, HADOOP-9922.patch, patch 
> error.txt
>
>
> Building Hadoop in windows 32 bit machine fails as native project is not 
> having Win32 configuration



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11847) Enhance raw coder allowing to read least required inputs in decoding

2015-05-21 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555446#comment-14555446
 ] 

Yi Liu commented on HADOOP-11847:
-

*In AbstractRawErasureDecoder.java*
For findFirstValidInput, there is still one comment not addressed:
{code}
+if (inputs[0] != null) {
+  return inputs[0];
+}
+
+for (int i = 1; i < inputs.length; i++) {
+  if (inputs[i] != null) {
+return inputs[i];
+  }
+}
{code}
It can be:
{code}
for (int i = 0; i < inputs.length; i++) {
  if (inputs[i] != null) {
return inputs[i];
  }
}
{code}

*In RSRawDecoder.java*
{code}
private void ensureBytesArrayBuffers(int dataLen) {
if (bytesArrayBuffers == null || bytesArrayBuffers[0].length < dataLen) {
  /**
   * Create this set of buffers on demand, which is only needed at the first
   * time running into this, using bytes array.
   */
  // Erased or not to read
  int maxInvalidUnits = getNumParityUnits();
  adjustedByteArrayOutputsParameter = new byte[maxInvalidUnits][];
  adjustedOutputOffsets = new int[maxInvalidUnits];

  // These are temp buffers for both inputs and outputs
  bytesArrayBuffers = new byte[maxInvalidUnits * 2][];
  for (int i = 0; i < bytesArrayBuffers.length; ++i) {
bytesArrayBuffers[i] = new byte[dataLen];
  }
}
  }

  private void ensureDirectBuffers(int dataLen) {
if (directBuffers == null || directBuffers[0].capacity() < dataLen) {
  /**
   * Create this set of buffers on demand, which is only needed at the first
   * time running into this, using DirectBuffer.
   */
  // Erased or not to read
  int maxInvalidUnits = getNumParityUnits();
  adjustedDirectBufferOutputsParameter = new ByteBuffer[maxInvalidUnits];

  // These are temp buffers for both inputs and outputs
  directBuffers = new ByteBuffer[maxInvalidUnits * 2];
  for (int i = 0; i < directBuffers.length; i++) {
directBuffers[i] = ByteBuffer.allocateDirect(dataLen);
  }
}
  }
{code}
1. Do we need {{maxInvalidUnits * 2}} for bytesArrayBuffers and directBuffers, 
since we don't need additional buffers for the inputs? The correct size should 
be {{parityUnitNum - outputs.length}}. If next time there are not enough 
buffers, then you allocate new ones.
2. The shared buffer size should always be the chunk size; otherwise the 
buffers can't be shared, since the dataLen may be different.

In {{doDecode}}
{code}
for (int i = 0; i < adjustedByteArrayOutputsParameter.length; i++) {
  adjustedByteArrayOutputsParameter[i] =
  resetBuffer(bytesArrayBuffers[bufferIdx++], 0, dataLen);
  adjustedOutputOffsets[i] = 0; // Always 0 for such temp output
}

int outputIdx = 0;
for (int i = 0; i < erasedIndexes.length; i++, outputIdx++) {
  for (int j = 0; j < erasedOrNotToReadIndexes.length; j++) {
// If this index is one requested by the caller via erasedIndexes, then
// we use the passed output buffer to avoid copying data thereafter.
if (erasedIndexes[i] == erasedOrNotToReadIndexes[j]) {
  adjustedByteArrayOutputsParameter[j] =
  resetBuffer(outputs[outputIdx], 0, dataLen);
  adjustedOutputOffsets[j] = outputOffsets[outputIdx];
}
  }
}
{code}
1. We should check that erasedOrNotToReadIndexes contains erasedIndexes.
2. We just need one loop: go through {{adjustedByteArrayOutputsParameter}} and 
assign each buffer from outputs if one exists, otherwise from 
{{bytesArrayBuffers}}.


> Enhance raw coder allowing to read least required inputs in decoding
> 
>
> Key: HADOOP-11847
> URL: https://issues.apache.org/jira/browse/HADOOP-11847
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11847-HDFS-7285-v3.patch, 
> HADOOP-11847-HDFS-7285-v4.patch, HADOOP-11847-HDFS-7285-v5.patch, 
> HADOOP-11847-HDFS-7285-v6.patch, HADOOP-11847-v1.patch, HADOOP-11847-v2.patch
>
>
> This is to enhance the raw erasure coder to allow reading only the least 
> required inputs while decoding. It will also refine and document the relevant 
> APIs for better understanding and usage. Using the least required inputs may 
> add computation overhead but will possibly outperform overall, since less 
> network traffic and disk IO are involved.
> This is something planned to do, but I just got reminded by [~zhz]'s question 
> raised in HDFS-7678, also copied here:
> bq.Kai Zheng I have a question about decoding: in a (6+3) schema, if block #2 
> is missing, and I want to repair it with blocks 0, 1, 3, 4, 5, 8, how should 
> I construct the inputs to RawErasureDecoder#decode?
> With this work, hopefully the answer to the above question would be obvious.

[jira] [Commented] (HADOOP-12013) Generate fixed data to perform erasure coder test

2015-05-21 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555404#comment-14555404
 ] 

Kai Zheng commented on HADOOP-12013:


Hi [~zhz],

Thanks for your comments and questions. 
bq. My main question is how FIXED_DATA_GENERATOR leads to the desired 
repeatable behavior. It is a static variable of the base testing class, so the 
generated byte is not really very predictable.
The static generator always starts at 0 and generates bytes by simply 
incrementing; when it hits 256 it wraps back to 0. Thus if we run it with 
different settings or coders, the inputs should be the same, as below. The 
test data is not only predictable and repeatable, but also human-readable for 
coding analysis, because the byte pattern is very simple.
{code}
Testing data chunks
0x00 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F 
0x02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F 10 11 
0x04 05 06 07 08 09 0A 0B 0C 0D 0E 0F 10 11 12 13 
0x06 07 08 09 0A 0B 0C 0D 0E 0F 10 11 12 13 14 15 
0x08 09 0A 0B 0C 0D 0E 0F 10 11 12 13 14 15 16 17 
0x0A 0B 0C 0D 0E 0F 10 11 12 13 14 15 16 17 18 19 
0x0C 0D 0E 0F 10 11 12 13 14 15 16 17 18 19 1A 1B 
0x0E 0F 10 11 12 13 14 15 16 17 18 19 1A 1B 1C 1D 
0x10 11 12 13 14 15 16 17 18 19 1A 1B 1C 1D 1E 1F 
0x12 13 14 15 16 17 18 19 1A 1B 1C 1D 1E 1F 20 21
{code}
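In other words, a minimal sketch of such a wrapping generator (the names here 
are illustrative, not the patch's actual API):
{code}
// Sketch of a fixed, repeatable test-data generator: bytes simply count
// up from 0 and wrap around 256, so every run yields the same stream.
private static int next = 0;

static byte nextFixedByte() {
  byte b = (byte) next;
  next = (next + 1) % 256; // wrap back to 0 after 255
  return b;
}
{code}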
bq. Do you have an example of how to use it in a real test? Like setting an 
expected value and assert it (I assume the assertion should fail without the 
fixed value but will pass with it).
Well, below is just debugging output using the fixed data generator. I 
understand your idea about the real test, asserting against an expected 
*correct* coding output, though that's not what this issue means to do. Please 
note that different encoder implementations (even in the same code scheme, 
like RS) may produce different encoding outputs, because they may use 
different hard-coded parameters or concrete algorithms. Each is still correct, 
because the corresponding decoder can decode and recover the erased data. More 
about this: as I mentioned in HADOOP-12010, we may desire that they produce 
the same coding result for the same input data, but for now such behavior is 
still a work in progress. When that's done, we may add a new test, using the 
decoder of one implementation (like {{NativeRSRawDecoder}}) to decode data 
encoded by another implementation (like {{RSRawEncoder}}), to achieve a test 
effect similar to what you expect here. 
{code}
Erasure coder test settings:
 numDataUnits=10 numParityUnits=4 chunkSize=513
 erasedDataIndexes=[0, 1] erasedParityIndexes=[0, 1] usingDirectBuffer=true 
usingFixedData=true

Testing data chunks
0x00 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F 
0x02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F 10 11 
0x04 05 06 07 08 09 0A 0B 0C 0D 0E 0F 10 11 12 13 
0x06 07 08 09 0A 0B 0C 0D 0E 0F 10 11 12 13 14 15 
0x08 09 0A 0B 0C 0D 0E 0F 10 11 12 13 14 15 16 17 
0x0A 0B 0C 0D 0E 0F 10 11 12 13 14 15 16 17 18 19 
0x0C 0D 0E 0F 10 11 12 13 14 15 16 17 18 19 1A 1B 
0x0E 0F 10 11 12 13 14 15 16 17 18 19 1A 1B 1C 1D 
0x10 11 12 13 14 15 16 17 18 19 1A 1B 1C 1D 1E 1F 
0x12 13 14 15 16 17 18 19 1A 1B 1C 1D 1E 1F 20 21 


Encoded parity chunks
0x08 5B 6A 39 8D DE 2E 7D 03 50 36 65 24 77 77 24 
0xAE FD 49 1A 2B 78 06 55 A5 F6 B7 E4 82 D1 5F 0C 
0x0A C3 C2 0B 29 E0 35 FC 0F C6 27 EE 24 ED 94 5D 
0x85 4C 6E A7 A6 6F 9C 55 80 49 83 4A AB 62 68 A1 


Decoding input chunks
0x00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
0x00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
0x0A C3 C2 0B 29 E0 35 FC 0F C6 27 EE 24 ED 94 5D 
0x85 4C 6E A7 A6 6F 9C 55 80 49 83 4A AB 62 68 A1 
0x00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
0x00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
0x04 05 06 07 08 09 0A 0B 0C 0D 0E 0F 10 11 12 13 
0x06 07 08 09 0A 0B 0C 0D 0E 0F 10 11 12 13 14 15 
0x08 09 0A 0B 0C 0D 0E 0F 10 11 12 13 14 15 16 17 
0x0A 0B 0C 0D 0E 0F 10 11 12 13 14 15 16 17 18 19 
0x0C 0D 0E 0F 10 11 12 13 14 15 16 17 18 19 1A 1B 
0x0E 0F 10 11 12 13 14 15 16 17 18 19 1A 1B 1C 1D 
0x10 11 12 13 14 15 16 17 18 19 1A 1B 1C 1D 1E 1F 
0x12 13 14 15 16 17 18 19 1A 1B 1C 1D 1E 1F 20 21 


Decoded/recovered chunks
0x08 5B 6A 39 8D DE 2E 7D 03 50 36 65 24 77 77 24 
0xAE FD 49 1A 2B 78 06 55 A5 F6 B7 E4 82 D1 5F 0C 
0x00 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F 
0x02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F 10 11 

Erasure coder test settings:
 numDataUnits=10 numParityUnits=4 chunkSize=496
 erasedDataIndexes=[0, 1] erasedParityIndexes=[0, 1] usingDirectBuffer=true 
usingFixedData=true


Testing data chunks
0x00 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F 
0x02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F 10 11 
0x04 05 06 07 08 09 0A 0B 0C 0D 0E 0F 10 11 12 13 
0x06 07 08 09 0A 0B 0C 0D 0E 0F 10 11 12 13 14 15 
0x08 09 0A 0B 0C 0D 0E 0F 10 11 12 13 14 15 16 17 
0x0A 0B 0C 0D 0E 0F 10 11 12 13 14 15 16 17 18 19 
0x0C 0D 0E 0F 10 11 12 13 14 15 16 17 18 19 1A 1B 
0x0E 0F 10 11 12 13 14 15 16 17 18 19 1A 1B 1C 1D 

[jira] [Commented] (HADOOP-11933) run test-patch.sh in a docker container under Jenkins

2015-05-21 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555399#comment-14555399
 ] 

Allen Wittenauer commented on HADOOP-11933:
---

bq. docker  5m 55s  docker mode 

:D

> run test-patch.sh in a docker container under Jenkins
> -
>
> Key: HADOOP-11933
> URL: https://issues.apache.org/jira/browse/HADOOP-11933
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11933.00.patch, HADOOP-11933.01.patch, 
> HADOOP-11933.02.patch, HADOOP-11933.03.patch, HADOOP-11933.04.patch, 
> HADOOP-11933.05.patch, HADOOP-11933.06.patch
>
>
> because of how hard it is to control the content of the Jenkins environment, 
> it would be beneficial to run it in a docker container so that we can have 
> tight control of the environment



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12014) hadoop-config.cmd displays a wrong error message

2015-05-21 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555387#comment-14555387
 ] 

Kengo Seki commented on HADOOP-12014:
-

I retried {{dev-support/test-patch.sh HADOOP-12014}} and now the patch was 
successfully applied.

> hadoop-config.cmd displays a wrong error message
> 
>
> Key: HADOOP-12014
> URL: https://issues.apache.org/jira/browse/HADOOP-12014
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Kengo Seki
>Assignee: Kengo Seki
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-12014.001.patch
>
>
> If an incorrect value is set for %JAVA_HOME%, hadoop-config.cmd displays an 
> error message as follows.
> {code}
>    echo Error: JAVA_HOME is incorrectly set.
>    echo        Please update %HADOOP_HOME%\conf\hadoop-env.cmd
> {code}
> But the default configuration directory has moved to etc/hadoop, so it should 
> be %HADOOP_CONF_DIR%\hadoop-env.cmd.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11933) run test-patch.sh in a docker container under Jenkins

2015-05-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555376#comment-14555376
 ] 

Hadoop QA commented on HADOOP-11933:


\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | reexec |   0m  0s | dev-support patch detected. |
| {color:blue}0{color} | docker |   5m 55s | docker mode |
| {color:blue}0{color} | pre-patch |   0m  0s | Pre-patch trunk compilation is 
healthy. |
| {color:blue}0{color} | @author |   0m  0s | Skipping @author checks as 
test-patch has been patched. |
| {color:green}+1{color} | release audit |   0m 18s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | shellcheck |   0m  8s | There were no new shellcheck 
(v0.3.7) issues. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| | |   6m 32s | |
\\
\\
|| Subsystem || Report/Notes ||
| Java | 1.7.0_80 |
| uname | Linux 53c5cb7b3495 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Docker | C=1.6.1/S=1.6.1/I:test-patch-base-hadoop-date2015-05-22-00 |
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12734706/HADOOP-11933.06.patch |
| git revision | trunk / 53fafcf |
| Optional Tests | shellcheck |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6799/console |


This message was automatically generated.

> run test-patch.sh in a docker container under Jenkins
> -
>
> Key: HADOOP-11933
> URL: https://issues.apache.org/jira/browse/HADOOP-11933
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11933.00.patch, HADOOP-11933.01.patch, 
> HADOOP-11933.02.patch, HADOOP-11933.03.patch, HADOOP-11933.04.patch, 
> HADOOP-11933.05.patch, HADOOP-11933.06.patch
>
>
> because of how hard it is to control the content of the Jenkins environment, 
> it would be beneficial to run it in a docker container so that we can have 
> tight control of the environment



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11933) run test-patch.sh in a docker container under Jenkins

2015-05-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555364#comment-14555364
 ] 

Hadoop QA commented on HADOOP-11933:


(!) A patch to test-patch or smart-apply-patch has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6799/console in case of 
problems.

> run test-patch.sh in a docker container under Jenkins
> -
>
> Key: HADOOP-11933
> URL: https://issues.apache.org/jira/browse/HADOOP-11933
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11933.00.patch, HADOOP-11933.01.patch, 
> HADOOP-11933.02.patch, HADOOP-11933.03.patch, HADOOP-11933.04.patch, 
> HADOOP-11933.05.patch, HADOOP-11933.06.patch
>
>
> because of how hard it is to control the content of the Jenkins environment, 
> it would be beneficial to run it in a docker container so that we can have 
> tight control of the environment



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11933) run test-patch.sh in a docker container under Jenkins

2015-05-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11933:
--
Attachment: HADOOP-11933.06.patch

-06:
* [~raviprak]'s issues, except custom dockerfile support

> run test-patch.sh in a docker container under Jenkins
> -
>
> Key: HADOOP-11933
> URL: https://issues.apache.org/jira/browse/HADOOP-11933
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11933.00.patch, HADOOP-11933.01.patch, 
> HADOOP-11933.02.patch, HADOOP-11933.03.patch, HADOOP-11933.04.patch, 
> HADOOP-11933.05.patch, HADOOP-11933.06.patch
>
>
> because of how hard it is to control the content of the Jenkins environment, 
> it would be beneficial to run it in a docker container so that we can have 
> tight control of the environment



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11820) aw jira testing, ignore

2015-05-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555354#comment-14555354
 ] 

Hadoop QA commented on HADOOP-11820:


\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | reexec |   0m  0s | dev-support patch detected. |
| {color:blue}0{color} | docker |   6m  6s | docker mode |
| {color:blue}0{color} | pre-patch |   0m  0s | Pre-patch trunk compilation is 
healthy. |
| {color:blue}0{color} | @author |   0m  0s | Skipping @author checks as 
test-patch has been patched. |
| {color:green}+1{color} | release audit |   0m 16s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | shellcheck |   0m  8s | There were no new shellcheck 
(v0.3.7) issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| | |   6m 38s | |
\\
\\
|| Subsystem || Report/Notes ||
| Java | 1.7.0_80 |
| uname | Linux b48baba7ade7 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Docker | C=1.6.1/S=1.6.1/I:test-patch-base-hadoop-date2015-05-22-00 |
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12734696/HADOOP-11933.06.patch |
| git revision | trunk / 53fafcf |
| Optional Tests | shellcheck |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6798/console |


This message was automatically generated.

> aw jira testing, ignore
> ---
>
> Key: HADOOP-11820
> URL: https://issues.apache.org/jira/browse/HADOOP-11820
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
> Attachments: HADOOP-11820.patch, HADOOP-11933.03.patch, 
> HADOOP-11933.04.patch, HADOOP-11933.05.patch, HADOOP-11933.06.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11820) aw jira testing, ignore

2015-05-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555343#comment-14555343
 ] 

Hadoop QA commented on HADOOP-11820:


(!) A patch to test-patch or smart-apply-patch has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6798/console in case of 
problems.

> aw jira testing, ignore
> ---
>
> Key: HADOOP-11820
> URL: https://issues.apache.org/jira/browse/HADOOP-11820
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
> Attachments: HADOOP-11820.patch, HADOOP-11933.03.patch, 
> HADOOP-11933.04.patch, HADOOP-11933.05.patch, HADOOP-11933.06.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11994) smart-apply-patch wrongly assumes that git is infallible

2015-05-21 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555336#comment-14555336
 ] 

Kengo Seki commented on HADOOP-11994:
-

I misunderstood. Thank you.

> smart-apply-patch wrongly assumes that git is infallible
> 
>
> Key: HADOOP-11994
> URL: https://issues.apache.org/jira/browse/HADOOP-11994
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Allen Wittenauer
>
> Even if git fails, smart-apply-patch should try the normal patch command.  
> I've seen a few patches now where git apply fails, but patch does not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11994) smart-apply-patch wrongly assumes that git is infallible

2015-05-21 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555329#comment-14555329
 ] 

Allen Wittenauer commented on HADOOP-11994:
---


It definitely tries git apply first in trunk:

{code}
# Special case for git-diff patches without --no-prefix
if is_git_diff_with_prefix "$PATCH_FILE"; then
  GIT_FLAGS="--binary -p1 -v"
  if [[ -z $DRY_RUN ]]; then
  GIT_FLAGS="$GIT_FLAGS --stat --apply "
  echo Going to apply git patch with: git apply "${GIT_FLAGS}"
  else
  GIT_FLAGS="$GIT_FLAGS --check "
  fi
  git apply ${GIT_FLAGS} "${PATCH_FILE}"
  exit $?
fi

# Come up with a list of changed files into $TMP
TMP="$TMPDIR/smart-apply.paths.$RANDOM"
TOCLEAN="$TOCLEAN $TMP"

if $PATCH -p0 -E --dry-run < $PATCH_FILE 2>&1 > $TMP; then
{code}

> smart-apply-patch wrongly assumes that git is infallible
> 
>
> Key: HADOOP-11994
> URL: https://issues.apache.org/jira/browse/HADOOP-11994
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Allen Wittenauer
>
> Even if git fails, smart-apply-patch should try the normal patch command.  
> I've seen a few patches now where git apply fails, but patch does not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11820) aw jira testing, ignore

2015-05-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Attachment: HADOOP-11933.06.patch

> aw jira testing, ignore
> ---
>
> Key: HADOOP-11820
> URL: https://issues.apache.org/jira/browse/HADOOP-11820
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
> Attachments: HADOOP-11820.patch, HADOOP-11933.03.patch, 
> HADOOP-11933.04.patch, HADOOP-11933.05.patch, HADOOP-11933.06.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11994) smart-apply-patch wrongly assumes that git is infallible

2015-05-21 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555322#comment-14555322
 ] 

Kengo Seki commented on HADOOP-11994:
-

[~aw], smart-apply-patch seems to use the normal patch command rather than git 
apply. And on HADOOP-12014, smart-apply-patch failed but git apply succeeded.
Did you mean that if the patch command fails, it should then try git apply?


> smart-apply-patch wrongly assumes that git is infallible
> 
>
> Key: HADOOP-11994
> URL: https://issues.apache.org/jira/browse/HADOOP-11994
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Allen Wittenauer
>
> Even if git fails, smart-apply-patch should try the normal patch command.  
> I've seen a few patches now where git apply fails, but patch does not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12017) Hadoop archives command should use configurable replication factor when closing

2015-05-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12017:
-
Affects Version/s: 2.7.0

> Hadoop archives command should use configurable replication factor when 
> closing
> ---
>
> Key: HADOOP-12017
> URL: https://issues.apache.org/jira/browse/HADOOP-12017
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>
> {{HadoopArchives#HArchivesReducer#close}} uses a hard-coded replication 
> factor. It should use {{repl}} instead, which is parsed from command line 
> parameters.
> {code}
>   // try increasing the replication 
>   fs.setReplication(index, (short) 5);
>   fs.setReplication(masterIndex, (short) 5);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11933) run test-patch.sh in a docker container under Jenkins

2015-05-21 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555222#comment-14555222
 ] 

Allen Wittenauer commented on HADOOP-11933:
---

bq. There is horribly little in the way of security while pulling down images. 
Can we please at least pass in the image / dockerfile as a command line option? 

I was originally going to do this but decided to do it as a follow-on JIRA at a 
later date.  The key problem is that we need all of the test-patch-docker bits 
in the same directory. We need to essentially copy the standard bits and then 
the provided file to the processing dir and then launch docker against that. 
That requires quite a bit of testing, especially with --reexec mode. It's also 
worth noting 
that using this on the Jenkins hosts prevents the ability to actually precommit 
test the test-patch-docker bits.   So, it's on my to-do list but a lower 
priority vs. everything else.  I really wanted to get this in without this 
functionality to unblock other, higher priority items.

bq. Also we are curling and wgetting http instead of https. 

Good catch.  I'll update the dev Dockerfile as well.

bq. Could you please also write documentation for several functions missing it?

Sure. Just be aware that they are private to the test-patch-docker script.

> run test-patch.sh in a docker container under Jenkins
> -
>
> Key: HADOOP-11933
> URL: https://issues.apache.org/jira/browse/HADOOP-11933
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11933.00.patch, HADOOP-11933.01.patch, 
> HADOOP-11933.02.patch, HADOOP-11933.03.patch, HADOOP-11933.04.patch, 
> HADOOP-11933.05.patch
>
>
> because of how hard it is to control the content of the Jenkins environment, 
> it would be beneficial to run it in a docker container so that we can have 
> tight control of the environment



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12017) Hadoop archives command should use configurable replication factor when closing

2015-05-21 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HADOOP-12017:
---
Description: 
{{HadoopArchives#HArchivesReducer#close}} uses a hard-coded replication 
factor. It should use {{repl}} instead, which is parsed from command line 
parameters.
{code}
  // try increasing the replication 
  fs.setReplication(index, (short) 5);
  fs.setReplication(masterIndex, (short) 5);
}
{code}

  was:
{{HadoopArchives#close}} uses a hard-coded replication factor. It should use 
{{repl}} instead, which is parsed from command line parameters.
{code}
  // try increasing the replication 
  fs.setReplication(index, (short) 5);
  fs.setReplication(masterIndex, (short) 5);
}
{code}


> Hadoop archives command should use configurable replication factor when 
> closing
> ---
>
> Key: HADOOP-12017
> URL: https://issues.apache.org/jira/browse/HADOOP-12017
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>
> {{HadoopArchives#HArchivesReducer#close}} uses a hard-coded replication 
> factor. It should use {{repl}} instead, which is parsed from command line 
> parameters.
> {code}
>   // try increasing the replication 
>   fs.setReplication(index, (short) 5);
>   fs.setReplication(masterIndex, (short) 5);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12017) Hadoop archives command should use configurable replication factor when closing

2015-05-21 Thread Zhe Zhang (JIRA)
Zhe Zhang created HADOOP-12017:
--

 Summary: Hadoop archives command should use configurable 
replication factor when closing
 Key: HADOOP-12017
 URL: https://issues.apache.org/jira/browse/HADOOP-12017
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Zhe Zhang
Assignee: Zhe Zhang


{{HadoopArchives#close}} uses a hard-coded replication factor. It should use 
{{repl}} instead, which is parsed from command line parameters.
{code}
  // try increasing the replication 
  fs.setReplication(index, (short) 5);
  fs.setReplication(masterIndex, (short) 5);
}
{code}
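
A minimal sketch of the intended change, assuming {{repl}} is the short 
replication value parsed from the command line (the eventual patch may 
differ):
{code}
// Sketch only: use the replication factor parsed from the command line
// instead of the hard-coded value of 5.
fs.setReplication(index, repl);
fs.setReplication(masterIndex, repl);
{code}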



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12016) Typo in FileSystem:: listStatusIterator

2015-05-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555144#comment-14555144
 ] 

Hudson commented on HADOOP-12016:
-

FAILURE: Integrated in Hadoop-trunk-Commit #7886 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7886/])
HADOOP-12016. Typo in FileSystem::listStatusIterator. Contributed by Arthur 
Vigil. (jghoman: rev 4fc942a84f492065bacfa30cf8b624dc6a5f062b)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Typo in FileSystem:: listStatusIterator
> ---
>
> Key: HADOOP-12016
> URL: https://issues.apache.org/jira/browse/HADOOP-12016
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Jakob Homan
>Assignee: Arthur Vigil
>Priority: Trivial
>  Labels: newbie
> Fix For: 3.0.0
>
> Attachments: HADOOP-12016.001.patch
>
>
> {code}  public RemoteIterator<FileStatus> listStatusIterator(final Path p)
>   throws FileNotFoundException, IOException {
> return new RemoteIterator<FileStatus>() {
> //...
>   throw new NoSuchElementException("No more entry in " + p);
> }
> return stats[i++];
>   }{code}
> Should be 'no more entries'
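
For clarity, the corrected line would read (sketch of the one-string change):
{code}
throw new NoSuchElementException("No more entries in " + p);
{code}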



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12016) Typo in FileSystem:: listStatusIterator

2015-05-21 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HADOOP-12016:
-
   Resolution: Fixed
Fix Version/s: 3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

+1.  I've committed this.  Resolving.  Thanks, Arthur!

> Typo in FileSystem:: listStatusIterator
> ---
>
> Key: HADOOP-12016
> URL: https://issues.apache.org/jira/browse/HADOOP-12016
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Jakob Homan
>Assignee: Arthur Vigil
>Priority: Trivial
>  Labels: newbie
> Fix For: 3.0.0
>
> Attachments: HADOOP-12016.001.patch
>
>
> {code}  public RemoteIterator<FileStatus> listStatusIterator(final Path p)
>   throws FileNotFoundException, IOException {
> return new RemoteIterator<FileStatus>() {
> //...
>   throw new NoSuchElementException("No more entry in " + p);
> }
> return stats[i++];
>   }{code}
> Should be 'no more entries'



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11933) run test-patch.sh in a docker container under Jenkins

2015-05-21 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555123#comment-14555123
 ] 

Ravi Prakash commented on HADOOP-11933:
---

Thanks for all the work Allen! I love the idea of reproducible builds.

One of my prime concerns with this patch is security. There is horribly little 
in the way of security while pulling down images. Can we please at least pass in 
the image / dockerfile as a command line option? Also, we are curling and 
wgetting over http instead of https. Could you please also write documentation 
for the several functions that are missing it?


> run test-patch.sh in a docker container under Jenkins
> -
>
> Key: HADOOP-11933
> URL: https://issues.apache.org/jira/browse/HADOOP-11933
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11933.00.patch, HADOOP-11933.01.patch, 
> HADOOP-11933.02.patch, HADOOP-11933.03.patch, HADOOP-11933.04.patch, 
> HADOOP-11933.05.patch
>
>
> because of how hard it is to control the content of the Jenkins environment, 
> it would be beneficial to run it in a docker container so that we can have 
> tight control of the environment



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-12016) Typo in FileSystem:: listStatusIterator

2015-05-21 Thread Arthur Vigil (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arthur Vigil reassigned HADOOP-12016:
-

Assignee: Arthur Vigil

> Typo in FileSystem:: listStatusIterator
> ---
>
> Key: HADOOP-12016
> URL: https://issues.apache.org/jira/browse/HADOOP-12016
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Jakob Homan
>Assignee: Arthur Vigil
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-12016.001.patch
>
>
> {code}  public RemoteIterator<FileStatus> listStatusIterator(final Path p)
>   throws FileNotFoundException, IOException {
> return new RemoteIterator<FileStatus>() {
> //...
>   throw new NoSuchElementException("No more entry in " + p);
> }
> return stats[i++];
>   }{code}
> Should be 'no more entries'



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12016) Typo in FileSystem:: listStatusIterator

2015-05-21 Thread Arthur Vigil (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arthur Vigil updated HADOOP-12016:
--
Status: Patch Available  (was: Open)

Fixes typo in FileSystem:: listStatusIterator

> Typo in FileSystem:: listStatusIterator
> ---
>
> Key: HADOOP-12016
> URL: https://issues.apache.org/jira/browse/HADOOP-12016
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Jakob Homan
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-12016.001.patch
>
>
> {code}  public RemoteIterator<FileStatus> listStatusIterator(final Path p)
>   throws FileNotFoundException, IOException {
> return new RemoteIterator<FileStatus>() {
> //...
>   throw new NoSuchElementException("No more entry in " + p);
> }
> return stats[i++];
>   }{code}
> Should be 'no more entries'



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12016) Typo in FileSystem:: listStatusIterator

2015-05-21 Thread Arthur Vigil (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arthur Vigil updated HADOOP-12016:
--
Attachment: HADOOP-12016.001.patch

fixes typo

> Typo in FileSystem:: listStatusIterator
> ---
>
> Key: HADOOP-12016
> URL: https://issues.apache.org/jira/browse/HADOOP-12016
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Jakob Homan
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-12016.001.patch
>
>
> {code}  public RemoteIterator<FileStatus> listStatusIterator(final Path p)
>   throws FileNotFoundException, IOException {
> return new RemoteIterator<FileStatus>() {
> //...
>   throw new NoSuchElementException("No more entry in " + p);
> }
> return stats[i++];
>   }{code}
> Should be 'no more entries'



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9984) FileSystem#globStatus and FileSystem#listStatus should resolve symlinks by default

2015-05-21 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555032#comment-14555032
 ] 

Sanjay Radia commented on HADOOP-9984:
--

[~asuresh] thanks for your comment on Hive.
* if you configure hiveserver2 with sql-standard-auth then the file system 
permissions do not apply and symlinks should not be an issue. The data should 
be owned by hiveserver and users should not have hdfs access to those 
directories and files.
* if you configure file-system-auth then the issue you describe will occur 
*when impersonation is turned off*. Recall we had to fix Hive to work with 
encryption; likewise Hive will need to understand symlinks. To deal with 
atomicity issues (the race between checking and setting a symlink) we may have 
to add an api to HDFS to resolve to an inode# and then resolve from an inode#. 
(HDFS does have inode numbers, which were added for NFS.) However, isn't 
file-system-auth usually used with impersonation, where symlinks are not an 
issue?
* With impersonation turned on, the job will run as the user and symlinks will 
work. Correct?
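
To illustrate the atomicity concern, a hypothetical check-then-use sequence; 
the window between the two calls is the problem (the method names are real 
FileSystem APIs, the scenario itself is illustrative only):
{code}
// check: p currently resolves (through the symlink) to a permitted target
FileStatus st = fs.getFileStatus(p);
// ...another client can re-point the symlink right here...
// use: the open may now resolve to a different inode than the one checked
FSDataInputStream in = fs.open(p);
{code}
Resolving p to an inode# once and performing the open against that inode# 
would close the window.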

> FileSystem#globStatus and FileSystem#listStatus should resolve symlinks by 
> default
> --
>
> Key: HADOOP-9984
> URL: https://issues.apache.org/jira/browse/HADOOP-9984
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.1.0-beta
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Critical
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-9984.001.patch, HADOOP-9984.003.patch, 
> HADOOP-9984.005.patch, HADOOP-9984.007.patch, HADOOP-9984.009.patch, 
> HADOOP-9984.010.patch, HADOOP-9984.011.patch, HADOOP-9984.012.patch, 
> HADOOP-9984.013.patch, HADOOP-9984.014.patch, HADOOP-9984.015.patch
>
>
> During the process of adding symlink support to FileSystem, we realized that 
> many existing HDFS clients would be broken by listStatus and globStatus 
> returning symlinks.  One example is applications that assume that 
> !FileStatus#isFile implies that the inode is a directory.  As we discussed in 
> HADOOP-9972 and HADOOP-9912, we should default these APIs to returning 
> resolved paths.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12016) Typo in FileSystem:: listStatusIterator

2015-05-21 Thread Jakob Homan (JIRA)
Jakob Homan created HADOOP-12016:


 Summary: Typo in FileSystem:: listStatusIterator
 Key: HADOOP-12016
 URL: https://issues.apache.org/jira/browse/HADOOP-12016
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: Jakob Homan
Priority: Trivial


{code}  public RemoteIterator<FileStatus> listStatusIterator(final Path p)
  throws FileNotFoundException, IOException {
return new RemoteIterator<FileStatus>() {
//...
  throw new NoSuchElementException("No more entry in " + p);
}
return stats[i++];
  }{code}
Should be 'no more entries'



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HADOOP-11933) run test-patch.sh in a docker container under Jenkins

2015-05-21 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14554847#comment-14554847
 ] 

Allen Wittenauer edited comment on HADOOP-11933 at 5/21/15 7:22 PM:


I'm playing shell games. :D

--dockermode is a standalone option that says "hey, we were invoked underneath 
docker, don't spawn another one". TESTPATCHMODE has all of the user-passed 
parameters.  We then append --basedir, etc. to *override* the values in 
TESTPATCHMODE.

So this code is adding --dockermode to the parameter list.  If you want, I can 
move --dockermode to be after TESTPATCHMODE if it makes it cleaner.  Or maybe 
just add a comment here, warning of the black magic in the works.


was (Author: aw):
I'm play shell games. :D

--dockermode is a standalone option that says "hey, we were invoked underneath 
docker, don't spawn another one" TESTPATCHMODE has all of the user passed 
parameters.  We then append --basedir, etc to *override* the values in 
TESTPATCHMODE.

So this code is adding --dockermode to the parameter list.  If you want, I can 
move --dockermode to be after TESTPATCHMODE if it makes it cleaner.  Or maybe 
just a comment here, warning of the black magic in the works.

> run test-patch.sh in a docker container under Jenkins
> -
>
> Key: HADOOP-11933
> URL: https://issues.apache.org/jira/browse/HADOOP-11933
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11933.00.patch, HADOOP-11933.01.patch, 
> HADOOP-11933.02.patch, HADOOP-11933.03.patch, HADOOP-11933.04.patch, 
> HADOOP-11933.05.patch
>
>
> because of how hard it is to control the content of the Jenkins environment, 
> it would be beneficial to run it in a docker container so that we can have 
> tight control of the environment



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12013) Generate fixed data to perform erasure coder test

2015-05-21 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14554906#comment-14554906
 ] 

Zhe Zhang commented on HADOOP-12013:


Thanks Kai for the work. The overall structure of allowing both fixed and 
random encoding inputs looks good.

My main question is how {{FIXED_DATA_GENERATOR}} leads to the desired 
repeatable behavior. It is a static variable of the base testing class, so the 
generated byte is not really predictable. In {{SimulatedFSDataset}}, the 
simulated byte is predictable because it can be statically calculated from the 
block ID and offset.

Do you have an example of how to use it in a real test? Like setting an 
expected value and asserting it (I assume the assertion should fail without 
the fixed value but pass with it).

Nit: the below code can be replaced by a byte mask, similar to 
{{SimulatedFSDataset#simulatedByte}}:

{code}
+  buffer[i] = (byte) FIXED_DATA_GENERATOR++;
+  if (FIXED_DATA_GENERATOR == 256) {
+FIXED_DATA_GENERATOR = 0;
+  }
+}
{code}
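
For example, a minimal sketch of the masking variant (assuming the counter 
stays an int field):
{code}
// keeping only the low byte makes the explicit reset branch unnecessary;
// the int counter can simply keep incrementing
buffer[i] = (byte) (FIXED_DATA_GENERATOR++ & 0xFF);
{code}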

> Generate fixed data to perform erasure coder test
> -
>
> Key: HADOOP-12013
> URL: https://issues.apache.org/jira/browse/HADOOP-12013
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: HDFS-7285
>
> Attachments: HADOOP-12013-HDFS-7285-v1.patch
>
>
> While working on native erasure coders, it was found useful to allow 
> generating and using fixed data to test raw erasure coders to ease the 
> debugging some coding issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11933) run test-patch.sh in a docker container under Jenkins

2015-05-21 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14554872#comment-14554872
 ] 

Sean Busbey commented on HADOOP-11933:
--

The note here is enough I think. I should have done more work to trace back 
what ends up in TESTPATCHMODE. If it comes up again we can always add a comment.

+1 lgtm.

> run test-patch.sh in a docker container under Jenkins
> -
>
> Key: HADOOP-11933
> URL: https://issues.apache.org/jira/browse/HADOOP-11933
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11933.00.patch, HADOOP-11933.01.patch, 
> HADOOP-11933.02.patch, HADOOP-11933.03.patch, HADOOP-11933.04.patch, 
> HADOOP-11933.05.patch
>
>
> because of how hard it is to control the content of the Jenkins environment, 
> it would be beneficial to run it in a docker container so that we can have 
> tight control of the environment



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11933) run test-patch.sh in a docker container under Jenkins

2015-05-21 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14554847#comment-14554847
 ] 

Allen Wittenauer commented on HADOOP-11933:
---

I'm playing shell games. :D

--dockermode is a standalone option that says "hey, we were invoked underneath 
docker, don't spawn another one". TESTPATCHMODE has all of the user-passed 
parameters.  We then append --basedir, etc. to *override* the values in 
TESTPATCHMODE.

So this code is adding --dockermode to the parameter list.  If you want, I can 
move --dockermode to be after TESTPATCHMODE if it makes it cleaner.  Or maybe 
just add a comment here, warning of the black magic in the works.
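
For reference, a rough sketch (with hypothetical variable names) of why the 
append works: the option parser keeps the last value it sees, so flags appended 
after ${TESTPATCHMODE} win over anything the user passed in.
{code}
for i in "$@"; do
  case ${i} in
    --basedir=*)
      BASEDIR=${i#*=}    # last occurrence wins, so appended flags override
    ;;
    --dockermode)
      DOCKERMODE=true    # standalone flag; takes no argument
    ;;
  esac
done
{code}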

> run test-patch.sh in a docker container under Jenkins
> -
>
> Key: HADOOP-11933
> URL: https://issues.apache.org/jira/browse/HADOOP-11933
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11933.00.patch, HADOOP-11933.01.patch, 
> HADOOP-11933.02.patch, HADOOP-11933.03.patch, HADOOP-11933.04.patch, 
> HADOOP-11933.05.patch
>
>
> because of how hard it is to control the content of the Jenkins environment, 
> it would be beneficial to run it in a docker container so that we can have 
> tight control of the environment



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11933) run test-patch.sh in a docker container under Jenkins

2015-05-21 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14554816#comment-14554816
 ] 

Sean Busbey commented on HADOOP-11933:
--

{code}
+
+"${tpbin}" \
+   --dockermode ${TESTPATCHMODE} \
+   --basedir="${BASEDIR}" \
{code}

{code}
+  --dockermode)
+DOCKERMODE=true
+  ;;
{code}

How do these two work together? From what I can tell, the parsing says 
dockermode takes no args, but it looks like we're invoking it with one (and no 
=).

> run test-patch.sh in a docker container under Jenkins
> -
>
> Key: HADOOP-11933
> URL: https://issues.apache.org/jira/browse/HADOOP-11933
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11933.00.patch, HADOOP-11933.01.patch, 
> HADOOP-11933.02.patch, HADOOP-11933.03.patch, HADOOP-11933.04.patch, 
> HADOOP-11933.05.patch
>
>
> because of how hard it is to control the content of the Jenkins environment, 
> it would be beneficial to run it in a docker container so that we can have 
> tight control of the environment



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-12015) Jenkins jobs will attempt to patch using an image

2015-05-21 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang resolved HADOOP-12015.
-
Resolution: Won't Fix

Closing as won't fix for the reasons above.

> Jenkins jobs will attempt to patch using an image
> -
>
> Key: HADOOP-12015
> URL: https://issues.apache.org/jira/browse/HADOOP-12015
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ray Chiang
>Assignee: Allen Wittenauer
>
> For MAPREDUCE-6222, I attached a patch and a screenshot and got no Jenkins 
> message.
> Robert Kanter helped me re-launch the job in Jenkins and it failed attempting 
> to use the screenshot as the patch.  Ideally, something in the build scripts 
> needs to be improved in order to ignore non-patch files.
> Re-launched Jenkins job at
> https://builds.apache.org/view/H-L/view/Hadoop/job/PreCommit-MAPREDUCE-Build/5748/
> Original Jenkins job at
> https://builds.apache.org/view/H-L/view/Hadoop/job/PreCommit-MAPREDUCE-Build/5746/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12015) Jenkins jobs will attempt to patch using an image

2015-05-21 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14554669#comment-14554669
 ] 

Allen Wittenauer commented on HADOOP-12015:
---

bq. Do you think it's worthwhile to use "file" to identify the file as "diff 
output text"? 

HADOOP-11906

:D

> Jenkins jobs will attempt to patch using an image
> -
>
> Key: HADOOP-12015
> URL: https://issues.apache.org/jira/browse/HADOOP-12015
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ray Chiang
>Assignee: Allen Wittenauer
>
> For MAPREDUCE-6222, I attached a patch and a screenshot and got no Jenkins 
> message.
> Robert Kanter helped me re-launch the job in Jenkins and it failed attempting 
> to use the screenshot as the patch.  Ideally, something in the build scripts 
> needs to be improved in order to ignore non-patch files.
> Re-launched Jenkins job at
> https://builds.apache.org/view/H-L/view/Hadoop/job/PreCommit-MAPREDUCE-Build/5748/
> Original Jenkins job at
> https://builds.apache.org/view/H-L/view/Hadoop/job/PreCommit-MAPREDUCE-Build/5746/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12015) Jenkins jobs will attempt to patch using an image

2015-05-21 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14554666#comment-14554666
 ] 

Allen Wittenauer commented on HADOOP-12015:
---

BTW, ignoring the last file if it is a non-patch is going to result in a lot of 
churn:

* multiple downloads to find a file, since people name things weirdly
* lots of excess patch precommit checking for patches that have already been 
processed because the image/doc/whatever came after the patch precommit ran 
(extremely common in the cases where there is more than just a patch present)

IMO, this is pretty much a won't fix.  Just upload the patch last and it all 
works.

> Jenkins jobs will attempt to patch using an image
> -
>
> Key: HADOOP-12015
> URL: https://issues.apache.org/jira/browse/HADOOP-12015
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ray Chiang
>Assignee: Allen Wittenauer
>
> For MAPREDUCE-6222, I attached a patch and a screenshot and got no Jenkins 
> message.
> Robert Kanter helped me re-launch the job in Jenkins and it failed attempting 
> to use the screenshot as the patch.  Ideally, something in the build scripts 
> needs to be improved in order to ignore non-patch files.
> Re-launched Jenkins job at
> https://builds.apache.org/view/H-L/view/Hadoop/job/PreCommit-MAPREDUCE-Build/5748/
> Original Jenkins job at
> https://builds.apache.org/view/H-L/view/Hadoop/job/PreCommit-MAPREDUCE-Build/5746/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12015) Jenkins jobs will attempt to patch using an image

2015-05-21 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14554663#comment-14554663
 ] 

Ray Chiang commented on HADOOP-12015:
-

Got it.

Do you think it's worthwhile to use "file" to identify the file as "diff output 
text"?  Otherwise, I'm fine with closing this up and re-submitting the patch on 
my JIRA.

> Jenkins jobs will attempt to patch using an image
> -
>
> Key: HADOOP-12015
> URL: https://issues.apache.org/jira/browse/HADOOP-12015
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ray Chiang
>Assignee: Allen Wittenauer
>
> For MAPREDUCE-6222, I attached a patch and a screenshot and got no Jenkins 
> message.
> Robert Kanter helped me re-launch the job in Jenkins and it failed attempting 
> to use the screenshot as the patch.  Ideally, something in the build scripts 
> needs to be improved in order to ignore non-patch files.
> Re-launched Jenkins job at
> https://builds.apache.org/view/H-L/view/Hadoop/job/PreCommit-MAPREDUCE-Build/5748/
> Original Jenkins job at
> https://builds.apache.org/view/H-L/view/Hadoop/job/PreCommit-MAPREDUCE-Build/5746/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12015) Jenkins jobs will attempt to patch using an image

2015-05-21 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14554649#comment-14554649
 ] 

Allen Wittenauer commented on HADOOP-12015:
---

test-patch.sh looks at the last file to determine what it should do.  This is 
pretty much by design.

> Jenkins jobs will attempt to patch using an image
> -
>
> Key: HADOOP-12015
> URL: https://issues.apache.org/jira/browse/HADOOP-12015
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ray Chiang
>Assignee: Allen Wittenauer
>
> For MAPREDUCE-6222, I attached a patch and a screenshot and got no Jenkins 
> message.
> Robert Kanter helped me re-launch the job in Jenkins and it failed attempting 
> to use the screenshot as the patch.  Ideally, something in the build scripts 
> needs to be improved in order to ignore non-patch files.
> Re-launched Jenkins job at
> https://builds.apache.org/view/H-L/view/Hadoop/job/PreCommit-MAPREDUCE-Build/5748/
> Original Jenkins job at
> https://builds.apache.org/view/H-L/view/Hadoop/job/PreCommit-MAPREDUCE-Build/5746/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12015) Jenkins jobs will attempt to patch using an image

2015-05-21 Thread Ray Chiang (JIRA)
Ray Chiang created HADOOP-12015:
---

 Summary: Jenkins jobs will attempt to patch using an image
 Key: HADOOP-12015
 URL: https://issues.apache.org/jira/browse/HADOOP-12015
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ray Chiang
Assignee: Allen Wittenauer


For MAPREDUCE-6222, I attached a patch and a screenshot and got no Jenkins 
message.

Robert Kanter helped me re-launch the job in Jenkins and it failed attempting 
to use the screenshot as the patch.  Ideally, something in the build scripts 
needs to be improved in order to ignore non-patch files.

Re-launched Jenkins job at

https://builds.apache.org/view/H-L/view/Hadoop/job/PreCommit-MAPREDUCE-Build/5748/

Original Jenkins job at

https://builds.apache.org/view/H-L/view/Hadoop/job/PreCommit-MAPREDUCE-Build/5746/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11418) Property "io.compression.codec.lzo.class" does not work with other value besides default

2015-05-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14554611#comment-14554611
 ] 

Hadoop QA commented on HADOOP-11418:


\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 44s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 30s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 36s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m  4s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 37s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 41s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  22m 41s | Tests passed in 
hadoop-common. |
| | |  59m 53s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12734565/HADOOP-11418.006.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 2b6bcfd |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6795/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6795/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6795/console |


This message was automatically generated.

> Property "io.compression.codec.lzo.class" does not work with other value 
> besides default
> 
>
> Key: HADOOP-11418
> URL: https://issues.apache.org/jira/browse/HADOOP-11418
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 2.6.0
>Reporter: fang fang chen
>Assignee: fang fang chen
>  Labels: BB2015-05-RFC
> Attachments: HADOOP-11418-004.patch, HADOOP-11418-1.patch, 
> HADOOP-11418-2.patch, HADOOP-11418-3.patch, HADOOP-11418.005.patch, 
> HADOOP-11418.006.patch, HADOOP-11418.patch
>
>
> From the following code, it seems "io.compression.codec.lzo.class" does not 
> work for codecs other than the default. Hadoop will always treat it as 
> defaultClazz. I think it is a bug. Please let me know if this works as 
> designed. Thanks
>  77   private static final String defaultClazz =
>  78   "org.apache.hadoop.io.compress.LzoCodec";
>  82   public synchronized boolean isSupported() {
>  83 if (!checked) {
>  84   checked = true;
>  85   String extClazz =
>  86   (conf.get(CONF_LZO_CLASS) == null ? System
>  87   .getProperty(CONF_LZO_CLASS) : null);
>  88   String clazz = (extClazz != null) ? extClazz : defaultClazz;
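
A sketch of the presumably intended lookup order (configuration first, then 
system property), for illustration only:
{code}
// read the codec class from the configuration, fall back to the system
// property, and only then to the built-in default
String extClazz = conf.get(CONF_LZO_CLASS) != null
    ? conf.get(CONF_LZO_CLASS) : System.getProperty(CONF_LZO_CLASS);
String clazz = (extClazz != null) ? extClazz : defaultClazz;
{code}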



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12014) hadoop-config.cmd displays a wrong error message

2015-05-21 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14554605#comment-14554605
 ] 

Kengo Seki commented on HADOOP-12014:
-

The error above also occurred in my environment, but {{git apply -p0 
HADOOP-12014.001.patch}} passed successfully.
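
For context: a patch created with {{git diff --no-prefix}} carries no {{a/}} 
and {{b/}} path prefixes, so it needs strip level 0 rather than the default:
{code}
# no-prefix patches have no leading path component to strip
git apply -p0 HADOOP-12014.001.patch
# roughly equivalent with patch(1):
patch -p0 < HADOOP-12014.001.patch
{code}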

> hadoop-config.cmd displays a wrong error message
> 
>
> Key: HADOOP-12014
> URL: https://issues.apache.org/jira/browse/HADOOP-12014
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Kengo Seki
>Assignee: Kengo Seki
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-12014.001.patch
>
>
> If an incorrect value is set for %JAVA_HOME%, hadoop-config.cmd displays an 
> error message as follows.
> {code}
>    echo Error: JAVA_HOME is incorrectly set.
>    echo    Please update %HADOOP_HOME%\conf\hadoop-env.cmd
> {code}
> But the default configuration directory has moved to etc/hadoop, so it should 
> be  %HADOOP_CONF_DIR%\hadoop-env.cmd.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12014) hadoop-config.cmd displays a wrong error message

2015-05-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14554597#comment-14554597
 ] 

Hadoop QA commented on HADOOP-12014:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12734576/HADOOP-12014.001.patch 
|
| Optional Tests |  |
| git revision | trunk / 2b6bcfd |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6796/console |


This message was automatically generated.

> hadoop-config.cmd displays a wrong error message
> 
>
> Key: HADOOP-12014
> URL: https://issues.apache.org/jira/browse/HADOOP-12014
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Kengo Seki
>Assignee: Kengo Seki
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-12014.001.patch
>
>
> If an incorrect value is set for %JAVA_HOME%, hadoop-config.cmd displays an 
> error message as follows.
> {code}
>    echo Error: JAVA_HOME is incorrectly set.
>    echo    Please update %HADOOP_HOME%\conf\hadoop-env.cmd
> {code}
> But the default configuration directory has moved to etc/hadoop, so it should 
> be  %HADOOP_CONF_DIR%\hadoop-env.cmd.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12014) hadoop-config.cmd displays a wrong error message

2015-05-21 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12014:

Assignee: Kengo Seki
  Status: Patch Available  (was: Open)

> hadoop-config.cmd displays a wrong error message
> 
>
> Key: HADOOP-12014
> URL: https://issues.apache.org/jira/browse/HADOOP-12014
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Kengo Seki
>Assignee: Kengo Seki
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-12014.001.patch
>
>
> If an incorrect value is set for %JAVA_HOME%, hadoop-config.cmd displays an 
> error message as follows.
> {code}
>    echo Error: JAVA_HOME is incorrectly set.
>    echo    Please update %HADOOP_HOME%\conf\hadoop-env.cmd
> {code}
> But the default configuration directory has moved to etc/hadoop, so it should 
> be  %HADOOP_CONF_DIR%\hadoop-env.cmd.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12014) hadoop-config.cmd displays a wrong error message

2015-05-21 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12014:

Attachment: HADOOP-12014.001.patch

> hadoop-config.cmd displays a wrong error message
> 
>
> Key: HADOOP-12014
> URL: https://issues.apache.org/jira/browse/HADOOP-12014
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Kengo Seki
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-12014.001.patch
>
>
> If an incorrect value is set for %JAVA_HOME%, hadoop-config.cmd displays an 
> error message as follows.
> {code}
>    echo Error: JAVA_HOME is incorrectly set.
>    echo    Please update %HADOOP_HOME%\conf\hadoop-env.cmd
> {code}
> But the default configuration directory has moved to etc/hadoop, so it should 
> be  %HADOOP_CONF_DIR%\hadoop-env.cmd.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12014) hadoop-config.cmd displays a wrong error message

2015-05-21 Thread Kengo Seki (JIRA)
Kengo Seki created HADOOP-12014:
---

 Summary: hadoop-config.cmd displays a wrong error message
 Key: HADOOP-12014
 URL: https://issues.apache.org/jira/browse/HADOOP-12014
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Reporter: Kengo Seki
Priority: Trivial


If an incorrect value is set for %JAVA_HOME%, hadoop-config.cmd displays an 
error message as follows.

{code}
   echo Error: JAVA_HOME is incorrectly set.
   echo    Please update %HADOOP_HOME%\conf\hadoop-env.cmd
{code}

But the default configuration directory has moved to etc/hadoop, so it should 
be  %HADOOP_CONF_DIR%\hadoop-env.cmd.
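
A sketch of the corrected message; the real change is in the attached patch, 
this just shows the intent:
{code}
@rem point users at the actual configuration directory
echo Error: JAVA_HOME is incorrectly set.
echo        Please update %HADOOP_CONF_DIR%\hadoop-env.cmd
{code}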



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11772) RPC Invoker relies on static ClientCache which has synchronized(this) blocks

2015-05-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14554555#comment-14554555
 ] 

Hudson commented on HADOOP-11772:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2150 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2150/])
HADOOP-11772. RPC Invoker relies on static ClientCache which has 
synchronized(this) blocks. Contributed by Haohui Mai. (wheat9: rev 
fb6b38d67d8b997eca498fc5010b037e3081ace7)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> RPC Invoker relies on static ClientCache which has synchronized(this) blocks
> 
>
> Key: HADOOP-11772
> URL: https://issues.apache.org/jira/browse/HADOOP-11772
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc, performance
>Reporter: Gopal V
>Assignee: Haohui Mai
> Fix For: 2.8.0
>
> Attachments: HADOOP-11772-001.patch, HADOOP-11772-002.patch, 
> HADOOP-11772-003.patch, HADOOP-11772-wip-001.patch, 
> HADOOP-11772-wip-002.patch, HADOOP-11772.004.patch, after-ipc-fix.png, 
> cached-connections.png, cached-locking.png, dfs-sync-ipc.png, 
> sync-client-bt.png, sync-client-threads.png
>
>
> {code}
>   private static ClientCache CLIENTS=new ClientCache();
> ...
> this.client = CLIENTS.getClient(conf, factory);
> {code}
> Meanwhile in ClientCache
> {code}
> public synchronized Client getClient(Configuration conf,
>   SocketFactory factory, Class<? extends Writable> valueClass) {
> ...
>Client client = clients.get(factory);
> if (client == null) {
>   client = new Client(valueClass, conf, factory);
>   clients.put(factory, client);
> } else {
>   client.incCount();
> }
> {code}
> All invokers end up calling these methods, resulting in IPC clients choking 
> up.
> !sync-client-threads.png!
> !sync-client-bt.png!
> !dfs-sync-ipc.png!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10366) Add whitespaces between the classes for values in core-default.xml to fit better in browser

2015-05-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14554554#comment-14554554
 ] 

Hudson commented on HADOOP-10366:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2150 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2150/])
HADOOP-10366. Add whitespaces between classes for values in core-default.xml to 
fit better in browser. Contributed by kanaka kumar avvaru. (aajisaka: rev 
0e4f1081c7a98e1c0c4f922f5e2afe467a0d763f)
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/TransparentEncryption.md
* hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* hadoop-common-project/hadoop-common/CHANGES.txt


> Add whitespaces between the classes for values in core-default.xml to fit 
> better in browser
> ---
>
> Key: HADOOP-10366
> URL: https://issues.apache.org/jira/browse/HADOOP-10366
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Chengwei Yang
>Assignee: kanaka kumar avvaru
>Priority: Minor
>  Labels: documentation, newbie
> Fix For: 2.8.0
>
> Attachments: HADOOP-10366-03.patch, HADOOP-10366-04.patch, 
> HADOOP-10366-wrap01, HADOOP-10366-wrap01.patch, HADOOP-10366-wrap02.patch, 
> HADOOP-10366.patch
>
>
> The io.serializations property in core-default.xml has a very long value on a 
> single line, as below
> {code}
> <property>
>   <name>io.serializations</name>
>   <value>org.apache.hadoop.io.serializer.WritableSerialization,org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization,org.apache.hadoop.io.serializer.avro.AvroReflectSerialization</value>
>   <description>A list of serialization classes that can be used for
>   obtaining serializers and deserializers.</description>
> </property>
> {code}
> which not only breaks the code style (a very long line) but also does not fit 
> well in the browser. Due to this single very long line, the "description" 
> column cannot be shown in the browser by default
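
A sketch of the kind of change intended, assuming each comma-separated entry is 
trimmed when the value is parsed:
{code}
<value>org.apache.hadoop.io.serializer.WritableSerialization,
       org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization,
       org.apache.hadoop.io.serializer.avro.AvroReflectSerialization</value>
{code}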



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11418) Property "io.compression.codec.lzo.class" does not work with other value besides default

2015-05-21 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-11418:
---
Attachment: HADOOP-11418.006.patch

Fixed checkstyle issue.

> Property "io.compression.codec.lzo.class" does not work with other value 
> besides default
> 
>
> Key: HADOOP-11418
> URL: https://issues.apache.org/jira/browse/HADOOP-11418
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 2.6.0
>Reporter: fang fang chen
>Assignee: fang fang chen
>  Labels: BB2015-05-RFC
> Attachments: HADOOP-11418-004.patch, HADOOP-11418-1.patch, 
> HADOOP-11418-2.patch, HADOOP-11418-3.patch, HADOOP-11418.005.patch, 
> HADOOP-11418.006.patch, HADOOP-11418.patch
>
>
> From the following code, it seems "io.compression.codec.lzo.class" does not 
> work for codecs other than the default. Hadoop will always treat it as 
> defaultClazz. I think it is a bug. Please let me know if this works as 
> designed. Thanks
>  77   private static final String defaultClazz =
>  78   "org.apache.hadoop.io.compress.LzoCodec";
>  82   public synchronized boolean isSupported() {
>  83 if (!checked) {
>  84   checked = true;
>  85   String extClazz =
>  86   (conf.get(CONF_LZO_CLASS) == null ? System
>  87   .getProperty(CONF_LZO_CLASS) : null);
>  88   String clazz = (extClazz != null) ? extClazz : defaultClazz;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10366) Add whitespaces between the classes for values in core-default.xml to fit better in browser

2015-05-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14554352#comment-14554352
 ] 

Hudson commented on HADOOP-10366:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #192 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/192/])
HADOOP-10366. Add whitespaces between classes for values in core-default.xml to 
fit better in browser. Contributed by kanaka kumar avvaru. (aajisaka: rev 
0e4f1081c7a98e1c0c4f922f5e2afe467a0d763f)
* hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/TransparentEncryption.md


> Add whitespaces between the classes for values in core-default.xml to fit 
> better in browser
> ---
>
> Key: HADOOP-10366
> URL: https://issues.apache.org/jira/browse/HADOOP-10366
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Chengwei Yang
>Assignee: kanaka kumar avvaru
>Priority: Minor
>  Labels: documentation, newbie
> Fix For: 2.8.0
>
> Attachments: HADOOP-10366-03.patch, HADOOP-10366-04.patch, 
> HADOOP-10366-wrap01, HADOOP-10366-wrap01.patch, HADOOP-10366-wrap02.patch, 
> HADOOP-10366.patch
>
>
> The io.serializations property in core-default.xml has a very long value on a 
> single line, as below
> {code}
> <property>
>   <name>io.serializations</name>
>   <value>org.apache.hadoop.io.serializer.WritableSerialization,org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization,org.apache.hadoop.io.serializer.avro.AvroReflectSerialization</value>
>   <description>A list of serialization classes that can be used for
>   obtaining serializers and deserializers.</description>
> </property>
> {code}
> which not only breaks the code style (a very long line) but also does not fit 
> well in the browser. Due to this single very long line, the "description" 
> column cannot be shown in the browser by default



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11772) RPC Invoker relies on static ClientCache which has synchronized(this) blocks

2015-05-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14554353#comment-14554353
 ] 

Hudson commented on HADOOP-11772:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #192 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/192/])
HADOOP-11772. RPC Invoker relies on static ClientCache which has 
synchronized(this) blocks. Contributed by Haohui Mai. (wheat9: rev 
fb6b38d67d8b997eca498fc5010b037e3081ace7)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java


> RPC Invoker relies on static ClientCache which has synchronized(this) blocks
> 
>
> Key: HADOOP-11772
> URL: https://issues.apache.org/jira/browse/HADOOP-11772
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc, performance
>Reporter: Gopal V
>Assignee: Haohui Mai
> Fix For: 2.8.0
>
> Attachments: HADOOP-11772-001.patch, HADOOP-11772-002.patch, 
> HADOOP-11772-003.patch, HADOOP-11772-wip-001.patch, 
> HADOOP-11772-wip-002.patch, HADOOP-11772.004.patch, after-ipc-fix.png, 
> cached-connections.png, cached-locking.png, dfs-sync-ipc.png, 
> sync-client-bt.png, sync-client-threads.png
>
>
> {code}
>   private static ClientCache CLIENTS=new ClientCache();
> ...
> this.client = CLIENTS.getClient(conf, factory);
> {code}
> Meanwhile in ClientCache
> {code}
> public synchronized Client getClient(Configuration conf,
>   SocketFactory factory, Class<? extends Writable> valueClass) {
> ...
>Client client = clients.get(factory);
> if (client == null) {
>   client = new Client(valueClass, conf, factory);
>   clients.put(factory, client);
> } else {
>   client.incCount();
> }
> {code}
> All invokers end up calling these methods, resulting in IPC clients choking 
> up.
> !sync-client-threads.png!
> !sync-client-bt.png!
> !dfs-sync-ipc.png!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11772) RPC Invoker relies on static ClientCache which has synchronized(this) blocks

2015-05-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14554301#comment-14554301
 ] 

Hudson commented on HADOOP-11772:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #202 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/202/])
HADOOP-11772. RPC Invoker relies on static ClientCache which has 
synchronized(this) blocks. Contributed by Haohui Mai. (wheat9: rev 
fb6b38d67d8b997eca498fc5010b037e3081ace7)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> RPC Invoker relies on static ClientCache which has synchronized(this) blocks
> 
>
> Key: HADOOP-11772
> URL: https://issues.apache.org/jira/browse/HADOOP-11772
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc, performance
>Reporter: Gopal V
>Assignee: Haohui Mai
> Fix For: 2.8.0
>
> Attachments: HADOOP-11772-001.patch, HADOOP-11772-002.patch, 
> HADOOP-11772-003.patch, HADOOP-11772-wip-001.patch, 
> HADOOP-11772-wip-002.patch, HADOOP-11772.004.patch, after-ipc-fix.png, 
> cached-connections.png, cached-locking.png, dfs-sync-ipc.png, 
> sync-client-bt.png, sync-client-threads.png
>
>
> {code}
>   private static ClientCache CLIENTS=new ClientCache();
> ...
> this.client = CLIENTS.getClient(conf, factory);
> {code}
> Meanwhile in ClientCache
> {code}
> public synchronized Client getClient(Configuration conf,
>   SocketFactory factory, Class<? extends Writable> valueClass) {
> ...
>Client client = clients.get(factory);
> if (client == null) {
>   client = new Client(valueClass, conf, factory);
>   clients.put(factory, client);
> } else {
>   client.incCount();
> }
> {code}
> All invokers end up calling these methods, resulting in IPC clients choking 
> up.
> !sync-client-threads.png!
> !sync-client-bt.png!
> !dfs-sync-ipc.png!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10366) Add whitespaces between the classes for values in core-default.xml to fit better in browser

2015-05-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14554300#comment-14554300
 ] 

Hudson commented on HADOOP-10366:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #202 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/202/])
HADOOP-10366. Add whitespaces between classes for values in core-default.xml to 
fit better in browser. Contributed by kanaka kumar avvaru. (aajisaka: rev 
0e4f1081c7a98e1c0c4f922f5e2afe467a0d763f)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/TransparentEncryption.md


> Add whitespaces between the classes for values in core-default.xml to fit 
> better in browser
> ---
>
> Key: HADOOP-10366
> URL: https://issues.apache.org/jira/browse/HADOOP-10366
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Chengwei Yang
>Assignee: kanaka kumar avvaru
>Priority: Minor
>  Labels: documentation, newbie
> Fix For: 2.8.0
>
> Attachments: HADOOP-10366-03.patch, HADOOP-10366-04.patch, 
> HADOOP-10366-wrap01, HADOOP-10366-wrap01.patch, HADOOP-10366-wrap02.patch, 
> HADOOP-10366.patch
>
>
> The io.serializations property in core-default.xml has a very long value on a 
> single line, as below
> {code}
> <property>
>   <name>io.serializations</name>
>   <value>org.apache.hadoop.io.serializer.WritableSerialization,org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization,org.apache.hadoop.io.serializer.avro.AvroReflectSerialization</value>
>   <description>A list of serialization classes that can be used for
>   obtaining serializers and deserializers.</description>
> </property>
> {code}
> which not only breaks the code style (a very long line) but also does not fit 
> well in the browser. Due to this single very long line, the "description" 
> column cannot be shown in the browser by default



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12013) Generate fixed data to perform erasure coder test

2015-05-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14554294#comment-14554294
 ] 

Hadoop QA commented on HADOOP-12013:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   5m 19s | Pre-patch HDFS-7285 compilation 
is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 3 new or modified test files. |
| {color:green}+1{color} | javac |   7m 32s | There were no new javac warning 
messages. |
| {color:red}-1{color} | release audit |   0m 13s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m  3s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 35s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 40s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  23m 56s | Tests passed in 
hadoop-common. |
| | |  41m 58s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12734439/HADOOP-12013-HDFS-7285-v1.patch
 |
| Optional Tests | javac unit findbugs checkstyle |
| git revision | HDFS-7285 / 9fdb5be |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6794/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6794/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6794/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6794/console |


This message was automatically generated.

> Generate fixed data to perform erasure coder test
> -
>
> Key: HADOOP-12013
> URL: https://issues.apache.org/jira/browse/HADOOP-12013
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: HDFS-7285
>
> Attachments: HADOOP-12013-HDFS-7285-v1.patch
>
>
> While working on native erasure coders, it was found useful to allow 
> generating and using fixed data to test raw erasure coders to ease the 
> debugging some coding issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12011) Allow to dump verbose information to ease debugging in raw erasure coders

2015-05-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14554289#comment-14554289
 ] 

Hadoop QA commented on HADOOP-12011:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   5m 24s | Pre-patch HDFS-7285 compilation 
is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 3 new or modified test files. |
| {color:green}+1{color} | javac |   7m 48s | There were no new javac warning 
messages. |
| {color:red}-1{color} | release audit |   0m 13s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m  8s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 37s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 47s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  23m 45s | Tests passed in 
hadoop-common. |
| | |  42m 19s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12734430/HADOOP-12011-HDFS-7285-v1.patch
 |
| Optional Tests | javac unit findbugs checkstyle |
| git revision | HDFS-7285 / 9fdb5be |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6793/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6793/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6793/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6793/console |


This message was automatically generated.

> Allow to dump verbose information to ease debugging in raw erasure coders
> -
>
> Key: HADOOP-12011
> URL: https://issues.apache.org/jira/browse/HADOOP-12011
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: HDFS-7285
>
> Attachments: HADOOP-12011-HDFS-7285-v1.patch
>
>
> While working on native erasure coders, it was found useful to dump key 
> information like encode/decode matrix, erasures and etc. for the 
> encode/decode call.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12009) FileSystemContractBaseTest:testListStatus should not assume listStatus returns sorted results

2015-05-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14554290#comment-14554290
 ] 

Steve Loughran commented on HADOOP-12009:
-

Good catch. 

# Could you update the specification documentation to make this fact clear?
# What does this mean for the iterator query? It means there are no guarantees 
there either.

> FileSystemContractBaseTest:testListStatus should not assume listStatus 
> returns sorted results
> -
>
> Key: HADOOP-12009
> URL: https://issues.apache.org/jira/browse/HADOOP-12009
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Jakob Homan
>Assignee: J.Andreina
>Priority: Minor
>
> FileSystem.listStatus does not guarantee that implementations will return 
> sorted entries:
> {code}  /**
>* List the statuses of the files/directories in the given path if the path 
> is
>* a directory.
>* 
>* @param f given path
>* @return the statuses of the files/directories in the given patch
>* @throws FileNotFoundException when the path does not exist;
>* IOException see specific implementation
>*/
>   public abstract FileStatus[] listStatus(Path f) throws 
> FileNotFoundException, 
>  IOException;{code}
> However, FileSystemContractBaseTest, expects the elements to come back sorted:
> {code}Path[] testDirs = { path("/test/hadoop/a"),
> path("/test/hadoop/b"),
> path("/test/hadoop/c/1"), };
>
> // ...
> paths = fs.listStatus(path("/test/hadoop"));
> assertEquals(3, paths.length);
> assertEquals(path("/test/hadoop/a"), paths[0].getPath());
> assertEquals(path("/test/hadoop/b"), paths[1].getPath());
> assertEquals(path("/test/hadoop/c"), paths[2].getPath());{code}
> We should pass this test as long as all the paths are there, regardless of 
> their ordering.
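
A sketch of an order-independent version of that check (JUnit-style; {{path}} 
and {{fs}} as in the existing test):
{code}
// compare as sets so the assertion holds regardless of listing order
Set<Path> expected = new HashSet<>(Arrays.asList(
    path("/test/hadoop/a"), path("/test/hadoop/b"), path("/test/hadoop/c")));
Set<Path> actual = new HashSet<>();
for (FileStatus s : fs.listStatus(path("/test/hadoop"))) {
  actual.add(s.getPath());
}
assertEquals(expected, actual);
{code}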



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10366) Add whitespaces between the classes for values in core-default.xml to fit better in browser

2015-05-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14554233#comment-14554233
 ] 

Hudson commented on HADOOP-10366:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2132 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2132/])
HADOOP-10366. Add whitespaces between classes for values in core-default.xml to 
fit better in browser. Contributed by kanaka kumar avvaru. (aajisaka: rev 
0e4f1081c7a98e1c0c4f922f5e2afe467a0d763f)
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/TransparentEncryption.md
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/resources/core-default.xml


> Add whitespaces between the classes for values in core-default.xml to fit 
> better in browser
> ---
>
> Key: HADOOP-10366
> URL: https://issues.apache.org/jira/browse/HADOOP-10366
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Chengwei Yang
>Assignee: kanaka kumar avvaru
>Priority: Minor
>  Labels: documentation, newbie
> Fix For: 2.8.0
>
> Attachments: HADOOP-10366-03.patch, HADOOP-10366-04.patch, 
> HADOOP-10366-wrap01, HADOOP-10366-wrap01.patch, HADOOP-10366-wrap02.patch, 
> HADOOP-10366.patch
>
>
> The io.serializations property in core-default.xml has a very long value in a
> single line, as below:
> {code}
> <property>
>   <name>io.serializations</name>
>   <value>org.apache.hadoop.io.serializer.WritableSerialization,org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization,org.apache.hadoop.io.serializer.avro.AvroReflectSerialization</value>
>   <description>A list of serialization classes that can be used for
>   obtaining serializers and deserializers.</description>
> </property>
> {code}
> which not only breaks the code style (a very long line) but also does not fit
> well in the browser. Due to this single very long line, the "description"
> column cannot be shown in the browser by default.
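Whitespace between the class names is safe on the consuming side as long as
the list is read with trimming; a sketch of the consuming side (assuming
Configuration's trimmed-string accessor is used to parse the list):
{code}
// Sketch: each comma-separated token is trimmed before use, so the
// wrapped value parses the same as the single-line one.
Configuration conf = new Configuration();
for (String className : conf.getTrimmedStrings("io.serializations")) {
  System.out.println("serializer: " + className);
}
{code}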



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11772) RPC Invoker relies on static ClientCache which has synchronized(this) blocks

2015-05-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14554234#comment-14554234
 ] 

Hudson commented on HADOOP-11772:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2132 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2132/])
HADOOP-11772. RPC Invoker relies on static ClientCache which has 
synchronized(this) blocks. Contributed by Haohui Mai. (wheat9: rev 
fb6b38d67d8b997eca498fc5010b037e3081ace7)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java


> RPC Invoker relies on static ClientCache which has synchronized(this) blocks
> 
>
> Key: HADOOP-11772
> URL: https://issues.apache.org/jira/browse/HADOOP-11772
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc, performance
>Reporter: Gopal V
>Assignee: Haohui Mai
> Fix For: 2.8.0
>
> Attachments: HADOOP-11772-001.patch, HADOOP-11772-002.patch, 
> HADOOP-11772-003.patch, HADOOP-11772-wip-001.patch, 
> HADOOP-11772-wip-002.patch, HADOOP-11772.004.patch, after-ipc-fix.png, 
> cached-connections.png, cached-locking.png, dfs-sync-ipc.png, 
> sync-client-bt.png, sync-client-threads.png
>
>
> {code}
>   private static ClientCache CLIENTS=new ClientCache();
> ...
> this.client = CLIENTS.getClient(conf, factory);
> {code}
> Meanwhile in ClientCache
> {code}
> public synchronized Client getClient(Configuration conf,
>     SocketFactory factory, Class<? extends Writable> valueClass) {
> ...
>   Client client = clients.get(factory);
>   if (client == null) {
>     client = new Client(valueClass, conf, factory);
>     clients.put(factory, client);
>   } else {
>     client.incCount();
>   }
> {code}
> All invokers end up calling these methods, resulting in IPC clients choking 
> up.
> !sync-client-threads.png!
> !sync-client-bt.png!
> !dfs-sync-ipc.png!
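One way to remove the coarse-grained lock is a concurrent map with an atomic
insert; a sketch of that direction (an assumption for illustration, not the
committed patch):
{code}
// Sketch: lock-free lookup on the common path; only the insert races,
// and putIfAbsent resolves it. A real patch would also have to stop a
// losing candidate Client cleanly.
private final ConcurrentMap<SocketFactory, Client> clients =
    new ConcurrentHashMap<SocketFactory, Client>();

Client getClient(Configuration conf, SocketFactory factory,
    Class<? extends Writable> valueClass) {
  Client client = clients.get(factory);
  if (client == null) {
    Client candidate = new Client(valueClass, conf, factory);
    Client existing = clients.putIfAbsent(factory, candidate);
    if (existing == null) {
      return candidate;   // we inserted; the constructor holds one reference
    }
    client = existing;    // lost the race; the candidate is discarded
  }
  client.incCount();      // reuse an existing cached client
  return client;
}
{code}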



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11418) Property "io.compression.codec.lzo.class" does not work with other value besides default

2015-05-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14554229#comment-14554229
 ] 

Hadoop QA commented on HADOOP-11418:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  15m 17s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 43s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  3s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 10s | The applied patch generated  1 
new checkstyle issues (total was 4, now 5). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 37s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 41s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  23m 34s | Tests passed in 
hadoop-common. |
| | |  62m  6s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12734427/HADOOP-11418.005.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 0e4f108 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/6792/artifact/patchprocess/diffcheckstylehadoop-common.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6792/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6792/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6792/console |


This message was automatically generated.

> Property "io.compression.codec.lzo.class" does not work with other value 
> besides default
> 
>
> Key: HADOOP-11418
> URL: https://issues.apache.org/jira/browse/HADOOP-11418
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 2.6.0
>Reporter: fang fang chen
>Assignee: fang fang chen
>  Labels: BB2015-05-RFC
> Attachments: HADOOP-11418-004.patch, HADOOP-11418-1.patch, 
> HADOOP-11418-2.patch, HADOOP-11418-3.patch, HADOOP-11418.005.patch, 
> HADOOP-11418.patch
>
>
> From the following code, it seems "io.compression.codec.lzo.class" does not
> work for codecs other than the default; Hadoop will always treat it as
> defaultClazz. I think it is a bug. Please let me know if this works as
> designed. Thanks.
>   private static final String defaultClazz =
>       "org.apache.hadoop.io.compress.LzoCodec";
>   ...
>   public synchronized boolean isSupported() {
>     if (!checked) {
>       checked = true;
>       String extClazz =
>           (conf.get(CONF_LZO_CLASS) == null ? System
>               .getProperty(CONF_LZO_CLASS) : null);
>       String clazz = (extClazz != null) ? extClazz : defaultClazz;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HADOOP-11588) Benchmark framework and test for erasure coders

2015-05-21 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-11588 started by Kai Zheng.
--
> Benchmark framework and test for erasure coders
> ---
>
> Key: HADOOP-11588
> URL: https://issues.apache.org/jira/browse/HADOOP-11588
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: HDFS-7285
>
> Attachments: HADOOP-11588-v1.patch
>
>
> Given that more than one erasure coder is implemented for a code scheme, we
> need a benchmark and tests to help evaluate which one performs better in a
> given environment. This issue is to implement the benchmark framework.
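The core of such a framework is a repeatable timing loop around the
encode/decode calls; a minimal sketch against a hypothetical coder interface
(the real framework would drive the project's raw erasure coder
implementations):
{code}
// Sketch: measure steady-state encode throughput for one coder.
interface Coder {
  void encode(byte[][] dataUnits, byte[][] parityUnits);
}

static double encodeThroughputMBps(Coder coder, int dataNum, int parityNum,
    int chunkSize, int rounds) {
  byte[][] data = new byte[dataNum][chunkSize];
  byte[][] parity = new byte[parityNum][chunkSize];
  java.util.Random rng = new java.util.Random(0);   // deterministic input
  for (byte[] unit : data) {
    rng.nextBytes(unit);
  }
  for (int i = 0; i < rounds / 10; i++) {           // warm up the JIT
    coder.encode(data, parity);
  }
  long start = System.nanoTime();
  for (int i = 0; i < rounds; i++) {
    coder.encode(data, parity);
  }
  double seconds = (System.nanoTime() - start) / 1e9;
  double megabytes = (double) rounds * dataNum * chunkSize / (1 << 20);
  return megabytes / seconds;
}
{code}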



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12013) Generate fixed data to perform erasure coder test

2015-05-21 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-12013:
---
Attachment: HADOOP-12013-HDFS-7285-v1.patch

Uploaded a patch that allows generating and using fixed, repeatable data to
perform a test.

> Generate fixed data to perform erasure coder test
> -
>
> Key: HADOOP-12013
> URL: https://issues.apache.org/jira/browse/HADOOP-12013
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: HDFS-7285
>
> Attachments: HADOOP-12013-HDFS-7285-v1.patch
>
>
> While working on native erasure coders, it was found useful to allow
> generating and using fixed data to test raw erasure coders, to ease the
> debugging of some coding issues.
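A fixed seed is the usual way to get data that is both arbitrary and
repeatable; a sketch of the idea (an assumption for illustration, not the
attached patch):
{code}
// Sketch: the same seed always yields the same chunks, so a failing
// encode/decode case can be replayed exactly while debugging.
static byte[][] fixedDataUnits(int numUnits, int chunkSize, long seed) {
  java.util.Random rng = new java.util.Random(seed);
  byte[][] units = new byte[numUnits][chunkSize];
  for (byte[] unit : units) {
    rng.nextBytes(unit);
  }
  return units;
}
{code}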



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12013) Generate fixed data to perform erasure coder test

2015-05-21 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-12013:
---
Fix Version/s: HDFS-7285
   Status: Patch Available  (was: Open)

> Generate fixed data to perform erasure coder test
> -
>
> Key: HADOOP-12013
> URL: https://issues.apache.org/jira/browse/HADOOP-12013
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: HDFS-7285
>
> Attachments: HADOOP-12013-HDFS-7285-v1.patch
>
>
> While working on native erasure coders, it was found useful to allow
> generating and using fixed data to test raw erasure coders, to ease the
> debugging of some coding issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12013) Generate fixed data to perform erasure coder test

2015-05-21 Thread Kai Zheng (JIRA)
Kai Zheng created HADOOP-12013:
--

 Summary: Generate fixed data to perform erasure coder test
 Key: HADOOP-12013
 URL: https://issues.apache.org/jira/browse/HADOOP-12013
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng


While working on native erasure coders, it was found useful to allow generating
and using fixed data to test raw erasure coders, to ease the debugging of some
coding issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11594) Improve the readability of site index of documentation

2015-05-21 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14554189#comment-14554189
 ] 

Akira AJISAKA commented on HADOOP-11594:


Found a dead link when reviewing the patch. Filed YARN-3694.

> Improve the readability of site index of documentation
> --
>
> Key: HADOOP-11594
> URL: https://issues.apache.org/jira/browse/HADOOP-11594
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: HADOOP-11594.001.patch, HADOOP-11594.002.patch, 
> HADOOP-11594.003.patch, HADOOP-11594.004.patch
>
>
> * change the order of the items
> * make redundant titles shorter and fit each on a single line as far as possible



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8793) hadoop-core has dependencies on two different versions of commons-httpclient

2015-05-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-8793:
---
   Resolution: Cannot Reproduce
Fix Version/s: 2.6.0
   Status: Resolved  (was: Patch Available)

We're not seeing this any more; updated dependencies (including Jetty) have made
it go away (for now).

> hadoop-core has dependencies on two different versions of commons-httpclient
> 
>
> Key: HADOOP-8793
> URL: https://issues.apache.org/jira/browse/HADOOP-8793
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.20.205.0
> Environment: Seen on 0.20.205.0, but may be an issue for other 
> versions of hadoop-core (and probably other hadoop builds)
>Reporter: Christopher Tubbs
>Priority: Critical
>  Labels: BB2015-05-TBR, convergence, dependency, hadoop, 
> httpclient, maven
> Fix For: 2.6.0
>
> Attachments: HADOOP-8793.patch
>
>
> hadoop-core fails to enforce dependency convergence, resulting in potential 
> conflicts.
> At the very least, there appears to be a direct dependency on
> {code}commons-httpclient:commons-httpclient:3.0.1{code}
> but a transitive dependency on
> {code}commons-httpclient:commons-httpclient:3.1{code}
> via
> {code}net.java.dev.jets3t:jets3t:0.7.1{code}
> See 
> http://maven.apache.org/enforcer/enforcer-rules/dependencyConvergence.html 
> for details on how to enforce dependency convergence in Maven.
> Please enforce dependency convergence... it helps projects that depend on 
> hadoop libraries build much more reliably and safely.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11594) Improve the readability of site index of documentation

2015-05-21 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14554183#comment-14554183
 ] 

Akira AJISAKA commented on HADOOP-11594:


+1, built the document and the index looks good to me.

> Improve the readability of site index of documentation
> --
>
> Key: HADOOP-11594
> URL: https://issues.apache.org/jira/browse/HADOOP-11594
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: HADOOP-11594.001.patch, HADOOP-11594.002.patch, 
> HADOOP-11594.003.patch, HADOOP-11594.004.patch
>
>
> * change the order of the items
> * make redundant titles shorter and fit each on a single line as far as possible



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10105) remove httpclient dependency

2015-05-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14554180#comment-14554180
 ] 

Steve Loughran commented on HADOOP-10105:
-

Note that if the httpclient dependency is cut, this must go down as an
incompatible change: anything downstream that expected it to be there is in trouble.

> remove httpclient dependency
> 
>
> Key: HADOOP-10105
> URL: https://issues.apache.org/jira/browse/HADOOP-10105
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Colin Patrick McCabe
>Assignee: Akira AJISAKA
>Priority: Minor
> Attachments: HADOOP-10105.2.patch, HADOOP-10105.part.patch, 
> HADOOP-10105.part2.patch, HADOOP-10105.patch
>
>
> httpclient is now end-of-life and is no longer being developed.  Now that we 
> have a dependency on {{httpcore}}, we should phase out our use of the old 
> discontinued {{httpclient}} library in Hadoop.  This will allow us to reduce 
> {{CLASSPATH}} bloat and get updated code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12011) Allow to dump verbose information to ease debugging in raw erasure coders

2015-05-21 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-12011:
---
Fix Version/s: HDFS-7285

> Allow to dump verbose information to ease debugging in raw erasure coders
> -
>
> Key: HADOOP-12011
> URL: https://issues.apache.org/jira/browse/HADOOP-12011
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: HDFS-7285
>
> Attachments: HADOOP-12011-HDFS-7285-v1.patch
>
>
> While working on native erasure coders, it was found useful to dump key
> information, such as the encode/decode matrix and the erasures, for each
> encode/decode call.
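A dump of that kind usually amounts to printing the coding matrix and the
erased unit indexes around each call; a sketch (hypothetical helper, not the
attached patch):
{code}
// Sketch: render the per-call coding state when verbose mode is enabled.
static void dumpCodingState(byte[][] codingMatrix, int[] erasedIndexes) {
  StringBuilder sb = new StringBuilder("coding matrix:\n");
  for (byte[] row : codingMatrix) {
    sb.append("  ").append(java.util.Arrays.toString(row)).append('\n');
  }
  sb.append("erased units: ").append(java.util.Arrays.toString(erasedIndexes));
  System.out.println(sb);
}
{code}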



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12011) Allow to dump verbose information to ease debugging in raw erasure coders

2015-05-21 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-12011:
---
Status: Patch Available  (was: Open)

> Allow to dump verbose information to ease debugging in raw erasure coders
> -
>
> Key: HADOOP-12011
> URL: https://issues.apache.org/jira/browse/HADOOP-12011
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-12011-HDFS-7285-v1.patch
>
>
> While working on native erasure coders, it was found useful to dump key
> information, such as the encode/decode matrix and the erasures, for each
> encode/decode call.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12011) Allow to dump verbose information to ease debugging in raw erasure coders

2015-05-21 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-12011:
---
Attachment: HADOOP-12011-HDFS-7285-v1.patch

Uploaded a patch that allows dumping some information that's useful for debugging.

> Allow to dump verbose information to ease debugging in raw erasure coders
> -
>
> Key: HADOOP-12011
> URL: https://issues.apache.org/jira/browse/HADOOP-12011
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-12011-HDFS-7285-v1.patch
>
>
> While working on native erasure coders, it was found useful to dump key
> information, such as the encode/decode matrix and the erasures, for each
> encode/decode call.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11418) Property "io.compression.codec.lzo.class" does not work with other value besides default

2015-05-21 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-11418:
---
Attachment: HADOOP-11418.005.patch

v5 patch:
* Fix inconsistency between
{code}
  String extClazz = (extClazz_conf == null) ?
  System.getProperty(CONF_LZO_CLASS) : extClazz_conf;
{code}
and
{code}
  String clazz = (extClazz != null) ? extClazz : defaultClazz;
{code}
One is using {{==}}, the other is using {{!=}}.
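Put together, the v5 lookup order becomes: the configured value first, then the
system property, then the built-in default. A sketch of the combined effect:
{code}
String extClazz_conf = conf.get(CONF_LZO_CLASS);
String extClazz = (extClazz_conf == null)
    ? System.getProperty(CONF_LZO_CLASS) : extClazz_conf;
String clazz = (extClazz != null) ? extClazz : defaultClazz;
{code}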

> Property "io.compression.codec.lzo.class" does not work with other value 
> besides default
> 
>
> Key: HADOOP-11418
> URL: https://issues.apache.org/jira/browse/HADOOP-11418
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 2.6.0
>Reporter: fang fang chen
>Assignee: fang fang chen
>  Labels: BB2015-05-RFC
> Attachments: HADOOP-11418-004.patch, HADOOP-11418-1.patch, 
> HADOOP-11418-2.patch, HADOOP-11418-3.patch, HADOOP-11418.005.patch, 
> HADOOP-11418.patch
>
>
> From the following code, it seems "io.compression.codec.lzo.class" does not
> work for codecs other than the default; Hadoop will always treat it as
> defaultClazz. I think it is a bug. Please let me know if this works as
> designed. Thanks.
>   private static final String defaultClazz =
>       "org.apache.hadoop.io.compress.LzoCodec";
>   ...
>   public synchronized boolean isSupported() {
>     if (!checked) {
>       checked = true;
>       String extClazz =
>           (conf.get(CONF_LZO_CLASS) == null ? System
>               .getProperty(CONF_LZO_CLASS) : null);
>       String clazz = (extClazz != null) ? extClazz : defaultClazz;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10366) Add whitespaces between the classes for values in core-default.xml to fit better in browser

2015-05-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14554150#comment-14554150
 ] 

Hudson commented on HADOOP-10366:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #203 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/203/])
HADOOP-10366. Add whitespaces between classes for values in core-default.xml to 
fit better in browser. Contributed by kanaka kumar avvaru. (aajisaka: rev 
0e4f1081c7a98e1c0c4f922f5e2afe467a0d763f)
* hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/TransparentEncryption.md


> Add whitespaces between the classes for values in core-default.xml to fit 
> better in browser
> ---
>
> Key: HADOOP-10366
> URL: https://issues.apache.org/jira/browse/HADOOP-10366
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Chengwei Yang
>Assignee: kanaka kumar avvaru
>Priority: Minor
>  Labels: documentation, newbie
> Fix For: 2.8.0
>
> Attachments: HADOOP-10366-03.patch, HADOOP-10366-04.patch, 
> HADOOP-10366-wrap01, HADOOP-10366-wrap01.patch, HADOOP-10366-wrap02.patch, 
> HADOOP-10366.patch
>
>
> The io.serializations property in core-default.xml has a very long value in a
> single line, as below:
> {code}
> <property>
>   <name>io.serializations</name>
>   <value>org.apache.hadoop.io.serializer.WritableSerialization,org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization,org.apache.hadoop.io.serializer.avro.AvroReflectSerialization</value>
>   <description>A list of serialization classes that can be used for
>   obtaining serializers and deserializers.</description>
> </property>
> {code}
> which not only breaks the code style (a very long line) but also does not fit
> well in the browser. Due to this single very long line, the "description"
> column cannot be shown in the browser by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11772) RPC Invoker relies on static ClientCache which has synchronized(this) blocks

2015-05-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14554151#comment-14554151
 ] 

Hudson commented on HADOOP-11772:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #203 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/203/])
HADOOP-11772. RPC Invoker relies on static ClientCache which has 
synchronized(this) blocks. Contributed by Haohui Mai. (wheat9: rev 
fb6b38d67d8b997eca498fc5010b037e3081ace7)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> RPC Invoker relies on static ClientCache which has synchronized(this) blocks
> 
>
> Key: HADOOP-11772
> URL: https://issues.apache.org/jira/browse/HADOOP-11772
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc, performance
>Reporter: Gopal V
>Assignee: Haohui Mai
> Fix For: 2.8.0
>
> Attachments: HADOOP-11772-001.patch, HADOOP-11772-002.patch, 
> HADOOP-11772-003.patch, HADOOP-11772-wip-001.patch, 
> HADOOP-11772-wip-002.patch, HADOOP-11772.004.patch, after-ipc-fix.png, 
> cached-connections.png, cached-locking.png, dfs-sync-ipc.png, 
> sync-client-bt.png, sync-client-threads.png
>
>
> {code}
>   private static ClientCache CLIENTS=new ClientCache();
> ...
> this.client = CLIENTS.getClient(conf, factory);
> {code}
> Meanwhile in ClientCache
> {code}
> public synchronized Client getClient(Configuration conf,
>     SocketFactory factory, Class<? extends Writable> valueClass) {
> ...
>   Client client = clients.get(factory);
>   if (client == null) {
>     client = new Client(valueClass, conf, factory);
>     clients.put(factory, client);
>   } else {
>     client.incCount();
>   }
> {code}
> All invokers end up calling these methods, resulting in IPC clients choking 
> up.
> !sync-client-threads.png!
> !sync-client-bt.png!
> !dfs-sync-ipc.png!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11772) RPC Invoker relies on static ClientCache which has synchronized(this) blocks

2015-05-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14554092#comment-14554092
 ] 

Hudson commented on HADOOP-11772:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #934 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/934/])
HADOOP-11772. RPC Invoker relies on static ClientCache which has 
synchronized(this) blocks. Contributed by Haohui Mai. (wheat9: rev 
fb6b38d67d8b997eca498fc5010b037e3081ace7)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> RPC Invoker relies on static ClientCache which has synchronized(this) blocks
> 
>
> Key: HADOOP-11772
> URL: https://issues.apache.org/jira/browse/HADOOP-11772
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc, performance
>Reporter: Gopal V
>Assignee: Haohui Mai
> Fix For: 2.8.0
>
> Attachments: HADOOP-11772-001.patch, HADOOP-11772-002.patch, 
> HADOOP-11772-003.patch, HADOOP-11772-wip-001.patch, 
> HADOOP-11772-wip-002.patch, HADOOP-11772.004.patch, after-ipc-fix.png, 
> cached-connections.png, cached-locking.png, dfs-sync-ipc.png, 
> sync-client-bt.png, sync-client-threads.png
>
>
> {code}
>   private static ClientCache CLIENTS=new ClientCache();
> ...
> this.client = CLIENTS.getClient(conf, factory);
> {code}
> Meanwhile in ClientCache
> {code}
> public synchronized Client getClient(Configuration conf,
>     SocketFactory factory, Class<? extends Writable> valueClass) {
> ...
>   Client client = clients.get(factory);
>   if (client == null) {
>     client = new Client(valueClass, conf, factory);
>     clients.put(factory, client);
>   } else {
>     client.incCount();
>   }
> {code}
> All invokers end up calling these methods, resulting in IPC clients choking 
> up.
> !sync-client-threads.png!
> !sync-client-bt.png!
> !dfs-sync-ipc.png!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10366) Add whitespaces between the classes for values in core-default.xml to fit better in browser

2015-05-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14554091#comment-14554091
 ] 

Hudson commented on HADOOP-10366:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #934 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/934/])
HADOOP-10366. Add whitespaces between classes for values in core-default.xml to 
fit better in browser. Contributed by kanaka kumar avvaru. (aajisaka: rev 
0e4f1081c7a98e1c0c4f922f5e2afe467a0d763f)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/TransparentEncryption.md


> Add whitespaces between the classes for values in core-default.xml to fit 
> better in browser
> ---
>
> Key: HADOOP-10366
> URL: https://issues.apache.org/jira/browse/HADOOP-10366
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Chengwei Yang
>Assignee: kanaka kumar avvaru
>Priority: Minor
>  Labels: documentation, newbie
> Fix For: 2.8.0
>
> Attachments: HADOOP-10366-03.patch, HADOOP-10366-04.patch, 
> HADOOP-10366-wrap01, HADOOP-10366-wrap01.patch, HADOOP-10366-wrap02.patch, 
> HADOOP-10366.patch
>
>
> The io.serializations property in core-default.xml has a very long value in a
> single line, as below:
> {code}
> <property>
>   <name>io.serializations</name>
>   <value>org.apache.hadoop.io.serializer.WritableSerialization,org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization,org.apache.hadoop.io.serializer.avro.AvroReflectSerialization</value>
>   <description>A list of serialization classes that can be used for
>   obtaining serializers and deserializers.</description>
> </property>
> {code}
> which not only breaks the code style (a very long line) but also does not fit
> well in the browser. Due to this single very long line, the "description"
> column cannot be shown in the browser by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12012) Investigate JNI for improving byte array comparison performance

2015-05-21 Thread Alan Burlison (JIRA)
Alan Burlison created HADOOP-12012:
--

 Summary: Investigate JNI for improving byte array comparison 
performance
 Key: HADOOP-12012
 URL: https://issues.apache.org/jira/browse/HADOOP-12012
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: benchmarks, io, performance
Affects Versions: 2.7.0
 Environment: All
Reporter: Alan Burlison
Assignee: Alan Burlison
Priority: Minor


HADOOP-7761 added functionality to compare byte arrays by treating them as
arrays of 64-bit longs for performance. However, HADOOP-11466 reverted this
change for the SPARC architecture, as it causes misaligned traps, which make
performance worse rather than better.

Most platforms have a highly-optimised memcmp() libc function that uses
processor-specific functionality to perform byte array comparison as quickly as
possible for the platform.

We have done some preliminary benchmarking on Solaris which suggests that, for
reasonably-sized byte arrays, JNI code using memcmp() outperforms both of the
current Java byte-wise and long-wise implementations on both SPARC and x86. We
are confirming the results and will repeat the same benchmark on Linux and
report the results here for discussion.
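The Java side of such an investigation is just a native binding whose C
implementation would pin the arrays and call memcmp(); a sketch with
hypothetical names (nothing here is a committed Hadoop API):
{code}
// Sketch: Java declaration of a memcmp()-backed comparison. The native
// implementation would use GetPrimitiveArrayCritical() and memcmp().
public final class NativeBytesComparator {
  static {
    System.loadLibrary("hadoopcmp");   // hypothetical library name
  }
  // Returns <0, 0 or >0 with memcmp() semantics over the given ranges.
  public static native int compare(byte[] left, int leftOffset,
      byte[] right, int rightOffset, int length);
}
{code}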



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10366) Add whitespaces between the classes for values in core-default.xml to fit better in browser

2015-05-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14553947#comment-14553947
 ] 

Hudson commented on HADOOP-10366:
-

FAILURE: Integrated in Hadoop-trunk-Commit #7880 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7880/])
HADOOP-10366. Add whitespaces between classes for values in core-default.xml to 
fit better in browser. Contributed by kanaka kumar avvaru. (aajisaka: rev 
0e4f1081c7a98e1c0c4f922f5e2afe467a0d763f)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/TransparentEncryption.md


> Add whitespaces between the classes for values in core-default.xml to fit 
> better in browser
> ---
>
> Key: HADOOP-10366
> URL: https://issues.apache.org/jira/browse/HADOOP-10366
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Chengwei Yang
>Assignee: kanaka kumar avvaru
>Priority: Minor
>  Labels: documentation, newbie
> Fix For: 2.8.0
>
> Attachments: HADOOP-10366-03.patch, HADOOP-10366-04.patch, 
> HADOOP-10366-wrap01, HADOOP-10366-wrap01.patch, HADOOP-10366-wrap02.patch, 
> HADOOP-10366.patch
>
>
> The io.serializations property in core-default.xml has a very long value in a
> single line, as below:
> {code}
> <property>
>   <name>io.serializations</name>
>   <value>org.apache.hadoop.io.serializer.WritableSerialization,org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization,org.apache.hadoop.io.serializer.avro.AvroReflectSerialization</value>
>   <description>A list of serialization classes that can be used for
>   obtaining serializers and deserializers.</description>
> </property>
> {code}
> which not only breaks the code style (a very long line) but also does not fit
> well in the browser. Due to this single very long line, the "description"
> column cannot be shown in the browser by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9922) hadoop windows native build will fail in 32 bit machine

2015-05-21 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14553926#comment-14553926
 ] 

Vinayakumar B commented on HADOOP-9922:
---

I think that, as of now, for 2.5.2 you can try out HADOOP-9922-002.patch.

> hadoop windows native build will fail in 32 bit machine
> ---
>
> Key: HADOOP-9922
> URL: https://issues.apache.org/jira/browse/HADOOP-9922
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, native
>Affects Versions: 3.0.0, 2.1.1-beta
>Reporter: Vinayakumar B
>Assignee: Kiran Kumar M R
> Fix For: 2.7.0
>
> Attachments: HADOOP-9922-002.patch, HADOOP-9922-003.patch, 
> HADOOP-9922-004.patch, HADOOP-9922-005.patch, HADOOP-9922.patch, patch 
> error.txt
>
>
> Building Hadoop in windows 32 bit machine fails as native project is not 
> having Win32 configuration



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10366) Add whitespaces between the classes for values in core-default.xml to fit better in browser

2015-05-21 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10366:
---
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-2. Thanks [~kanaka] for the contribution!

> Add whitespaces between the classes for values in core-default.xml to fit 
> better in browser
> ---
>
> Key: HADOOP-10366
> URL: https://issues.apache.org/jira/browse/HADOOP-10366
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Chengwei Yang
>Assignee: kanaka kumar avvaru
>Priority: Minor
>  Labels: documentation, newbie
> Fix For: 2.8.0
>
> Attachments: HADOOP-10366-03.patch, HADOOP-10366-04.patch, 
> HADOOP-10366-wrap01, HADOOP-10366-wrap01.patch, HADOOP-10366-wrap02.patch, 
> HADOOP-10366.patch
>
>
> The io.serializations property in core-default.xml has a very long value in a
> single line, as below:
> {code}
> <property>
>   <name>io.serializations</name>
>   <value>org.apache.hadoop.io.serializer.WritableSerialization,org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization,org.apache.hadoop.io.serializer.avro.AvroReflectSerialization</value>
>   <description>A list of serialization classes that can be used for
>   obtaining serializers and deserializers.</description>
> </property>
> {code}
> which not only breaks the code style (a very long line) but also does not fit
> well in the browser. Due to this single very long line, the "description"
> column cannot be shown in the browser by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10366) Add whitespaces between the classes for values in core-default.xml to fit better in browser

2015-05-21 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14553905#comment-14553905
 ] 

Akira AJISAKA commented on HADOOP-10366:


+1. I'll commit this shortly.

> Add whitespaces between the classes for values in core-default.xml to fit 
> better in browser
> ---
>
> Key: HADOOP-10366
> URL: https://issues.apache.org/jira/browse/HADOOP-10366
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Chengwei Yang
>Assignee: kanaka kumar avvaru
>Priority: Minor
>  Labels: documentation, newbie
> Attachments: HADOOP-10366-03.patch, HADOOP-10366-04.patch, 
> HADOOP-10366-wrap01, HADOOP-10366-wrap01.patch, HADOOP-10366-wrap02.patch, 
> HADOOP-10366.patch
>
>
> The io.serializations property in core-default.xml has a very long value in a
> single line, as below:
> {code}
> <property>
>   <name>io.serializations</name>
>   <value>org.apache.hadoop.io.serializer.WritableSerialization,org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization,org.apache.hadoop.io.serializer.avro.AvroReflectSerialization</value>
>   <description>A list of serialization classes that can be used for
>   obtaining serializers and deserializers.</description>
> </property>
> {code}
> which not only breaks the code style (a very long line) but also does not fit
> well in the browser. Due to this single very long line, the "description"
> column cannot be shown in the browser by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10366) Add whitespaces between the classes for values in core-default.xml to fit better in browser

2015-05-21 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10366:
---
Priority: Minor  (was: Trivial)
  Labels: documentation newbie  (was: BB2015-05-RFC documentation 
newbie patch)
 Summary: Add whitespaces between the classes for values in 
core-default.xml to fit better in browser  (was: [Doc] wrap value of 
io.serializations in core-default.xml to fit better in browser)
Hadoop Flags: Reviewed

> Add whitespaces between the classes for values in core-default.xml to fit 
> better in browser
> ---
>
> Key: HADOOP-10366
> URL: https://issues.apache.org/jira/browse/HADOOP-10366
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Chengwei Yang
>Assignee: kanaka kumar avvaru
>Priority: Minor
>  Labels: documentation, newbie
> Attachments: HADOOP-10366-03.patch, HADOOP-10366-04.patch, 
> HADOOP-10366-wrap01, HADOOP-10366-wrap01.patch, HADOOP-10366-wrap02.patch, 
> HADOOP-10366.patch
>
>
> The io.serializations property in core-default.xml has a very long value in a
> single line, as below:
> {code}
> <property>
>   <name>io.serializations</name>
>   <value>org.apache.hadoop.io.serializer.WritableSerialization,org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization,org.apache.hadoop.io.serializer.avro.AvroReflectSerialization</value>
>   <description>A list of serialization classes that can be used for
>   obtaining serializers and deserializers.</description>
> </property>
> {code}
> which not only breaks the code style (a very long line) but also does not fit
> well in the browser. Due to this single very long line, the "description"
> column cannot be shown in the browser by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9922) hadoop windows native build will fail in 32 bit machine

2015-05-21 Thread Kiran Kumar M R (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14553895#comment-14553895
 ] 

Kiran Kumar M R commented on HADOOP-9922:
-

I suggest you download and start with the 2.7 version. Once you are able to
compile and run on Win32 or Win64, you can consider rebasing this patch to 2.5.2.

Refer to this link for the compilation steps:
http://zutai.blogspot.com/2014/06/build-install-and-run-hadoop-24-240-on.html?showComment=1422091525887#c2264594416650430988



> hadoop windows native build will fail in 32 bit machine
> ---
>
> Key: HADOOP-9922
> URL: https://issues.apache.org/jira/browse/HADOOP-9922
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, native
>Affects Versions: 3.0.0, 2.1.1-beta
>Reporter: Vinayakumar B
>Assignee: Kiran Kumar M R
> Fix For: 2.7.0
>
> Attachments: HADOOP-9922-002.patch, HADOOP-9922-003.patch, 
> HADOOP-9922-004.patch, HADOOP-9922-005.patch, HADOOP-9922.patch, patch 
> error.txt
>
>
> Building Hadoop in windows 32 bit machine fails as native project is not 
> having Win32 configuration



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9922) hadoop windows native build will fail in 32 bit machine

2015-05-21 Thread Rohan Kulkarni (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14553873#comment-14553873
 ] 

Rohan Kulkarni commented on HADOOP-9922:


Yes Kiran, I would be interested if I knew what has to be done. Frankly, I
played with this the whole day in Visual Studio but got nowhere. I am new to
Hadoop and don't understand most of the concepts; I got stuck on this at the
wrong time, I guess.

> hadoop windows native build will fail in 32 bit machine
> ---
>
> Key: HADOOP-9922
> URL: https://issues.apache.org/jira/browse/HADOOP-9922
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, native
>Affects Versions: 3.0.0, 2.1.1-beta
>Reporter: Vinayakumar B
>Assignee: Kiran Kumar M R
> Fix For: 2.7.0
>
> Attachments: HADOOP-9922-002.patch, HADOOP-9922-003.patch, 
> HADOOP-9922-004.patch, HADOOP-9922-005.patch, HADOOP-9922.patch, patch 
> error.txt
>
>
> Building Hadoop in windows 32 bit machine fails as native project is not 
> having Win32 configuration



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9922) hadoop windows native build will fail in 32 bit machine

2015-05-21 Thread Kiran Kumar M R (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14553859#comment-14553859
 ] 

Kiran Kumar M R commented on HADOOP-9922:
-

The patch was given for the 2.7 version and trunk (3.0).
I see that you are using 2.5.2; can you upgrade to 2.7?
If you download 2.7, this fix is already included.

If you cannot upgrade and want to continue with 2.5.2, then the patch has to be
rebased.
Are you interested in rebasing and submitting a patch for 2.5.x?



> hadoop windows native build will fail in 32 bit machine
> ---
>
> Key: HADOOP-9922
> URL: https://issues.apache.org/jira/browse/HADOOP-9922
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, native
>Affects Versions: 3.0.0, 2.1.1-beta
>Reporter: Vinayakumar B
>Assignee: Kiran Kumar M R
> Fix For: 2.7.0
>
> Attachments: HADOOP-9922-002.patch, HADOOP-9922-003.patch, 
> HADOOP-9922-004.patch, HADOOP-9922-005.patch, HADOOP-9922.patch, patch 
> error.txt
>
>
> Building Hadoop in windows 32 bit machine fails as native project is not 
> having Win32 configuration



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-8751) NPE in Token.toString() when Token is constructed using null identifier

2015-05-21 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14553806#comment-14553806
 ] 

Akira AJISAKA commented on HADOOP-8751:
---

Thanks [~kanaka] for taking this issue. Mostly looks good to me. Two comments:
1. For Token.addBinaryBuffer, we can drop the null check for {{bytes}} since the
variable is guaranteed to be non-null.

{code}
+assertEquals(token1, token2);
+assertEquals(token1.toString(), token2.toString());
+assertEquals(token1.encodeToUrlString(), token2.encodeToUrlString());
{code}
2. I think comparing {{Token.toString()}} is redundant and can be removed.

> NPE in Token.toString() when Token is constructed using null identifier
> ---
>
> Key: HADOOP-8751
> URL: https://issues.apache.org/jira/browse/HADOOP-8751
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.0-alpha
>Reporter: Vlad Rozov
>Assignee: kanaka kumar avvaru
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-8751-01.patch, HADOOP-8751-01.patch, 
> HADOOP-8751-02.patch, HADOOP-8751.patch
>
>
> The Token constructor allows null to be passed, leading to an NPE in
> Token.toString(). A simple fix is to check for null in the constructor and use
> empty byte arrays.
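The fix described in the summary is a couple of defensive assignments in the
constructor; a minimal sketch of the direction (not the attached patch):
{code}
// Sketch: substitute empty values for null so toString() and the
// URL-string round trip never dereference a null identifier/password.
public Token(byte[] identifier, byte[] password, Text kind, Text service) {
  this.identifier = (identifier == null) ? new byte[0] : identifier;
  this.password = (password == null) ? new byte[0] : password;
  this.kind = (kind == null) ? new Text() : kind;
  this.service = (service == null) ? new Text() : service;
}
{code}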



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8751) NPE in Token.toString() when Token is constructed using null identifier

2015-05-21 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-8751:
--
Labels: BB2015-05-TBR  (was: BB2015-05-RFC)

> NPE in Token.toString() when Token is constructed using null identifier
> ---
>
> Key: HADOOP-8751
> URL: https://issues.apache.org/jira/browse/HADOOP-8751
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.0-alpha
>Reporter: Vlad Rozov
>Assignee: kanaka kumar avvaru
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-8751-01.patch, HADOOP-8751-01.patch, 
> HADOOP-8751-02.patch, HADOOP-8751.patch
>
>
> The Token constructor allows null to be passed, leading to an NPE in
> Token.toString(). A simple fix is to check for null in the constructor and use
> empty byte arrays.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)