[jira] [Commented] (HDFS-13960) hdfs dfs -checksum command should optionally show block size in output

2019-04-03 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16808508#comment-16808508
 ] 

Adam Antal commented on HDFS-13960:
---

Great, thanks [~ljain] for working on this and [~jojochuang] for the commit!

> hdfs dfs -checksum command should optionally show block size in output
> --
>
> Key: HDFS-13960
> URL: https://issues.apache.org/jira/browse/HDFS-13960
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Adam Antal
>Assignee: Lokesh Jain
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-13960.001.patch, HDFS-13960.002.patch, 
> HDFS-13960.003.patch
>
>
> The hdfs checksum command computes the checksum in a distributed manner, 
> which takes the block size into account. In other words, the block size 
> determines how the file is broken up.
> Therefore the checksum command can produce different outputs for the exact 
> same file contents when only the block size differs: 
> checksum(fileABlock1) + checksum(fileABlock2) != checksum(fileABlock1 + 
> fileABlock2)
> I suggest adding an option to the hdfs dfs -checksum command that would 
> display the block size along with the output, which could also be helpful 
> in other cases where this piece of information is needed.
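For illustration, a minimal sketch of reading both pieces of information through the public FileSystem API (the class name and path are made up, and the output format is only an approximation of what the new option prints):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileChecksum;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ChecksumWithBlockSize {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path file = new Path("/tmp/fileA");  // made-up example path

    // The checksum depends on how the file was split into blocks,
    // so two copies of the same bytes can yield different checksums.
    FileChecksum checksum = fs.getFileChecksum(file);
    long blockSize = fs.getFileStatus(file).getBlockSize();

    // Printing the block size next to the checksum makes two outputs comparable.
    System.out.printf("%s\t%s\t%d%n", file,
        checksum == null ? "NONE" : checksum.toString(), blockSize);
  }
}
{code}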



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13960) hdfs dfs -checksum command should optionally show block size in output

2019-04-02 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16808088#comment-16808088
 ] 

Hudson commented on HDFS-13960:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16329 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16329/])
HDFS-13960. hdfs dfs -checksum command should optionally show block size 
(weichiu: rev cf268114c9af2e33f35d0c24b57e31ef4d5e8353)
* (edit) 
hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md
* (edit) hadoop-common-project/hadoop-common/src/test/resources/testConf.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Display.java


> hdfs dfs -checksum command should optionally show block size in output
> --
>
> Key: HDFS-13960
> URL: https://issues.apache.org/jira/browse/HDFS-13960
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Adam Antal
>Assignee: Lokesh Jain
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-13960.001.patch, HDFS-13960.002.patch, 
> HDFS-13960.003.patch
>
>
> The hdfs checksum command computes the checksum in a distributed manner, 
> which takes the block size into account. In other words, the block size 
> determines how the file is broken up.
> Therefore the checksum command can produce different outputs for the exact 
> same file contents when only the block size differs: 
> checksum(fileABlock1) + checksum(fileABlock2) != checksum(fileABlock1 + 
> fileABlock2)
> I suggest adding an option to the hdfs dfs -checksum command that would 
> display the block size along with the output, which could also be helpful 
> in other cases where this piece of information is needed.






[jira] [Commented] (HDFS-13960) hdfs dfs -checksum command should optionally show block size in output

2019-04-02 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16808062#comment-16808062
 ] 

Wei-Chiu Chuang commented on HDFS-13960:


+1. The failed tests are unrelated. Will commit soon.

> hdfs dfs -checksum command should optionally show block size in output
> --
>
> Key: HDFS-13960
> URL: https://issues.apache.org/jira/browse/HDFS-13960
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Adam Antal
>Assignee: Lokesh Jain
>Priority: Minor
> Attachments: HDFS-13960.001.patch, HDFS-13960.002.patch, 
> HDFS-13960.003.patch
>
>
> The hdfs checksum command computes the checksum in a distributed manner, 
> which takes the block size into account. In other words, the block size 
> determines how the file is broken up.
> Therefore the checksum command can produce different outputs for the exact 
> same file contents when only the block size differs: 
> checksum(fileABlock1) + checksum(fileABlock2) != checksum(fileABlock1 + 
> fileABlock2)
> I suggest adding an option to the hdfs dfs -checksum command that would 
> display the block size along with the output, which could also be helpful 
> in other cases where this piece of information is needed.






[jira] [Commented] (HDFS-13960) hdfs dfs -checksum command should optionally show block size in output

2018-11-22 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16695926#comment-16695926
 ] 

Adam Antal commented on HDFS-13960:
---

Awesome, thanks for taking care of the items, [~ljain].

The patch looks good to me, but a second opinion would be great as well.

> hdfs dfs -checksum command should optionally show block size in output
> --
>
> Key: HDFS-13960
> URL: https://issues.apache.org/jira/browse/HDFS-13960
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Adam Antal
>Assignee: Lokesh Jain
>Priority: Minor
> Attachments: HDFS-13960.001.patch, HDFS-13960.002.patch, 
> HDFS-13960.003.patch
>
>
> The hdfs checksum command computes the checksum in a distributed manner, 
> which takes the block size into account. In other words, the block size 
> determines how the file is broken up.
> Therefore the checksum command can produce different outputs for the exact 
> same file contents when only the block size differs: 
> checksum(fileABlock1) + checksum(fileABlock2) != checksum(fileABlock1 + 
> fileABlock2)
> I suggest adding an option to the hdfs dfs -checksum command that would 
> display the block size along with the output, which could also be helpful 
> in other cases where this piece of information is needed.






[jira] [Commented] (HDFS-13960) hdfs dfs -checksum command should optionally show block size in output

2018-11-22 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16695707#comment-16695707
 ] 

Hadoop QA commented on HDFS-13960:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
8s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 38s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
49s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 38s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}188m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-13960 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12949145/HDFS-13960.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 7ddbce131a82 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personali

[jira] [Commented] (HDFS-13960) hdfs dfs -checksum command should optionally show block size in output

2018-11-21 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16695580#comment-16695580
 ] 

Lokesh Jain commented on HDFS-13960:


The v3 patch fixes a spelling error that caused the TestDFSShell failure.

> hdfs dfs -checksum command should optionally show block size in output
> --
>
> Key: HDFS-13960
> URL: https://issues.apache.org/jira/browse/HDFS-13960
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Adam Antal
>Assignee: Lokesh Jain
>Priority: Minor
> Attachments: HDFS-13960.001.patch, HDFS-13960.002.patch, 
> HDFS-13960.003.patch
>
>
> The hdfs checksum command computes the checksum in a distributed manner, 
> which takes the block size into account. In other words, the block size 
> determines how the file is broken up.
> Therefore the checksum command can produce different outputs for the exact 
> same file contents when only the block size differs: 
> checksum(fileABlock1) + checksum(fileABlock2) != checksum(fileABlock1 + 
> fileABlock2)
> I suggest adding an option to the hdfs dfs -checksum command that would 
> display the block size along with the output, which could also be helpful 
> in other cases where this piece of information is needed.






[jira] [Commented] (HDFS-13960) hdfs dfs -checksum command should optionally show block size in output

2018-11-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16695195#comment-16695195
 ] 

Hadoop QA commented on HDFS-13960:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
58s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 21s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
50s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}180m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestDFSShell |
|   | hadoop.hdfs.server.balancer.TestBalancer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-13960 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12949072/HDFS-13960.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 1035e2a38e1d 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precomm

[jira] [Commented] (HDFS-13960) hdfs dfs -checksum command should optionally show block size in output

2018-11-21 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16694928#comment-16694928
 ] 

Lokesh Jain commented on HDFS-13960:


[~adam.antal] Thanks for reviewing the patch! The v2 patch addresses your comments.

> hdfs dfs -checksum command should optionally show block size in output
> --
>
> Key: HDFS-13960
> URL: https://issues.apache.org/jira/browse/HDFS-13960
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Adam Antal
>Assignee: Lokesh Jain
>Priority: Minor
> Attachments: HDFS-13960.001.patch, HDFS-13960.002.patch
>
>
> The hdfs checksum command computes the checksum in a distributed manner, 
> which takes the block size into account. In other words, the block size 
> determines how the file is broken up.
> Therefore the checksum command can produce different outputs for the exact 
> same file contents when only the block size differs: 
> checksum(fileABlock1) + checksum(fileABlock2) != checksum(fileABlock1 + 
> fileABlock2)
> I suggest adding an option to the hdfs dfs -checksum command that would 
> display the block size along with the output, which could also be helpful 
> in other cases where this piece of information is needed.






[jira] [Commented] (HDFS-13960) hdfs dfs -checksum command should optionally show block size in output

2018-11-20 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16693205#comment-16693205
 ] 

Adam Antal commented on HDFS-13960:
---

Thanks for the patch [~ljain], good work!
 I have some minor remarks on patch v001:
 - The CLI test fails: the expectation in {{testConf.xml:715}}
{code:xml}

  RegexpComparator
  ^-checksum  \.\.\. :\s*

{code}
should be updated, since the output of the checksum command has changed as well.

 - I think in {{TestDFSShell.java:1143}}
{code:java}
...
} finally {
  if (printStream != null) {
System.setOut(printStream);
  }
}
{code}
If {{System.out}} was null at the beginning, the created {{PrintStream}} is left 
installed as {{System.out}}, which I think is not good for the following tests. 
I believe the condition here is unnecessary: if {{System.out}} were somehow null, 
I would rather fail with an assertion error. Also, it can only be null if 
{{System.out}} was already null at the beginning of the test, so an assertion 
there seems justified.
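Something along these lines is what I have in mind (just a sketch; the names are illustrative and not taken from the patch):
{code:java}
// Save the original stream up front and assert on it, then restore it
// unconditionally so the redirected PrintStream never leaks into later tests.
final PrintStream originalOut = System.out;
assertNotNull("System.out should not be null before the test", originalOut);
final ByteArrayOutputStream captured = new ByteArrayOutputStream();
System.setOut(new PrintStream(captured, true));
try {
  // ... run the -checksum command and assert on the captured output ...
} finally {
  System.setOut(originalOut);
}
{code}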

 - In {{Display.java:203}}
{code:java}
  FileChecksum checksum = item.fs.getFileChecksum(item.path);
  if (checksum == null) {
out.printf("%s\tNONE\t%n", item.toString());
  } else {
  ...
{code}
In my opinion, if {{-v}} was provided intentionally, the block size should be 
displayed even when the checksum is null. Also, we should handle the case where 
the FileStatus object is null in order to avoid an NPE: some placeholder text 
such as "Unknown" should be displayed, just like "NONE" is when the checksum is 
not found.
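Roughly what I mean (only a sketch; the {{verbose}} flag name, the "Unknown" text, and the exact format are placeholders, not the patch's actual code):
{code:java}
FileChecksum checksum = item.fs.getFileChecksum(item.path);
FileStatus status = item.fs.getFileStatus(item.path);
// Guard against a null FileStatus so getBlockSize() cannot NPE.
String blockSize =
    (status != null) ? String.valueOf(status.getBlockSize()) : "Unknown";
if (checksum == null) {
  // Even with a missing checksum, show the block size when -v was given.
  out.printf("%s\tNONE\t%s%n", item.toString(), verbose ? blockSize : "");
} else {
  ...
}
{code}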

 - I guess that one checkstyle warning can be ignored (the input parameters are 
initialized that way).

> hdfs dfs -checksum command should optionally show block size in output
> --
>
> Key: HDFS-13960
> URL: https://issues.apache.org/jira/browse/HDFS-13960
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Adam Antal
>Assignee: Lokesh Jain
>Priority: Minor
> Attachments: HDFS-13960.001.patch
>
>
> The hdfs checksum command computes the checksum in a distributed manner, 
> which takes the block size into account. In other words, the block size 
> determines how the file is broken up.
> Therefore the checksum command can produce different outputs for the exact 
> same file contents when only the block size differs: 
> checksum(fileABlock1) + checksum(fileABlock2) != checksum(fileABlock1 + 
> fileABlock2)
> I suggest adding an option to the hdfs dfs -checksum command that would 
> display the block size along with the output, which could also be helpful 
> in other cases where this piece of information is needed.






[jira] [Commented] (HDFS-13960) hdfs dfs -checksum command should optionally show block size in output

2018-11-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16691974#comment-16691974
 ] 

Hadoop QA commented on HDFS-13960:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
58s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m  4s{color} | {color:orange} root: The patch generated 1 new + 203 unchanged 
- 0 fixed = 204 total (was 203) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 23s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 28s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
49s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}183m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.cli.TestCLI |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-13960 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948720/HDFS-13960.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 879d71014b0d 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9366608 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1

[jira] [Commented] (HDFS-13960) hdfs dfs -checksum command should optionally show block size in output

2018-10-30 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16668406#comment-16668406
 ] 

Adam Antal commented on HDFS-13960:
---

Great, thanks! It's a low-priority issue, so it's not urgent; I was just checking in.

> hdfs dfs -checksum command should optionally show block size in output
> --
>
> Key: HDFS-13960
> URL: https://issues.apache.org/jira/browse/HDFS-13960
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Adam Antal
>Assignee: Lokesh Jain
>Priority: Minor
>
> The hdfs checksum command computes the checksum in a distributed manner, 
> which takes the block size into account. In other words, the block size 
> determines how the file is broken up.
> Therefore the checksum command can produce different outputs for the exact 
> same file contents when only the block size differs: 
> checksum(fileABlock1) + checksum(fileABlock2) != checksum(fileABlock1 + 
> fileABlock2)
> I suggest adding an option to the hdfs dfs -checksum command that would 
> display the block size along with the output, which could also be helpful 
> in other cases where this piece of information is needed.






[jira] [Commented] (HDFS-13960) hdfs dfs -checksum command should optionally show block size in output

2018-10-30 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16668368#comment-16668368
 ] 

Lokesh Jain commented on HDFS-13960:


[~adam.antal] I am working on it. I will upload a patch in a couple of days.

> hdfs dfs -checksum command should optionally show block size in output
> --
>
> Key: HDFS-13960
> URL: https://issues.apache.org/jira/browse/HDFS-13960
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Adam Antal
>Assignee: Lokesh Jain
>Priority: Minor
>
> The hdfs checksum command computes the checksum in a distributed manner, 
> which takes the block size into account. In other words, the block size 
> determines how the file is broken up.
> Therefore the checksum command can produce different outputs for the exact 
> same file contents when only the block size differs: 
> checksum(fileABlock1) + checksum(fileABlock2) != checksum(fileABlock1 + 
> fileABlock2)
> I suggest adding an option to the hdfs dfs -checksum command that would 
> display the block size along with the output, which could also be helpful 
> in other cases where this piece of information is needed.






[jira] [Commented] (HDFS-13960) hdfs dfs -checksum command should optionally show block size in output

2018-10-26 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664949#comment-16664949
 ] 

Adam Antal commented on HDFS-13960:
---

Hi [~ljain], did you have a chance to look at this issue?

> hdfs dfs -checksum command should optionally show block size in output
> --
>
> Key: HDFS-13960
> URL: https://issues.apache.org/jira/browse/HDFS-13960
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Adam Antal
>Assignee: Lokesh Jain
>Priority: Minor
>
> The hdfs checksum command computes the checksum in a distributed manner, 
> which takes the block size into account. In other words, the block size 
> determines how the file is broken up.
> Therefore the checksum command can produce different outputs for the exact 
> same file contents when only the block size differs: 
> checksum(fileABlock1) + checksum(fileABlock2) != checksum(fileABlock1 + 
> fileABlock2)
> I suggest adding an option to the hdfs dfs -checksum command that would 
> display the block size along with the output, which could also be helpful 
> in other cases where this piece of information is needed.


