[jira] [Commented] (HDFS-3286) When the threshold value for balancer is 0(zero) ,unexpected output is displayed

2012-04-23 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13260296#comment-13260296
 ] 

Uma Maheswara Rao G commented on HDFS-3286:
---

@Nicholas, do we need to mark this as an incompatible change, since we now 
throw IllegalArgumentException instead of NumberFormatException?
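
For reference, a minimal sketch of the kind of threshold validation being discussed (the exact bounds, message, and method name are assumptions for illustration, not taken from the attached HDFS-3286.patch):
{code}
// Hypothetical helper: reject out-of-range thresholds with an
// IllegalArgumentException instead of only relying on Double.parseDouble's
// NumberFormatException. Bounds and message text are illustrative only.
static double parseThreshold(String value) {
  double threshold = Double.parseDouble(value); // still throws NumberFormatException for non-numbers
  if (threshold <= 0 || threshold > 100) {
    throw new IllegalArgumentException(
        "Threshold out of range (0, 100]: " + threshold);
  }
  return threshold;
}
{code}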

> When the threshold value for balancer is 0(zero) ,unexpected output is 
> displayed
> 
>
> Key: HDFS-3286
> URL: https://issues.apache.org/jira/browse/HDFS-3286
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer
>Affects Versions: 0.23.0
>Reporter: J.Andreina
>Assignee: Ashish Singhi
> Fix For: 0.24.0
>
> Attachments: HDFS-3286.patch
>
>
> Replication factor =1
> Step 1: Start NN,DN1.write 4 GB of data
> Step 2: Start DN2
> Step 3: issue the balancer command(./hdfs balancer -threshold 0)
> The threshold parameter is a fraction in the range of (0%, 100%) with a 
> default value of 10%.
> When the above scenario is executed, the source DN and target DN are chosen 
> and the number of bytes to be moved from source to target DN is also 
> calculated.
> Then the balancer exits with the message "No block can be 
> moved. Exiting...", which is not expected.
> {noformat}
> HOST-xx-xx-xx-xx:/home/Andreina/APril10/install/hadoop/namenode/bin # ./hdfs 
> balancer -threshold 0
> 12/04/16 16:22:07 INFO balancer.Balancer: Using a threshold of 0.0
> 12/04/16 16:22:07 INFO balancer.Balancer: namenodes = 
> [hdfs://HOST-xx-xx-xx-xx:9000]
> 12/04/16 16:22:07 INFO balancer.Balancer: p = 
> Balancer.Parameters[BalancingPolicy.Node, threshold=0.0]
> Time Stamp   Iteration#  Bytes Already Moved  Bytes Left To Move  
> Bytes Being Moved
> 12/04/16 16:22:10 INFO net.NetworkTopology: Adding a new node: 
> /default-rack/yy.yy.yy.yy:50176
> 12/04/16 16:22:10 INFO net.NetworkTopology: Adding a new node: 
> /default-rack/xx.xx.xx.xx:50010
> 12/04/16 16:22:10 INFO balancer.Balancer: 1 over-utilized: 
> [Source[xx.xx.xx.xx:50010, utilization=7.212458091389678]]
> 12/04/16 16:22:10 INFO balancer.Balancer: 1 underutilized: 
> [BalancerDatanode[yy.yy.yy.yy:50176, utilization=4.650670324367203E-5]]
> 12/04/16 16:22:10 INFO balancer.Balancer: Need to move 1.77 GB to make the 
> cluster balanced.
> No block can be moved. Exiting...
> Balancing took 5.142 seconds
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3286) When the threshold value for balancer is 0(zero) ,unexpected output is displayed

2012-04-23 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13260293#comment-13260293
 ] 

Uma Maheswara Rao G commented on HDFS-3286:
---

Ashish, the patch looks good. Some comments:

1) The doTest javadoc may need to be updated with the parameters.

2) 'If null Balancer will take the default values.' --- possible typo here?

3) Since Cli is a static class, we can call its parse API directly. Why not 
add the tests directly against it, covering all the boundary values? As a 
unit test that should be sufficient for this change and will also ensure all 
boundary conditions are satisfied, so we need not start a cluster or the 
balancer (see the test sketch below).
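
To make comment 3 concrete, a hypothetical unit-test sketch (JUnit). The visibility and exact signature of Balancer.Cli.parse, and the specific rejected values, are assumptions based on this discussion, not taken from the attached patch:
{code}
import static org.junit.Assert.fail;
import org.junit.Test;
import org.apache.hadoop.hdfs.server.balancer.Balancer;

public class TestBalancerCliParse {
  @Test
  public void testThresholdBoundaryValues() {
    // values inside the documented range should parse without throwing
    Balancer.Cli.parse(new String[] {"-threshold", "1"});
    Balancer.Cli.parse(new String[] {"-threshold", "50"});

    // out-of-range values should be rejected with IllegalArgumentException
    for (String bad : new String[] {"0", "-10"}) {
      try {
        Balancer.Cli.parse(new String[] {"-threshold", bad});
        fail("threshold " + bad + " should have been rejected");
      } catch (IllegalArgumentException expected) {
        // expected
      }
    }
  }
}
{code}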





> When the threshold value for balancer is 0(zero) ,unexpected output is 
> displayed
> 
>
> Key: HDFS-3286
> URL: https://issues.apache.org/jira/browse/HDFS-3286
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer
>Affects Versions: 0.23.0
>Reporter: J.Andreina
>Assignee: Ashish Singhi
> Fix For: 0.24.0
>
> Attachments: HDFS-3286.patch
>
>
> Replication factor =1
> Step 1: Start NN,DN1.write 4 GB of data
> Step 2: Start DN2
> Step 3: issue the balancer command(./hdfs balancer -threshold 0)
> The threshold parameter is a fraction in the range of (0%, 100%) with a 
> default value of 10%.
> When the above scenario is executed, the source DN and target DN are chosen 
> and the number of bytes to be moved from source to target DN is also 
> calculated.
> Then the balancer exits with the message "No block can be 
> moved. Exiting...", which is not expected.
> {noformat}
> HOST-xx-xx-xx-xx:/home/Andreina/APril10/install/hadoop/namenode/bin # ./hdfs 
> balancer -threshold 0
> 12/04/16 16:22:07 INFO balancer.Balancer: Using a threshold of 0.0
> 12/04/16 16:22:07 INFO balancer.Balancer: namenodes = 
> [hdfs://HOST-xx-xx-xx-xx:9000]
> 12/04/16 16:22:07 INFO balancer.Balancer: p = 
> Balancer.Parameters[BalancingPolicy.Node, threshold=0.0]
> Time Stamp   Iteration#  Bytes Already Moved  Bytes Left To Move  
> Bytes Being Moved
> 12/04/16 16:22:10 INFO net.NetworkTopology: Adding a new node: 
> /default-rack/yy.yy.yy.yy:50176
> 12/04/16 16:22:10 INFO net.NetworkTopology: Adding a new node: 
> /default-rack/xx.xx.xx.xx:50010
> 12/04/16 16:22:10 INFO balancer.Balancer: 1 over-utilized: 
> [Source[xx.xx.xx.xx:50010, utilization=7.212458091389678]]
> 12/04/16 16:22:10 INFO balancer.Balancer: 1 underutilized: 
> [BalancerDatanode[yy.yy.yy.yy:50176, utilization=4.650670324367203E-5]]
> 12/04/16 16:22:10 INFO balancer.Balancer: Need to move 1.77 GB to make the 
> cluster balanced.
> No block can be moved. Exiting...
> Balancing took 5.142 seconds
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3316) The tar ball doesn't include jsvc any more

2012-04-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13260285#comment-13260285
 ] 

Hadoop QA commented on HDFS-3316:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12523933/hdfs-3316.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

-1 patch.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2319//console

This message is automatically generated.

> The tar ball doesn't include jsvc any more
> --
>
> Key: HDFS-3316
> URL: https://issues.apache.org/jira/browse/HDFS-3316
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Fix For: 1.0.3
>
> Attachments: hdfs-3316.patch
>
>
> The current release tarballs on the 1.0 branch don't include jsvc by default.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3316) The tar ball doesn't include jsvc any more

2012-04-23 Thread Owen O'Malley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Owen O'Malley updated HDFS-3316:


Status: Patch Available  (was: Open)

> The tar ball doesn't include jsvc any more
> --
>
> Key: HDFS-3316
> URL: https://issues.apache.org/jira/browse/HDFS-3316
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Fix For: 1.0.3
>
> Attachments: hdfs-3316.patch
>
>
> The current release tarballs on the 1.0 branch don't include jsvc by default.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3316) The tar ball doesn't include jsvc any more

2012-04-23 Thread Owen O'Malley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Owen O'Malley updated HDFS-3316:


Attachment: hdfs-3316.patch

> The tar ball doesn't include jsvc any more
> --
>
> Key: HDFS-3316
> URL: https://issues.apache.org/jira/browse/HDFS-3316
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Fix For: 1.0.3
>
> Attachments: hdfs-3316.patch
>
>
> The current release tarballs on the 1.0 branch don't include jsvc by default.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-3316) The tar ball doesn't include jsvc any more

2012-04-23 Thread Owen O'Malley (JIRA)
Owen O'Malley created HDFS-3316:
---

 Summary: The tar ball doesn't include jsvc any more
 Key: HDFS-3316
 URL: https://issues.apache.org/jira/browse/HDFS-3316
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Owen O'Malley
Assignee: Owen O'Malley
 Fix For: 1.0.3


The current release tarballs on the 1.0 branch don't include jsvc by default.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3316) The tar ball doesn't include jsvc any more

2012-04-23 Thread Owen O'Malley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Owen O'Malley updated HDFS-3316:


 Component/s: build
Target Version/s: 1.0.3

> The tar ball doesn't include jsvc any more
> --
>
> Key: HDFS-3316
> URL: https://issues.apache.org/jira/browse/HDFS-3316
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Fix For: 1.0.3
>
>
> The current release tarballs on the 1.0 branch don't include jsvc by default.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3286) When the threshold value for balancer is 0(zero) ,unexpected output is displayed

2012-04-23 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HDFS-3286:


Target Version/s: 2.0.0, 3.0.0  (was: 3.0.0, 2.0.0)
  Status: Patch Available  (was: Open)

> When the threshold value for balancer is 0(zero) ,unexpected output is 
> displayed
> 
>
> Key: HDFS-3286
> URL: https://issues.apache.org/jira/browse/HDFS-3286
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer
>Affects Versions: 0.23.0
>Reporter: J.Andreina
>Assignee: Ashish Singhi
> Fix For: 0.24.0
>
> Attachments: HDFS-3286.patch
>
>
> Replication factor =1
> Step 1: Start NN,DN1.write 4 GB of data
> Step 2: Start DN2
> Step 3: issue the balancer command(./hdfs balancer -threshold 0)
> The threshold parameter is a fraction in the range of (0%, 100%) with a 
> default value of 10%.
> When the above scenario is executed, the source DN and target DN are chosen 
> and the number of bytes to be moved from source to target DN is also 
> calculated.
> Then the balancer exits with the message "No block can be 
> moved. Exiting...", which is not expected.
> {noformat}
> HOST-xx-xx-xx-xx:/home/Andreina/APril10/install/hadoop/namenode/bin # ./hdfs 
> balancer -threshold 0
> 12/04/16 16:22:07 INFO balancer.Balancer: Using a threshold of 0.0
> 12/04/16 16:22:07 INFO balancer.Balancer: namenodes = 
> [hdfs://HOST-xx-xx-xx-xx:9000]
> 12/04/16 16:22:07 INFO balancer.Balancer: p = 
> Balancer.Parameters[BalancingPolicy.Node, threshold=0.0]
> Time Stamp   Iteration#  Bytes Already Moved  Bytes Left To Move  
> Bytes Being Moved
> 12/04/16 16:22:10 INFO net.NetworkTopology: Adding a new node: 
> /default-rack/yy.yy.yy.yy:50176
> 12/04/16 16:22:10 INFO net.NetworkTopology: Adding a new node: 
> /default-rack/xx.xx.xx.xx:50010
> 12/04/16 16:22:10 INFO balancer.Balancer: 1 over-utilized: 
> [Source[xx.xx.xx.xx:50010, utilization=7.212458091389678]]
> 12/04/16 16:22:10 INFO balancer.Balancer: 1 underutilized: 
> [BalancerDatanode[yy.yy.yy.yy:50176, utilization=4.650670324367203E-5]]
> 12/04/16 16:22:10 INFO balancer.Balancer: Need to move 1.77 GB to make the 
> cluster balanced.
> No block can be moved. Exiting...
> Balancing took 5.142 seconds
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3286) When the threshold value for balancer is 0(zero) ,unexpected output is displayed

2012-04-23 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HDFS-3286:


Attachment: HDFS-3286.patch

Uploaded the patch based on the above comments.
Uma/Nicholas, could you please review the patch?

> When the threshold value for balancer is 0(zero) ,unexpected output is 
> displayed
> 
>
> Key: HDFS-3286
> URL: https://issues.apache.org/jira/browse/HDFS-3286
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer
>Affects Versions: 0.23.0
>Reporter: J.Andreina
>Assignee: Ashish Singhi
> Fix For: 0.24.0
>
> Attachments: HDFS-3286.patch
>
>
> Replication factor =1
> Step 1: Start NN,DN1.write 4 GB of data
> Step 2: Start DN2
> Step 3: issue the balancer command(./hdfs balancer -threshold 0)
> The threshold parameter is a fraction in the range of (0%, 100%) with a 
> default value of 10%.
> When the above scenario is executed, the source DN and target DN are chosen 
> and the number of bytes to be moved from source to target DN is also 
> calculated.
> Then the balancer exits with the message "No block can be 
> moved. Exiting...", which is not expected.
> {noformat}
> HOST-xx-xx-xx-xx:/home/Andreina/APril10/install/hadoop/namenode/bin # ./hdfs 
> balancer -threshold 0
> 12/04/16 16:22:07 INFO balancer.Balancer: Using a threshold of 0.0
> 12/04/16 16:22:07 INFO balancer.Balancer: namenodes = 
> [hdfs://HOST-xx-xx-xx-xx:9000]
> 12/04/16 16:22:07 INFO balancer.Balancer: p = 
> Balancer.Parameters[BalancingPolicy.Node, threshold=0.0]
> Time Stamp   Iteration#  Bytes Already Moved  Bytes Left To Move  
> Bytes Being Moved
> 12/04/16 16:22:10 INFO net.NetworkTopology: Adding a new node: 
> /default-rack/yy.yy.yy.yy:50176
> 12/04/16 16:22:10 INFO net.NetworkTopology: Adding a new node: 
> /default-rack/xx.xx.xx.xx:50010
> 12/04/16 16:22:10 INFO balancer.Balancer: 1 over-utilized: 
> [Source[xx.xx.xx.xx:50010, utilization=7.212458091389678]]
> 12/04/16 16:22:10 INFO balancer.Balancer: 1 underutilized: 
> [BalancerDatanode[yy.yy.yy.yy:50176, utilization=4.650670324367203E-5]]
> 12/04/16 16:22:10 INFO balancer.Balancer: Need to move 1.77 GB to make the 
> cluster balanced.
> No block can be moved. Exiting...
> Balancing took 5.142 seconds
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3275) Format command overwrites contents of non-empty shared edits dir if name dirs are empty without any prompting

2012-04-23 Thread amith (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13260252#comment-13260252
 ] 

amith commented on HDFS-3275:
-

Thanks for the comments, Aaron. Will provide the patch soon.


> Format command overwrites contents of non-empty shared edits dir if name dirs 
> are empty without any prompting
> -
>
> Key: HDFS-3275
> URL: https://issues.apache.org/jira/browse/HDFS-3275
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, name-node
>Affects Versions: 2.0.0
>Reporter: Vinithra Varadharajan
>Assignee: amith
> Fix For: 3.0.0
>
> Attachments: HDFS-3275.patch, HDFS-3275_1.patch, HDFS-3275_1.patch
>
>
> To reproduce:
> # Configure a NameNode with namedirs and a shared edits dir, all of which are 
> empty.
> # Run hdfs namenode -format. Namedirs and shared edits dir get populated.
> # Delete the contents of the namedirs. Leave the shared edits dir as is. 
> Check the timestamps of the shared edits dir contents.
> # Run format again. The namedirs as well as the shared edits dir get 
> formatted. The shared edits dir's contents have been replaced without any 
> prompting.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3275) Format command overwrites contents of non-empty shared edits dir if name dirs are empty without any prompting

2012-04-23 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13260249#comment-13260249
 ] 

Uma Maheswara Rao G commented on HDFS-3275:
---

Thanks Aaron, for taking a look. Amith, could you please address these comments 
as well for commit?

> Format command overwrites contents of non-empty shared edits dir if name dirs 
> are empty without any prompting
> -
>
> Key: HDFS-3275
> URL: https://issues.apache.org/jira/browse/HDFS-3275
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, name-node
>Affects Versions: 2.0.0
>Reporter: Vinithra Varadharajan
>Assignee: amith
> Fix For: 3.0.0
>
> Attachments: HDFS-3275.patch, HDFS-3275_1.patch, HDFS-3275_1.patch
>
>
> To reproduce:
> # Configure a NameNode with namedirs and a shared edits dir, all of which are 
> empty.
> # Run hdfs namenode -format. Namedirs and shared edits dir get populated.
> # Delete the contents of the namedirs. Leave the shared edits dir as is. 
> Check the timestamps of the shared edits dir contents.
> # Run format again. The namedirs as well as the shared edits dir get 
> formatted. The shared edits dir's contents have been replaced without any 
> prompting.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3275) Format command overwrites contents of non-empty shared edits dir if name dirs are empty without any prompting

2012-04-23 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13260243#comment-13260243
 ] 

Aaron T. Myers commented on HDFS-3275:
--

Patch looks pretty good to me. Just a few little comments. +1 once these are 
addressed:

# Don't declare the "DEFAULT_SCHEME" constant in the NameNode class. Instead, 
use the NNStorage.LOCAL_URI_SCHEME constant, which is used in FSEditLog to 
identify local edits logs.
# I think it's better to include the URI of the dir we're skipping, and the 
scheme we expect. So, instead of this:
{code}
System.err.println("Formatting supported only for file based storage"
  + " directories. Current directory scheme is \""
  + dirUri.getScheme() + "\". So, ignoring it for format");
{code}
How about something like this:
{code}
System.err.println("Skipping format for directory \"" + dirUri
  + "\". Can only format local directories with scheme \""
  + NNStorage.LOCAL_URI_SCHEME + "\".");
{code}
# {{"supported for" + dirUri;}} - put a space after "for"
# Odd javadoc formatting, and typo "with out" -> "without":
{code}
+  /** Sets the required configurations for performing failover.
+   *  with out any dependency on MiniDFSCluster
+   *  */
{code}
# Recommend adding a comment to the assert in NameNode#confirmFormat that the 
presence of the assert is necessary for the validity of the test.

> Format command overwrites contents of non-empty shared edits dir if name dirs 
> are empty without any prompting
> -
>
> Key: HDFS-3275
> URL: https://issues.apache.org/jira/browse/HDFS-3275
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, name-node
>Affects Versions: 2.0.0
>Reporter: Vinithra Varadharajan
>Assignee: amith
> Fix For: 3.0.0
>
> Attachments: HDFS-3275.patch, HDFS-3275_1.patch, HDFS-3275_1.patch
>
>
> To reproduce:
> # Configure a NameNode with namedirs and a shared edits dir, all of which are 
> empty.
> # Run hdfs namenode -format. Namedirs and shared edits dir get populated.
> # Delete the contents of the namedirs. Leave the shared edits dir as is. 
> Check the timestamps of the shared edits dir contents.
> # Run format again. The namedirs as well as the shared edits dir get 
> formatted. The shared edits dir's contents have been replaced without any 
> prompting.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3315) Improve bootstrapStandby error message when other NN has misconfigured http address

2012-04-23 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13260224#comment-13260224
 ] 

Aaron T. Myers commented on HDFS-3315:
--

Another thing we could do would be to see if the HTTP/S address we determine 
for the other NN is the same as the HTTP/S address for _this_ NN. I suspect 
that would catch the most common case.
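
A rough sketch of that check, assuming both HTTP addresses have already been resolved (the method and variable names below are illustrative, not the actual bootstrapStandby code):
{code}
import java.io.IOException;
import java.net.URI;

// Hypothetical validation: if the other NN's HTTP address resolves to the
// same value as our own, the configs are almost certainly not distinct, so
// fail fast with an actionable message instead of an ugly stack trace.
static void checkOtherNnHttpAddress(URI myNnHttpAddress, URI otherNnHttpAddress)
    throws IOException {
  if (otherNnHttpAddress.equals(myNnHttpAddress)) {
    throw new IOException("The other NameNode's HTTP address ("
        + otherNnHttpAddress + ") is the same as this NameNode's ("
        + myNnHttpAddress + "). Please configure a distinct "
        + "dfs.namenode.http-address for each NameNode in the nameservice.");
  }
}
{code}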

> Improve bootstrapStandby error message when other NN has misconfigured http 
> address
> ---
>
> Key: HDFS-3315
> URL: https://issues.apache.org/jira/browse/HDFS-3315
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha
>Affects Versions: 2.0.0
>Reporter: Todd Lipcon
>Priority: Minor
>
> Currently, if the user forgets to configure the HTTP server address 
> distinctly for each NN, bootstrapStandby emits an ugly stack trace with 
> little indication as to which config is the issue. We should catch the 
> exception and display a more actionable error message.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3314) HttpFS operation for getHomeDirectory is incorrect

2012-04-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13260218#comment-13260218
 ] 

Hadoop QA commented on HDFS-3314:
-

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12523913/HDFS-3314.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2317//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2317//console

This message is automatically generated.

> HttpFS operation for getHomeDirectory is incorrect
> --
>
> Key: HDFS-3314
> URL: https://issues.apache.org/jira/browse/HDFS-3314
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0, 3.0.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 2.0.0
>
> Attachments: HDFS-3314.patch
>
>
> HttpFS is using GETHOMEDIR when it should be using GETHOMEDIRECTORY based on 
> the WebHdfs HTTP API spec

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-3315) Improve bootstrapStandby error message when other NN has misconfigured http address

2012-04-23 Thread Todd Lipcon (JIRA)
Todd Lipcon created HDFS-3315:
-

 Summary: Improve bootstrapStandby error message when other NN has 
misconfigured http address
 Key: HDFS-3315
 URL: https://issues.apache.org/jira/browse/HDFS-3315
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: ha
Affects Versions: 2.0.0
Reporter: Todd Lipcon
Priority: Minor


Currently, if the user forgets to configure the HTTP server address distinctly 
for each NN, bootstrapStandby emits an ugly stack trace with little indication 
as to which config is the issue. We should catch the exception and display a 
more actionable error message.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3314) HttpFS operation for getHomeDirectory is incorrect

2012-04-23 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13260210#comment-13260210
 ] 

Eli Collins commented on HDFS-3314:
---

Sounds good. +1 pending jenkins

> HttpFS operation for getHomeDirectory is incorrect
> --
>
> Key: HDFS-3314
> URL: https://issues.apache.org/jira/browse/HDFS-3314
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0, 3.0.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 2.0.0
>
> Attachments: HDFS-3314.patch
>
>
> HttpFS is using GETHOMEDIR when it should be using GETHOMEDIRECTORY based on 
> the WebHdfs HTTP API spec

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3275) Format command overwrites contents of non-empty shared edits dir if name dirs are empty without any prompting

2012-04-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13260202#comment-13260202
 ] 

Hadoop QA commented on HDFS-3275:
-

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12523911/HDFS-3275_1.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 2 new or modified test 
files.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2316//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2316//console

This message is automatically generated.

> Format command overwrites contents of non-empty shared edits dir if name dirs 
> are empty without any prompting
> -
>
> Key: HDFS-3275
> URL: https://issues.apache.org/jira/browse/HDFS-3275
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, name-node
>Affects Versions: 2.0.0
>Reporter: Vinithra Varadharajan
>Assignee: amith
> Fix For: 3.0.0
>
> Attachments: HDFS-3275.patch, HDFS-3275_1.patch, HDFS-3275_1.patch
>
>
> To reproduce:
> # Configure a NameNode with namedirs and a shared edits dir, all of which are 
> empty.
> # Run hdfs namenode -format. Namedirs and shared edits dir get populated.
> # Delete the contents of the namedirs. Leave the shared edits dir as is. 
> Check the timestamps of the shared edits dir contents.
> # Run format again. The namedirs as well as the shared edits dir get 
> formatted. The shared edits dir's contents have been replaced without any 
> prompting.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (HDFS-2645) Webhdfs & HttpFS (Hoop) should share the same codebase

2012-04-23 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur reassigned HDFS-2645:


Assignee: Alejandro Abdelnur

> Webhdfs & HttpFS (Hoop) should share the same codebase
> --
>
> Key: HDFS-2645
> URL: https://issues.apache.org/jira/browse/HDFS-2645
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3314) HttpFS operation for getHomeDirectory is incorrect

2012-04-23 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13260200#comment-13260200
 ] 

Alejandro Abdelnur commented on HDFS-3314:
--

This was not caught by the compat test in HttpFS because the WebHdfsFileSystem 
implementation of getHomeDirectory() does not make a web service call but 
resolves the value locally, via the default implementation of 
FileSystem.getHomeDirectory().
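
For context, the default implementation in FileSystem looks roughly like the following (paraphrased from memory, not an exact copy of the Hadoop source), which is why no HTTP request is ever issued for this call:
{code}
// Roughly the default FileSystem.getHomeDirectory(): the path is built
// locally from the client-side user name, so WebHdfsFileSystem never hits
// the GETHOMEDIR/GETHOMEDIRECTORY operation unless it overrides this method.
public Path getHomeDirectory() {
  return this.makeQualified(
      new Path("/user/" + System.getProperty("user.name")));
}
{code}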

After adding support for delegation tokens to HttpFS (HDFS-3113, patch coming 
soon), we'll have function parity between HttpFS and WebHdfs, then it will be 
easier to tackle code sharing (HDFS-2645).


> HttpFS operation for getHomeDirectory is incorrect
> --
>
> Key: HDFS-3314
> URL: https://issues.apache.org/jira/browse/HDFS-3314
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0, 3.0.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 2.0.0
>
> Attachments: HDFS-3314.patch
>
>
> HttpFS is using GETHOMEDIR when it should be using GETHOMEDIRECTORY based on 
> the WebHdfs HTTP API spec

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3314) HttpFS operation for getHomeDirectory is incorrect

2012-04-23 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13260191#comment-13260191
 ] 

Eli Collins commented on HDFS-3314:
---

Change looks good, but it would be better to define this problem away via 
code sharing, right? Also, shouldn't the compat test have caught this?

> HttpFS operation for getHomeDirectory is incorrect
> --
>
> Key: HDFS-3314
> URL: https://issues.apache.org/jira/browse/HDFS-3314
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0, 3.0.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 2.0.0
>
> Attachments: HDFS-3314.patch
>
>
> HttpFS is using GETHOMEDIR when it should be using GETHOMEDIRECTORY based on 
> the WebHdfs HTTP API spec

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3298) Add HdfsDataOutputStream as a public API

2012-04-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13260187#comment-13260187
 ] 

Hadoop QA commented on HDFS-3298:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12523910/h3298_20120423.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 4 new or modified test 
files.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

-1 findbugs.  The patch appears to introduce 2 new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2315//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2315//artifact/trunk/hadoop-hdfs-project/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2315//console

This message is automatically generated.

> Add HdfsDataOutputStream as a public API
> 
>
> Key: HDFS-3298
> URL: https://issues.apache.org/jira/browse/HDFS-3298
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs client
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Attachments: h3298_20120423.patch
>
>
> We need a public API to access HDFS specific features like 
> getNumCurrentReplicas as mentioned in [Uma's 
> comment|https://issues.apache.org/jira/browse/HDFS-1599?focusedCommentId=13243105&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13243105].

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3314) HttpFS operation for getHomeDirectory is incorrect

2012-04-23 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HDFS-3314:
-

Attachment: HDFS-3314.patch

> HttpFS operation for getHomeDirectory is incorrect
> --
>
> Key: HDFS-3314
> URL: https://issues.apache.org/jira/browse/HDFS-3314
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0, 3.0.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 2.0.0
>
> Attachments: HDFS-3314.patch
>
>
> HttpFS is using GETHOMEDIR when it should be using GETHOMEDIRECTORY based on 
> the WebHdfs HTTP API spec

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3314) HttpFS operation for getHomeDirectory is incorrect

2012-04-23 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HDFS-3314:
-

Status: Patch Available  (was: Open)

> HttpFS operation for getHomeDirectory is incorrect
> --
>
> Key: HDFS-3314
> URL: https://issues.apache.org/jira/browse/HDFS-3314
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0, 3.0.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 2.0.0
>
> Attachments: HDFS-3314.patch
>
>
> HttpFS is using GETHOMEDIR when it should be using GETHOMEDIRECTORY based on 
> the WebHdfs HTTP API spec

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-3314) HttpFS operation for getHomeDirectory is incorrect

2012-04-23 Thread Alejandro Abdelnur (JIRA)
Alejandro Abdelnur created HDFS-3314:


 Summary: HttpFS operation for getHomeDirectory is incorrect
 Key: HDFS-3314
 URL: https://issues.apache.org/jira/browse/HDFS-3314
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.0, 3.0.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.0.0


HttpFS is using GETHOMEDIR when it should be using GETHOMEDIRECTORY based on 
the WebHdfs HTTP API spec

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3275) Format command overwrites contents of non-empty shared edits dir if name dirs are empty without any prompting

2012-04-23 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-3275:
--

Attachment: HDFS-3275_1.patch

Patch looks good. An assert has been added in the format API, so the test 
ensures that no exceptions escape it when we include non-file-based journals.

+1

Re-attaching the same patch as Amith to trigger Jenkins.

> Format command overwrites contents of non-empty shared edits dir if name dirs 
> are empty without any prompting
> -
>
> Key: HDFS-3275
> URL: https://issues.apache.org/jira/browse/HDFS-3275
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, name-node
>Affects Versions: 2.0.0
>Reporter: Vinithra Varadharajan
>Assignee: amith
> Fix For: 3.0.0
>
> Attachments: HDFS-3275.patch, HDFS-3275_1.patch, HDFS-3275_1.patch
>
>
> To reproduce:
> # Configure a NameNode with namedirs and a shared edits dir, all of which are 
> empty.
> # Run hdfs namenode -format. Namedirs and shared edits dir get populated.
> # Delete the contents of the namedirs. Leave the shared edits dir as is. 
> Check the timestamps of the shared edits dir contents.
> # Run format again. The namedirs as well as the shared edits dir get 
> formatted. The shared edits dir's contents have been replaced without any 
> prompting.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3298) Add HdfsDataOutputStream as a public API

2012-04-23 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-3298:
-

Attachment: h3298_20120423.patch

h3298_20120423.patch: adds HdfsDataOutputStream.

> Add HdfsDataOutputStream as a public API
> 
>
> Key: HDFS-3298
> URL: https://issues.apache.org/jira/browse/HDFS-3298
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs client
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Attachments: h3298_20120423.patch
>
>
> We need a public API to access HDFS specific features like 
> getNumCurrentReplicas as mentioned in [Uma's 
> comment|https://issues.apache.org/jira/browse/HDFS-1599?focusedCommentId=13243105&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13243105].

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3298) Add HdfsDataOutputStream as a public API

2012-04-23 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-3298:
-

Status: Patch Available  (was: Open)

> Add HdfsDataOutputStream as a public API
> 
>
> Key: HDFS-3298
> URL: https://issues.apache.org/jira/browse/HDFS-3298
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs client
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Attachments: h3298_20120423.patch
>
>
> We need a public API to access HDFS specific features like 
> getNumCurrentReplicas as mentioned in [Uma's 
> comment|https://issues.apache.org/jira/browse/HDFS-1599?focusedCommentId=13243105&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13243105].

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3313) create a protocol for journal service synchronization

2012-04-23 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13260122#comment-13260122
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-3313:
--

Looks good.  Some comments:

- We should not change NameNodeProxies.  NN is not involved in journal 
synchronization.

- For the same reason, the principals in JournalSyncProtocolPB should not be 
the NameNode principals.
{code}
+serverPrincipal = DFSConfigKeys.DFS_NAMENODE_USER_NAME_KEY,
+clientPrincipal = DFSConfigKeys.DFS_NAMENODE_USER_NAME_KEY)
{code}

- Use FSEditLog.getEditLogManifest(long fromTxId) instead of FileJournalManager 
and don't change FileJournalManager.

- Why is the DN related to JournalSyncProtocol?  Please check the comments.
{code}
+   * If you are adding/changing DN's interface then you need to change both 
this
{code}

- I think we should pass JournalInfo in getEditLogManifest(..) so that it could 
first verify the version, namespace id, etc. (see the sketch after these comments).

- Why is the test commented out?
{code}
-  @Test
+  //@Test
   public void testHttpServer() throws Exception {
{code}
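
A rough sketch of the JournalInfo suggestion above; the protocol shape below is an assumption for illustration, not the attached HDFS-3313.HDFS-3092.patch:
{code}
// Hypothetical shape only: pass JournalInfo so the receiving side can verify
// layout version, namespace id, cluster id, etc. before returning the
// manifest of finalized segments starting at fromTxId.
public interface JournalSyncProtocol {
  RemoteEditLogManifest getEditLogManifest(JournalInfo journalInfo, long fromTxId)
      throws IOException;
}
{code}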

> create a protocol for journal service synchronization
> ---
>
> Key: HDFS-3313
> URL: https://issues.apache.org/jira/browse/HDFS-3313
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-3313.HDFS-3092.patch
>
>
> This protocol is used to synchronize a lagging journal service with the active 
> journal service using complete, finalized edit segments. Currently it supports 
> one RPC call, getEditLogManifest(), which lists the finalized edit segments.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-119) logSync() may block NameNode forever.

2012-04-23 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13260101#comment-13260101
 ] 

Konstantin Shvachko commented on HDFS-119:
--

+1, the patch for branch-1.0 looks good, Brandon.

> logSync() may block NameNode forever.
> -
>
> Key: HDFS-119
> URL: https://issues.apache.org/jira/browse/HDFS-119
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Reporter: Konstantin Shvachko
>Assignee: Suresh Srinivas
> Fix For: 0.21.0, 1.1.0
>
> Attachments: HDFS-119-branch-1.0.patch, HDFS-119-branch-1.0.patch, 
> HDFS-119.patch, HDFS-119.patch, HDFS119.branch1.0.patch
>
>
> # {{FSEditLog.logSync()}} first waits until {{isSyncRunning}} is false and 
> then performs syncing to file streams by calling 
> {{EditLogOutputStream.flush()}}.
> If an exception is thrown after {{isSyncRunning}} is set to {{true}} all 
> threads will always wait on this condition.
> An {{IOException}} may be thrown by {{EditLogOutputStream.setReadyToFlush()}} 
> or a {{RuntimeException}} may be thrown by {{EditLogOutputStream.flush()}} or 
> by {{processIOError()}}.
> # The loop that calls {{eStream.flush()}} for multiple 
> {{EditLogOutputStream}}-s is not synchronized, which means that another 
> thread may encounter an error and modify {{editStreams}} by say calling 
> {{processIOError()}}. Then the iterating process in {{logSync()}} will break 
> with an {{IndexOutOfBoundsException}}.
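
An illustrative sketch of the first failure mode described above, using a simplified stand-in class rather than the actual FSEditLog code:
{code}
// Simplified stand-in: if flushStreams() throws after isSyncRunning is set
// to true, and the reset/notifyAll is not in a finally block, every later
// caller of logSync() waits on the flag forever.
class EditLogSketch {
  private boolean isSyncRunning = false;

  synchronized void logSync() throws java.io.IOException {
    while (isSyncRunning) {
      try { wait(); } catch (InterruptedException ie) { /* retry */ }
    }
    isSyncRunning = true;
    flushStreams();        // if this throws, the two lines below never run
    isSyncRunning = false;
    notifyAll();
  }

  private void flushStreams() throws java.io.IOException {
    // stand-in for EditLogOutputStream.setReadyToFlush()/flush()
    throw new java.io.IOException("simulated stream failure");
  }
}
{code}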

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3313) create a protocol for journal service synchronization

2012-04-23 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-3313:
-

Attachment: HDFS-3313.HDFS-3092.patch

> create a protocol for journal service synchronization
> ---
>
> Key: HDFS-3313
> URL: https://issues.apache.org/jira/browse/HDFS-3313
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-3313.HDFS-3092.patch
>
>
> This protocol is used to synchronize a lagging journal service with the active 
> journal service using complete, finalized edit segments. Currently it supports 
> one RPC call, getEditLogManifest(), which lists the finalized edit segments.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2246) Shortcut a local client reads to a Datanodes files directly

2012-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13260089#comment-13260089
 ] 

Hudson commented on HDFS-2246:
--

Integrated in Hadoop-Hdfs-22-branch #129 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-22-branch/129/])
HDFS-2246. Enable reading a block directly from local file system for a 
client on the same node as the block file. Contributed by Andrew Purtell, 
Suresh, Jitendra and Benoy (Revision 1329468)

 Result = FAILURE
shv : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1329468
Files : 
* /hadoop/common/branches/branch-0.22/hdfs/CHANGES.txt
* 
/hadoop/common/branches/branch-0.22/hdfs/src/java/org/apache/hadoop/hdfs/BlockReader.java
* 
/hadoop/common/branches/branch-0.22/hdfs/src/java/org/apache/hadoop/hdfs/BlockReaderLocal.java
* 
/hadoop/common/branches/branch-0.22/hdfs/src/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/branches/branch-0.22/hdfs/src/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/branches/branch-0.22/hdfs/src/java/org/apache/hadoop/hdfs/DFSInputStream.java
* 
/hadoop/common/branches/branch-0.22/hdfs/src/java/org/apache/hadoop/hdfs/RemoteBlockReader.java
* 
/hadoop/common/branches/branch-0.22/hdfs/src/java/org/apache/hadoop/hdfs/protocol/BlockLocalPathInfo.java
* 
/hadoop/common/branches/branch-0.22/hdfs/src/java/org/apache/hadoop/hdfs/protocol/ClientDatanodeProtocol.java
* 
/hadoop/common/branches/branch-0.22/hdfs/src/java/org/apache/hadoop/hdfs/server/common/JspHelper.java
* 
/hadoop/common/branches/branch-0.22/hdfs/src/java/org/apache/hadoop/hdfs/server/datanode/BlockMetadataHeader.java
* 
/hadoop/common/branches/branch-0.22/hdfs/src/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/branches/branch-0.22/hdfs/src/java/org/apache/hadoop/hdfs/server/datanode/FSDataset.java
* 
/hadoop/common/branches/branch-0.22/hdfs/src/java/org/apache/hadoop/hdfs/server/datanode/FSDatasetInterface.java
* 
/hadoop/common/branches/branch-0.22/hdfs/src/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* /hadoop/common/branches/branch-0.22/hdfs/src/test/commit-tests
* 
/hadoop/common/branches/branch-0.22/hdfs/src/test/hdfs/org/apache/hadoop/hdfs/BlockReaderTestUtil.java
* 
/hadoop/common/branches/branch-0.22/hdfs/src/test/hdfs/org/apache/hadoop/hdfs/TestClientBlockVerification.java
* 
/hadoop/common/branches/branch-0.22/hdfs/src/test/hdfs/org/apache/hadoop/hdfs/TestConnCache.java
* 
/hadoop/common/branches/branch-0.22/hdfs/src/test/hdfs/org/apache/hadoop/hdfs/TestShortCircuitLocalRead.java
* 
/hadoop/common/branches/branch-0.22/hdfs/src/test/hdfs/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
/hadoop/common/branches/branch-0.22/hdfs/src/test/hdfs/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
* 
/hadoop/common/branches/branch-0.22/hdfs/src/test/hdfs/org/apache/hadoop/hdfs/server/datanode/TestDataXceiver.java
* 
/hadoop/common/branches/branch-0.22/hdfs/src/test/hdfs/org/apache/hadoop/hdfs/server/namenode/TestBlockTokenWithDFS.java


> Shortcut a local client reads to a Datanodes files directly
> ---
>
> Key: HDFS-2246
> URL: https://issues.apache.org/jira/browse/HDFS-2246
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Sanjay Radia
>Assignee: Jitendra Nath Pandey
> Fix For: 0.23.1, 1.0.0, 0.22.1
>
> Attachments: 0001-HDFS-347.-Local-reads.patch, HDFS-2246-22.patch, 
> HDFS-2246-branch-0.20-security-205.1.patch, 
> HDFS-2246-branch-0.20-security-205.2.patch, 
> HDFS-2246-branch-0.20-security-205.patch, 
> HDFS-2246-branch-0.20-security-205.patch, 
> HDFS-2246-branch-0.20-security-205.patch, 
> HDFS-2246-branch-0.20-security.3.patch, 
> HDFS-2246-branch-0.20-security.no-softref.patch, 
> HDFS-2246-branch-0.20-security.patch, HDFS-2246-branch-0.20-security.patch, 
> HDFS-2246-branch-0.20-security.patch, HDFS-2246-branch-0.20-security.patch, 
> HDFS-2246-trunk.patch, HDFS-2246-trunk.patch, HDFS-2246-trunk.patch, 
> HDFS-2246-trunk.patch, HDFS-2246-trunk.patch, HDFS-2246-trunk.patch, 
> HDFS-2246.20s.1.patch, HDFS-2246.20s.2.txt, HDFS-2246.20s.3.txt, 
> HDFS-2246.20s.4.txt, HDFS-2246.20s.patch, TestShortCircuitLocalRead.java, 
> localReadShortcut20-security.2patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-3313) create a protocol for journal service synchronization

2012-04-23 Thread Brandon Li (JIRA)
Brandon Li created HDFS-3313:


 Summary: create a protocol for journal service synchronization
 Key: HDFS-3313
 URL: https://issues.apache.org/jira/browse/HDFS-3313
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha
Reporter: Brandon Li
Assignee: Brandon Li


This protocol is used to synchronize a lagging journal service with the active 
journal service using complete, finalized edit segments. Currently it supports 
one RPC call, getEditLogManifest(), which lists the finalized edit segments.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3312) Hftp selects wrong token service

2012-04-23 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13260067#comment-13260067
 ] 

Daryn Sharp commented on HDFS-3312:
---

bq. One minor suggestion for the future: we may use the scheme of nnUri in 
HftpFileSystem.getNamenodeURL(..). Then HsftpFileSystem.openConnection(..) 
could also use HftpFileSystem.getNamenodeURL(..).

I noticed that too.  Hsftp isn't using the secure port for data transfers.  I 
already made a patch and test for it, but when I tried to run the hftp tests 
with hsftp it produced some bizarre exceptions that I haven't had a chance to 
debug.

> Hftp selects wrong token service
> 
>
> Key: HDFS-3312
> URL: https://issues.apache.org/jira/browse/HDFS-3312
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 0.24.0, 0.23.3, 2.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Blocker
> Fix For: 0.23.3
>
> Attachments: HDFS-3312.patch
>
>
> Hftp tries to select a token based on the non-secure port in the URI instead 
> of the secure port.  This breaks hftp on a secure cluster and there is no 
> workaround.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3312) Hftp selects wrong token service

2012-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13259997#comment-13259997
 ] 

Hudson commented on HDFS-3312:
--

Integrated in Hadoop-Mapreduce-trunk-Commit #2136 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2136/])
HDFS-3312. In HftpFileSystem, the namenode URI is non-secure but the 
delegation tokens have to use secure URI.  Contributed by Daryn Sharp (Revision 
1329462)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1329462
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HftpFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HsftpFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHftpDelegationToken.java


> Hftp selects wrong token service
> 
>
> Key: HDFS-3312
> URL: https://issues.apache.org/jira/browse/HDFS-3312
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 0.24.0, 0.23.3, 2.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Blocker
> Fix For: 0.23.3
>
> Attachments: HDFS-3312.patch
>
>
> Hftp tries to select a token based on the non-secure port in the uri, instead 
> of the secure-port.  This breaks hftp on a secure cluster and there is no 
> workaround.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2246) Shortcut a local client reads to a Datanodes files directly

2012-04-23 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2246:
--

Fix Version/s: 0.22.1

Committed to branch 0.22.1. Thanks Benoy for porting.

> Shortcut a local client reads to a Datanodes files directly
> ---
>
> Key: HDFS-2246
> URL: https://issues.apache.org/jira/browse/HDFS-2246
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Sanjay Radia
>Assignee: Jitendra Nath Pandey
> Fix For: 0.23.1, 1.0.0, 0.22.1
>
> Attachments: 0001-HDFS-347.-Local-reads.patch, HDFS-2246-22.patch, 
> HDFS-2246-branch-0.20-security-205.1.patch, 
> HDFS-2246-branch-0.20-security-205.2.patch, 
> HDFS-2246-branch-0.20-security-205.patch, 
> HDFS-2246-branch-0.20-security-205.patch, 
> HDFS-2246-branch-0.20-security-205.patch, 
> HDFS-2246-branch-0.20-security.3.patch, 
> HDFS-2246-branch-0.20-security.no-softref.patch, 
> HDFS-2246-branch-0.20-security.patch, HDFS-2246-branch-0.20-security.patch, 
> HDFS-2246-branch-0.20-security.patch, HDFS-2246-branch-0.20-security.patch, 
> HDFS-2246-trunk.patch, HDFS-2246-trunk.patch, HDFS-2246-trunk.patch, 
> HDFS-2246-trunk.patch, HDFS-2246-trunk.patch, HDFS-2246-trunk.patch, 
> HDFS-2246.20s.1.patch, HDFS-2246.20s.2.txt, HDFS-2246.20s.3.txt, 
> HDFS-2246.20s.4.txt, HDFS-2246.20s.patch, TestShortCircuitLocalRead.java, 
> localReadShortcut20-security.2patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3312) Hftp selects wrong token service

2012-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13259979#comment-13259979
 ] 

Hudson commented on HDFS-3312:
--

Integrated in Hadoop-Common-trunk-Commit #2120 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2120/])
HDFS-3312. In HftpFileSystem, the namenode URI is non-secure but the 
delegation tokens have to use secure URI.  Contributed by Daryn Sharp (Revision 
1329462)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1329462
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HftpFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HsftpFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHftpDelegationToken.java


> Hftp selects wrong token service
> 
>
> Key: HDFS-3312
> URL: https://issues.apache.org/jira/browse/HDFS-3312
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 0.24.0, 0.23.3, 2.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Blocker
> Fix For: 0.23.3
>
> Attachments: HDFS-3312.patch
>
>
> Hftp tries to select a token based on the non-secure port in the uri, instead 
> of the secure-port.  This breaks hftp on a secure cluster and there is no 
> workaround.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3094) add -nonInteractive and -force option to namenode -format command

2012-04-23 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13259980#comment-13259980
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-3094:
--

Hey Todd, are you going to commit the branch-1 patch?  Or I could commit it.

> add -nonInteractive and -force option to namenode -format command
> -
>
> Key: HDFS-3094
> URL: https://issues.apache.org/jira/browse/HDFS-3094
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 0.24.0, 1.0.2
>Reporter: Arpit Gupta
>Assignee: Arpit Gupta
> Fix For: 2.0.0
>
> Attachments: HDFS-3094.branch-1.0.patch, HDFS-3094.branch-1.0.patch, 
> HDFS-3094.branch-1.0.patch, HDFS-3094.branch-1.0.patch, 
> HDFS-3094.branch-1.0.patch, HDFS-3094.branch-1.0.patch, 
> HDFS-3094.branch-1.0.patch, HDFS-3094.branch-1.0.patch, 
> HDFS-3094.branch-1.0.patch, HDFS-3094.patch, HDFS-3094.patch, 
> HDFS-3094.patch, HDFS-3094.patch, HDFS-3094.patch, HDFS-3094.patch, 
> HDFS-3094.patch
>
>
> Currently the bin/hadoop namenode -format prompts the user for a Y/N to set up 
> the directories in the local file system.
> -force : namenode formats the directories without prompting
> -nonInteractive : namenode format will return with an exit code of 1 if the 
> dir exists.
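
As a rough sketch of the behaviour described above (not the attached patch; the 
class, method, and prompt wording below are assumptions), the option handling 
could look like:

{code}
import java.util.Scanner;

/** Sketch only: intended -force / -nonInteractive behaviour described above. */
public class FormatConfirmation {
  /** Returns an exit code: 0 to proceed with the format, 1 to abort. */
  static int confirmFormat(boolean dirExists, boolean force,
      boolean nonInteractive, Scanner console) {
    if (!dirExists || force) {
      return 0;                 // format without prompting
    }
    if (nonInteractive) {
      return 1;                 // dir already exists: exit with code 1, no prompt
    }
    System.out.print("Re-format filesystem? (Y or N) ");  // existing behaviour
    return console.next().equalsIgnoreCase("Y") ? 0 : 1;
  }
}
{code}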

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3312) Hftp selects wrong token service

2012-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13259977#comment-13259977
 ] 

Hudson commented on HDFS-3312:
--

Integrated in Hadoop-Hdfs-trunk-Commit #2194 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2194/])
HDFS-3312. In HftpFileSystem, the namenode URI is non-secure but the 
delegation tokens have to use secure URI.  Contributed by Daryn Sharp (Revision 
1329462)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1329462
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HftpFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HsftpFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHftpDelegationToken.java


> Hftp selects wrong token service
> 
>
> Key: HDFS-3312
> URL: https://issues.apache.org/jira/browse/HDFS-3312
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 0.24.0, 0.23.3, 2.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Blocker
> Fix For: 0.23.3
>
> Attachments: HDFS-3312.patch
>
>
> Hftp tries to select a token based on the non-secure port in the uri, instead 
> of the secure-port.  This breaks hftp on a secure cluster and there is no 
> workaround.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3312) Hftp selects wrong token service

2012-04-23 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-3312:
-

   Resolution: Fixed
Fix Version/s: 0.23.3
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I have committed this.  Thanks, Daryn!

> Hftp selects wrong token service
> 
>
> Key: HDFS-3312
> URL: https://issues.apache.org/jira/browse/HDFS-3312
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 0.24.0, 0.23.3, 2.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Blocker
> Fix For: 0.23.3
>
> Attachments: HDFS-3312.patch
>
>
> Hftp tries to select a token based on the non-secure port in the uri, instead 
> of the secure-port.  This breaks hftp on a secure cluster and there is no 
> workaround.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3312) Hftp selects wrong token service

2012-04-23 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13259967#comment-13259967
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-3312:
--

For hftp, the NN URI is non-secure but the dt needs to use secure URI.
For hsftp, the NN URI is secure and the dt uses the same URI.

+1 patch looks good.

One minor suggestion for the future: we may use the scheme of nnUri in 
HftpFileSystem.getNamenodeURL(..).  Then HsftpFileSystem.openConnection(..) 
could also use HftpFileSystem.getNamenodeURL(..).
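
A rough illustration of that suggestion follows. It is not the actual 
HftpFileSystem code; apart from nnUri, getNamenodeURL(..), and 
openConnection(..) named above, the helper class and its shape are assumptions.

{code}
import java.io.IOException;
import java.net.URI;
import java.net.URL;

/** Sketch only; not the actual HftpFileSystem code. */
class NamenodeUrlSketch {
  /**
   * Builds the namenode URL from the scheme of nnUri, so hftp (http) and
   * hsftp (https) could share one helper and HsftpFileSystem.openConnection(..)
   * would not need its own URL construction.
   */
  static URL getNamenodeURL(URI nnUri, String path, String query)
      throws IOException {
    return new URL(nnUri.getScheme(), nnUri.getHost(), nnUri.getPort(),
        path + '?' + query);
  }
}
{code}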

> Hftp selects wrong token service
> 
>
> Key: HDFS-3312
> URL: https://issues.apache.org/jira/browse/HDFS-3312
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 0.24.0, 0.23.3, 2.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Blocker
> Attachments: HDFS-3312.patch
>
>
> Hftp tries to select a token based on the non-secure port in the uri, instead 
> of the secure-port.  This breaks hftp on a secure cluster and there is no 
> workaround.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2246) Shortcut a local client reads to a Datanodes files directly

2012-04-23 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13259961#comment-13259961
 ] 

Konstantin Shvachko commented on HDFS-2246:
---

+1, the patch looks good. 
It passes all tests and has run in an internal branch for some time.

> Shortcut a local client reads to a Datanodes files directly
> ---
>
> Key: HDFS-2246
> URL: https://issues.apache.org/jira/browse/HDFS-2246
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Sanjay Radia
>Assignee: Jitendra Nath Pandey
> Fix For: 0.23.1, 1.0.0
>
> Attachments: 0001-HDFS-347.-Local-reads.patch, HDFS-2246-22.patch, 
> HDFS-2246-branch-0.20-security-205.1.patch, 
> HDFS-2246-branch-0.20-security-205.2.patch, 
> HDFS-2246-branch-0.20-security-205.patch, 
> HDFS-2246-branch-0.20-security-205.patch, 
> HDFS-2246-branch-0.20-security-205.patch, 
> HDFS-2246-branch-0.20-security.3.patch, 
> HDFS-2246-branch-0.20-security.no-softref.patch, 
> HDFS-2246-branch-0.20-security.patch, HDFS-2246-branch-0.20-security.patch, 
> HDFS-2246-branch-0.20-security.patch, HDFS-2246-branch-0.20-security.patch, 
> HDFS-2246-trunk.patch, HDFS-2246-trunk.patch, HDFS-2246-trunk.patch, 
> HDFS-2246-trunk.patch, HDFS-2246-trunk.patch, HDFS-2246-trunk.patch, 
> HDFS-2246.20s.1.patch, HDFS-2246.20s.2.txt, HDFS-2246.20s.3.txt, 
> HDFS-2246.20s.4.txt, HDFS-2246.20s.patch, TestShortCircuitLocalRead.java, 
> localReadShortcut20-security.2patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3222) DFSInputStream#openInfo should not silently get the length as 0 when locations length is zero for last partial block.

2012-04-23 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-3222:
--

Target Version/s: 2.0.0, 3.0.0  (was: 3.0.0, 2.0.0)
  Status: Patch Available  (was: Open)

> DFSInputStream#openInfo should not silently get the length as 0 when 
> locations length is zero for last partial block.
> -
>
> Key: HDFS-3222
> URL: https://issues.apache.org/jira/browse/HDFS-3222
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 1.0.3, 2.0.0, 3.0.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-3222-Test.patch, HDFS-3222.patch
>
>
> I have seen one situation with an HBase cluster.
> The scenario is as follows:
> 1) 1.5 blocks had been written and synced.
> 2) Suddenly the cluster was restarted.
> A reader opened the file and tried to get the length. By this time the DNs 
> containing the partial block had not reported to the NN, so the locations for 
> this partial block would be 0. In this case, DFSInputStream assumes one block 
> size as the final size.
> The reader likewise assumes one block size is the final length and sets its 
> end marker there, so it ends up reading only partial data. Due to this, 
> HMaster could not replay the complete edits. 
> Actually this happened with the 20 version. Looking at the code, the same 
> issue should be present in trunk as well.
> {code}
> int replicaNotFoundCount = locatedblock.getLocations().length;
> 
> for(DatanodeInfo datanode : locatedblock.getLocations()) {
> ..
> ..
>  // Namenode told us about these locations, but none know about the replica
> // means that we hit the race between pipeline creation start and end.
> // we require all 3 because some other exception could have happened
> // on a DN that has it.  we want to report that error
> if (replicaNotFoundCount == 0) {
>   return 0;
> }
> {code}
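
Purely as a hedged sketch of the direction implied above (not the attached 
patch; the class, method, and message below are assumptions), the client could 
surface the condition instead of silently treating the last block as 
zero-length:

{code}
import java.io.IOException;

/** Sketch only: one way to avoid silently reporting a zero length. */
class LastBlockLengthSketch {
  /**
   * If the namenode reported no locations for the last partial block, fail
   * loudly (or let the caller retry) rather than returning 0, which makes the
   * reader believe the file ends at the previous block boundary.
   */
  static long checkedLength(int locationCount, long lengthFromDatanodes)
      throws IOException {
    if (locationCount == 0) {
      throw new IOException("No datanode has reported the last partial block "
          + "yet; cannot determine its length");
    }
    return lengthFromDatanodes;
  }
}
{code}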

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3222) DFSInputStream#openInfo should not silently get the length as 0 when locations length is zero for last partial block.

2012-04-23 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-3222:
--

Target Version/s: 2.0.0, 3.0.0  (was: 3.0.0, 2.0.0)
  Status: Open  (was: Patch Available)

> DFSInputStream#openInfo should not silently get the length as 0 when 
> locations length is zero for last partial block.
> -
>
> Key: HDFS-3222
> URL: https://issues.apache.org/jira/browse/HDFS-3222
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 1.0.3, 2.0.0, 3.0.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-3222-Test.patch, HDFS-3222.patch
>
>
> I have seen one situation with an HBase cluster.
> The scenario is as follows:
> 1) 1.5 blocks had been written and synced.
> 2) Suddenly the cluster was restarted.
> A reader opened the file and tried to get the length. By this time the DNs 
> containing the partial block had not reported to the NN, so the locations for 
> this partial block would be 0. In this case, DFSInputStream assumes one block 
> size as the final size.
> The reader likewise assumes one block size is the final length and sets its 
> end marker there, so it ends up reading only partial data. Due to this, 
> HMaster could not replay the complete edits. 
> Actually this happened with the 20 version. Looking at the code, the same 
> issue should be present in trunk as well.
> {code}
> int replicaNotFoundCount = locatedblock.getLocations().length;
> 
> for(DatanodeInfo datanode : locatedblock.getLocations()) {
> ..
> ..
>  // Namenode told us about these locations, but none know about the replica
> // means that we hit the race between pipeline creation start and end.
> // we require all 3 because some other exception could have happened
> // on a DN that has it.  we want to report that error
> if (replicaNotFoundCount == 0) {
>   return 0;
> }
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2492) BlockManager cross-rack replication checks only work for ScriptBasedMapping

2012-04-23 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13259887#comment-13259887
 ] 

Steve Loughran commented on HDFS-2492:
--

I'm +1 for this going into trunk and 2.0; the test is that none of the minidfs 
tests fail. 

> BlockManager cross-rack replication checks only work for ScriptBasedMapping
> ---
>
> Key: HDFS-2492
> URL: https://issues.apache.org/jira/browse/HDFS-2492
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 1.0.1, 1.0.2, 2.0.0, 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.0.0, 3.0.0
>
> Attachments: HDFS-2492-blockmanager.patch, 
> HDFS-2492-blockmanager.patch, HDFS-2492-blockmanager.patch, 
> HDFS-2492-blockmanager.patch, HDFS-2492-blockmanager.patch, 
> HDFS-2492-blockmanager.patch, HDFS-2492.patch
>
>
> The BlockManager cross-rack replication checks only work if script files are 
> used to provide the topology information, not if alternate plugins provide 
> it.
> This is because the BlockManager sets its rack checking flag if there is a 
> filename key
> {code}
> shouldCheckForEnoughRacks = 
> conf.get(DFSConfigKeys.NET_TOPOLOGY_SCRIPT_FILE_NAME_KEY) != null;
> {code}
> yet this filename key is only used if the topology mapper defined by 
> {code}
> DFSConfigKeys.NET_TOPOLOGY_NODE_SWITCH_MAPPING_IMPL_KEY
> {code}
> is an instance of {{ScriptBasedMapping}}
> If any other mapper is used, the system may be multi-rack, but the 
> BlockManager will not be aware of this fact unless the filename key is set to 
> something non-null.
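
As a hedged sketch of the direction the description suggests (not necessarily 
the attached patch), the flag could be derived from the configured mapping 
class rather than from the script file name alone. Everything below except the 
two DFSConfigKeys constants quoted above is an assumption, and the snippet is a 
fragment in the same style as the quoted code.

{code}
// Sketch only, not the committed change: derive the flag from the configured
// mapping class rather than from the script file name alone.
DNSToSwitchMapping mapping = ReflectionUtils.newInstance(
    conf.getClass(DFSConfigKeys.NET_TOPOLOGY_NODE_SWITCH_MAPPING_IMPL_KEY,
        ScriptBasedMapping.class, DNSToSwitchMapping.class),
    conf);

boolean scriptConfigured =
    conf.get(DFSConfigKeys.NET_TOPOLOGY_SCRIPT_FILE_NAME_KEY) != null;

// Assume the cluster may be multi-rack if a script is configured, or if a
// non-script topology plugin is in use (it may know about multiple racks).
shouldCheckForEnoughRacks =
    scriptConfigured || !(mapping instanceof ScriptBasedMapping);
{code}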

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3092) Enable journal protocol based editlog streaming for standby namenode

2012-04-23 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13259763#comment-13259763
 ] 

Bikas Saha commented on HDFS-3092:
--

From what I understand the approach is to dedicate a disk per journal daemon. 
That would be easy when running JD's on NN machines. For the 3rd JD one could 
use a disk on the JobTracker/ResourceManager machine.

> Enable journal protocol based editlog streaming for standby namenode
> 
>
> Key: HDFS-3092
> URL: https://issues.apache.org/jira/browse/HDFS-3092
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha, name-node
>Affects Versions: 0.24.0, 0.23.3
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Attachments: ComparisonofApproachesforHAJournals.pdf, 
> MultipleSharedJournals.pdf, MultipleSharedJournals.pdf, 
> MultipleSharedJournals.pdf
>
>
> Currently standby namenode relies on reading shared editlogs to stay current 
> with the active namenode, for namespace changes. BackupNode used streaming 
> edits from active namenode for doing the same. This jira is to explore using 
> journal protocol based editlog streams for the standby namenode. A daemon in 
> standby will get the editlogs from the active and write them to local edits. To 
> begin with, the existing standby mechanism of reading from a file will 
> continue to be used, reading from the local edits instead of from the shared 
> edits.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3275) Format command overwrites contents of non-empty shared edits dir if name dirs are empty without any prompting

2012-04-23 Thread amith (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

amith updated HDFS-3275:


Fix Version/s: 3.0.0
   Status: Patch Available  (was: Open)

> Format command overwrites contents of non-empty shared edits dir if name dirs 
> are empty without any prompting
> -
>
> Key: HDFS-3275
> URL: https://issues.apache.org/jira/browse/HDFS-3275
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, name-node
>Affects Versions: 2.0.0
>Reporter: Vinithra Varadharajan
>Assignee: amith
> Fix For: 3.0.0
>
> Attachments: HDFS-3275.patch, HDFS-3275_1.patch
>
>
> To reproduce:
> # Configure a NameNode with namedirs and a shared edits dir, all of which are 
> empty.
> # Run hdfs namenode -format. Namedirs and shared edits dir get populated.
> # Delete the contents of the namedirs. Leave the shared edits dir as is. 
> Check the timestamps of the shared edits dir contents.
> # Run format again. The namedirs as well as the shared edits dir get 
> formatted. The shared edits dir's contents have been replaced without any 
> prompting.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3275) Format command overwrites contents of non-empty shared edits dir if name dirs are empty without any prompting

2012-04-23 Thread amith (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

amith updated HDFS-3275:


Attachment: HDFS-3275_1.patch

Corrected the comments

> Format command overwrites contents of non-empty shared edits dir if name dirs 
> are empty without any prompting
> -
>
> Key: HDFS-3275
> URL: https://issues.apache.org/jira/browse/HDFS-3275
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, name-node
>Affects Versions: 2.0.0
>Reporter: Vinithra Varadharajan
>Assignee: amith
> Fix For: 3.0.0
>
> Attachments: HDFS-3275.patch, HDFS-3275_1.patch
>
>
> To reproduce:
> # Configure a NameNode with namedirs and a shared edits dir, all of which are 
> empty.
> # Run hdfs namenode -format. Namedirs and shared edits dir get populated.
> # Delete the contents of the namedirs. Leave the shared edits dir as is. 
> Check the timestamps of the shared edits dir contents.
> # Run format again. The namedirs as well as the shared edits dir get 
> formatted. The shared edits dir's contents have been replaced without any 
> prompting.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3275) Format command overwrites contents of non-empty shared edits dir if name dirs are empty without any prompting

2012-04-23 Thread amith (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

amith updated HDFS-3275:


Status: Open  (was: Patch Available)

> Format command overwrites contents of non-empty shared edits dir if name dirs 
> are empty without any prompting
> -
>
> Key: HDFS-3275
> URL: https://issues.apache.org/jira/browse/HDFS-3275
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, name-node
>Affects Versions: 2.0.0
>Reporter: Vinithra Varadharajan
>Assignee: amith
> Attachments: HDFS-3275.patch
>
>
> To reproduce:
> # Configure a NameNode with namedirs and a shared edits dir, all of which are 
> empty.
> # Run hdfs namenode -format. Namedirs and shared edits dir get populated.
> # Delete the contents of the namedirs. Leave the shared edits dir as is. 
> Check the timestamps of the shared edits dir contents.
> # Run format again. The namedirs as well as the shared edits dir get 
> formatted. The shared edits dir's contents have been replaced without any 
> prompting.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3312) Hftp selects wrong token service

2012-04-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13259713#comment-13259713
 ] 

Hadoop QA commented on HDFS-3312:
-

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12523807/HDFS-3312.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2313//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2313//console

This message is automatically generated.

> Hftp selects wrong token service
> 
>
> Key: HDFS-3312
> URL: https://issues.apache.org/jira/browse/HDFS-3312
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 0.24.0, 0.23.3, 2.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Blocker
> Attachments: HDFS-3312.patch
>
>
> Hftp tries to select a token based on the non-secure port in the uri, instead 
> of the secure-port.  This breaks hftp on a secure cluster and there is no 
> workaround.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3312) Hftp selects wrong token service

2012-04-23 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-3312:
--

Attachment: HDFS-3312.patch

Updates hftp to use the secure port to find tokens.  Prior changes select 
tokens based on the URI, so hftp and hsftp are updated to store URIs instead of 
socket addresses for the secure and non-secure endpoints.
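
A minimal sketch of that idea (not the patch itself; the class, field, and 
method names below are assumptions): keep both URIs and key the token service 
off the secure one, so token selection matches what a secure cluster actually 
issued.

{code}
import java.net.URI;

/** Sketch only: keep both URIs and key the token service off the secure one. */
class TokenServiceSketch {
  private final URI nnUri;        // non-secure URI used for hftp data transfers
  private final URI nnSecureUri;  // secure URI used for delegation tokens

  TokenServiceSketch(URI nnUri, URI nnSecureUri) {
    this.nnUri = nnUri;
    this.nnSecureUri = nnSecureUri;
  }

  /** Tokens issued by a secure cluster are keyed by the secure host:port. */
  String tokenService() {
    return nnSecureUri.getHost() + ":" + nnSecureUri.getPort();
  }
}
{code}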

> Hftp selects wrong token service
> 
>
> Key: HDFS-3312
> URL: https://issues.apache.org/jira/browse/HDFS-3312
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 0.24.0, 0.23.3, 2.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Blocker
> Attachments: HDFS-3312.patch
>
>
> Hftp tries to select a token based on the non-secure port in the uri, instead 
> of the secure-port.  This breaks hftp on a secure cluster and there is no 
> workaround.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3312) Hftp selects wrong token service

2012-04-23 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-3312:
--

Status: Patch Available  (was: Open)

> Hftp selects wrong token service
> 
>
> Key: HDFS-3312
> URL: https://issues.apache.org/jira/browse/HDFS-3312
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 0.24.0, 0.23.3, 2.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Blocker
> Attachments: HDFS-3312.patch
>
>
> Hftp tries to select a token based on the non-secure port in the uri, instead 
> of the secure-port.  This breaks hftp on a secure cluster and there is no 
> workaround.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-3312) Hftp selects wrong token service

2012-04-23 Thread Daryn Sharp (JIRA)
Daryn Sharp created HDFS-3312:
-

 Summary: Hftp selects wrong token service
 Key: HDFS-3312
 URL: https://issues.apache.org/jira/browse/HDFS-3312
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Affects Versions: 0.24.0, 0.23.3, 2.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Blocker


Hftp tries to select a token based on the non-secure port in the uri, instead 
of the secure-port.  This breaks hftp on a secure cluster and there is no 
workaround.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3307) when save FSImage ,HDFS( or SecondaryNameNode or FSImage)can't handle some file whose file name has some special messy code(乱码)

2012-04-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13259527#comment-13259527
 ] 

Hadoop QA commented on HDFS-3307:
-

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12523758/TestUTF8AndStringGetBytes.java
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

-1 patch.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2312//console

This message is automatically generated.

> when save FSImage  ,HDFS( or  SecondaryNameNode or FSImage)can't handle some 
> file whose file name has some special messy code(乱码)
> -
>
> Key: HDFS-3307
> URL: https://issues.apache.org/jira/browse/HDFS-3307
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.20.1
> Environment: SUSE LINUX
>Reporter: yixiaohua
> Attachments: FSImage.java, ProblemString.txt, 
> TestUTF8AndStringGetBytes.java, TestUTF8AndStringGetBytes.java
>
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> this is the log information of the exception from the SecondaryNameNode: 
> 2012-03-28 00:48:42,553 ERROR 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: 
> java.io.IOException: Found lease for
>  non-existent file 
> /user/boss/pgv/fission/task16/split/_temporary/_attempt_201203271849_0016_r_000174_0/@???
> ??tor.qzone.qq.com/keypart-00174
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFilesUnderConstruction(FSImage.java:1211)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:959)
> at 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:589)
> at 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$000(SecondaryNameNode.java:473)
> at 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:350)
> at 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:314)
> at 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:225)
> at java.lang.Thread.run(Thread.java:619)
> this is the log information  about the file from namenode:
> 2012-03-28 00:32:26,528 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=boss,boss 
> ip=/10.131.16.34cmd=create  
> src=/user/boss/pgv/fission/task16/split/_temporary/_attempt_201203271849_0016_r_000174_0/
>   @?tor.qzone.qq.com/keypart-00174 dst=null
> perm=boss:boss:rw-r--r--
> 2012-03-28 00:37:42,387 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> NameSystem.allocateBlock: 
> /user/boss/pgv/fission/task16/split/_temporary/_attempt_201203271849_0016_r_000174_0/
>   @?tor.qzone.qq.com/keypart-00174. 
> blk_2751836614265659170_184668759
> 2012-03-28 00:37:42,696 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> NameSystem.completeFile: file 
> /user/boss/pgv/fission/task16/split/_temporary/_attempt_201203271849_0016_r_000174_0/
>   @?tor.qzone.qq.com/keypart-00174 is closed by 
> DFSClient_attempt_201203271849_0016_r_000174_0
> 2012-03-28 00:37:50,315 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=boss,boss 
> ip=/10.131.16.34cmd=rename  
> src=/user/boss/pgv/fission/task16/split/_temporary/_attempt_201203271849_0016_r_000174_0/
>   @?tor.qzone.qq.com/keypart-00174 
> dst=/user/boss/pgv/fission/task16/split/  @?
> tor.qzone.qq.com/keypart-00174  perm=boss:boss:rw-r--r--
> After checking the code that saves the FSImage, I found a problem that may be 
> a bug in the HDFS code; I paste it below:
> -this is the saveFSImage method in FSImage.java; I have marked 
> the problem code
> /**
>* Save the contents of the FS image to the file.
>*/
>   void saveFSImage(File newFile) throws IOException {
> FSNamesystem fsNamesys = FSNamesystem.getFSNamesystem();
> FSDirectory fsDir = fsNamesys.dir;
> long startTime = FSNamesystem.now();
> //
> // Write out data
> //
> DataOutputStream out = new DataOutputStream(
> new BufferedOutputStream(
>   

[jira] [Updated] (HDFS-3307) when save FSImage ,HDFS( or SecondaryNameNode or FSImage)can't handle some file whose file name has some special messy code(乱码)

2012-04-23 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated HDFS-3307:


Hadoop Flags: Incompatible change,Reviewed
  Status: Patch Available  (was: Reopened)

> when save FSImage  ,HDFS( or  SecondaryNameNode or FSImage)can't handle some 
> file whose file name has some special messy code(乱码)
> -
>
> Key: HDFS-3307
> URL: https://issues.apache.org/jira/browse/HDFS-3307
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.20.1
> Environment: SUSE LINUX
>Reporter: yixiaohua
> Attachments: FSImage.java, ProblemString.txt, 
> TestUTF8AndStringGetBytes.java, TestUTF8AndStringGetBytes.java
>
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> this is the log information of the exception from the SecondaryNameNode: 
> 2012-03-28 00:48:42,553 ERROR 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: 
> java.io.IOException: Found lease for
>  non-existent file 
> /user/boss/pgv/fission/task16/split/_temporary/_attempt_201203271849_0016_r_000174_0/@???
> ??tor.qzone.qq.com/keypart-00174
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFilesUnderConstruction(FSImage.java:1211)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:959)
> at 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:589)
> at 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$000(SecondaryNameNode.java:473)
> at 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:350)
> at 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:314)
> at 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:225)
> at java.lang.Thread.run(Thread.java:619)
> this is the log information  about the file from namenode:
> 2012-03-28 00:32:26,528 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=boss,boss 
> ip=/10.131.16.34cmd=create  
> src=/user/boss/pgv/fission/task16/split/_temporary/_attempt_201203271849_0016_r_000174_0/
>   @?tor.qzone.qq.com/keypart-00174 dst=null
> perm=boss:boss:rw-r--r--
> 2012-03-28 00:37:42,387 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> NameSystem.allocateBlock: 
> /user/boss/pgv/fission/task16/split/_temporary/_attempt_201203271849_0016_r_000174_0/
>   @?tor.qzone.qq.com/keypart-00174. 
> blk_2751836614265659170_184668759
> 2012-03-28 00:37:42,696 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> NameSystem.completeFile: file 
> /user/boss/pgv/fission/task16/split/_temporary/_attempt_201203271849_0016_r_000174_0/
>   @?tor.qzone.qq.com/keypart-00174 is closed by 
> DFSClient_attempt_201203271849_0016_r_000174_0
> 2012-03-28 00:37:50,315 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=boss,boss 
> ip=/10.131.16.34cmd=rename  
> src=/user/boss/pgv/fission/task16/split/_temporary/_attempt_201203271849_0016_r_000174_0/
>   @?tor.qzone.qq.com/keypart-00174 
> dst=/user/boss/pgv/fission/task16/split/  @?
> tor.qzone.qq.com/keypart-00174  perm=boss:boss:rw-r--r--
> After checking the code that saves the FSImage, I found a problem that may be 
> a bug in the HDFS code; I paste it below:
> -this is the saveFSImage method in FSImage.java; I have marked 
> the problem code
> /**
>* Save the contents of the FS image to the file.
>*/
>   void saveFSImage(File newFile) throws IOException {
> FSNamesystem fsNamesys = FSNamesystem.getFSNamesystem();
> FSDirectory fsDir = fsNamesys.dir;
> long startTime = FSNamesystem.now();
> //
> // Write out data
> //
> DataOutputStream out = new DataOutputStream(
> new BufferedOutputStream(
>  new 
> FileOutputStream(newFile)));
> try {
>   .
> 
>   // save the rest of the nodes
>   saveImage(strbuf, 0, fsDir.rootDir, out);--problem
>   fsNamesys.saveFilesUnderConstruction(out);--problem  
> detail is below
>   strbuf = null;
> } finally {
>   out.close();
> }
> LOG.info("Image file of size " + newFile.length() + " saved in " 
> + (FSNamesystem.now() - startTime)/1000 + " seconds.");
>   }
>  /**
>* Save file tree image starting from the given root.
>* This is a recursive procedure, which