[jira] [Updated] (HDFS-3071) haadmin failover command does not provide enough detail when the target NN is not ready to be active

2012-03-23 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-3071:
--

Fix Version/s: 0.23.3  (was: 0.23.2)

Fixed release version

> haadmin failover command does not provide enough detail when the target NN 
> is not ready to be active
> 
>
> Key: HDFS-3071
> URL: https://issues.apache.org/jira/browse/HDFS-3071
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha
>Affects Versions: 0.24.0
>Reporter: Philip Zeyliger
>Assignee: Todd Lipcon
> Fix For: 0.24.0, 0.23.3
>
> Attachments: hdfs-3071.txt, hdfs-3071.txt, hdfs-3071.txt, 
> hdfs-3071.txt, hdfs-3071.txt, hdfs-3071.txt
>
>
> When running the failover command, you can get an error message like the 
> following:
> {quote}
> $ hdfs --config $(pwd) haadmin -failover namenode2 namenode1
> Failover failed: xxx.yyy/1.2.3.4:8020 is not ready to become active
> {quote}
> Unfortunately, the error message doesn't describe why that node isn't ready 
> to be active.  In my case, the target namenode's logs don't indicate anything 
> either. It turned out that the issue was "Safe mode is ON.Resources are low 
> on NN. Safe mode must be turned off manually.", but ideally the user would be 
> told that at the time of the failover.
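The improvement asked for above can be sketched in a few lines: have the target NN report the reason it is not ready, and fold that reason into the failover error. This is an illustrative sketch only; the names `ReadyCheck`, `notReadyReason`, and `FailoverSketch` are hypothetical and not the real Hadoop HA API.

```java
// Hypothetical sketch: surface the target NN's not-ready reason in the
// failover error message. ReadyCheck/notReadyReason are illustrative names,
// not the actual Hadoop HA interfaces.
class ReadyCheck {
    final boolean ready;
    final String notReadyReason; // e.g. the safe-mode message from the NN

    ReadyCheck(boolean ready, String notReadyReason) {
        this.ready = ready;
        this.notReadyReason = notReadyReason;
    }
}

public class FailoverSketch {
    // Build the error shown to the operator, including the underlying reason.
    static String failoverError(String target, ReadyCheck status) {
        if (status.ready) {
            return null; // no error: target can become active
        }
        return "Failover failed: " + target + " is not ready to become active: "
                + status.notReadyReason;
    }

    public static void main(String[] args) {
        ReadyCheck status = new ReadyCheck(false,
                "Safe mode is ON. Resources are low on NN.");
        System.out.println(failoverError("namenode1", status));
    }
}
```

With this, the operator would see the safe-mode message at failover time instead of having to dig through the target NN's logs.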

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3286) When the threshold value for balancer is 0 (zero), unexpected output is displayed

2012-04-18 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-3286:
--

Target Version/s: 2.0.0, 3.0.0  (was: 0.23.3, 0.24.0)

> When the threshold value for balancer is 0 (zero), unexpected output is 
> displayed
> 
>
> Key: HDFS-3286
> URL: https://issues.apache.org/jira/browse/HDFS-3286
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer
>Affects Versions: 0.23.0
>Reporter: J.Andreina
>Assignee: Ashish Singhi
> Fix For: 0.24.0
>
>
> Replication factor = 1
> Step 1: Start NN and DN1; write 4 GB of data
> Step 2: Start DN2
> Step 3: Issue the balancer command (./hdfs balancer -threshold 0)
> The threshold parameter is a fraction in the range of (0%, 100%) with a 
> default value of 10%.
> When the above scenario is executed, the source DN and target DN are chosen, 
> and the number of bytes to be moved from source to target DN is also 
> calculated.
> Then the balancer exits with the following message, "No block can be 
> moved. Exiting...", which is not expected.
> {noformat}
> HOST-xx-xx-xx-xx:/home/Andreina/APril10/install/hadoop/namenode/bin # ./hdfs 
> balancer -threshold 0
> 12/04/16 16:22:07 INFO balancer.Balancer: Using a threshold of 0.0
> 12/04/16 16:22:07 INFO balancer.Balancer: namenodes = 
> [hdfs://HOST-xx-xx-xx-xx:9000]
> 12/04/16 16:22:07 INFO balancer.Balancer: p = 
> Balancer.Parameters[BalancingPolicy.Node, threshold=0.0]
> Time Stamp   Iteration#  Bytes Already Moved  Bytes Left To Move  
> Bytes Being Moved
> 12/04/16 16:22:10 INFO net.NetworkTopology: Adding a new node: 
> /default-rack/yy.yy.yy.yy:50176
> 12/04/16 16:22:10 INFO net.NetworkTopology: Adding a new node: 
> /default-rack/xx.xx.xx.xx:50010
> 12/04/16 16:22:10 INFO balancer.Balancer: 1 over-utilized: 
> [Source[xx.xx.xx.xx:50010, utilization=7.212458091389678]]
> 12/04/16 16:22:10 INFO balancer.Balancer: 1 underutilized: 
> [BalancerDatanode[yy.yy.yy.yy:50176, utilization=4.650670324367203E-5]]
> 12/04/16 16:22:10 INFO balancer.Balancer: Need to move 1.77 GB to make the 
> cluster balanced.
> No block can be moved. Exiting...
> Balancing took 5.142 seconds
> {noformat}
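For context, the balancer's threshold is the allowed deviation of a datanode's utilization from the cluster average. A simplified model of that check (not the actual `Balancer` code) shows why a threshold of 0 flags almost every node as over- or under-utilized:

```java
// Simplified model of the balancer's utilization check; not the real
// org.apache.hadoop.hdfs.server.balancer.Balancer logic.
public class ThresholdSketch {
    // A node is balanced if its utilization is within `thresholdPct`
    // percentage points of the cluster-average utilization.
    static boolean isBalanced(double nodeUtilPct, double avgUtilPct,
                              double thresholdPct) {
        return Math.abs(nodeUtilPct - avgUtilPct) <= thresholdPct;
    }

    public static void main(String[] args) {
        double avg = 3.61; // roughly the average of the two DNs in the log above
        // With threshold 0, both DNs are flagged (one over-, one under-utilized),
        // so the balancer computes bytes to move, yet can still fail to find a
        // movable block and exit as reported above.
        System.out.println(isBalanced(7.21, avg, 0.0));  // false: over-utilized
        System.out.println(isBalanced(7.21, avg, 10.0)); // true with default 10%
    }
}
```

This is why a threshold of exactly 0 is a degenerate input: any nonzero deviation from the average marks a node for balancing.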





[jira] [Updated] (HDFS-3041) DFSOutputStream.close doesn't properly handle interruption

2012-04-18 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-3041:
--

Target Version/s: 2.0.0, 3.0.0  (was: 0.23.3, 0.24.0)

> DFSOutputStream.close doesn't properly handle interruption
> --
>
> Key: HDFS-3041
> URL: https://issues.apache.org/jira/browse/HDFS-3041
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: test.txt
>
>
> TestHFlush.testHFlushInterrupted can fail occasionally due to a race: if a 
> thread is interrupted while calling close(), then the {{finally}} clause of 
> the {{close}} function sets {{closed = true}}. At this point it has enqueued 
> the "end of block" packet to the DNs, but hasn't called {{completeFile}}. 
> Then, if {{close}} is called again (as in the test case), it will be 
> short-circuited since {{closed}} is already true. Thus {{completeFile}} never 
> ends up getting called. This also means that the test can fail if the 
> pipeline is running slowly, since the assertion that the file is the correct 
> length won't see the last packet or two.
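The race described above can be modeled in a few lines (a simplified illustration, not the actual DFSOutputStream code; the real interruption would surface as an InterruptedIOException rather than the RuntimeException used here): once the `finally` clause marks the stream closed, a second `close()` returns immediately even though `completeFile` never ran.

```java
// Simplified model of the close() race; not the real DFSOutputStream.
public class CloseRaceSketch {
    private boolean closed = false;
    private boolean fileCompleted = false;

    // `interrupt` simulates the thread being interrupted mid-close.
    void close(boolean interrupt) {
        if (closed) {
            return; // second call short-circuits: completeFile() never happens
        }
        try {
            if (interrupt) {
                // interrupted after queueing the end-of-block packet,
                // but before calling completeFile() on the NameNode
                throw new RuntimeException("simulated interrupt in close");
            }
            fileCompleted = true; // stands in for completeFile()
        } finally {
            closed = true; // the bug: set even when completeFile() did not run
        }
    }

    boolean isFileCompleted() {
        return fileCompleted;
    }

    public static void main(String[] args) {
        CloseRaceSketch s = new CloseRaceSketch();
        try {
            s.close(true);  // first close is interrupted
        } catch (RuntimeException expected) {
        }
        s.close(false);     // retry returns immediately: closed is already true
        System.out.println(s.isFileCompleted()); // prints false
    }
}
```

A fix would have to leave the stream re-closeable (or complete the file) when close is aborted by an interrupt, rather than unconditionally setting `closed` in the `finally` block.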





[jira] [Updated] (HDFS-3141) The NN should log "missing" blocks

2012-04-18 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-3141:
--

Target Version/s: 2.0.0, 3.0.0  (was: 0.23.3, 1.1.0)

> The NN should log "missing" blocks
> --
>
> Key: HDFS-3141
> URL: https://issues.apache.org/jira/browse/HDFS-3141
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Eli Collins
>Assignee: Eli Collins
> Attachments: hdfs-3141-b1.txt
>
>
> It would help debugging if the NN logged "missing" blocks at the info 
> level. In v1, missing means there are no live / decommissioned replicas (i.e. 
> they're all excess or corrupt); in trunk it means all replicas of the block 
> are corrupt.





[jira] [Updated] (HDFS-3123) BNN gets a NullPointerException and shuts down when the NameNode is formatted

2012-04-18 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-3123:
--

Target Version/s: 2.0.0, 3.0.0  (was: 0.23.3, 0.24.0)

> BNN gets a NullPointerException and shuts down when the NameNode is 
> formatted 
> --
>
> Key: HDFS-3123
> URL: https://issues.apache.org/jira/browse/HDFS-3123
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.24.0, 0.23.4
>Reporter: Brahma Reddy Battula
>Assignee: Uma Maheswara Rao G
>
> Scenario 1
> ==
> Start the NN and BNN.
> Stop the NN and BNN.
> Format the NN and start only the BNN.
> The BNN then gets a NullPointerException and shuts down.
> {noformat}
> 12/03/20 21:26:05 ERROR ipc.RPC: Tried to call RPC.stopProxy on an object 
> that is not a proxy.
> java.lang.IllegalArgumentException: not a proxy instance
>   at java.lang.reflect.Proxy.getInvocationHandler(Proxy.java:637)
>   at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:591)
>   at 
> org.apache.hadoop.hdfs.server.namenode.BackupNode.stop(BackupNode.java:194)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:547)
>   at 
> org.apache.hadoop.hdfs.server.namenode.BackupNode.(BackupNode.java:86)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:847)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:908)
> 12/03/20 21:26:05 ERROR ipc.RPC: Could not get invocation handler null for 
> proxy class class 
> org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolTranslatorPB, or invocation 
> handler is not closeable.
> 12/03/20 21:26:05 ERROR namenode.NameNode: Exception in namenode join
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.getFSImage(NameNode.java:609)
>   at 
> org.apache.hadoop.hdfs.server.namenode.BackupNode.stop(BackupNode.java:205)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:547)
>   at 
> org.apache.hadoop.hdfs.server.namenode.BackupNode.(BackupNode.java:86)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:847)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:908)
> 12/03/20 21:26:05 INFO namenode.NameNode: SHUTDOWN_MSG: 
> /
> SHUTDOWN_MSG: Shutting down NameNode at HOST-10-18-40-233/10.18.40.233
> /
> {noformat}





[jira] [Updated] (HDFS-3128) TestResolveHdfsSymlink#testFcResolveAfs shouldn't use /tmp

2012-04-18 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-3128:
--

Target Version/s: 2.0.0, 3.0.0  (was: 0.23.3)

> TestResolveHdfsSymlink#testFcResolveAfs shouldn't use /tmp
> --
>
> Key: HDFS-3128
> URL: https://issues.apache.org/jira/browse/HDFS-3128
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Eli Collins
>Priority: Minor
>
> Saw this on jenkins: TestResolveHdfsSymlink#testFcResolveAfs creates 
> /tmp/alpha, which interferes with other executors on the same machine.





[jira] [Updated] (HDFS-3025) Automatic log sync shouldn't happen inside logEdit path

2012-04-18 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-3025:
--

Target Version/s: 2.0.0, 3.0.0  (was: 0.23.3)

> Automatic log sync shouldn't happen inside logEdit path
> ---
>
> Key: HDFS-3025
> URL: https://issues.apache.org/jira/browse/HDFS-3025
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node, performance
>Affects Versions: 0.23.3
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hdfs-3025.txt, hdfs-3025.txt, hdfs-3025.txt
>
>
> HDFS-3020 fixes the "automatic log sync" functionality so that, when logEdits 
> is called without log sync, it eventually triggers a sync. That sync ends up 
> being inline, though, which means the FSN lock is usually held during it. 
> This causes a bunch of threads to pile up.
> Instead, we should have it just set a "syncNeeded" flag and trigger a sync 
> from another thread which isn't holding the lock (or from the same thread 
> using a "logSyncIfNeeded" call).
> (credit to the FB branch for this idea)
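The proposed "syncNeeded" approach might look like the following sketch (illustrative only, not the actual FSEditLog code): `logEdit` only sets a flag, and the sync runs later from a thread that is not holding the FSN lock.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of the proposed deferred-sync pattern; not the real FSEditLog.
public class DeferredSyncSketch {
    private final AtomicBoolean syncNeeded = new AtomicBoolean(false);

    // Called while holding the namesystem lock: record the edit, do not sync.
    void logEdit() {
        // ... append the edit to the in-memory edit buffer ...
        syncNeeded.set(true);
    }

    // Called from a separate thread, or from the same thread after releasing
    // the lock ("logSyncIfNeeded"): sync only if an edit is pending.
    boolean logSyncIfNeeded() {
        if (syncNeeded.getAndSet(false)) {
            // ... flush and fsync the edit log here, outside the FSN lock ...
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        DeferredSyncSketch log = new DeferredSyncSketch();
        log.logEdit();
        System.out.println(log.logSyncIfNeeded()); // prints true: sync pending
        System.out.println(log.logSyncIfNeeded()); // prints false: nothing left
    }
}
```

The `AtomicBoolean.getAndSet(false)` makes the check-and-clear a single atomic step, so concurrent `logEdit` callers never lose a pending sync and the expensive fsync happens off the lock path.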





[jira] [Updated] (HDFS-2617) Replaced Kerberized SSL for image transfer and fsck with SPNEGO-based solution

2012-04-18 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-2617:
--

Target Version/s: 2.0.0, 3.0.0  (was: 0.23.3)

> Replaced Kerberized SSL for image transfer and fsck with SPNEGO-based solution
> --
>
> Key: HDFS-2617
> URL: https://issues.apache.org/jira/browse/HDFS-2617
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jakob Homan
>Assignee: Jakob Homan
> Attachments: HDFS-2617-a.patch
>
>
> The current approach to secure and authenticate nn web services is based on 
> Kerberized SSL and was developed when a SPNEGO solution wasn't available. Now 
> that we have one, we can get rid of the non-standard KSSL and use SPNEGO 
> throughout.  This will simplify setup and configuration.  Also, Kerberized 
> SSL is a non-standard approach with its own quirks and dark corners 
> (HDFS-2386).





[jira] [Updated] (HDFS-3096) dfs.datanode.data.dir.perm is set to 755 instead of 700

2012-04-18 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-3096:
--

Target Version/s: 1.0.3, 2.0.0  (was: 1.0.3, 0.23.2)

> dfs.datanode.data.dir.perm is set to 755 instead of 700
> ---
>
> Key: HDFS-3096
> URL: https://issues.apache.org/jira/browse/HDFS-3096
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node
>Affects Versions: 0.23.0, 1.0.0
>Reporter: Bikas Saha
>Assignee: Bikas Saha
>
> dfs.datanode.data.dir.perm is used by the datanode to set the permissions of 
> its data directories. It defaults to 755, which gives everyone read 
> permission on the directory, opening up the possibility of anyone reading 
> the data blocks in a secure cluster. Admins can override this 
> config, but it is sub-optimal practice for the default to be weak. IMO, the 
> default should be strong and admins can relax it if necessary.
> The fix is to change the default permissions to 700.
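Until the default changes, admins can apply the proposed hardening themselves with a one-line override in hdfs-site.xml (this sets the same value the issue asks to make the default):

```xml
<!-- hdfs-site.xml: restrict DataNode storage directories to the DN user -->
<property>
  <name>dfs.datanode.data.dir.perm</name>
  <value>700</value>
</property>
```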





[jira] [Updated] (HDFS-2815) Namenode is not coming out of safemode when we perform (NN crash + restart). Also FSCK report shows blocks missed.

2012-04-18 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-2815:
--

Target Version/s: 1.1.0, 2.0.0, 3.0.0  (was: 0.23.2, 1.1.0, 0.24.0)

> Namenode is not coming out of safemode when we perform (NN crash + restart). 
> Also FSCK report shows blocks missed.
> --
>
> Key: HDFS-2815
> URL: https://issues.apache.org/jira/browse/HDFS-2815
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.22.0, 0.24.0, 0.23.1, 1.0.0, 1.1.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Critical
> Fix For: 0.24.0, 0.23.2
>
> Attachments: HDFS-2815-22-branch.patch, HDFS-2815-Branch-1.patch, 
> HDFS-2815.patch, HDFS-2815.patch
>
>
> When testing HA (internal) with continuous switches at roughly 5-minute 
> intervals, I found some *blocks missed* and the namenode went into safemode 
> after the next switch.
>
> After analysis, I found that these files had already been deleted by 
> clients, but I don't see any delete commands in the namenode log files. The 
> namenode nevertheless added those blocks to the invalidateSets and the DNs 
> deleted the blocks.
> On restart, the namenode went into safemode, expecting more blocks before 
> it could leave safemode.
> The reason could be that the file is deleted in memory and its blocks added 
> to the invalidates before the edits are synced to the editlog file. By that 
> time the NN has asked the DNs to delete those blocks. The namenode then 
> shuts down before persisting to the editlog (log behind).
> For this reason, we may not get the INFO logs about the delete, and when we 
> restart the namenode (in my scenario it is another switch), the namenode 
> also expects the deleted blocks, as the delete request was never persisted 
> to the editlog.
> I reproduced this scenario with debug points. *I feel we should not add 
> the blocks to invalidates before persisting to the editlog*. 
> Note: for the switch, we used kill -9 (force kill).
> I am currently on version 0.20.2. The same was verified on 0.23 as well, in 
> a normal crash + restart scenario.





[jira] [Updated] (HDFS-2897) Enable a single 2nn to checkpoint multiple nameservices

2012-04-18 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-2897:
--

Target Version/s: 2.0.0, 3.0.0  (was: 0.23.1)

> Enable a single 2nn to checkpoint multiple nameservices
> ---
>
> Key: HDFS-2897
> URL: https://issues.apache.org/jira/browse/HDFS-2897
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: name-node
>Affects Versions: 0.23.0
>Reporter: Eli Collins
> Fix For: 0.23.0
>
>
> The dfs.namenode.secondary.http-address needs to be suffixed with a 
> particular nameservice. It would be useful to be able to configure 
> a single 2NN to checkpoint all the nameservices for a NN rather than having 
> to run a separate 2NN per nameservice. It could potentially checkpoint all 
> namenode IDs for a nameservice as well, but given that the standby is capable 
> of checkpointing and is required, I think we can ignore this case.





[jira] [Updated] (HDFS-2771) Move Federation and WebHDFS documentation into HDFS project

2012-04-18 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-2771:
--

Target Version/s: 2.0.0, 3.0.0  (was: 0.23.1)

> Move Federation and WebHDFS documentation into HDFS project
> ---
>
> Key: HDFS-2771
> URL: https://issues.apache.org/jira/browse/HDFS-2771
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: documentation
>Affects Versions: 0.23.0
>Reporter: Todd Lipcon
>
> For some strange reason, the WebHDFS and Federation documentation is 
> currently in the hadoop-yarn site. This is counter-intuitive. We should move 
> these documents to an hdfs site, or if we think that all documentation should 
> go on one site, it should go into the hadoop-common project somewhere.





[jira] [Updated] (HDFS-2892) Some property descriptions are not given (hdfs-default.xml)

2012-04-18 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-2892:
--

Target Version/s: 2.0.0, 3.0.0  (was: 0.23.1, 0.23.0)

> Some property descriptions are not given (hdfs-default.xml) 
> --
>
> Key: HDFS-2892
> URL: https://issues.apache.org/jira/browse/HDFS-2892
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.23.0
>Reporter: Brahma Reddy Battula
>Priority: Trivial
>
> Hi. I took the 0.23.0 release from 
> http://hadoop.apache.org/common/releases.html#11+Nov%2C+2011%3A+release+0.23.0+available
> I went through all the properties provided in hdfs-default.xml, and some of 
> the property descriptions are not given. It would be better to provide a 
> description and usage (how to configure it) for each property. Also, only 
> MapReduce-related jars are provided. Please check the following two 
> configurations.
>  *No Description*
> {noformat}
> <property>
>   <name>dfs.datanode.https.address</name>
>   <value>0.0.0.0:50475</value>
> </property>
> <property>
>   <name>dfs.namenode.https-address</name>
>   <value>0.0.0.0:50470</value>
> </property>
> {noformat}
> It would be better to mention example usage (what to configure, and the 
> format/syntax) in the description. Here I did not understand what "default" 
> means: whether it is the name of a network interface or something else.
> <property>
>   <name>dfs.datanode.dns.interface</name>
>   <value>default</value>
>   <description>The name of the Network Interface from which a data node 
>   should report its IP address.
>   </description>
> </property>
> The following property is commented out. If it is not supported, it would 
> be better to remove it.
> <property>
>   <name>dfs.cluster.administrators</name>
>   <value>ACL for the admins</value>
>   <description>This configuration is used to control who can access the
>   default servlets in the namenode, etc.
>   </description>
> </property>
> A small clarification for the following property: if some value is 
> configured, will the NN stay in safe mode for up to this much time?
> May I know the usage of the following property?
> <property>
>   <name>dfs.blockreport.initialDelay</name>
>   <value>0</value>
>   <description>Delay for first block report in seconds.</description>
> </property>





[jira] [Updated] (HDFS-2999) DN metrics should include per-disk utilization

2012-04-18 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-2999:
--

Target Version/s: 2.0.0, 3.0.0  (was: 0.23.2)

> DN metrics should include per-disk utilization
> --
>
> Key: HDFS-2999
> URL: https://issues.apache.org/jira/browse/HDFS-2999
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Affects Versions: 0.23.1
>Reporter: Aaron T. Myers
>
> We should have per-dfs.data.dir metrics in the DN's metrics report.





[jira] [Updated] (HDFS-2798) Append may race with datanode block scanner, causing replica to be incorrectly marked corrupt

2012-04-18 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-2798:
--

Target Version/s: 0.23.3, 2.0.0, 3.0.0  (was: 0.23.1)

> Append may race with datanode block scanner, causing replica to be 
> incorrectly marked corrupt
> -
>
> Key: HDFS-2798
> URL: https://issues.apache.org/jira/browse/HDFS-2798
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node
>Affects Versions: 0.22.0, 0.23.0
>Reporter: Todd Lipcon
>Priority: Critical
>
> When a pipeline is setup for append, the block's metadata file is renamed 
> before the block is removed from the datanode block scanner queues. This can 
> cause a race condition where the block scanner incorrectly marks the block as 
> corrupt, since it tries to scan the file corresponding to the old genstamp.
> This causes TestFileAppend2 to time out in extremely rare circumstances - the 
> corrupt replica prevents the writer thread from completing the file.





[jira] [Updated] (HDFS-2884) TestDecommission.testDecommissionFederation fails intermittently

2012-04-18 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-2884:
--

Target Version/s: 2.0.0, 3.0.0  (was: 0.23.1)

> TestDecommission.testDecommissionFederation fails intermittently
> 
>
> Key: HDFS-2884
> URL: https://issues.apache.org/jira/browse/HDFS-2884
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node
>Affects Versions: 0.23.1
>Reporter: Eli Collins
>
> I saw the following assert fail on a jenkins job for branch HDFS-1623 but I 
> don't think it's HA related.
>  
> {noformat}
> java.lang.AssertionError: Number of Datanodes  expected:<2> but was:<1>
>   at org.junit.Assert.fail(Assert.java:91)
>   at org.junit.Assert.failNotEquals(Assert.java:645)
>   at org.junit.Assert.assertEquals(Assert.java:126)
>   at org.junit.Assert.assertEquals(Assert.java:470)
>   at 
> org.apache.hadoop.hdfs.TestDecommission.validateCluster(TestDecommission.java:275)
>   at 
> org.apache.hadoop.hdfs.TestDecommission.startCluster(TestDecommission.java:288)
>   at 
> org.apache.hadoop.hdfs.TestDecommission.testDecommission(TestDecommission.java:384)
>   at 
> org.apache.hadoop.hdfs.TestDecommission.testDecommissionFederation(TestDecommission.java:344)
> {noformat}





[jira] [Updated] (HDFS-2936) Provide a better way to specify an HDFS-wide minimum replication requirement

2012-04-18 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-2936:
--

Target Version/s: 2.0.0, 3.0.0  (was: 0.23.2, 0.24.0)

> Provide a better way to specify an HDFS-wide minimum replication requirement
> ---
>
> Key: HDFS-2936
> URL: https://issues.apache.org/jira/browse/HDFS-2936
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Affects Versions: 0.23.0
>Reporter: Harsh J
>Assignee: Harsh J
>
> Currently, if an admin would like to enforce a replication factor for all 
> files on his HDFS, he does not have a way. He may arguably set 
> dfs.replication.min but that is a very hard guarantee and if the pipeline 
> can't afford that number for some reason/failure, the close() does not 
> succeed on the file being written and leads to several issues.
> After discussing with Todd, we feel it would make sense to introduce a second 
> config (which is ${dfs.replication.min} by default) which would act as a 
> minimum specified replication for files. This is different from 
> dfs.replication.min, which also ensures that many replicas are recorded before 
> completeFile() returns... perhaps something like ${dfs.replication.min.user}. 
> Alternatively, we can leave dfs.replication.min alone for hard guarantees and 
> add ${dfs.replication.min.for.block.completion}, which could be left at 1 even 
> if dfs.replication.min is >1, letting files complete normally while still 
> tracking those with a low replication factor (so they can be monitored and 
> accounted for later).
> I'm preferring the second option myself. Will post a patch with tests soon.
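Under the second option sketched above, the configuration might look as follows. Note that `dfs.replication.min.for.block.completion` is the key name proposed in this issue, not a shipped Hadoop property:

```xml
<!-- Enforce 3 replicas as the cluster-wide minimum, but let completeFile()
     succeed once a single replica is recorded. The second key is the
     proposal from this issue, not an existing property. -->
<property>
  <name>dfs.replication.min</name>
  <value>3</value>
</property>
<property>
  <name>dfs.replication.min.for.block.completion</name>
  <value>1</value>
</property>
```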





[jira] [Updated] (HDFS-2148) Address all the federation TODOs

2012-04-18 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-2148:
--

Target Version/s: 2.0.0, 3.0.0  (was: 0.23.1)

> Address all the federation TODOs
> 
>
> Key: HDFS-2148
> URL: https://issues.apache.org/jira/browse/HDFS-2148
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: name-node
>Reporter: Eli Collins
>Assignee: Suresh Srinivas
>
> The federation merge introduced a bunch of todos marked "TODO:FEDERATION" and 
> "TODO".  We should have jiras for these and address the issues.





[jira] [Updated] (HDFS-2930) bin/hdfs should print a message when an invalid command is specified

2012-04-18 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-2930:
--

Target Version/s: 2.0.0, 3.0.0  (was: 0.23.2)

> bin/hdfs should print a message when an invalid command is specified
> 
>
> Key: HDFS-2930
> URL: https://issues.apache.org/jira/browse/HDFS-2930
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 0.23.1
>Reporter: Eli Collins
>  Labels: newbie
> Attachments: HDFS-2930.patch
>
>
> hdfs currently gives a NoClassDefFoundError stack trace if there is a typo 
> in the specified command.
> {noformat}
> hadoop-0.24.0-SNAPSHOT $ ./bin/hdfs dfadmin
> Exception in thread "main" java.lang.NoClassDefFoundError: dfadmin
> Caused by: java.lang.ClassNotFoundException: dfadmin
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
> {noformat}
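A fix could be as simple as validating the subcommand in bin/hdfs before handing it to the JVM. A sketch (the command list here is abbreviated and illustrative, not the full set the real script supports):

```shell
# Sketch of a subcommand guard for bin/hdfs; the command list is
# abbreviated and illustrative.
VALID_COMMANDS="namenode secondarynamenode datanode dfs dfsadmin fsck balancer haadmin"

check_command() {
  for c in $VALID_COMMANDS; do
    if [ "$1" = "$c" ]; then
      return 0
    fi
  done
  # Unknown subcommand: print a usage hint instead of a Java stack trace.
  echo "Error: '$1' is not a valid hdfs command. Valid commands are:" >&2
  echo "  $VALID_COMMANDS" >&2
  return 1
}

# In the real script this would run before exec'ing java:
#   check_command "$COMMAND" || exit 1
```

With such a guard, `./bin/hdfs dfadmin` would print a usage message and exit nonzero instead of failing with NoClassDefFoundError.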





[jira] [Updated] (HDFS-2885) Remove "federation" from the nameservice config options

2012-04-18 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-2885:
--

Target Version/s: 2.0.0, 3.0.0  (was: 0.23.1)

> Remove "federation" from the nameservice config options
> ---
>
> Key: HDFS-2885
> URL: https://issues.apache.org/jira/browse/HDFS-2885
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 0.23.1
>Reporter: Eli Collins
>
> HDFS-1623, and potentially other HDFS features, will use the nameservice 
> abstraction even if federation is not enabled (e.g., in HA you need to 
> configure {{dfs.federation.nameservices}} just to declare your nameservice, 
> even if you're not using federation). This is confusing to users. We should 
> consider deprecating and removing "federation" from the 
> {{dfs.federation.nameservices}} and {{dfs.federation.nameservice.id}} config 
> options, as {{dfs.nameservices}} and {{dfs.nameservice.id}} are more 
> intuitive.
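If the rename goes ahead, a migration in hdfs-site.xml might look like the following (the new key names are the proposal in this issue, not a shipped API):

```xml
<!-- Deprecated (current) form -->
<property>
  <name>dfs.federation.nameservices</name>
  <value>ns1</value>
</property>

<!-- Proposed form: same meaning, no "federation" in the key -->
<property>
  <name>dfs.nameservices</name>
  <value>ns1</value>
</property>
```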





[jira] [Updated] (HDFS-2911) Gracefully handle OutOfMemoryErrors

2012-04-18 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-2911:
--

Target Version/s: 2.0.0, 3.0.0  (was: 0.23.1)

> Gracefully handle OutOfMemoryErrors
> ---
>
> Key: HDFS-2911
> URL: https://issues.apache.org/jira/browse/HDFS-2911
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node, name-node
>Affects Versions: 0.23.0, 1.0.0
>Reporter: Eli Collins
>Assignee: Eli Collins
>
> We should gracefully handle j.l.OutOfMemoryError exceptions in the NN or DN. 
> We should catch them in a high-level handler, cleanly fail the RPC (vs 
> sending back the OOM stack trace) or background thread, and shut down the NN 
> or DN. Currently the process is left in a poorly tested state (it 
> continuously fails RPCs and internal threads, may or may not recover, and 
> doesn't shut down gracefully).
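One possible shape for such a high-level handler (a sketch of the idea, not the eventual patch): install a default uncaught-exception handler that recognizes an OutOfMemoryError anywhere in the cause chain and halts immediately rather than limping along.

```java
public class OomGuard {
    /** True if t, or any of its causes, is an OutOfMemoryError. */
    static boolean isOom(Throwable t) {
        for (; t != null; t = t.getCause()) {
            if (t instanceof OutOfMemoryError) {
                return true;
            }
        }
        return false;
    }

    /**
     * Install a last-resort handler. After an OOM the heap is suspect,
     * so skip cleanup and halt the JVM with a nonzero status instead of
     * letting threads keep failing indefinitely.
     */
    public static void install() {
        Thread.setDefaultUncaughtExceptionHandler((thread, err) -> {
            if (isOom(err)) {
                Runtime.getRuntime().halt(1);
            }
        });
    }
}
```

RPC server code would additionally need to catch the error per-call to fail the RPC cleanly before the process exits; the handler above only covers the "shut down instead of lingering" part.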





[jira] [Updated] (HDFS-3115) Update hdfs design doc to consider HA NNs

2012-04-18 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-3115:
--

Target Version/s: 2.0.0, 3.0.0  (was: 0.24.0, 0.23.1)

> Update hdfs design doc to consider HA NNs
> -
>
> Key: HDFS-3115
> URL: https://issues.apache.org/jira/browse/HDFS-3115
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.24.0, 0.23.3
>Reporter: Todd Lipcon
>Priority: Minor
>
> The hdfs_design_doc.xml still references the NN as an SPOF, which is no 
> longer true. We should sweep docs for anything else that seems to be out of 
> date with HA.





[jira] [Updated] (HDFS-2998) OfflineImageViewer and ImageVisitor should be annotated public

2012-04-18 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-2998:
--

Target Version/s: 2.0.0, 3.0.0  (was: 0.23.2)

> OfflineImageViewer and ImageVisitor should be annotated public
> --
>
> Key: HDFS-2998
> URL: https://issues.apache.org/jira/browse/HDFS-2998
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 0.23.1
>Reporter: Aaron T. Myers
>
> The OfflineImageViewer is currently annotated as InterfaceAudience.Private. 
> It's intended for subclassing, so it should be annotated as the public API 
> that it is.
> The ImageVisitor class should similarly be annotated public (evolving is 
> fine). Note that it should also be changed to be public; it's currently 
> package-private, which means that users have to cheat with their subclass 
> package name.





[jira] [Updated] (HDFS-2896) The 2NN incorrectly daemonizes

2012-04-18 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-2896:
--

Target Version/s: 2.0.0, 3.0.0  (was: 0.23.1)

> The 2NN incorrectly daemonizes
> --
>
> Key: HDFS-2896
> URL: https://issues.apache.org/jira/browse/HDFS-2896
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Eli Collins
>Assignee: Eli Collins
>
> The SecondaryNameNode (and Checkpointer) confuse o.a.h.u.Daemon with a Unix 
> daemon. Per below, it intends to create a thread that never ends, but 
> o.a.h.u.Daemon just marks a thread with Java's Thread#setDaemon, which means 
> Java will terminate the thread when there are no more non-daemon user 
> threads running.
> {code}
> // Create a never ending deamon
> Daemon checkpointThread = new Daemon(secondary);
> {code}
> Perhaps they thought they were using commons Daemon. We of course don't want 
> the 2NN to exit unless it exits itself or is stopped explicitly. Currently it 
> won't do this because the main thread is not marked as a daemon thread. In 
> any case, let's make the 2NN consistent with the NN in this regard (exit when 
> the RPC thread exits).
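For readers unfamiliar with the distinction: o.a.h.u.Daemon is, in essence, just a Thread marked daemon via setDaemon(true) (a simplified stand-in below; the real class also does things like set the thread name), and the JVM exits once only daemon threads remain.

```java
// Simplified stand-in for org.apache.hadoop.util.Daemon: the only
// semantic it adds is Thread#setDaemon(true), which tells the JVM this
// thread should NOT keep the process alive.
class Daemon extends Thread {
    Daemon(Runnable r) {
        super(r);
        setDaemon(true);
    }
}
```

So wrapping the checkpoint loop in Daemon gives the opposite of a "never ending" thread: the moment the last non-daemon thread finishes, the JVM tears the checkpointer down with it.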





[jira] [Updated] (HDFS-2075) Add "Number of Reporting Nodes" to namenode web UI

2012-04-18 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-2075:
--

Target Version/s: 2.0.0, 3.0.0  (was: 0.23.1, 0.24.0)

> Add "Number of Reporting Nodes" to namenode web UI
> --
>
> Key: HDFS-2075
> URL: https://issues.apache.org/jira/browse/HDFS-2075
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node, tools
>Affects Versions: 0.20.1, 0.20.2
>Reporter: Xing Jin
>Priority: Minor
>  Labels: newbie
> Fix For: 0.20.3
>
> Attachments: HDFS-2075.patch.txt
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The namenode web UI is missing some information when safemode is on (e.g., 
> the number of reporting nodes). This information would help us understand 
> the system status.





[jira] [Updated] (HDFS-891) DataNode.instantiateDataNode calls system.exit(-1) if conf.get("dfs.network.script") != null

2012-04-18 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-891:
-

Target Version/s: 2.0.0, 3.0.0  (was: 0.23.1, 0.24.0)

> DataNode.instantiateDataNode calls system.exit(-1) if 
> conf.get("dfs.network.script") != null
> 
>
> Key: HDFS-891
> URL: https://issues.apache.org/jira/browse/HDFS-891
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node
>Affects Versions: 0.22.0
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HDFS-891.patch
>
>
> Looking at the code for {{DataNode.instantiateDataNode())} , I see that it 
> calls {{system.exit(-1)}} if it is not happy with the configuration
> {code}
> if (conf.get("dfs.network.script") != null) {
>   LOG.error("This configuration for rack identification is not supported" 
> +
>   " anymore. RackID resolution is handled by the NameNode.");
>   System.exit(-1);
> }
> {code}
> This is excessive. It should throw an exception and let whoever called the 
> method decide how to handle it. The {{DataNode.main()}} method will log the 
> exception and exit with a -1 value, but other callers (such as anything 
> using {{MiniDFSCluster}}) will then see a meaningful message rather than 
> some JUnit "tests exited without completing" warning. 
> It is easy to write a test for the correct behaviour: start a 
> {{MiniDFSCluster}} with this configuration set and see what happens.
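A sketch of the suggested fix (the configuration here is a plain map standing in for Hadoop's Configuration class, purely for illustration): validate and throw, leaving the exit decision to main().

```java
import java.util.Map;

public class RackScriptCheck {
    /**
     * Throws instead of calling System.exit(-1), so callers such as
     * MiniDFSCluster-based tests see a meaningful exception while
     * DataNode.main() can still log it and exit nonzero.
     */
    static void validate(Map<String, String> conf) {
        if (conf.get("dfs.network.script") != null) {
            throw new IllegalArgumentException(
                "This configuration for rack identification is not supported"
                + " anymore. RackID resolution is handled by the NameNode.");
        }
    }
}
```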





[jira] [Updated] (HDFS-2380) Security downgrade of token validation

2011-09-28 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-2380:
--

Description: 
HADOOP-7119 introduced the {{KerberosAuthenticationHandler}} for web services.  
It appears to have been merged into 205 to support webhdfs.

Prior to HADOOP-7119, the web service used by hftp/hsftp would validate tokens 
using long kerberos user names.  Now the realm is truncated from the user name 
which caused hftp/hsftp to break.  The {{JspHelper}} in the namenode rejected 
the token validation due to the mismatched comparison between a now short user 
(from the web service) and a long user (in the token).  Subsequently, HDFS-2361 
changed {{JspHelper}} to use the token's short user when comparing against the 
now short web user.

The security ramification is that it now appears to be easier to spoof other 
users and access their files.  Based on commentary in HDFS-2361, the case can 
be made that other parts of hadoop are insecure with respect to user names, so 
it doesn't matter that security has been further downgraded.  I don't know if 
this is true, or whether higher layers effectively guard against lower-level 
insecurities.  In any case, this logic makes me uneasy, especially when it 
comes to changing the security of a "front door" to hadoop.

Is there a technical reason why {{KerberosAuthenticationHandler}} should not be 
changed (1-liner) to return the long user name?  This would allow HDFS-2361 to 
be reverted and return the former level of security to token validation.

  was:
HADOOP-7119 introduced the {{KerberosAuthenticationHandler}} for web services.  
It appears to have been merged into 205 to support webhdfs.

Prior to HADOOP-7119, the web service used by hftp/hsftp would validate tokens 
using long kerberos user names.  Now the realm is truncated from the user name 
which caused hftp/hsftp to break.  The {{JspHelper}} in the namenode rejected 
the token validation due to the mismatched comparison between a now short user 
(from the web service) and a long user (in the token).  Subsequently, HDFS-2361 
changed {{JspHelper}} to use the token's short user when comparing against the 
now short web user.

The security ramification is it now appears to be easier to spoof other users 
and access their files.  Based on commentary in HDFS-2361, the case can be made 
that other parts of hadoop are insecure with respect to user names, so it 
doesn't matter that security has been further downgraded.  I don't have know 
knowledge to know if this true, or whether higher layers effectively guard 
against lower level insecurities.  In any case, this logic makes me uneasy, 
especially when it comes to changing the security of a "front door" to hadoop.

Is there a technical reason why {{KerberosAuthenticationHandler}} should not be 
changed (1-liner) to return the long user name?  This would allow HDFS-2361 to 
be reverted and return the former level of security to token validation.


> Security downgrade of token validation
> --
>
> Key: HDFS-2380
> URL: https://issues.apache.org/jira/browse/HDFS-2380
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.20.205.0, 0.23.0, 0.24.0
>Reporter: Daryn Sharp
>
> HADOOP-7119 introduced the {{KerberosAuthenticationHandler}} for web 
> services.  It appears to have been merged into 205 to support webhdfs.
> Prior to HADOOP-7119, the web service used by hftp/hsftp would validate 
> tokens using long kerberos user names.  Now the realm is truncated from the 
> user name which caused hftp/hsftp to break.  The {{JspHelper}} in the 
> namenode rejected the token validation due to the mismatched comparison 
> between a now short user (from the web service) and a long user (in the 
> token).  Subsequently, HDFS-2361 changed {{JspHelper}} to use the token's 
> short user when comparing against the now short web user.
> The security ramification is that it now appears to be easier to spoof other 
> users and access their files.  Based on commentary in HDFS-2361, the case 
> can be made that other parts of hadoop are insecure with respect to user 
> names, so it doesn't matter that security has been further downgraded.  I 
> don't know if this is true, or whether higher layers effectively guard 
> against lower-level insecurities.  In any case, this logic makes me uneasy, 
> especially when it comes to changing the security of a "front door" to 
> hadoop.
> Is there a technical reason why {{KerberosAuthenticationHandler}} should not 
> be changed (1-liner) to return the long user name?  This would allow 
> HDFS-2361 to be reverted and return the former level of security to token 
> validation.
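To make the short/long distinction concrete (an illustration only; Hadoop's real mapping goes through Kerberos auth_to_local rules, not a plain string split): a long principal name carries the realm, and truncation collapses distinct principals onto one short name, which is what weakens the token comparison.

```java
public class KerberosNames {
    /**
     * Naive short-name extraction: drop everything from '@' on.
     * Illustration only; real Hadoop applies auth_to_local rules.
     */
    static String shortName(String principal) {
        int at = principal.indexOf('@');
        return at < 0 ? principal : principal.substring(0, at);
    }
}
```

Under a short-name comparison, alice@PROD.EXAMPLE and alice@LAB.EXAMPLE look identical, whereas the pre-HADOOP-7119 long-name comparison kept them distinct.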


[jira] [Updated] (HDFS-2414) TestDFSRollback fails intermittently

2011-10-07 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-2414:
--

Attachment: run-158-failed.tgz
run-106-failed.tgz

Here are the logs: run-106 corresponds to the first failure (the diff failure) 
and run-158 to the second failure.

> TestDFSRollback fails intermittently
> 
>
> Key: HDFS-2414
> URL: https://issues.apache.org/jira/browse/HDFS-2414
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node, test
>Affects Versions: 0.23.0
>Reporter: Robert Joseph Evans
>Priority: Critical
> Attachments: run-106-failed.tgz, run-158-failed.tgz
>
>
> When running TestDFSRollback repeatedly in a loop I observed a failure rate 
> of about 3%.  Two separate stack traces are in the output and it appears to 
> have something to do with not writing out a complete snapshot of the data for 
> rollback.
> {noformat}
> ---
> Test set: org.apache.hadoop.hdfs.TestDFSRollback
> ---
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 8.514 sec <<< 
> FAILURE!
> testRollback(org.apache.hadoop.hdfs.TestDFSRollback)  Time elapsed: 8.34 sec  
> <<< FAILURE!
> java.lang.AssertionError: File contents differed:
>   
> /home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data2/current/VERSION=5b19197114fad0a254e3f318b7f14aec
>   
> /home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data1/current/VERSION=ea7b000a6a1711169fc7a836b240a991
> at org.junit.Assert.fail(Assert.java:91)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.assertFileContentsSame(FSImageTestUtil.java:250)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.assertParallelFilesAreIdentical(FSImageTestUtil.java:236)
> at 
> org.apache.hadoop.hdfs.TestDFSRollback.checkResult(TestDFSRollback.java:86)
> at 
> org.apache.hadoop.hdfs.TestDFSRollback.testRollback(TestDFSRollback.java:171)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at junit.framework.TestCase.runTest(TestCase.java:168)
> at junit.framework.TestCase.runBare(TestCase.java:134)
> at junit.framework.TestResult$1.protect(TestResult.java:110)
> at junit.framework.TestResult.runProtected(TestResult.java:128)
> at junit.framework.TestResult.run(TestResult.java:113)
> at junit.framework.TestCase.run(TestCase.java:124)
> at junit.framework.TestSuite.runTest(TestSuite.java:232)
> at junit.framework.TestSuite.run(TestSuite.java:227)
> at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
> at 
> org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:59)
> at 
> org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.executeTestSet(AbstractDirectoryTestSuite.java:120)
> at 
> org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.execute(AbstractDirectoryTestSuite.java:145)
> at org.apache.maven.surefire.Surefire.run(Surefire.java:104)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.maven.surefire.booter.SurefireBooter.runSuitesInProcess(SurefireBooter.java:290)
> at 
> org.apache.maven.surefire.booter.SurefireBooter.main(SurefireBooter.java:1017)
> {noformat}
> is the more common one, but I also saw
> {noformat}
> ---
> Test set: org.apache.hadoop.hdfs.TestDFSRollback
> ---
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 7.471 sec <<< 
> FAILURE!
> testRollback(org.apache.hadoop.hdfs.TestDFSRollback)  Time elapsed: 7.304 sec 
>  <<< FAILURE!
> junit.framework.AssertionFailedError: Expected substring 'file VERSION has 
> layoutVersion missing' in exception but got: 
> java.lang.IllegalArgumentException: Malformed \u encoding.
> at java.util.Properties.loadConvert(Properties.java:552)
> 

[jira] [Updated] (HDFS-2433) TestFileAppend4 fails intermittently

2011-10-11 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-2433:
--

Attachment: failed.tar.bz2

The complete set of logs for all of the failures. (It is rather large.)

> TestFileAppend4 fails intermittently
> 
>
> Key: HDFS-2433
> URL: https://issues.apache.org/jira/browse/HDFS-2433
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node, name-node, test
>Affects Versions: 0.20.205.0, 0.20.205.1
>Reporter: Robert Joseph Evans
>Priority: Critical
> Attachments: failed.tar.bz2
>
>
> A Jenkins build we have running failed twice in a row with issues from 
> TestFileAppend4.testAppendSyncReplication1. In an attempt to reproduce the 
> error, I ran TestFileAppend4 in a loop overnight, saving the results away. 
> (No clean was done between test runs.)
> When TestFileAppend4 is run in a loop, the testAppendSyncReplication[012] 
> tests fail about 10% of the time (14 times out of 130 tries). They all fail 
> with something like the following.  Often it is only one of the tests that 
> fails, but I have seen as many as two fail in one run.
> {noformat}
> Testcase: testAppendSyncReplication2 took 32.198 sec
> FAILED
> Should have 2 replicas for that block, not 1
> junit.framework.AssertionFailedError: Should have 2 replicas for that block, 
> not 1
> at 
> org.apache.hadoop.hdfs.TestFileAppend4.replicationTest(TestFileAppend4.java:477)
> at 
> org.apache.hadoop.hdfs.TestFileAppend4.testAppendSyncReplication2(TestFileAppend4.java:425)
> {noformat}
> I also saw several other tests that are a part of TestFileAppend4 fail 
> during this experiment.  They may all be related to one another, so I am 
> filing them in the same JIRA.  If it turns out that they are not related, 
> they can be split up later.
> testAppendSyncBlockPlusBbw failed 6 out of the 130 times or about 5% of the 
> time
> {noformat}
> Testcase: testAppendSyncBlockPlusBbw took 1.633 sec
> FAILED
> unexpected file size! received=0 , expected=1024
> junit.framework.AssertionFailedError: unexpected file size! received=0 , 
> expected=1024
> at 
> org.apache.hadoop.hdfs.TestFileAppend4.assertFileSize(TestFileAppend4.java:136)
> at 
> org.apache.hadoop.hdfs.TestFileAppend4.testAppendSyncBlockPlusBbw(TestFileAppend4.java:401)
> {noformat}
> testAppendSyncChecksum[012] failed 2 out of the 130 times or about 1.5% of 
> the time
> {noformat}
> Testcase: testAppendSyncChecksum1 took 32.385 sec
> FAILED
> Should have 1 replica for that block, not 2
> junit.framework.AssertionFailedError: Should have 1 replica for that block, 
> not 2
> at 
> org.apache.hadoop.hdfs.TestFileAppend4.checksumTest(TestFileAppend4.java:556)
> at 
> org.apache.hadoop.hdfs.TestFileAppend4.testAppendSyncChecksum1(TestFileAppend4.java:500)
> {noformat}
> I will attach logs for all of the failures.  Be aware that I did change some 
> of the logging messages in this test so I could better see when 
> testAppendSyncReplication started and ended.  Other than that, the code is 
> stock 0.20.205 RC2.





[jira] [Updated] (HDFS-2785) Update webhdfs and httpfs for host-based token support

2012-01-24 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-2785:
--

Attachment: HDFS-2785.txt

There are no tests included with this patch because, in reality, none of the 
real functionality should have changed.  Just three things were modified:

# The URI returned by getUri is no longer based on the InetAddress for the 
NameNode, because the host is resolved in the InetAddress and not in the 
original URI.
# The code to get the InetAddress from a token goes directly to SecurityUtil 
and not through NetUtils. This makes the token more opaque, as it was 
originally designed to be.
# The TGT renewal no longer checks for expiry of the TGT.  This change is only 
to make it consistent with HFTP and is not strictly needed; it can be removed 
if reviewers want it removed.

> Update webhdfs and httpfs for host-based token support
> --
>
> Key: HDFS-2785
> URL: https://issues.apache.org/jira/browse/HDFS-2785
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: name-node, security
>Affects Versions: 0.24.0, 0.23.1
>Reporter: Daryn Sharp
>Assignee: Robert Joseph Evans
> Attachments: HDFS-2785.txt
>
>
> Need to port 205 tokens into these filesystems.  Will mainly involve ensuring 
> code duplicated from hftp is updated accordingly.





[jira] [Updated] (HDFS-2785) Update webhdfs and httpfs for host-based token support

2012-01-24 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-2785:
--

Target Version/s: 0.24.0, 0.23.1  (was: 0.23.1, 0.24.0)
  Status: Patch Available  (was: Open)

> Update webhdfs and httpfs for host-based token support
> --
>
> Key: HDFS-2785
> URL: https://issues.apache.org/jira/browse/HDFS-2785
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: name-node, security
>Affects Versions: 0.24.0, 0.23.1
>Reporter: Daryn Sharp
>Assignee: Robert Joseph Evans
> Attachments: HDFS-2785.txt
>
>
> Need to port 205 tokens into these filesystems.  Will mainly involve ensuring 
> code duplicated from hftp is updated accordingly.





[jira] [Updated] (HDFS-2836) HttpFSServer still has 2 javadoc warnings in trunk

2012-01-24 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-2836:
--

Status: Patch Available  (was: Open)

> HttpFSServer still has 2 javadoc warnings in trunk
> --
>
> Key: HDFS-2836
> URL: https://issues.apache.org/jira/browse/HDFS-2836
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.24.0
>Reporter: Robert Joseph Evans
>Assignee: Robert Joseph Evans
> Attachments: HDFS-2836.txt
>
>
> {noformat}
> [WARNING] 
> hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java:241:
>  warning - @param argument "override," is not a parameter name.
> [WARNING] 
> hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java:450:
>  warning - @param argument "override," is not a parameter name.
> {noformat}
> These are causing other patches to get a -1 in automated testing.
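The warning text suggests the cause is a stray comma after the @param name: javadoc takes "override," literally as the parameter name and finds no declared parameter matching it. A minimal before/after on a hypothetical method (the real signatures live in HttpFSServer.java):

```java
class ParamDocExample {
    // Before (warns): "@param override, whether to replace the destination"
    //   -> javadoc reads the parameter name as "override," (with comma).
    // After (clean): the tag's first token matches the parameter exactly.
    /**
     * @param override whether to replace the destination if it exists
     * @return the value passed in, echoed for this toy example
     */
    boolean create(boolean override) {
        return override;
    }
}
```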





[jira] [Updated] (HDFS-2836) HttpFSServer still has 2 javadoc warnings in trunk

2012-01-24 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-2836:
--

Attachment: HDFS-2836.txt

> HttpFSServer still has 2 javadoc warnings in trunk
> --
>
> Key: HDFS-2836
> URL: https://issues.apache.org/jira/browse/HDFS-2836
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.24.0
>Reporter: Robert Joseph Evans
>Assignee: Robert Joseph Evans
> Attachments: HDFS-2836.txt
>
>
> {noformat}
> [WARNING] 
> hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java:241:
>  warning - @param argument "override," is not a parameter name.
> [WARNING] 
> hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java:450:
>  warning - @param argument "override," is not a parameter name.
> {noformat}
> These are causing other patches to get a -1 in automated testing.





[jira] [Updated] (HDFS-2837) mvn javadoc:javadoc not seeing LimitedPrivate class

2012-01-24 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-2837:
--

Attachment: HDFS-2837.txt

This patch also sets the number of acceptable javadoc warnings in HDFS to 0.  
This assumes that HDFS-2836 is also applied, as it fixes two javadoc issues as 
well.

> mvn javadoc:javadoc not seeing LimitedPrivate class 
> 
>
> Key: HDFS-2837
> URL: https://issues.apache.org/jira/browse/HDFS-2837
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.24.0
>Reporter: Robert Joseph Evans
>Assignee: Robert Joseph Evans
> Attachments: HDFS-2837.txt
>
>
> mvn javadoc:javadoc not seeing LimitedPrivate class 
> {noformat}
> [WARNING] 
> org/apache/hadoop/fs/FileSystem.class(org/apache/hadoop/fs:FileSystem.class): 
> warning: Cannot find annotation method 'value()' in type 
> 'org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate': class 
> file for org.apache.hadoop.classification.InterfaceAudience not found
> [WARNING] 
> org/apache/hadoop/fs/FileSystem.class(org/apache/hadoop/fs:FileSystem.class): 
> warning: Cannot find annotation method 'value()' in type 
> 'org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate'
> [WARNING] 
> org/apache/hadoop/fs/FileSystem.class(org/apache/hadoop/fs:FileSystem.class): 
> warning: Cannot find annotation method 'value()' in type 
> 'org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate'
> [WARNING] 
> org/apache/hadoop/security/Groups.class(org/apache/hadoop/security:Groups.class):
>  warning: Cannot find annotation method 'value()' in type 
> 'org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate'
> [WARNING] 

[jira] [Updated] (HDFS-2837) mvn javadoc:javadoc not seeing LimitedPrivate class

2012-01-24 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-2837:
--

Target Version/s: 0.24.0
  Status: Patch Available  (was: Open)

> mvn javadoc:javadoc not seeing LimitedPrivate class 
> 
>
> Key: HDFS-2837
> URL: https://issues.apache.org/jira/browse/HDFS-2837
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.24.0
>Reporter: Robert Joseph Evans
>Assignee: Robert Joseph Evans
> Attachments: HDFS-2837.txt
>
>
> mvn javadoc:javadoc not seeing LimitedPrivate class 
> {noformat}
> [WARNING] 
> org/apache/hadoop/fs/FileSystem.class(org/apache/hadoop/fs:FileSystem.class): 
> warning: Cannot find annotation method 'value()' in type 
> 'org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate': class 
> file for org.apache.hadoop.classification.InterfaceAudience not found
> [WARNING] 
> org/apache/hadoop/fs/FileSystem.class(org/apache/hadoop/fs:FileSystem.class): 
> warning: Cannot find annotation method 'value()' in type 
> 'org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate'
> [WARNING] 
> org/apache/hadoop/fs/FileSystem.class(org/apache/hadoop/fs:FileSystem.class): 
> warning: Cannot find annotation method 'value()' in type 
> 'org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate'
> [WARNING] 
> org/apache/hadoop/security/Groups.class(org/apache/hadoop/security:Groups.class):
>  warning: Cannot find annotation method 'value()' in type 
> 'org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate'
> [WARNING] 
> org/apache/hadoop/security/UserGroupInformation.class(org/apache/hadoop/security:UserGroupInformation.class):
>  warning: Cannot find annotation method 'value()' in type 
> 'org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate'
> [WARNING] 
> org/apache/hadoop/security/UserGroupInformation.class(org/apache/hadoop/security:UserGroupInformation.class):
>  warning: Cannot find annotation method 'value()' in type 
> 'org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate'
> [WARNING] 
> org/apache/hadoop/security/UserGroupInformation.class(org/apache/hadoop/security:UserGroupInformation.class):
>  warning: Cannot find annotation method 'value()' in type 
> 'org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate'
> [WARNING] 
> org/apache/hadoop/fs/FSDataInputStream.class(org/apache/hadoop/fs:FSDataInputStream.class):
>  warning: Cannot find annotation method 'value()' in type 
> 'org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate'
> [WARNING] 
> org/apache/hadoop/fs/FSDataOutputStream.class(org/apache/hadoop/fs:FSDataOutputStream.class):
>  warning: Cannot find annotation method 'value()' in type 
> 'org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate'
> [WARNING] org/apache/hadoop/fs/Path.class(org/apache/hadoop/fs:Path.class): 
> warning: Cannot find annotation method 'value()' in type 
> 'org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate'
> [WARNING] 
> org/apache/hadoop/fs/UnresolvedLinkException.class(org/apache/hadoop/fs:UnresolvedLinkException.class):
>  warning: Cannot find annotation method 'value()' in type 
> 'org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate'
> [WARNING] 
> org/apache/hadoop/fs/MD5MD5CRC32FileChecksum.class(org/apache/hadoop/fs:MD5MD5CRC32FileChecksum.class):
>  warning: Cannot find annotation method 'value()' in type 
> 'org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate'
> [WARNING] 
> org/apache/hadoop/fs/LocalDirAllocator.class(org/apache/hadoop/fs:LocalDirAllocator.class):
>  warning: Cannot find annotation method 'value()' in type 
> 'org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate'
> [WARNING] 
> org/apache/hadoop/fs/FileContext.class(org/apache/hadoop/fs:FileContext.class):
>  warning: Cannot find annotation method 'value()' in type 
> 'org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate'
> [WARNING] 
> org/apache/hadoop/fs/FileContext.class(org/apache/hadoop/fs:FileContext.class):
>  warning: Cannot find annotation method 'value()' in type 
> 'org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate'
> [WARNING] 
> org/apache/hadoop/fs/FSOutputSummer.class(org/apache/hadoop/fs:FSOutputSummer.class):
>  warning: Cannot find annotation method 'value()' in type 
> 'org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate'
> [WARNING] 
> org/apache/hadoop/fs/FSInputStream.class(org/apache/hadoop/fs:FSInputStream.class):
>  warning: Cannot find annotation method 'value()' in type 
> 'org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate'
> [WARNING] 
> org/apache/hadoop/fs/FSInputChecker.class(org/apache/hadoop/fs:FSInputChecker.class):
>  warning: Cannot find annotation method 'value()' in type 
> 'org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate'
> {noformat}
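The repeated warnings above arise because javadoc is reading compiled class files whose annotations reference `org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate`, but the hadoop-annotations classes are not on javadoc's classpath, so the `value()` member cannot be resolved. A minimal, self-contained sketch of the annotation shape involved (a hypothetical stand-in, not the actual Hadoop source):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class AnnotationDemo {
    // Stand-in mirroring the shape of InterfaceAudience.LimitedPrivate:
    // an annotation whose single member is 'value()'. Tools like javadoc
    // can only resolve 'value()' if this annotation's class file is on
    // their classpath; otherwise they emit the warning seen above.
    @Retention(RetentionPolicy.RUNTIME)
    @interface LimitedPrivate {
        String[] value();
    }

    @LimitedPrivate({"HDFS", "MapReduce"})
    static class Consumer {}

    public static void main(String[] args) {
        // With the annotation's class file present, value() resolves normally.
        LimitedPrivate ann = Consumer.class.getAnnotation(LimitedPrivate.class);
        System.out.println(String.join(",", ann.value()));
    }
}
```

This is why the usual remedy for this class of warning is a build-level fix (making the annotations artifact visible to the javadoc plugin) rather than a source change.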

[jira] [Updated] (HDFS-2835) Fix org.apache.hadoop.hdfs.tools.GetConf$Command Findbug issue

2012-01-26 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-2835:
--

Attachment: HDFS-2835.txt

> Fix org.apache.hadoop.hdfs.tools.GetConf$Command Findbug issue
> --
>
> Key: HDFS-2835
> URL: https://issues.apache.org/jira/browse/HDFS-2835
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.24.0
>Reporter: Robert Joseph Evans
>Assignee: Robert Joseph Evans
> Attachments: HDFS-2835.txt
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/1804//artifact/trunk/hadoop-hdfs-project/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
>  shows a findbugs warning.  It is unrelated to the patch being tested, and 
> has shown up on a few other JIRAS as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2835) Fix org.apache.hadoop.hdfs.tools.GetConf$Command Findbug issue

2012-01-26 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-2835:
--

Status: Patch Available  (was: Open)

> Fix org.apache.hadoop.hdfs.tools.GetConf$Command Findbug issue
> --
>
> Key: HDFS-2835
> URL: https://issues.apache.org/jira/browse/HDFS-2835
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.24.0
>Reporter: Robert Joseph Evans
>Assignee: Robert Joseph Evans
> Attachments: HDFS-2835.txt
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/1804//artifact/trunk/hadoop-hdfs-project/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
>  shows a findbugs warning.  It is unrelated to the patch being tested, and 
> has shown up on a few other JIRAS as well.





[jira] [Updated] (HDFS-2867) org.apache.hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations fails intermittently

2012-01-31 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-2867:
--

Attachment: 
TEST-org.apache.hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations.xml

org.apache.hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations.txt

org.apache.hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations-output.txt

> org.apache.hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations 
> fails intermittently
> -
>
> Key: HDFS-2867
> URL: https://issues.apache.org/jira/browse/HDFS-2867
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.24.0
>Reporter: Robert Joseph Evans
> Attachments: 
> TEST-org.apache.hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations.xml,
>  
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations-output.txt,
>  org.apache.hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations.txt
>
>
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations 
> fails intermittently.  It appears to be caused by a previous test, or an 
> earlier part of this test, not shutting down properly.
> {noformat}
> Tests run: 4, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 3.994 sec <<< 
> FAILURE!
> test2NNRegistration(org.apache.hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations)
>   Time elapsed: 1.775 sec  <<< ERROR!
> java.net.BindException: Problem binding to [localhost.localdomain:9928] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:675)
> at org.apache.hadoop.ipc.Server.bind(Server.java:306)
> at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:393)
> at org.apache.hadoop.ipc.Server.<init>(Server.java:1733)
> at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:830)
> {noformat}
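The `BindException` above is the classic symptom of a test binding a hard-coded port (here 9928) that a previous test, or a not-yet-shut-down server, still holds. A common mitigation, sketched below under the assumption that the test can choose its own port, is to bind port 0 so the OS assigns a free ephemeral port (the real fix in an HDFS test would go through the cluster configuration rather than a raw socket):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class EphemeralPortDemo {
    public static void main(String[] args) throws IOException {
        // Bind to port 0: the OS picks a currently free ephemeral port,
        // so two tests can never collide on a hard-coded port number.
        try (ServerSocket socket = new ServerSocket()) {
            socket.bind(new InetSocketAddress("localhost", 0));
            int port = socket.getLocalPort();
            // The assigned port is a real, positive port number.
            System.out.println(port > 0);
        }
    }
}
```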





[jira] [Updated] (HDFS-2881) org.apache.hadoop.hdfs.TestDatanodeBlockScanner Fails Intermittently

2012-02-02 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-2881:
--

Attachment: TEST-org.apache.hadoop.hdfs.TestDatanodeBlockScanner.xml
org.apache.hadoop.hdfs.TestDatanodeBlockScanner.txt
org.apache.hadoop.hdfs.TestDatanodeBlockScanner-output.txt

> org.apache.hadoop.hdfs.TestDatanodeBlockScanner Fails Intermittently
> 
>
> Key: HDFS-2881
> URL: https://issues.apache.org/jira/browse/HDFS-2881
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.24.0
>Reporter: Robert Joseph Evans
> Attachments: 
> TEST-org.apache.hadoop.hdfs.TestDatanodeBlockScanner.xml, 
> org.apache.hadoop.hdfs.TestDatanodeBlockScanner-output.txt, 
> org.apache.hadoop.hdfs.TestDatanodeBlockScanner.txt
>
>
> org.apache.hadoop.hdfs.TestDatanodeBlockScanner fails intermittently during 
> test-patch.





[jira] [Updated] (HDFS-3012) Exception while renewing delegation token

2012-02-24 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-3012:
--

Attachment: HDFS-3012.txt

This patch was manually tested on a secure cluster and it does fix the issue.  
Unit tests would be very hard to write because the fix depends on the 
configuration and the classpath to work.

> Exception while renewing delegation token
> -
>
> Key: HDFS-3012
> URL: https://issues.apache.org/jira/browse/HDFS-3012
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.23.1
>Reporter: Ramya Sunil
>Assignee: Robert Joseph Evans
>Priority: Critical
> Attachments: HDFS-3012.txt
>
>






[jira] [Updated] (HDFS-3012) Exception while renewing delegation token

2012-02-24 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-3012:
--

Status: Patch Available  (was: Open)

> Exception while renewing delegation token
> -
>
> Key: HDFS-3012
> URL: https://issues.apache.org/jira/browse/HDFS-3012
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.23.1
>Reporter: Ramya Sunil
>Assignee: Robert Joseph Evans
>Priority: Critical
> Attachments: HDFS-3012.txt
>
>






[jira] [Updated] (HDFS-3012) Exception while renewing delegation token

2012-02-27 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-3012:
--

Target Version/s: 0.24.0, 0.23.2, 0.23.3  (was: 0.23.2)
  Status: Open  (was: Patch Available)

Canceling patch to address the issues.

> Exception while renewing delegation token
> -
>
> Key: HDFS-3012
> URL: https://issues.apache.org/jira/browse/HDFS-3012
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.23.1
>Reporter: Ramya Sunil
>Assignee: Robert Joseph Evans
>Priority: Critical
> Attachments: HDFS-3012.txt
>
>






[jira] [Updated] (HDFS-3012) Exception while renewing delegation token

2012-02-27 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-3012:
--

Target Version/s: 0.24.0, 0.23.2, 0.23.3  (was: 0.23.3, 0.23.2, 0.24.0)
  Status: Patch Available  (was: Open)

> Exception while renewing delegation token
> -
>
> Key: HDFS-3012
> URL: https://issues.apache.org/jira/browse/HDFS-3012
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.23.1
>Reporter: Ramya Sunil
>Assignee: Robert Joseph Evans
>Priority: Critical
> Attachments: HDFS-3012.txt, HDFS-3012.txt
>
>






[jira] [Updated] (HDFS-3012) Exception while renewing delegation token

2012-02-27 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-3012:
--

Attachment: HDFS-3012.txt

Comments have been addressed and the code is much smaller than before.  I am 
still doing manual tests to ensure that this too fixes the issue, but I expect 
that it will, so I am uploading the patch now.

> Exception while renewing delegation token
> -
>
> Key: HDFS-3012
> URL: https://issues.apache.org/jira/browse/HDFS-3012
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.23.1
>Reporter: Ramya Sunil
>Assignee: Robert Joseph Evans
>Priority: Critical
> Attachments: HDFS-3012.txt, HDFS-3012.txt
>
>

