[jira] [Created] (HDFS-5353) Short circuit reads fail when dfs.encrypt.data.transfer is enabled

2013-10-11 Thread Haohui Mai (JIRA)
Haohui Mai created HDFS-5353:


 Summary: Short circuit reads fail when dfs.encrypt.data.transfer 
is enabled
 Key: HDFS-5353
 URL: https://issues.apache.org/jira/browse/HDFS-5353
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai


DataXceiver tries to establish secure channels via SASL when 
dfs.encrypt.data.transfer is turned on. However, domain socket traffic appears 
to be unencrypted, so the client cannot communicate with the data node over 
domain sockets, which leaves short circuit reads non-functional.
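
A minimal sketch of the conflicting client settings, assuming the standard HDFS 
configuration keys (the failure mode is as reported above, not verified here):
{code}
import org.apache.hadoop.conf.Configuration;

public class ShortCircuitEncryptRepro {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // DataXceiver requires SASL negotiation on data-transfer channels.
    conf.setBoolean("dfs.encrypt.data.transfer", true);
    // Short-circuit reads go through a domain socket instead.
    conf.setBoolean("dfs.client.read.shortcircuit", true);
    conf.set("dfs.domain.socket.path", "/var/run/hdfs/dn_socket");
    // With both enabled, the domain-socket traffic is not SASL-wrapped, so
    // the client cannot complete the handshake and short-circuit reads fail.
  }
}
{code}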



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5352) Server#initLog() doesn't close InputStream

2013-10-11 Thread Ted Yu (JIRA)
Ted Yu created HDFS-5352:


 Summary: Server#initLog() doesn't close InputStream
 Key: HDFS-5352
 URL: https://issues.apache.org/jira/browse/HDFS-5352
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Minor
 Attachments: hdfs-5352.patch

Here is the related code snippet in 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/server/Server.java:
{code}
  Properties props = new Properties();
  try {
    InputStream is = getResource(DEFAULT_LOG4J_PROPERTIES);
    props.load(is);
  } catch (IOException ex) {
    throw new ServerException(ServerException.ERROR.S03, DEFAULT_LOG4J_PROPERTIES,
        ex.getMessage(), ex);
  }
{code}
The stream {{is}} should be closed after loading.
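
A minimal sketch of one possible fix, closing the stream in a finally block via 
Hadoop's IOUtils.closeStream (the attached patch may differ):
{code}
  Properties props = new Properties();
  InputStream is = null;
  try {
    is = getResource(DEFAULT_LOG4J_PROPERTIES);
    props.load(is);
  } catch (IOException ex) {
    throw new ServerException(ServerException.ERROR.S03, DEFAULT_LOG4J_PROPERTIES,
        ex.getMessage(), ex);
  } finally {
    org.apache.hadoop.io.IOUtils.closeStream(is); // null-safe close
  }
{code}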



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5351) Name Nodes should shut down if no image directories

2013-10-11 Thread Rob Weltman (JIRA)
Rob Weltman created HDFS-5351:
-

 Summary: Name Nodes should shut down if no image directories
 Key: HDFS-5351
 URL: https://issues.apache.org/jira/browse/HDFS-5351
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.1.1-beta
Reporter: Rob Weltman


If, for whatever reason, there are no image directories to write to, all Name 
Node instances should shut down.
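
A sketch of the proposed behavior, assuming a hypothetical count of the 
remaining usable image directories (the real check would live in the NameNode's 
storage error handling):
{code}
import org.apache.hadoop.util.ExitUtil;

public class ImageDirCheck {
  // Sketch only: usableImageDirs would come from the NameNode's storage
  // bookkeeping; the method name is illustrative, not an existing API.
  static void checkImageDirs(int usableImageDirs) {
    if (usableImageDirs == 0) {
      // With no image directory left, the namespace can no longer be
      // checkpointed, so continuing to run risks metadata loss.
      ExitUtil.terminate(1, "All fsimage directories have failed; shutting down");
    }
  }
}
{code}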




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5350) Name Node should report fsimage transfer time as a metric

2013-10-11 Thread Rob Weltman (JIRA)
Rob Weltman created HDFS-5350:
-

 Summary: Name Node should report fsimage transfer time as a metric
 Key: HDFS-5350
 URL: https://issues.apache.org/jira/browse/HDFS-5350
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Rob Weltman


If the (Secondary) Name Node reported fsimage transfer times (perhaps the last 
ten of them), monitoring tools could detect slowdowns that might jeopardize 
cluster stability.
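
A sketch of how such a metric might be exposed through Hadoop's metrics2 API; 
the class and metric names here are hypothetical, not an eventual 
implementation:
{code}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.MutableRate;

@Metrics(name = "FsImageTransfer", context = "dfs")
public class FsImageTransferMetrics {
  // MutableRate tracks the count and average time of operations, so a
  // monitoring tool can watch for a rising average across recent transfers.
  @Metric("fsimage transfer time in ms")
  MutableRate fsImageTransferTime;

  public void recordTransfer(long elapsedMs) {
    fsImageTransferTime.add(elapsedMs);
  }
}
{code}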




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (HDFS-5348) Fix error message when dfs.datanode.max.locked.memory is improperly configured

2013-10-11 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HDFS-5348.
---

   Resolution: Fixed
Fix Version/s: HDFS-4949
 Hadoop Flags: Reviewed

Committed to branch.

> Fix error message when dfs.datanode.max.locked.memory is improperly configured
> --
>
> Key: HDFS-5348
> URL: https://issues.apache.org/jira/browse/HDFS-5348
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: HDFS-4949
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Fix For: HDFS-4949
>
> Attachments: HDFS-5348-caching.001.patch
>
>
> We need to fix the error message when dfs.datanode.max.locked.memory is 
> improperly configured.  Currently it says the size is "less than the 
> datanode's available RLIMIT_MEMLOCK limit" when it really means "more"



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5349) DNA_CACHE and DNA_UNCACHE should be by blockId only

2013-10-11 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-5349:
--

 Summary: DNA_CACHE and DNA_UNCACHE should be by blockId only 
 Key: HDFS-5349
 URL: https://issues.apache.org/jira/browse/HDFS-5349
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-4949
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-5349-caching.001.patch

DNA_CACHE and DNA_UNCACHE should be by blockId only.  We don't need length and 
genstamp to know what the NN asked us to cache.
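
A hypothetical shape for such a command, keyed by block IDs alone (names are 
illustrative, not the committed API):
{code}
public class BlockIdCommand {
  private final String blockPoolId;
  private final long[] blockIds;   // no length or genstamp needed

  public BlockIdCommand(String blockPoolId, long[] blockIds) {
    this.blockPoolId = blockPoolId;
    this.blockIds = blockIds;
  }

  public String getBlockPoolId() { return blockPoolId; }
  public long[] getBlockIds() { return blockIds; }
}
{code}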



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5348) Fix error message when dfs.datanode.max.locked.memory is improperly configured

2013-10-11 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-5348:
--

 Summary: Fix error message when dfs.datanode.max.locked.memory is 
improperly configured
 Key: HDFS-5348
 URL: https://issues.apache.org/jira/browse/HDFS-5348
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-4949
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


We need to fix the error message when dfs.datanode.max.locked.memory is 
improperly configured.  Currently it says the size is "less than the datanode's 
available RLIMIT_MEMLOCK limit" when it really means "more"
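
A sketch of the corrected comparison and wording, assuming local values for the 
configured size and the RLIMIT_MEMLOCK ulimit (the attached patch is 
authoritative):
{code}
public class MemlockCheck {
  static void checkLockedMemory(long maxLockedMemory, long memlockLimit) {
    if (maxLockedMemory > memlockLimit) {
      throw new RuntimeException("Cannot start datanode: the configured "
          + "dfs.datanode.max.locked.memory of " + maxLockedMemory
          + " bytes is more than the datanode's available RLIMIT_MEMLOCK "
          + "limit of " + memlockLimit + " bytes.");
    }
  }
}
{code}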



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (HDFS-5224) Refactor PathBasedCache* methods to use a Path rather than a String

2013-10-11 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HDFS-5224.
-

   Resolution: Fixed
Fix Version/s: HDFS-4949
 Hadoop Flags: Reviewed

I've committed this to the HDFS-4949 branch.  Thanks for the reviews!

> Refactor PathBasedCache* methods to use a Path rather than a String
> ---
>
> Key: HDFS-5224
> URL: https://issues.apache.org/jira/browse/HDFS-5224
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: HDFS-4949
>Reporter: Andrew Wang
>Assignee: Chris Nauroth
> Fix For: HDFS-4949
>
> Attachments: HDFS-5224.1.patch, HDFS-5224.2.patch, HDFS-5224.3.patch
>
>
> As discussed in HDFS-5213, we should refactor PathBasedCacheDirective and 
> related methods in DistributedFileSystem to use a Path to represent paths to 
> cache, rather than a String.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


RE: hdfs project separation

2013-10-11 Thread Milind Bhandarkar
Doug,

Your understanding is correct. But I would like to start with a less
ambitious plan first. By duplicating the common RPC and related code, and
renaming packages in common, we can independently build two different
artifacts from the same repo, one for hdfs and one for Yarn+MR. Then we can
decide whether we want to separate these projects completely, making
independent releases.

I believe the last project split failed because of common dependencies in
both MR and HDFS, which meant that changes to RPC etc. were affecting both
upper-level projects. I think we should avoid that by duplicating the needed
common code.

I would like to see what the community thinks, before making detailed plans.

- milind


-Original Message-
From: Doug Cutting [mailto:cutt...@apache.org]
Sent: Friday, October 11, 2013 11:12 AM
To: hdfs-dev@hadoop.apache.org
Subject: Re: hdfs project separation

On Fri, Oct 11, 2013 at 9:14 AM, Milind Bhandarkar
 wrote:
> If HDFS is released independently, with its own RPC and protocol versions,
> features such as pluggable namespaces will not have to wait for the next
> mega-release of the entire stack.

The plan as I understand it is to eventually be able to release common/hdfs
& yarn/mr independently, as two, three or perhaps four different products.
Once we've got that down we can consider splitting into multiple TLPs.  For
this to transpire requires folks to volunteer to create an independent
release, establishing a plan, helping to make the required changes, calling
the vote, etc.  Someone could propose doing this first with HDFS, YARN or
whatever someone thinks is best.  It would take concerted effort by a few
folks, along with consent of the rest of the project.

Do you have a detailed plan?  If so, you could share it and start trying to
build consensus around it.

Doug


[jira] [Created] (HDFS-5347) add HDFS NFS user guide

2013-10-11 Thread Brandon Li (JIRA)
Brandon Li created HDFS-5347:


 Summary: add HDFS NFS user guide
 Key: HDFS-5347
 URL: https://issues.apache.org/jira/browse/HDFS-5347
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: documentation
Reporter: Brandon Li
Assignee: Brandon Li






--
This message was sent by Atlassian JIRA
(v6.1#6144)


Re: hdfs project separation

2013-10-11 Thread Doug Cutting
On Fri, Oct 11, 2013 at 9:14 AM, Milind Bhandarkar
 wrote:
> If HDFS is released independently, with its own RPC and protocol versions, 
> features such as pluggable namespaces will not have to wait for the next 
> mega-release of the entire stack.

The plan as I understand it is to eventually be able to release
common/hdfs & yarn/mr independently, as two, three or perhaps four
different products.  Once we've got that down we can consider
splitting into multiple TLPs.  For this to transpire requires folks to
volunteer to create an independent release, establishing a plan,
helping to make the required changes, calling the vote, etc.  Someone
could propose doing this first with HDFS, YARN or whatever someone
thinks is best.  It would take concerted effort by a few folks, along
with consent of the rest of the project.

Do you have a detailed plan?  If so, you could share it and start
trying to build consensus around it.

Doug


Re: hdfs project separation

2013-10-11 Thread Milind Bhandarkar
Let me add a bit more about the feasibility of this.

I have been doing some experiments by duplicating some common code in 
hdfs-only and yarn/MR-only builds, and am able to build and use hdfs 
independently.

Now that bigtop has matured, we can still do a single distro in apache with 
independently released mr/yarn and hdfs.

That will enable parallel development, and will also reduce the stabilization 
overload at mega-release time.

If HDFS is released independently, with its own RPC and protocol versions, 
features such as pluggable namespaces will not have to wait for the next 
mega-release of the entire stack.

Would love to hear what hdfs developers think about this.

- milind

Sent from my iPhone

> On Oct 10, 2013, at 20:31, Milind Bhandarkar  
> wrote:
> 
> (this message is not sent to specific folks by mistake, but deliberately to 
> the whole hdfs-dev list)
> 
> Hello Folks,
> 
> I do not want to scratch the already bleeding wounds, and want to resolve 
> these issues amicably, without causing a big inter-vendor confrontation.
> 
> So, these are the facts, as I (and several others in the hadoop community) 
> see this.
> 
> 1. there was an attempt to separate different hadoop projects, such as 
> common, hdfs, mapreduce.
> 
> 2. that attempt was aborted for several reasons, common ownership (i.e., 
> committership) being the biggest issue.
> 
> 3. in the meanwhile, several important, release-worthy hdfs improvements 
> were committed to Hadoop. (That's why I supported Konst's appeal for 0.22.) 
> These were also incorporated into Hadoop products by the largest hadoop 
> ecosystem contributor, and several others.
> 
> 4. All the apache hadoop bylaws were followed, to get these improvements into 
> Hadoop project.
> 
> 5. Yet, the common project, which is not even a top-level project since the 
> awkward re-merge happened, got an incompatible wire-protocol change, which 
> was accepted and promoted by a specific section, in spite of the kicking and 
> screaming of (what I think of as) a representative of a large hadoop user 
> community.
> 
> 6. That, and other such changes, have created a big issue for the part of the 
> community which has tested the hdfs part of 2.x and has spent a lot of effort 
> to stabilize hdfs, since this was the major target of assault from proprietary 
> storage systems, such as You-Know-Who.
> 
> I would like to raise this issue as an individual, regardless of my 
> affiliation, so that, we can make hdfs worthy of its association with the top 
> level ecosystem, without being closely associated with it.
> 
> What do the hdfs developers think? 
> 
> - milind
> 
> Sent from my iPhone


[jira] [Created] (HDFS-5346) Replication queues should not be initialized in the middle of IBR processing.

2013-10-11 Thread Kihwal Lee (JIRA)
Kihwal Lee created HDFS-5346:


 Summary: Replication queues should not be initialized in the 
middle of IBR processing.
 Key: HDFS-5346
 URL: https://issues.apache.org/jira/browse/HDFS-5346
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.23.9, 2.3.0
Reporter: Kihwal Lee






--
This message was sent by Atlassian JIRA
(v6.1#6144)


Build failed in Jenkins: Hadoop-Hdfs-trunk #1549

2013-10-11 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1549/

Changes:

[llu] Move HDFS-5276 to 2.3.0 in CHANGES.txt

[llu] HDFS-5276. Remove volatile from LightWeightHashSet. (Junping Du via llu)

[llu] YARN-7. Support CPU resource for DistributedShell. (Junping Du via llu)

[jing9] HADOOP-10039. Add Hive to the list of projects using 
AbstractDelegationTokenSecretManager. Contributed by Haohui Mai.

[suresh] HDFS-5335. Hive query failed with possible race in dfs output stream. 
Contributed by Haohui Mai.

[sandy] YARN-1265. Fair Scheduler chokes on unhealthy node reconnect (Sandy 
Ryza)

[suresh] HADOOP-10029. Specifying har file to MR job fails in secure cluster. 
Contributed by Suresh Srinivas.

--
[...truncated 11402 lines...]
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.415 sec - in 
org.apache.hadoop.hdfs.TestListPathServlet
Running org.apache.hadoop.hdfs.TestParallelShortCircuitRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 0.162 sec - in 
org.apache.hadoop.hdfs.TestParallelShortCircuitRead
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 357.066 sec - 
in org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Running org.apache.hadoop.hdfs.TestFileCreationEmpty
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.277 sec - in 
org.apache.hadoop.hdfs.TestFileCreationEmpty
Running org.apache.hadoop.hdfs.TestSetrepIncreasing
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.854 sec - in 
org.apache.hadoop.hdfs.TestSetrepIncreasing
Running org.apache.hadoop.hdfs.TestEncryptedTransfer
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 81.338 sec - 
in org.apache.hadoop.hdfs.TestEncryptedTransfer
Running org.apache.hadoop.hdfs.TestDFSUpgrade
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.842 sec - in 
org.apache.hadoop.hdfs.TestDFSUpgrade
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.38 sec - in 
org.apache.hadoop.hdfs.TestCrcCorruption
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.019 sec - in 
org.apache.hadoop.hdfs.TestHFlush
Running org.apache.hadoop.hdfs.TestFileAppendRestart
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.052 sec - in 
org.apache.hadoop.hdfs.TestFileAppendRestart
Running org.apache.hadoop.hdfs.TestDatanodeReport
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.508 sec - in 
org.apache.hadoop.hdfs.TestDatanodeReport
Running org.apache.hadoop.hdfs.TestShortCircuitLocalRead
Tests run: 10, Failures: 0, Errors: 0, Skipped: 10, Time elapsed: 0.197 sec - 
in org.apache.hadoop.hdfs.TestShortCircuitLocalRead
Running org.apache.hadoop.hdfs.TestFileInputStreamCache
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.205 sec - in 
org.apache.hadoop.hdfs.TestFileInputStreamCache
Running org.apache.hadoop.hdfs.TestRestartDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.701 sec - in 
org.apache.hadoop.hdfs.TestRestartDFS
Running org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.734 sec - in 
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.127 sec - in 
org.apache.hadoop.hdfs.TestDFSRemove
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.992 sec - in 
org.apache.hadoop.hdfs.TestHDFSTrash
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 40.037 sec - in 
org.apache.hadoop.hdfs.TestClientReportBadBlock
Running org.apache.hadoop.hdfs.TestQuota
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.924 sec - in 
org.apache.hadoop.hdfs.TestQuota
Running org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.934 sec - in 
org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Running org.apache.hadoop.hdfs.TestDatanodeRegistration
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.95 sec - in 
org.apache.hadoop.hdfs.TestDatanodeRegistration
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.282 sec - in 
org.apache.hadoop.hdfs.TestAbandonBlock
Running org.apache.hadoop.hdfs.TestDFSShell
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.221 sec - 
in org.apache.hadoop.hdfs.TestDFSShell
Running org.apache.hadoop.hdfs.TestListFilesInDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.31 sec - in 
org.apache.hadoop.hdfs.TestListFilesInDFS
Running org.apache.hadoop.hdfs.TestParallelShortCircuitRea

Hadoop-Hdfs-trunk - Build # 1549 - Still Failing

2013-10-11 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1549/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 11595 lines...]

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is false
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:44:25.467s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [3.993s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 1:44:30.306s
[INFO] Finished at: Fri Oct 11 13:19:16 UTC 2013
[INFO] Final Memory: 35M/397M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating YARN-1265
Updating HADOOP-10039
Updating HADOOP-10029
Updating HDFS-5335
Updating HDFS-5276
Updating YARN-7
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
No tests ran.

[jira] [Created] (HDFS-5345) NPE in block_info_xml JSP if the block has been deleted

2013-10-11 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-5345:


 Summary: NPE in block_info_xml JSP if the block has been deleted
 Key: HDFS-5345
 URL: https://issues.apache.org/jira/browse/HDFS-5345
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.2.0
Reporter: Steve Loughran
Priority: Minor


If you ask for a block info report on a block that has been deleted, you see a 
stack trace and a 500 error.

Steps to replicate:
# create a file
# browse to it
# get the block info
# delete the file
# reload the block info page

Maybe a 404 would be the better response to raise instead.
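
A sketch of the suggested handling, returning 404 when the block lookup comes 
back empty instead of letting a NullPointerException surface as a 500 (the 
lookup result shown is a hypothetical stand-in for the JSP's block resolution):
{code}
import java.io.IOException;
import javax.servlet.http.HttpServletResponse;

public class BlockInfoXmlHelper {
  static void respond(Object blockInfo, long blockId,
                      HttpServletResponse resp) throws IOException {
    if (blockInfo == null) {
      // Block was deleted between page loads; report "not found"
      // rather than dereferencing null and returning a 500.
      resp.sendError(HttpServletResponse.SC_NOT_FOUND,
          "Block " + blockId + " not found (possibly deleted)");
      return;
    }
    // ...render the block_info_xml output as before...
  }
}
{code}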



--
This message was sent by Atlassian JIRA
(v6.1#6144)


Hadoop-Hdfs-0.23-Build - Build # 757 - Still Failing

2013-10-11 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/757/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7876 lines...]
[ERROR] location: class 
org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpWriteBlockProto
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[3330,10]
 cannot find symbol
[ERROR] symbol  : method 
ensureFieldAccessorsInitialized(java.lang.Class,java.lang.Class)
[ERROR] location: class com.google.protobuf.GeneratedMessage.FieldAccessorTable
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[3335,31]
 cannot find symbol
[ERROR] symbol  : class AbstractParser
[ERROR] location: package com.google.protobuf
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[3344,4]
 method does not override or implement a method from a supertype
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[4098,12]
 cannot find symbol
[ERROR] symbol  : method 
ensureFieldAccessorsInitialized(java.lang.Class,java.lang.Class)
[ERROR] location: class com.google.protobuf.GeneratedMessage.FieldAccessorTable
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[4371,104]
 cannot find symbol
[ERROR] symbol  : method getUnfinishedMessage()
[ERROR] location: class com.google.protobuf.InvalidProtocolBufferException
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5264,8]
 getUnknownFields() in 
org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpTransferBlockProto 
cannot override getUnknownFields() in com.google.protobuf.GeneratedMessage; 
overridden method is final
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5284,19]
 cannot find symbol
[ERROR] symbol  : method 
parseUnknownField(com.google.protobuf.CodedInputStream,com.google.protobuf.UnknownFieldSet.Builder,com.google.protobuf.ExtensionRegistryLite,int)
[ERROR] location: class 
org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpTransferBlockProto
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5314,15]
 cannot find symbol
[ERROR] symbol  : method 
setUnfinishedMessage(org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpTransferBlockProto)
[ERROR] location: class com.google.protobuf.InvalidProtocolBufferException
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5317,27]
 cannot find symbol
[ERROR] symbol  : method 
setUnfinishedMessage(org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpTransferBlockProto)
[ERROR] location: class com.google.protobuf.InvalidProtocolBufferException
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5323,8]
 cannot find symbol
[ERROR] symbol  : method makeExtensionsImmutable()
[ERROR] location: class 
org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpTransferBlockProto
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5334,10]
 cannot find symbol
[ERROR] symbol  : method 
ensureFieldAccessorsInitialized(java.lang.Class,java.lang.Class)
[ERROR] location: class com.google.protobuf.GeneratedMessage.FieldAccessorTable
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5339,31]
 cannot find symbol
[ERROR] symbol  : class AbstractParser
[ERROR] location: package com.google.protobuf
[ERROR] 
/h

Build failed in Jenkins: Hadoop-Hdfs-0.23-Build #757

2013-10-11 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/757/

Changes:

[jlowe] svn merge -c 1520964 FIXES: MAPREDUCE-5414. TestTaskAttempt fails in 
JDK7 with NPE. Contributed by Nemon Lou

[jlowe] svn merge -c 1377943 FIXES: MAPREDUCE-4579. Split TestTaskAttempt into 
two so as to pass tests on jdk7. Contributed by Thomas Graves

[jlowe] YARN-155. TestAppManager intermittently fails with jdk7. Contributed by 
Thomas Graves

[jlowe] svn merge -c 1511464 FIXES: MAPREDUCE-5425. Junit in 
TestJobHistoryServer failing in jdk 7. Contributed by Robert Parker

[jlowe] svn merge -c 1457061 FIXES: MAPREDUCE-4571. TestHsWebServicesJobs fails 
on jdk7. Contributed by Thomas Graves

[jlowe] svn merge -c 1457065 FIXES: MAPREDUCE-4716. 
TestHsWebServicesJobsQuery.testJobsQueryStateInvalid fails with jdk7. 
Contributed by Thomas Graves

--
[...truncated 7683 lines...]
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[10533,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[10544,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[8357,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[8368,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[12641,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[12652,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[9741,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[9752,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[1781,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[1792,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[5338,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[5349,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[6290,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR]