[jira] [Created] (HDFS-4773) Fix bugs in quota usage updating/computation

2013-04-28 Thread Jing Zhao (JIRA)
Jing Zhao created HDFS-4773:
---

 Summary: Fix bugs in quota usage updating/computation
 Key: HDFS-4773
 URL: https://issues.apache.org/jira/browse/HDFS-4773
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Jing Zhao
Assignee: Jing Zhao
Priority: Minor


1. FileWithSnapshot#updateQuotaAndCollectBlocks did not consider the scenario 
where all the snapshots have been deleted from a snapshot copy of a deleted 
file. This may lead to a divide-by-zero error.

2. When computing the quota usage for a WithName node and its subtree, if the 
snapshot associated with the WithName node at the time of the rename operation 
has been deleted, we should compute the quota based on the posterior snapshot.
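A minimal sketch of the first hazard. The names below are illustrative only and do not match the real FileWithSnapshot#updateQuotaAndCollectBlocks code: if a per-copy diskspace charge is derived by dividing by the number of remaining snapshot copies of a deleted file, the empty-snapshot-list case must be guarded before the division.

```java
// Illustrative sketch only -- names are hypothetical, not the actual
// FileWithSnapshot#updateQuotaAndCollectBlocks implementation.
public class QuotaDivideByZeroSketch {

    // Charge diskspace per remaining snapshot copy of a deleted file.
    static long diskspacePerCopy(long totalDiskspace, int snapshotCopies) {
        // The guard the bug report calls for: when every snapshot of the
        // deleted file is gone, there is nothing left to divide by.
        if (snapshotCopies == 0) {
            return 0;
        }
        return totalDiskspace / snapshotCopies;
    }

    public static void main(String[] args) {
        System.out.println(diskspacePerCopy(3000L, 3)); // prints 1000
        System.out.println(diskspacePerCopy(3000L, 0)); // prints 0, no ArithmeticException
    }
}
```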

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: Heads up - 2.0.5-beta

2013-04-28 Thread Arun C Murthy
Agreed Luke. Thanks for pointing it out, I'll track it as such.

Arun

On Apr 26, 2013, at 1:37 PM, Luke Lu wrote:

> If protocol compatibility of v2 and v3 is a goal, HADOOP-8990 should be a
> blocker for v2.
> 
> --Luke
> 
> On Fri, Apr 26, 2013 at 12:07 PM, Eli Collins  wrote:
> 
>> On Fri, Apr 26, 2013 at 11:15 AM, Arun C Murthy 
>> wrote:
>>> 
>>> On Apr 25, 2013, at 7:31 PM, Roman Shaposhnik wrote:
>>> 
 On Thu, Apr 25, 2013 at 6:34 PM, Arun C Murthy 
>> wrote:
 
> With that in mind, I really want to make a serious push to lock down
>> APIs and wire-protocols for hadoop-2.0.5-beta.
> Thus, we can confidently support hadoop-2.x in a compatible manner in
>> the future. So, it's fine to add new features,
> but please ensure that all APIs are frozen for hadoop-2.0.5-beta
 
 Arun, since it sounds like you have a pretty definite idea
 in mind for what you want 'beta' label to actually mean,
 could you, please, share the exact criteria?
>>> 
>>> Sorry, I'm not sure if this is exactly what you are looking for but, as
>> I mentioned above, the primary aim would be to make the final set of required
>> API/wire-protocol changes so that we can call it a 'beta', i.e. once
>> 2.0.5-beta ships, users & downstream projects can be confident about forward
>> compatibility in the hadoop-2.x line. Obviously, we might discover a blocker
>> bug post 2.0.5 which *might* necessitate an unfortunate change - but that
>> should be a rare exception.
>> 
>> Arun, Suresh,
>> 
>> Mind reviewing the following page Karthik put together on
>> compatibility?   http://wiki.apache.org/hadoop/Compatibility
>> 
>> I think we should do something similar to what Sanjay proposed in
>> HADOOP-5071 for Hadoop v2.   If we get on the same page on
>> compatibility terms/APIs then we can quickly draft the policy, at
>> least for the things we've already got consensus on.  I think our new
>> developers, users, downstream projects, and partners would really
>> appreciate us making this clear.  If people like the content we can
>> move it to the Hadoop website and maintain it in svn like the bylaws.
>> 
>> The reason I think we need to do so is because there's been confusion
>> about what types of compatibility we promise and some open questions
>> which I'm not sure everyone is clear on. Examples:
>> - Are we going to preserve Hadoop v3 clients against v2 servers now
>> that we have protobuf support?  (I think so..)
>> - Can we break rolling upgrade of daemons in updates post GA? (I don't
>> think so..)
>> - Do we disallow HDFS metadata changes that require an HDFS upgrade in
>> an update? (I think so..)
>> - Can we remove methods from v2 and v2 updates that were deprecated in
>> v0.20-22?  (Unclear)
>> - Will we preserve binary compatibility for MR2 going forward? (I think
>> so..)
>> - Does the ability to support multiple versions of MR simultaneously
>> via MR2 change the MR API compatibility story? (I don't think so..)
>> - Are the RM protocols sufficiently stable to disallow incompatible
>> changes potentially required by non-MR projects? (Unclear, most large
>> Yarn deployments I'm aware of are running 0.23, not v2 alphas)
>> 
>> I'm also not sure there's currently consensus on what an incompatible
>> change is. For example, I think HADOOP-9151 is incompatible because it
>> broke client/server wire compatibility with previous releases and any
>> change that breaks wire compatibility is incompatible.  Suresh felt it
>> was not an incompatible change because it did not affect API
>> compatibility (ie PB is not considered part of the API) and the change
>> occurred while v2 is in alpha.  Not sure we need to go through the
>> whole exercise of what's allowed in an alpha and beta (water under the
>> bridge, hopefully), but I do think we should clearly define an
>> incompatible change.  It's fine that v2 has been a bit wild wild west
>> in the alpha development stage but I think we need to get a little
>> more rigorous.
>> 
>> Thanks,
>> Eli
>> 
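On the protobuf question raised above (v3 clients against v2 servers): the reason a PB-based wire format can tolerate newer clients is that decoders skip unknown field tags instead of failing. A toy model of that decoding rule, not the actual Hadoop RPC code:

```java
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;

// Toy model of protobuf-style decoding: fields arrive as (tag, value)
// pairs, and a decoder silently skips tags it does not understand.
public class UnknownFieldSketch {

    static Map<Integer, Integer> decode(int[] tagValuePairs, Set<Integer> knownTags) {
        Map<Integer, Integer> out = new TreeMap<>();
        for (int i = 0; i < tagValuePairs.length; i += 2) {
            int tag = tagValuePairs[i];
            int value = tagValuePairs[i + 1];
            if (knownTags.contains(tag)) {
                out.put(tag, value); // field this server version understands
            }                        // unknown tag: skipped, not fatal
        }
        return out;
    }

    public static void main(String[] args) {
        // A "v3 client" sends tags 1 and 2 plus a new tag 7; a "v2 server"
        // that only knows tags 1 and 2 still decodes the message cleanly.
        System.out.println(decode(new int[] {1, 10, 2, 20, 7, 70}, Set.of(1, 2)));
        // prints {1=10, 2=20}
    }
}
```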

--
Arun C. Murthy
Hortonworks Inc.
http://hortonworks.com/




[jira] [Created] (HDFS-4772) Add number of children in HdfsFileStatus

2013-04-28 Thread Brandon Li (JIRA)
Brandon Li created HDFS-4772:


 Summary: Add number of children in HdfsFileStatus
 Key: HDFS-4772
 URL: https://issues.apache.org/jira/browse/HDFS-4772
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Brandon Li
Assignee: Brandon Li
Priority: Minor


This JIRA is to track the change to return the number of children for a 
directory, so the client doesn't need to make a getListing() call to calculate 
the number of dirents. This makes it convenient for the client to check for 
directory size changes.
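As a stand-in for the HDFS API (using java.nio on the local filesystem, since the proposed HdfsFileStatus field doesn't exist yet), this is the full-listing round trip the change would let clients avoid:

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class ChildCountSketch {

    // Without a child count in the file status, a client must stream the
    // entire directory listing just to count its entries.
    static long countChildren(Path dir) throws IOException {
        long count = 0;
        try (DirectoryStream<Path> entries = Files.newDirectoryStream(dir)) {
            for (Path ignored : entries) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("children");
        Files.createFile(dir.resolve("a"));
        Files.createFile(dir.resolve("b"));
        System.out.println(countChildren(dir)); // prints 2
    }
}
```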

--


[jira] [Created] (HDFS-4771) Provide a way to set symlink attributes

2013-04-28 Thread Brandon Li (JIRA)
Brandon Li created HDFS-4771:


 Summary: Provide a way to set symlink attributes
 Key: HDFS-4771
 URL: https://issues.apache.org/jira/browse/HDFS-4771
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Brandon Li


Currently HDFS always resolves symlinks when setting certain file attributes, 
such as in setPermission and setTime. As a result, the client can't set some 
file attributes on the symlink itself.
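For comparison, java.nio on a local filesystem already offers both behaviors via LinkOption.NOFOLLOW_LINKS when reading attributes; this issue asks for the analogous choice on HDFS's attribute-setting path:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.LinkOption;
import java.nio.file.Path;
import java.nio.file.attribute.FileTime;

public class SymlinkAttrSketch {
    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("sym");
        Path target = Files.createFile(dir.resolve("target"));
        Path link = Files.createSymbolicLink(dir.resolve("link"), target);

        // Default behavior follows the link and reports the target's
        // attributes -- the only behavior HDFS setters offer today.
        FileTime ofTarget = Files.getLastModifiedTime(link);

        // NOFOLLOW_LINKS operates on the link itself, the kind of
        // control this issue asks for on the attribute-setting path.
        FileTime ofLink = Files.getLastModifiedTime(link, LinkOption.NOFOLLOW_LINKS);

        System.out.println(ofTarget.equals(Files.getLastModifiedTime(target)));
        // prints true
    }
}
```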



--


Hadoop-Hdfs-trunk - Build # 1386 - Still Failing

2013-04-28 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1386/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 14353 lines...]
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at 
org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28)

Running org.apache.hadoop.contrib.bkjournal.TestCurrentInprogress
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.669 sec
Running org.apache.hadoop.contrib.bkjournal.TestBookKeeperConfiguration
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.111 sec
Running org.apache.hadoop.contrib.bkjournal.TestBookKeeperJournalManager
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.561 sec

Results :

Failed tests:   
testStandbyExceptionThrownDuringCheckpoint(org.apache.hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints):
 SBN should have still been checkpointing.

Tests run: 32, Failures: 1, Errors: 0, Skipped: 0

[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  SUCCESS [1:30:12.334s]
[INFO] Apache Hadoop HttpFS .. SUCCESS [2:17.514s]
[INFO] Apache Hadoop HDFS BookKeeper Journal . FAILURE [59.448s]
[INFO] Apache Hadoop HDFS Project  SKIPPED
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 1:33:30.072s
[INFO] Finished at: Sun Apr 28 13:07:08 UTC 2013
[INFO] Final Memory: 47M/794M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test) on 
project hadoop-hdfs-bkjournal: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs-bkjournal
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating HDFS-4740
Updating HDFS-4722
Updating HADOOP-9490
Updating HDFS-4768
Updating HDFS-4743
Updating HDFS-4705
Updating HDFS-4741
Updating HADOOP-9500
Updating HDFS-4748
Updating HADOOP-9290
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
No tests ran.

Build failed in Jenkins: Hadoop-Hdfs-trunk #1386

2013-04-28 Thread Apache Jenkins Server
See 

Changes:

[suresh] HDFS-4705. Address HDFS test failures on Windows because of invalid 
dfs.namenode.name.dir. Contributed by Ivan Mitic.

[suresh] HADOOP-9490. LocalFileSystem#reportChecksumFailure not closing the 
checksum file handle before rename. Contributed by Ivan Mitic.

[suresh] HADOOP-9500. TestUserGroupInformation#testGetServerSideGroups fails on 
Windows due to failure to find winutils.exe. Contributed by Chris Nauroth.

[suresh] HDFS-4722. TestGetConf#testFederation times out on Windows. 
Contributed by Ivan Mitic.

[suresh] HDFS-4740. Fixes for a few test failures on Windows. Contributed by 
Arpit Agarwal.

[suresh] HDFS-4743. TestNNStorageRetentionManager fails on Windows. Contributed 
by Chris Nauroth.

[suresh] HDFS-4748. MiniJournalCluster#restartJournalNode leaks resources, 
which causes sporadic test failures. Contributed by Chris Nauroth.

[suresh] HADOOP-9290. Some tests cannot load native library on windows. 
Contributed by Chris Nauroth.

[suresh] HDFS-4741. TestStorageRestore#testStorageRestoreFailure fails on 
Windows. Contributed by Arpit Agarwal.

[suresh] HDFS-4768. File handle leak in datanode when a block pool is removed. 
Contributed by Chris Nauroth.

--
[...truncated 14160 lines...]

Jenkins build is back to normal : Hadoop-Hdfs-0.23-Build #595

2013-04-28 Thread Apache Jenkins Server
See