Build failed in Jenkins: Hadoop-Common-0.23-Build #312

2012-07-15 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Common-0.23-Build/312/

--
[...truncated 13086 lines...]
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAuthorityLocalFs
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.189 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsLocalFs
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.133 sec
Running org.apache.hadoop.fs.TestGlobPattern
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.171 sec
Running org.apache.hadoop.fs.TestS3_LocalFileContextURI
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.151 sec
Running org.apache.hadoop.fs.TestLocalFSFileContextCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.757 sec
Running org.apache.hadoop.fs.TestHarFileSystem
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.343 sec
Running org.apache.hadoop.fs.TestFileSystemCaching
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.79 sec
Running org.apache.hadoop.fs.TestLocalFsFCStatistics
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.559 sec
Running org.apache.hadoop.fs.TestHardLink
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.35 sec
Running org.apache.hadoop.fs.TestCommandFormat
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.175 sec
Running org.apache.hadoop.fs.TestLocal_S3FileContextURI
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.223 sec
Running org.apache.hadoop.fs.TestLocalFileSystem
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.847 sec
Running org.apache.hadoop.fs.TestFcLocalFsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.716 sec
Running org.apache.hadoop.fs.TestListFiles
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.582 sec
Running org.apache.hadoop.fs.TestPath
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.828 sec
Running org.apache.hadoop.fs.kfs.TestKosmosFileSystem
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.6 sec
Running org.apache.hadoop.fs.TestGlobExpander
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.113 sec
Running org.apache.hadoop.fs.TestFilterFileSystem
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.655 sec
Running org.apache.hadoop.fs.TestFcLocalFsUtil
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.596 sec
Running org.apache.hadoop.fs.TestGetFileBlockLocations
Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.037 sec  FAILURE!
Running org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract
Tests run: 29, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.84 sec
Running org.apache.hadoop.fs.s3.TestINode
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.112 sec
Running org.apache.hadoop.fs.s3.TestS3Credentials
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.197 sec
Running org.apache.hadoop.fs.s3.TestS3FileSystem
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.223 sec
Running org.apache.hadoop.fs.TestDU
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.259 sec
Running org.apache.hadoop.record.TestBuffer
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.117 sec
Running org.apache.hadoop.record.TestRecordVersioning
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.219 sec
Running org.apache.hadoop.record.TestRecordIO
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.213 sec
Running org.apache.hadoop.metrics2.source.TestJvmMetrics
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.431 sec
Running org.apache.hadoop.metrics2.util.TestSampleStat
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.134 sec
Running org.apache.hadoop.metrics2.util.TestMetricsCache
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.071 sec
Running org.apache.hadoop.metrics2.lib.TestInterns
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.286 sec
Running org.apache.hadoop.metrics2.lib.TestMetricsAnnotations
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.533 sec
Running org.apache.hadoop.metrics2.lib.TestMutableMetrics
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.474 sec
Running org.apache.hadoop.metrics2.lib.TestUniqNames
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.152 sec
Running org.apache.hadoop.metrics2.lib.TestMetricsRegistry
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.459 sec
Running org.apache.hadoop.metrics2.impl.TestMetricsCollectorImpl
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.304 sec
Running org.apache.hadoop.metrics2.impl.TestGangliaMetrics
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.499 sec
Running org.apache.hadoop.metrics2.impl.TestSinkQueue

[jira] [Reopened] (HADOOP-8362) Improve exception message when Configuration.set() is called with a null key or value

2012-07-15 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J reopened HADOOP-8362:
-


Hi Suresh,

Looks like this wasn't committed? I'm going ahead and committing it now. 
Reopening until that is done.

 Improve exception message when Configuration.set() is called with a null key 
 or value
 -

 Key: HADOOP-8362
 URL: https://issues.apache.org/jira/browse/HADOOP-8362
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 2.0.0-alpha
Reporter: Todd Lipcon
Assignee: madhukara phatak
Priority: Trivial
  Labels: newbie
 Fix For: 3.0.0

 Attachments: HADOOP-8362-1.patch, HADOOP-8362-2.patch, 
 HADOOP-8362-3.patch, HADOOP-8362-4.patch, HADOOP-8362-5.patch, 
 HADOOP-8362-6.patch, HADOOP-8362-7.patch, HADOOP-8362-8.patch, 
 HADOOP-8362.9.patch, HADOOP-8362.patch


 Currently, calling Configuration.set(...) with a null value results in a 
 NullPointerException within Properties.setProperty. We should check for null 
 key/value and throw a better exception.
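The committed fix adds explicit argument checks up front. A minimal sketch of that kind of guard, using a simplified, hypothetical Config class rather than the real org.apache.hadoop.conf.Configuration:

```java
import java.util.Properties;

// Simplified, hypothetical stand-in for Configuration.set(): check for a
// null key/value up front and throw a descriptive IllegalArgumentException
// instead of letting Properties.setProperty throw a bare NullPointerException.
public class Config {
    private final Properties props = new Properties();

    public void set(String name, String value) {
        if (name == null) {
            throw new IllegalArgumentException("Property name must not be null");
        }
        if (value == null) {
            throw new IllegalArgumentException(
                "The value of property " + name + " must not be null");
        }
        props.setProperty(name, value);
    }

    public String get(String name) {
        return props.getProperty(name);
    }
}
```

The exception message names the offending key, which is the whole point: the caller learns which property was set to null instead of seeing a stack trace from deep inside Properties.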

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HADOOP-8362) Improve exception message when Configuration.set() is called with a null key or value

2012-07-15 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J resolved HADOOP-8362.
-

  Resolution: Fixed
   Fix Version/s: 2.0.1-alpha (was: 3.0.0)
Target Version/s: (was: 2.0.0-alpha)

Committed to branch-2 and trunk. Thanks Madhukara and Suresh!

 Improve exception message when Configuration.set() is called with a null key 
 or value
 -

 Key: HADOOP-8362
 URL: https://issues.apache.org/jira/browse/HADOOP-8362
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 2.0.0-alpha
Reporter: Todd Lipcon
Assignee: madhukara phatak
Priority: Trivial
  Labels: newbie
 Fix For: 2.0.1-alpha

 Attachments: HADOOP-8362-1.patch, HADOOP-8362-2.patch, 
 HADOOP-8362-3.patch, HADOOP-8362-4.patch, HADOOP-8362-5.patch, 
 HADOOP-8362-6.patch, HADOOP-8362-7.patch, HADOOP-8362-8.patch, 
 HADOOP-8362.10.patch, HADOOP-8362.9.patch, HADOOP-8362.patch


 Currently, calling Configuration.set(...) with a null value results in a 
 NullPointerException within Properties.setProperty. We should check for null 
 key/value and throw a better exception.





[jira] [Created] (HADOOP-8598) Server-side Trash

2012-07-15 Thread Eli Collins (JIRA)
Eli Collins created HADOOP-8598:
---

 Summary: Server-side Trash
 Key: HADOOP-8598
 URL: https://issues.apache.org/jira/browse/HADOOP-8598
 Project: Hadoop Common
  Issue Type: New Feature
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Critical


There are a number of problems with Trash that continue to result in permanent 
data loss for users. The primary reasons trash is not used:

- Trash is configured client-side and not enabled by default.
- Trash is shell-only. FileSystem, WebHDFS, HttpFs, etc. never use trash.
- If trash fails, for example, because we can't create the trash directory or 
the move itself fails, trash is bypassed and the data is deleted.

Trash was designed as a feature to help end users via the shell; however, in my 
experience the primary use of trash is to help administrators implement data 
retention policies (this was also the motivation for HADOOP-7460). One could 
argue that (periodic read-only) snapshots are a better solution to this 
problem; however, snapshots are not slated for Hadoop 2.x, and trash is 
complementary to snapshots (and backup) - e.g. you may create and delete data 
within your snapshot or backup window - so it makes sense to revisit trash's 
design. I think it's worth bringing trash's functionality in line with what 
users need.

I propose we enable trash on a per-filesystem basis and implement it 
server-side, i.e. trash becomes an HDFS feature enabled by administrators. 
Because the trash emptier lives in HDFS and users already have a per-filesystem 
trash directory, we're mostly there already. The design preference from 
HADOOP-2514 was for trash to be implemented in user code; however, given (a) 
the problems above, (b) that we have a lot more user-facing APIs than the 
shell, and (c) that clients increasingly span file systems (via federation and 
symlinks), this design choice makes less sense. This is why we already use a 
per-filesystem trash/home directory instead of the user's client-configured 
one - otherwise trash would not work, because renames can't span file systems.

In short, HDFS trash would work similarly to how it does today; the difference 
is that client delete APIs would result in a rename into trash (a la 
TrashPolicyDefault#moveToTrash) if trash is enabled. As today, the file would 
be renamed to the trash directory on the file system where it resides. The 
primary difference is that enablement and policy are configured server-side by 
administrators and are applied regardless of the API used to access the 
filesystem. The one exception to this is that I think we should continue to 
support the explicit skipTrash shell option. The rationale for skipTrash 
(HADOOP-6080) is that a move to trash may fail in cases where an rm may not - 
for example, if a user has a home directory quota and runs rmr /tonsOfData. 
Without a way to bypass this, the user has no way (unless we revisit quotas, 
permissions or trash paths) to remove a directory they have permission to 
remove without getting their quota adjusted by an admin. The skipTrash API can 
be implemented by adding an explicit FileSystem API that bypasses trash and 
modifying the shell to use it when skipTrash is enabled. Given that users must 
explicitly specify skipTrash, the API is less error prone. We could have the 
shell ask for confirmation and annotate the API as private to FsShell to 
discourage programmatic use. This is not ideal, but can be done compatibly 
(unlike redefining quotas, permissions or trash paths).

In terms of compatibility, this proposal is technically an incompatible change: 
client-side configuration that disables trash, and skipTrash from a previous 
FsShell release, would both be ignored if server-side trash is enabled, and 
non-HDFS file systems would need to make similar changes. I still think it's 
worth targeting for Hadoop 2.x, given that the new semantics preserve the 
current semantics. In 2.x I think we should preserve FsShell-based trash and 
support both it and server-side trash (defaulting to disabled). For trunk/3.x I 
think we should remove FsShell-based trash entirely and enable server-side 
trash by default.
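To make the proposed delete path concrete, here is a hedged sketch in plain Java using java.nio on a local filesystem - not actual HDFS code, and TrashingFs with its method names is purely illustrative. When trash is enabled and skipTrash is not set, delete() becomes a same-filesystem rename into a trash directory, in the spirit of TrashPolicyDefault#moveToTrash; otherwise the data is removed permanently.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Illustrative sketch only: a filesystem wrapper whose delete() routes
// through trash server-side. TrashingFs is a hypothetical name, not an
// HDFS class.
public class TrashingFs {
    private final Path trashRoot;       // e.g. a per-user .Trash/Current dir
    private final boolean trashEnabled; // configured by the administrator

    public TrashingFs(Path trashRoot, boolean trashEnabled) {
        this.trashRoot = trashRoot;
        this.trashEnabled = trashEnabled;
    }

    // Returns where the path ended up: its location under trash, or null
    // if it was permanently deleted (trash disabled or skipTrash requested).
    public Path delete(Path p, boolean skipTrash) throws IOException {
        if (trashEnabled && !skipTrash) {
            Files.createDirectories(trashRoot);
            Path target = trashRoot.resolve(p.getFileName());
            // Same-filesystem rename: cheap, and the data survives until
            // an emptier expires it.
            Files.move(p, target, StandardCopyOption.REPLACE_EXISTING);
            return target;
        }
        Files.delete(p); // explicit bypass: data is gone
        return null;
    }
}
```

Note the key property of the proposal: the trash decision lives inside the filesystem object, so every API built on top of it (shell, FileSystem, WebHDFS, HttpFs) gets the same behavior, while skipTrash remains an explicit, per-call opt-out.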





Re: New JIRA version field for branch-2's next release?

2012-07-15 Thread Arun C Murthy
Done.

On Jul 13, 2012, at 11:12 PM, Harsh J wrote:

 Hey devs,
 
 I noticed 2.0.1 has already been branched, but there's no newer JIRA
 version field added in for 2.1.0? Can someone with the right powers
 add it across all projects, so that backports to branch-2 can be
 marked properly in their fix versions field?
 
 Thanks!
 -- 
 Harsh J

--
Arun C. Murthy
Hortonworks Inc.
http://hortonworks.com/




Re: New JIRA version field for branch-2's next release?

2012-07-15 Thread Harsh J
Thanks Arun! I will now diff both branches and correct the JIRA fix
version wherever it needs fixing.

On Mon, Jul 16, 2012 at 8:30 AM, Arun C Murthy a...@hortonworks.com wrote:
 Done.

 On Jul 13, 2012, at 11:12 PM, Harsh J wrote:

 Hey devs,

 I noticed 2.0.1 has already been branched, but there's no newer JIRA
 version field added in for 2.1.0? Can someone with the right powers
 add it across all projects, so that backports to branch-2 can be
 marked properly in their fix versions field?

 Thanks!
 --
 Harsh J

 --
 Arun C. Murthy
 Hortonworks Inc.
 http://hortonworks.com/





-- 
Harsh J


Re: New JIRA version field for branch-2's next release?

2012-07-15 Thread Harsh J
Ah looks like you've covered that edge too, many thanks!

On Mon, Jul 16, 2012 at 8:40 AM, Harsh J ha...@cloudera.com wrote:
 Thanks Arun! I will now diff both branches and fix any places the JIRA
 fix version needs to be corrected at.

 On Mon, Jul 16, 2012 at 8:30 AM, Arun C Murthy a...@hortonworks.com wrote:
 Done.

 On Jul 13, 2012, at 11:12 PM, Harsh J wrote:

 Hey devs,

 I noticed 2.0.1 has already been branched, but there's no newer JIRA
 version field added in for 2.1.0? Can someone with the right powers
 add it across all projects, so that backports to branch-2 can be
 marked properly in their fix versions field?

 Thanks!
 --
 Harsh J

 --
 Arun C. Murthy
 Hortonworks Inc.
 http://hortonworks.com/





 --
 Harsh J



-- 
Harsh J