Oracle Java SE 7 Update 6 released (with ARM JDK!)
FYI, especially for those interested in ARM support:

Oracle Releases New Java Updates - Java SE 7 Update 6, JavaFX 2.2 and JavaFX Scene Builder 1.0
http://www.oracle.com/us/corporate/press/1735645

This release includes a full JDK (as opposed to an EJRE) for Linux on ARM, along with a non-experimental server JIT and free general-purpose licensing:

"Java SE 7 Update 6 introduces a JDK for Linux on ARM v6 and v7 to address “general purpose” ARM systems, such as those used for the emerging micro-server ARM market, and for development platforms such as Raspberry Pi. This new JDK for Linux on ARM is made available under the Oracle Binary Code License and is available for download at no cost for development and production use on general-purpose platforms."

-Trevor
Re: S3 FS tests broken?
This is me, https://issues.apache.org/jira/browse/HADOOP-8699

On it
Thx

On Tue, Aug 14, 2012 at 10:33 AM, Eli Collins wrote:
> Trevor,
> Forgot to ask, since you can reproduce this can you confirm and see
> why S3Conf.get is returning null for test.fs.s3.name?
>
> On Mon, Aug 13, 2012 at 6:35 PM, Eli Collins wrote:
>> Passes for me locally, and the precondition that's failing (passing
>> null to Conf#set) from the backtrace looks like the null is coming
>> from:
>>
>> S3Conf.set(FS_DEFAULT_NAME_DEFAULT, S3Conf.get("test.fs.s3.name"));
>>
>> which is set in core-site.xml so something strange is going on.
>> HADOOP-6296 looks related btw.
>>
>> On Mon, Aug 13, 2012 at 6:04 PM, Trevor wrote:
>>> Anyone know why these tests have started failing? It happens for me
>>> locally and it just happened in Jenkins:
>>> https://builds.apache.org/job/PreCommit-HADOOP-Build/1288/
>>>
>>> I don't see any obvious changes recently that would cause it.
>>>
>>> Tests in error:
>>> testCreateFile(org.apache.hadoop.fs.TestS3_LocalFileContextURI): Property value must not be null
>>> testCreateFileWithNullName(org.apache.hadoop.fs.TestS3_LocalFileContextURI): Property value must not be null
>>> testCreateExistingFile(org.apache.hadoop.fs.TestS3_LocalFileContextURI): Property value must not be null
>>> testCreateFileInNonExistingDirectory(org.apache.hadoop.fs.TestS3_LocalFileContextURI): Property value must not be null
>>> testCreateDirectory(org.apache.hadoop.fs.TestS3_LocalFileContextURI): Property value must not be null
>>> testMkdirsFailsForSubdirectoryOfExistingFile(org.apache.hadoop.fs.TestS3_LocalFileContextURI): Property value must not be null
>>> testIsDirectory(org.apache.hadoop.fs.TestS3_LocalFileContextURI): Property value must not be null
>>> testDeleteFile(org.apache.hadoop.fs.TestS3_LocalFileContextURI): Property value must not be null
>>> testDeleteNonExistingFile(org.apache.hadoop.fs.TestS3_LocalFileContextURI): Property value must not be null
>>> testDeleteNonExistingFileInDir(org.apache.hadoop.fs.TestS3_LocalFileContextURI): Property value must not be null
>>> testDeleteDirectory(org.apache.hadoop.fs.TestS3_LocalFileContextURI): Property value must not be null
>>> testDeleteNonExistingDirectory(org.apache.hadoop.fs.TestS3_LocalFileContextURI): Property value must not be null
>>> testModificationTime(org.apache.hadoop.fs.TestS3_LocalFileContextURI): Property value must not be null
>>> testFileStatus(org.apache.hadoop.fs.TestS3_LocalFileContextURI): Property value must not be null
>>> testGetFileStatusThrowsExceptionForNonExistentFile(org.apache.hadoop.fs.TestS3_LocalFileContextURI): Property value must not be null
>>> testListStatusThrowsExceptionForNonExistentFile(org.apache.hadoop.fs.TestS3_LocalFileContextURI): Property value must not be null
>>> testListStatus(org.apache.hadoop.fs.TestS3_LocalFileContextURI): Property value must not be null
>>> testBlockSize(org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract)
>>> testFsStatus(org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract)
>>> testWorkingDirectory(org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract)
>>> testMkdirs(org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract)
>>> testMkdirsFailsForSubdirectoryOfExistingFile(org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract)
>>> testGetFileStatusThrowsExceptionForNonExistentFile(org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract)
>>> testListStatusThrowsExceptionForNonExistentFile(org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract)
>>> testListStatus(org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract)
>>> testWriteReadAndDeleteEmptyFile(org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract)
>>> testWriteReadAndDeleteHalfABlock(org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract)
>>> testWriteReadAndDeleteOneBlock(org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract)
>>> testWriteReadAndDeleteOneAndAHalfBlocks(org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract)
>>> testWriteReadAndDeleteTwoBlocks(org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract)
>>> testOverwrite(org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract)
>>> testWriteInNonExistentDirectory(org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract)
>>> testDeleteNonExistentFile(org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract)
>>> testDeleteRecursively(org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract)
>>> testDeleteEmptyDirectory(org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract)
>>> testRenameNonExistentPath(org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract)
>>> testRenameFileMoveToNonExistentDirectory(org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract)
>>> testRenameFileMoveToExistingDirectory(org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemC
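The "Property value must not be null" errors in the thread above come from the pattern Eli quotes: the value looked up with `get("test.fs.s3.name")` is null (because the core-site.xml carrying it was shadowed, per HADOOP-8699), and it is `set()` that rejects the null. The following is a minimal, self-contained stand-in for this behavior — `NullConfDemo` is a hypothetical mock, not Hadoop's actual `org.apache.hadoop.conf.Configuration` class — showing why the failure surfaces at `set()` rather than at the missing `get()`:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified mock of a Hadoop-style configuration: get() returns null for
// absent keys, while set() rejects null values with a precondition check.
public class NullConfDemo {
    private final Map<String, String> props = new HashMap<>();

    public String get(String key) {
        return props.get(key); // null when the key was never loaded
    }

    public void set(String key, String value) {
        if (value == null) {
            throw new IllegalArgumentException("Property value must not be null");
        }
        props.put(key, value);
    }

    public static void main(String[] args) {
        NullConfDemo conf = new NullConfDemo();
        try {
            // "test.fs.s3.name" was never loaded (its core-site.xml was
            // shadowed), so get() returns null and set() throws.
            conf.set("fs.defaultFS", conf.get("test.fs.s3.name"));
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // Property value must not be null
        }
    }
}
```

The stack trace therefore points at the `set()` call, two steps away from the real problem — the missing configuration resource.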
[jira] [Created] (HADOOP-8699) some common test cases create core-site.xml in test-classes, making other test cases fail
Alejandro Abdelnur created HADOOP-8699:
------------------------------------------

             Summary: some common test cases create core-site.xml in test-classes, making other test cases fail
                 Key: HADOOP-8699
                 URL: https://issues.apache.org/jira/browse/HADOOP-8699
             Project: Hadoop Common
          Issue Type: Bug
          Components: test
    Affects Versions: 2.2.0-alpha
            Reporter: Alejandro Abdelnur
            Assignee: Alejandro Abdelnur
            Priority: Critical
             Fix For: 2.2.0-alpha

Some of the test cases (HADOOP-8581, MAPREDUCE-4417) create core-site.xml files on the fly in test-classes, overriding the core-site.xml that is part of test/resources. Tests fail or pass depending on the order in which they run (which seems to depend on the platform/JVM being used).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
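One way to avoid the order-dependence the issue describes is for tests that generate a core-site.xml on the fly to write it into an isolated temporary directory rather than into test-classes, so the generated file can never shadow the shared copy on the test classpath. A hedged sketch of that direction (the helper name and XML content are illustrative, not the actual HADOOP-8699 patch):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

// Writes a generated configuration file into a fresh temp directory,
// keeping it off the main test classpath so it cannot override the
// core-site.xml bundled under test/resources for later test cases.
public class TempConfDirDemo {
    public static Path writeGeneratedConf(String xml) throws IOException {
        Path dir = Files.createTempDirectory("test-conf-"); // per-run isolation
        Path conf = dir.resolve("core-site.xml");
        Files.write(conf, xml.getBytes(StandardCharsets.UTF_8));
        return conf;
    }

    public static void main(String[] args) throws IOException {
        Path conf = writeGeneratedConf("<configuration></configuration>");
        System.out.println(conf.getFileName()); // core-site.xml, in a temp dir
    }
}
```

A test that needs the generated file can then load it explicitly by path instead of relying on classpath resource lookup, which is what made the failures order-dependent.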
[jira] [Resolved] (HADOOP-8558) Hadoop RPC does not allow protocol extension with common interfaces.
[ https://issues.apache.org/jira/browse/HADOOP-8558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Konstantin Shvachko resolved HADOOP-8558.
-----------------------------------------
       Resolution: Not A Problem
    Fix Version/s: 3.0.0
         Assignee: Konstantin Shvachko

Resolving as not a problem.

> Hadoop RPC does not allow protocol extension with common interfaces.
> ---------------------------------------------------------------------
>
>                 Key: HADOOP-8558
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8558
>             Project: Hadoop Common
>          Issue Type: Bug
>    Affects Versions: 2.0.0-alpha
>            Reporter: Konstantin Shvachko
>            Assignee: Konstantin Shvachko
>             Fix For: 3.0.0
>
>      Attachments: TestProtocolExtension.java, TestProtocolExtension.java
>
> Hadoop RPC fails if MyProtocol extends an interface that is not a
> VersionedProtocol, even if MyProtocol also extends VersionedProtocol. The
> reason is that Invocation uses Method.getDeclaringClass(), which returns the
> interface class rather than the class of MyProtocol.
> This is incompatible with former versions.
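The root cause the issue describes is standard Java reflection behavior: when a method is looked up through an extending interface, `Method.getDeclaringClass()` reports the super-interface that declared it, not the interface the lookup went through. A small self-contained demo (the interface names here are illustrative, not Hadoop's actual protocol interfaces):

```java
import java.lang.reflect.Method;

// Demonstrates the reflection behavior behind HADOOP-8558: a method
// inherited from a super-interface reports that super-interface as its
// declaring class, even when looked up via the extending interface.
public class DeclaringClassDemo {
    interface CommonInterface { void ping(); }
    interface MyProtocol extends CommonInterface { }

    public static void main(String[] args) throws NoSuchMethodException {
        Method m = MyProtocol.class.getMethod("ping");
        // Prints "CommonInterface", not "MyProtocol" -- so an RPC layer
        // keyed on getDeclaringClass() would resolve the wrong protocol.
        System.out.println(m.getDeclaringClass().getSimpleName());
    }
}
```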
Re: S3 FS tests broken?
Trevor,
Forgot to ask, since you can reproduce this can you confirm and see
why S3Conf.get is returning null for test.fs.s3.name?

On Mon, Aug 13, 2012 at 6:35 PM, Eli Collins wrote:
> Passes for me locally, and the precondition that's failing (passing
> null to Conf#set) from the backtrace looks like the null is coming
> from:
>
> S3Conf.set(FS_DEFAULT_NAME_DEFAULT, S3Conf.get("test.fs.s3.name"));
>
> which is set in core-site.xml so something strange is going on.
> HADOOP-6296 looks related btw.
>
> On Mon, Aug 13, 2012 at 6:04 PM, Trevor wrote:
>> Anyone know why these tests have started failing? It happens for me
>> locally and it just happened in Jenkins:
>> https://builds.apache.org/job/PreCommit-HADOOP-Build/1288/
>>
>> I don't see any obvious changes recently that would cause it.
>>
>> Tests in error:
>> testCreateFile(org.apache.hadoop.fs.TestS3_LocalFileContextURI): Property value must not be null
>> testCreateFileWithNullName(org.apache.hadoop.fs.TestS3_LocalFileContextURI): Property value must not be null
>> testCreateExistingFile(org.apache.hadoop.fs.TestS3_LocalFileContextURI): Property value must not be null
>> testCreateFileInNonExistingDirectory(org.apache.hadoop.fs.TestS3_LocalFileContextURI): Property value must not be null
>> testCreateDirectory(org.apache.hadoop.fs.TestS3_LocalFileContextURI): Property value must not be null
>> testMkdirsFailsForSubdirectoryOfExistingFile(org.apache.hadoop.fs.TestS3_LocalFileContextURI): Property value must not be null
>> testIsDirectory(org.apache.hadoop.fs.TestS3_LocalFileContextURI): Property value must not be null
>> testDeleteFile(org.apache.hadoop.fs.TestS3_LocalFileContextURI): Property value must not be null
>> testDeleteNonExistingFile(org.apache.hadoop.fs.TestS3_LocalFileContextURI): Property value must not be null
>> testDeleteNonExistingFileInDir(org.apache.hadoop.fs.TestS3_LocalFileContextURI): Property value must not be null
>> testDeleteDirectory(org.apache.hadoop.fs.TestS3_LocalFileContextURI): Property value must not be null
>> testDeleteNonExistingDirectory(org.apache.hadoop.fs.TestS3_LocalFileContextURI): Property value must not be null
>> testModificationTime(org.apache.hadoop.fs.TestS3_LocalFileContextURI): Property value must not be null
>> testFileStatus(org.apache.hadoop.fs.TestS3_LocalFileContextURI): Property value must not be null
>> testGetFileStatusThrowsExceptionForNonExistentFile(org.apache.hadoop.fs.TestS3_LocalFileContextURI): Property value must not be null
>> testListStatusThrowsExceptionForNonExistentFile(org.apache.hadoop.fs.TestS3_LocalFileContextURI): Property value must not be null
>> testListStatus(org.apache.hadoop.fs.TestS3_LocalFileContextURI): Property value must not be null
>> testBlockSize(org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract)
>> testFsStatus(org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract)
>> testWorkingDirectory(org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract)
>> testMkdirs(org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract)
>> testMkdirsFailsForSubdirectoryOfExistingFile(org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract)
>> testGetFileStatusThrowsExceptionForNonExistentFile(org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract)
>> testListStatusThrowsExceptionForNonExistentFile(org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract)
>> testListStatus(org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract)
>> testWriteReadAndDeleteEmptyFile(org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract)
>> testWriteReadAndDeleteHalfABlock(org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract)
>> testWriteReadAndDeleteOneBlock(org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract)
>> testWriteReadAndDeleteOneAndAHalfBlocks(org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract)
>> testWriteReadAndDeleteTwoBlocks(org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract)
>> testOverwrite(org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract)
>> testWriteInNonExistentDirectory(org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract)
>> testDeleteNonExistentFile(org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract)
>> testDeleteRecursively(org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract)
>> testDeleteEmptyDirectory(org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract)
>> testRenameNonExistentPath(org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract)
>> testRenameFileMoveToNonExistentDirectory(org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract)
>> testRenameFileMoveToExistingDirectory(org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract)
>> testRenameFileAsExistingFile(org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract)
>> testRenameFileAsExistingDirectory(org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract)
>> testRenameDirectoryMoveToNonEx
[jira] [Created] (HADOOP-8698) Do not call unnecessary setConf(null) in Configured constructor
Radim Kolar created HADOOP-8698:
-----------------------------------

             Summary: Do not call unnecessary setConf(null) in Configured constructor
                 Key: HADOOP-8698
                 URL: https://issues.apache.org/jira/browse/HADOOP-8698
             Project: Hadoop Common
          Issue Type: Bug
          Components: conf
    Affects Versions: 0.23.3, 3.0.0
            Reporter: Radim Kolar
            Priority: Minor
             Fix For: 0.23.3, 3.0.0

The no-arg constructor of org.apache.hadoop.conf.Configured calls setConf(null). This is unnecessary, and it increases the complexity of setConf() implementations because they have to check for a non-null object reference before using it. Under normal conditions setConf() is never called with a null reference, so the null check would otherwise be unnecessary.
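To see the cost the report describes, consider a simplified mock of the Configured pattern (the `Conf` class and `MyTool` subclass below are illustrative stand-ins, not Hadoop's actual classes): because the no-arg constructor calls the overridable setConf(null), every subclass override must guard against null before touching the configuration.

```java
// Simplified mock of the org.apache.hadoop.conf.Configured pattern.
public class ConfiguredDemo {
    static class Conf {
        String get(String key) { return "value-of-" + key; }
    }

    static class Configured {
        private Conf conf;
        Configured() { setConf(null); }           // the call HADOOP-8698 wants removed
        Configured(Conf conf) { setConf(conf); }
        void setConf(Conf conf) { this.conf = conf; }
        Conf getConf() { return conf; }
    }

    static class MyTool extends Configured {
        private String cached;
        @Override
        void setConf(Conf conf) {
            super.setConf(conf);
            if (conf != null) {                   // null check forced by the no-arg constructor
                cached = conf.get("my.key");
            }
        }
        String cached() { return cached; }
    }

    public static void main(String[] args) {
        MyTool tool = new MyTool();               // constructor triggers setConf(null)
        System.out.println(tool.cached());        // null: nothing cached yet
        tool.setConf(new Conf());
        System.out.println(tool.cached());        // value-of-my.key
    }
}
```

Without the null guard, `new MyTool()` would throw a NullPointerException from inside the superclass constructor, which is why dropping the setConf(null) call simplifies every override.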