[jira] [Reopened] (HADOOP-9241) DU refresh interval is not configurable

2013-01-29 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J reopened HADOOP-9241:
-


Thanks Nicholas; I have reverted HADOOP-9241 from trunk and branch-2. I will 
attach a proper patch now.

 DU refresh interval is not configurable
 ---

 Key: HADOOP-9241
 URL: https://issues.apache.org/jira/browse/HADOOP-9241
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.2-alpha
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-9241.patch


 While the {{DF}} class's refresh interval is configurable, the {{DU}} class's 
 is not. We should make both configurable.
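
 As an editorial illustration, here is a minimal Java sketch of what a 
 configurable DU interval could look like, mirroring how {{DF}} already reads 
 {{fs.df.interval}} from the configuration. The {{fs.du.interval}} key name and 
 the 10-minute default below are assumptions for illustration only, not the 
 contents of the attached patch:

     import java.io.File;
     import java.io.IOException;
     import org.apache.hadoop.conf.Configuration;
     import org.apache.hadoop.fs.DU;

     public class DuIntervalSketch {
       public static void main(String[] args) throws IOException {
         Configuration conf = new Configuration();
         // "fs.du.interval" is an assumed key name; DF reads "fs.df.interval"
         // in the same way. 600000 ms (10 minutes) is an assumed default.
         long intervalMs = conf.getLong("fs.du.interval", 600000L);
         DU du = new DU(new File(args.length > 0 ? args[0] : "/tmp"), intervalMs);
         du.start();                     // starts the background 'du' refresh thread
         System.out.println("used bytes: " + du.getUsed());
         du.shutdown();
       }
     }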

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Build failed in Jenkins: Hadoop-Common-0.23-Build #509

2013-01-29 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Common-0.23-Build/509/changes

Changes:

[kihwal] merge -r 1439652:1439653 Merging YARN-133 to branch-0.23

[tgraves] HADOOP-9255. relnotes.py missing last jira (tgraves)

[suresh] HADOOP-9247. Merge r1438698 from trunk

[tgraves] Fix HDFS change log from left over merge entries

--
[...truncated 10345 lines...]
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.745 sec
Running org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract
Tests run: 29, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.811 sec
Running org.apache.hadoop.fs.s3.TestINode
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.05 sec
Running org.apache.hadoop.fs.s3.TestS3Credentials
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.133 sec
Running org.apache.hadoop.fs.s3.TestS3FileSystem
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.158 sec
Running org.apache.hadoop.fs.TestDU
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.166 sec
Running org.apache.hadoop.record.TestBuffer
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.052 sec
Running org.apache.hadoop.record.TestRecordVersioning
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.123 sec
Running org.apache.hadoop.record.TestRecordIO
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.146 sec
Running org.apache.hadoop.metrics2.source.TestJvmMetrics
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.362 sec
Running org.apache.hadoop.metrics2.util.TestSampleStat
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.07 sec
Running org.apache.hadoop.metrics2.util.TestMetricsCache
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.985 sec
Running org.apache.hadoop.metrics2.lib.TestInterns
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.212 sec
Running org.apache.hadoop.metrics2.lib.TestMetricsAnnotations
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.431 sec
Running org.apache.hadoop.metrics2.lib.TestMutableMetrics
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.386 sec
Running org.apache.hadoop.metrics2.lib.TestUniqNames
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.082 sec
Running org.apache.hadoop.metrics2.lib.TestMetricsRegistry
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.359 sec
Running org.apache.hadoop.metrics2.impl.TestMetricsCollectorImpl
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.222 sec
Running org.apache.hadoop.metrics2.impl.TestGangliaMetrics
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.41 sec
Running org.apache.hadoop.metrics2.impl.TestSinkQueue
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.468 sec
Running org.apache.hadoop.metrics2.impl.TestMetricsVisitor
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.369 sec
Running org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.581 sec
Running org.apache.hadoop.metrics2.impl.TestMetricsConfig
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.262 sec
Running org.apache.hadoop.metrics2.filter.TestPatternFilter
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.227 sec
Running org.apache.hadoop.io.TestWritableName
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.14 sec
Running org.apache.hadoop.io.TestBloomMapFile
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.824 sec
Running org.apache.hadoop.io.TestEnumSetWritable
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.421 sec
Running org.apache.hadoop.io.TestSequenceFileSerialization
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.555 sec
Running org.apache.hadoop.io.TestSequenceFileSync
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.775 sec
Running org.apache.hadoop.io.TestBooleanWritable
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.079 sec
Running org.apache.hadoop.io.TestText
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.823 sec
Running org.apache.hadoop.io.TestMapWritable
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.164 sec
Running org.apache.hadoop.io.compress.TestCodecFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.338 sec
Running org.apache.hadoop.io.compress.TestBlockDecompressorStream
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.08 sec
Running org.apache.hadoop.io.compress.TestCodec
Tests run: 21, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 58.004 sec
Running org.apache.hadoop.io.TestObjectWritableProtos
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.337 sec
Running org.apache.hadoop.io.TestDataByteBuffers
Tests run: 5, Failures: 0, 

Build failed in Jenkins: Hadoop-Common-trunk #668

2013-01-29 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Common-trunk/668/changes

Changes:

[harsh] Revert HADOOP-9241 properly this time. Left the core-default.xml in 
previous commit.

[harsh] Reverting HADOOP-9241. To be fixed and reviewed.

[sseth] MAPREDUCE-4838. Add additional fields like Locality, Avataar to the 
JobHistory logs. Contributed by Zhijie Shen

[kihwal] YARN-133. Update web services docs for RM clusterMetrics. Contributed 
by Ravi Prakash.

[jlowe] HADOOP-9246. Execution phase for hadoop-maven-plugin should be 
process-resources. Contributed by Karthik Kambatla and Chris Nauroth

[sseth] MAPREDUCE-4803. Remove duplicate copy of TestIndexCache. Contributed by 
Mariappan Asokan

[tgraves] HADOOP-9255. relnotes.py missing last jira (tgraves)

[tucu] Reverting MAPREDUCE-2264

[suresh] HDFS-. Add space between total transaction time and number of 
transactions in FSEditLog#printStatistics. Contributed by Stephen Chu.

[suresh] Move HADOOP-9247 to release 0.23.7 section in CHANGES.txt

--
[...truncated 32299 lines...]
 [exec] 
 [exec] unpack-plugin:
 [exec] 
 [exec] install-plugin:
 [exec] 
 [exec] configure-plugin:
 [exec] 
 [exec] configure-output-plugin:
 [exec] Mounting output plugin: org.apache.forrest.plugin.output.pdf
 [exec] Processing 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/hadoop-common/target/docs-src/build/tmp/output.xmap
 to 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/hadoop-common/target/docs-src/build/tmp/output.xmap.new
 [exec] Loading stylesheet 
/home/jenkins/tools/forrest/latest/main/var/pluginMountSnippet.xsl
 [exec] Moving 1 file to 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/hadoop-common/target/docs-src/build/tmp
 [exec] 
 [exec] configure-plugin-locationmap:
 [exec] Mounting plugin locationmap for org.apache.forrest.plugin.output.pdf
 [exec] Processing 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/hadoop-common/target/docs-src/build/tmp/locationmap.xml
 to 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/hadoop-common/target/docs-src/build/tmp/locationmap.xml.new
 [exec] Loading stylesheet 
/home/jenkins/tools/forrest/latest/main/var/pluginLmMountSnippet.xsl
 [exec] Moving 1 file to 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/hadoop-common/target/docs-src/build/tmp
 [exec] 
 [exec] init:
 [exec] 
 [exec] -prepare-classpath:
 [exec] 
 [exec] check-contentdir:
 [exec] 
 [exec] examine-proj:
 [exec] 
 [exec] validation-props:
 [exec] Using these catalog descriptors: 
/home/jenkins/tools/forrest/latest/main/webapp/resources/schema/catalog.xcat:/home/jenkins/tools/forrest/latest/build/plugins/catalog.xcat:https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/hadoop-common/target/docs-src/src/documentation/resources/schema/catalog.xcat
 [exec] 
 [exec] validate-xdocs:
 [exec] 7 file(s) have been successfully validated.
 [exec] ...validated xdocs
 [exec] 
 [exec] validate-skinconf:
 [exec] 1 file(s) have been successfully validated.
 [exec] ...validated skinconf
 [exec] 
 [exec] validate-sitemap:
 [exec] 
 [exec] validate-skins-stylesheets:
 [exec] 
 [exec] validate-skins:
 [exec] 
 [exec] validate-skinchoice:
 [exec] Warning: 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/hadoop-common/target/docs-src/build/webapp/resources
 not found.
 [exec] ...validated existence of skin 'pelt'
 [exec] 
 [exec] validate-stylesheets:
 [exec] 
 [exec] validate:
 [exec] 
 [exec] site:
 [exec] 
 [exec] Copying the various non-generated resources to site.
 [exec] Warnings will be issued if the optional project resources are not 
found.
 [exec] This is often the case, because they are optional and so may not be 
available.
 [exec] Copying project resources and images to site ...
 [exec] Copied 1 empty directory to 1 empty directory under 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/hadoop-common/target/docs-src/build/site
 [exec] Copying main skin images to site ...
 [exec] Created dir: 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/hadoop-common/target/docs-src/build/site/skin/images
 [exec] Copying 20 files to 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/hadoop-common/target/docs-src/build/site/skin/images
 [exec] Copying 14 files to 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/hadoop-common/target/docs-src/build/site/skin/images
 [exec] Warning: 

[jira] [Resolved] (HADOOP-9256) A number of Yarn and Mapreduce tests fail due to not substituted values in *-version-info.properties

2013-01-29 Thread Ivan A. Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan A. Veselovsky resolved HADOOP-9256.


Resolution: Duplicate

Duplicate of YARN-361.

 A number of Yarn and Mapreduce tests fail due to not substituted values in 
 *-version-info.properties
 

 Key: HADOOP-9256
 URL: https://issues.apache.org/jira/browse/HADOOP-9256
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ivan A. Veselovsky

 The newly added VersionInfoMojo plugin should calculate properties (such as 
 the build time, SCM branch, etc.), after which the resource plugin should 
 substitute them into the following files, which are read later at test 
 run-time: 
 ./hadoop-common-project/hadoop-common/target/classes/common-version-info.properties
 ./hadoop-common-project/hadoop-common/src/main/resources/common-version-info.properties
 ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/classes/yarn-version-info.properties
 ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-version-info.properties
 For some reason this substitution does not happen.
 As a result, a number of tests fail permanently because they verify the 
 corresponding property files for correctness:
 org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testHS
 org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testHSSlash
 org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testHSDefault
 org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testHSXML
 org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testInfo
 org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testInfoSlash
 org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testInfoDefault
 org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testInfoXML
 org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNode
 org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNodeSlash
 org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNodeDefault
 org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNodeInfo
 org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNodeInfoSlash
 org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNodeInfoDefault
 org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testSingleNodesXML
 org.apache.hadoop.yarn.server.resourcemanager.security.TestApplicationTokens.testTokenExpiry
 org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testInfoXML
 org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testCluster
 org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testClusterSlash
 org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testClusterDefault
 org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testInfo
 org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testInfoSlash
 org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testInfoDefault
 Some of these failures can be observed in Apache builds, e.g.: 
 https://builds.apache.org/view/Hadoop/job/PreCommit-YARN-Build/370/testReport/
 As far as I can see, the substitution does not happen because the 
 corresponding properties are set by the VersionInfoMojo plugin *after* the 
 resource plugin task has executed.
 Workaround: manually edit the files 
 ./hadoop-common-project/hadoop-common/src/main/resources/common-version-info.properties
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-version-info.properties
 and set reasonable literal strings (anything without ${}) as the values.
 After that the tests pass.
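
 For illustration only, a minimal JUnit-style sketch of the kind of check that 
 trips over the unfiltered files. The real failures come from the web-services 
 tests listed above comparing their output against these values, but the root 
 symptom is the same: the properties still contain literal ${...} placeholders. 
 The test class below is hypothetical; only VersionInfo and JUnit are real APIs:

     import static org.junit.Assert.assertFalse;
     import org.apache.hadoop.util.VersionInfo;
     import org.junit.Test;

     public class VersionInfoSubstitutionSketch {
       // Hypothetical check: if the resource plugin never filtered
       // common-version-info.properties, these strings still contain "${".
       @Test
       public void versionPropertiesWereFiltered() {
         assertFalse("version was not substituted",
             VersionInfo.getVersion().contains("${"));
         assertFalse("build time was not substituted",
             VersionInfo.getDate().contains("${"));
       }
     }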

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9101) make s3n NativeFileSystemStore interface public instead of package-private

2013-01-29 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-9101.


Resolution: Won't Fix

Won't fix - there's enough of a difference between Swift and S3 that I don't 
see this working.

 make s3n NativeFileSystemStore interface public instead of package-private
 --

 Key: HADOOP-9101
 URL: https://issues.apache.org/jira/browse/HADOOP-9101
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Affects Versions: 3.0.0
Reporter: Steve Loughran
Priority: Trivial
   Original Estimate: 0.25h
  Remaining Estimate: 0.25h

 It would be easier to implement new blockstore filesystems if the 
 {{NativeFileSystemStore}} interface and its dependent classes in the 
 {{org.apache.hadoop.fs.s3native}} package were public; currently you need to 
 put new implementations into the s3 directory.
 They could be made public with the appropriate scope attribute. Internal?
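
 As a sketch of the "appropriate scope attribute" mentioned above, Hadoop's 
 audience/stability annotations could be applied to the interface. Whether the 
 right audience is Public, LimitedPrivate, or something else is exactly the 
 open question here, so the choice below is illustrative only:

     import org.apache.hadoop.classification.InterfaceAudience;
     import org.apache.hadoop.classification.InterfaceStability;

     // Illustrative only: one way to open up the store interface so new
     // blockstore filesystems can implement it outside the s3native package.
     @InterfaceAudience.Public
     @InterfaceStability.Evolving
     public interface NativeFileSystemStore {
       // the existing s3native methods would remain unchanged (omitted here)
     }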

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: [VOTE] Hadoop 1.1.2-rc4 release candidate vote

2013-01-29 Thread Chris Nauroth
Hello Matt,

Would it be better to wait for the fix for blocker HDFS-4423 (Checkpoint
exception causes fatal damage to fsimage) to be committed? I have uploaded a
patch, and I expect to receive a code review in the next day or two.

Thank you,
--Chris


On Mon, Jan 28, 2013 at 3:32 PM, Matt Foley mfo...@hortonworks.com wrote:

 A new build of Hadoop-1.1.2 is available at
 http://people.apache.org/~mattf/hadoop-1.1.2-rc4/
 or in SVN at
 http://svn.apache.org/viewvc/hadoop/common/tags/release-1.1.2-rc4/
 or in the Maven repo.

 This candidate for a stabilization release of the Hadoop-1.1 branch has 23
 patches and several cleanups compared to the Hadoop-1.1.1 release.  Release
 notes are available at
 http://people.apache.org/~mattf/hadoop-1.1.2-rc4/releasenotes.html

 Please vote for this as the next release of Hadoop-1.  Voting will close
 next Monday, 4 Feb, at 3:30pm PST.

 Thanks,
 --Matt



Re: [VOTE] Hadoop 1.1.2-rc4 release candidate vote

2013-01-29 Thread Matt Foley
Hi Chris,
Okay, please get it in as soon as possible, and I'll respin the build.

Suresh, can you code review?

Thanks,
--Matt


On Tue, Jan 29, 2013 at 11:11 AM, Chris Nauroth cnaur...@hortonworks.com wrote:

 Hello Matt,

 Would it be better to wait for the fix for blocker HDFS-4423 (Checkpoint
 exception causes fatal damage to fsimage) to be committed? I have uploaded a
 patch, and I expect to receive a code review in the next day or two.

 Thank you,
 --Chris


 On Mon, Jan 28, 2013 at 3:32 PM, Matt Foley mfo...@hortonworks.com
 wrote:

  A new build of Hadoop-1.1.2 is available at
  http://people.apache.org/~mattf/hadoop-1.1.2-rc4/
  or in SVN at
  http://svn.apache.org/viewvc/hadoop/common/tags/release-1.1.2-rc4/
  or in the Maven repo.
 
  This candidate for a stabilization release of the Hadoop-1.1 branch has
 23
  patches and several cleanups compared to the Hadoop-1.1.1 release.
  Release
  notes are available at
  http://people.apache.org/~mattf/hadoop-1.1.2-rc4/releasenotes.html
 
  Please vote for this as the next release of Hadoop-1.  Voting will close
  next Monday, 4 Feb, at 3:30pm PST.
 
  Thanks,
  --Matt