[jira] [Reopened] (HADOOP-9241) DU refresh interval is not configurable

2013-01-29 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J reopened HADOOP-9241:
-


Thanks Nicholas; I have reverted HADOOP-9241 from trunk and branch-2. I will 
attach a proper patch now.

> DU refresh interval is not configurable
> ---
>
> Key: HADOOP-9241
> URL: https://issues.apache.org/jira/browse/HADOOP-9241
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.0.2-alpha
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Trivial
> Fix For: 2.0.3-alpha
>
> Attachments: HADOOP-9241.patch
>
>
> While the {{DF}} class's refresh interval is configurable, the {{DU}}'s 
> isn't. We should ensure both are configurable.
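
A minimal sketch, in Java, of how a configurable DU refresh interval might be 
read, mirroring the existing DF refresh-interval key (fs.df.interval). The 
fs.du.interval key name and 10-minute default shown here are assumptions for 
illustration, not necessarily what the final patch uses:

    import org.apache.hadoop.conf.Configuration;

    public class DuIntervalSketch {
      // Hypothetical key, modeled on the existing fs.df.interval.
      public static final String FS_DU_INTERVAL_KEY = "fs.du.interval";
      public static final long FS_DU_INTERVAL_DEFAULT = 600000L; // 10 min, assumed

      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // A DU-style shell command would re-run `du -sk <path>` no more
        // often than this interval, caching the result in between.
        long interval = conf.getLong(FS_DU_INTERVAL_KEY, FS_DU_INTERVAL_DEFAULT);
        System.out.println("DU refresh interval (ms): " + interval);
      }
    }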

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Build failed in Jenkins: Hadoop-Common-0.23-Build #509

2013-01-29 Thread Apache Jenkins Server
See 

Changes:

[kihwal] merge -r 1439652:1439653 Merging YARN-133 to branch-0.23

[tgraves] HADOOP-9255. relnotes.py missing last jira (tgraves)

[suresh] HADOOP-9247. Merge r1438698 from trunk

[tgraves] Fix HDFS change log from left over merge entries

--
[...truncated 10345 lines...]
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.745 sec
Running org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract
Tests run: 29, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.811 sec
Running org.apache.hadoop.fs.s3.TestINode
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.05 sec
Running org.apache.hadoop.fs.s3.TestS3Credentials
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.133 sec
Running org.apache.hadoop.fs.s3.TestS3FileSystem
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.158 sec
Running org.apache.hadoop.fs.TestDU
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.166 sec
Running org.apache.hadoop.record.TestBuffer
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.052 sec
Running org.apache.hadoop.record.TestRecordVersioning
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.123 sec
Running org.apache.hadoop.record.TestRecordIO
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.146 sec
Running org.apache.hadoop.metrics2.source.TestJvmMetrics
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.362 sec
Running org.apache.hadoop.metrics2.util.TestSampleStat
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.07 sec
Running org.apache.hadoop.metrics2.util.TestMetricsCache
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.985 sec
Running org.apache.hadoop.metrics2.lib.TestInterns
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.212 sec
Running org.apache.hadoop.metrics2.lib.TestMetricsAnnotations
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.431 sec
Running org.apache.hadoop.metrics2.lib.TestMutableMetrics
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.386 sec
Running org.apache.hadoop.metrics2.lib.TestUniqNames
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.082 sec
Running org.apache.hadoop.metrics2.lib.TestMetricsRegistry
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.359 sec
Running org.apache.hadoop.metrics2.impl.TestMetricsCollectorImpl
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.222 sec
Running org.apache.hadoop.metrics2.impl.TestGangliaMetrics
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.41 sec
Running org.apache.hadoop.metrics2.impl.TestSinkQueue
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.468 sec
Running org.apache.hadoop.metrics2.impl.TestMetricsVisitor
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.369 sec
Running org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.581 sec
Running org.apache.hadoop.metrics2.impl.TestMetricsConfig
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.262 sec
Running org.apache.hadoop.metrics2.filter.TestPatternFilter
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.227 sec
Running org.apache.hadoop.io.TestWritableName
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.14 sec
Running org.apache.hadoop.io.TestBloomMapFile
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.824 sec
Running org.apache.hadoop.io.TestEnumSetWritable
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.421 sec
Running org.apache.hadoop.io.TestSequenceFileSerialization
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.555 sec
Running org.apache.hadoop.io.TestSequenceFileSync
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.775 sec
Running org.apache.hadoop.io.TestBooleanWritable
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.079 sec
Running org.apache.hadoop.io.TestText
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.823 sec
Running org.apache.hadoop.io.TestMapWritable
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.164 sec
Running org.apache.hadoop.io.compress.TestCodecFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.338 sec
Running org.apache.hadoop.io.compress.TestBlockDecompressorStream
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.08 sec
Running org.apache.hadoop.io.compress.TestCodec
Tests run: 21, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 58.004 sec
Running org.apache.hadoop.io.TestObjectWritableProtos
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.337 sec
Running org.apache.hadoop.io.TestDataByteBuffers
Tests run: 5, Failures: 0, Err

Build failed in Jenkins: Hadoop-Common-trunk #668

2013-01-29 Thread Apache Jenkins Server
See 

Changes:

[harsh] Revert HADOOP-9241 properly this time. Left the core-default.xml in 
previous commit.

[harsh] Reverting HADOOP-9241. To be fixed and reviewed.

[sseth] MAPREDUCE-4838. Add additional fields like Locality, Avataar to the 
JobHistory logs. Contributed by Zhijie Shen

[kihwal] YARN-133. Update web services docs for RM clusterMetrics. Contributed 
by Ravi Prakash.

[jlowe] HADOOP-9246. Execution phase for hadoop-maven-plugin should be 
process-resources. Contributed by Karthik Kambatla and Chris Nauroth

[sseth] MAPREDUCE-4803. Remove duplicate copy of TestIndexCache. Contributed by 
Mariappan Asokan

[tgraves] HADOOP-9255. relnotes.py missing last jira (tgraves)

[tucu] Reverting MAPREDUCE-2264

[suresh] HDFS-. Add space between total transaction time and number of 
transactions in FSEditLog#printStatistics. Contributed by Stephen Chu.

[suresh] Move HADOOP-9247 to release 0.23.7 section in CHANGES.txt

--
[...truncated 32299 lines...]
 [exec] 
 [exec] unpack-plugin:
 [exec] 
 [exec] install-plugin:
 [exec] 
 [exec] configure-plugin:
 [exec] 
 [exec] configure-output-plugin:
 [exec] Mounting output plugin: org.apache.forrest.plugin.output.pdf
 [exec] Processing 

 to 

 [exec] Loading stylesheet 
/home/jenkins/tools/forrest/latest/main/var/pluginMountSnippet.xsl
 [exec] Moving 1 file to 

 [exec] 
 [exec] configure-plugin-locationmap:
 [exec] Mounting plugin locationmap for org.apache.forrest.plugin.output.pdf
 [exec] Processing 

 to 

 [exec] Loading stylesheet 
/home/jenkins/tools/forrest/latest/main/var/pluginLmMountSnippet.xsl
 [exec] Moving 1 file to 

 [exec] 
 [exec] init:
 [exec] 
 [exec] -prepare-classpath:
 [exec] 
 [exec] check-contentdir:
 [exec] 
 [exec] examine-proj:
 [exec] 
 [exec] validation-props:
 [exec] Using these catalog descriptors: 
/home/jenkins/tools/forrest/latest/main/webapp/resources/schema/catalog.xcat:/home/jenkins/tools/forrest/latest/build/plugins/catalog.xcat:
 [exec] 
 [exec] validate-xdocs:
 [exec] 7 file(s) have been successfully validated.
 [exec] ...validated xdocs
 [exec] 
 [exec] validate-skinconf:
 [exec] 1 file(s) have been successfully validated.
 [exec] ...validated skinconf
 [exec] 
 [exec] validate-sitemap:
 [exec] 
 [exec] validate-skins-stylesheets:
 [exec] 
 [exec] validate-skins:
 [exec] 
 [exec] validate-skinchoice:
 [exec] Warning: 

 not found.
 [exec] ...validated existence of skin 'pelt'
 [exec] 
 [exec] validate-stylesheets:
 [exec] 
 [exec] validate:
 [exec] 
 [exec] site:
 [exec] 
 [exec] Copying the various non-generated resources to site.
 [exec] Warnings will be issued if the optional project resources are not 
found.
 [exec] This is often the case, because they are optional and so may not be 
available.
 [exec] Copying project resources and images to site ...
 [exec] Copied 1 empty directory to 1 empty directory under 

 [exec] Copying main skin images to site ...
 [exec] Created dir: 

 [exec] Copying 20 files to 

 [exec] Copying 14 files to 

 [exec] Warning: 


[jira] [Resolved] (HADOOP-9256) A number of Yarn and Mapreduce tests fail due to not substituted values in *-version-info.properties

2013-01-29 Thread Ivan A. Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan A. Veselovsky resolved HADOOP-9256.


Resolution: Duplicate

Duplicate of YARN-361.

> A number of Yarn and Mapreduce tests fail due to not substituted values in 
> *-version-info.properties
> 
>
> Key: HADOOP-9256
> URL: https://issues.apache.org/jira/browse/HADOOP-9256
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ivan A. Veselovsky
>
> The newly added VersionInfoMojo plugin should calculate properties (like 
> time, scm branch, etc.), and after that the resource plugin should make 
> replacements in the following files: 
> ./hadoop-common-project/hadoop-common/target/classes/common-version-info.properties
> ./hadoop-common-project/hadoop-common/src/main/resources/common-version-info.properties
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/classes/yarn-version-info.properties
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-version-info.properties
> These files are read later at test run-time, but for some reason the 
> substitution does not happen.
> As a result, a bunch of tests are permanently failing because the code of 
> these tests is verifying the corresponding property files for correctness:
> org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testHS
> org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testHSSlash
> org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testHSDefault
> org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testHSXML
> org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testInfo
> org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testInfoSlash
> org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testInfoDefault
> org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testInfoXML
> org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNode
> org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNodeSlash
> org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNodeDefault
> org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNodeInfo
> org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNodeInfoSlash
> org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNodeInfoDefault
> org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testSingleNodesXML
> org.apache.hadoop.yarn.server.resourcemanager.security.TestApplicationTokens.testTokenExpiry
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testInfoXML
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testCluster
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testClusterSlash
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testClusterDefault
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testInfo
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testInfoSlash
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testInfoDefault
> Some of these failures can be observed in Apache builds, e.g.: 
> https://builds.apache.org/view/Hadoop/job/PreCommit-YARN-Build/370/testReport/
> As far as I can see, the substitution does not happen because the 
> corresponding properties are set by the VersionInfoMojo plugin *after* the 
> resource plugin task is executed.
> Workaround: manually change files 
> ./hadoop-common-project/hadoop-common/src/main/resources/common-version-info.properties
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-version-info.properties
> and set arbitrary reasonable non-${} string parameters as the values.
> After that the tests pass.
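
A minimal sketch, in Java, of the kind of check those tests effectively 
perform: failing whenever a property value still contains an unsubstituted 
${...} placeholder. The resource name and assertion style here are 
illustrative, not copied from the actual tests:

    import java.io.InputStream;
    import java.util.Properties;

    public class VersionInfoCheck {
      public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Illustrative resource; the real tests read the
        // *-version-info.properties files listed above from the classpath.
        try (InputStream in = VersionInfoCheck.class
            .getResourceAsStream("/common-version-info.properties")) {
          if (in == null) {
            throw new AssertionError("properties file not found on classpath");
          }
          props.load(in);
        }
        for (String name : props.stringPropertyNames()) {
          String value = props.getProperty(name);
          // If Maven resource filtering never ran, values remain raw
          // placeholders such as ${version} or ${version-info.scm.branch}.
          if (value.contains("${")) {
            throw new AssertionError(name + " was not substituted: " + value);
          }
        }
        System.out.println("All version-info properties were substituted.");
      }
    }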



Jenkins build is back to normal : Hadoop-Common-0.23-Build #510

2013-01-29 Thread Apache Jenkins Server
See 



[jira] [Resolved] (HADOOP-9101) make s3n NativeFileSystemStore interface public instead of package-private

2013-01-29 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-9101.


Resolution: Won't Fix

wontfix - there's enough of a difference between Swift and S3 that I don't see 
that this would work

> make s3n NativeFileSystemStore interface public instead of package-private
> --
>
> Key: HADOOP-9101
> URL: https://issues.apache.org/jira/browse/HADOOP-9101
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Priority: Trivial
>   Original Estimate: 0.25h
>  Remaining Estimate: 0.25h
>
> It would be easier to implement new blockstore filesystems if the 
> {{NativeFileSystemStore}} and dependent classes in the 
> {{org.apache.hadoop.fs.s3native}} package were public - currently you need 
> to put them into the s3 directory.
> They could be made public with the appropriate scope attribute. Internal?
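
A minimal sketch of what that scope attribute might look like, using Hadoop's 
InterfaceAudience/InterfaceStability annotations. The LimitedPrivate audience 
chosen here is an assumption for illustration, not a decision from the issue:

    import org.apache.hadoop.classification.InterfaceAudience;
    import org.apache.hadoop.classification.InterfaceStability;

    // Hypothetical: expose the store interface with an explicit audience
    // scope instead of leaving it package-private.
    @InterfaceAudience.LimitedPrivate({"Third-party blockstore filesystems"})
    @InterfaceStability.Evolving
    public interface NativeFileSystemStore {
      // ... existing methods unchanged ...
    }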



Re: [VOTE] Hadoop 1.1.2-rc4 release candidate vote

2013-01-29 Thread Chris Nauroth
Hello Matt,

Would it be better to wait for committing the fix of blocker HDFS-4423:
Checkpoint exception causes fatal damage to fsimage?  I have uploaded a
patch, and I expect to receive a code review in the next day or two.

Thank you,
--Chris


On Mon, Jan 28, 2013 at 3:32 PM, Matt Foley  wrote:

> A new build of Hadoop-1.1.2 is available at
> http://people.apache.org/~mattf/hadoop-1.1.2-rc4/
> or in SVN at
> http://svn.apache.org/viewvc/hadoop/common/tags/release-1.1.2-rc4/
> or in the Maven repo.
>
> This candidate for a stabilization release of the Hadoop-1.1 branch has 23
> patches and several cleanups compared to the Hadoop-1.1.1 release.  Release
> notes are available at
> http://people.apache.org/~mattf/hadoop-1.1.2-rc4/releasenotes.html
>
> Please vote for this as the next release of Hadoop-1.  Voting will close
> next Monday, 4 Feb, at 3:30pm PST.
>
> Thanks,
> --Matt
>


Re: [VOTE] Hadoop 1.1.2-rc4 release candidate vote

2013-01-29 Thread Matt Foley
Hi Chris,
Okay, please get it in as soon as possible, and I'll respin the build.

Suresh, can you code review?

Thanks,
--Matt


On Tue, Jan 29, 2013 at 11:11 AM, Chris Nauroth wrote:

> Hello Matt,
>
> Would it be better to wait for committing the fix of blocker HDFS-4423:
> Checkpoint exception causes fatal damage to fsimage?  I have uploaded a
> patch, and I expect to receive a code review in the next day or two.
>
> Thank you,
> --Chris
>
>
> On Mon, Jan 28, 2013 at 3:32 PM, Matt Foley 
> wrote:
>
> > A new build of Hadoop-1.1.2 is available at
> > http://people.apache.org/~mattf/hadoop-1.1.2-rc4/
> > or in SVN at
> > http://svn.apache.org/viewvc/hadoop/common/tags/release-1.1.2-rc4/
> > or in the Maven repo.
> >
> > This candidate for a stabilization release of the Hadoop-1.1 branch has
> 23
> > patches and several cleanups compared to the Hadoop-1.1.1 release.
>  Release
> > notes are available at
> > http://people.apache.org/~mattf/hadoop-1.1.2-rc4/releasenotes.html
> >
> > Please vote for this as the next release of Hadoop-1.  Voting will close
> > next Monday, 4 Feb, at 3:30pm PST.
> >
> > Thanks,
> > --Matt
> >
>


Release numbering for branch-2 releases

2013-01-29 Thread Arun C Murthy
Folks,

 There has been some discussion about incompatible changes in the 
hadoop-2.x.x-alpha releases on HADOOP-9070, HADOOP-9151, HADOOP-9192 and a few 
other jiras. Frankly, I'm surprised about some of them, since the 'alpha' 
moniker was there precisely so we could harden apis by changing them if 
necessary - borne out by the fact that every single release in the hadoop-2 
chain has had incompatible changes. This happened because we were releasing 
early, moving fast and breaking things. Furthermore, we'll have more in future 
as we move towards stability of hadoop-2, similar to HDFS-4362 and HDFS-4364 
in HDFS and YARN-142 (api changes) for YARN.

 So, rather than debate more, I had a brief chat with Suresh and Todd. Todd 
suggested calling the next release as hadoop-2.1.0-alpha to indicate the 
incompatibility a little better. This makes sense to me, as long as we are 
clear that we won't make any further *feature* releases in hadoop-2.0.x series 
(obviously we might be forced to do security/bug-fix release).

 Going forward, I'd like to start locking down apis/protocols for a 'beta' 
release. This way we'll have one *final* opportunity post hadoop-2.1.0-alpha to 
make incompatible changes if necessary and we can call it hadoop-2.2.0-beta. 

 Post hadoop-2.2.0-beta we *should* lock down and not allow incompatible 
changes. This will allow us to go on to hadoop-2.3.0 as a GA release, and it 
forces us to make a real effort to ensure we lock down for hadoop-2.2.0-beta.

 In summary:
 # I plan to now release hadoop-2.1.0-alpha (this week).
 # We make a real effort to lock down apis/protocols and release 
hadoop-2.2.0-beta, say in March.
 # Post 'beta' release hadoop-2.3.0 as 'stable' sometime in May.

 I'll start a separate thread on 'locking protocols' w.r.t. client protocols 
v/s internal protocols (to facilitate rolling upgrades etc.); let's discuss 
that one separately.

 Makes sense? Thoughts?

thanks,
Arun
 
PS:  Between hadoop-2.2.0-beta and hadoop-2.3.0 we *might* be forced to make 
some incompatible changes due to *unforeseen circumstances*, but no more 
gratuitous changes are allowed.



Re: Release numbering for branch-2 releases

2013-01-29 Thread Arun C Murthy
Thanks Suresh. Adding back other *-dev lists.

On Jan 29, 2013, at 1:58 PM, Suresh Srinivas wrote:

> +1 for a release with all the changes that are committed. That way it
> carries all the important bug fixes.
> 
> 
> So, rather than debate more, I had a brief chat with Suresh and Todd. Todd
>> suggested calling the next release as hadoop-2.1.0-alpha to indicate the
>> incompatibility a little better. This makes sense to me, as long as we are
>> clear that we won't make any further *feature* releases in hadoop-2.0.x
>> series (obviously we might be forced to do security/bug-fix release).
>> 
> 
> 
> We have been incorrectly using point releases to introduce features. Given
> there are many features in this release, calling it 2.1.0 instead of 2.0.3
> makes sense. As you said, I am okay with the proposed plan as long as we do
> not lapse back to introducing new features in point releases meant for
> critical bugs.




[jira] [Created] (HADOOP-9261) S3 and S3N filesystems can move a directory under itself -and so lose data

2013-01-29 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-9261:
--

 Summary: S3 and S3N filesystems can move a directory under itself 
-and so lose data
 Key: HADOOP-9261
 URL: https://issues.apache.org/jira/browse/HADOOP-9261
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.0.2-alpha, 1.1.1
 Environment: Testing against S3 bucket stored on US West (Read after 
Write consistency; eventual for read-after-delete or write-after-write)
Reporter: Steve Loughran


In the S3 filesystem clients, {{rename()}} doesn't make sure that the 
destination directory is not a child or other descendant of the source 
directory. The files are copied to the new destination, then the source 
directory is recursively deleted, losing data.
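
A minimal sketch, in Java, of the kind of guard rename() is missing: rejecting 
a destination that lies under the source. It assumes Hadoop's Path API; the 
method and variable names are illustrative:

    import org.apache.hadoop.fs.Path;

    public class RenameGuard {
      /**
       * Returns true when dst equals src or is a descendant of src, in which
       * case a copy-then-delete rename would destroy the source data.
       */
      static boolean isUnderSelf(Path src, Path dst) {
        // Walk up from dst to the root, looking for src along the way.
        for (Path p = dst; p != null; p = p.getParent()) {
          if (p.equals(src)) {
            return true;
          }
        }
        return false;
      }

      public static void main(String[] args) {
        Path src = new Path("/work/dir");
        Path dst = new Path("/work/dir/sub/dest");
        // A safe rename() would fail fast here instead of copying the tree
        // into itself and then recursively deleting the source.
        System.out.println("rename allowed? " + !isUnderSelf(src, dst));
      }
    }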



[jira] [Created] (HADOOP-9262) Allow jobs to override the input Key/Value read from a sequence file's headers

2013-01-29 Thread David Parks (JIRA)
David Parks created HADOOP-9262:
---

 Summary: Allow jobs to override the input Key/Value read from a 
sequence file's headers
 Key: HADOOP-9262
 URL: https://issues.apache.org/jira/browse/HADOOP-9262
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 1.0.3
Reporter: David Parks
Priority: Minor


There's no clean way to upgrade a sequence file when the model objects in an 
existing sequence file change in the development process.

If we could override the Key/Value class types read from the sequence file 
headers, we could write jobs that read in the old version of a model object 
under a different name (MyModel_old, for example), make the necessary updates, 
and write out the new version of the object (MyModel, for example).

The problem we experience now is that we have to hack up the code to match the 
Key/Value class types written to the sequence file, or manually change the 
headers of each sequence file.

Versioning model objects every time they change isn't a good approach to 
development because it introduces the likelihood of less maintained code using 
an incorrect, old version of the model object.
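
A minimal sketch, in Java, of what the requested override might look like when 
reading a sequence file. The seqfile.input.key.class.override property is 
hypothetical - today the key/value classes always come from the file header:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Writable;
    import org.apache.hadoop.util.ReflectionUtils;

    public class OverrideKeyValueRead {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // Hypothetical override property; not an existing Hadoop feature.
        String keyOverride = conf.get("seqfile.input.key.class.override");

        SequenceFile.Reader reader =
            new SequenceFile.Reader(fs, new Path("/data/models.seq"), conf);
        try {
          // With an override, the same bytes could be deserialized into a
          // renamed model class (e.g. MyModel_old) instead of the header's:
          Class<?> keyClass = keyOverride != null
              ? conf.getClassByName(keyOverride) : reader.getKeyClass();
          Writable key = (Writable) ReflectionUtils.newInstance(keyClass, conf);
          Writable value = (Writable) ReflectionUtils.newInstance(
              reader.getValueClass(), conf);
          while (reader.next(key, value)) {
            // ... migrate the record and write it back out elsewhere ...
          }
        } finally {
          reader.close();
        }
      }
    }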



Re: [VOTE] Hadoop 1.1.2-rc4 release candidate vote

2013-01-29 Thread Chris Nauroth
HDFS-4423 has been committed to branch-1.  Thank you, Matt.

--Chris


On Tue, Jan 29, 2013 at 11:57 AM, Matt Foley  wrote:

> Hi Chris,
> Okay, please get it in as soon as possible, and I'll respin the build.
>
> Suresh, can you code review?
>
> Thanks,
> --Matt
>
>
> On Tue, Jan 29, 2013 at 11:11 AM, Chris Nauroth wrote:
>
> > Hello Matt,
> >
> > Would it be better to wait for committing the fix of blocker HDFS-4423:
> > Checkpoint exception causes fatal damage to fsimage?  I have uploaded a
> > patch, and I expect to receive a code review in the next day or two.
> >
> > Thank you,
> > --Chris
> >
> >
> > On Mon, Jan 28, 2013 at 3:32 PM, Matt Foley 
> > wrote:
> >
> > > A new build of Hadoop-1.1.2 is available at
> > > http://people.apache.org/~mattf/hadoop-1.1.2-rc4/
> > > or in SVN at
> > > http://svn.apache.org/viewvc/hadoop/common/tags/release-1.1.2-rc4/
> > > or in the Maven repo.
> > >
> > > This candidate for a stabilization release of the Hadoop-1.1 branch has
> > 23
> > > patches and several cleanups compared to the Hadoop-1.1.1 release.
> >  Release
> > > notes are available at
> > > http://people.apache.org/~mattf/hadoop-1.1.2-rc4/releasenotes.html
> > >
> > > Please vote for this as the next release of Hadoop-1.  Voting will
> close
> > > next Monday, 4 Feb, at 3:30pm PST.
> > >
> > > Thanks,
> > > --Matt
> > >
> >
>