[jira] [Created] (HDFS-2419) hadoop calls cat, tail, get, copyToLocal, on a secure cluster with an webhdfs uri fail with a 401

2011-10-07 Thread Arpit Gupta (Created) (JIRA)
hadoop calls cat, tail, get, copyToLocal, on a secure cluster with an webhdfs 
uri fail with a 401
-

 Key: HDFS-2419
 URL: https://issues.apache.org/jira/browse/HDFS-2419
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.20.205.0
Reporter: Arpit Gupta
Assignee: Jitendra Nath Pandey


A dfs -cat returns the following:

cat: Unauthorized (error code=401)
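On a secure cluster, a 401 from WebHDFS typically means the request carried neither a SPNEGO-negotiated session nor a delegation token. A minimal sketch of how such a request URL is formed, following the public WebHDFS REST convention (op=OPEN on port 50070); the delegation-token handling here is a simplified assumption for illustration, not Hadoop's actual client code:

```python
from urllib.parse import urlencode

def webhdfs_open_url(namenode, path, token=None):
    """Build the REST URL a client would GET for `hadoop fs -cat`.

    On a secure cluster the request must carry either SPNEGO credentials
    or a `delegation` query parameter; omitting both is what produces the
    401 Unauthorized seen above.
    """
    params = {"op": "OPEN"}
    if token is not None:
        params["delegation"] = token  # token string previously obtained from the NN
    return f"http://{namenode}:50070/webhdfs/v1{path}?{urlencode(params)}"
```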

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2416) distcp with a webhdfs uri on a secure cluster fails

2011-10-07 Thread Arpit Gupta (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HDFS-2416:
--

Summary: distcp with a webhdfs uri on a secure cluster fails  (was: hadoop 
calls cat, tail, get, copyToLocal, on a secure cluster with an webhdfs uri fail 
with a 401)

> distcp with a webhdfs uri on a secure cluster fails
> ---
>
> Key: HDFS-2416
> URL: https://issues.apache.org/jira/browse/HDFS-2416
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.20.205.0
>Reporter: Arpit Gupta
>Assignee: Jitendra Nath Pandey
>






[jira] [Updated] (HDFS-2416) hadoop calls cat, tail, get, copyToLocal, on a secure cluster with an webhdfs uri fail with a 401

2011-10-07 Thread Arpit Gupta (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HDFS-2416:
--

Summary: hadoop calls cat, tail, get, copyToLocal, on a secure cluster with 
an webhdfs uri fail with a 401  (was: hadoop calls cat, tail, get, copyToLocal, 
distcp on a secure cluster with an webhdfs uri fail with a 401)

> hadoop calls cat, tail, get, copyToLocal, on a secure cluster with an webhdfs 
> uri fail with a 401
> -
>
> Key: HDFS-2416
> URL: https://issues.apache.org/jira/browse/HDFS-2416
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.20.205.0
>Reporter: Arpit Gupta
>Assignee: Jitendra Nath Pandey
>






[jira] [Commented] (HDFS-2231) Configuration changes for HA namenode

2011-10-07 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13123265#comment-13123265
 ] 

Aaron T. Myers commented on HDFS-2231:
--

That is to say, +1 - the latest patch looks good to me. :)

> Configuration changes for HA namenode
> -
>
> Key: HDFS-2231
> URL: https://issues.apache.org/jira/browse/HDFS-2231
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Fix For: HA branch (HDFS-1623)
>
> Attachments: HDFS-2231.txt, HDFS-2231.txt
>
>
> This jira tracks the changes required for configuring HA setup for namenodes.





[jira] [Updated] (HDFS-2410) Further clean up hard-coded configuration keys

2011-10-07 Thread Aaron T. Myers (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-2410:
-

Summary: Further clean up hard-coded configuration keys  (was: Further 
clanup hardcoded configuration keys)

> Further clean up hard-coded configuration keys
> --
>
> Key: HDFS-2410
> URL: https://issues.apache.org/jira/browse/HDFS-2410
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node, name-node, test
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
>Priority: Minor
> Attachments: HDFS-2410.txt
>
>
> HDFS code is littered with hardcoded config key names. This jira changes to 
> use DFSConfigKeys constants.
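The cleanup described above can be sketched as follows: gather config key names into one constants namespace instead of scattering string literals through the code. The key names mirror real HDFS keys, but the class itself is an illustration in Python, not Hadoop's actual DFSConfigKeys:

```python
class DFSConfigKeys:
    # Centralized key names; a typo here fails visibly and once,
    # rather than silently at each call site.
    DFS_REPLICATION_KEY = "dfs.replication"
    DFS_REPLICATION_DEFAULT = 3

def replication_factor(conf: dict) -> int:
    # Before the cleanup: conf.get("dfs.replication", 3) -- a typo in the
    # string literal silently falls back to the default. After: every
    # caller shares the same checked constant.
    return int(conf.get(DFSConfigKeys.DFS_REPLICATION_KEY,
                        DFSConfigKeys.DFS_REPLICATION_DEFAULT))
```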





[jira] [Commented] (HDFS-2414) TestDFSRollback fails intermittently

2011-10-07 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13123264#comment-13123264
 ] 

Aaron T. Myers commented on HDFS-2414:
--

+1, looks good to me.

> TestDFSRollback fails intermittently
> 
>
> Key: HDFS-2414
> URL: https://issues.apache.org/jira/browse/HDFS-2414
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node, test
>Affects Versions: 0.23.0
>Reporter: Robert Joseph Evans
>Assignee: Todd Lipcon
>Priority: Critical
> Attachments: hdfs-2414.txt, hdfs-2414.txt, hdfs-2414.txt, 
> run-106-failed.tgz, run-158-failed.tgz
>
>
> When running TestDFSRollback repeatedly in a loop I observed a failure rate 
> of about 3%.  Two separate stack traces are in the output and it appears to 
> have something to do with not writing out a complete snapshot of the data for 
> rollback.
> {noformat}
> ---
> Test set: org.apache.hadoop.hdfs.TestDFSRollback
> ---
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 8.514 sec <<< 
> FAILURE!
> testRollback(org.apache.hadoop.hdfs.TestDFSRollback)  Time elapsed: 8.34 sec  
> <<< FAILURE!
> java.lang.AssertionError: File contents differed:
>   
> /home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data2/current/VERSION=5b19197114fad0a254e3f318b7f14aec
>   
> /home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data1/current/VERSION=ea7b000a6a1711169fc7a836b240a991
> at org.junit.Assert.fail(Assert.java:91)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.assertFileContentsSame(FSImageTestUtil.java:250)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.assertParallelFilesAreIdentical(FSImageTestUtil.java:236)
> at 
> org.apache.hadoop.hdfs.TestDFSRollback.checkResult(TestDFSRollback.java:86)
> at 
> org.apache.hadoop.hdfs.TestDFSRollback.testRollback(TestDFSRollback.java:171)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at junit.framework.TestCase.runTest(TestCase.java:168)
> at junit.framework.TestCase.runBare(TestCase.java:134)
> at junit.framework.TestResult$1.protect(TestResult.java:110)
> at junit.framework.TestResult.runProtected(TestResult.java:128)
> at junit.framework.TestResult.run(TestResult.java:113)
> at junit.framework.TestCase.run(TestCase.java:124)
> at junit.framework.TestSuite.runTest(TestSuite.java:232)
> at junit.framework.TestSuite.run(TestSuite.java:227)
> at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
> at 
> org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:59)
> at 
> org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.executeTestSet(AbstractDirectoryTestSuite.java:120)
> at 
> org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.execute(AbstractDirectoryTestSuite.java:145)
> at org.apache.maven.surefire.Surefire.run(Surefire.java:104)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.maven.surefire.booter.SurefireBooter.runSuitesInProcess(SurefireBooter.java:290)
> at 
> org.apache.maven.surefire.booter.SurefireBooter.main(SurefireBooter.java:1017)
> {noformat}
> is the more common one, but I also saw
> {noformat}
> ---
> Test set: org.apache.hadoop.hdfs.TestDFSRollback
> ---
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 7.471 sec <<< 
> FAILURE!
> testRollback(org.apache.hadoop.hdfs.TestDFSRollback)  Time elapsed: 7.304 sec 
>  <<< FAILURE!
> junit.framework.AssertionFailedError: Expected substring 'file VERSION has 
> layoutVersion missing' in exception but got: 
> java.lang.IllegalArgumentException: Malformed \u encoding.
> at java.util.Properties.loadConvert(Properties.java:552)
> at java.util.Properties.load0(Properti

[jira] [Commented] (HDFS-2417) Warnings about attempt to override final parameter while getting delegation token

2011-10-07 Thread Rajit Saha (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13123259#comment-13123259
 ] 

Rajit Saha commented on HDFS-2417:
--

Thanks Aaron. We have started seeing this in 0.20.205 as well, and we found 
HADOOP-7664 in 0.23; are they coming from the same source?

> Warnings about attempt to override final parameter while getting delegation 
> token
> -
>
> Key: HDFS-2417
> URL: https://issues.apache.org/jira/browse/HDFS-2417
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.20.205.0
>Reporter: Rajit Saha
>
> Whenever I run any MapReduce job and it tries to acquire a delegation token 
> from the NN, the JT log shows the following warnings about "a attempt to 
> override final parameter:".
> The log snippet from the JT log:
> 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: mapred.job.reuse.jvm.num.tasks;  Ignoring.
> 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: mapred.system.dir;  Ignoring.
> 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: hadoop.job.history.user.location;  Ignoring.
> 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: mapred.local.dir;  Ignoring.
> 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: mapred.job.tracker.http.address;  Ignoring.
> 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: dfs.data.dir;  Ignoring.
> 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: dfs.http.address;  Ignoring.
> 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: mapreduce.admin.map.child.java.opts;  Ignoring.
> 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: mapreduce.history.server.http.address;  Ignoring.
> 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: mapreduce.history.server.embedded;  Ignoring.
> 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: mapreduce.jobtracker.split.metainfo.maxsize;  Ignoring.
> 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: mapreduce.admin.reduce.child.java.opts;  Ignoring.
> 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: hadoop.tmp.dir;  Ignoring.
> 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: mapred.jobtracker.maxtasks.per.job;  Ignoring.
> 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: mapred.job.tracker;  Ignoring.
> 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: dfs.name.dir;  Ignoring.
> 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: mapred.temp.dir;  Ignoring.
> 2011-10-07 20:29:19,103 INFO org.apache.hadoop.mapreduce.security.token.DelegationTokenRenewal: registering token for renewal for service =:50470 and jobID = job_201110072015_0005
> 2011-10-07 20:29:19,103 INFO org.apache.hadoop.mapreduce.security.token.DelegationTokenRenewal: registering token for renewal for service =:8020 and jobID = job_201110072015_0005
> The STDOUT of distcp job
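The warnings quoted above concern Hadoop's "final parameter" rule: a configuration resource loaded later may not override a key an earlier resource marked final. A minimal sketch of that merge behavior, modeled loosely on Hadoop's Configuration (the merge function and its signature are illustrative assumptions, not Hadoop's actual code):

```python
import logging

log = logging.getLogger("conf")

def merge(base, base_finals, overlay, source):
    """Apply `overlay` onto `base`, skipping any key finalized in `base`.

    `base_finals` is the set of keys an earlier resource marked final;
    each skipped key is logged, mirroring the JT warnings above.
    """
    merged = dict(base)
    ignored = []
    for key, value in overlay.items():
        if key in base_finals:
            log.warning("%s: an attempt to override final parameter: %s; Ignoring.",
                        source, key)
            ignored.append(key)
        else:
            merged[key] = value
    return merged, ignored
```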

[jira] [Commented] (HDFS-2414) TestDFSRollback fails intermittently

2011-10-07 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13123256#comment-13123256
 ] 

Hadoop QA commented on HDFS-2414:
-

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12498244/hdfs-2414.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 12 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/1355//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/1355//console

This message is automatically generated.

> TestDFSRollback fails intermittently
> 
>
> Key: HDFS-2414
> URL: https://issues.apache.org/jira/browse/HDFS-2414
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node, test
>Affects Versions: 0.23.0
>Reporter: Robert Joseph Evans
>Assignee: Todd Lipcon
>Priority: Critical
> Attachments: hdfs-2414.txt, hdfs-2414.txt, hdfs-2414.txt, 
> run-106-failed.tgz, run-158-failed.tgz
>
>

[jira] [Commented] (HDFS-2231) Configuration changes for HA namenode

2011-10-07 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13123255#comment-13123255
 ] 

Aaron T. Myers commented on HDFS-2231:
--

Just realized I said over in HDFS-1973 that I would comment on this JIRA about 
client-side conf changes, but it totally slipped my mind.

Anyway, the only change to {{DFSConfigKeys}} in HDFS-1973 was to introduce 
{{"dfs.client.failover.proxy.provider"}}, a config prefix that allows one to 
configure a particular implementation of a {{FailoverProxyProvider}} for a 
given NN logical URI. That should be compatible with the changes you've 
proposed here, though some of what went into HDFS-1973 will need a little 
adaptation to take advantage of what you've implemented here.

My intention in HDFS-1973 was that the various {{FailoverProxyProvider}} 
implementations would be responsible for their own configurations. For example, 
a ZK-based {{FailoverProxyProvider}} might need to know the quorum members. So 
the {{ConfiguredFailoverProxyProvider}} introduced by HDFS-1973 added the 
config parameter "{{dfs.ha.namenode.addresses}}", a comma-separated list of 
actual (not logical) URIs. This is equivalent to the functionality introduced 
by the pair of configuration options {{dfs.namenode.ids}} and 
{{dfs.namenode.rpc-address.*}} introduced in this patch, and I like the design 
you have here better.

I think it's fine to commit this as-designed now, and then we can fix up 
{{ConfiguredFailoverProxyProvider}} once this goes in. I've filed HDFS-2418 to 
take care of that.
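The key pair discussed above can be sketched as a simple lookup: enumerate the NN ids, then resolve each id's RPC address. The key names come from the comment; the lookup logic and flat-dict config are illustrative assumptions, not the HDFS-2231 implementation:

```python
def namenode_addresses(conf: dict) -> list:
    """Resolve configured NN RPC addresses from dfs.namenode.ids plus
    per-id dfs.namenode.rpc-address.* entries."""
    ids = [i.strip() for i in conf.get("dfs.namenode.ids", "").split(",") if i.strip()]
    # Each id maps to one actual (not logical) address.
    return [conf[f"dfs.namenode.rpc-address.{nn_id}"] for nn_id in ids]
```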

> Configuration changes for HA namenode
> -
>
> Key: HDFS-2231
> URL: https://issues.apache.org/jira/browse/HDFS-2231
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Fix For: HA branch (HDFS-1623)
>
> Attachments: HDFS-2231.txt, HDFS-2231.txt
>
>
> This jira tracks the changes required for configuring HA setup for namenodes.





[jira] [Created] (HDFS-2418) Change ConfiguredFailoverProxyProvider to take advantage of HDFS-2231

2011-10-07 Thread Aaron T. Myers (Created) (JIRA)
Change ConfiguredFailoverProxyProvider to take advantage of HDFS-2231
-

 Key: HDFS-2418
 URL: https://issues.apache.org/jira/browse/HDFS-2418
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs client
Affects Versions: HA branch (HDFS-1623)
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers


The {{ConfiguredFailoverProxyProvider}} will need to be amended to take 
advantage of the improvements to HA configuration introduced by HDFS-2231, once 
it's committed.





[jira] [Commented] (HDFS-2417) Warnings about attempt to override final parameter while getting delegation token

2011-10-07 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13123238#comment-13123238
 ] 

Aaron T. Myers commented on HDFS-2417:
--

Hi Rajit, is this a duplicate of HADOOP-7664?

> Warnings about attempt to override final parameter while getting delegation 
> token
> -
>
> Key: HDFS-2417
> URL: https://issues.apache.org/jira/browse/HDFS-2417
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.20.205.0
>Reporter: Rajit Saha
>

[jira] [Commented] (HDFS-2414) TestDFSRollback fails intermittently

2011-10-07 Thread Robert Joseph Evans (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13123236#comment-13123236
 ] 

Robert Joseph Evans commented on HDFS-2414:
---

+1 for the fix (non-binding). I am happy to see that the corruption to the file 
is now deterministic. Throwing random data at code is good for testing, but it 
needs to be reproducible with a random seed or something.

Thanks for jumping on this so quickly.
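The point above about reproducible fault injection can be sketched with a seeded RNG: the same seed always corrupts the same bytes, so a failing run can be replayed from its logged seed. This is an illustrative sketch, not the test utility actually used in the patch:

```python
import random

def corrupt(data: bytes, nbytes: int, seed: int) -> bytes:
    """Flip bits in `nbytes` randomly chosen positions, deterministically
    for a given seed, so a test failure can be reproduced exactly."""
    rng = random.Random(seed)              # seed makes the corruption replayable
    buf = bytearray(data)
    for _ in range(nbytes):
        pos = rng.randrange(len(buf))
        buf[pos] ^= rng.randrange(1, 256)  # XOR with a nonzero value changes the byte
    return bytes(buf)
```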

> TestDFSRollback fails intermittently
> 
>
> Key: HDFS-2414
> URL: https://issues.apache.org/jira/browse/HDFS-2414
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node, test
>Affects Versions: 0.23.0
>Reporter: Robert Joseph Evans
>Assignee: Todd Lipcon
>Priority: Critical
> Attachments: hdfs-2414.txt, hdfs-2414.txt, hdfs-2414.txt, 
> run-106-failed.tgz, run-158-failed.tgz
>
>

[jira] [Updated] (HDFS-2414) TestDFSRollback fails intermittently

2011-10-07 Thread Todd Lipcon (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HDFS-2414:
--

Assignee: Todd Lipcon
Target Version/s: 0.23.0, 0.24.0  (was: 0.24.0, 0.23.0)
  Status: Patch Available  (was: Open)

> TestDFSRollback fails intermittently
> 
>
> Key: HDFS-2414
> URL: https://issues.apache.org/jira/browse/HDFS-2414
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node, test
>Affects Versions: 0.23.0
>Reporter: Robert Joseph Evans
>Assignee: Todd Lipcon
>Priority: Critical
> Attachments: hdfs-2414.txt, hdfs-2414.txt, hdfs-2414.txt, 
> run-106-failed.tgz, run-158-failed.tgz
>
>
> When running TestDFSRollback repeatedly in a loop I observed a failure rate 
> of about 3%.  Two separate stack traces are in the output and it appears to 
> have something to do with not writing out a complete snapshot of the data for 
> rollback.
> {noformat}
> ---
> Test set: org.apache.hadoop.hdfs.TestDFSRollback
> ---
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 8.514 sec <<< 
> FAILURE!
> testRollback(org.apache.hadoop.hdfs.TestDFSRollback)  Time elapsed: 8.34 sec  
> <<< FAILURE!
> java.lang.AssertionError: File contents differed:
>   
> /home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data2/current/VERSION=5b19197114fad0a254e3f318b7f14aec
>   
> /home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data1/current/VERSION=ea7b000a6a1711169fc7a836b240a991
> at org.junit.Assert.fail(Assert.java:91)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.assertFileContentsSame(FSImageTestUtil.java:250)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.assertParallelFilesAreIdentical(FSImageTestUtil.java:236)
> at 
> org.apache.hadoop.hdfs.TestDFSRollback.checkResult(TestDFSRollback.java:86)
> at 
> org.apache.hadoop.hdfs.TestDFSRollback.testRollback(TestDFSRollback.java:171)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at junit.framework.TestCase.runTest(TestCase.java:168)
> at junit.framework.TestCase.runBare(TestCase.java:134)
> at junit.framework.TestResult$1.protect(TestResult.java:110)
> at junit.framework.TestResult.runProtected(TestResult.java:128)
> at junit.framework.TestResult.run(TestResult.java:113)
> at junit.framework.TestCase.run(TestCase.java:124)
> at junit.framework.TestSuite.runTest(TestSuite.java:232)
> at junit.framework.TestSuite.run(TestSuite.java:227)
> at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
> at 
> org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:59)
> at 
> org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.executeTestSet(AbstractDirectoryTestSuite.java:120)
> at 
> org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.execute(AbstractDirectoryTestSuite.java:145)
> at org.apache.maven.surefire.Surefire.run(Surefire.java:104)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.maven.surefire.booter.SurefireBooter.runSuitesInProcess(SurefireBooter.java:290)
> at 
> org.apache.maven.surefire.booter.SurefireBooter.main(SurefireBooter.java:1017)
> {noformat}
> is the more common one, but I also saw
> {noformat}
> ---
> Test set: org.apache.hadoop.hdfs.TestDFSRollback
> ---
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 7.471 sec <<< 
> FAILURE!
> testRollback(org.apache.hadoop.hdfs.TestDFSRollback)  Time elapsed: 7.304 sec 
>  <<< FAILURE!
> junit.framework.AssertionFailedError: Expected substring 'file VERSION has 
> layoutVersion missing' in exception but got: 
> java.lang.IllegalArgumentException: Malformed \u encoding.
> at java.util.Properties.loadConvert(Properties.java:552)

[jira] [Updated] (HDFS-2414) TestDFSRollback fails intermittently

2011-10-07 Thread Todd Lipcon (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HDFS-2414:
--

Attachment: hdfs-2414.txt

Attached a fix for the other error. The problem was that we happened to corrupt 
the file in such a way that it ended up containing a "\u" sequence with random 
junk after the "\u". Very improbable, but it happens :) This caused the IAE to 
be thrown.

This patch changes the code to corrupt the file in a deterministic way to check 
exactly the expected code path.
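For context, the failure mode described above is easy to reproduce standalone: java.util.Properties rejects a "\u" escape that is not followed by four hex digits with an unchecked IllegalArgumentException. A minimal sketch (class and method names below are illustrative, not from the patch):

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class MalformedEscapeDemo {
    // True if parsing the given properties text dies with the unchecked
    // IllegalArgumentException that Properties throws for a bad escape --
    // the same failure mode random corruption of a VERSION file can hit.
    public static boolean triggersMalformedEscape(String text) {
        try {
            new Properties().load(new StringReader(text));
            return false;
        } catch (IllegalArgumentException e) {
            return true;   // e.g. a malformed unicode escape
        } catch (IOException e) {
            return false;  // ordinary I/O failure, not the parse error
        }
    }

    public static void main(String[] args) {
        // A well-formed escape parses fine; "\u" followed by non-hex
        // junk does not.
        if (triggersMalformedEscape("layoutVersion=\\u0041")) {
            throw new AssertionError("well-formed escape should parse");
        }
        if (!triggersMalformedEscape("layoutVersion=\\uZZ99")) {
            throw new AssertionError("bad escape should be rejected");
        }
        System.out.println("ok");
    }
}
```

This is why corrupting the file deterministically, rather than with random bytes, keeps the test on the intended code path.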

> TestDFSRollback fails intermittently
> 
>
> Key: HDFS-2414
> URL: https://issues.apache.org/jira/browse/HDFS-2414
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node, test
>Affects Versions: 0.23.0
>Reporter: Robert Joseph Evans
>Priority: Critical
> Attachments: hdfs-2414.txt, hdfs-2414.txt, hdfs-2414.txt, 
> run-106-failed.tgz, run-158-failed.tgz
>
>

[jira] [Updated] (HDFS-2414) TestDFSRollback fails intermittently

2011-10-07 Thread Todd Lipcon (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HDFS-2414:
--

Attachment: hdfs-2414.txt

Sorry, some unused imports in the previous patch. This one's good to go.

> TestDFSRollback fails intermittently
> 
>
> Key: HDFS-2414
> URL: https://issues.apache.org/jira/browse/HDFS-2414
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node, test
>Affects Versions: 0.23.0
>Reporter: Robert Joseph Evans
>Priority: Critical
> Attachments: hdfs-2414.txt, hdfs-2414.txt, hdfs-2414.txt, 
> run-106-failed.tgz, run-158-failed.tgz
>
>

[jira] [Created] (HDFS-2417) Warnings about attempt to override final parameter while getting delegation token

2011-10-07 Thread Rajit Saha (Created) (JIRA)
Warnings about attempt to override final parameter while getting delegation 
token
-

 Key: HDFS-2417
 URL: https://issues.apache.org/jira/browse/HDFS-2417
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.20.205.0
Reporter: Rajit Saha


Whenever I run a MapReduce job and it tries to acquire a delegation token from 
the NN, the JT log shows warnings of the form "a attempt to override final 
parameter: ...;  Ignoring."


The snippet from the JT log:

2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: mapred.job.reuse.jvm.num.tasks;  Ignoring.
2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: mapred.system.dir;  Ignoring.
2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: hadoop.job.history.user.location;  Ignoring.
2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: mapred.local.dir;  Ignoring.
2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: mapred.job.tracker.http.address;  Ignoring.
2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: dfs.data.dir;  Ignoring.
2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: dfs.http.address;  Ignoring.
2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: mapreduce.admin.map.child.java.opts;  Ignoring.
2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: mapreduce.history.server.http.address;  Ignoring.
2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: mapreduce.history.server.embedded;  Ignoring.
2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: mapreduce.jobtracker.split.metainfo.maxsize;  Ignoring.
2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: mapreduce.admin.reduce.child.java.opts;  Ignoring.
2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: hadoop.tmp.dir;  Ignoring.
2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: mapred.jobtracker.maxtasks.per.job;  Ignoring.
2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: mapred.job.tracker;  Ignoring.
2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: dfs.name.dir;  Ignoring.
2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: mapred.temp.dir;  Ignoring.
2011-10-07 20:29:19,103 INFO org.apache.hadoop.mapreduce.security.token.DelegationTokenRenewal: registering token for renewal for service =:50470 and jobID = job_201110072015_0005
2011-10-07 20:29:19,103 INFO org.apache.hadoop.mapreduce.security.token.DelegationTokenRenewal: registering token for renewal for service =:8020 and jobID = job_201110072015_0005



The stdout of the distcp job when these warnings were logged in the JT log:

$ hadoop distcp hftp://:50070/tmp/inp out
11/10/07 20:29:17 INFO tools.DistCp: srcPaths=[hftp://:50070/tmp/inp]
11/10/07 20:29:17 INFO tools.DistCp: destPath=out
11/10/07 20:29:18 INFO security.TokenCache: Got dt for hftp://:50070/tmp/inp;uri=:50470;t.service=:50470
11/10/07 20:29:18 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 7 for  on :8020
11/10/07 20:29:18 INFO security.TokenCache: Got dt for 1318019341/as;uri=:8020;t.service=:8020
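The warnings reported here reflect Hadoop Configuration's "final parameter" rule: a resource loaded earlier (e.g. the cluster's site config) can mark a property final, and a later resource (here, the job's job.xml) that tries to override it is ignored with exactly this warning. A toy model of that merge rule, in plain Java (this is illustrative only, not the actual org.apache.hadoop.conf.Configuration code):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class FinalParamModel {
    private final Map<String, String> props = new HashMap<>();
    private final Set<String> finals = new HashSet<>();

    // Later resources may not override a key an earlier resource marked
    // final; the attempt is logged and dropped, mirroring the JT warning.
    public void set(String key, String value, boolean isFinal) {
        if (finals.contains(key)) {
            System.out.println("WARN: a attempt to override final parameter: "
                    + key + ";  Ignoring.");
            return;
        }
        props.put(key, value);
        if (isFinal) {
            finals.add(key);
        }
    }

    public String get(String key) {
        return props.get(key);
    }

    public static void main(String[] args) {
        FinalParamModel conf = new FinalParamModel();
        conf.set("mapred.system.dir", "/mapred/system", true); // site config, final
        conf.set("mapred.system.dir", "/tmp/other", false);    // job.xml, ignored
        if (!"/mapred/system".equals(conf.get("mapred.system.dir"))) {
            throw new AssertionError("final value should win");
        }
    }
}
```

Under this rule the warnings are expected noise whenever a job.xml carries values the cluster config has locked down.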


[jira] [Commented] (HDFS-2205) Log message for failed connection to datanode is not followed by a success message.

2011-10-07 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13123190#comment-13123190
 ] 

Hadoop QA commented on HDFS-2205:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12498236/HDFS-2205.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/1354//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/1354//console

This message is automatically generated.

> Log message for failed connection to datanode is not followed by a success 
> message.
> ---
>
> Key: HDFS-2205
> URL: https://issues.apache.org/jira/browse/HDFS-2205
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs client
>Affects Versions: 0.23.0
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Fix For: 0.23.0
>
> Attachments: HDFS-2205.patch, HDFS-2205.patch, HDFS-2205.patch
>
>
> To avoid confusing users on whether their HDFS operation was succesful or 
> not, a success message should be printed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2412) Add backwards-compatibility layer for FSConstants

2011-10-07 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13123179#comment-13123179
 ] 

Hudson commented on HDFS-2412:
--

Integrated in Hadoop-Mapreduce-trunk-Commit #1059 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1059/])
HDFS-2412. Add backwards-compatibility layer for renamed FSConstants class. 
Contributed by Todd Lipcon.

todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1180202
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/FSConstants.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java


> Add backwards-compatibility layer for FSConstants
> -
>
> Key: HDFS-2412
> URL: https://issues.apache.org/jira/browse/HDFS-2412
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.23.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Blocker
> Fix For: 0.23.0
>
> Attachments: hdfs-2412.txt
>
>
> HDFS-1620 renamed FSConstants which we believed to be a private class. But 
> currently the public APIs for safe-mode and datanode reports depend on 
> constants in FSConstants. This is breaking HBase builds against 0.23. This 
> JIRA is to provide a backward-compatibility route.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2414) TestDFSRollback fails intermittently

2011-10-07 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13123184#comment-13123184
 ] 

Todd Lipcon commented on HDFS-2414:
---

oh... this is also just a test issue - this part of the test case calls 
UpgradeUtilities.corruptFile to corrupt one of the VERSION files, and then 
expects it to fail with a certain exception string. Instead, we fail with a 
different exception because we've corrupted the file in a non-UTF8-compliant 
way.

I think the fix is probably to catch this kind of exception when reading a 
storage file and rethrow as an IOException, then change the test to expect 
either type of error.
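The catch-and-rethrow suggested above could look like the following sketch (hypothetical class and method names, not the actual HDFS-2414 patch): wrap the Properties parse so a corrupt VERSION file surfaces as a checked IOException, which callers already handle, instead of an unchecked IllegalArgumentException.

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class StorageFileReader {
    // Sketch of the suggested fix: an IllegalArgumentException from a
    // malformed unicode escape in a corrupted file is rethrown as an
    // IOException, so callers see one exception type for "bad file".
    public static Properties readPropertiesFile(File from) throws IOException {
        Properties props = new Properties();
        try (InputStream in = new FileInputStream(from)) {
            props.load(in);
        } catch (IllegalArgumentException iae) {
            throw new IOException("Cannot parse file: " + from, iae);
        }
        return props;
    }

    public static void main(String[] args) throws Exception {
        // Self-check: a file with a bad "\u" escape now fails with IOException.
        File f = File.createTempFile("VERSION", null);
        f.deleteOnExit();
        java.nio.file.Files.write(f.toPath(),
                "layoutVersion=\\uZZ99\n".getBytes("UTF-8"));
        try {
            readPropertiesFile(f);
            throw new AssertionError("expected IOException for corrupt file");
        } catch (IOException expected) {
            System.out.println("ok: " + expected.getMessage());
        }
    }
}
```

With this wrapper, the test can expect an IOException for either corruption mode rather than special-casing the parser's unchecked exception.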

> TestDFSRollback fails intermittently
> 
>
> Key: HDFS-2414
> URL: https://issues.apache.org/jira/browse/HDFS-2414
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node, test
>Affects Versions: 0.23.0
>Reporter: Robert Joseph Evans
>Priority: Critical
> Attachments: hdfs-2414.txt, run-106-failed.tgz, run-158-failed.tgz
>
>

[jira] [Commented] (HDFS-2414) TestDFSRollback fails intermittently

2011-10-07 Thread Robert Joseph Evans (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13123177#comment-13123177
 ] 

Robert Joseph Evans commented on HDFS-2414:
---

Sorry I can't be of more help on this; I really don't know the rollback code at 
all. I have a script and am trying to reproduce it on 0.20 and 0.22, though. 
Will let you know how it goes.

> TestDFSRollback fails intermittently
> 
>
> Key: HDFS-2414
> URL: https://issues.apache.org/jira/browse/HDFS-2414
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node, test
>Affects Versions: 0.23.0
>Reporter: Robert Joseph Evans
>Priority: Critical
> Attachments: hdfs-2414.txt, run-106-failed.tgz, run-158-failed.tgz
>
>

[jira] [Commented] (HDFS-2414) TestDFSRollback fails intermittently

2011-10-07 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13123166#comment-13123166
 ] 

Todd Lipcon commented on HDFS-2414:
---

I managed to reproduce the other failure too... I see this in one of the 
VERSION files:
{code}
CFºE ^Utt<8c>áà 1&:45:à2OüØT <8d><8d>11Þ^YÙmespacd"a=054a0Ç3¯<8d>3
æ\uÉt·r^FD=^_ºstCl<98>stërIâ
cwime^S<99>
sto:aÆeTypr¨MA6E^DNOjEg_Ëo^[k^RoflID=£P-15<94>32Í073^QkÚ27.<82>.0ÆO-1Y+1020pðIA<9f>î
Eayout<9f>erÌ_<83>n¸-38o
{code}

wow... putting my thinking cap on here.

> TestDFSRollback fails intermittently
> 
>
> Key: HDFS-2414
> URL: https://issues.apache.org/jira/browse/HDFS-2414
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node, test
>Affects Versions: 0.23.0
>Reporter: Robert Joseph Evans
>Priority: Critical
> Attachments: hdfs-2414.txt, run-106-failed.tgz, run-158-failed.tgz
>
>
> at 
> org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.execute(AbstractDirectoryTestSuite.java:145)
> at org.apache.maven.surefire.Surefire.run(Surefire.java:104)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.maven.surefire.booter.SurefireBooter.runSuitesInProcess(SurefireBooter.java:290)
> at 
> org.apache.maven.surefire.booter.SurefireBooter.main(SurefireBooter.java:1017)
> {noformat}
> is the more common one, but I also saw
> {noformat}
> ---
> Test set: org.apache.hadoop.hdfs.TestDFSRollback
> ---
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 7.471 sec <<< 
> FAILURE!
> testRollback(org.apache.hadoop.hdfs.TestDFSRollback)  Time elapsed: 7.304 sec 
>  <<< FAILURE!
> junit.framework.AssertionFailedError: 

[jira] [Commented] (HDFS-2412) Add backwards-compatibility layer for FSConstants

2011-10-07 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13123153#comment-13123153
 ] 

Hudson commented on HDFS-2412:
--

Integrated in Hadoop-Common-trunk-Commit #1039 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1039/])
HDFS-2412. Add backwards-compatibility layer for renamed FSConstants class. 
Contributed by Todd Lipcon.

todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1180202
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/FSConstants.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java


> Add backwards-compatibility layer for FSConstants
> -
>
> Key: HDFS-2412
> URL: https://issues.apache.org/jira/browse/HDFS-2412
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.23.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Blocker
> Fix For: 0.23.0
>
> Attachments: hdfs-2412.txt
>
>
> HDFS-1620 renamed FSConstants, which we believed to be a private class. But 
> currently the public APIs for safe-mode and datanode reports depend on 
> constants in FSConstants. This is breaking HBase builds against 0.23. This 
> JIRA is to provide a backward-compatibility route.
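A hedged sketch of the kind of backwards-compatibility route described above (class and constant names are illustrative stand-ins, not the actual HDFS-2412 patch): keeping the old public name as a deprecated subclass of the renamed class lets old client code keep compiling, because nested types and constants are inherited through the subclass name.

```java
public class CompatDemo {
    // Stand-in for the renamed class (HdfsConstants).
    static class HdfsConstants {
        enum SafeModeAction { SAFEMODE_LEAVE, SAFEMODE_ENTER, SAFEMODE_GET }
    }

    // Deprecated shim under the old name (stand-in for FSConstants):
    // it inherits every nested type and constant from the new class.
    @Deprecated
    static class FSConstants extends HdfsConstants { }

    public static void main(String[] args) {
        // Client code written against the old name still compiles:
        FSConstants.SafeModeAction a = FSConstants.SafeModeAction.SAFEMODE_GET;
        System.out.println(a); // SAFEMODE_GET
    }
}
```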

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2412) Add backwards-compatibility layer for FSConstants

2011-10-07 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13123152#comment-13123152
 ] 

Hudson commented on HDFS-2412:
--

Integrated in Hadoop-Hdfs-trunk-Commit #1117 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/1117/])
HDFS-2412. Add backwards-compatibility layer for renamed FSConstants class. 
Contributed by Todd Lipcon.

todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1180202
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/FSConstants.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java


> Add backwards-compatibility layer for FSConstants
> -
>
> Key: HDFS-2412
> URL: https://issues.apache.org/jira/browse/HDFS-2412
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.23.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Blocker
> Fix For: 0.23.0
>
> Attachments: hdfs-2412.txt
>
>
> HDFS-1620 renamed FSConstants, which we believed to be a private class. But 
> currently the public APIs for safe-mode and datanode reports depend on 
> constants in FSConstants. This is breaking HBase builds against 0.23. This 
> JIRA is to provide a backward-compatibility route.





[jira] [Updated] (HDFS-2412) Add backwards-compatibility layer for FSConstants

2011-10-07 Thread Todd Lipcon (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HDFS-2412:
--

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Resolved this in 23 and trunk. If we have more issues related to 1620, let's 
just revert both this and 1620.

> Add backwards-compatibility layer for FSConstants
> -
>
> Key: HDFS-2412
> URL: https://issues.apache.org/jira/browse/HDFS-2412
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.23.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Blocker
> Fix For: 0.23.0
>
> Attachments: hdfs-2412.txt
>
>
> HDFS-1620 renamed FSConstants, which we believed to be a private class. But 
> currently the public APIs for safe-mode and datanode reports depend on 
> constants in FSConstants. This is breaking HBase builds against 0.23. This 
> JIRA is to provide a backward-compatibility route.





[jira] [Updated] (HDFS-2205) Log message for failed connection to datanode is not followed by a success message.

2011-10-07 Thread Ravi Prakash (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HDFS-2205:
---

Attachment: HDFS-2205.patch

Thanks for your suggestions, Steve. I've incorporated the changes in this new 
patch. Could you please review and commit?


> Log message for failed connection to datanode is not followed by a success 
> message.
> ---
>
> Key: HDFS-2205
> URL: https://issues.apache.org/jira/browse/HDFS-2205
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs client
>Affects Versions: 0.23.0
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Fix For: 0.23.0
>
> Attachments: HDFS-2205.patch, HDFS-2205.patch, HDFS-2205.patch
>
>
> To avoid confusing users about whether their HDFS operation was successful 
> or not, a success message should be printed.





[jira] [Commented] (HDFS-2205) Log message for failed connection to datanode is not followed by a success message.

2011-10-07 Thread Steve Loughran (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13123124#comment-13123124
 ] 

Steve Loughran commented on HDFS-2205:
--

Instead of going {{LOG.warn("text" + ex.getMessage());}}

can you go {{LOG.warn("text" + ex, ex);}}?

Two reasons:
# not all exceptions have a message (e.g. NullPointerException)
# the second argument hands off the potentially nested exception chain to the 
logger to process as its formatter sees fit.

A lot of the existing code doesn't get this right, but this patch makes things 
slightly worse in terms of reporting the exception itself.

Also, why log the connection failure at debug when more info is already being 
printed at warn() level? I'd delete the debug logging clause. 
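A runnable illustration of the two points above, using java.util.logging as a stand-in for the commons-logging API used in the patch (the message text and class name are made up):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class LogThrowableDemo {
    private static final Logger LOG =
            Logger.getLogger(LogThrowableDemo.class.getName());

    public static void main(String[] args) {
        try {
            throw new NullPointerException(); // constructed with no message
        } catch (Exception ex) {
            // Weak: getMessage() is null for a bare NPE, so this logs
            // "Failed to connect to datanode: null" and loses the stack trace.
            LOG.warning("Failed to connect to datanode: " + ex.getMessage());

            // Better: hand the Throwable to the logger (the JDK analogue of
            // commons-logging's LOG.warn(msg, ex)); its formatter can then
            // print the full, possibly nested, exception chain.
            LOG.log(Level.WARNING, "Failed to connect to datanode: " + ex, ex);
        }
    }
}
```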


> Log message for failed connection to datanode is not followed by a success 
> message.
> ---
>
> Key: HDFS-2205
> URL: https://issues.apache.org/jira/browse/HDFS-2205
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs client
>Affects Versions: 0.23.0
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Fix For: 0.23.0
>
> Attachments: HDFS-2205.patch, HDFS-2205.patch
>
>
> To avoid confusing users about whether their HDFS operation was successful 
> or not, a success message should be printed.





[jira] [Commented] (HDFS-2414) TestDFSRollback fails intermittently

2011-10-07 Thread Robert Joseph Evans (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13123085#comment-13123085
 ] 

Robert Joseph Evans commented on HDFS-2414:
---

Wow that was fast.

> TestDFSRollback fails intermittently
> 
>
> Key: HDFS-2414
> URL: https://issues.apache.org/jira/browse/HDFS-2414
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node, test
>Affects Versions: 0.23.0
>Reporter: Robert Joseph Evans
>Priority: Critical
> Attachments: hdfs-2414.txt, run-106-failed.tgz, run-158-failed.tgz
>
>
> When running TestDFSRollback repeatedly in a loop I observed a failure rate 
> of about 3%.  Two separate stack traces are in the output and it appears to 
> have something to do with not writing out a complete snapshot of the data for 
> rollback.
> {noformat}
> ---
> Test set: org.apache.hadoop.hdfs.TestDFSRollback
> ---
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 8.514 sec <<< 
> FAILURE!
> testRollback(org.apache.hadoop.hdfs.TestDFSRollback)  Time elapsed: 8.34 sec  
> <<< FAILURE!
> java.lang.AssertionError: File contents differed:
>   
> /home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data2/current/VERSION=5b19197114fad0a254e3f318b7f14aec
>   
> /home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data1/current/VERSION=ea7b000a6a1711169fc7a836b240a991
> at org.junit.Assert.fail(Assert.java:91)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.assertFileContentsSame(FSImageTestUtil.java:250)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.assertParallelFilesAreIdentical(FSImageTestUtil.java:236)
> at 
> org.apache.hadoop.hdfs.TestDFSRollback.checkResult(TestDFSRollback.java:86)
> at 
> org.apache.hadoop.hdfs.TestDFSRollback.testRollback(TestDFSRollback.java:171)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at junit.framework.TestCase.runTest(TestCase.java:168)
> at junit.framework.TestCase.runBare(TestCase.java:134)
> at junit.framework.TestResult$1.protect(TestResult.java:110)
> at junit.framework.TestResult.runProtected(TestResult.java:128)
> at junit.framework.TestResult.run(TestResult.java:113)
> at junit.framework.TestCase.run(TestCase.java:124)
> at junit.framework.TestSuite.runTest(TestSuite.java:232)
> at junit.framework.TestSuite.run(TestSuite.java:227)
> at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
> at 
> org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:59)
> at 
> org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.executeTestSet(AbstractDirectoryTestSuite.java:120)
> at 
> org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.execute(AbstractDirectoryTestSuite.java:145)
> at org.apache.maven.surefire.Surefire.run(Surefire.java:104)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.maven.surefire.booter.SurefireBooter.runSuitesInProcess(SurefireBooter.java:290)
> at 
> org.apache.maven.surefire.booter.SurefireBooter.main(SurefireBooter.java:1017)
> {noformat}
> is the more common one, but I also saw
> {noformat}
> ---
> Test set: org.apache.hadoop.hdfs.TestDFSRollback
> ---
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 7.471 sec <<< 
> FAILURE!
> testRollback(org.apache.hadoop.hdfs.TestDFSRollback)  Time elapsed: 7.304 sec 
>  <<< FAILURE!
> junit.framework.AssertionFailedError: Expected substring 'file VERSION has 
> layoutVersion missing' in exception but got: 
> java.lang.IllegalArgumentException: Malformed \u encoding.
> at java.util.Properties.loadConvert(Properties.java:552)
> at java.util.Properties.load0(Properties.java:374)
> at java.util.Properties.load(Propertie

[jira] [Updated] (HDFS-2414) TestDFSRollback fails intermittently

2011-10-07 Thread Todd Lipcon (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HDFS-2414:
--

Attachment: hdfs-2414.txt

Here's a test fix that addresses the issue of timestamp comparison. Will look 
into the unicode-related one now.

> TestDFSRollback fails intermittently
> 
>
> Key: HDFS-2414
> URL: https://issues.apache.org/jira/browse/HDFS-2414
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node, test
>Affects Versions: 0.23.0
>Reporter: Robert Joseph Evans
>Priority: Critical
> Attachments: hdfs-2414.txt, run-106-failed.tgz, run-158-failed.tgz
>
>
> When running TestDFSRollback repeatedly in a loop I observed a failure rate 
> of about 3%.  Two separate stack traces are in the output and it appears to 
> have something to do with not writing out a complete snapshot of the data for 
> rollback.
> {noformat}
> ---
> Test set: org.apache.hadoop.hdfs.TestDFSRollback
> ---
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 8.514 sec <<< 
> FAILURE!
> testRollback(org.apache.hadoop.hdfs.TestDFSRollback)  Time elapsed: 8.34 sec  
> <<< FAILURE!
> java.lang.AssertionError: File contents differed:
>   
> /home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data2/current/VERSION=5b19197114fad0a254e3f318b7f14aec
>   
> /home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data1/current/VERSION=ea7b000a6a1711169fc7a836b240a991
> at org.junit.Assert.fail(Assert.java:91)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.assertFileContentsSame(FSImageTestUtil.java:250)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.assertParallelFilesAreIdentical(FSImageTestUtil.java:236)
> at 
> org.apache.hadoop.hdfs.TestDFSRollback.checkResult(TestDFSRollback.java:86)
> at 
> org.apache.hadoop.hdfs.TestDFSRollback.testRollback(TestDFSRollback.java:171)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at junit.framework.TestCase.runTest(TestCase.java:168)
> at junit.framework.TestCase.runBare(TestCase.java:134)
> at junit.framework.TestResult$1.protect(TestResult.java:110)
> at junit.framework.TestResult.runProtected(TestResult.java:128)
> at junit.framework.TestResult.run(TestResult.java:113)
> at junit.framework.TestCase.run(TestCase.java:124)
> at junit.framework.TestSuite.runTest(TestSuite.java:232)
> at junit.framework.TestSuite.run(TestSuite.java:227)
> at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
> at 
> org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:59)
> at 
> org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.executeTestSet(AbstractDirectoryTestSuite.java:120)
> at 
> org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.execute(AbstractDirectoryTestSuite.java:145)
> at org.apache.maven.surefire.Surefire.run(Surefire.java:104)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.maven.surefire.booter.SurefireBooter.runSuitesInProcess(SurefireBooter.java:290)
> at 
> org.apache.maven.surefire.booter.SurefireBooter.main(SurefireBooter.java:1017)
> {noformat}
> is the more common one, but I also saw
> {noformat}
> ---
> Test set: org.apache.hadoop.hdfs.TestDFSRollback
> ---
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 7.471 sec <<< 
> FAILURE!
> testRollback(org.apache.hadoop.hdfs.TestDFSRollback)  Time elapsed: 7.304 sec 
>  <<< FAILURE!
> junit.framework.AssertionFailedError: Expected substring 'file VERSION has 
> layoutVersion missing' in exception but got: 
> java.lang.IllegalArgumentException: Malformed \u encoding.
> at java.util.Properties.loadConvert(Properties.java:552)
> at java.util.Properties.load0(Properties.java:3

[jira] [Commented] (HDFS-2205) Log message for failed connection to datanode is not followed by a success message.

2011-10-07 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13123077#comment-13123077
 ] 

Hadoop QA commented on HDFS-2205:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12498209/HDFS-2205.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/1353//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/1353//console

This message is automatically generated.

> Log message for failed connection to datanode is not followed by a success 
> message.
> ---
>
> Key: HDFS-2205
> URL: https://issues.apache.org/jira/browse/HDFS-2205
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs client
>Affects Versions: 0.23.0
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Fix For: 0.23.0
>
> Attachments: HDFS-2205.patch, HDFS-2205.patch
>
>
> To avoid confusing users about whether their HDFS operation was successful 
> or not, a success message should be printed.





[jira] [Commented] (HDFS-2414) TestDFSRollback fails intermittently

2011-10-07 Thread Robert Joseph Evans (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13123061#comment-13123061
 ] 

Robert Joseph Evans commented on HDFS-2414:
---

Another data point: our internal Jenkins build has hit this failure twice in a 
row. It might be that it is just very unlucky, I really don't know.

> TestDFSRollback fails intermittently
> 
>
> Key: HDFS-2414
> URL: https://issues.apache.org/jira/browse/HDFS-2414
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node, test
>Affects Versions: 0.23.0
>Reporter: Robert Joseph Evans
>Priority: Critical
> Attachments: run-106-failed.tgz, run-158-failed.tgz
>
>
> When running TestDFSRollback repeatedly in a loop I observed a failure rate 
> of about 3%.  Two separate stack traces are in the output and it appears to 
> have something to do with not writing out a complete snapshot of the data for 
> rollback.
> {noformat}
> ---
> Test set: org.apache.hadoop.hdfs.TestDFSRollback
> ---
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 8.514 sec <<< 
> FAILURE!
> testRollback(org.apache.hadoop.hdfs.TestDFSRollback)  Time elapsed: 8.34 sec  
> <<< FAILURE!
> java.lang.AssertionError: File contents differed:
>   
> /home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data2/current/VERSION=5b19197114fad0a254e3f318b7f14aec
>   
> /home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data1/current/VERSION=ea7b000a6a1711169fc7a836b240a991
> at org.junit.Assert.fail(Assert.java:91)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.assertFileContentsSame(FSImageTestUtil.java:250)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.assertParallelFilesAreIdentical(FSImageTestUtil.java:236)
> at 
> org.apache.hadoop.hdfs.TestDFSRollback.checkResult(TestDFSRollback.java:86)
> at 
> org.apache.hadoop.hdfs.TestDFSRollback.testRollback(TestDFSRollback.java:171)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at junit.framework.TestCase.runTest(TestCase.java:168)
> at junit.framework.TestCase.runBare(TestCase.java:134)
> at junit.framework.TestResult$1.protect(TestResult.java:110)
> at junit.framework.TestResult.runProtected(TestResult.java:128)
> at junit.framework.TestResult.run(TestResult.java:113)
> at junit.framework.TestCase.run(TestCase.java:124)
> at junit.framework.TestSuite.runTest(TestSuite.java:232)
> at junit.framework.TestSuite.run(TestSuite.java:227)
> at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
> at 
> org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:59)
> at 
> org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.executeTestSet(AbstractDirectoryTestSuite.java:120)
> at 
> org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.execute(AbstractDirectoryTestSuite.java:145)
> at org.apache.maven.surefire.Surefire.run(Surefire.java:104)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.maven.surefire.booter.SurefireBooter.runSuitesInProcess(SurefireBooter.java:290)
> at 
> org.apache.maven.surefire.booter.SurefireBooter.main(SurefireBooter.java:1017)
> {noformat}
> is the more common one, but I also saw
> {noformat}
> ---
> Test set: org.apache.hadoop.hdfs.TestDFSRollback
> ---
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 7.471 sec <<< 
> FAILURE!
> testRollback(org.apache.hadoop.hdfs.TestDFSRollback)  Time elapsed: 7.304 sec 
>  <<< FAILURE!
> junit.framework.AssertionFailedError: Expected substring 'file VERSION has 
> layoutVersion missing' in exception but got: 
> java.lang.IllegalArgumentException: Malformed \u encoding.
> at java.util.Properties.loadConvert

[jira] [Commented] (HDFS-2414) TestDFSRollback fails intermittently

2011-10-07 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13123060#comment-13123060
 ] 

Todd Lipcon commented on HDFS-2414:
---

I found one source of the test failure: the VERSION file contains a timestamp 
at the top, which can differ between the directories. The md5sum check of 
course does not ignore that line. This is the first (more common) failure.
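The timestamp comes from {{java.util.Properties.store()}}, which always writes a date comment as the first line. One way a test could compare VERSION files while ignoring it is to drop '#' comment lines before diffing or hashing; a sketch of the idea only (not the actual FSImageTestUtil fix, and the property name is illustrative):

```java
import java.io.ByteArrayOutputStream;
import java.util.Properties;
import java.util.stream.Collectors;

public class VersionCompare {
    // Drop '#' comment lines (Properties.store always writes a date comment),
    // and sort since Properties does not guarantee iteration order.
    static String stripComments(String text) {
        return text.lines()
                   .filter(line -> !line.startsWith("#"))
                   .sorted()
                   .collect(Collectors.joining("\n"));
    }

    public static void main(String[] args) throws Exception {
        Properties version = new Properties();
        version.setProperty("layoutVersion", "-38");

        // Two store() calls of the same data: the raw bytes may differ only
        // in the timestamp line, but the filtered views are identical.
        ByteArrayOutputStream a = new ByteArrayOutputStream();
        ByteArrayOutputStream b = new ByteArrayOutputStream();
        version.store(a, null);
        version.store(b, null);
        System.out.println(
            stripComments(a.toString()).equals(stripComments(b.toString())));
    }
}
```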

> TestDFSRollback fails intermittently
> 
>
> Key: HDFS-2414
> URL: https://issues.apache.org/jira/browse/HDFS-2414
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node, test
>Affects Versions: 0.23.0
>Reporter: Robert Joseph Evans
>Priority: Critical
> Attachments: run-106-failed.tgz, run-158-failed.tgz
>
>
> When running TestDFSRollback repeatedly in a loop I observed a failure rate 
> of about 3%.  Two separate stack traces are in the output and it appears to 
> have something to do with not writing out a complete snapshot of the data for 
> rollback.
> {noformat}
> ---
> Test set: org.apache.hadoop.hdfs.TestDFSRollback
> ---
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 8.514 sec <<< 
> FAILURE!
> testRollback(org.apache.hadoop.hdfs.TestDFSRollback)  Time elapsed: 8.34 sec  
> <<< FAILURE!
> java.lang.AssertionError: File contents differed:
>   
> /home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data2/current/VERSION=5b19197114fad0a254e3f318b7f14aec
>   
> /home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data1/current/VERSION=ea7b000a6a1711169fc7a836b240a991
> at org.junit.Assert.fail(Assert.java:91)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.assertFileContentsSame(FSImageTestUtil.java:250)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.assertParallelFilesAreIdentical(FSImageTestUtil.java:236)
> at 
> org.apache.hadoop.hdfs.TestDFSRollback.checkResult(TestDFSRollback.java:86)
> at 
> org.apache.hadoop.hdfs.TestDFSRollback.testRollback(TestDFSRollback.java:171)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at junit.framework.TestCase.runTest(TestCase.java:168)
> at junit.framework.TestCase.runBare(TestCase.java:134)
> at junit.framework.TestResult$1.protect(TestResult.java:110)
> at junit.framework.TestResult.runProtected(TestResult.java:128)
> at junit.framework.TestResult.run(TestResult.java:113)
> at junit.framework.TestCase.run(TestCase.java:124)
> at junit.framework.TestSuite.runTest(TestSuite.java:232)
> at junit.framework.TestSuite.run(TestSuite.java:227)
> at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
> at 
> org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:59)
> at 
> org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.executeTestSet(AbstractDirectoryTestSuite.java:120)
> at 
> org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.execute(AbstractDirectoryTestSuite.java:145)
> at org.apache.maven.surefire.Surefire.run(Surefire.java:104)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.maven.surefire.booter.SurefireBooter.runSuitesInProcess(SurefireBooter.java:290)
> at 
> org.apache.maven.surefire.booter.SurefireBooter.main(SurefireBooter.java:1017)
> {noformat}
> is the more common one, but I also saw
> {noformat}
> ---
> Test set: org.apache.hadoop.hdfs.TestDFSRollback
> ---
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 7.471 sec <<< 
> FAILURE!
> testRollback(org.apache.hadoop.hdfs.TestDFSRollback)  Time elapsed: 7.304 sec 
>  <<< FAILURE!
> junit.framework.AssertionFailedError: Expected substring 'file VERSION has 
> layoutVersion missing' in exception but got: 
> java.lang.IllegalArgumentExcept

[jira] [Updated] (HDFS-2322) the build fails in Windows because commons-daemon TAR cannot be fetched

2011-10-07 Thread Alejandro Abdelnur (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HDFS-2322:
-

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> the build fails in Windows because commons-daemon TAR cannot be fetched
> ---
>
> Key: HDFS-2322
> URL: https://issues.apache.org/jira/browse/HDFS-2322
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 0.23.0, 0.24.0
>
> Attachments: HDFS-2322v1.patch
>
>
> For Windows there is no commons-daemon TAR but a ZIP, and the name follows a 
> different convention. 





[jira] [Updated] (HDFS-2294) Download of commons-daemon TAR should not be under target

2011-10-07 Thread Alejandro Abdelnur (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HDFS-2294:
-

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> Download of commons-daemon TAR should not be under target
> -
>
> Key: HDFS-2294
> URL: https://issues.apache.org/jira/browse/HDFS-2294
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 0.23.0, 0.24.0
>
> Attachments: HDFS-2289.patch
>
>
> The committed HDFS-2289 downloads the commons-daemon TAR into 
> hadoop-hdfs/target/; earlier patches for HDFS-2289 used hadoop-hdfs/download/ 
> as the download location.
> The motivation for not using the 'target/' directory is that on every clean 
> build the TAR will be downloaded from the Apache archives. With a 'download/' 
> directory this happens only once per workspace.
> The patch also added the 'download/' directory to the .gitignore file 
> (it should also be svn-ignored).
> Besides downloading the TAR only once, this makes it possible to do a clean 
> build in disconnected mode.
> IMO, the latter is a nice developer capability.





[jira] [Commented] (HDFS-2414) TestDFSRollback fails intermittently

2011-10-07 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13123049#comment-13123049
 ] 

Todd Lipcon commented on HDFS-2414:
---

Thanks. I'm also looping this here, haven't seen a failure yet. I'll take a 
look at your logs.

> TestDFSRollback fails intermittently
> 
>
> Key: HDFS-2414
> URL: https://issues.apache.org/jira/browse/HDFS-2414
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node, test
>Affects Versions: 0.23.0
>Reporter: Robert Joseph Evans
>Priority: Critical
> Attachments: run-106-failed.tgz, run-158-failed.tgz
>
>
> When running TestDFSRollback repeatedly in a loop I observed a failure rate 
> of about 3%.  Two separate stack traces are in the output and it appears to 
> have something to do with not writing out a complete snapshot of the data for 
> rollback.
> {noformat}
> ---
> Test set: org.apache.hadoop.hdfs.TestDFSRollback
> ---
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 8.514 sec <<< 
> FAILURE!
> testRollback(org.apache.hadoop.hdfs.TestDFSRollback)  Time elapsed: 8.34 sec  
> <<< FAILURE!
> java.lang.AssertionError: File contents differed:
>   
> /home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data2/current/VERSION=5b19197114fad0a254e3f318b7f14aec
>   
> /home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data1/current/VERSION=ea7b000a6a1711169fc7a836b240a991
> at org.junit.Assert.fail(Assert.java:91)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.assertFileContentsSame(FSImageTestUtil.java:250)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.assertParallelFilesAreIdentical(FSImageTestUtil.java:236)
> at 
> org.apache.hadoop.hdfs.TestDFSRollback.checkResult(TestDFSRollback.java:86)
> at 
> org.apache.hadoop.hdfs.TestDFSRollback.testRollback(TestDFSRollback.java:171)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at junit.framework.TestCase.runTest(TestCase.java:168)
> at junit.framework.TestCase.runBare(TestCase.java:134)
> at junit.framework.TestResult$1.protect(TestResult.java:110)
> at junit.framework.TestResult.runProtected(TestResult.java:128)
> at junit.framework.TestResult.run(TestResult.java:113)
> at junit.framework.TestCase.run(TestCase.java:124)
> at junit.framework.TestSuite.runTest(TestSuite.java:232)
> at junit.framework.TestSuite.run(TestSuite.java:227)
> at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
> at 
> org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:59)
> at 
> org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.executeTestSet(AbstractDirectoryTestSuite.java:120)
> at 
> org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.execute(AbstractDirectoryTestSuite.java:145)
> at org.apache.maven.surefire.Surefire.run(Surefire.java:104)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.maven.surefire.booter.SurefireBooter.runSuitesInProcess(SurefireBooter.java:290)
> at 
> org.apache.maven.surefire.booter.SurefireBooter.main(SurefireBooter.java:1017)
> {noformat}
> is the more common one, but I also saw
> {noformat}
> ---
> Test set: org.apache.hadoop.hdfs.TestDFSRollback
> ---
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 7.471 sec <<< 
> FAILURE!
> testRollback(org.apache.hadoop.hdfs.TestDFSRollback)  Time elapsed: 7.304 sec 
>  <<< FAILURE!
> junit.framework.AssertionFailedError: Expected substring 'file VERSION has 
> layoutVersion missing' in exception but got: 
> java.lang.IllegalArgumentException: Malformed \u encoding.
> at java.util.Properties.loadConvert(Properties.java:552)
> at java.util.Properties.load0(Properties.java:374)
> 
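The "Malformed \u encoding" failure above is consistent with java.util.Properties choking on a partially written VERSION file: if the file is cut off in the middle of a \uXXXX escape, Properties.load() rejects it before any HDFS-level "file VERSION has layoutVersion missing" check can run. A minimal sketch of that behavior (the VERSION content here is hypothetical, not taken from the failing run):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.util.Properties;

// Shows that Properties.load() throws IllegalArgumentException on a
// truncated \uXXXX escape, masking any later validation of the file.
public class MalformedVersionDemo {
    public static void main(String[] args) throws IOException {
        // Hypothetical VERSION content cut off mid-escape, as an
        // interrupted snapshot write might leave it.
        byte[] truncated = "layoutVersion=\\u00".getBytes("ISO-8859-1");
        Properties props = new Properties();
        try {
            props.load(new ByteArrayInputStream(truncated));
            throw new AssertionError("expected IllegalArgumentException");
        } catch (IllegalArgumentException e) {
            // Properties fails during parsing, so the test sees this message
            // instead of the expected 'layoutVersion missing' substring.
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```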

[jira] [Updated] (HDFS-2414) TestDFSRollback fails intermittently

2011-10-07 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-2414:
--

Attachment: run-158-failed.tgz
run-106-failed.tgz

Here are the logs: run-106 corresponds to the first failure (the diff failure) 
and run-158 corresponds to the second failure.

> TestDFSRollback fails intermittently
> 
>
> Key: HDFS-2414
> URL: https://issues.apache.org/jira/browse/HDFS-2414
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node, test
>Affects Versions: 0.23.0
>Reporter: Robert Joseph Evans
>Priority: Critical
> Attachments: run-106-failed.tgz, run-158-failed.tgz
>
>
> When running TestDFSRollback repeatedly in a loop I observed a failure rate 
> of about 3%.  Two separate stack traces are in the output and it appears to 
> have something to do with not writing out a complete snapshot of the data for 
> rollback.
> {noformat}
> ---
> Test set: org.apache.hadoop.hdfs.TestDFSRollback
> ---
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 8.514 sec <<< 
> FAILURE!
> testRollback(org.apache.hadoop.hdfs.TestDFSRollback)  Time elapsed: 8.34 sec  
> <<< FAILURE!
> java.lang.AssertionError: File contents differed:
>   
> /home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data2/current/VERSION=5b19197114fad0a254e3f318b7f14aec
>   
> /home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data1/current/VERSION=ea7b000a6a1711169fc7a836b240a991
> at org.junit.Assert.fail(Assert.java:91)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.assertFileContentsSame(FSImageTestUtil.java:250)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.assertParallelFilesAreIdentical(FSImageTestUtil.java:236)
> at 
> org.apache.hadoop.hdfs.TestDFSRollback.checkResult(TestDFSRollback.java:86)
> at 
> org.apache.hadoop.hdfs.TestDFSRollback.testRollback(TestDFSRollback.java:171)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at junit.framework.TestCase.runTest(TestCase.java:168)
> at junit.framework.TestCase.runBare(TestCase.java:134)
> at junit.framework.TestResult$1.protect(TestResult.java:110)
> at junit.framework.TestResult.runProtected(TestResult.java:128)
> at junit.framework.TestResult.run(TestResult.java:113)
> at junit.framework.TestCase.run(TestCase.java:124)
> at junit.framework.TestSuite.runTest(TestSuite.java:232)
> at junit.framework.TestSuite.run(TestSuite.java:227)
> at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
> at 
> org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:59)
> at 
> org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.executeTestSet(AbstractDirectoryTestSuite.java:120)
> at 
> org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.execute(AbstractDirectoryTestSuite.java:145)
> at org.apache.maven.surefire.Surefire.run(Surefire.java:104)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.maven.surefire.booter.SurefireBooter.runSuitesInProcess(SurefireBooter.java:290)
> at 
> org.apache.maven.surefire.booter.SurefireBooter.main(SurefireBooter.java:1017)
> {noformat}
> is the more common one, but I also saw
> {noformat}
> ---
> Test set: org.apache.hadoop.hdfs.TestDFSRollback
> ---
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 7.471 sec <<< 
> FAILURE!
> testRollback(org.apache.hadoop.hdfs.TestDFSRollback)  Time elapsed: 7.304 sec 
>  <<< FAILURE!
> junit.framework.AssertionFailedError: Expected substring 'file VERSION has 
> layoutVersion missing' in exception but got: 
> java.lang.IllegalArgumentException: Malformed \u encoding.
> at java.util.Properties.loadConvert(Properties.java:552)
> 

[jira] [Commented] (HDFS-2412) Add backwards-compatibility layer for FSConstants

2011-10-07 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13123030#comment-13123030
 ] 

Aaron T. Myers commented on HDFS-2412:
--

+1

> Add backwards-compatibility layer for FSConstants
> -
>
> Key: HDFS-2412
> URL: https://issues.apache.org/jira/browse/HDFS-2412
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.23.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Blocker
> Fix For: 0.23.0
>
> Attachments: hdfs-2412.txt
>
>
> HDFS-1620 renamed FSConstants, which we believed to be a private class. But 
> currently the public APIs for safe-mode and datanode reports depend on 
> constants in FSConstants. This is breaking HBase builds against 0.23. This 
> JIRA is to provide a backward-compatibility route.





[jira] [Commented] (HDFS-2414) TestDFSRollback fails intermittently

2011-10-07 Thread Robert Joseph Evans (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13123028#comment-13123028
 ] 

Robert Joseph Evans commented on HDFS-2414:
---

I have the full logs for 1039 runs, of which 28 failed.  I will upload the logs 
for a couple of them.  I have not tried it on 22 or 20 yet.  I have some 
on-call work I have to slog through before I can spend the time trying to get 
that to happen.

> TestDFSRollback fails intermittently
> 
>
> Key: HDFS-2414
> URL: https://issues.apache.org/jira/browse/HDFS-2414
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node, test
>Affects Versions: 0.23.0
>Reporter: Robert Joseph Evans
>Priority: Critical
>
> When running TestDFSRollback repeatedly in a loop I observed a failure rate 
> of about 3%.  Two separate stack traces are in the output and it appears to 
> have something to do with not writing out a complete snapshot of the data for 
> rollback.
> {noformat}
> ---
> Test set: org.apache.hadoop.hdfs.TestDFSRollback
> ---
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 8.514 sec <<< 
> FAILURE!
> testRollback(org.apache.hadoop.hdfs.TestDFSRollback)  Time elapsed: 8.34 sec  
> <<< FAILURE!
> java.lang.AssertionError: File contents differed:
>   
> /home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data2/current/VERSION=5b19197114fad0a254e3f318b7f14aec
>   
> /home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data1/current/VERSION=ea7b000a6a1711169fc7a836b240a991
> at org.junit.Assert.fail(Assert.java:91)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.assertFileContentsSame(FSImageTestUtil.java:250)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.assertParallelFilesAreIdentical(FSImageTestUtil.java:236)
> at 
> org.apache.hadoop.hdfs.TestDFSRollback.checkResult(TestDFSRollback.java:86)
> at 
> org.apache.hadoop.hdfs.TestDFSRollback.testRollback(TestDFSRollback.java:171)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at junit.framework.TestCase.runTest(TestCase.java:168)
> at junit.framework.TestCase.runBare(TestCase.java:134)
> at junit.framework.TestResult$1.protect(TestResult.java:110)
> at junit.framework.TestResult.runProtected(TestResult.java:128)
> at junit.framework.TestResult.run(TestResult.java:113)
> at junit.framework.TestCase.run(TestCase.java:124)
> at junit.framework.TestSuite.runTest(TestSuite.java:232)
> at junit.framework.TestSuite.run(TestSuite.java:227)
> at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
> at 
> org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:59)
> at 
> org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.executeTestSet(AbstractDirectoryTestSuite.java:120)
> at 
> org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.execute(AbstractDirectoryTestSuite.java:145)
> at org.apache.maven.surefire.Surefire.run(Surefire.java:104)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.maven.surefire.booter.SurefireBooter.runSuitesInProcess(SurefireBooter.java:290)
> at 
> org.apache.maven.surefire.booter.SurefireBooter.main(SurefireBooter.java:1017)
> {noformat}
> is the more common one, but I also saw
> {noformat}
> ---
> Test set: org.apache.hadoop.hdfs.TestDFSRollback
> ---
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 7.471 sec <<< 
> FAILURE!
> testRollback(org.apache.hadoop.hdfs.TestDFSRollback)  Time elapsed: 7.304 sec 
>  <<< FAILURE!
> junit.framework.AssertionFailedError: Expected substring 'file VERSION has 
> layoutVersion missing' in exception but got: 
> java.lang.IllegalArgumentException: Malformed \u encoding.
> at java.util.Properties.loadConvert(Properties.java:552)

[jira] [Commented] (HDFS-2412) Add backwards-compatibility layer for FSConstants

2011-10-07 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13123029#comment-13123029
 ] 

Todd Lipcon commented on HDFS-2412:
---

Can I get a +1 on this small patch in the meantime, so we can get the 
HBase-on-23 build back to compiling? Then we can consider whether we should 
just revert the whole thing.

> Add backwards-compatibility layer for FSConstants
> -
>
> Key: HDFS-2412
> URL: https://issues.apache.org/jira/browse/HDFS-2412
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.23.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Blocker
> Fix For: 0.23.0
>
> Attachments: hdfs-2412.txt
>
>
> HDFS-1620 renamed FSConstants, which we believed to be a private class. But 
> currently the public APIs for safe-mode and datanode reports depend on 
> constants in FSConstants. This is breaking HBase builds against 0.23. This 
> JIRA is to provide a backward-compatibility route.





[jira] [Updated] (HDFS-2205) Log message for failed connection to datanode is not followed by a success message.

2011-10-07 Thread Ravi Prakash (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HDFS-2205:
---

Attachment: HDFS-2205.patch

Rebased the patch to the current HEAD. Applies to both trunk and branch-0.23.

> Log message for failed connection to datanode is not followed by a success 
> message.
> ---
>
> Key: HDFS-2205
> URL: https://issues.apache.org/jira/browse/HDFS-2205
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs client
>Affects Versions: 0.23.0
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Fix For: 0.23.0
>
> Attachments: HDFS-2205.patch, HDFS-2205.patch
>
>
> To avoid confusing users about whether their HDFS operation was successful or 
> not, a success message should be printed.
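The improvement described above amounts to logging a success message after a retry recovers, so the earlier connection-failure WARN is not the last thing the user sees. A hypothetical sketch of that pattern (names are illustrative, not the actual DFSClient code):

```java
import java.util.logging.Logger;

// Sketch of the logging behavior HDFS-2205 asks for: pair every
// "failed to connect" WARN with an INFO once a retry succeeds.
public class RetryLoggingDemo {
    private static final Logger LOG = Logger.getLogger("RetryLoggingDemo");

    interface Connector { void connect() throws Exception; }

    static void readWithRetry(Connector c, int maxTries) throws Exception {
        for (int attempt = 1; attempt <= maxTries; attempt++) {
            try {
                c.connect();
                if (attempt > 1) {
                    // The proposed success message.
                    LOG.info("Successfully connected on attempt " + attempt);
                }
                return;
            } catch (Exception e) {
                LOG.warning("Failed to connect (attempt " + attempt + "): "
                        + e.getMessage());
                if (attempt == maxTries) throw e;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        final int[] calls = {0};
        // Fails once, then succeeds: both the WARN and the INFO are logged.
        readWithRetry(() -> {
            if (calls[0]++ == 0) throw new Exception("connection refused");
        }, 3);
        if (calls[0] != 2) throw new AssertionError("expected 2 attempts");
        System.out.println("done");
    }
}
```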





[jira] [Commented] (HDFS-2414) TestDFSRollback fails intermittently

2011-10-07 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13123020#comment-13123020
 ] 

Todd Lipcon commented on HDFS-2414:
---

Interesting. Do you have the full logs for either of these cases? Can you 
verify that these failures are new in 23 vs 22 or 20?

> TestDFSRollback fails intermittently
> 
>
> Key: HDFS-2414
> URL: https://issues.apache.org/jira/browse/HDFS-2414
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node, test
>Affects Versions: 0.23.0
>Reporter: Robert Joseph Evans
>Priority: Critical
>
> When running TestDFSRollback repeatedly in a loop I observed a failure rate 
> of about 3%.  Two separate stack traces are in the output and it appears to 
> have something to do with not writing out a complete snapshot of the data for 
> rollback.
> {noformat}
> ---
> Test set: org.apache.hadoop.hdfs.TestDFSRollback
> ---
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 8.514 sec <<< 
> FAILURE!
> testRollback(org.apache.hadoop.hdfs.TestDFSRollback)  Time elapsed: 8.34 sec  
> <<< FAILURE!
> java.lang.AssertionError: File contents differed:
>   
> /home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data2/current/VERSION=5b19197114fad0a254e3f318b7f14aec
>   
> /home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data1/current/VERSION=ea7b000a6a1711169fc7a836b240a991
> at org.junit.Assert.fail(Assert.java:91)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.assertFileContentsSame(FSImageTestUtil.java:250)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.assertParallelFilesAreIdentical(FSImageTestUtil.java:236)
> at 
> org.apache.hadoop.hdfs.TestDFSRollback.checkResult(TestDFSRollback.java:86)
> at 
> org.apache.hadoop.hdfs.TestDFSRollback.testRollback(TestDFSRollback.java:171)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at junit.framework.TestCase.runTest(TestCase.java:168)
> at junit.framework.TestCase.runBare(TestCase.java:134)
> at junit.framework.TestResult$1.protect(TestResult.java:110)
> at junit.framework.TestResult.runProtected(TestResult.java:128)
> at junit.framework.TestResult.run(TestResult.java:113)
> at junit.framework.TestCase.run(TestCase.java:124)
> at junit.framework.TestSuite.runTest(TestSuite.java:232)
> at junit.framework.TestSuite.run(TestSuite.java:227)
> at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
> at 
> org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:59)
> at 
> org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.executeTestSet(AbstractDirectoryTestSuite.java:120)
> at 
> org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.execute(AbstractDirectoryTestSuite.java:145)
> at org.apache.maven.surefire.Surefire.run(Surefire.java:104)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.maven.surefire.booter.SurefireBooter.runSuitesInProcess(SurefireBooter.java:290)
> at 
> org.apache.maven.surefire.booter.SurefireBooter.main(SurefireBooter.java:1017)
> {noformat}
> is the more common one, but I also saw
> {noformat}
> ---
> Test set: org.apache.hadoop.hdfs.TestDFSRollback
> ---
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 7.471 sec <<< 
> FAILURE!
> testRollback(org.apache.hadoop.hdfs.TestDFSRollback)  Time elapsed: 7.304 sec 
>  <<< FAILURE!
> junit.framework.AssertionFailedError: Expected substring 'file VERSION has 
> layoutVersion missing' in exception but got: 
> java.lang.IllegalArgumentException: Malformed \u encoding.
> at java.util.Properties.loadConvert(Properties.java:552)
> at java.util.Properties.load0(Properties.java:374)
> at java.util.Properties

[jira] [Assigned] (HDFS-2416) hadoop calls cat, tail, get, copyToLocal, distcp on a secure cluster with an webhdfs uri fail with a 401

2011-10-07 Thread Jitendra Nath Pandey (Assigned) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey reassigned HDFS-2416:
--

Assignee: Jitendra Nath Pandey

> hadoop calls cat, tail, get, copyToLocal, distcp on a secure cluster with an 
> webhdfs uri fail with a 401
> 
>
> Key: HDFS-2416
> URL: https://issues.apache.org/jira/browse/HDFS-2416
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.20.205.0
>Reporter: Arpit Gupta
>Assignee: Jitendra Nath Pandey
>






[jira] [Updated] (HDFS-2416) hadoop calls cat, tail, get, copyToLocal, distcp on a secure cluster with an webhdfs uri fail with a 401

2011-10-07 Thread Arpit Gupta (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HDFS-2416:
--

Summary: hadoop calls cat, tail, get, copyToLocal, distcp on a secure 
cluster with an webhdfs uri fail with a 401  (was: hadoop calls cat, tail, 
copyToLocal, distcp on a secure cluster with an webhdfs uri fail with a 401)

> hadoop calls cat, tail, get, copyToLocal, distcp on a secure cluster with an 
> webhdfs uri fail with a 401
> 
>
> Key: HDFS-2416
> URL: https://issues.apache.org/jira/browse/HDFS-2416
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.20.205.0
>Reporter: Arpit Gupta
>






[jira] [Commented] (HDFS-2416) hadoop calls cat, tail, copyToLocal, distcp on a secure cluster with an webhdfs uri fail with a 401

2011-10-07 Thread Arpit Gupta (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13122971#comment-13122971
 ] 

Arpit Gupta commented on HDFS-2416:
---

Here is the output of the distcp job:

org.apache.hadoop.ipc.RemoteException: Delegation Token can be issued only with 
kerberos or web authentication
at 
org.apache.hadoop.hdfs.web.JsonUtil.toRemoteException(JsonUtil.java:111)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:229)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.run(WebHdfsFileSystem.java:321)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getDelegationToken(WebHdfsFileSystem.java:502)
at 
org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:118)
at 
org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:79)
at org.apache.hadoop.tools.DistCp.checkSrcPath(DistCp.java:632)
at org.apache.hadoop.tools.DistCp.copy(DistCp.java:656)
at org.apache.hadoop.tools.DistCp.run(DistCp.java:881)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
at org.apache.hadoop.tools.DistCp.main(DistCp.java:908)



a dfs -cat returns the following...


cat: Unauthorized (error code=401)
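The RemoteException in the distcp trace above is the NameNode refusing to issue a delegation token because the WebHDFS call path did not present Kerberos credentials. The rule being tripped can be sketched roughly as follows (class and method names are hypothetical, not the actual 0.20.205 implementation):

```java
// Illustrative guard, not Hadoop code: a delegation token is only issued
// when the caller authenticated via Kerberos or a trusted web channel.
public class DelegationTokenGuard {
    public enum AuthMethod { SIMPLE, KERBEROS, TOKEN, WEB }

    public static String issueToken(AuthMethod callerAuth, String renewer) {
        if (callerAuth != AuthMethod.KERBEROS && callerAuth != AuthMethod.WEB) {
            // Matches the error text reported by the distcp run.
            throw new IllegalStateException(
                "Delegation Token can be issued only with kerberos or web authentication");
        }
        return "token-for-" + renewer; // placeholder token material
    }

    public static void main(String[] args) {
        boolean rejected = false;
        try {
            issueToken(AuthMethod.SIMPLE, "mapred");
        } catch (IllegalStateException e) {
            rejected = true;
        }
        if (!rejected) throw new AssertionError("SIMPLE auth should be rejected");
        System.out.println(issueToken(AuthMethod.KERBEROS, "mapred"));
    }
}
```

The 401 from dfs -cat is the same class of failure surfacing on the plain HTTP read path rather than the token-issuance path.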

> hadoop calls cat, tail, copyToLocal, distcp on a secure cluster with an 
> webhdfs uri fail with a 401
> ---
>
> Key: HDFS-2416
> URL: https://issues.apache.org/jira/browse/HDFS-2416
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.20.205.0
>Reporter: Arpit Gupta
>






[jira] [Created] (HDFS-2416) hadoop calls cat, tail, copyToLocal, distcp on a secure cluster with an webhdfs uri fail with a 401

2011-10-07 Thread Arpit Gupta (Created) (JIRA)
hadoop calls cat, tail, copyToLocal, distcp on a secure cluster with an webhdfs 
uri fail with a 401
---

 Key: HDFS-2416
 URL: https://issues.apache.org/jira/browse/HDFS-2416
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.20.205.0
Reporter: Arpit Gupta








[jira] [Commented] (HDFS-2322) the build fails in Windows because commons-daemon TAR cannot be fetched

2011-10-07 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13122963#comment-13122963
 ] 

Hudson commented on HDFS-2322:
--

Integrated in Hadoop-Mapreduce-trunk-Commit #1055 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1055/])
HDFS-2322. the build fails in Windows because commons-daemon TAR cannot be 
fetched. (tucu)

tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1180094
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml


> the build fails in Windows because commons-daemon TAR cannot be fetched
> ---
>
> Key: HDFS-2322
> URL: https://issues.apache.org/jira/browse/HDFS-2322
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 0.23.0, 0.24.0
>
> Attachments: HDFS-2322v1.patch
>
>
> For Windows there is no commons-daemon TAR but a ZIP, and the name follows a 
> different convention. 





[jira] [Commented] (HDFS-2322) the build fails in Windows because commons-daemon TAR cannot be fetched

2011-10-07 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13122950#comment-13122950
 ] 

Hudson commented on HDFS-2322:
--

Integrated in Hadoop-Common-trunk-Commit #1036 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1036/])
HDFS-2322. the build fails in Windows because commons-daemon TAR cannot be 
fetched. (tucu)

tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1180094
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml


> the build fails in Windows because commons-daemon TAR cannot be fetched
> ---
>
> Key: HDFS-2322
> URL: https://issues.apache.org/jira/browse/HDFS-2322
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 0.23.0, 0.24.0
>
> Attachments: HDFS-2322v1.patch
>
>
> For Windows there is no commons-daemon TAR but a ZIP, and the name follows a 
> different convention. 





[jira] [Commented] (HDFS-2322) the build fails in Windows because commons-daemon TAR cannot be fetched

2011-10-07 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13122951#comment-13122951
 ] 

Hudson commented on HDFS-2322:
--

Integrated in Hadoop-Hdfs-trunk-Commit #1114 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/1114/])
HDFS-2322. the build fails in Windows because commons-daemon TAR cannot be 
fetched. (tucu)

tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1180094


> the build fails in Windows because commons-daemon TAR cannot be fetched
> ---
>
> Key: HDFS-2322
> URL: https://issues.apache.org/jira/browse/HDFS-2322
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 0.23.0, 0.24.0
>
> Attachments: HDFS-2322v1.patch
>
>
> For windows there is no commons-daemon TAR but a ZIP, plus the name follows a 
> different convention. 





[jira] [Commented] (HDFS-2322) the build fails in Windows because commons-daemon TAR cannot be fetched

2011-10-07 Thread Alejandro Abdelnur (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13122936#comment-13122936
 ] 

Alejandro Abdelnur commented on HDFS-2322:
--

Committed to trunk; in a few days it will be committed to 0.23 in a combo with 
other Maven-related patches.

> the build fails in Windows because commons-daemon TAR cannot be fetched
> ---
>
> Key: HDFS-2322
> URL: https://issues.apache.org/jira/browse/HDFS-2322
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 0.23.0, 0.24.0
>
> Attachments: HDFS-2322v1.patch
>
>
> For windows there is no commons-daemon TAR but a ZIP, plus the name follows a 
> different convention. 





[jira] [Commented] (HDFS-2322) the build fails in Windows because commons-daemon TAR cannot be fetched

2011-10-07 Thread Alejandro Abdelnur (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13122928#comment-13122928
 ] 

Alejandro Abdelnur commented on HDFS-2322:
--

Thanks ATM, yes I do. Still I won't rename my son to Aaron.

> the build fails in Windows because commons-daemon TAR cannot be fetched
> ---
>
> Key: HDFS-2322
> URL: https://issues.apache.org/jira/browse/HDFS-2322
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 0.23.0, 0.24.0
>
> Attachments: HDFS-2322v1.patch
>
>
> For windows there is no commons-daemon TAR but a ZIP, plus the name follows a 
> different convention. 





[jira] [Commented] (HDFS-1762) Allow TestHDFSCLI to be run against a cluster

2011-10-07 Thread Roman Shaposhnik (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13122924#comment-13122924
 ] 

Roman Shaposhnik commented on HDFS-1762:


@Cos, please commit it to .22; the Bigtop validation has passed.

> Allow TestHDFSCLI to be run against a cluster
> -
>
> Key: HDFS-1762
> URL: https://issues.apache.org/jira/browse/HDFS-1762
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: build, test
>Affects Versions: 0.22.0
>Reporter: Tom White
>Assignee: Konstantin Boudnik
> Attachments: HDFS-1762-20.patch, HDFS-1762.common.patch, 
> HDFS-1762.hdfs.patch, HDFS-1762.mapreduce.patch
>
>
> Currently TestHDFSCLI starts mini clusters to run tests against. It would be 
> useful to be able to support running against arbitrary clusters for testing 
> purposes.





[jira] [Commented] (HDFS-2209) Make MiniDFS easier to embed in other apps

2011-10-07 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13122898#comment-13122898
 ] 

Hudson commented on HDFS-2209:
--

Integrated in Hadoop-Mapreduce-trunk-Commit #1054 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1054/])
HDFS-2209. Make MiniDFS easier to embed in other apps.

stevel : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1180077
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestCrcCorruption.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCorruption.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMiniDFSCluster.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestOverReplicatedBlocks.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureToleration.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDeleteBlockPool.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDiskError.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestListCorruptFileBlocks.java


> Make MiniDFS easier to embed in other apps
> --
>
> Key: HDFS-2209
> URL: https://issues.apache.org/jira/browse/HDFS-2209
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 0.20.203.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 0.23.0, 0.24.0
>
> Attachments: HDFS-2209.patch, HDFS-2209.patch, HDFS-2209.patch, 
> HDFS-2209.patch, HDFS-2209.patch
>
>   Original Estimate: 1h
>  Time Spent: 1.5h
>  Remaining Estimate: 2h
>
> I've been deploying MiniDFSCluster for some testing, and while using 
> it/looking through the code I made some notes of where there are issues and 
> improvement opportunities. This is mostly minor as it's a test tool, but a 
> risk of synchronization problems is there and does need addressing; the rest 
> are all feature creep. 
> Field {{nameNode}} should be marked as volatile, as the shutdown operation 
> can run on a different thread than startup. Better still, 
> add synchronized methods to set and get the field, as well as shutdown.
> The data dir is set from System Properties.
> {code}
> base_dir = new File(System.getProperty("test.build.data", 
> "build/test/data"), "dfs/");
> data_dir = new File(base_dir, "data");
> {code}
> This is done in {{formatDataNodeDirs()}}, {{corruptBlockOnDataNode()}} and 
> the constructor.
> Improvement: have a test property in the conf file, and only read the system 
> property if this is unset. This will enable multiple MiniDFSClusters to come 
> up in the same JVM, handle shutdown/startup race conditions better, and 
> avoid the "java.io.IOException: Cannot lock storage 
> build/test/data/dfs/name1. The directory is already locked." messages.
> Messages should log to commons-logging and not {{System.err}} and 
> {{System.out}}. This enables containers to catch and stream better, 
> and include more diagnostics such as timestamp and thread ID.
> The class could benefit from a method to return the FS URI, rather than just 
> the FS. This currently has to be worked around with some tricks involving a 
> cached configuration.
> {{waitActive()}} could get confused if "localhost" maps to an IPv6 address. 
> Better to ask for 127.0.0.1 as the hostname; JUnit test runs may need to be 
> set up to force IPv4 too.
> {{injectBlocks}} has a spelling error in its IOException message: 
> "SumulatedFSDataset" should be "SimulatedFSDataset".





[jira] [Created] (HDFS-2415) Move MiniDFS out of test JAR and into the main hadoop-hdfs JAR

2011-10-07 Thread Steve Loughran (Created) (JIRA)
Move MiniDFS out of test JAR and into the main hadoop-hdfs JAR
--

 Key: HDFS-2415
 URL: https://issues.apache.org/jira/browse/HDFS-2415
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Reporter: Steve Loughran
Priority: Minor


This is just an idea: move the MiniDFS cluster out of the Hadoop test JAR and 
into the main redistributable.

This would make it easier for people downstream to use it. It is the easiest 
way to bring up a DFS cluster in a single JVM, and together with the MiniMR 
cluster it is a common way to test MR jobs in the IDE against small datasets. 

Moving the file while keeping the package name should not cause any problems; 
all it will do is make applying outstanding patches to it slightly harder. 






[jira] [Commented] (HDFS-2209) Make MiniDFS easier to embed in other apps

2011-10-07 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13122881#comment-13122881
 ] 

Hudson commented on HDFS-2209:
--

Integrated in Hadoop-Hdfs-trunk-Commit #1113 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/1113/])
HDFS-2209. Make MiniDFS easier to embed in other apps.

stevel : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1180077
Files : same list as the Hadoop-Mapreduce-trunk-Commit #1054 notification above.


> Make MiniDFS easier to embed in other apps
> --
>
> Key: HDFS-2209
> URL: https://issues.apache.org/jira/browse/HDFS-2209
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 0.20.203.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 0.23.0, 0.24.0
>
> Attachments: HDFS-2209.patch, HDFS-2209.patch, HDFS-2209.patch, 
> HDFS-2209.patch, HDFS-2209.patch
>
>   Original Estimate: 1h
>  Time Spent: 1.5h
>  Remaining Estimate: 2h
>
> (Issue description quoted in full in the earlier HDFS-2209 notification.)





[jira] [Commented] (HDFS-2209) Make MiniDFS easier to embed in other apps

2011-10-07 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13122877#comment-13122877
 ] 

Hudson commented on HDFS-2209:
--

Integrated in Hadoop-Common-trunk-Commit #1035 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1035/])
HDFS-2209. Make MiniDFS easier to embed in other apps.

stevel : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1180077
Files : same list as the Hadoop-Mapreduce-trunk-Commit #1054 notification above.


> Make MiniDFS easier to embed in other apps
> --
>
> Key: HDFS-2209
> URL: https://issues.apache.org/jira/browse/HDFS-2209
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 0.20.203.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 0.23.0, 0.24.0
>
> Attachments: HDFS-2209.patch, HDFS-2209.patch, HDFS-2209.patch, 
> HDFS-2209.patch, HDFS-2209.patch
>
>   Original Estimate: 1h
>  Time Spent: 1.5h
>  Remaining Estimate: 2h
>
> (Issue description quoted in full in the earlier HDFS-2209 notification.)





[jira] [Updated] (HDFS-2209) Make MiniDFS easier to embed in other apps

2011-10-07 Thread Steve Loughran (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-2209:
-

  Resolution: Fixed
   Fix Version/s: 0.23.0
Target Version/s: 0.23.0, 0.24.0  (was: 0.24.0, 0.23.0)
  Status: Resolved  (was: Patch Available)

committed to 0.23 and trunk; same patch applies

> Make MiniDFS easier to embed in other apps
> --
>
> Key: HDFS-2209
> URL: https://issues.apache.org/jira/browse/HDFS-2209
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 0.20.203.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 0.23.0, 0.24.0
>
> Attachments: HDFS-2209.patch, HDFS-2209.patch, HDFS-2209.patch, 
> HDFS-2209.patch, HDFS-2209.patch
>
>   Original Estimate: 1h
>  Time Spent: 1.5h
>  Remaining Estimate: 2h
>
> (Issue description quoted in full in the earlier HDFS-2209 notification.)





[jira] [Updated] (HDFS-2209) Make MiniDFS easier to embed in other apps

2011-10-07 Thread Steve Loughran (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-2209:
-

Target Version/s: 0.23.0, 0.24.0  (was: 0.24.0, 0.23.0)
 Summary: Make MiniDFS easier to embed in other apps  (was: MiniDFS 
cluster improvements)

changed title for CHANGES.TXT

> Make MiniDFS easier to embed in other apps
> --
>
> Key: HDFS-2209
> URL: https://issues.apache.org/jira/browse/HDFS-2209
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 0.20.203.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 0.24.0
>
> Attachments: HDFS-2209.patch, HDFS-2209.patch, HDFS-2209.patch, 
> HDFS-2209.patch, HDFS-2209.patch
>
>   Original Estimate: 1h
>  Time Spent: 1.5h
>  Remaining Estimate: 2h
>
> (Issue description quoted in full in the earlier HDFS-2209 notification.)





[jira] [Updated] (HDFS-2414) TestDFSRollback fails intermittently

2011-10-07 Thread Mahadev konar (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mahadev konar updated HDFS-2414:


Priority: Critical  (was: Major)
Target Version/s: 0.23.0, 0.24.0  (was: 0.24.0, 0.23.0)

> TestDFSRollback fails intermittently
> 
>
> Key: HDFS-2414
> URL: https://issues.apache.org/jira/browse/HDFS-2414
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node, test
>Affects Versions: 0.23.0
>Reporter: Robert Joseph Evans
>Priority: Critical
>
> When running TestDFSRollback repeatedly in a loop I observed a failure rate 
> of about 3%.  Two separate stack traces are in the output and it appears to 
> have something to do with not writing out a complete snapshot of the data for 
> rollback.
> {noformat}
> ---
> Test set: org.apache.hadoop.hdfs.TestDFSRollback
> ---
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 8.514 sec <<< 
> FAILURE!
> testRollback(org.apache.hadoop.hdfs.TestDFSRollback)  Time elapsed: 8.34 sec  
> <<< FAILURE!
> java.lang.AssertionError: File contents differed:
>   
> /home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data2/current/VERSION=5b19197114fad0a254e3f318b7f14aec
>   
> /home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data1/current/VERSION=ea7b000a6a1711169fc7a836b240a991
> at org.junit.Assert.fail(Assert.java:91)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.assertFileContentsSame(FSImageTestUtil.java:250)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.assertParallelFilesAreIdentical(FSImageTestUtil.java:236)
> at 
> org.apache.hadoop.hdfs.TestDFSRollback.checkResult(TestDFSRollback.java:86)
> at 
> org.apache.hadoop.hdfs.TestDFSRollback.testRollback(TestDFSRollback.java:171)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at junit.framework.TestCase.runTest(TestCase.java:168)
> at junit.framework.TestCase.runBare(TestCase.java:134)
> at junit.framework.TestResult$1.protect(TestResult.java:110)
> at junit.framework.TestResult.runProtected(TestResult.java:128)
> at junit.framework.TestResult.run(TestResult.java:113)
> at junit.framework.TestCase.run(TestCase.java:124)
> at junit.framework.TestSuite.runTest(TestSuite.java:232)
> at junit.framework.TestSuite.run(TestSuite.java:227)
> at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
> at 
> org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:59)
> at 
> org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.executeTestSet(AbstractDirectoryTestSuite.java:120)
> at 
> org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.execute(AbstractDirectoryTestSuite.java:145)
> at org.apache.maven.surefire.Surefire.run(Surefire.java:104)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.maven.surefire.booter.SurefireBooter.runSuitesInProcess(SurefireBooter.java:290)
> at 
> org.apache.maven.surefire.booter.SurefireBooter.main(SurefireBooter.java:1017)
> {noformat}
> is the more common one, but I also saw
> {noformat}
> ---
> Test set: org.apache.hadoop.hdfs.TestDFSRollback
> ---
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 7.471 sec <<< 
> FAILURE!
> testRollback(org.apache.hadoop.hdfs.TestDFSRollback)  Time elapsed: 7.304 sec 
>  <<< FAILURE!
> junit.framework.AssertionFailedError: Expected substring 'file VERSION has 
> layoutVersion missing' in exception but got: 
> java.lang.IllegalArgumentException: Malformed \u encoding.
> at java.util.Properties.loadConvert(Properties.java:552)
> at java.util.Properties.load0(Properties.java:374)
> at java.util.Properties.load(Properties.java:325)
> at 
> org.apache.hadoop.hdfs.server.common.Storage.r

[jira] [Created] (HDFS-2414) TestDFSRollback fails intermittently

2011-10-07 Thread Robert Joseph Evans (Created) (JIRA)
TestDFSRollback fails intermittently


 Key: HDFS-2414
 URL: https://issues.apache.org/jira/browse/HDFS-2414
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node, test
Affects Versions: 0.23.0
Reporter: Robert Joseph Evans


When running TestDFSRollback repeatedly in a loop I observed a failure rate of 
about 3%.  Two separate stack traces are in the output and it appears to have 
something to do with not writing out a complete snapshot of the data for 
rollback.

{noformat}
---
Test set: org.apache.hadoop.hdfs.TestDFSRollback
---
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 8.514 sec <<< 
FAILURE!
testRollback(org.apache.hadoop.hdfs.TestDFSRollback)  Time elapsed: 8.34 sec  
<<< FAILURE!
java.lang.AssertionError: File contents differed:
  
/home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data2/current/VERSION=5b19197114fad0a254e3f318b7f14aec
  
/home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data1/current/VERSION=ea7b000a6a1711169fc7a836b240a991
at org.junit.Assert.fail(Assert.java:91)
at 
org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.assertFileContentsSame(FSImageTestUtil.java:250)
at 
org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.assertParallelFilesAreIdentical(FSImageTestUtil.java:236)
at 
org.apache.hadoop.hdfs.TestDFSRollback.checkResult(TestDFSRollback.java:86)
at 
org.apache.hadoop.hdfs.TestDFSRollback.testRollback(TestDFSRollback.java:171)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at junit.framework.TestCase.runTest(TestCase.java:168)
at junit.framework.TestCase.runBare(TestCase.java:134)
at junit.framework.TestResult$1.protect(TestResult.java:110)
at junit.framework.TestResult.runProtected(TestResult.java:128)
at junit.framework.TestResult.run(TestResult.java:113)
at junit.framework.TestCase.run(TestCase.java:124)
at junit.framework.TestSuite.runTest(TestSuite.java:232)
at junit.framework.TestSuite.run(TestSuite.java:227)
at 
org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
at 
org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:59)
at 
org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.executeTestSet(AbstractDirectoryTestSuite.java:120)
at 
org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.execute(AbstractDirectoryTestSuite.java:145)
at org.apache.maven.surefire.Surefire.run(Surefire.java:104)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.maven.surefire.booter.SurefireBooter.runSuitesInProcess(SurefireBooter.java:290)
at 
org.apache.maven.surefire.booter.SurefireBooter.main(SurefireBooter.java:1017)
{noformat}

is the more common one, but I also saw

{noformat}
---
Test set: org.apache.hadoop.hdfs.TestDFSRollback
---
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 7.471 sec <<< 
FAILURE!
testRollback(org.apache.hadoop.hdfs.TestDFSRollback)  Time elapsed: 7.304 sec  
<<< FAILURE!
junit.framework.AssertionFailedError: Expected substring 'file VERSION has 
layoutVersion missing' in exception but got: 
java.lang.IllegalArgumentException: Malformed \u encoding.
at java.util.Properties.loadConvert(Properties.java:552)
at java.util.Properties.load0(Properties.java:374)
at java.util.Properties.load(Properties.java:325)
at 
org.apache.hadoop.hdfs.server.common.Storage.readPropertiesFile(Storage.java:837)
at 
org.apache.hadoop.hdfs.server.common.Storage.readPreviousVersionProperties(Storage.java:789)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.doRollback(FSImage.java:439)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:270)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:174)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize

[jira] [Commented] (HDFS-2209) MiniDFS cluster improvements

2011-10-07 Thread Steve Loughran (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13122843#comment-13122843
 ] 

Steve Loughran commented on HDFS-2209:
--

+1. This makes MiniDFS much easier to embed and use in downstream test runs

> MiniDFS cluster improvements
> 
>
> Key: HDFS-2209
> URL: https://issues.apache.org/jira/browse/HDFS-2209
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 0.20.203.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 0.24.0
>
> Attachments: HDFS-2209.patch, HDFS-2209.patch, HDFS-2209.patch, 
> HDFS-2209.patch, HDFS-2209.patch
>
>   Original Estimate: 1h
>  Time Spent: 1.5h
>  Remaining Estimate: 2h
>
> I've been deploying MiniDFSCluster for some testing, and while using 
> it/looking through the code I made some notes of where there are issues and 
> improvement opportunities. This is mostly minor as it's a test tool, but the 
> risk of synchronization problems is there and does need addressing; the rest 
> is all feature creep. 
> Field {{nameNode}} should be marked as volatile, as the shutdown operation can 
> run in a different thread than startup. Better still, 
> add synchronized methods to set and get the field, as well as to shut down.
> The data dir is set from System Properties.
> {code}
> base_dir = new File(System.getProperty("test.build.data", 
> "build/test/data"), "dfs/");
> data_dir = new File(base_dir, "data");
> {code}
> This is done in {{formatDataNodeDirs()}}, {{corruptBlockOnDataNode()}} and 
> the constructor.
> Improvement: have a test property in the conf file, and only read the system 
> property if this is unset. This will enable
>  multiple MiniDFSClusters to come up in the same JVM, handle 
> shutdown/startup race conditions better, and avoid the
>  "java.io.IOException: Cannot lock storage build/test/data/dfs/name1. The 
> directory is already locked." messages.
> Messages should log to commons logging and not {{System.err}} and 
> {{System.out}}. This enables containers to catch and stream better, 
> and to include more diagnostics such as timestamp and thread ID.
> The class could benefit from a method to return the FS URI, rather than just 
> the FS. This currently has to be worked around with some tricks involving a 
> cached configuration.
> {{waitActive()}} could get confused if "localhost" maps to an IPv6 address. 
> Better to ask for 127.0.0.1 as the hostname; JUnit
> test runs may need to be set up to force IPv4 too.
> {{injectBlocks}} has a spelling error in the IOException: 
> "SimulatedFSDataset" is the correct spelling.
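The conf-first lookup suggested in the description above can be sketched as follows. This is an illustrative standalone version, not MiniDFSCluster's actual API: the key name is reused for illustration only, and java.util.Properties stands in for Hadoop's Configuration.

```java
import java.util.Properties;

// Sketch of the proposed lookup order for the test data directory:
// prefer a key set in the per-cluster configuration, and only fall
// back to the JVM-wide system property when that key is unset. This
// is what would let several MiniDFSClusters in one JVM use distinct
// directories instead of colliding on one locked storage dir.
public class DataDirResolver {
    static String resolveBaseDir(Properties conf) {
        // 1. per-cluster setting (key name is illustrative)
        String dir = conf.getProperty("test.build.data");
        if (dir != null) {
            return dir;
        }
        // 2. JVM-wide fallback, as the current code does unconditionally
        return System.getProperty("test.build.data", "build/test/data");
    }

    public static void main(String[] args) {
        Properties clusterOne = new Properties();
        clusterOne.setProperty("test.build.data", "/tmp/dfs-one");
        Properties clusterTwo = new Properties();
        clusterTwo.setProperty("test.build.data", "/tmp/dfs-two");
        // Two clusters in one JVM now resolve to different directories.
        System.out.println(resolveBaseDir(clusterOne));
        System.out.println(resolveBaseDir(clusterTwo));
    }
}
```

With this order, existing callers that only set the system property keep working unchanged, since the fallback path is identical to today's behavior.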

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2209) MiniDFS cluster improvements

2011-10-07 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13122807#comment-13122807
 ] 

Hadoop QA commented on HDFS-2209:
-

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12498148/HDFS-2209.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 35 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/1352//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/1352//console

This message is automatically generated.





[jira] [Commented] (HDFS-2294) Download of commons-daemon TAR should not be under target

2011-10-07 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13122782#comment-13122782
 ] 

Hudson commented on HDFS-2294:
--

Integrated in Hadoop-Mapreduce-trunk #853 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/853/])
HDFS-2294. Download of commons-daemon TAR should not be under target (tucu)

tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1179894
Files : 
* /hadoop/common/trunk/.gitignore
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml


> Download of commons-daemon TAR should not be under target
> -
>
> Key: HDFS-2294
> URL: https://issues.apache.org/jira/browse/HDFS-2294
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 0.23.0, 0.24.0
>
> Attachments: HDFS-2289.patch
>
>
> Committed HDFS-2289 downloads the commons-daemon TAR into hadoop-hdfs/target/; 
> earlier patches for HDFS-2289 used hadoop-hdfs/download/ as the 
> location for the download.
> The motivation for not using the 'target/' directory is that on every clean 
> build the TAR will be downloaded again from the Apache archives. With a 'download' 
> directory, this happens only once per workspace.
> The patch was also adding the 'download/' directory to the .gitignore file 
> (it should also be svn-ignored).
> Besides downloading the TAR only once, this allows a clean build in 
> disconnected mode.
> IMO, the latter is a nice developer capability.





[jira] [Commented] (HDFS-2181) Separate HDFS Client wire protocol data types

2011-10-07 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13122772#comment-13122772
 ] 

Hudson commented on HDFS-2181:
--

Integrated in Hadoop-Mapreduce-trunk #853 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/853/])
HDFS-2181 Separate HDFS Client wire protocol data types (sanjay)

sradia : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1179877
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientDatanodeProtocol.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeID.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeInfo.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsFileStatus.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlock.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolProtocolBuffers
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolProtocolBuffers/overview.html
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible/ClientDatanodeProtocolServerSideTranslatorR23.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible/ClientDatanodeProtocolTranslatorR23.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible/ClientDatanodeWireProtocol.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible/ClientNamenodeProtocolServerSideTranslatorR23.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible/ClientNamenodeProtocolTranslatorR23.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible/ClientNamenodeWireProtocol.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible/ContentSummaryWritable.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible/CorruptFileBlocksWritable.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible/DatanodeIDWritable.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible/DatanodeInfoWritable.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible/DirectoryListingWritable.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible/ExtendedBlockWritable.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible/FsPermissionWritable.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible/FsServerDefaultsWritable.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible/HdfsFileStatusWritable.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible/HdfsLocatedFileStatusWritable.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible/LocatedBlockWritable.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible/LocatedBlocksWritable.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible/ProtocolSignatureWritable.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible/TokenWritable.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/

[jira] [Commented] (HDFS-2405) hadoop dfs command with webhdfs fails on secure hadoop

2011-10-07 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13122776#comment-13122776
 ] 

Hudson commented on HDFS-2405:
--

Integrated in Hadoop-Mapreduce-trunk #853 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/853/])
HDFS-2409. _HOST in dfs.web.authentication.kerberos.principal. Incorporates 
HDFS-2405 as well.

jitendra : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1179861
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/AuthFilter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserProvider.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestAuthFilter.java


> hadoop dfs command with webhdfs fails on secure hadoop
> --
>
> Key: HDFS-2405
> URL: https://issues.apache.org/jira/browse/HDFS-2405
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.20.205.0
>Reporter: Arpit Gupta
>Assignee: Jitendra Nath Pandey
>Priority: Critical
> Fix For: 0.20.205.0, 0.24.0
>
>






[jira] [Commented] (HDFS-2403) The renewer in NamenodeWebHdfsMethods.generateDelegationToken(..) is not used

2011-10-07 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13122774#comment-13122774
 ] 

Hudson commented on HDFS-2403:
--

Integrated in Hadoop-Mapreduce-trunk #853 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/853/])
HDFS-2403. NamenodeWebHdfsMethods.generateDelegationToken(..) does not use 
the renewer parameter.

szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1179856
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java


> The renewer in NamenodeWebHdfsMethods.generateDelegationToken(..) is not used
> -
>
> Key: HDFS-2403
> URL: https://issues.apache.org/jira/browse/HDFS-2403
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Fix For: 0.20.205.0, 0.24.0
>
> Attachments: h2403_20111005.patch, h2403_20111005_0.20.patch
>
>
> Below are some suggestions from Suresh.
> # renewer not used in #generateDelegationToken
> # put() does not use InputStream in and should not throw URISyntaxException
> # post() does not use InputStream in and should not throw URISyntaxException
> # get() should not throw URISyntaxException





[jira] [Commented] (HDFS-2409) _HOST in dfs.web.authentication.kerberos.principal.

2011-10-07 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13122773#comment-13122773
 ] 

Hudson commented on HDFS-2409:
--

Integrated in Hadoop-Mapreduce-trunk #853 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/853/])
HDFS-2409. _HOST in dfs.web.authentication.kerberos.principal. Incorporates 
HDFS-2405 as well.

jitendra : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1179861
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/AuthFilter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserProvider.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestAuthFilter.java


> _HOST in dfs.web.authentication.kerberos.principal.
> ---
>
> Key: HDFS-2409
> URL: https://issues.apache.org/jira/browse/HDFS-2409
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Fix For: 0.24.0
>
> Attachments: HDFS-2409-trunk.patch
>
>
> This is HDFS part of HADOOP-7721. 





[jira] [Updated] (HDFS-2209) MiniDFS cluster improvements

2011-10-07 Thread Steve Loughran (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-2209:
-

Target Version/s: 0.23.0, 0.24.0  (was: 0.24.0, 0.23.0)
  Status: Patch Available  (was: Open)





[jira] [Updated] (HDFS-2209) MiniDFS cluster improvements

2011-10-07 Thread Steve Loughran (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-2209:
-

Attachment: HDFS-2209.patch

rm unused import





[jira] [Updated] (HDFS-2209) MiniDFS cluster improvements

2011-10-07 Thread Steve Loughran (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-2209:
-

Target Version/s: 0.23.0, 0.24.0  (was: 0.24.0, 0.23.0)
  Status: Open  (was: Patch Available)





[jira] [Commented] (HDFS-2209) MiniDFS cluster improvements

2011-10-07 Thread Steve Loughran (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13122732#comment-13122732
 ] 

Steve Loughran commented on HDFS-2209:
--

the failing test is unrelated and, as Jenkins seems behind on its tests, I'm 
going to +1 and commit this

> MiniDFS cluster improvements
> 
>
> Key: HDFS-2209
> URL: https://issues.apache.org/jira/browse/HDFS-2209
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 0.20.203.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 0.24.0
>
> Attachments: HDFS-2209.patch, HDFS-2209.patch, HDFS-2209.patch, 
> HDFS-2209.patch
>
>   Original Estimate: 1h
>  Time Spent: 1.5h
>  Remaining Estimate: 2h
>
> I've been deploying MiniDFSCluster for some testing, and while using 
> it/looking through the code I made some notes of where there are issues and 
> improvement opportunities. This is mostly minor as it's a test tool, but the 
> risk of synchronization problems is there and does need addressing; the rest 
> is all feature creep. 
> Field {{nameNode}} should be marked volatile, as the shutdown operation can 
> run in a different thread than startup. Best of all, 
> add synchronized methods to set and get the field, and synchronize shutdown 
> as well.
> The data dir is set from System Properties.
> {code}
> base_dir = new File(System.getProperty("test.build.data", 
> "build/test/data"), "dfs/");
> data_dir = new File(base_dir, "data");
> {code}
> This is done in {{formatDataNodeDirs()}}, {{corruptBlockOnDataNode()}}, and 
> the constructor.
> Improvement: have a test property in the conf file, and only read the system 
> property if this is unset. This will enable 
> multiple MiniDFSClusters to come up in the same JVM, handle 
> shutdown/startup race conditions better, and avoid the 
> "java.io.IOException: Cannot lock storage build/test/data/dfs/name1. The 
> directory is already locked." messages.
> Messages should go to commons logging rather than {{System.err}} and 
> {{System.out}}. This enables containers to capture and stream them better, 
> and to include more diagnostics such as timestamps and thread IDs.
> The class could benefit from a method to return the FS URI, rather than just 
> the FS. This currently has to be worked around with some tricks involving a 
> cached configuration.
> {{waitActive()}} could get confused if "localhost" maps to an IPv6 address. 
> Better to ask for 127.0.0.1 as the hostname; JUnit 
> test runs may need to be set up to force IPv4 too.
> {{injectBlocks}} has a spelling error in its IOException: 
> "SumulatedFSDataset" should be "SimulatedFSDataset".

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2294) Download of commons-daemon TAR should not be under target

2011-10-07 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13122723#comment-13122723
 ] 

Hudson commented on HDFS-2294:
--

Integrated in Hadoop-Hdfs-trunk #823 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/823/])
HDFS-2294. Download of commons-daemon TAR should not be under target (tucu)

tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1179894
Files : 
* /hadoop/common/trunk/.gitignore
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml


> Download of commons-daemon TAR should not be under target
> -
>
> Key: HDFS-2294
> URL: https://issues.apache.org/jira/browse/HDFS-2294
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 0.23.0, 0.24.0
>
> Attachments: HDFS-2289.patch
>
>
> The committed HDFS-2289 downloads the commons-daemon TAR into 
> hadoop-hdfs/target/; earlier patches for HDFS-2289 were using 
> hadoop-hdfs/download/ as the location for the download.
> The motivation for not using the 'target/' directory is that on every clean 
> build the TAR will be downloaded again from the Apache archives. With a 
> 'download/' directory, this happens only once per workspace.
> The patch also adds the 'download/' directory to the .gitignore file 
> (it should be svn-ignored as well).
> Besides downloading the TAR only once, this allows doing a clean build in 
> disconnected mode.
> IMO, the latter is a nice developer capability.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2181) Separate HDFS Client wire protocol data types

2011-10-07 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13122718#comment-13122718
 ] 

Hudson commented on HDFS-2181:
--

Integrated in Hadoop-Hdfs-trunk #823 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/823/])
HDFS-2181 Separate HDFS Client wire protocol data types (sanjay)

sradia : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1179877
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientDatanodeProtocol.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeID.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeInfo.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsFileStatus.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlock.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolProtocolBuffers
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolProtocolBuffers/overview.html
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible/ClientDatanodeProtocolServerSideTranslatorR23.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible/ClientDatanodeProtocolTranslatorR23.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible/ClientDatanodeWireProtocol.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible/ClientNamenodeProtocolServerSideTranslatorR23.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible/ClientNamenodeProtocolTranslatorR23.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible/ClientNamenodeWireProtocol.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible/ContentSummaryWritable.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible/CorruptFileBlocksWritable.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible/DatanodeIDWritable.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible/DatanodeInfoWritable.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible/DirectoryListingWritable.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible/ExtendedBlockWritable.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible/FsPermissionWritable.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible/FsServerDefaultsWritable.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible/HdfsFileStatusWritable.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible/HdfsLocatedFileStatusWritable.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible/LocatedBlockWritable.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible/LocatedBlocksWritable.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible/ProtocolSignatureWritable.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible/TokenWritable.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/

[jira] [Commented] (HDFS-2409) _HOST in dfs.web.authentication.kerberos.principal.

2011-10-07 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13122719#comment-13122719
 ] 

Hudson commented on HDFS-2409:
--

Integrated in Hadoop-Hdfs-trunk #823 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/823/])
HDFS-2409. _HOST in dfs.web.authentication.kerberos.principal. Incorporates 
HDFS-2405 as well.

jitendra : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1179861
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/AuthFilter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserProvider.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestAuthFilter.java


> _HOST in dfs.web.authentication.kerberos.principal.
> ---
>
> Key: HDFS-2409
> URL: https://issues.apache.org/jira/browse/HDFS-2409
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Fix For: 0.24.0
>
> Attachments: HDFS-2409-trunk.patch
>
>
> This is HDFS part of HADOOP-7721. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2405) hadoop dfs command with webhdfs fails on secure hadoop

2011-10-07 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13122722#comment-13122722
 ] 

Hudson commented on HDFS-2405:
--

Integrated in Hadoop-Hdfs-trunk #823 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/823/])
HDFS-2409. _HOST in dfs.web.authentication.kerberos.principal. Incorporates 
HDFS-2405 as well.

jitendra : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1179861
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/AuthFilter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserProvider.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestAuthFilter.java


> hadoop dfs command with webhdfs fails on secure hadoop
> --
>
> Key: HDFS-2405
> URL: https://issues.apache.org/jira/browse/HDFS-2405
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.20.205.0
>Reporter: Arpit Gupta
>Assignee: Jitendra Nath Pandey
>Priority: Critical
> Fix For: 0.20.205.0, 0.24.0
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2403) The renewer in NamenodeWebHdfsMethods.generateDelegationToken(..) is not used

2011-10-07 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13122720#comment-13122720
 ] 

Hudson commented on HDFS-2403:
--

Integrated in Hadoop-Hdfs-trunk #823 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/823/])
HDFS-2403. NamenodeWebHdfsMethods.generateDelegationToken(..) does not use 
the renewer parameter.

szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1179856
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java


> The renewer in NamenodeWebHdfsMethods.generateDelegationToken(..) is not used
> -
>
> Key: HDFS-2403
> URL: https://issues.apache.org/jira/browse/HDFS-2403
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Fix For: 0.20.205.0, 0.24.0
>
> Attachments: h2403_20111005.patch, h2403_20111005_0.20.patch
>
>
> Below are some suggestions from Suresh.
> # the renewer parameter is not used in #generateDelegationToken
> # put() does not use its InputStream parameter {{in}}, and should not throw 
> URISyntaxException
> # post() does not use its InputStream parameter {{in}}, and should not throw 
> URISyntaxException
> # get() should not throw URISyntaxException

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2209) MiniDFS cluster improvements

2011-10-07 Thread Steve Loughran (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-2209:
-

Target Version/s: 0.23.0, 0.24.0  (was: 0.24.0, 0.23.0)
  Status: Open  (was: Patch Available)

> MiniDFS cluster improvements
> 
>
> Key: HDFS-2209
> URL: https://issues.apache.org/jira/browse/HDFS-2209
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 0.20.203.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 0.24.0
>
> Attachments: HDFS-2209.patch, HDFS-2209.patch, HDFS-2209.patch, 
> HDFS-2209.patch
>
>   Original Estimate: 1h
>  Time Spent: 1.5h
>  Remaining Estimate: 2h
>
> I've been deploying MiniDFSCluster for some testing, and while using 
> it/looking through the code I made some notes of where there are issues and 
> improvement opportunities. This is mostly minor as it's a test tool, but the 
> risk of synchronization problems is there and does need addressing; the rest 
> is all feature creep. 
> Field {{nameNode}} should be marked volatile, as the shutdown operation can 
> run in a different thread than startup. Best of all, 
> add synchronized methods to set and get the field, and synchronize shutdown 
> as well.
> The data dir is set from System Properties.
> {code}
> base_dir = new File(System.getProperty("test.build.data", 
> "build/test/data"), "dfs/");
> data_dir = new File(base_dir, "data");
> {code}
> This is done in {{formatDataNodeDirs()}}, {{corruptBlockOnDataNode()}}, and 
> the constructor.
> Improvement: have a test property in the conf file, and only read the system 
> property if this is unset. This will enable 
> multiple MiniDFSClusters to come up in the same JVM, handle 
> shutdown/startup race conditions better, and avoid the 
> "java.io.IOException: Cannot lock storage build/test/data/dfs/name1. The 
> directory is already locked." messages.
> Messages should go to commons logging rather than {{System.err}} and 
> {{System.out}}. This enables containers to capture and stream them better, 
> and to include more diagnostics such as timestamps and thread IDs.
> The class could benefit from a method to return the FS URI, rather than just 
> the FS. This currently has to be worked around with some tricks involving a 
> cached configuration.
> {{waitActive()}} could get confused if "localhost" maps to an IPv6 address. 
> Better to ask for 127.0.0.1 as the hostname; JUnit 
> test runs may need to be set up to force IPv4 too.
> {{injectBlocks}} has a spelling error in its IOException: 
> "SumulatedFSDataset" should be "SimulatedFSDataset".

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2209) MiniDFS cluster improvements

2011-10-07 Thread Steve Loughran (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-2209:
-

Target Version/s: 0.23.0, 0.24.0  (was: 0.24.0, 0.23.0)
  Status: Patch Available  (was: Open)

> MiniDFS cluster improvements
> 
>
> Key: HDFS-2209
> URL: https://issues.apache.org/jira/browse/HDFS-2209
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 0.20.203.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 0.24.0
>
> Attachments: HDFS-2209.patch, HDFS-2209.patch, HDFS-2209.patch, 
> HDFS-2209.patch
>
>   Original Estimate: 1h
>  Time Spent: 1.5h
>  Remaining Estimate: 2h
>
> I've been deploying MiniDFSCluster for some testing, and while using 
> it/looking through the code I made some notes of where there are issues and 
> improvement opportunities. This is mostly minor as it's a test tool, but the 
> risk of synchronization problems is there and does need addressing; the rest 
> is all feature creep. 
> Field {{nameNode}} should be marked volatile, as the shutdown operation can 
> run in a different thread than startup. Best of all, 
> add synchronized methods to set and get the field, and synchronize shutdown 
> as well.
> The data dir is set from System Properties.
> {code}
> base_dir = new File(System.getProperty("test.build.data", 
> "build/test/data"), "dfs/");
> data_dir = new File(base_dir, "data");
> {code}
> This is done in {{formatDataNodeDirs()}}, {{corruptBlockOnDataNode()}}, and 
> the constructor.
> Improvement: have a test property in the conf file, and only read the system 
> property if this is unset. This will enable 
> multiple MiniDFSClusters to come up in the same JVM, handle 
> shutdown/startup race conditions better, and avoid the 
> "java.io.IOException: Cannot lock storage build/test/data/dfs/name1. The 
> directory is already locked." messages.
> Messages should go to commons logging rather than {{System.err}} and 
> {{System.out}}. This enables containers to capture and stream them better, 
> and to include more diagnostics such as timestamps and thread IDs.
> The class could benefit from a method to return the FS URI, rather than just 
> the FS. This currently has to be worked around with some tricks involving a 
> cached configuration.
> {{waitActive()}} could get confused if "localhost" maps to an IPv6 address. 
> Better to ask for 127.0.0.1 as the hostname; JUnit 
> test runs may need to be set up to force IPv4 too.
> {{injectBlocks}} has a spelling error in its IOException: 
> "SumulatedFSDataset" should be "SimulatedFSDataset".

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2403) The renewer in NamenodeWebHdfsMethods.generateDelegationToken(..) is not used

2011-10-07 Thread Tsz Wo (Nicholas), SZE (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13122665#comment-13122665
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-2403:
--

I committed this earlier but forgot to resolve the issue.  Thanks Matt for 
resolving it.

> The renewer in NamenodeWebHdfsMethods.generateDelegationToken(..) is not used
> -
>
> Key: HDFS-2403
> URL: https://issues.apache.org/jira/browse/HDFS-2403
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Fix For: 0.20.205.0, 0.24.0
>
> Attachments: h2403_20111005.patch, h2403_20111005_0.20.patch
>
>
> Below are some suggestions from Suresh.
> # the renewer parameter is not used in #generateDelegationToken
> # put() does not use its InputStream parameter {{in}}, and should not throw 
> URISyntaxException
> # post() does not use its InputStream parameter {{in}}, and should not throw 
> URISyntaxException
> # get() should not throw URISyntaxException

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2412) Add backwards-compatibility layer for FSConstants

2011-10-07 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13122605#comment-13122605
 ] 

Hadoop QA commented on HDFS-2412:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12498114/hdfs-2412.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/1351//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/1351//console

This message is automatically generated.

> Add backwards-compatibility layer for FSConstants
> -
>
> Key: HDFS-2412
> URL: https://issues.apache.org/jira/browse/HDFS-2412
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.23.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Blocker
> Fix For: 0.23.0
>
> Attachments: hdfs-2412.txt
>
>
> HDFS-1620 renamed FSConstants, which we believed to be a private class. But 
> currently the public APIs for safe-mode and datanode reports depend on 
> constants in FSConstants. This is breaking HBase builds against 0.23. This 
> JIRA is to provide a backward-compatibility route.
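A back-compat route of the kind described can be sketched as below. This is an illustrative pattern, not the actual HDFS-2412 patch; the constant name and value are invented for the example:

```java
// The renamed class keeps the real constants.
class HdfsConstants {
    public static final int EXAMPLE_CONSTANT = -2; // illustrative only
}

/**
 * A shim carrying the old name: deprecated, empty, and extending the new
 * class, so existing callers (e.g. HBase) that reference the old name still
 * compile and resolve the inherited static members.
 *
 * @deprecated use HdfsConstants instead.
 */
@Deprecated
class FSConstants extends HdfsConstants {
}

public class CompatShimDemo {
    public static void main(String[] args) {
        // Old code referencing the deprecated name still sees the value,
        // accessed statically through the subclass.
        System.out.println(FSConstants.EXAMPLE_CONSTANT); // prints -2
    }
}
```

The shim costs nothing at runtime; the `@Deprecated` marker lets downstream projects migrate to the new name at their own pace while their builds keep working.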

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2412) Add backwards-compatibility layer for FSConstants

2011-10-07 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13122595#comment-13122595
 ] 

Aaron T. Myers commented on HDFS-2412:
--

I personally do think HDFS-1620 is worth it, and it shouldn't be reverted. The 
patch Todd has provided is very straightforward, and should fix things up for 
all downstream projects that might be affected.

My apologies for not realizing that these private constants are referenced in 
public APIs when I originally reviewed HDFS-1620. I should have checked for 
that.

> Add backwards-compatibility layer for FSConstants
> -
>
> Key: HDFS-2412
> URL: https://issues.apache.org/jira/browse/HDFS-2412
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.23.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Blocker
> Fix For: 0.23.0
>
> Attachments: hdfs-2412.txt
>
>
> HDFS-1620 renamed FSConstants, which we believed to be a private class. But 
> currently the public APIs for safe-mode and datanode reports depend on 
> constants in FSConstants. This is breaking HBase builds against 0.23. This 
> JIRA is to provide a backward-compatibility route.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2412) Add backwards-compatibility layer for FSConstants

2011-10-07 Thread Todd Lipcon (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HDFS-2412:
--

Status: Patch Available  (was: Open)

> Add backwards-compatibility layer for FSConstants
> -
>
> Key: HDFS-2412
> URL: https://issues.apache.org/jira/browse/HDFS-2412
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.23.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Blocker
> Fix For: 0.23.0
>
> Attachments: hdfs-2412.txt
>
>
> HDFS-1620 renamed FSConstants, which we believed to be a private class. But 
> currently the public APIs for safe-mode and datanode reports depend on 
> constants in FSConstants. This is breaking HBase builds against 0.23. This 
> JIRA is to provide a backward-compatibility route.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira