[jira] [Created] (HDFS-4992) Make balancer's thread count configurable

2013-07-15 Thread Max Lapan (JIRA)
Max Lapan created HDFS-4992:
---

 Summary: Make balancer's thread count configurable
 Key: HDFS-4992
 URL: https://issues.apache.org/jira/browse/HDFS-4992
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: balancer
Reporter: Max Lapan


By default, the balancer runs 1000 threads that move blocks around (mover threads) 
and 200 threads that decide which blocks to move (dispatcher threads).

On large clusters, 1000 threads create significant load on the NN, which slows 
down other HDFS activity. For example, on our cluster the 'hdfs dfs -ls /' command 
took about a minute while the balancer was active; when no balancing is in 
progress, the same command finishes in a second or two.

This patch makes the number of threads configurable through two new options, 
'dfs.balancer.moverThreads' and 'dfs.balancer.dispatcherThreads'.
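
As a rough sketch (not the patch itself; the key names come from this issue and 
the defaults match today's hard-coded values), the balancer could pick the thread 
counts up from the configuration like this:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import org.apache.hadoop.conf.Configuration;

    Configuration conf = new Configuration();
    // Hypothetical lookup of the proposed keys; 1000 and 200 are the current fixed values.
    int moverThreads = conf.getInt("dfs.balancer.moverThreads", 1000);
    int dispatcherThreads = conf.getInt("dfs.balancer.dispatcherThreads", 200);
    ExecutorService movers = Executors.newFixedThreadPool(moverThreads);
    ExecutorService dispatchers = Executors.newFixedThreadPool(dispatcherThreads);

Operators could then lower both values on clusters where the balancer competes 
with interactive load on the NN.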

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


File not found exception

2013-07-15 Thread Sugato Samanta
Hi,

I am trying to read a file from HDFS using Java code, but I am getting a
FileNotFoundException. However, the same file can be read with the *hdfs
dfs -tail airline/final_data.csv* command. Can you please help?

java -jar /home/adduser/TrainLogistics.jar --passes 10 --rate 5 --lambda
0.001 --input airline/final_data.csv --features 21 --output ./airline.model
--target CRSDepTime --categories 2 --predictors ArrDelay DayOfWeek --types
numeric
airline/final_data.csv
./airline.model
Types are:
CRSDepTime
Exception in thread "main" java.io.FileNotFoundException: File
airline/final_data.csv does not exist.
at
org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:361)
at
org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:245)
at
org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:125)
at
org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:283)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:356)
at org.accy.mahout.TrainLogistic2.open(TrainLogistic2.java:361)
at
org.accy.mahout.TrainLogistic2.mainToOutput(TrainLogistic2.java:90)
at org.accy.mahout.TrainLogistic2.main(TrainLogistic2.java:77)

Regards,
Sugato


Jenkins build became unstable: Hadoop-Hdfs-0.23-Build #669

2013-07-15 Thread Apache Jenkins Server
See 



Hadoop-Hdfs-0.23-Build - Build # 669 - Unstable

2013-07-15 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/669/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 11843 lines...]
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-install-plugin:2.3.1:install (default-install) @ 
hadoop-hdfs-project ---
[INFO] Installing 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/pom.xml
 to 
/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-hdfs-project/0.23.10-SNAPSHOT/hadoop-hdfs-project-0.23.10-SNAPSHOT.pom
[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-dependency-plugin:2.1:build-classpath (build-classpath) @ 
hadoop-hdfs-project ---
[INFO] No dependencies found.
[INFO] Skipped writing classpath file 
'/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/target/classes/mrapp-generated-classpath'.
  No changes found.
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is false
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  SUCCESS [4:58.799s]
[INFO] Apache Hadoop HttpFS .. SUCCESS [47.829s]
[INFO] Apache Hadoop HDFS Project  SUCCESS [0.059s]
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 5:47.343s
[INFO] Finished at: Mon Jul 15 11:39:06 UTC 2013
[INFO] Final Memory: 52M/746M
[INFO] 
+ /home/jenkins/tools/maven/latest/bin/mvn test 
-Dmaven.test.failure.ignore=true -Pclover 
-DcloverLicenseLocation=/home/jenkins/tools/clover/latest/lib/clover.license
Archiving artifacts
Recording test results
Build step 'Publish JUnit test result report' changed build result to UNSTABLE
Publishing Javadoc
Recording fingerprints
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Unstable
Sending email for trigger: Unstable



###
## FAILED TESTS (if any) 
##
4 tests failed.
REGRESSION:  
org.apache.hadoop.hdfs.TestDatanodeBlockScanner.testBlockCorruptionRecoveryPolicy2

Error Message:
Timed out waiting for corrupt replicas. Waiting for 2, but only found 0

Stack Trace:
java.util.concurrent.TimeoutException: Timed out waiting for corrupt replicas. 
Waiting for 2, but only found 0
at 
org.apache.hadoop.hdfs.DFSTestUtil.waitCorruptReplicas(DFSTestUtil.java:330)
at 
org.apache.hadoop.hdfs.TestDatanodeBlockScanner.blockCorruptionRecoveryPolicy(TestDatanodeBlockScanner.java:288)
at 
org.apache.hadoop.hdfs.TestDatanodeBlockScanner.__CLR3_0_2t1dvac10ig(TestDatanodeBlockScanner.java:242)
at 
org.apache.hadoop.hdfs.TestDatanodeBlockScanner.testBlockCorruptionRecoveryPolicy2(TestDatanodeBlockScanner.java:239)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at junit.framework.TestCase.runTest(TestCase.java:168)
at junit.framework.TestCase.runBare(TestCase.java:134)
at junit.framework.TestResult$1.protect(TestResult.java:110)
at ju

Build failed in Jenkins: Hadoop-Hdfs-trunk #1461

2013-07-15 Thread Apache Jenkins Server
See 

--
[...truncated 15594 lines...]
Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 26.203 sec <<< 
FAILURE!
testStandbyExceptionThrownDuringCheckpoint(org.apache.hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints)
  Time elapsed: 8774 sec  <<< FAILURE!
java.lang.AssertionError: SBN should have still been checkpointing.
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints.testStandbyExceptionThrownDuringCheckpoint(TestStandbyCheckpoints.java:279)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:62)

Running org.apache.hadoop.contrib.bkjournal.TestBookKeeperJournalManager
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.889 sec
Running org.apache.hadoop.contrib.bkjournal.TestBookKeeperEditLogStreams
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.68 sec
Running org.apache.hadoop.contrib.bkjournal.TestBookKeeperAsHASharedDir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.975 sec

Results :

Failed tests:   
testStandbyExceptionThrownDuringCheckpoint(org.apache.hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints):
 SBN should have still been checkpointing.

Tests run: 32, Failures: 1, Errors: 0, Skipped: 0

[INFO] 
[INFO] 
[INFO] Building Apache Hadoop HDFS-NFS 3.0.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hadoop-hdfs-nfs ---
[INFO] Deleting 

[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (create-testdirs) @ hadoop-hdfs-nfs ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 

[mkdir] Created dir: 

[INFO] Executed tasks
[INFO] 
[INFO] --- maven-resources-plugin:2.2:resources (default-resources) @ 
hadoop-hdfs-nfs ---
[INFO] Using default encoding to copy filtered resources.
[INFO] 
[INFO] --- maven-compiler-plugin:2.5.1:compile (default-compile) @ 
hadoop-hdfs-nfs ---
[INFO] Compiling 12 source files to 

[INFO] 
[INFO] --- maven-resources-plugin:2.2:testResources (default-testResources) @ 
hadoop-hdfs-nfs ---
[INFO] Using default encoding to copy filtered resources.
[INFO] 
[INFO] --- maven-compiler-plugin:2.5.1:testCompile (default-testCompile) @ 
hadoop-hdfs-nfs ---
[INFO] Compiling 7 source files to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.12.3:test (default-test) @ hadoop-hdfs-nfs 
---
[INFO] Surefire report directory: 


---
 T E S T S
---

---
 T E S T S
---
Running org.apache.hadoop.hdfs.nfs.nfs3.TestOffsetRange
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.057 sec
Running org.apache.hadoop.hdfs.nfs.nfs3.TestRpcProgramNfs3
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.058 sec
Running org.apache.hadoop.hdfs.nfs.nfs3.TestDFSClientCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.322 sec
Running org.apache.hadoop.hdfs.nfs.TestMountd
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.474 sec

Results :

Tests run: 8, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] 

Hadoop-Hdfs-trunk - Build # 1461 - Still Failing

2013-07-15 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1461/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 15787 lines...]
[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.0:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is false
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  SUCCESS 
[1:38:51.857s]
[INFO] Apache Hadoop HttpFS .. SUCCESS [2:23.799s]
[INFO] Apache Hadoop HDFS BookKeeper Journal . FAILURE [54.265s]
[INFO] Apache Hadoop HDFS-NFS  FAILURE [25.945s]
[INFO] Apache Hadoop HDFS Project  SUCCESS [0.033s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 1:42:36.913s
[INFO] Finished at: Mon Jul 15 13:16:22 UTC 2013
[INFO] Final Memory: 48M/783M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test) on 
project hadoop-hdfs-bkjournal: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-checkstyle-plugin:2.6:checkstyle (default-cli) 
on project hadoop-hdfs-nfs: An error has occurred in Checkstyle report 
generation. Failed during checkstyle execution: Unable to find configuration 
file at location 
file:///home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/dev-support/checkstyle.xml:
 Could not find resource 
'file:///home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/dev-support/checkstyle.xml'.
 -> [Help 2]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] [Help 2] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs-bkjournal
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
No tests ran.

Suggestions required for HDFS-4926 (namenode webserver's page has a tooltip that is inconsistent with the datanode HTML link)

2013-07-15 Thread Vivek Ganesan

Hi,

We have 3 options (actually viewpoints) laid out for the resolution of 
HDFS-4926.


(See 
https://issues.apache.org/jira/browse/HDFS-4926?focusedCommentId=13707219&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13707219)


Summarizing the options here.

Option -1 : Tool tip should be consistent with the HTML Link (view point 
of the issue reporter)
Option -2 : It's okay to provide supplementary useful information in the 
tool tip (view point of a reviewer)
Option -3 : Add new column to show data transfer port. Add new column to 
show http port.  Change the tooltip of link to show ip address only.


Please review these options and vote.

Thank you.

Regards,
Vivek Ganesan.


[jira] [Created] (HDFS-4993) fsck can fail if a file is renamed or deleted

2013-07-15 Thread Kihwal Lee (JIRA)
Kihwal Lee created HDFS-4993:


 Summary: fsck can fail if a file is renamed or deleted
 Key: HDFS-4993
 URL: https://issues.apache.org/jira/browse/HDFS-4993
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.23.9, 2.1.0-beta
Reporter: Kihwal Lee


In NamenodeFsck#check(), getListing() and getBlockLocations() are not 
synchronized, so a file deletion or rename at the wrong moment can cause a 
FileNotFoundException and make fsck fail.

Instead of failing, fsck should continue. Optionally it could record the file 
system modifications it encountered, but since most modifications during fsck go 
undetected, there may be little value in recording these specifically.
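
A minimal sketch of the intended "continue" behavior (checkBlocks and the counter 
below are illustrative names, not the NamenodeFsck source):

    try {
      // getListing() said the file existed; it may already be gone.
      LocatedBlocks blocks = namenode.getBlockLocations(path, 0, fileLength);
      checkBlocks(path, blocks);
    } catch (FileNotFoundException e) {
      // Deleted or renamed between getListing() and getBlockLocations():
      // note it and keep scanning instead of aborting the whole fsck run.
      filesDeletedOrMovedDuringFsck++;
    }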

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: Suggestions required for HDFS-4926 (namenode webserver's page has a tooltip that is inconsistent with the datanode HTML link)

2013-07-15 Thread Suresh Srinivas
JIRA already has these comments, right? I do not see the necessity of posting 
them again to the hdfs-dev mailing list. Please continue the discussion in JIRA. 

Sent from phone

On Jul 15, 2013, at 7:04 AM, Vivek Ganesan  wrote:

> Hi,
> 
> We have 3 options (actually view points) laid out for resolution of HDFS-4926 
> .
> 
> (See 
> https://issues.apache.org/jira/browse/HDFS-4926?focusedCommentId=13707219&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13707219)
> 
> Summarizing the options here.
> 
> Option -1 : Tool tip should be consistent with the HTML Link (view point of 
> the issue reporter)
> Option -2 : It's okay to provide supplementary useful information in the tool 
> tip (view point of a reviewer)
> Option -3 : Add new column to show data transfer port. Add new column to show 
> http port.  Change the tooltip of link to show ip address only.
> 
> Please review these options and vote.
> 
> Thank you.
> 
> Regards,
> Vivek Ganesan.


Re: File not found exception

2013-07-15 Thread Jing Zhao
Hi Sugato,

Do you want to read the data from HDFS? From the log it looks like you're 
reading from a "RawLocalFileSystem". Maybe you need to check whether your 
default file system is set correctly in the configuration (fs.default.name or 
fs.defaultFS, depending on your Hadoop version)?
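
For example, a minimal sketch (the NameNode address below is a placeholder) that 
forces the client onto HDFS, so a relative path like airline/final_data.csv 
resolves against HDFS rather than the local file system:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    Configuration conf = new Configuration();
    // Placeholder host/port; on older releases the key is fs.default.name.
    conf.set("fs.defaultFS", "hdfs://namenode-host:8020");
    FileSystem fs = FileSystem.get(conf);
    FSDataInputStream in = fs.open(new Path("airline/final_data.csv"));
    // ... read from the stream, then in.close()

Another option is to make sure a core-site.xml that points fs.defaultFS at the 
cluster is on the classpath when your jar runs.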

Thanks,
-Jing


On Mon, Jul 15, 2013 at 4:24 AM, Sugato Samanta wrote:

> Hi,
>
> i am trying to read a file from HDFS using a java code, but i am getting
> 'File NotFound exception. However the same file is being read using *hdfs
> dfs -tail airline/final_data.csv* command. Can you please help?
>
> java -jar /home/adduser/TrainLogistics.jar --passes 10 --rate 5 --lambda
> 0.001 --input airline/final_data.csv --features 21 --output ./airline.model
> --target CRSDepTime --categories 2 --predictors ArrDelay DayOfWeek --types
> numeric
> airline/final_data.csv
> ./airline.model
> Types are:
> CRSDepTime
> Exception in thread "main" java.io.FileNotFoundException: File
> airline/final_data.csv does not exist.
> at
>
> org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:361)
> at
>
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:245)
> at
>
> org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:125)
> at
> org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:283)
> at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:356)
> at org.accy.mahout.TrainLogistic2.open(TrainLogistic2.java:361)
> at
> org.accy.mahout.TrainLogistic2.mainToOutput(TrainLogistic2.java:90)
> at org.accy.mahout.TrainLogistic2.main(TrainLogistic2.java:77)
>
> Regards,
> Sugato
>


[jira] [Resolved] (HDFS-4984) Incorrect Quota counting in INodeFile

2013-07-15 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee resolved HDFS-4984.
--

   Resolution: Fixed
Fix Version/s: 0.23.10
 Hadoop Flags: Reviewed

> Incorrect Quota counting in INodeFile
> -
>
> Key: HDFS-4984
> URL: https://issues.apache.org/jira/browse/HDFS-4984
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 0.23.9
>Reporter: Kihwal Lee
>Assignee: Jing Zhao
> Fix For: 0.23.10
>
> Attachments: HDFS-4984.001.patch
>
>
> INodeFile#diskspaceConsumed() checks the state of the file and uses the full 
> block size for the last block if the file is under construction.  This is 
> incorrect, because a file can be under construction while its last block is 
> already finalized. It should check the status of the last block instead.
> This issue was fixed in 2.1.0 when the snapshot feature was merged. A 
> subtask, HDFS-4503, consolidated several methods and as a result this bug 
> disappeared.
> A separate fix is needed for 0.23.
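
A rough illustration of the fix described above (field and method names are 
approximations, not the exact 0.23 INodeFile code): size the last block by its 
own state, not by the file's under-construction flag.

    BlockInfo last = blocks[blocks.length - 1];
    long lastBlockSize = last.isComplete()
        ? last.getNumBytes()         // last block already finalized: count the real bytes
        : getPreferredBlockSize();   // still being written: reserve a full block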

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4994) Audit log getContentSummary() calls

2013-07-15 Thread Kihwal Lee (JIRA)
Kihwal Lee created HDFS-4994:


 Summary: Audit log getContentSummary() calls
 Key: HDFS-4994
 URL: https://issues.apache.org/jira/browse/HDFS-4994
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 0.23.9, 2.3.0
Reporter: Kihwal Lee
Priority: Minor


Currently, getContentSummary() calls are not logged anywhere. They should be 
recorded in the audit log.
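
A hedged sketch of what the change could look like (logAuditEvent and the 
surrounding names approximate the namenode code; this is not a patch):

    // In the server-side getContentSummary() path, after the result is computed:
    ContentSummary summary = dir.getContentSummary(src);   // existing computation (name approximate)
    logAuditEvent(true, "contentSummary", src);            // new: record the command and path
    return summary;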

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


RE: mvn eclipse:eclipse failure on windows

2013-07-15 Thread Chuan Liu
Hi Uma,

I suggest you do a 'mvn install -DskipTests' before running 'mvn 
eclipse:eclipse'.

Thanks,
Chuan

-Original Message-
From: Uma Maheswara Rao G [mailto:hadoop@gmail.com] 
Sent: Friday, July 12, 2013 7:42 PM
To: common-...@hadoop.apache.org
Cc: hdfs-dev@hadoop.apache.org
Subject: Re: mvn eclipse:eclipse failure on windows

Hi Chris,
  eclipse:eclipse works, but I am still seeing the UnsatisfiedLinkError.
I explicitly pointed java.library.path to where hadoop.dll is generated; that
dll is generated only by my clean install command. My PC is 64-bit and I also
set Platform=x64 while building, but it does not help.

Regards,
Uma






On Fri, Jul 12, 2013 at 11:45 PM, Chris Nauroth wrote:

> Hi Uma,
>
> I just tried getting a fresh copy of trunk and running "mvn clean 
> install -DskipTests" followed by "mvn eclipse:eclipse -DskipTests".  
> Everything worked fine in my environment.  Are you still seeing the problem?
>
> The UnsatisfiedLinkError seems to indicate that your build couldn't 
> access hadoop.dll for JNI method implementations.  hadoop.dll gets 
> built as part of the hadoop-common sub-module.  Is it possible that 
> you didn't have a complete package build for that sub-module before 
> you started running the HDFS test?
>
> Chris Nauroth
> Hortonworks
> http://hortonworks.com/
>
>
>
> On Sun, Jul 7, 2013 at 9:08 AM, sure bhands  wrote:
>
> > I would try cleaning the hadoop-maven-plugin directory from the maven
> > repository to rule out a stale version, and then mvn install followed by
> > mvn eclipse:eclipse, before digging into it further.
> >
> > Thanks,
> > Surendra
> >
> >
> > On Sun, Jul 7, 2013 at 8:28 AM, Uma Maheswara Rao G <
> hadoop@gmail.com
> > >wrote:
> >
> > > Hi,
> > >
> > > I am seeing this failure on Windows while executing the mvn
> > > eclipse:eclipse command on trunk.
> > >
> > > See the following trace:
> > >
> > > [INFO]
> > > ------------------------------------------------------------------------
> > > [ERROR] Failed to execute goal
> > > org.apache.maven.plugins:maven-eclipse-plugin:2.8:eclipse (default-cli)
> > > on project hadoop-common: Request to merge when 'filtering' is not
> > > identical. Original=resource src/main/resources: output=target/classes,
> > > include=[], exclude=[common-version-info.properties|**/*.java],
> > > test=false, filtering=false, merging with=resource src/main/resources:
> > > output=target/classes, include=[common-version-info.properties],
> > > exclude=[**/*.java], test=false, filtering=true -> [Help 1]
> > > [ERROR]
> > > [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
> > > [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> > > [ERROR]
> > > [ERROR] For more information about the errors and possible solutions,
> > > please read the following articles:
> > > [ERROR] [Help 1]
> > > http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> > > [ERROR]
> > > [ERROR] After correcting the problems, you can resume the build with the command
> > >
> > > [ERROR]   mvn  -rf :hadoop-common
> > > E:\Hadoop-Trunk>
> > >
> > > Any idea how to resolve it?
> > >
> > > With 'org.apache.maven.plugins:maven-eclipse-plugin:2.6:eclipse' there
> > > seem to be no failures, but I am seeing the following exception while
> > > running tests.
> > > java.lang.UnsatisfiedLinkError:
> > > org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
> > > at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
> > > at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:423)
> > > at org.apache.hadoop.fs.FileUtil.canWrite(FileUtil.java:952)
> > > at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:451)
> > > at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:282)
> > > at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:200)
> > > at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:696)
> > > at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:530)
> > > at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:401)
> > > at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:435)
> > > at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:607)
> > > at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:592)
> > > at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1172)
> > > at org.apache

[jira] [Resolved] (HDFS-4972) [branch-0.23] permission check and operation are done in a separate lock for getBlockLocations()

2013-07-15 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee resolved HDFS-4972.
--

   Resolution: Fixed
Fix Version/s: 0.23.10
 Hadoop Flags: Reviewed

> [branch-0.23] permission check and operation are done in a separate lock for 
> getBlockLocations()
> 
>
> Key: HDFS-4972
> URL: https://issues.apache.org/jira/browse/HDFS-4972
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 0.23.8
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Fix For: 0.23.10
>
> Attachments: HDFS-4972-branch-0.23.patch
>
>
> For getBlockLocations() call, the read lock is acquired when doing permission 
> check. But unlike other namenode methods, this is outside of the lock of the 
> actual operation. So it ends up acquiring and releasing the lock twice.  This 
> has two implications.
> - permissions can change in between the locks
> - the lock fairness will penalize getBlockLocations().
> This was fixed in trunk and branch-2 as a part of HDFS-4679, but not in 
> branch-0.23.
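
A condensed sketch of the single-lock version (method names approximate the 
namenode code; this is not the exact branch-0.23 patch): the permission check 
and the lookup happen under one read lock, so permissions cannot change in 
between and the lock is only taken once.

    readLock();
    try {
      if (isPermissionEnabled) {
        checkPathAccess(src, FsAction.READ);                     // permission check
      }
      return getBlockLocationsInternal(src, offset, length);     // actual operation
    } finally {
      readUnlock();
    }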

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4995) Make getContentSummary() less expensive

2013-07-15 Thread Kihwal Lee (JIRA)
Kihwal Lee created HDFS-4995:


 Summary: Make getContentSummary() less expensive
 Key: HDFS-4995
 URL: https://issues.apache.org/jira/browse/HDFS-4995
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 0.23.9, 2.3.0
Reporter: Kihwal Lee


When users run the du or count DFS commands, the getContentSummary() method is 
called on the namenode. If the directory contains many directories and files, 
the call can hold the namesystem lock for a long time; we've seen it take over 
20 seconds. The namenode should not allow regular users to cause such extended 
locking.
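
One possible direction, not spelled out in this issue (all names below are 
illustrative): walk the tree in bounded chunks and briefly release the 
namesystem lock between chunks so other operations can make progress.

    long processed = 0;
    readLock();
    try {
      for (INode child : children) {
        summarize(child, summary);               // accumulate length, file and dir counts
        if (++processed % YIELD_EVERY == 0) {    // e.g. every few thousand inodes
          readUnlock();                          // briefly let writers and other readers in
          readLock();                            // re-acquire and continue
          // a real implementation would re-validate its position here
        }
      }
    } finally {
      readUnlock();
    }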


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4996) ClientProtocol#metaSave can be made idempotent by overwriting the output file instead of appending to it

2013-07-15 Thread Chris Nauroth (JIRA)
Chris Nauroth created HDFS-4996:
---

 Summary: ClientProtocol#metaSave can be made idempotent by 
overwriting the output file instead of appending to it
 Key: HDFS-4996
 URL: https://issues.apache.org/jira/browse/HDFS-4996
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Minor


{{ClientProtocol#metaSave}} opens its output file for append.  This prevents 
the operation from being idempotent, because retries can cause unpredictable 
side effects (single copy of the output vs. multiple copies).  As discussed on 
HDFS-4974, the operation could be made idempotent by changing this to open the 
file for overwrite instead of append.
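
A minimal illustration of the difference with plain java.io (this is not the 
namenode code): with append, a retried call leaves two copies of the report in 
the file; with overwrite, the retry simply rewrites the same report.

    // Before: append mode, so a retry duplicates the output.
    PrintWriter appending = new PrintWriter(new FileWriter(reportFile, true));
    // After: overwrite mode, so repeating the call is idempotent.
    PrintWriter overwriting = new PrintWriter(new FileWriter(reportFile, false));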

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HDFS-2042) Require c99 when building libhdfs

2013-07-15 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HDFS-2042.


   Resolution: Fixed
Fix Version/s: 2.1.0-beta

The stuff this JIRA talks about is fixed in branch-2 and trunk.  We don't use 
autotools any more, so we don't need these macros, and {{hdfs.c}} is no longer 
located at {{src/c++/libhdfs/hdfs.c}}.

If there is some specific platform that requires more or different CFLAGS to 
build, let's open a JIRA for that when we encounter it.

> Require c99 when building libhdfs
> -
>
> Key: HDFS-2042
> URL: https://issues.apache.org/jira/browse/HDFS-2042
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs
>Reporter: Eli Collins
>Priority: Minor
> Fix For: 2.1.0-beta
>
>
> Per the discussion in HDFS-1619, libhdfs uses some c99 features, therefore we 
> should compile it with c99 standard flags (e.g. c99 or gnu99 when using gcc). 
> We could do this with autotools via AC_PROG_CC_C99 (which requires a more 
> recent autotools) or by setting CFLAGS in src/c++/libhdfs/configure. We should 
> perhaps rename src/c++ to src/c while we're at it, since libhdfs is not 
> written in c++ and libhdfs is the only subdirectory.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4997) libhdfs doesn't return correct error codes in most cases

2013-07-15 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-4997:
--

 Summary: libhdfs doesn't return correct error codes in most cases
 Key: HDFS-4997
 URL: https://issues.apache.org/jira/browse/HDFS-4997
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


libhdfs has some code to translate Java exceptions into C error codes.  
Unfortunately, the exceptions are returned to us in "dotted" format, but the 
code is expecting them to be in "slash-separated" format.  This results in most 
exceptions just leading to a generic error code.

We should fix this and add a unit test to ensure this continues to work.
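
A small illustration of the mismatch (written in Java for brevity; the real 
lookup lives in libhdfs C code):

    String reported = "java.io.FileNotFoundException";   // dotted form, as reported back to libhdfs
    String expected = "java/io/FileNotFoundException";   // slash form, as listed in the error table
    System.out.println(reported.equals(expected));                     // false -> generic error code
    System.out.println(reported.replace('.', '/').equals(expected));   // true once one side is normalized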

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira