Hadoop-Hdfs-0.23-Build - Build # 801 - Still Failing

2013-11-25 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/801/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7892 lines...]
[ERROR] location: class com.google.protobuf.InvalidProtocolBufferException
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[3313,27]
 cannot find symbol
[ERROR] symbol  : method 
setUnfinishedMessage(org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpWriteBlockProto)
[ERROR] location: class com.google.protobuf.InvalidProtocolBufferException
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[3319,8]
 cannot find symbol
[ERROR] symbol  : method makeExtensionsImmutable()
[ERROR] location: class 
org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpWriteBlockProto
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[3330,10]
 cannot find symbol
[ERROR] symbol  : method 
ensureFieldAccessorsInitialized(java.lang.Class,java.lang.Class)
[ERROR] location: class com.google.protobuf.GeneratedMessage.FieldAccessorTable
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[3335,31]
 cannot find symbol
[ERROR] symbol  : class AbstractParser
[ERROR] location: package com.google.protobuf
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[3344,4]
 method does not override or implement a method from a supertype
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[4098,12]
 cannot find symbol
[ERROR] symbol  : method 
ensureFieldAccessorsInitialized(java.lang.Class,java.lang.Class)
[ERROR] location: class com.google.protobuf.GeneratedMessage.FieldAccessorTable
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[4371,104]
 cannot find symbol
[ERROR] symbol  : method getUnfinishedMessage()
[ERROR] location: class com.google.protobuf.InvalidProtocolBufferException
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5264,8]
 getUnknownFields() in 
org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpTransferBlockProto 
cannot override getUnknownFields() in com.google.protobuf.GeneratedMessage; 
overridden method is final
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5284,19]
 cannot find symbol
[ERROR] symbol  : method 
parseUnknownField(com.google.protobuf.CodedInputStream,com.google.protobuf.UnknownFieldSet.Builder,com.google.protobuf.ExtensionRegistryLite,int)
[ERROR] location: class 
org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpTransferBlockProto
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5314,15]
 cannot find symbol
[ERROR] symbol  : method 
setUnfinishedMessage(org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpTransferBlockProto)
[ERROR] location: class com.google.protobuf.InvalidProtocolBufferException
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5317,27]
 cannot find symbol
[ERROR] symbol  : method 
setUnfinishedMessage(org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpTransferBlockProto)
[ERROR] location: class com.google.protobuf.InvalidProtocolBufferException
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5323,8]
 cannot find symbol
[ERROR] symbol  : method makeExtensionsImmutable()
[ERROR] location: class 
org.apache.hadoop.hdfs

Build failed in Jenkins: Hadoop-Hdfs-0.23-Build #801

2013-11-25 Thread Apache Jenkins Server
See 

--
[...truncated 7699 lines...]
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[270,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[281,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[10533,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[10544,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[8357,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[8368,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[12641,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[12652,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[9741,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[9752,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[1781,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[1792,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[5338,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[5349,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[6290,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[6301,30]
 cannot find sym

Build failed in Jenkins: Hadoop-Hdfs-trunk #1593

2013-11-25 Thread Apache Jenkins Server
See 

Changes:

[sandy] YARN-1423. Support queue placement by secondary group in the Fair 
Scheduler (Ted Malaska via Sandy Ryza)

--
[...truncated 11584 lines...]
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 358.376 sec - 
in org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Running org.apache.hadoop.hdfs.TestFileCreationEmpty
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.354 sec - in 
org.apache.hadoop.hdfs.TestFileCreationEmpty
Running org.apache.hadoop.hdfs.TestSetrepIncreasing
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.895 sec - in 
org.apache.hadoop.hdfs.TestSetrepIncreasing
Running org.apache.hadoop.hdfs.TestEncryptedTransfer
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 84.479 sec - 
in org.apache.hadoop.hdfs.TestEncryptedTransfer
Running org.apache.hadoop.hdfs.TestDFSUpgrade
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.964 sec - in 
org.apache.hadoop.hdfs.TestDFSUpgrade
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.584 sec - in 
org.apache.hadoop.hdfs.TestCrcCorruption
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.891 sec - in 
org.apache.hadoop.hdfs.TestHFlush
Running org.apache.hadoop.hdfs.TestFileAppendRestart
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.059 sec - in 
org.apache.hadoop.hdfs.TestFileAppendRestart
Running org.apache.hadoop.hdfs.TestDatanodeReport
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.487 sec - in 
org.apache.hadoop.hdfs.TestDatanodeReport
Running org.apache.hadoop.hdfs.TestShortCircuitLocalRead
Tests run: 10, Failures: 0, Errors: 0, Skipped: 10, Time elapsed: 0.204 sec - 
in org.apache.hadoop.hdfs.TestShortCircuitLocalRead
Running org.apache.hadoop.hdfs.TestFileInputStreamCache
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.203 sec - in 
org.apache.hadoop.hdfs.TestFileInputStreamCache
Running org.apache.hadoop.hdfs.TestRestartDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.738 sec - in 
org.apache.hadoop.hdfs.TestRestartDFS
Running org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.054 sec - in 
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.003 sec - in 
org.apache.hadoop.hdfs.TestDFSRemove
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.154 sec - in 
org.apache.hadoop.hdfs.TestHDFSTrash
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 51.944 sec - in 
org.apache.hadoop.hdfs.TestClientReportBadBlock
Running org.apache.hadoop.hdfs.TestQuota
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.75 sec - in 
org.apache.hadoop.hdfs.TestQuota
Running org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.635 sec - in 
org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Running org.apache.hadoop.hdfs.TestDatanodeRegistration
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.939 sec - in 
org.apache.hadoop.hdfs.TestDatanodeRegistration
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.997 sec - in 
org.apache.hadoop.hdfs.TestAbandonBlock
Running org.apache.hadoop.hdfs.TestDFSShell
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 44.679 sec - 
in org.apache.hadoop.hdfs.TestDFSShell
Running org.apache.hadoop.hdfs.TestListFilesInDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.413 sec - in 
org.apache.hadoop.hdfs.TestListFilesInDFS
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadUnCached
Tests run: 4, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 0.17 sec - in 
org.apache.hadoop.hdfs.TestParallelShortCircuitReadUnCached
Running org.apache.hadoop.hdfs.TestPeerCache
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.323 sec - in 
org.apache.hadoop.hdfs.TestPeerCache
Running org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Tests run: 3, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 8.431 sec - in 
org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Running org.apache.hadoop.hdfs.TestDFSClientExcludedNodes
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.2 sec - in 
org.apache.hadoop.hdfs.TestDFSClientExcludedNodes
Running org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 109.633 sec - 
in org.apach

Hadoop-Hdfs-trunk - Build # 1593 - Still Failing

2013-11-25 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1593/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 11808 lines...]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [2.037s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 1:47:09.047s
[INFO] Finished at: Mon Nov 25 13:21:33 UTC 2013
[INFO] Final Memory: 41M/380M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating YARN-1423
Sending e-mails to: hdfs-dev@hadoop.apache.org
ERROR: Exception reading response
javax.mail.MessagingException: Exception reading response;
  nested exception is:
java.net.SocketTimeoutException: Read timed out
at 
com.sun.mail.smtp.SMTPTransport.readServerResponse(SMTPTransport.java:2153)
at 
com.sun.mail.smtp.SMTPTransport.issueSendCommand(SMTPTransport.java:2036)
at com.sun.mail.smtp.SMTPTransport.finishData(SMTPTransport.java:1862)
at com.sun.mail.smtp.SMTPTransport.sendMessage(SMTPTransport.java:1100)
at javax.mail.Transport.send0(Transport.java:195)
at javax.mail.Transport.send(Transport.java:124)
at hudson.tasks.MailSender.execute(MailSender.java:116)
at hudson.tasks.Mailer.perform(Mailer.java:117)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:785)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:757)
at hudson.model.Build$BuildExecution.post2(Build.java:183)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:706)
at hudson.model.Run.execute(Run.java:1704)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
at hudson.model.ResourceController.execute(ResourceController.java:88)
at hudson.model.Executor.run(Executor.java:230)
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:150)
at java.net.SocketInputStream.read(SocketInputStream.java:121)
at com.sun.mail.util.TraceInputStream.read(TraceInputStream.java:110)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
at com.sun.mail.util.LineInputStream.readLine(LineInputStream.java:89)
at 
com.sun.mail.smtp.SMTPTransport.readServerResponse(SMTPTransport.java:2131)
... 16 more
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
9 tests failed.
FAILED:  
org.apache.hadoop.hdfs.server.namenode.TestCacheDirectives.testWaitForCachedReplicas

Error Message:
Cannot start datanode because the configured max locked memory size 
(dfs.datanode.max.locked.memory) is greater than zero and native code is not 
available.

Stack Trace:
java.lang.RuntimeException: Cannot start datanode because the configured max 
locked memory size (dfs.datanode.max.locked.memory) is greater than zero and 
native code is not available.
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:668)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:267)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1764)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1679)

[jira] [Created] (HDFS-5563) NFS gateway should commit the buffered data when read request comes after write to the same file

2013-11-25 Thread Brandon Li (JIRA)
Brandon Li created HDFS-5563:


 Summary: NFS gateway should commit the buffered data when read 
request comes after write to the same file
 Key: HDFS-5563
 URL: https://issues.apache.org/jira/browse/HDFS-5563
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Reporter: Brandon Li
Assignee: Brandon Li


HDFS writes are asynchronous, so data may not be available to read immediately 
after a write. One of the main reasons is that DFSClient doesn't flush data to 
the DataNode until its local buffer is full.

To work around this problem, when a read comes after a write to the same file, 
the NFS gateway should sync the data so the read request can get the latest 
content. The drawback is that frequent hsync() calls can slow down writes.
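
As an illustration of the proposed workaround, here is a minimal, hypothetical 
sketch (the class, map, and method names are invented; this is not the actual 
NFS gateway code). It assumes the gateway tracks the open write stream per file 
handle and calls hsync() on it before serving a read of the same file:

{code:java}
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.hadoop.fs.FSDataOutputStream;

/** Hypothetical helper: flush buffered writes before serving a read on the same file. */
public class ReadAfterWriteSync {
  // Open write streams tracked by the gateway, keyed by file handle id (illustrative field).
  private final Map<Long, FSDataOutputStream> openStreams =
      new ConcurrentHashMap<Long, FSDataOutputStream>();

  /** Call before handling an NFS READ: if the file has an open write stream, hsync it. */
  public void syncBeforeRead(long fileHandle) throws IOException {
    FSDataOutputStream out = openStreams.get(fileHandle);
    if (out != null) {
      // hsync() flushes the client-side buffer and persists the data on the datanodes,
      // so the following read sees the latest content. Calling it often is the cost.
      out.hsync();
    }
  }
}
{code}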




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5564) Refactor tests in TestCacheDirectives

2013-11-25 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-5564:
-

 Summary: Refactor tests in TestCacheDirectives
 Key: HDFS-5564
 URL: https://issues.apache.org/jira/browse/HDFS-5564
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang


Some of the tests in TestCacheDirectives start their own MiniDFSCluster to get 
a new config, even though we already start a cluster in the @Before function. 
This contributes to longer test runs and code duplication.
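
A rough sketch of the kind of refactor this suggests, assuming a shared cluster 
field and a hypothetical restartWith() helper (illustrative only, not the actual 
TestCacheDirectives code); tests that genuinely need a different configuration 
would restart the shared cluster instead of building a second one:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

/** Illustrative harness: one cluster per test class, restarted on demand. */
public class CacheDirectivesTestHarness {
  private final Configuration conf = new Configuration();
  private MiniDFSCluster cluster;

  /** Started once from the @Before method instead of per-test. */
  public void start() throws Exception {
    cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
    cluster.waitActive();
  }

  /** Restart the shared cluster with a different config rather than create a new one. */
  public void restartWith(Configuration newConf) throws Exception {
    cluster.shutdown();
    cluster = new MiniDFSCluster.Builder(newConf).numDataNodes(1).build();
    cluster.waitActive();
  }
}
{code}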



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5565) CacheAdmin help should match against non-dashed commands

2013-11-25 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-5565:
-

 Summary: CacheAdmin help should match against non-dashed commands
 Key: HDFS-5565
 URL: https://issues.apache.org/jira/browse/HDFS-5565
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: caching
Affects Versions: 3.0.0
Reporter: Andrew Wang
Priority: Minor


Using the shell, `hdfs dfsadmin -help refreshNamespace` returns help text, but 
for cacheadmin you have to specify `hdfs cacheadmin -help -addDirective`, with 
a dash before the command name. This is inconsistent with dfsadmin, dfs, and 
haadmin, which instead error out when you provide the dash.
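
One possible shape of the fix, sketched with hypothetical names (normalize and 
matches are not existing CacheAdmin methods): strip a single leading dash before 
matching the help topic, so both the dashed and non-dashed forms work.

{code:java}
/** Illustrative sketch: match help topics whether or not a leading dash is given. */
public class HelpMatcher {
  /** Strip one leading dash, e.g. "-addDirective" -> "addDirective". */
  public static String normalize(String cmd) {
    return cmd.startsWith("-") ? cmd.substring(1) : cmd;
  }

  /** True if the user's argument names the command, dash or no dash. */
  public static boolean matches(String userArg, String commandName) {
    return normalize(userArg).equalsIgnoreCase(normalize(commandName));
  }
}
{code}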



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5566) HA namenode with QJM created from org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider should implement Closeable

2013-11-25 Thread Henry Hung (JIRA)
Henry Hung created HDFS-5566:


 Summary: HA namenode with QJM created from 
org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider 
should implement Closeable
 Key: HDFS-5566
 URL: https://issues.apache.org/jira/browse/HDFS-5566
 Project: Hadoop HDFS
  Issue Type: Bug
 Environment: hadoop-2.2.0
hbase-0.96
Reporter: Henry Hung


When using hbase-0.96 with hadoop-2.2.0, stopping a master/regionserver node 
results in {{Cannot close proxy - is not Closeable or does not provide closeable 
invocation}}.

[Mail 
Archive|https://drive.google.com/file/d/0B22pkxoqCdvWSGFIaEpfR3lnT2M/edit?usp=sharing]

My hadoop-2.2.0 cluster is configured as an HA namenode with QJM; the 
configuration looks like this:
{code:xml}
  <property>
    <name>dfs.nameservices</name>
    <value>hadoopdev</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.hadoopdev</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.hadoopdev.nn1</name>
    <value>fphd9.ctpilot1.com:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.hadoopdev.nn1</name>
    <value>fphd9.ctpilot1.com:50070</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.hadoopdev.nn2</name>
    <value>fphd10.ctpilot1.com:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.hadoopdev.nn2</name>
    <value>fphd10.ctpilot1.com:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://fphd8.ctpilot1.com:8485;fphd9.ctpilot1.com:8485;fphd10.ctpilot1.com:8485/hadoopdev</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.hadoopdev</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>shell(/bin/true)</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/data/hadoop/hadoop-data-2/journal</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>fphd1.ctpilot1.com:</value>
  </property>
{code}

I traced the code and found that when stopping the hbase master node, it tries 
to invoke the "close" method on the namenode proxy, but the instance created by 
{{org.apache.hadoop.hdfs.NameNodeProxies.createProxy}} with 
failoverProxyProviderClass 
{{org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider}} 
does not implement the Closeable interface.

In the non-HA case, the created instance is 
{{org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB}}, which 
implements Closeable.

TL;DR:
With hbase connecting to a hadoop HA namenode, stopping the hbase master or 
regionserver cannot find a {{close}} method to gracefully close the namenode 
session.
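
For context, a minimal sketch of the kind of close-time check that produces the 
error above. This is illustrative only, not the actual org.apache.hadoop.ipc.RPC 
code; the point is that either the proxy or its invocation handler has to be 
Closeable:

{code:java}
import java.io.Closeable;
import java.io.IOException;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

/** Illustrative sketch of a close-time check (not the actual Hadoop RPC code). */
public class StopProxySketch {
  public static void stopProxy(Object proxy) throws IOException {
    if (Proxy.isProxyClass(proxy.getClass())) {
      InvocationHandler handler = Proxy.getInvocationHandler(proxy);
      if (handler instanceof Closeable) {
        ((Closeable) handler).close();  // the closeable case
        return;
      }
    }
    if (proxy instanceof Closeable) {
      ((Closeable) proxy).close();
      return;
    }
    // The HA proxy described in this issue ends up here.
    throw new IllegalArgumentException(
        "Cannot close proxy - is not Closeable or does not provide closeable invocation");
  }
}
{code}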



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5567) CacheAdmin operations not supported with viewfs

2013-11-25 Thread Stephen Chu (JIRA)
Stephen Chu created HDFS-5567:
-

 Summary: CacheAdmin operations not supported with viewfs
 Key: HDFS-5567
 URL: https://issues.apache.org/jira/browse/HDFS-5567
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: caching
Affects Versions: 3.0.0
Reporter: Stephen Chu


On a federated cluster with viewfs configured, we'll run into the following 
error when using CacheAdmin commands:

{code}
bash-4.1$ hdfs cacheadmin -listPools
Exception in thread "main" java.lang.IllegalArgumentException: FileSystem 
viewfs://cluster3/ is not an HDFS file system
at org.apache.hadoop.hdfs.tools.CacheAdmin.getDFS(CacheAdmin.java:96)
at 
org.apache.hadoop.hdfs.tools.CacheAdmin.access$100(CacheAdmin.java:50)
at 
org.apache.hadoop.hdfs.tools.CacheAdmin$ListCachePoolsCommand.run(CacheAdmin.java:748)
at org.apache.hadoop.hdfs.tools.CacheAdmin.run(CacheAdmin.java:84)
at org.apache.hadoop.hdfs.tools.CacheAdmin.main(CacheAdmin.java:89)
bash-4.1$
{code}
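
The failure comes from a type check on the configured FileSystem. A hedged 
sketch of that kind of check (not copied from CacheAdmin; names are 
illustrative):

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;

/** Illustrative sketch of the check that rejects non-HDFS filesystems such as viewfs. */
public class GetDfsSketch {
  static DistributedFileSystem getDFS(Configuration conf) throws IOException {
    FileSystem fs = FileSystem.get(conf);
    if (!(fs instanceof DistributedFileSystem)) {
      // With fs.defaultFS pointing at viewfs://, this branch is taken and the command fails.
      throw new IllegalArgumentException("FileSystem " + fs.getUri()
          + " is not an HDFS file system");
    }
    return (DistributedFileSystem) fs;
  }
}
{code}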





--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5568) Support inclusion of snapshot paths in Namenode fsck

2013-11-25 Thread Vinay (JIRA)
Vinay created HDFS-5568:
---

 Summary: Support inclusion of snapshot paths in Namenode fsck
 Key: HDFS-5568
 URL: https://issues.apache.org/jira/browse/HDFS-5568
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: snapshots
Reporter: Vinay
Assignee: Vinay


Fsck should also be able to check snapshot paths for inconsistency.

Currently fsck covers a snapshot path only if the given path explicitly refers 
to one.

We have seen safemode problems in our clusters caused by missing blocks that 
were only present inside snapshots, yet "hdfs fsck /" showed HEALTHY.

Supporting snapshot paths during fsck (perhaps by default or on demand) would 
be helpful in these cases, instead of having to specify each and every 
snapshottable directory.
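
To make the idea concrete, a small sketch of how the snapshot roots such a scan 
would also need to visit could be enumerated, assuming the 
DistributedFileSystem#getSnapshottableDirListing() API (the class and output 
here are hypothetical):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus;

/** Illustrative only: list the snapshot roots an fsck-style scan would also need to cover. */
public class SnapshotRoots {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);
    SnapshottableDirectoryStatus[] dirs = dfs.getSnapshottableDirListing();
    if (dirs == null) {
      return;  // no snapshottable directories
    }
    for (SnapshottableDirectoryStatus s : dirs) {
      // Snapshots of each snapshottable directory live under <dir>/.snapshot
      Path snapshotRoot = new Path(s.getFullPath(), ".snapshot");
      System.out.println("would also check: " + snapshotRoot);
    }
  }
}
{code}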



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (HDFS-5566) HA namenode with QJM created from org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider should implement Closeable

2013-11-25 Thread Henry Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Hung resolved HDFS-5566.
--

Resolution: Duplicate

Duplicate of 
[HBASE-10029|https://issues.apache.org/jira/browse/HBASE-10029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel]

> HA namenode with QJM created from 
> org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider 
> should implement Closeable
> --
>
> Key: HDFS-5566
> URL: https://issues.apache.org/jira/browse/HDFS-5566
> Project: Hadoop HDFS
>  Issue Type: Bug
> Environment: hadoop-2.2.0
> hbase-0.96
>Reporter: Henry Hung
>
> When using hbase-0.96 with hadoop-2.2.0, stopping a master/regionserver node 
> results in {{Cannot close proxy - is not Closeable or does not provide 
> closeable invocation}}.
> [Mail 
> Archive|https://drive.google.com/file/d/0B22pkxoqCdvWSGFIaEpfR3lnT2M/edit?usp=sharing]
> My hadoop-2.2.0 cluster is configured as an HA namenode with QJM; the 
> configuration looks like this:
> {code:xml}
>   <property>
>     <name>dfs.nameservices</name>
>     <value>hadoopdev</value>
>   </property>
>   <property>
>     <name>dfs.ha.namenodes.hadoopdev</name>
>     <value>nn1,nn2</value>
>   </property>
>   <property>
>     <name>dfs.namenode.rpc-address.hadoopdev.nn1</name>
>     <value>fphd9.ctpilot1.com:9000</value>
>   </property>
>   <property>
>     <name>dfs.namenode.http-address.hadoopdev.nn1</name>
>     <value>fphd9.ctpilot1.com:50070</value>
>   </property>
>   <property>
>     <name>dfs.namenode.rpc-address.hadoopdev.nn2</name>
>     <value>fphd10.ctpilot1.com:9000</value>
>   </property>
>   <property>
>     <name>dfs.namenode.http-address.hadoopdev.nn2</name>
>     <value>fphd10.ctpilot1.com:50070</value>
>   </property>
>   <property>
>     <name>dfs.namenode.shared.edits.dir</name>
>     <value>qjournal://fphd8.ctpilot1.com:8485;fphd9.ctpilot1.com:8485;fphd10.ctpilot1.com:8485/hadoopdev</value>
>   </property>
>   <property>
>     <name>dfs.client.failover.proxy.provider.hadoopdev</name>
>     <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
>   </property>
>   <property>
>     <name>dfs.ha.fencing.methods</name>
>     <value>shell(/bin/true)</value>
>   </property>
>   <property>
>     <name>dfs.journalnode.edits.dir</name>
>     <value>/data/hadoop/hadoop-data-2/journal</value>
>   </property>
>   <property>
>     <name>dfs.ha.automatic-failover.enabled</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>ha.zookeeper.quorum</name>
>     <value>fphd1.ctpilot1.com:</value>
>   </property>
> {code}
> I traced the code and found that when stopping the hbase master node, it 
> tries to invoke the "close" method on the namenode proxy, but the instance 
> created by {{org.apache.hadoop.hdfs.NameNodeProxies.createProxy}} with 
> failoverProxyProviderClass 
> {{org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider}} 
> does not implement the Closeable interface.
> In the non-HA case, the created instance is 
> {{org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB}}, 
> which implements Closeable.
> TL;DR:
> With hbase connecting to a hadoop HA namenode, stopping the hbase master or 
> regionserver cannot find a {{close}} method to gracefully close the namenode 
> session.



--
This message was sent by Atlassian JIRA
(v6.1#6144)