[jira] [Created] (HDFS-13419) client can communicate with server even if hdfs delegation token has expired

2018-04-09 Thread wangqiang.shen (JIRA)
wangqiang.shen created HDFS-13419:
-

 Summary: client can communicate with server even if hdfs delegation token has expired
 Key: HDFS-13419
 URL: https://issues.apache.org/jira/browse/HDFS-13419
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: wangqiang.shen


I was testing the HDFS delegation token expiry problem using Spark Streaming. If I 
set my batch interval to less than 10 seconds, my Spark Streaming program does not 
die; but if the batch interval is set to more than 10 seconds, the program dies of 
the HDFS delegation token expiry problem, with the following exception:

{code:java}
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): token (HDFS_DELEGATION_TOKEN token 14042 for test) is expired
at org.apache.hadoop.ipc.Client.call(Client.java:1468)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy11.getListing(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:554)
at sun.reflect.GeneratedMethodAccessor40.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy12.getListing(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1969)
at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1952)
at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:693)
at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:105)
at org.apache.hadoop.hdfs.DistributedFileSystem$15.doCall(DistributedFileSystem.java:755)
at org.apache.hadoop.hdfs.DistributedFileSystem$15.doCall(DistributedFileSystem.java:751)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:751)
at com.envisioncn.arch.App$2$1.call(App.java:120)
at com.envisioncn.arch.App$2$1.call(App.java:91)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:218)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:218)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:902)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:902)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1899)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1899)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:86)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{code}

The Spark Streaming program only calls the FileSystem.listStatus function in every 
batch:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Each batch only lists the HDFS root directory.
FileSystem fs = FileSystem.get(new Configuration());
FileStatus[] statuses = fs.listStatus(new Path("/"));

for (FileStatus status : statuses) {
  System.out.println(status.getPath());
}
{code}

I found that when the Hadoop client sends an RPC request to the server, it first 
gets a Connection object and sets the connection up if it does not already exist. 
During connection setup the client obtains a SaslRpcClient to connect to the 
server side, and the server authenticates the client at that setup stage. But if 
the connection already exists, the client reuses it, so the authentication stage 
does not happen.

The connection between client and server is closed once its idle time exceeds 
ipc.client.connection.maxidletime, whose default value is 10 seconds. Therefore, 
as long as I keep sending requests to the server at a fixed interval smaller than 
10 seconds, the connection is never closed, and the delegation token expiry 
problem never shows up.
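
For reference, a minimal sketch of how this can be confirmed; the configuration key and its default are the standard ones from core-default.xml, while the lowered value is only an assumption for experimentation:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

// Sketch only: lowering the idle timeout below the batch interval forces the
// cached RPC connection to be torn down between batches, so every batch must
// re-run SASL authentication and the expired token is rejected immediately.
Configuration conf = new Configuration();
conf.setInt("ipc.client.connection.maxidletime", 1000); // default: 10000 ms
FileSystem fs = FileSystem.get(conf);
{code}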



 




Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64

2018-04-09 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-trunk-win/432/

No changes




-1 overall


The following subsystems voted -1:
compile mvninstall unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h 00m 00s)
unit


Specific tests:

Failed junit tests :

   hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec 
   hadoop.fs.contract.rawlocal.TestRawlocalContractAppend 
   hadoop.fs.TestFsShellCopy 
   hadoop.fs.TestFsShellList 
   hadoop.fs.TestLocalFileSystem 
   hadoop.fs.TestRawLocalFileSystemContract 
   hadoop.fs.TestTrash 
   hadoop.http.TestHttpServer 
   hadoop.http.TestHttpServerLogs 
   hadoop.io.nativeio.TestNativeIO 
   hadoop.ipc.TestSocketFactory 
   hadoop.metrics2.impl.TestStatsDMetrics 
   hadoop.metrics2.sink.TestRollingFileSystemSinkWithLocal 
   hadoop.security.TestSecurityUtil 
   hadoop.security.TestShellBasedUnixGroupsMapping 
   hadoop.security.token.TestDtUtilShell 
   hadoop.util.TestNativeCodeLoader 
   hadoop.fs.TestResolveHdfsSymlink 
   hadoop.hdfs.crypto.TestHdfsCryptoStreams 
   hadoop.hdfs.qjournal.client.TestQuorumJournalManager 
   hadoop.hdfs.qjournal.server.TestJournalNode 
   hadoop.hdfs.qjournal.server.TestJournalNodeSync 
   hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages 
   hadoop.hdfs.server.blockmanagement.TestOverReplicatedBlocks 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistPolicy 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestProvidedImpl 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica 
   hadoop.hdfs.server.datanode.TestBlockPoolSliceStorage 
   hadoop.hdfs.server.datanode.TestBlockRecovery 
   hadoop.hdfs.server.datanode.TestBlockScanner 
   hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics 
   hadoop.hdfs.server.datanode.TestDataNodeFaultInjector 
   hadoop.hdfs.server.datanode.TestDataNodeMetrics 
   hadoop.hdfs.server.datanode.TestDataNodeUUID 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.server.datanode.TestHSync 
   hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage 
   hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame 
   hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand 
   hadoop.hdfs.server.diskbalancer.TestDiskBalancerRPC 
   hadoop.hdfs.server.mover.TestMover 
   hadoop.hdfs.server.mover.TestStorageMover 
   hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA 
   hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA 
   hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics 
   hadoop.hdfs.server.namenode.snapshot.TestINodeFileUnderConstructionWithSnapshot 
   hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot 
   hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots 
   hadoop.hdfs.server.namenode.snapshot.TestSnapRootDescendantDiff 
   hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport 
   hadoop.hdfs.server.namenode.TestAddBlock 
   hadoop.hdfs.server.namenode.TestAuditLoggerWithCommands 
   hadoop.hdfs.server.namenode.TestCacheDirectives 
   hadoop.hdfs.server.namenode.TestCheckpoint 
   hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate 
   hadoop.hdfs.server.namenode.TestEditLogRace 
   hadoop.hdfs.server.namenode.TestFileTruncate 
   hadoop.hdfs.server.namenode.TestFsck 
   hadoop.hdfs.server.namenode.TestFSImage 
   hadoop.hdfs.server.namenode.TestFSImageWithSnapshot 
   hadoop.hdfs.server.namenode.TestNamenodeCapacityReport 
   hadoop.hdfs.server.namenode.TestNameNodeMXBean 
   hadoop.hdfs.server.namenode.TestNestedEncryptionZones 
   hadoop.hdfs.server.namenode.TestQuotaByStorageType 
   hadoop.hdfs.server.namenode.TestReencryptionHandler 
   hadoop.hdfs.server.namenode.TestStartup 
   hadoop.hdfs.TestDatanodeRegistration 
   hadoop.hdfs.TestDatanodeReport 
   hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs 
   hadoop.hdfs.TestDecommission 
   hadoop.hdfs.TestDFSOutputStream 
   hadoop.hdfs.TestDFSShell 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure 
   

[jira] [Created] (HDFS-13418) NetworkTopology should be configurable when enabling DFSNetworkTopology

2018-04-09 Thread Tao Jie (JIRA)
Tao Jie created HDFS-13418:
--

 Summary: NetworkTopology should be configurable when enabling DFSNetworkTopology
 Key: HDFS-13418
 URL: https://issues.apache.org/jira/browse/HDFS-13418
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.0.1
Reporter: Tao Jie
Assignee: Tao Jie


In HDFS-11530 we introduced DFSNetworkTopology, and in HDFS-11998 we made 
DFSNetworkTopology the default implementation.

We still have {{net.topology.impl=org.apache.hadoop.net.NetworkTopology}} in 
core-default.xml. Actually this property has no effect once 
{{dfs.use.dfs.network.topology}} is true.
In {{DatanodeManager}}, networkTopology is initialized as
{code}
if (useDfsNetworkTopology) {
  networktopology = DFSNetworkTopology.getInstance(conf);
} else {
  networktopology = NetworkTopology.getInstance(conf);
}
{code}
I think we should still make the NetworkTopology implementation configurable 
rather than hard-code it, since we may need another NetworkTopology impl; a rough 
sketch follows below.
I am not sure if there are other considerations. Any thoughts? [~vagarychen] 
[~linyiqun]
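
A rough sketch of what a configurable lookup could look like; the key name below is invented for illustration and is not a concrete proposal for the property name:

{code:java}
// Hypothetical sketch: resolve the topology class from configuration instead
// of hard-coding the two implementations. "dfs.net.topology.impl" is an
// invented key used only for this illustration.
Class<? extends NetworkTopology> clazz = conf.getClass(
    "dfs.net.topology.impl",
    DFSNetworkTopology.class,   // keep the current default behaviour
    NetworkTopology.class);
networktopology = ReflectionUtils.newInstance(clazz, conf);
{code}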







Re: [VOTE] Release Apache Hadoop 3.0.2 (RC0)

2018-04-09 Thread Xiao Chen
Thanks Eddy for the effort!

+1 (binding)

   - Downloaded src tarball and verified checksums
   - Built from src
   - Started a pseudo distributed hdfs cluster
   - Verified basic hdfs operations work
   - Sanity checked logs / webui

Best,
-Xiao


On Mon, Apr 9, 2018 at 11:28 AM, Eric Payne 
wrote:

> Thanks a lot for working to produce this release.
>
> +1 (binding)
> Tested the following:
> - built from source and installed on 6-node pseudo-cluster
> - tested Capacity Scheduler FairOrderingPolicy and FifoOrderingPolicy to
> determine that capacity was assigned as expected in each case
> - tested user weights with FifoOrderingPolicy to ensure that weights were
> assigned to users as expected.
>
> Eric Payne
>
>
>
>
>
>
> On Friday, April 6, 2018, 1:17:10 PM CDT, Lei Xu  wrote:
>
>
>
>
>
> Hi, All
>
> I've created release candidate RC-0 for Apache Hadoop 3.0.2.
>
> Please note: this is an amendment for Apache Hadoop 3.0.1 release to
> fix shaded jars in apache maven repository. The codebase of 3.0.2
> release is the same as 3.0.1.  New bug fixes will be included in
> Apache Hadoop 3.0.3 instead.
>
> The release page is:
> https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+3.0+Release
>
> New RC is available at: http://home.apache.org/~lei/hadoop-3.0.2-RC0/
>
> The git tag is release-3.0.2-RC0, and the latest commit is
> 5c141f7c0f24c12cb8704a6ccc1ff8ec991f41ee
>
> The maven artifacts are available at
> https://repository.apache.org/content/repositories/orgapachehadoop-1096/
>
> Please try the release, especially, *verify the maven artifacts*, and vote.
>
> The vote will run 5 days, ending 4/11/2018.
>
> Thanks to everyone who helped to spot the error and proposed fixes!
>
> -
> To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org
>
>
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>
>


[VOTE] Release Apache Hadoop 2.7.6 (RC0)

2018-04-09 Thread Konstantin Shvachko
Hi everybody,

This is the next dot release in the Apache Hadoop 2.7 line. The previous one,
2.7.5, was released on December 14, 2017.
Release 2.7.6 includes critical bug fixes and optimizations. See more
details in Release Note:
http://home.apache.org/~shv/hadoop-2.7.6-RC0/releasenotes.html

The RC0 is available at: http://home.apache.org/~shv/hadoop-2.7.6-RC0/

Please give it a try and vote on this thread. The vote will run for 5 days
ending 04/16/2018.

My up to date public key is available from:
https://dist.apache.org/repos/dist/release/hadoop/common/KEYS

Thanks,
--Konstantin


[jira] [Created] (HDFS-13417) hdfsGetHedgedReadMetrics() crashes when 'fs' is a non-HDFS filesystem

2018-04-09 Thread Sailesh Mukil (JIRA)
Sailesh Mukil created HDFS-13417:


 Summary: hdfsGetHedgedReadMetrics() crashes when 'fs' is a 
non-HDFS filesystem
 Key: HDFS-13417
 URL: https://issues.apache.org/jira/browse/HDFS-13417
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs
Affects Versions: 3.0.0-alpha4
Reporter: Sailesh Mukil


{code:java}
(gdb) bt
#0  0x003346c32625 in raise () from /lib64/libc.so.6
#1  0x003346c33e05 in abort () from /lib64/libc.so.6
#2  0x7f185be140b5 in os::abort(bool) ()
   from /usr/java/jdk1.8.0_121-cloudera/jre/lib/amd64/server/libjvm.so
#3  0x7f185bfb6443 in VMError::report_and_die() ()
   from /usr/java/jdk1.8.0_121-cloudera/jre/lib/amd64/server/libjvm.so
#4  0x7f185be195bf in JVM_handle_linux_signal ()
   from /usr/java/jdk1.8.0_121-cloudera/jre/lib/amd64/server/libjvm.so
#5  0x7f185be0fb03 in signalHandler(int, siginfo*, void*) ()
   from /usr/java/jdk1.8.0_121-cloudera/jre/lib/amd64/server/libjvm.so
#6  <signal handler called>
#7  0x7f185bbc1a7b in jni_invoke_nonstatic(JNIEnv_*, JavaValue*, _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*) ()
   from /usr/java/jdk1.8.0_121-cloudera/jre/lib/amd64/server/libjvm.so
#8  0x7f185bbc7e81 in jni_CallObjectMethodV ()
   from /usr/java/jdk1.8.0_121-cloudera/jre/lib/amd64/server/libjvm.so
#9  0x0212e2b7 in invokeMethod ()
#10 0x02131297 in hdfsGetHedgedReadMetrics ()
...
...
{code}

hdfsGetHedgedReadMetrics() is not supported for non-HDFS filesystems, so we need 
to fix it to fail gracefully instead of crashing when 'fs' is not an HDFS 
filesystem.
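
For illustration, the Java-side equivalent of the guard libhdfs would need before making the JNI call; a minimal sketch assuming the usual DistributedFileSystem/DFSClient accessors:

{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DFSHedgedReadMetrics;
import org.apache.hadoop.hdfs.DistributedFileSystem;

// Sketch only: check the concrete filesystem type before asking for
// HDFS-specific metrics instead of assuming the cast/JNI call will succeed.
FileSystem fs = FileSystem.get(new Configuration());
if (fs instanceof DistributedFileSystem) {
  DFSHedgedReadMetrics metrics =
      ((DistributedFileSystem) fs).getClient().getHedgedReadMetrics();
  System.out.println("hedged read ops: " + metrics.getHedgedReadOps());
} else {
  throw new IOException("hedged read metrics are only available on HDFS");
}
{code}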






Re: Hdfs build on branch-2 are failing.

2018-04-09 Thread Xiao Chen
Similar to Haibo's link, I found https://stackoverflow.com/a/22526046/1338884 to
be working.
I've thrown a patch at HADOOP-15375 to unblock branch-2.

I'm also not sure why this is passing for trunk.

-Xiao

On Thu, Apr 5, 2018 at 3:41 PM, Haibo Chen  wrote:

> Not sure why this did not show up in trunk. A quick google search takes me
> to bower-install-cert-untrusted-error
>
> On Thu, Apr 5, 2018 at 10:40 AM, Vrushali C 
> wrote:
>
> > Seeing the same for branch-2 yarn patches as well.
> >
> > On Fri, Mar 30, 2018 at 12:54 PM, Rushabh Shah  >
> > wrote:
> >
> > > Hi All,
> > > Recently couple of my hdfs builds failed on branch-2.
> > > Slave id# H9 
> > > Builds that failed:
> > > https://builds.apache.org/job/PreCommit-HDFS-Build/23737/console
> > > https://builds.apache.org/job/PreCommit-HDFS-Build/23735/console
> > >
> > > It failed with the following error:
> > >
> > > npm http GET https://registry.npmjs.org/bower
> > > npm http GET https://registry.npmjs.org/bower
> > > npm http GET https://registry.npmjs.org/bower
> > > npm ERR! Error: CERT_UNTRUSTED
> > > npm ERR! at SecurePair.<anonymous> (tls.js:1370:32)
> > > npm ERR! at SecurePair.EventEmitter.emit (events.js:92:17)
> > > npm ERR! at SecurePair.maybeInitFinished (tls.js:982:10)
> > > npm ERR! at CleartextStream.read [as _read] (tls.js:469:13)
> > > npm ERR! at CleartextStream.Readable.read (_stream_readable.js:320:10)
> > > npm ERR! at EncryptedStream.write [as _write] (tls.js:366:25)
> > > npm ERR! at doWrite (_stream_writable.js:223:10)
> > > npm ERR! at writeOrBuffer (_stream_writable.js:213:5)
> > > npm ERR! at EncryptedStream.Writable.write (_stream_writable.js:180:11)
> > > npm ERR! at write (_stream_readable.js:583:24)
> > > npm ERR! If you need help, you may report this log at:
> > > npm ERR!
> > > npm ERR! or email it to:
> > > npm ERR!
> > > npm ERR! System Linux 3.13.0-143-generic
> > > npm ERR! command "/usr/bin/nodejs" "/usr/bin/npm" "install" "-g" "bower"
> > > npm ERR! cwd /root
> > > npm ERR! node -v v0.10.25
> > > npm ERR! npm -v 1.3.10
> > > npm ERR!
> > > npm ERR! Additional logging details can be found in:
> > > npm ERR! /root/npm-debug.log
> > > npm ERR! not ok code 0
> > >
> > >
> > >
> > > The certificate details on https://registry.npmjs.org/bower:
> > >
> > > Not valid before: Thursday, March 15, 2018 at 8:39:52 AM Central
> Daylight
> > > Time
> > > Not valid after: Saturday, June 13, 2020 at 2:06:17 PM Central Daylight
> > > Time
> > >
> > > Far from being an expert on SSL: do we need to change the truststore on
> > > the slave also?
> > >
> > > Appreciate if anyone can help fixing this.
> > >
> > >
> > > Thanks,
> > > Rushabh Shah.
> > >
> >
>


[jira] [Created] (HDFS-13416) TestNodeManager tests fail

2018-04-09 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDFS-13416:
-

 Summary: TestNodeManager tests fail
 Key: HDFS-13416
 URL: https://issues.apache.org/jira/browse/HDFS-13416
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Bharat Viswanadham


java.lang.IllegalArgumentException: Invalid UUID string: h0

at java.util.UUID.fromString(UUID.java:194)
at org.apache.hadoop.hdds.protocol.DatanodeDetails.<init>(DatanodeDetails.java:68)
at org.apache.hadoop.hdds.protocol.DatanodeDetails.<init>(DatanodeDetails.java:36)
at org.apache.hadoop.hdds.protocol.DatanodeDetails$Builder.build(DatanodeDetails.java:416)
at org.apache.hadoop.hdds.scm.TestUtils.getDatanodeDetails(TestUtils.java:95)
at org.apache.hadoop.hdds.scm.TestUtils.getDatanodeDetails(TestUtils.java:48)
at org.apache.hadoop.hdds.scm.node.TestNodeManager.createNodeSet(TestNodeManager.java:719)
at org.apache.hadoop.hdds.scm.node.TestNodeManager.testScmLogsHeartbeatFlooding(TestNodeManager.java:913)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
at com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)

 

This is happening after the change in HDFS-13300.
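
For reference, the failure reduces to passing a plain hostname where a UUID string is expected; a minimal sketch (the suggested fix direction is an assumption):

{code:java}
import java.util.UUID;

// Reproduces the reported exception: "h0" is a hostname, not a UUID.
UUID.fromString("h0"); // throws IllegalArgumentException: Invalid UUID string: h0

// The test utility presumably needs a real UUID for the datanode id,
// keeping "h0" only as the hostname.
String datanodeUuid = UUID.randomUUID().toString();
{code}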






Re: [VOTE] Release Apache Hadoop 3.0.2 (RC0)

2018-04-09 Thread Eric Payne
Thanks a lot for working to produce this release.

+1 (binding)
Tested the following:
- built from source and installed on 6-node pseudo-cluster
- tested Capacity Scheduler FairOrderingPolicy and FifoOrderingPolicy to 
determine that capacity was assigned as expected in each case
- tested user weights with FifoOrderingPolicy to ensure that weights were 
assigned to users as expected.

Eric Payne






On Friday, April 6, 2018, 1:17:10 PM CDT, Lei Xu  wrote: 





Hi, All

I've created release candidate RC-0 for Apache Hadoop 3.0.2.

Please note: this is an amendment for Apache Hadoop 3.0.1 release to
fix shaded jars in apache maven repository. The codebase of 3.0.2
release is the same as 3.0.1.  New bug fixes will be included in
Apache Hadoop 3.0.3 instead.

The release page is:
https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+3.0+Release

New RC is available at: http://home.apache.org/~lei/hadoop-3.0.2-RC0/

The git tag is release-3.0.2-RC0, and the latest commit is
5c141f7c0f24c12cb8704a6ccc1ff8ec991f41ee

The maven artifacts are available at
https://repository.apache.org/content/repositories/orgapachehadoop-1096/

Please try the release, especially, *verify the maven artifacts*, and vote.

The vote will run 5 days, ending 4/11/2018.

Thanks to everyone who helped to spot the error and proposed fixes!




[jira] [Created] (HDFS-13415) Ozone: Remove cblock code from HDFS-7240 (move to a different branch)

2018-04-09 Thread Elek, Marton (JIRA)
Elek, Marton created HDFS-13415:
---

 Summary: Ozone: Remove cblock code from HDFS-7240 (move to a 
different branch)
 Key: HDFS-13415
 URL: https://issues.apache.org/jira/browse/HDFS-13415
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Elek, Marton
Assignee: Elek, Marton


The Ozone components (hdds/ozone) and the cblock components (cblock) have 
different stability. We suggest separating their development: Ozone could remain 
on HDFS-7240 and be merged to trunk as voted by the community.

The cblock development could be kept on a separate feature branch (HDFS-8) and be 
developed and merged independently.

To achieve this we:

 1. Need to remove the cblock code from the HDFS-7240 branch (this is what this 
jira is about).
 2. Create a new branch from the latest HDFS-7240 which contains the cblock 
server (not in this jira).






[jira] [Created] (HDFS-13414) Ozone: Update existing Ozone documentation according to the recent changes

2018-04-09 Thread Elek, Marton (JIRA)
Elek, Marton created HDFS-13414:
---

 Summary: Ozone: Update existing Ozone documentation according to 
the recent changes
 Key: HDFS-13414
 URL: https://issues.apache.org/jira/browse/HDFS-13414
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Elek, Marton
Assignee: Elek, Marton


1. Datanode port has been changed
2. Remove the references to the branch (prepare to merge)
3. CLI commands are changed (e.g. ozone scm)






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-04-09 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/746/

[Apr 8, 2018 4:01:55 AM] (yqlin) HDFS-13402. RBF: Fix java doc for 
StateStoreFileSystemImpl. Contributed




-1 overall


The following subsystems voted -1:
unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/746/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/746/artifact/out/diff-compile-javac-root.txt
  [288K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/746/artifact/out/diff-checkstyle-root.txt
  [17M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/746/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/746/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/746/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/746/artifact/out/whitespace-eol.txt
  [9.4M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/746/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/746/artifact/out/xml.txt
  [4.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/746/artifact/out/diff-javadoc-javadoc-root.txt
  [760K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/746/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [300K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/746/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/746/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
  [84K]

Powered by Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org


[jira] [Created] (HDFS-13413) ClusterId and DatanodeUuid should be marked mandatory fields in SCMRegisteredCmdResponseProto

2018-04-09 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDFS-13413:
--

 Summary: ClusterId and DatanodeUuid should be marked mandatory fields in SCMRegisteredCmdResponseProto
 Key: HDFS-13413
 URL: https://issues.apache.org/jira/browse/HDFS-13413
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ozone
 Environment: ClusterId as well as DatanodeUuid are currently optional fields in 
SCMRegisteredCmdResponseProto. We have to make both clusterId and DatanodeUuid 
required fields and handle them properly. As of now, we don't do anything with 
the response of datanode registration. We should validate the clusterId and also 
the datanodeUuid; a sketch follows below.
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee
 Fix For: HDFS-7240
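
A hedged sketch of the client-side validation this implies; the hasXxx() accessors below are the usual protobuf-generated presence checks, and their exact names are an assumption about the generated API:

{code:java}
import java.io.IOException;

// Sketch only: reject a registration response missing either field instead
// of ignoring the response entirely.
static void validateRegistration(SCMRegisteredCmdResponseProto response)
    throws IOException {
  if (!response.hasClusterID() || !response.hasDatanodeUUID()) {
    throw new IOException(
        "SCM registration response is missing clusterId and/or datanodeUuid");
  }
}
{code}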





