GitHub user chesterxgchen commented on the pull request:

    https://github.com/apache/spark/pull/2111#issuecomment-53474088
  
    I just checked against 2.0.6-alpha; the tests passed as well.
    
    Note that 2.0.6-alpha was released in Aug. 2013, after 0.23.9 in July 2013:
    http://hadoop.apache.org/releases.html
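
    For reference, the same check can be reproduced in a single batch invocation (a sketch, assuming a Spark checkout with the yarn-alpha profile, as in the log below):

        sbt/sbt -Pyarn-alpha -Dhadoop.version=2.0.6-alpha "yarn-alpha/test"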
    
    
    ᚛ |SPARK-3177|$ sbt/sbt -Pyarn-alpha -Dhadoop.version=2.0.6-alpha 
    Using /Library/Java/JavaVirtualMachines/1.6.0_51-b11-457.jdk/Contents/Home as default JAVA_HOME.
    Note, this will be overridden by -java-home if it is set.
    [info] Loading project definition from /Users/chester/projects/alpine/spark/project/project
    [info] Loading project definition from /Users/chester/.sbt/0.13/staging/ec3aa8f39111944cc5f2/sbt-pom-reader/project
    [warn] Multiple resolvers having different access mechanism configured with same name 'sbt-plugin-releases'. To avoid conflict, Remove duplicate project resolvers (`resolvers`) or rename publishing resolver (`publishTo`).
    [info] Loading project definition from /Users/chester/projects/alpine/spark/project
    [info] Set current project to spark-parent (in build file:/Users/chester/projects/alpine/spark/)
    > yarn-alpha/test
    [info] Compiling 15 Scala sources to /Users/chester/projects/alpine/spark/yarn/alpha/target/scala-2.10/classes...
    [info] ClientBaseSuite:
    Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
    [info] - default Yarn application classpath
    [info] - default MR application classpath
    [info] - resultant classpath for an application that defines a classpath for YARN
    [info] - resultant classpath for an application that defines a classpath for MR
    [info] - resultant classpath for an application that defines both classpaths, YARN and MR
    [info] - Local jar URIs
    2014-08-26 12:14:21.761 java[77357:e203] Unable to load realm info from SCDynamicStore
    14/08/26 12:14:21 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    14/08/26 12:14:22 INFO ClientBaseSuite$DummyClient$$EnhancerByMockitoWithCGLIB$$96b82aa8: Preparing Local resources
    14/08/26 12:14:22 INFO ClientBaseSuite$DummyClient$$EnhancerByMockitoWithCGLIB$$96b82aa8: Prepared Local resources Map(addJar3 -> resource {, port: -1, file: "/", }, size: 1224, timestamp: 1403824330000, type: FILE, visibility: PUBLIC, )
    [info] - Jar path propagation through SparkConf
    [info] - check access nns empty
    [info] - check access nns unset
    [info] - check access nns
    [info] - check access nns space
    [info] - check access two nns
    [info] - check token renewer
    14/08/26 12:14:22 ERROR ClientBase: Can't get Master Kerberos principal for use as renewer
    [info] - check token renewer default
    [info] ClientDistributedCacheManagerSuite:
    [info] - test getFileStatus empty
    [info] - test getFileStatus cached
    [info] - test addResource
    [info] - test addResource link null
    [info] - test addResource appmaster only
    [info] - test addResource archive
    [info] YarnSparkHadoopUtilSuite:
    [info] - shell script escaping
    [info] ScalaTest
    [info] Run completed in 2 seconds, 368 milliseconds.
    [info] Total number of tests run: 21
    [info] Suites: completed 3, aborted 0
    [info] Tests: succeeded 21, failed 0, canceled 0, ignored 0, pending 0
    [info] All tests passed.
    [info] Passed: Total 21, Failed 0, Errors 0, Passed 21
    [success] Total time: 17 s, completed Aug 26, 2014 12:14:22 PM
    
    


