No, the build works fine, at least on the test machines. As I said, try
running from the actual Spark home directory, not from bin/. You are
still launching spark-shell from inside bin/.
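For example (with D:\apache\spark standing in for wherever your Spark
home is):

D:\apache\spark> bin\spark-shell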

On Thu, Dec 5, 2019 at 4:37 PM Ping Liu <pingpinga...@gmail.com> wrote:
>
> Hi Sean,
>
> Thanks for your response!
>
> Sorry, I didn't mention that "build/mvn ..." doesn't work for me on Windows.
> So I did go to the Spark home directory and ran mvn from there.  The
> following is my build and run result.  The source code was updated just
> yesterday.  I guess the POM should somehow specify a newer Guava library.
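> By the way, the method signature in the error below,
> (ZLjava/lang/String;Ljava/lang/Object;)V, decodes to
> checkArgument(boolean, String, Object), an overload that I believe only
> exists in newer Guava releases, so an older Guava jar may be winning on
> the classpath.  If it helps, the Guava version Maven actually resolves
> can be checked with the standard dependency plugin, e.g.:
>
> mvn dependency:tree -Dincludes=com.google.guava:guava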
>
> Thanks Sean.
>
> Ping
>
> [INFO] Reactor Summary for Spark Project Parent POM 3.0.0-SNAPSHOT:
> [INFO]
> [INFO] Spark Project Parent POM ........................... SUCCESS [ 14.794 s]
> [INFO] Spark Project Tags ................................. SUCCESS [ 18.233 s]
> [INFO] Spark Project Sketch ............................... SUCCESS [ 20.077 s]
> [INFO] Spark Project Local DB ............................. SUCCESS [  7.846 s]
> [INFO] Spark Project Networking ........................... SUCCESS [ 14.906 s]
> [INFO] Spark Project Shuffle Streaming Service ............ SUCCESS [  6.267 s]
> [INFO] Spark Project Unsafe ............................... SUCCESS [ 31.710 s]
> [INFO] Spark Project Launcher ............................. SUCCESS [ 10.227 s]
> [INFO] Spark Project Core ................................. SUCCESS [08:03 min]
> [INFO] Spark Project ML Local Library ..................... SUCCESS [01:51 min]
> [INFO] Spark Project GraphX ............................... SUCCESS [02:20 min]
> [INFO] Spark Project Streaming ............................ SUCCESS [03:16 min]
> [INFO] Spark Project Catalyst ............................. SUCCESS [08:45 min]
> [INFO] Spark Project SQL .................................. SUCCESS [12:12 min]
> [INFO] Spark Project ML Library ........................... SUCCESS [  16:28 h]
> [INFO] Spark Project Tools ................................ SUCCESS [ 23.602 s]
> [INFO] Spark Project Hive ................................. SUCCESS [07:50 min]
> [INFO] Spark Project Graph API ............................ SUCCESS [  8.734 s]
> [INFO] Spark Project Cypher ............................... SUCCESS [ 12.420 s]
> [INFO] Spark Project Graph ................................ SUCCESS [ 10.186 s]
> [INFO] Spark Project REPL ................................. SUCCESS [01:03 min]
> [INFO] Spark Project YARN Shuffle Service ................. SUCCESS [01:19 min]
> [INFO] Spark Project YARN ................................. SUCCESS [02:19 min]
> [INFO] Spark Project Assembly ............................. SUCCESS [ 18.912 s]
> [INFO] Kafka 0.10+ Token Provider for Streaming ........... SUCCESS [ 57.925 s]
> [INFO] Spark Integration for Kafka 0.10 ................... SUCCESS [01:20 min]
> [INFO] Kafka 0.10+ Source for Structured Streaming ........ SUCCESS [02:26 min]
> [INFO] Spark Project Examples ............................. SUCCESS [02:00 min]
> [INFO] Spark Integration for Kafka 0.10 Assembly .......... SUCCESS [ 28.354 s]
> [INFO] Spark Avro ......................................... SUCCESS [01:44 min]
> [INFO] ------------------------------------------------------------------------
> [INFO] BUILD SUCCESS
> [INFO] ------------------------------------------------------------------------
> [INFO] Total time:  17:30 h
> [INFO] Finished at: 2019-12-05T12:20:01-08:00
> [INFO] ------------------------------------------------------------------------
>
> D:\apache\spark>cd bin
>
> D:\apache\spark\bin>ls
> beeline               load-spark-env.cmd  run-example       spark-shell       spark-sql2.cmd     sparkR.cmd
> beeline.cmd           load-spark-env.sh   run-example.cmd   spark-shell.cmd   spark-submit       sparkR2.cmd
> docker-image-tool.sh  pyspark             spark-class       spark-shell2.cmd  spark-submit.cmd
> find-spark-home       pyspark.cmd         spark-class.cmd   spark-sql         spark-submit2.cmd
> find-spark-home.cmd   pyspark2.cmd        spark-class2.cmd  spark-sql.cmd     sparkR
>
> D:\apache\spark\bin>spark-shell
> Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V
>         at org.apache.hadoop.conf.Configuration.set(Configuration.java:1357)
>         at org.apache.hadoop.conf.Configuration.set(Configuration.java:1338)
>         at org.apache.spark.deploy.SparkHadoopUtil$.org$apache$spark$deploy$SparkHadoopUtil$$appendS3AndSparkHadoopHiveConfigurations(SparkHadoopUtil.scala:456)
>         at org.apache.spark.deploy.SparkHadoopUtil$.newConfiguration(SparkHadoopUtil.scala:427)
>         at org.apache.spark.deploy.SparkSubmit.$anonfun$prepareSubmitEnvironment$2(SparkSubmit.scala:342)
>         at org.apache.spark.deploy.SparkSubmit$$Lambda$132/817978763.apply(Unknown Source)
>         at scala.Option.getOrElse(Option.scala:189)
>         at org.apache.spark.deploy.SparkSubmit.prepareSubmitEnvironment(SparkSubmit.scala:342)
>         at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:871)
>         at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
>         at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
>         at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
>         at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1007)
>         at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1016)
>         at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
>
> D:\apache\spark\bin>
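> (If it helps, the Guava jar the build actually put on the runtime
> classpath can be checked too; assuming a Scala 2.12 build, something like
>
> D:\apache\spark>dir assembly\target\scala-2.12\jars | findstr guava
>
> should show the bundled version.)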
>
> On Thu, Dec 5, 2019 at 1:33 PM Sean Owen <sro...@gmail.com> wrote:
>>
>> What was the build error? You didn't say. Are you sure the build succeeded?
>> Try running from the Spark home directory, not from bin/.
>> I know we run tests on Windows, and they appear to pass.
>>
>> On Thu, Dec 5, 2019 at 3:28 PM Ping Liu <pingpinga...@gmail.com> wrote:
>> >
>> > Hello,
>> >
>> > I understand Spark is preferably built on Linux, but I only have a Windows
>> > machine with a slow VirtualBox VM for Linux.  So I would like to be able to
>> > build and run Spark code in a Windows environment.
>> >
>> > Unfortunately,
>> >
>> > # Apache Hadoop 2.6.X
>> > ./build/mvn -Pyarn -DskipTests clean package
>> >
>> > # Apache Hadoop 2.7.X and later
>> > ./build/mvn -Pyarn -Phadoop-2.7 -Dhadoop.version=2.7.3 -DskipTests clean package
>> >
>> >
>> > Both are listed on 
>> > http://spark.apache.org/docs/latest/building-spark.html#specifying-the-hadoop-version-and-enabling-yarn
>> >
>> > But neither works for me.  (I stayed directly under the Spark root directory
>> > and ran "mvn -Pyarn -Phadoop-2.7 -Dhadoop.version=2.7.3 -DskipTests clean
>> > package".)
>> >
>> > Then I tried "mvn -Pyarn -Phadoop-3.2 -Dhadoop.version=3.2.1 -DskipTests clean package".
>> >
>> > Now the build works.  But when I run spark-shell, I get the following error.
>> >
>> > D:\apache\spark\bin>spark-shell
>> > Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V
>> >         at org.apache.hadoop.conf.Configuration.set(Configuration.java:1357)
>> >         at org.apache.hadoop.conf.Configuration.set(Configuration.java:1338)
>> >         at org.apache.spark.deploy.SparkHadoopUtil$.org$apache$spark$deploy$SparkHadoopUtil$$appendS3AndSparkHadoopHiveConfigurations(SparkHadoopUtil.scala:456)
>> >         at org.apache.spark.deploy.SparkHadoopUtil$.newConfiguration(SparkHadoopUtil.scala:427)
>> >         at org.apache.spark.deploy.SparkSubmit.$anonfun$prepareSubmitEnvironment$2(SparkSubmit.scala:342)
>> >         at org.apache.spark.deploy.SparkSubmit$$Lambda$132/817978763.apply(Unknown Source)
>> >         at scala.Option.getOrElse(Option.scala:189)
>> >         at org.apache.spark.deploy.SparkSubmit.prepareSubmitEnvironment(SparkSubmit.scala:342)
>> >         at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:871)
>> >         at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
>> >         at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
>> >         at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
>> >         at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1007)
>> >         at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1016)
>> >         at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
>> >
>> >
>> > Has anyone successfully built and run the Spark source code on Windows?
>> > Could you please share your experience?
>> >
>> > Thanks a lot!
>> >
>> > Ping
>> >

---------------------------------------------------------------------
To unsubscribe e-mail: dev-unsubscr...@spark.apache.org
