Hi Deepak,

For Spark, I am using the master branch and just updated the code yesterday.

For Guava, I actually deleted my old versions from the local Maven repo.
The build process of Spark automatically downloaded a few versions.  The
oldest version is 14.0.1.
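
(If it helps, the Guava versions the build pulls in can be listed with
something like "mvn dependency:tree -Dincludes=com.google.guava:guava" run
from the Spark root directory.)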

But even in 14.0.1
(https://guava.dev/releases/14.0.1/api/docs/com/google/common/base/Preconditions.html),
Preconditions.checkArgument() already requires a boolean as the first
parameter:

static void checkArgument(boolean expression, String errorMessageTemplate,
    Object... errorMessageArgs)

In the newer Guava versions, the checkArgument() overloads likewise all
require a boolean as the first parameter.
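
Just to illustrate, here is a minimal sketch of calling that varargs overload
(the class name CheckArgumentDemo and the message text are mine, only for
illustration, not taken from Hadoop or Spark):

import com.google.common.base.Preconditions;

public class CheckArgumentDemo {
    public static void main(String[] args) {
        String name = "spark.master";
        // Varargs overload present in Guava 14.0.1:
        // checkArgument(boolean, String, Object...)
        Preconditions.checkArgument(name != null && !name.isEmpty(),
            "Property name must not be empty: %s", name);
    }
}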

For Docker, using EC2 is a good idea.  Is there any documentation or guidance
for that setup?

Thanks.

Ping



On Thu, Dec 5, 2019 at 3:30 PM Deepak Vohra <dvohr...@yahoo.com> wrote:

> Such an exception could occur if a dependency version (most likely Guava) is
> not supported by the Spark version. Which Spark and Guava versions are you
> using? Use a more recent Guava dependency version in the Maven pom.xml.
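>
> For example (assuming the Spark pom exposes a guava.version property, which
> is worth verifying), something along the lines of
>
> ./build/mvn -Dguava.version=27.0-jre -Pyarn -DskipTests clean package
>
> could be used to try a newer Guava version during the build.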
>
> Regarding Docker, a cloud platform instance such as EC2 could be used with
> Hyper-V support.
>
> On Thursday, December 5, 2019, 10:51:59 PM UTC, Ping Liu <
> pingpinga...@gmail.com> wrote:
>
>
> Hi Deepak,
>
> Yes, I did use Maven. I even got the build to pass successfully when setting
> the Hadoop version to 3.2.  Please see my response to Sean's email.
>
> Unfortunately, I only have Docker Toolbox, as my Windows machine doesn't have
> Microsoft Hyper-V.  So I want to avoid using Docker for major work if
> possible.
>
> Thanks!
>
> Ping
>
>
> On Thu, Dec 5, 2019 at 2:24 PM Deepak Vohra <dvohr...@yahoo.com> wrote:
>
> Several alternatives are available:
>
> - Use Maven to build Spark on Windows.
> http://spark.apache.org/docs/latest/building-spark.html#apache-maven
>
> - Use a Docker image for CDH on Windows:
>   https://hub.docker.com/u/cloudera
>
>
> On Thursday, December 5, 2019, 09:33:43 p.m. UTC, Sean Owen <
> sro...@gmail.com> wrote:
>
>
> What was the build error? You didn't say. Are you sure it succeeded?
> Try running from the Spark home dir, not bin.
> I know we do run Windows tests and they appear to pass, etc.
>
> On Thu, Dec 5, 2019 at 3:28 PM Ping Liu <pingpinga...@gmail.com> wrote:
> >
> > Hello,
> >
> > I understand Spark is preferably built on Linux.  But I have a Windows
> > machine with a slow VirtualBox VM for Linux.  So I would like to be able to
> > build and run Spark code in a Windows environment.
> >
> > Unfortunately,
> >
> > # Apache Hadoop 2.6.X
> > ./build/mvn -Pyarn -DskipTests clean package
> >
> > # Apache Hadoop 2.7.X and later
> > ./build/mvn -Pyarn -Phadoop-2.7 -Dhadoop.version=2.7.3 -DskipTests clean package
> >
> >
> > Both are listed on
> http://spark.apache.org/docs/latest/building-spark.html#specifying-the-hadoop-version-and-enabling-yarn
> >
> > But neither works for me.  (I stay directly under the Spark root directory
> > and run "mvn -Pyarn -Phadoop-2.7 -Dhadoop.version=2.7.3 -DskipTests clean
> > package".)
> >
> > Then I tried "mvn -Pyarn -Phadoop-3.2 -Dhadoop.version=3.2.1 -DskipTests
> > clean package".
> >
> > Now the build works.  But when I run spark-shell, I get the following error.
> >
> > D:\apache\spark\bin>spark-shell
> > Exception in thread "main" java.lang.NoSuchMethodError:
> com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V
> >        at
> org.apache.hadoop.conf.Configuration.set(Configuration.java:1357)
> >        at
> org.apache.hadoop.conf.Configuration.set(Configuration.java:1338)
> >        at
> org.apache.spark.deploy.SparkHadoopUtil$.org$apache$spark$deploy$SparkHadoopUtil$$appendS3AndSparkHadoopHiveConfigurations(SparkHadoopUtil.scala:456)
> >        at
> org.apache.spark.deploy.SparkHadoopUtil$.newConfiguration(SparkHadoopUtil.scala:427)
> >        at
> org.apache.spark.deploy.SparkSubmit.$anonfun$prepareSubmitEnvironment$2(SparkSubmit.scala:342)
> >        at
> org.apache.spark.deploy.SparkSubmit$$Lambda$132/817978763.apply(Unknown
> Source)
> >        at scala.Option.getOrElse(Option.scala:189)
> >        at
> org.apache.spark.deploy.SparkSubmit.prepareSubmitEnvironment(SparkSubmit.scala:342)
> >        at org.apache.spark.deploy.SparkSubmit.org
> $apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:871)
> >        at
> org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
> >        at
> org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
> >        at
> org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
> >        at
> org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1007)
> >        at
> org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1016)
> >        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> >
> >
> > Has anyone successfully built and run the Spark source code on Windows?
> > Could you please share your experience?
> >
> > Thanks a lot!
> >
> > Ping
>
> >
>
>
>
>
