Sean, you are exactly right, as I learned when I re-read your earlier reply
more carefully -- sorry I didn't do that the first time.

Setting hadoop.version was indeed the solution:

./make-distribution.sh --tgz -Pyarn -Phadoop-2.4 -Phive -Phive-thriftserver -Dhadoop.version=2.5.0-cdh5.3.2
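
(For the archives: a quick way to double-check that the build really picked up
the right Hadoop version -- assuming the usual make-distribution.sh layout,
where the assembly jar name embeds hadoop.version -- is something like

  ls dist/lib/spark-assembly-*-hadoop2.5.0-cdh5.3.2.jar

If that jar is there, the -Dhadoop.version setting took effect.)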

Thanks for your help!
Eric


On Wed, Mar 18, 2015 at 4:19 AM, Sean Owen <so...@cloudera.com> wrote:

> I don't think this is the problem, but I think you'd also want to set
> -Dhadoop.version= to match your deployment version, if you're building
> for a particular version, just to be safe-est.
>
> I don't recall seeing that particular error before. It indicates to me
> that the SparkContext is null. Is this maybe a knock-on error from the
> SparkContext not initializing? I can see how that would then cause this
> to fail to init.
>
> On Tue, Mar 17, 2015 at 7:16 PM, Eric Friedman
> <eric.d.fried...@gmail.com> wrote:
> > Yes, I did, with these arguments: --tgz -Pyarn -Phadoop-2.4 -Phive
> > -Phive-thriftserver
> >
> > To be more specific about what is not working: when I launch spark-shell
> > --master yarn, I get the error below immediately after launch.  I can't
> > tell what is going wrong from looking at the source.
> >
> > java.lang.NullPointerException
> >   at org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:141)
> >   at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:49)
> >   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> >   at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> >   at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> >   at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
> >   at org.apache.spark.repl.SparkILoop.createSQLContext(SparkILoop.scala:1027)
> >   at $iwC$$iwC.<init>(<console>:9)
> >
> > On Tue, Mar 17, 2015 at 7:43 AM, Sean Owen <so...@cloudera.com> wrote:
> >>
> >> OK, did you build with YARN support (-Pyarn)? And the right
> >> incantation of flags like "-Phadoop-2.4
> >> -Dhadoop.version=2.5.0-cdh5.3.2" or similar?
> >>
> >> On Tue, Mar 17, 2015 at 2:39 PM, Eric Friedman
> >> <eric.d.fried...@gmail.com> wrote:
> >> > I did not find that the generic build worked.  In fact, I haven't gotten
> >> > a build from source to work either, though that one might be a case of
> >> > PEBCAK.  In the former case I got errors about the build not having
> >> > YARN support.
> >> >
> >> > On Sun, Mar 15, 2015 at 3:03 AM, Sean Owen <so...@cloudera.com> wrote:
> >> >>
> >> >> I think (I hope) it's because the generic builds "just work". Even
> >> >> though these are of course distributed mostly verbatim in CDH5, with
> >> >> tweaks to be compatible with other stuff at the edges, the stock
> >> >> builds should be fine too. Same for HDP as I understand.
> >> >>
> >> >> The CDH4 build may work on some builds of CDH4, but I think it is
> >> >> lurking there as a "Hadoop 2.0.x plus a certain YARN beta" build. I'd
> >> >> prefer to rename it that way, myself, since it doesn't actually work
> >> >> with all of CDH4 anyway.
> >> >>
> >> >> Are the MapR builds there because the stock Hadoop build doesn't work
> >> >> on MapR? That would actually surprise me, but then, why are these two
> >> >> builds distributed?
> >> >>
> >> >>
> >> >> On Sun, Mar 15, 2015 at 6:22 AM, Eric Friedman
> >> >> <eric.d.fried...@gmail.com> wrote:
> >> >> > Is there a reason why the prebuilt releases don't include current
> >> >> > CDH distros and YARN support?
> >> >> >
> >> >> > ----
> >> >> > Eric Friedman
> >> >> >
> >> >> >
> >> >
> >> >
> >
> >
>
