fwiw I've been using `zinc -scala-home $SCALA_HOME -nailed -start` which:
- starts a nailgun server as well,
- uses my installed scala 2.{10,11}, as opposed to zinc's default 2.9.2
https://github.com/typesafehub/zinc#scala: If no options are passed to
locate a version of Scala, then Scala 2.9.2 is used by default.
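For reference, the invocation described above, plus the companion commands for checking on and stopping the server (assuming `zinc` is on your PATH and `$SCALA_HOME` points at your 2.10/2.11 install):

```shell
# Start a long-running, nailgun-backed zinc server that compiles with the
# locally installed Scala instead of zinc's bundled 2.9.2.
zinc -scala-home $SCALA_HOME -nailed -start

# Later: check whether the server is up, or shut it down.
zinc -status
zinc -shutdown
```

This is a command fragment, not a script; it requires a local zinc install.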
@Patrick and Josh, actually we went even further than that. We simply
disable the UI for most tests, and these used to be the single largest
source of port conflicts.
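Disabling the UI in tests is a one-line setting; a minimal sketch, assuming the standard `spark.ui.enabled` configuration property (the master/app name here are illustrative):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Sketch: build a test SparkContext with the web UI disabled, so parallel
// test suites do not race for the default UI port (4040).
val conf = new SparkConf()
  .setMaster("local[2]")
  .setAppName("my-test-suite")
  .set("spark.ui.enabled", "false")
val sc = new SparkContext(conf)
```

Requires Spark on the classpath, so it is shown as a sketch rather than a standalone program.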
One thing I created a JIRA for a while back was to have a script,
similar to sbt/sbt, that transparently downloads Zinc, Scala, and
Maven into a subdirectory of Spark and sets them up correctly. I.e.
build/mvn.
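A rough sketch of what such a bootstrap script could look like. The versions, paths, and download URL below are placeholders for illustration, not the actual build/mvn implementation:

```shell
#!/usr/bin/env bash
# Hypothetical bootstrap: fetch Zinc and Maven into build/ on first use,
# then delegate to the downloaded Maven. Versions and mirror URL are
# made up for this sketch.
BUILD_DIR="$(cd "$(dirname "$0")" && pwd)"
ZINC_VERSION=0.3.5.3
MVN_VERSION=3.2.3

if [ ! -d "$BUILD_DIR/zinc-$ZINC_VERSION" ]; then
  curl -L "https://example.com/zinc-$ZINC_VERSION.tgz" | tar xz -C "$BUILD_DIR"
fi
if [ ! -d "$BUILD_DIR/apache-maven-$MVN_VERSION" ]; then
  curl -L "https://example.com/apache-maven-$MVN_VERSION.tgz" | tar xz -C "$BUILD_DIR"
fi

# Best-effort: start a zinc server if one is not already running.
"$BUILD_DIR/zinc-$ZINC_VERSION/bin/zinc" -start >/dev/null 2>&1 || true
exec "$BUILD_DIR/apache-maven-$MVN_VERSION/bin/mvn" "$@"
```

The point of the design is that a fresh checkout needs nothing preinstalled beyond bash and curl.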
Outside of brew for MacOS there aren't good Zinc packages, and it's a
pain to figure out how
The command runs fine for me on master. Note that Hive does print an
exception in the logs, but that exception does not propagate to user code.
On Thu, Dec 4, 2014 at 11:31 PM, Jianshi Huang jianshi.hu...@gmail.com
wrote:
Hi,
I got an exception saying Hive: NoSuchObjectException(message:table
And that is no different from how Hive has worked for a long time.
On Fri, Dec 5, 2014 at 11:42 AM, Michael Armbrust mich...@databricks.com
wrote:
The command runs fine for me on master. Note that Hive does print an
exception in the logs, but that exception does not propagate to user code.
I am having trouble getting create table as select or saveAsTable from a
hiveContext to work with temp tables in spark 1.2. No issues in 1.1.0 or
1.1.1
Simple modification to test case in the hive SQLQuerySuite.scala:
test("double nested data") {
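The snippet above is cut off in the archive; a hedged sketch of the kind of reproduction being described, using the 1.2-era HiveContext API (the table names and JSON data here are illustrative, not the suite's actual contents):

```scala
// Illustrative only: register a temp table from JSON, then run
// CREATE TABLE ... AS SELECT against it. "nested" and "test_ctas" are
// made-up names for this sketch.
import org.apache.spark.sql.hive.test.TestHive._

val jsonStrings = sparkContext.parallelize(
  """{"a": {"b": {"c": 1}}}""" :: Nil)
jsonRDD(jsonStrings).registerTempTable("nested")

// Reportedly fine in 1.1.x; in 1.2 the CTAS fails to see the temp table.
sql("CREATE TABLE test_ctas AS SELECT * FROM nested")
sql("SELECT * FROM test_ctas").collect()
```

Requires a Hive-enabled Spark 1.2 build, so it is shown as a sketch rather than a runnable program.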
Thanks for reporting. This looks like a regression related to:
https://github.com/apache/spark/pull/2570
I've filed it here: https://issues.apache.org/jira/browse/SPARK-4769
On Fri, Dec 5, 2014 at 12:03 PM, kb kend...@hotmail.com wrote:
I am having trouble getting create table as select or
Hi devs,
I have been playing with your amazing Spark here in Prague for some time. I have
stumbled on a thing which I would like to ask about. I create assembly jars from
source and then use them to run simple jobs on our 2.3.0-cdh5.1.3 cluster
using yarn. An example of my usage is at [1]. Formerly I had started to use
When building against Hadoop 2.x, you need to enable the appropriate
profile, aside from just specifying the version. e.g. -Phadoop-2.3
for Hadoop 2.3.
On Fri, Dec 5, 2014 at 12:51 PM, spark.dubovsky.ja...@seznam.cz wrote:
Hi devs,
I have been playing with your amazing Spark here in Prague for some
As Marcelo said, CDH5.3 is based on hadoop 2.3, so please try
./make-distribution.sh -Pyarn -Phive -Phadoop-2.3
-Dhadoop.version=2.3.0-cdh5.1.3 -DskipTests
See the detail of how to change the profile at
https://spark.apache.org/docs/latest/building-with-maven.html
Sincerely,
DB Tsai
(Nit: CDH *5.1.x*, including 5.1.3, is derived from Hadoop 2.3.x. 5.3
is based on 2.5.x)
On Fri, Dec 5, 2014 at 3:29 PM, DB Tsai dbt...@dbtsai.com wrote:
As Marcelo said, CDH5.3 is based on hadoop 2.3, so please try
Oh, I meant to say that cdh5.1.3, used by Jakub's company, is based on 2.3. You
can see it from the first part of Cloudera's version number - 2.3.0-cdh5.1.3.
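The convention is mechanical enough to check in code: everything before the `-cdh` marker is the upstream Hadoop release, everything after it is the CDH release. A tiny illustrative helper (not part of Spark):

```scala
// Illustrative helper for Cloudera version strings such as "2.3.0-cdh5.1.3":
// the part before "-cdh" is the upstream Hadoop version, the part after it
// is the CDH version.
object CdhVersion {
  def split(v: String): (String, String) = {
    val Array(hadoop, cdh) = v.split("-cdh", 2)
    (hadoop, cdh)
  }

  def main(args: Array[String]): Unit = {
    println(CdhVersion.split("2.3.0-cdh5.1.3")) // (2.3.0,5.1.3)
  }
}
```

So for Jakub's cluster, the profile to enable is the one matching the first component, i.e. -Phadoop-2.3.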
Sincerely,
DB Tsai
---
My Blog: https://www.dbtsai.com
LinkedIn:
Hi everyone,
Have a newbie question on using IntelliJ to build and debug.
I followed this wiki to setup IntelliJ:
https://cwiki.apache.org/confluence/display/SPARK/Useful+Developer+Tools#UsefulDeveloperTools-BuildingSparkinIntelliJIDEA
Afterward I tried to build via the toolbar (Build > Rebuild
Hey All,
Thanks all for the continued testing!
The issue I mentioned earlier, SPARK-4498, was fixed earlier this week
(hat tip to Mark Hamstra, who contributed the fix).
In the interim a few smaller blocker-level issues with Spark SQL were
found and fixed (SPARK-4753, SPARK-4552, SPARK-4761).
If you go to “File > Project Structure” and click on “Project” under the
“Project settings” heading, do you see an entry for “Project SDK”? If not, you
should click “New…” and configure a JDK; by default, I think IntelliJ should
figure out a correct path to your system JDK, so you should just