I'm using a Spark 1.3.0 RC3 build with Hive support.
In the Spark shell, I want to reuse the same HiveContext instance with
different warehouse locations. Below are the steps for my test (assume I
have already loaded a file into table src).
==
15/03/10 18:22:59 INFO SparkILoop: Created sql context (with
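For context, here is a minimal sketch of the kind of session I mean
(Spark 1.3.x; the paths, table layout, and data file name are made-up
assumptions):

// Hypothetical spark-shell session; `sc` is the shell's SparkContext.
import org.apache.spark.sql.hive.HiveContext

val hc = new HiveContext(sc)
hc.setConf("hive.metastore.warehouse.dir", "/tmp/warehouse_a")
hc.sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")
hc.sql("LOAD DATA LOCAL INPATH 'kv1.txt' INTO TABLE src")

// Repointing the same instance at a second warehouse afterwards does not
// appear to take effect: the metastore connection seems to be fixed once
// it has been initialized.
hc.setConf("hive.metastore.warehouse.dir", "/tmp/warehouse_b")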
FYI: https://issues.apache.org/jira/browse/INFRA-9259
I can run the benchmark on another machine with an nVidia Titan GPU and an
Intel Xeon E5-2650 v2, although it runs Windows and I have to run the Linux
tests in VirtualBox.
It would also be interesting to add results for netlib+nvblas; however, I am
not sure I understand in detail how to build this and will
Thanks for reporting. This was a result of a change to our DDL parser that
resulted in types becoming reserved words. I've filed a JIRA and will
investigate if this is something we can fix.
https://issues.apache.org/jira/browse/SPARK-6250
On Tue, Mar 10, 2015 at 1:51 PM, Nitay Joffe wrote:
Hi,
I found that if I try to read a Parquet file generated by Spark 1.1.1 using
1.3.0-rc3 with default settings, I get this error:
com.fasterxml.jackson.core.JsonParseException: Unrecognized token
'StructType': was expecting ('true', 'false' or 'null')
at [Source:
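For reference, here is a minimal sketch of the read that fails (the path is
a made-up assumption). My understanding, which may be wrong, is that 1.1.x
wrote the schema into the Parquet footer as a "StructType(...)" string
rather than as JSON, which would explain the Jackson error:

// Hypothetical reproduction in spark-shell on 1.3.0-rc3.
import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(sc)
// /tmp/from-1.1.1.parquet is an assumed path to a file written by Spark
// 1.1.1, whose footer metadata carries the legacy schema string.
val df = sqlContext.parquetFile("/tmp/from-1.1.1.parquet")
df.printSchema()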
Hi Nitay,
Can you try using backticks to quote the column name? Like
org.apache.spark.sql.hive.HiveMetastoreTypes.toDataType("struct<`int`:bigint>")?
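The same backtick quoting applies in HiveQL itself; a hypothetical example
(table and column names are made up):

// `int` is now a reserved word in the DDL parser, so quote it with
// backticks wherever it is used as an identifier.
val hc = new org.apache.spark.sql.hive.HiveContext(sc)
hc.sql("CREATE TABLE t (`int` BIGINT)")   // fails to parse without the backticks
hc.sql("SELECT `int` FROM t").collect()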
Thanks,
Yin
On Tue, Mar 10, 2015 at 2:43 PM, Michael Armbrust mich...@databricks.com
wrote:
Thanks for reporting. This was a result of a change
Hi all – building Spark on my local machine with build/mvn clean package test
runs until it hits JavaAPISuite, where it hangs indefinitely. Through some
experimentation, I've narrowed it down to the following test:
/**
* Test for SPARK-3647. This test needs to use the maven-built assembly
I am not so sure Hive supports changing the metastore after it has been
initialized; I guess not. Spark SQL relies entirely on the Hive metastore in
HiveContext, which is probably why it doesn't work as expected for Q1.
BTW, in most cases, people configure the metastore settings in
hive-site.xml, and will not