Re: unsafe/compile error

2015-06-22 Thread Andrea Jemmett
Thank you, that worked!

*Andrea Jemmett*

2015-06-21 21:10 GMT+02:00 Reynold Xin r...@databricks.com:

 Put them in quotes, e.g.

 sbt/sbt "mllib/testOnly *NaiveBayesSuite"

 On Sun, Jun 21, 2015 at 11:15 AM, acidghost andreajemm...@gmail.com
 wrote:

 Something like mllib/testOnly NaiveBayesSuite is what I need!

 But it's not working; it runs all the mllib suites.









Re: Hive 0.12 support in 1.4.0 ?

2015-06-22 Thread Yin Huai
Hi Tom,

In Spark 1.4, we have decoupled support for Hive's metastore from the other
parts (the parser, Hive UDFs, and Hive SerDes). The execution engine of Spark
SQL in 1.4 always uses Hive 0.13.1. For the metastore connection, you can
connect to either a Hive 0.12 or a Hive 0.13.1 metastore. We have removed the
old shims and the profiles for specifying the Hive version (since the execution
engine always uses Hive 0.13.1, and the metastore client can be configured to
use either a Hive 0.12 or a Hive 0.13.1 metastore).

You can take a look at
https://spark.apache.org/docs/latest/sql-programming-guide.html#interacting-with-different-versions-of-hive-metastore
for connecting to Hive 0.12's metastore.
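
For reference, a minimal Scala sketch of setting the two properties described in
that guide (the application name and the use of "maven" for the metastore jars
are illustrative assumptions, not from this thread):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.hive.HiveContext

    // Keep the built-in Hive 0.13.1 execution classes, but point the
    // metastore client at a Hive 0.12 metastore.
    val conf = new SparkConf()
      .setAppName("hive-0.12-metastore") // placeholder name
      .set("spark.sql.hive.metastore.version", "0.12.0")
      // "maven" downloads matching Hive jars at runtime; alternatively pass a
      // classpath that already contains Hive 0.12 and its Hadoop dependencies.
      .set("spark.sql.hive.metastore.jars", "maven")

    val sc = new SparkContext(conf)
    val sqlContext = new HiveContext(sc)
    sqlContext.sql("SHOW TABLES").show()

The same two properties can also be set in spark-defaults.conf instead of in code.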

Let me know if you have any questions.

Thanks,

Yin

On Wed, Jun 17, 2015 at 4:18 PM, Thomas Dudziak tom...@gmail.com wrote:

 So I'm a little confused: has Hive 0.12 support disappeared in 1.4.0? The
 release notes didn't mention anything, but the documentation no longer lists a
 way to build for 0.12 (
 http://spark.apache.org/docs/latest/building-spark.html#building-with-hive-and-jdbc-support,
 in fact it doesn't list anything other than 0.13), and I don't see any
 Maven profiles or code for 0.12.

 Tom




Force Spark to save Parquet files with a replication factor other than 3 (the default)

2015-06-22 Thread Ulanov, Alexander
Hi,

My Hadoop is configured with a replication factor of 2. I've added
$HADOOP_HOME/config to the PATH as suggested in
http://apache-spark-user-list.1001560.n3.nabble.com/hdfs-replication-on-saving-RDD-td289.html.
Spark 1.4 writes rdd.saveAsTextFile output with replication = 2, but
DataFrame.saveAsParquet still writes with replication = 3. How can I force a
Spark DataFrame to save Parquet files with a replication factor other than 3
(the default)?

Best regards, Alexander
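
One workaround that might be worth trying (a sketch only, not verified against
Spark 1.4's Parquet write path; the input and output paths are placeholders):
set dfs.replication on the Hadoop configuration that Spark passes to its write
tasks before saving.

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    val sc = new SparkContext(new SparkConf().setAppName("parquet-replication"))
    // Ask the HDFS client used by the writing tasks to create files with
    // replication factor 2 instead of the cluster default.
    sc.hadoopConfiguration.set("dfs.replication", "2")

    val sqlContext = new SQLContext(sc)
    val df = sqlContext.read.json("hdfs:///data/input.json")  // placeholder input
    df.write.parquet("hdfs:///data/output.parquet")           // placeholder output

If the Parquet writer ignores the job-level setting, the replication of the
written files can still be adjusted afterwards with hdfs dfs -setrep, or the
default can be changed cluster-wide in hdfs-site.xml.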