I've tried running your code through spark-shell on both 1.3.0 (pre-built for
Hadoop 2.4 and above) and a recently built snapshot of master. Both work
fine. I'm running on OS X Yosemite. What's your configuration?
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/
I am running the following code on Spark 1.3.0. It is from
https://spark.apache.org/docs/1.3.0/ml-guide.html
On running val model1 = lr.fit(training.toDF) I get
java.lang.UnsupportedOperationException: empty collection
What could be the reason?
import org.apache.spark.{SparkConf, SparkContext}
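For comparison, here is a minimal sketch of the 1.3.0 ml-guide example as it runs for me in spark-shell (where sc and sqlContext already exist). The literal training rows are placeholders; the point is that "empty collection" is the error you get when fit() is handed a training set with zero rows, so it's worth checking that your data actually loaded:

```scala
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint

// In spark-shell, sc and sqlContext are pre-defined.
import sqlContext.implicits._

// A non-empty training RDD; an empty one makes fit() throw
// java.lang.UnsupportedOperationException: empty collection.
val training = sc.parallelize(Seq(
  LabeledPoint(1.0, Vectors.dense(0.0, 1.1, 0.1)),
  LabeledPoint(0.0, Vectors.dense(2.0, 1.0, -1.0)),
  LabeledPoint(0.0, Vectors.dense(2.0, 1.3, 1.0)),
  LabeledPoint(1.0, Vectors.dense(0.0, 1.2, -0.5))))

val lr = new LogisticRegression()
lr.setMaxIter(10)
lr.setRegParam(0.01)

val model1 = lr.fit(training.toDF)
```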
OK, I think I have to use None instead of null; then it works. I'm still
switching over from Java.
I can also just use the field name, as I assumed.
Great experience.
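For anyone else coming from Java, a small sketch of the None-instead-of-null point in plain Scala (the Person type and its fields are made up for illustration):

```scala
// Model a possibly-missing field as Option[T] and pass None, not null.
case class Person(id: Int, nickname: Option[String])

val withNick = Person(1, Some("Al"))
val noNick   = Person(2, None)   // not Person(2, null)

// Fields are accessed by name, and Option handles the missing case safely.
val label = noNick.nickname.getOrElse("(none)")
```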
From: java8...@hotmail.com
To: user@spark.apache.org
Subject: spark left outer join with java.lang.UnsupportedOperationException
Hi,
I am using Spark 1.2.0 with Hadoop 2.2. I have 2 CSV files, each with 8
fields. I know that the first field in both files is an ID. I want to find all
the IDs that exist in the first file but NOT in the 2nd file.
I came up with the following code in spark-shell.
case class origAsLeft
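One way to get that result without a case class at all is a keyed leftOuterJoin followed by a filter on the missing right side. This is only a sketch for spark-shell; the file names and the assumption that fields are comma-separated are placeholders for the poster's actual data:

```scala
// Key each file's rows by the first field (the ID).
val left  = sc.textFile("file1.csv").map(_.split(",")).map(f => (f(0), f))
val right = sc.textFile("file2.csv").map(_.split(",")).map(f => (f(0), f))

// leftOuterJoin keeps every key from the left RDD; the right side comes
// back as an Option, and None marks IDs with no match in the 2nd file.
val onlyInFirst = left.leftOuterJoin(right)
  .filter { case (_, (_, rightRow)) => rightRow.isEmpty }
  .keys

onlyInFirst.saveAsTextFile("ids_only_in_file1")
```

If the full rows aren't needed, joining on (id, null-like placeholder) pairs can be replaced by the simpler left.keys.subtract(right.keys), which computes the same set of IDs directly.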