OK. I think I have to use None instead of null; then it works. Still switching
over from Java.
I can also just use the field name, as I assumed.
Great experience.
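For reference, the fix above (filter on None rather than null) can be sketched in plain Scala, since an RDD leftOuterJoin yields pairs whose right side is an Option. This is a minimal, hypothetical sketch: the sample data and names are made up, and plain collections stand in for RDDs so it runs without Spark.

```scala
// Minimal sketch (plain Scala, no Spark needed): same shape as
// RDD.leftOuterJoin, where the right side of each pair is an Option.
// The data below is illustrative, not from the original files.
object LeftOuterJoinSketch {
  def main(args: Array[String]): Unit = {
    // Stand-ins for the two CSV files, keyed by the first field (the ID).
    val left  = Seq("a" -> "row1", "b" -> "row2", "c" -> "row3")
    val right = Map("a" -> "rowX", "c" -> "rowY")

    // Left outer join: every left key paired with an Option of the right value.
    val joined = left.map { case (id, v) => (id, (v, right.get(id))) }

    // IDs in the first file but NOT in the second: match on None, not null.
    val onlyInLeft = joined.collect { case (id, (_, None)) => id }

    println(onlyInLeft.mkString(","))  // prints "b"
  }
}
```

In spark-shell the same pattern applies after `leftRdd.leftOuterJoin(rightRdd)`: filter or collect on the `None` case of the joined pair's second element.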
From: java8...@hotmail.com
To: user@spark.apache.org
Subject: spark left outer join with java.lang.UnsupportedOperationException: empty
Hi,
I am using Spark 1.2.0 with Hadoop 2.2. I have 2 CSV files, each with 8
fields. I know that the first field in both files is an ID. I want to find all
the IDs that exist in the first file but NOT in the second file.
I came up with the following code in spark-shell:
case class origAsLeft