Is this something that can be fixed in the Spark interpreter?

Maybe an auto-restart if "File name too long" is the result?
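
For what it's worth, the underlying cause seems to be that the Scala REPL
wraps every line in generated classes whose file names keep growing, and on
some filesystems (encrypted home directories in particular) those names
eventually exceed the name-length limit. scalac's -Xmax-classfile-name
option caps the names for compiled builds, but I don't know whether the
interpreter exposes anything like it for the REPL.

Here's a rough sketch of the auto-restart idea. The Repl trait and its
methods are hypothetical stand-ins for whatever the Spark interpreter
actually exposes, not real Zeppelin APIs:

// Rough sketch only: Repl and its methods are hypothetical stand-ins,
// not actual Zeppelin interpreter APIs.
trait Repl {
  def interpret(code: String): String  // run a paragraph, return its output
  def restart(): Unit                  // tear down and reopen the Scala REPL
}

def runWithAutoRestart(repl: Repl, code: String): String = {
  val first = repl.interpret(code)
  if (first.contains("File name too long")) {
    // The REPL is wedged at this point: every subsequent paragraph fails
    // the same way, so recycle it and retry the paragraph once.
    repl.restart()
    repl.interpret(code)
  } else {
    first
  }
}

The tricky part would be making sure the error text is stable enough to
match on across Scala versions.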

On Sun, Aug 23, 2015 at 12:47 PM, Silvio Fiorito <
silvio.fior...@granturing.com> wrote:

> I've seen this recently as well. Seems to be an issue with the Scala REPL
> after running and rerunning notebooks with a lot of code.
>
> The only solution I found was to restart the interpreter.
>
> Even Databricks cloud seems to have this issue:
> https://forums.databricks.com/questions/427/why-do-i-see-this-error-when-i-run-my-notebook-jav.html
>
>
> Thanks,
> Silvio
> ------------------------------
> From: Randy Gelhausen <rgel...@gmail.com>
> Sent: 8/22/2015 6:33 PM
> To: users@zeppelin.incubator.apache.org
> Subject: "File name too long" error in Spark paragraphs
>
> Hi All,
>
> Anyone see something similar to this:
>
> %spark
> import org.apache.spark.sql._
> import org.apache.phoenix.spark._
>
> val input = "/user/root/crimes/atlanta"
>
> val df = sqlContext.read.format("com.databricks.spark.csv")
>   .option("header", "true")
>   .option("DROPMALFORMED", "true")
>   .load(input)
> val columns = df.columns.map(x => x.toUpperCase + " varchar,\n")
> columns
>
> The result is an error:
> File name too long
>
> I tried commenting out various lines, and then ALL lines, but everything
> passed to the interpreter (even in new paragraphs) results in "File name
> too long".
>
> Am I doing something silly?
>
> Thanks,
> -Randy
>
