It looks like an error on Spark's side. Does it work normally from the Spark shell?
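
For example, you could run the same read from spark-shell against the same
Mesos master that Zeppelin uses (the master URL below is only a placeholder,
and SPARK_HOME is assumed to point at your Spark 1.6.1 install):

    # launch a shell against your Mesos master (placeholder URL)
    $SPARK_HOME/bin/spark-shell --master mesos://your-mesos-master:5050

    // inside the shell, run the same read that fails in the notebook
    val rdd = sc.textFile("s3://filepath")
    rdd.count()

If the same "Failed to create local dir" error shows up there, the problem is
in the Spark/Mesos setup rather than in Zeppelin itself.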

On Thu, Jul 28, 2016 at 7:23 AM, Michael Sells <mjse...@gmail.com> wrote:

> I'm trying to get Zeppelin running on Mesos and I'm consistently hitting
> the following error when I try to create a DataFrame/RDD from a file.
>
> java.io.IOException: Failed to create local dir in /tmp/blockmgr-82f31798-dd17-4907-a039-d1c90bf12a80/0e.
> at org.apache.spark.storage.DiskBlockManager.getFile(DiskBlockManager.scala:73)
> at org.apache.spark.storage.DiskStore.contains(DiskStore.scala:161)
> at org.apache.spark.storage.BlockManager.org$apache$spark$storage$BlockManager$$getCurrentBlockStatus(BlockManager.scala:391)
> at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:817)
> at org.apache.spark.storage.BlockManager.putIterator(BlockManager.scala:645)
> at org.apache.spark.storage.BlockManager.putSingle(BlockManager.scala:1003)
>
> I'm running Mesos 0.28, Zeppelin 0.6.0, and Spark 1.6.1. This seems to happen
> whenever I try to read data from any source. It errors out just trying to
> create a DataFrame or RDD, e.g.:
>
> sc.textFile("s3://filepath")
>
> Any pointers on what might be off here? I've tried changing the temp dir
> around and opening permissions. Everything I see indicates it should be
> able to write there. Any help would be appreciated.
>
> Thanks,
> Mike
>
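
On the temp dir you mentioned: the setting that controls where those
blockmgr-* directories are created is spark.local.dir (or the
SPARK_LOCAL_DIRS environment variable, which takes precedence when set).
A minimal sketch, assuming the Spark that Zeppelin uses reads
conf/spark-defaults.conf and /data/spark/tmp is just an example path:

    # conf/spark-defaults.conf
    spark.local.dir    /data/spark/tmp

    # the directory must exist and be writable by the user running the
    # Zeppelin interpreter and the Spark executors
    mkdir -p /data/spark/tmp
    chmod 1777 /data/spark/tmp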



-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net
