Error in Spark SQL

2019-02-28 Thread yuvraj singh
Hi,

I am running Spark as a service. When we change some SQL schema, we are
facing some problems.

ERROR [http-nio-8090-exec-18] (Logging.scala:70) - SparkListenerBus has
already stopped! Dropping event
SparkListenerSQLExecutionEnd(2248,1551362214090)

org.apache.spark.sql.catalyst.errors.package$TreeNodeException: execute, tree:
Exchange hashpartitioning(id#1131783L, 200)
+- *Project [id#1131783L, ref_id#1131871]
   +- *Scan JDBCRelation(crn_tracker FORCE INDEX (history_data)) [numPartitions=4]
      [id#1131783L,ref_id#1131871] PushedFilters: [*IsNotNull(tenant),
      *EqualTo(tenant,ola_share)], ReadSchema: struct

Does Spark cache the schema of a table?
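
(If so, a minimal sketch of refreshing the cached metadata for a catalog-registered
table, which may not apply to a plain JDBC DataFrame; the table name is only
illustrative:)

  // Invalidate cached metadata/data Spark may hold for a catalog table
  spark.catalog.refreshTable("crn_tracker")
  // or, equivalently, via SQL
  spark.sql("REFRESH TABLE crn_tracker")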

Thanks
Yubraj Singh





Re: Spark on k8s - map persistentStorage for data spilling

2019-02-28 Thread Matt Cheah
I think we want to change the value of spark.local.dir to point to where your 
PVC is mounted. Can you give that a try and let us know if that moves the 
spills as expected?
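
For example, something like this (a minimal sketch; the API server address, image
name, main class, mount path and jar location are placeholders for your setup):

  bin/spark-submit \
    --master k8s://https://<k8s-apiserver-host>:<port> \
    --deploy-mode cluster \
    --class <your-main-class> \
    --conf spark.kubernetes.container.image=<your-spark-image> \
    --conf spark.local.dir=/your-pvc-mount-path/spark-local \
    local:///path/to/your-app.jar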

 

-Matt Cheah

 

From: Tomasz Krol 
Date: Wednesday, February 27, 2019 at 3:41 AM
To: "user@spark.apache.org" 
Subject: Spark on k8s - map persistentStorage for data spilling

 

Hey Guys,

 

I hope someone will be able to help me, as I've been stuck on this for a while :)
Basically I am running some jobs on Kubernetes as per the documentation:

 

https://spark.apache.org/docs/latest/running-on-kubernetes.html

 

All works fine; however, if I run queries on a bigger data volume, the jobs
fail because there is not enough space in the /var/data/spark-1xxx directory.

 

Obviously the reason for this is that the mounted emptyDir doesn't have enough space.

 

I also mounted a PVC to the driver and executor pods, which I can see during the
runtime. I am wondering if someone knows how to configure Spark so that data is
spilled to a different directory (i.e. my persistent storage directory) instead of
the emptyDir with its limited space. Or whether I can mount the emptyDir somehow on
my PVC. Basically at the moment I can't run any jobs as they are failing due to
insufficient space in that /var/data directory.

 

Thanks

-- 

Tomasz Krol
patric...@gmail.com




Re: to_avro and from_avro not working with struct type in spark 2.4

2019-02-28 Thread Hien Luu
Thanks for the answer.

As far as the next step goes, I am thinking of writing out the dfKV
dataframe to disk and then using the Avro APIs to read the data.

This smells like a bug somewhere.

Cheers,

Hien

On Thu, Feb 28, 2019 at 4:02 AM Gabor Somogyi 
wrote:

> No, just take a look at the schema of dfStruct since you've converted its
> value column with to_avro:
>
> scala> dfStruct.printSchema
> root
>  |-- id: integer (nullable = false)
>  |-- name: string (nullable = true)
>  |-- age: integer (nullable = false)
>  |-- value: struct (nullable = false)
>  ||-- name: string (nullable = true)
>  ||-- age: integer (nullable = false)
>
>
> On Wed, Feb 27, 2019 at 6:51 PM Hien Luu  wrote:
>
>> Thanks for looking into this.  Does this mean string fields should always
>> be nullable?
>>
>> You are right that the result is not yet correct and further digging is
>> needed :(
>>
>> On Wed, Feb 27, 2019 at 1:19 AM Gabor Somogyi 
>> wrote:
>>
>>> Hi,
>>>
>>> I was dealing with avro stuff lately and most of the time it has
>>> something to do with the schema.
>>> One thing I've pinpointed quickly (where I was struggling also) is that the
>>> name field should be nullable, but the result is not yet correct, so further
>>> digging is needed...
>>>
>>> scala> val expectedSchema = StructType(Seq(StructField("name",
>>> StringType,true),StructField("age", IntegerType, false)))
>>> expectedSchema: org.apache.spark.sql.types.StructType =
>>> StructType(StructField(name,StringType,true),
>>> StructField(age,IntegerType,false))
>>>
>>> scala> val avroTypeStruct =
>>> SchemaConverters.toAvroType(expectedSchema).toString
>>> avroTypeStruct: String =
>>> {"type":"record","name":"topLevelRecord","fields":[{"name":"name","type":["string","null"]},{"name":"age","type":"int"}]}
>>>
>>> scala> dfKV.select(from_avro('value, avroTypeStruct)).show
>>> +-+
>>> |from_avro(value, struct)|
>>> +-+
>>> |  [Mary Jane, 25]|
>>> |  [Mary Jane, 25]|
>>> +-+
>>>
>>> BR,
>>> G
>>>
>>>
>>> On Wed, Feb 27, 2019 at 7:43 AM Hien Luu  wrote:
>>>
 Hi,

 I ran into a pretty weird issue with to_avro and from_avro where it was
 not
 able to parse the data in a struct correctly.  Please see the simple and
 self contained example below. I am using Spark 2.4.  I am not sure if I
 missed something.

 This is how I start the spark-shell on my Mac:

 ./bin/spark-shell --packages org.apache.spark:spark-avro_2.11:2.4.0

 import org.apache.spark.sql.types._
 import org.apache.spark.sql.avro._
 import org.apache.spark.sql.functions._


 spark.version

 val df = Seq((1, "John Doe",  30), (2, "Mary Jane", 25)).toDF("id",
 "name",
 "age")

 val dfStruct = df.withColumn("value", struct("name","age"))

 dfStruct.show
 dfStruct.printSchema

 val dfKV = dfStruct.select(to_avro('id).as("key"),
 to_avro('value).as("value"))

 val expectedSchema = StructType(Seq(StructField("name", StringType,
 false),StructField("age", IntegerType, false)))

 val avroTypeStruct =
 SchemaConverters.toAvroType(expectedSchema).toString

 val avroTypeStr = s"""
   |{
   |  "type": "int",
   |  "name": "key"
   |}
 """.stripMargin


 dfKV.select(from_avro('key, avroTypeStr)).show

 // output
 +---+
 |from_avro(key, int)|
 +---+
 |  1|
 |  2|
 +---+

 dfKV.select(from_avro('value, avroTypeStruct)).show

 // output
 +-+
 |from_avro(value, struct)|
 +-+
 |[, 9]|
 |[, 9]|
 +-+

 Please help and thanks in advance.




 --
 Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/

 -
 To unsubscribe e-mail: user-unsubscr...@spark.apache.org


>>
>> --
>> Regards,
>>
>

-- 
Regards,


Opportunity to speed up toLocalIterator?

2019-02-28 Thread Erik van Oosten

Hi,

This might be an opportunity to give a huge speed bump to toLocalIterator.

Method toLocalIterator fetches the partitions to the driver one by one.
This is great. What is not so great is that any required computation
for the yet-to-be-fetched partitions is not kicked off until each one is
fetched. Effectively only one partition is being computed at a time,
leaving resources idle and increasing wait time.


Is this observation correct?

Is it possible to have concurrent computation on all partitions while
retaining the download-a-partition-at-a-time behavior?
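
For context, a minimal sketch of the obvious workaround (placeholder work
function; it assumes the data fits in the cluster cache): force all partitions
to be computed up front, at the cost of caching everything.

  import org.apache.spark.sql.SparkSession

  val spark = SparkSession.builder.appName("toLocalIteratorSketch").getOrCreate()

  def expensiveTransform(i: Int): Int = { Thread.sleep(1); i * 2 }  // placeholder work

  val rdd = spark.sparkContext.parallelize(1 to 1000000, 100).map(expensiveTransform)
  rdd.persist()                         // keep computed partitions around
  rdd.count()                           // action that computes all partitions in parallel
  rdd.toLocalIterator.foreach(_ => ())  // now only streams already-cached partitions, one at a time
  rdd.unpersist()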


Kind regards,
    Erik.

--
Erik van Oosten
http://www.day-to-day-stuff.blogspot.com/


-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org



Re: to_avro and from_avro not working with struct type in spark 2.4

2019-02-28 Thread Gabor Somogyi
No, just take a look at the schema of dfStruct since you've converted its
value column with to_avro:

scala> dfStruct.printSchema
root
 |-- id: integer (nullable = false)
 |-- name: string (nullable = true)
 |-- age: integer (nullable = false)
 |-- value: struct (nullable = false)
 ||-- name: string (nullable = true)
 ||-- age: integer (nullable = false)
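
For completeness, here is a minimal sketch that derives the Avro schema from the
actual value column instead of a hand-written expectedSchema, so the nullability
always lines up (it reuses dfStruct/dfKV from your mail; valueType and
avroTypeFromDF are just illustrative names):

scala> import org.apache.spark.sql.avro._
scala> val valueType = dfStruct.schema("value").dataType   // the struct's real schema, nullability included
scala> val avroTypeFromDF = SchemaConverters.toAvroType(valueType).toString
scala> dfKV.select(from_avro('value, avroTypeFromDF)).show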


On Wed, Feb 27, 2019 at 6:51 PM Hien Luu  wrote:

> Thanks for looking into this.  Does this mean string fields should always
> be nullable?
>
> You are right that the result is not yet correct and further digging is
> needed :(
>
> On Wed, Feb 27, 2019 at 1:19 AM Gabor Somogyi 
> wrote:
>
>> Hi,
>>
>> I was dealing with avro stuff lately and most of the time it has
>> something to do with the schema.
>> One thing I've pinpointed quickly (where I was struggling also) is that the
>> name field should be nullable, but the result is not yet correct, so further
>> digging is needed...
>>
>> scala> val expectedSchema = StructType(Seq(StructField("name",
>> StringType,true),StructField("age", IntegerType, false)))
>> expectedSchema: org.apache.spark.sql.types.StructType =
>> StructType(StructField(name,StringType,true),
>> StructField(age,IntegerType,false))
>>
>> scala> val avroTypeStruct =
>> SchemaConverters.toAvroType(expectedSchema).toString
>> avroTypeStruct: String =
>> {"type":"record","name":"topLevelRecord","fields":[{"name":"name","type":["string","null"]},{"name":"age","type":"int"}]}
>>
>> scala> dfKV.select(from_avro('value, avroTypeStruct)).show
>> +-+
>> |from_avro(value, struct)|
>> +-+
>> |  [Mary Jane, 25]|
>> |  [Mary Jane, 25]|
>> +-+
>>
>> BR,
>> G
>>
>>
>> On Wed, Feb 27, 2019 at 7:43 AM Hien Luu  wrote:
>>
>>> Hi,
>>>
>>> I ran into a pretty weird issue with to_avro and from_avro where it was
>>> not
>>> able to parse the data in a struct correctly.  Please see the simple and
>>> self contained example below. I am using Spark 2.4.  I am not sure if I
>>> missed something.
>>>
>>> This is how I start the spark-shell on my Mac:
>>>
>>> ./bin/spark-shell --packages org.apache.spark:spark-avro_2.11:2.4.0
>>>
>>> import org.apache.spark.sql.types._
>>> import org.apache.spark.sql.avro._
>>> import org.apache.spark.sql.functions._
>>>
>>>
>>> spark.version
>>>
>>> val df = Seq((1, "John Doe",  30), (2, "Mary Jane", 25)).toDF("id",
>>> "name",
>>> "age")
>>>
>>> val dfStruct = df.withColumn("value", struct("name","age"))
>>>
>>> dfStruct.show
>>> dfStruct.printSchema
>>>
>>> val dfKV = dfStruct.select(to_avro('id).as("key"),
>>> to_avro('value).as("value"))
>>>
>>> val expectedSchema = StructType(Seq(StructField("name", StringType,
>>> false),StructField("age", IntegerType, false)))
>>>
>>> val avroTypeStruct = SchemaConverters.toAvroType(expectedSchema).toString
>>>
>>> val avroTypeStr = s"""
>>>   |{
>>>   |  "type": "int",
>>>   |  "name": "key"
>>>   |}
>>> """.stripMargin
>>>
>>>
>>> dfKV.select(from_avro('key, avroTypeStr)).show
>>>
>>> // output
>>> +---+
>>> |from_avro(key, int)|
>>> +---+
>>> |  1|
>>> |  2|
>>> +---+
>>>
>>> dfKV.select(from_avro('value, avroTypeStruct)).show
>>>
>>> // output
>>> +-+
>>> |from_avro(value, struct)|
>>> +-+
>>> |[, 9]|
>>> |[, 9]|
>>> +-+
>>>
>>> Please help and thanks in advance.
>>>
>>>
>>>
>>>
>>> --
>>> Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
>>>
>>> -
>>> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>>>
>>>
>
> --
> Regards,
>


Re: Spark 2.4.0 Master going down

2019-02-28 Thread lokeshkumar
Hi Akshay

Thanks for the response; please find the answers to your questions below.

1. We are running Spark in cluster mode, with Spark's standalone cluster
manager.
2. All the ports are open; we preconfigure which ports the communication
should happen on and modify firewall rules to allow traffic on these ports.
(The functionality is fine until the Spark master goes down after 60
mins.)
3. Memory consumptions of all the components:

Spark Master:
  S0 S1 E  O  M CCSYGC YGCTFGCFGCT
GCT   
  0.00   0.00  12.91  35.11  97.08  95.80  50.239 20.197   
0.436
Spark Worker:
  S0 S1 E  O  M CCSYGC YGCTFGCFGCT
GCT   
 51.64   0.00  46.66  27.44  97.57  95.85 100.381 20.233   
0.613
Spark Submit Process (Driver):
  S0 S1 E  O  M CCSYGC YGCTFGCFGCT
GCT   
  0.00  63.57  93.82  26.29  98.24  97.53   4663  124.648   109   20.910 
145.558
Spark executor (Coarse grained):
  S0 S1 E  O  M CCSYGC YGCTFGCFGCT
GCT   
  0.00  69.77  17.74  31.13  95.67  90.44   7353  556.888 51.572 
558.460



--
Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/

-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org



Re: Spark 2.4.0 Master going down

2019-02-28 Thread Lokesh Kumar Padhnavis
Hi Akshay

Thanks for the response; please find the answers to your questions below.

1. We are running Spark in cluster mode, with Spark's standalone cluster
manager.
2. All the ports are open; we preconfigure which ports the communication
should happen on and modify firewall rules to allow traffic on these ports.
(The functionality is fine until the Spark master goes down after
60 mins.)
3. Memory consumptions of all the components:

Spark Master:
  S0 S1 E  O  M CCSYGC YGCTFGCFGCT
 GCT
  0.00   0.00  12.91  35.11  97.08  95.80  50.239 20.197
0.436
Spark Worker:
  S0 S1 E  O  M CCSYGC YGCTFGCFGCT
 GCT
 51.64   0.00  46.66  27.44  97.57  95.85 100.381 20.233
0.613
Spark Submit Process (Driver):
  S0 S1 E  O  M CCSYGC YGCTFGCFGCT
 GCT
  0.00  63.57  93.82  26.29  98.24  97.53   4663  124.648   109   20.910
145.558
Spark executor (Coarse grained):
  S0 S1 E  O  M CCSYGC YGCTFGCFGCT
 GCT
  0.00  69.77  17.74  31.13  95.67  90.44   7353  556.888 51.572
558.460



On Thu, Feb 28, 2019 at 3:13 PM Akshay Bhardwaj <
akshay.bhardwaj1...@gmail.com> wrote:

> Hi Lokesh,
>
> Please provide further information to help identify the issue.
>
> 1) Are you running in standalone mode or cluster mode? If cluster, is it a
> Spark master/slave setup or YARN/Mesos?
> 2) Have you tried checking if all ports between your master and the
> machine with IP 192.168.43.167 are accessible?
> 3) Have you checked the memory consumption of the executors/driver running
> in the cluster?
>
>
> Akshay Bhardwaj
> +91-97111-33849
>
>
> On Wed, Feb 27, 2019 at 8:27 PM lokeshkumar  wrote:
>
>> Hi All
>>
>> We are running Spark version 2.4.0 and we run a few Spark Streaming jobs
>> listening on Kafka topics. We receive an average of 10-20 msgs per
>> second, and the Spark master has been going down after 1-2 hours of
>> running. The exception is given below; along with it, the Spark
>> executors also get killed.
>>
>> This was not happening with Spark 2.1.1; it started happening with Spark
>> 2.4.0. Any help/suggestion is appreciated.
>>
>> The exception that we see is
>>
>> Exception in thread "main" java.lang.reflect.UndeclaredThrowableException
>> at
>>
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1713)
>> at
>>
>> org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:64)
>> at
>>
>> org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:188)
>> at
>>
>> org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:281)
>> at
>>
>> org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala)
>> Caused by: org.apache.spark.rpc.RpcTimeoutException: Cannot receive any
>> reply from 192.168.43.167:40007 in 120 seconds. This timeout is
>> controlled
>> by spark.rpc.askTimeout
>> at
>> org.apache.spark.rpc.RpcTimeout.org
>> $apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcTimeout.scala:47)
>> at
>>
>> org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:62)
>> at
>>
>> org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:58)
>> at
>>
>> scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
>> at scala.util.Failure$$anonfun$recover$1.apply(Try.scala:216)
>> at scala.util.Try$.apply(Try.scala:192)
>> at scala.util.Failure.recover(Try.scala:216)
>> at
>> scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:326)
>> at
>> scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:326)
>> at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>> at
>>
>> org.spark_project.guava.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:293)
>> at
>>
>> scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136)
>> at
>> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
>> at
>>
>> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
>> at scala.concurrent.Promise$class.complete(Promise.scala:55)
>> at
>> scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
>> at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:237)
>> at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:237)
>> at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>> at
>>
>> scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:63)
>> at
>>
>> 

Re: Spark 2.4.0 Master going down

2019-02-28 Thread Akshay Bhardwaj
Hi Lokesh,

Please provide further information to help identify the issue.

1) Are you running in standalone mode or cluster mode? If cluster, is it a
Spark master/slave setup or YARN/Mesos?
2) Have you tried checking if all ports between your master and the machine
with IP 192.168.43.167 are accessible?
3) Have you checked the memory consumption of the executors/driver running
in the cluster?


Akshay Bhardwaj
+91-97111-33849


On Wed, Feb 27, 2019 at 8:27 PM lokeshkumar  wrote:

> Hi All
>
> We are running Spark version 2.4.0 and we run a few Spark Streaming jobs
> listening on Kafka topics. We receive an average of 10-20 msgs per second,
> and the Spark master has been going down after 1-2 hours of running. The
> exception is given below; along with it, the Spark executors also get
> killed.
>
> This was not happening with Spark 2.1.1; it started happening with Spark
> 2.4.0. Any help/suggestion is appreciated.
>
> The exception that we see is
>
> Exception in thread "main" java.lang.reflect.UndeclaredThrowableException
> at
>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1713)
> at
>
> org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:64)
> at
>
> org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:188)
> at
>
> org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:281)
> at
>
> org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala)
> Caused by: org.apache.spark.rpc.RpcTimeoutException: Cannot receive any
> reply from 192.168.43.167:40007 in 120 seconds. This timeout is controlled
> by spark.rpc.askTimeout
> at
> org.apache.spark.rpc.RpcTimeout.org
> $apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcTimeout.scala:47)
> at
>
> org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:62)
> at
>
> org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:58)
> at
>
> scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
> at scala.util.Failure$$anonfun$recover$1.apply(Try.scala:216)
> at scala.util.Try$.apply(Try.scala:192)
> at scala.util.Failure.recover(Try.scala:216)
> at
> scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:326)
> at
> scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:326)
> at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
> at
>
> org.spark_project.guava.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:293)
> at
>
> scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136)
> at
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
> at
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
> at scala.concurrent.Promise$class.complete(Promise.scala:55)
> at
> scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
> at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:237)
> at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:237)
> at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
> at
>
> scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:63)
> at
>
> scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:78)
> at
>
> scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
> at
>
> scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
> at
> scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
> at
> scala.concurrent.BatchingExecutor$Batch.run(BatchingExecutor.scala:54)
> at
>
> scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
> at
> scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:106)
> at
> scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
> at
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
> at
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
> at scala.concurrent.Promise$class.tryFailure(Promise.scala:112)
> at
> scala.concurrent.impl.Promise$DefaultPromise.tryFailure(Promise.scala:157)
> at
> org.apache.spark.rpc.netty.NettyRpcEnv.org
> $apache$spark$rpc$netty$NettyRpcEnv$$onFailure$1(NettyRpcEnv.scala:206)
> at
> org.apache.spark.rpc.netty.NettyRpcEnv$$anon$1.run(NettyRpcEnv.scala:243)
> at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at