Hi,
I am running Spark as a service. When we change some SQL schema, we are
facing some problems.
ERROR [http-nio-8090-exec-18] (Logging.scala:70) - SparkListenerBus has
already stopped! Dropping event
SparkListenerSQLExecutionEnd(2248,1551362214090)
I think we want to change the value of spark.local.dir to point to where your
PVC is mounted. Can you give that a try and let us know if that moves the
spills as expected?
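In case it helps, spark.local.dir can be passed at submit time; a minimal sketch, assuming a hypothetical PVC mount path of /mnt/pvc (substitute the actual mount point):

```shell
# Point Spark's scratch space (shuffle spills, temp files) at the PVC mount.
# /mnt/pvc/spark-scratch is a hypothetical example path, not from the thread.
spark-submit \
  --conf spark.local.dir=/mnt/pvc/spark-scratch \
  --class com.example.MyApp \
  my-app.jar
```

The same key can also be set in spark-defaults.conf; note that on some cluster managers it is overridden by environment variables such as SPARK_LOCAL_DIRS.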
-Matt Cheah
From: Tomasz Krol
Date: Wednesday, February 27, 2019 at 3:41 AM
To: "user@spark.apache.org"
Subject:
Thanks for the answer.
As far as the next step goes, I am thinking of writing out the dfKV
dataframe to disk and then using Avro APIs to read the data.
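A rough sketch of that idea, assuming the spark-avro package is on the classpath and using /tmp/dfkv-avro as a hypothetical output path (dfKV is the dataframe from this thread):

```scala
// Write the dataframe out as Avro files, then read them back with
// Spark's Avro data source. The path here is a hypothetical example.
dfKV.write
  .format("avro")
  .mode("overwrite")
  .save("/tmp/dfkv-avro")

// Read the files back; the Avro schema is taken from the written data.
val readBack = spark.read.format("avro").load("/tmp/dfkv-avro")
readBack.printSchema()
```

This requires a running SparkSession (`spark`) and the spark-avro module, so it is a sketch rather than a self-contained program.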
This smells like a bug somewhere.
Cheers,
Hien
On Thu, Feb 28, 2019 at 4:02 AM Gabor Somogyi
wrote:
> No, just take a look at the schema of
Hi,
This might be an opportunity to give a huge speed bump to toLocalIterator.
Method toLocalIterator fetches the partitions to the driver one by one.
This is great. What is not so great is that any required computation
for the yet-to-be-fetched partitions is not kicked off until it is
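A common workaround for that behaviour today, as a sketch: materialise the whole dataset first, so every partition is computed before the iterator starts pulling them one at a time (df here stands for any DataFrame; it is not from the thread):

```scala
// toLocalIterator pulls one partition at a time, so only about one
// partition's worth of data sits on the driver, but each partition's
// computation is only triggered when that partition is fetched.
// Persisting and forcing a full pass computes all partitions up front.
val computed = df.persist()
computed.count()                     // forces computation of every partition
val it = computed.toLocalIterator()  // now only fetches, no recomputation
while (it.hasNext) {
  val row = it.next()
  // process row on the driver
}
computed.unpersist()
```

The trade-off is that persisting holds the whole dataset in executor storage for the duration of the iteration.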
No, just take a look at the schema of dfStruct since you've converted its
value column with to_avro:
scala> dfStruct.printSchema
root
 |-- id: integer (nullable = false)
 |-- name: string (nullable = true)
 |-- age: integer (nullable = false)
 |-- value: struct (nullable = false)
 |    |-- name:
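For reference, a minimal sketch of the to_avro/from_avro round trip being discussed, assuming Spark 2.4 with the spark-avro module (the struct construction and the Avro schema string below are illustrative assumptions, not taken from the thread):

```scala
import org.apache.spark.sql.avro._          // to_avro / from_avro (Spark 2.4)
import org.apache.spark.sql.functions.struct

// Pack name and age into a struct column and serialise it to Avro binary.
val dfKV = dfStruct.select(to_avro(struct($"name", $"age")).as("value"))

// Deserialising back requires the writer's Avro schema as a JSON string.
val avroSchema =
  """{"type":"record","name":"value","fields":[
    |{"name":"name","type":["null","string"]},
    |{"name":"age","type":"int"}]}""".stripMargin
val decoded = dfKV.select(from_avro($"value", avroSchema).as("value"))
decoded.printSchema()
```

After to_avro the value column is binary; it is only after from_avro that printSchema shows a struct with the nested fields again.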
Hi Akshay
Thanks for the response; please find below the answers to your questions.
1. We are running Spark in cluster mode, the cluster manager being Spark's
standalone cluster manager.
2. All the ports are open, and we preconfigure the ports on which
communication should happen and modify firewall
Hi Lokesh,
Please provide further information to help identify the issue.
1) Are you running in standalone mode or cluster mode? If cluster, is it
a Spark master/slave setup or YARN/Mesos?
2) Have you tried checking if all ports between your master and the machine
with IP 192.168.43.167 are