Howdy All,
The Spark 3.3 documentation states that it is Java 8/11/17 compatible, but
I'm having a hard time finding an existing code base that uses JDK 17
for the userland compilation. Even the Spark 3.3 branch doesn't appear to
compile/test with JDK 17 in the GitHub Actions workflows for branch-3.3.
Due to settings like
"spark.kubernetes.executor.missingPodDetectDelta", I've begun to wonder
about heartbeats on Kubernetes.
Do executors still conduct the traditional heartbeat to the driver
when run on Kubernetes?
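For reference, a rough sketch of where those knobs sit (the values below
are just the documented defaults, not settings from any real deployment):

import org.apache.spark.sql.SparkSession;

SparkSession spark = SparkSession.builder()
    .appName("k8s-heartbeat-question")  // illustrative name
    // Executors heartbeat to the driver on this interval regardless of the
    // cluster manager; 10s is the documented default.
    .config("spark.executor.heartbeatInterval", "10s")
    // Kubernetes-specific: how long a registered executor's pod may be absent
    // from the API server's pod listing before the driver treats it as
    // deleted; 30s is the documented default.
    .config("spark.kubernetes.executor.missingPodDetectDelta", "30s")
    .getOrCreate();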
Thanks,
Kris
--
I figured out why. We are not persisting the data at the end of
.load(). Thus, every operation like count() is going back to Kafka
for the data again.
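A minimal sketch of that fix, assuming df is the Dataset<Row> returned by
load() (the storage level and variable names are just illustrative):

import org.apache.spark.storage.StorageLevel;

// Persist the Dataset returned by load() so later actions reuse the rows
// already fetched from Kafka instead of polling the brokers again.
df.persist(StorageLevel.MEMORY_AND_DISK());

long first  = df.count();  // first action materializes the cache (one Kafka scan)
long second = df.count();  // answered from the cache, no second poll of Kafka

df.unpersist();            // release the cached blocks when the batch job is done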
On Fri, Mar 1, 2019 at 10:10 AM Kristopher Kane wrote:
We are using the assign API to do batch work with Spark and Kafka.
What I'm seeing is the Spark executor work happening in the background,
constantly polling the same data over and over until the main thread
commits the offsets.
Is the below a blocking operation?
Dataset df = spark.read().f
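(A hypothetical reconstruction of that truncated snippet, since the real
broker, topic, and partitions aren't in the message; load() itself is lazy,
so the blocking work happens in the actions that follow it.)

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

SparkSession spark = SparkSession.builder()
    .appName("kafka-batch-read")  // illustrative name
    .getOrCreate();

// Broker, topic, and partitions below are placeholders, not from the thread.
Dataset<Row> df = spark.read()
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("assign", "{\"my-topic\":[0,1]}")  // assign API: explicit topic/partitions
    .load();                                   // lazy: no Kafka poll happens here

// Each action re-reads the assigned offset range unless df is persisted.
long first  = df.count();
long second = df.count();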