(col("id0").bitwiseOR(col("id1")) % jobs == mod) \
> .withColumn("test", test_score_r4(col("id0"), col("id1"))) \
> .cache()
> df.count()
> df.coalesce(300).write.mode("overwrite").parquet(output_mod)
One guess - you are doing two things here, count() and write(). There is a
persist(), but it's asynchronous: the job won't necessarily wait for the
persist to finish before proceeding, and may have to recompute at least some
partitions for the second op. You could debug further by looking at the
stages and tasks in the Spark UI.
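One quick check along those lines: inspect the physical plan of the second
action to see whether it reads from the persisted data (an InMemoryTableScan
node) or redoes the source scan. A minimal sketch, with placeholder data
standing in for the original expensive DataFrame:

    from pyspark import StorageLevel
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.getOrCreate()

    # Placeholder data standing in for the original expensive DataFrame.
    df = spark.range(1_000_000).withColumn("flag", col("id") % 2)

    df.persist(StorageLevel.DISK_ONLY)
    df.count()  # first action: computes and persists the partitions

    # If the write would reuse the persisted data, the plan below contains an
    # InMemoryTableScan / InMemoryRelation node instead of the original scan.
    df.coalesce(10).explain()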
---
> *From:* Deepak Sharma
> *Sent:* Sunday, January 30, 2022 12:45 AM
> *To:* Benjamin Du
> *Cc:* u...@spark.incubator.apache.org
> *Subject:* Re: A Persisted Spark DataFrame is computed twice
>
> coalesce returns a new dataset.
> That will cause the recomputation.
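For context, coalesce is a lazy transformation: it returns a new DataFrame
and runs nothing by itself, so whether the persisted parent is actually
reused is best confirmed with explain() as above. A small sketch with
placeholder names:

    from pyspark import StorageLevel
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    df = spark.range(100).persist(StorageLevel.DISK_ONLY)  # placeholder DataFrame
    df.count()                 # materialise the persisted data

    df2 = df.coalesce(10)      # lazy: builds a new DataFrame, triggers no job
    print(df is df2)           # False - a distinct DataFrame object
    print(df.storageLevel)     # the DISK_ONLY level set above
    print(df2.storageLevel)    # the new DataFrame is not itself marked persisted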
---
From: Deepak Sharma
Sent: Sunday, January 30, 2022 12:45 AM
To: Benjamin Du
Cc: u...@spark.incubator.apache.org
Subject: Re: A Persisted Spark DataFrame is computed twice
coalesce returns a new dataset.
That will cause the recomputation.
Thanks
Deepak
On Sun, 30 Jan 2022 at 14:06, Benjamin Du wrote:
> ...DataFrame to disk, read it back, repartition/coalesce it, and then write
> it back to HDFS.
> spark.read.parquet("/input/hdfs/path") \
>     .filter(col("n0") == n0) \
>     .filter(col("n1") == n1) \
>     .filter(col("h1") == h1) \
>     .filter(col(...
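The workaround mentioned in the quoted message (write the intermediate result
out, read it back, then coalesce and write the final output) might look
roughly like the following sketch; the paths and the toy computation are
placeholders, not the original poster's:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Placeholder for the expensive-to-compute DataFrame.
    df = spark.range(1_000_000).selectExpr("id", "id % 7 AS bucket")

    # 1. Write the intermediate result to HDFS instead of persisting it.
    df.write.mode("overwrite").parquet("/tmp/intermediate")

    # 2. Read it back; the new DataFrame is backed by the Parquet files on disk.
    df2 = spark.read.parquet("/tmp/intermediate")

    # 3. Coalesce and write the final output; only the Parquet scan is repeated,
    #    not the original computation.
    df2.coalesce(10).write.mode("overwrite").parquet("/tmp/final")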
---
Hi,
without getting into suppositions, the best option is to look into the
SPARK UI SQL section.
It is the most wonderful tool to explain what is happening, and why. In
SPARK 3.x they have made the UI even better, with different sets of
granularity and detail.
On another note, you might want to ...
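If it helps, the driver reports the address of its web UI; the SQL tab there
shows each query's plan, including whether a cached relation was scanned. A
tiny sketch (assuming a default local or client-mode setup):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Usually http://<driver-host>:4040; open the SQL tab there and inspect
    # the plan of the write query for an InMemoryTableScan node.
    print(spark.sparkContext.uiWebUrl)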
On Sun, 30 Jan 2022 at 14:06, Benjamin Du wrote:
> I have some PySpark code like below. Basically, I persist a DataFrame
> (which is time-consuming to compute) to disk, call the method
> DataFrame.count to trigger ...
---
It's probably the repartitioning and deserialising of the df that you are
seeing take time. Try doing this:
1. Add another count after your current one and compare the times.
2. Move the coalesce before the persist (see the sketch below).
You should see ...
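A sketch of suggestion 2 (coalesce before persisting), with placeholder data
and paths:

    from pyspark import StorageLevel
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Placeholder for the expensive DataFrame.
    df = spark.range(1_000_000).selectExpr("id", "id % 7 AS bucket")

    # Coalesce first, then persist, so the cached partitions already have the
    # final layout and the write only replays them.
    out = df.coalesce(10).persist(StorageLevel.DISK_ONLY)
    out.count()                                        # materialise the cache
    out.write.mode("overwrite").parquet("/tmp/out")    # placeholder output path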