[ https://issues.apache.org/jira/browse/SPARK-40154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean R. Owen reassigned SPARK-40154:
------------------------------------

    Assignee: Paul Staab

> PySpark: DataFrame.cache docstring gives wrong storage level
> ------------------------------------------------------------
>
>                 Key: SPARK-40154
>                 URL: https://issues.apache.org/jira/browse/SPARK-40154
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 3.3.0
>            Reporter: Paul Staab
>            Assignee: Paul Staab
>            Priority: Minor
>              Labels: pull-request-available
>
> The docstring of the `DataFrame.cache()` method currently states that it uses a serialized storage level
> {code:python}
> Persists the :class:`DataFrame` with the default storage level (`MEMORY_AND_DISK`).
> [...]
> The default storage level has changed to `MEMORY_AND_DISK` to match Scala in 2.0.{code}
> while `DataFrame.persist()` states that it uses a deserialized storage level
> {code:python}
> If no storage level is specified defaults to (`MEMORY_AND_DISK_DESER`)
> [...]
> The default storage level has changed to `MEMORY_AND_DISK_DESER` to match Scala in 3.0.{code}
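>  
> In PySpark the two constants differ only in the `deserialized` flag of `StorageLevel(useDisk, useMemory, useOffHeap, deserialized, replication=1)`, which is a quick way to see why `MEMORY_AND_DISK` counts as a serialized level here (a minimal check; output shown as observed on PySpark 3.3.0, where `print` uses the same string format as the Spark UI):
> {code:python}
> from pyspark import StorageLevel
> # MEMORY_AND_DISK has deserialized=False, i.e. a serialized level
> print(StorageLevel.MEMORY_AND_DISK)
> # Disk Memory Serialized 1x Replicated
> # MEMORY_AND_DISK_DESER has deserialized=True, matching Scala's default
> print(StorageLevel.MEMORY_AND_DISK_DESER)
> # Disk Memory Deserialized 1x Replicated{code}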
>  
> However, in practice both `.cache()` and `.persist()` use deserialized storage levels:
> {code:python}
> import pyspark
> from pyspark.sql import SparkSession
> from pyspark import StorageLevel
> 
> print(pyspark.__version__)
> # 3.3.0
> 
> spark = SparkSession.builder.master("local[2]").getOrCreate()
> 
> df = spark.createDataFrame(zip(["A"] * 1000, ["B"] * 1000), ["col_a", "col_b"])
> df = df.cache()
> df.count()
> # Storage level in Spark UI: "Disk Memory Deserialized 1x Replicated"
> 
> df = spark.createDataFrame(zip(["A"] * 1000, ["B"] * 1000), ["col_a", "col_b"])
> df = df.persist()
> df.count()
> # Storage level in Spark UI: "Disk Memory Deserialized 1x Replicated"
> 
> df = spark.createDataFrame(zip(["A"] * 1000, ["B"] * 1000), ["col_a", "col_b"])
> df = df.persist(StorageLevel.MEMORY_AND_DISK)
> df.count()
> # Storage level in Spark UI: "Disk Memory Serialized 1x Replicated"{code}
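>  
> The actual level can also be checked programmatically via the `DataFrame.storageLevel` property instead of the Spark UI (a minimal sketch, reusing the `spark` session from above):
> {code:python}
> df = spark.createDataFrame(zip(["A"] * 1000, ["B"] * 1000), ["col_a", "col_b"])
> df = df.cache()
> # storageLevel reports the level registered by cache()
> print(df.storageLevel)
> # Disk Memory Deserialized 1x Replicated
> print(df.storageLevel.deserialized)
> # True{code}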
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
