[ https://issues.apache.org/jira/browse/SPARK-32672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17182225#comment-17182225 ]

Hyukjin Kwon commented on SPARK-32672:
--------------------------------------

[~revans2] is a PMC member, and this is a correctness issue, so it is indeed a 
blocker. I think the initial action was a mistake.

I think [~cltlfcjin] was referring to:

{quote}
Set to Major or below; higher priorities are generally reserved for committers 
to set. 
{quote}

I fully agree that, ideally, we should first evaluate what reporters are 
describing and state the reason for any priority change.

The problem is that we don't have much manpower for triaging and managing 
JIRAs; there just aren't many people who do it.
Given this situation, I would like to encourage aggressive triaging - there 
are many JIRAs whose priority is set incorrectly.

For example, many JIRAs just ask questions and/or request investigation while 
setting the priority to Blocker. Such blockers matter to release managers.
If we want a more fine-grained and careful evaluation of JIRAs, I would 
encourage our PMC members to take a look more often.


> Data corruption in some cached compressed boolean columns
> ---------------------------------------------------------
>
>                 Key: SPARK-32672
>                 URL: https://issues.apache.org/jira/browse/SPARK-32672
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.3.4, 2.4.6, 3.0.0, 3.0.1, 3.1.0
>            Reporter: Robert Joseph Evans
>            Assignee: Robert Joseph Evans
>            Priority: Blocker
>              Labels: correctness
>             Fix For: 2.4.7, 3.0.1, 3.1.0
>
>         Attachments: bad_order.snappy.parquet, small_bad.snappy.parquet
>
>
> I found that when some boolean data is sorted and then cached, the results 
> can change when the data is read back out.
> It needs to be a non-trivial amount of data, and it is highly dependent on 
> the order of the data.  If I disable compression in the cache, the issue goes 
> away.  I was able to make this happen in 3.0.0, and I am going to try to 
> reproduce it in other versions too.
> I'll attach the parquet file with boolean data in an order that causes this 
> to happen. As you can see below, after the data is cached a single null 
> value switches over to false.
> {code}
> scala> val bad_order = spark.read.parquet("./bad_order.snappy.parquet")
> bad_order: org.apache.spark.sql.DataFrame = [b: boolean]
> scala> bad_order.groupBy("b").count.show
> +-----+-----+
> |    b|count|
> +-----+-----+
> | null| 7153|
> | true|54334|
> |false|54021|
> +-----+-----+
> scala> bad_order.cache()
> res1: bad_order.type = [b: boolean]
> scala> bad_order.groupBy("b").count.show
> +-----+-----+
> |    b|count|
> +-----+-----+
> | null| 7152|
> | true|54334|
> |false|54022|
> +-----+-----+
> scala> 
> {code}
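> For reference, here is a minimal sketch of the compression-disabling 
> workaround mentioned above, assuming a fresh session so the table is 
> re-cached. {{spark.sql.inMemoryColumnarStorage.compressed}} is the standard 
> config controlling compression of the in-memory columnar cache; turning it 
> off only works around the issue, it does not fix it.
> {code}
> scala> // Workaround sketch: disable in-memory columnar compression before caching.
> scala> spark.conf.set("spark.sql.inMemoryColumnarStorage.compressed", "false")
> scala> val bad_order = spark.read.parquet("./bad_order.snappy.parquet")
> scala> bad_order.cache()
> scala> bad_order.groupBy("b").count.show  // per the report, counts now match the uncached run
> {code}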


