[ https://issues.apache.org/jira/browse/SPARK-12470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pete Robbins updated SPARK-12470:
---------------------------------
    Component/s: SQL
        Summary: Incorrect calculation of row size in o.a.s.sql.catalyst.expressions.codegen.GenerateUnsafeRowJoiner  (was: Incorrect calculation of row size in o.a.s.catalyst.expressions.codegen.GenerateUnsafeRowJoiner)

> Incorrect calculation of row size in o.a.s.sql.catalyst.expressions.codegen.GenerateUnsafeRowJoiner
> ---------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-12470
>                 URL: https://issues.apache.org/jira/browse/SPARK-12470
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.5.2
>            Reporter: Pete Robbins
>            Priority: Minor
>
> While looking into https://issues.apache.org/jira/browse/SPARK-12319, I 
> noticed that the row size is incorrectly calculated.
> The "sizeReduction" value is calculated in words:
>     // The number of words we can reduce when we concat two rows together.
>     // The only reduction comes from merging the bitset portion of the two rows, saving 1 word.
>     val sizeReduction = bitset1Words + bitset2Words - outputBitsetWords
> but then it is subtracted from the size of the row in bytes:
>     |    out.pointTo(buf, ${schema1.size + schema2.size}, sizeInBytes - $sizeReduction);
>  
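
Since an UnsafeRow bitset word is 8 bytes, subtracting a word count from a byte count leaves the reported row size too large by 7 bytes per saved word. A minimal standalone sketch of the unit mismatch, borrowing the variable names from the quoted snippet (the numeric values are made up for illustration; this is not the actual Spark patch):

    object RowSizeSketch {
      def main(args: Array[String]): Unit = {
        // Bitset word counts for the two input rows and the merged output row
        // (hypothetical values; in GenerateUnsafeRowJoiner these come from the schemas).
        val bitset1Words = 1
        val bitset2Words = 1
        val outputBitsetWords = 1

        // Reduction from merging the two bitsets, measured in 8-byte words.
        val sizeReduction = bitset1Words + bitset2Words - outputBitsetWords

        // Combined size of the two input rows, in bytes (made-up example value).
        val sizeInBytes = 64

        // Buggy: subtracts a word count from a byte count.
        val buggySize = sizeInBytes - sizeReduction
        // Unit-consistent: convert words to bytes (1 word = 8 bytes) before subtracting.
        val correctedSize = sizeInBytes - sizeReduction * 8

        println(s"buggy = $buggySize bytes, corrected = $correctedSize bytes")
      }
    }

Multiplying the reduction by 8 (or computing it in bytes from the start) keeps the units consistent with sizeInBytes.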


