[ https://issues.apache.org/jira/browse/SPARK-29195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eric Sun updated SPARK-29195:
-----------------------------
    Description: 
Only the compression codec can be effectively configured via code; "orc.compress.size" and "orc.row.index.stride" cannot.

 
{code:java}
// Attempted configurations -- none of these changes the ORC compression buffer size
  val spark = SparkSession
    .builder()
    .appName(appName)
    .enableHiveSupport()
    .config("spark.sql.orc.impl", "native")
    .config("orc.compress.size", 512 * 1024)
    .config("spark.sql.orc.compress.size", 512 * 1024)
    .config("hive.exec.orc.default.buffer.size", 512 * 1024)
    .config("spark.hadoop.io.file.buffer.size", 512 * 1024)
    .getOrCreate()
{code}

{{orcfiledump}} still shows:
 
{code:java}
File Version: 0.12 with FUTURE

Compression: ZLIB
Compression size: 65536
{code}
 
Executor Log:

{code}
impl.WriterImpl: ORC writer created for path: hdfs://name_node_host:9000/foo/bar/_temporary/0/_temporary/attempt_20190920222359_0001_m_000127_0/part-00127-2a9a9287-54bf-441c-b3cf-718b122d9c2f_00127.c000.zlib.orc with stripeSize: 67108864 blockSize: 268435456 compression: ZLIB bufferSize: 65536

File Output Committer Algorithm version is 2
{code}

According to [SPARK-23342], the other ORC options should also be configurable. Is there anything missing here?
Is there any other way to set "orc.compress.size"?
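For reference, since SPARK-23342 routes ORC options through the data source, one workaround worth trying (unverified here; the DataFrame and output path below are placeholders, not from this report) is to pass "orc.compress.size" as a per-write {{DataFrameWriter}} option rather than a session config:

```scala
// Hypothetical workaround sketch: pass ORC writer options per write via
// DataFrameWriter.option(), which the native data source forwards to the
// ORC writer. The DataFrame and path are illustrative placeholders.
val df = spark.range(0, 1000).toDF("id")

df.write
  .option("orc.compress", "ZLIB")
  .option("orc.compress.size", (512 * 1024).toString) // 524288-byte buffer
  .orc("hdfs://name_node_host:9000/foo/bar_orc_test")
```

Whether this actually changes the buffer size would still need to be confirmed with orcfiledump on the resulting files.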

    Environment: Spark 2.3.0

> Can't config orc.compress.size option for native ORC writer
> -----------------------------------------------------------
>
>                 Key: SPARK-29195
>                 URL: https://issues.apache.org/jira/browse/SPARK-29195
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.3.0
>         Environment: Spark 2.3.0
>            Reporter: Eric Sun
>            Priority: Minor
>              Labels: ORC
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
