[ https://issues.apache.org/jira/browse/KYLIN-4320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100598#comment-17100598 ]

ASF GitHub Bot commented on KYLIN-4320:
---------------------------------------

zhangayqian opened a new pull request #1196:
URL: https://github.com/apache/kylin/pull/1196


   …r Spark engine
   
   ## Proposed changes
   
   Describe the big picture of your changes here to communicate to the 
maintainers why we should accept this pull request. If it fixes a bug or 
resolves a feature request, be sure to link to that issue.
   
   ## Types of changes
   
   What types of changes does your code introduce to Kylin?
   _Put an `x` in the boxes that apply_
   
   - [ ] Bugfix (non-breaking change which fixes an issue)
   - [ ] New feature (non-breaking change which adds functionality)
   - [ ] Breaking change (fix or feature that would cause existing 
functionality to not work as expected)
   - [ ] Documentation Update (if none of the other choices apply)
   
   ## Checklist
   
   _Put an `x` in the boxes that apply. You can also fill these out after 
creating the PR. If you're unsure about any of them, don't hesitate to ask. 
We're here to help! This is simply a reminder of what we are going to look for 
before merging your code._
   
   - [ ] I have created an issue on [Kylin's 
jira](https://issues.apache.org/jira/browse/KYLIN), and have described the 
bug/feature there in detail
   - [ ] Commit messages in my PR start with the related jira ID, like 
"KYLIN-0000 Make Kylin project open-source"
   - [ ] Compilation and unit tests pass locally with my changes
   - [ ] I have added tests that prove my fix is effective or that my feature 
works
   - [ ] If this change needs a documentation change, I will prepare another PR 
against the `document` branch
   - [ ] Any dependent changes have been merged
   
   ## Further comments
   
   If this is a relatively large or complex change, kick off the discussion at 
user@kylin or dev@kylin by explaining why you chose the solution you did and 
what alternatives you considered, etc...
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> number of replicas of Cuboid files cannot be configured for Spark engine
> ------------------------------------------------------------------------
>
>                 Key: KYLIN-4320
>                 URL: https://issues.apache.org/jira/browse/KYLIN-4320
>             Project: Kylin
>          Issue Type: Bug
>          Components: Job Engine
>    Affects Versions: v3.0.1
>            Reporter: Congling Xia
>            Assignee: Yaqian Zhang
>            Priority: Major
>             Fix For: v3.1.0
>
>         Attachments: cuboid_replications.png
>
>   Original Estimate: 4h
>  Remaining Estimate: 4h
>
> Hi, team. I tried to change `dfs.replication` to 3 by adding the following 
> config override:
> {code:java}
> kylin.engine.spark-conf.spark.hadoop.dfs.replication=3
> {code}
> Then I got a strange result: the number of replicas of the cuboid files 
> varies even though they are at the same level.
> !cuboid_replications.png!
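> To make the varying replica counts concrete, here is a minimal sketch (the 
> cuboid directory path below is hypothetical) that lists the per-file 
> replication factor with the plain HDFS client API:
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileStatus;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> 
> public class ReplicationCheck {
>     public static void main(String[] args) throws Exception {
>         FileSystem fs = FileSystem.get(new Configuration());
>         // hypothetical cuboid output directory; substitute a real job's path
>         Path cuboidDir = new Path("/kylin/kylin_metadata/kylin-<job-id>/cuboid/");
>         for (FileStatus st : fs.listStatus(cuboidDir)) {
>             if (st.isFile()) {
>                 System.out.println(st.getPath() + " replication=" + st.getReplication());
>             }
>         }
>     }
> }
> {code}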
> I guess it is due to the conflicting settings in SparkUtil:
> {code:java}
> public static void modifySparkHadoopConfiguration(SparkContext sc) throws Exception {
>     sc.hadoopConfiguration().set("dfs.replication", "2"); // cuboid intermediate files, replication=2
>     sc.hadoopConfiguration().set("mapreduce.output.fileoutputformat.compress", "true");
>     sc.hadoopConfiguration().set("mapreduce.output.fileoutputformat.compress.type", "BLOCK");
>     sc.hadoopConfiguration().set("mapreduce.output.fileoutputformat.compress.codec", "org.apache.hadoop.io.compress.DefaultCodec"); // or org.apache.hadoop.io.compress.SnappyCodec
> }
> {code}
> It may be a matter of Spark property precedence. After checking the [Spark 
> documentation|https://spark.apache.org/docs/latest/configuration.html#dynamically-loading-spark-properties], 
> it seems that some programmatically set properties may not take effect, and 
> setting them in code is not the recommended way to configure a Spark job.
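> 
> As an illustration of that precedence (a standalone demo, not Kylin code): 
> Spark copies spark.hadoop.* entries into the context's Hadoop Configuration 
> when the SparkContext starts, so a later programmatic set() silently wins 
> over the user's override:
> {code:java}
> import org.apache.spark.SparkConf;
> import org.apache.spark.api.java.JavaSparkContext;
> 
> public class PrecedenceDemo {
>     public static void main(String[] args) {
>         SparkConf conf = new SparkConf()
>                 .setMaster("local[1]").setAppName("precedence-demo")
>                 .set("spark.hadoop.dfs.replication", "3"); // the user's override
>         try (JavaSparkContext jsc = new JavaSparkContext(conf)) {
>             System.out.println(jsc.hadoopConfiguration().get("dfs.replication")); // prints 3
>             jsc.hadoopConfiguration().set("dfs.replication", "2"); // what SparkUtil does
>             System.out.println(jsc.hadoopConfiguration().get("dfs.replication")); // prints 2 from now on
>         }
>     }
> }
> {code}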
>  
> Anyway, cuboid files may survive for weeks until they expire or are merged, 
> so the configuration rewrite in 
> `org.apache.kylin.engine.spark.SparkUtil#modifySparkHadoopConfiguration` 
> makes those files less reliable.
> Is there any way to force cuboid files to keep 3 replicas? Or shall we 
> remove the code in SparkUtil so that 
> kylin.engine.spark-conf.spark.hadoop.dfs.replication works properly?
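> 
> One possible shape for a fix (a hedged sketch only, not necessarily what the 
> actual patch does) is to apply the default replication only when the user 
> has not supplied one via spark.hadoop.dfs.replication:
> {code:java}
> public static void modifySparkHadoopConfiguration(SparkContext sc) throws Exception {
>     // keep replication=2 as a default for cuboid intermediate files, but
>     // let kylin.engine.spark-conf.spark.hadoop.dfs.replication win if set
>     if (!sc.getConf().contains("spark.hadoop.dfs.replication")) {
>         sc.hadoopConfiguration().set("dfs.replication", "2");
>     }
>     sc.hadoopConfiguration().set("mapreduce.output.fileoutputformat.compress", "true");
>     sc.hadoopConfiguration().set("mapreduce.output.fileoutputformat.compress.type", "BLOCK");
>     sc.hadoopConfiguration().set("mapreduce.output.fileoutputformat.compress.codec", "org.apache.hadoop.io.compress.DefaultCodec");
> }
> {code}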



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
