[ https://issues.apache.org/jira/browse/SPARK-38812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17523476#comment-17523476 ]

gaokui commented on SPARK-38812:
--------------------------------

I have seen SPARK-2373 and SPARK-6664.

Actually, I can suggest a better method than those two: compute the split in a 
single job, not two.

For example:

val intRDD = sc.makeRDD(Array(1, 2, 3, 4, 5, 6))
intRDD.foreachPartition { iter =>
  val (it1, it2) = iter.partition(x => x <= 3)
  // Right here we cannot use rdd.saveAsTextFile; we would need to write our own
  // store policy with a flush interval and a write-size threshold.
  saveQualityError(it1)
  saveQualityGood(it2)
  // A more serious problem is the bottleneck ("short bucket") effect: in one
  // partition the good data may be small and the bad data large, so one write
  // method ends up waiting for the other.
}
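As for the write inside foreachPartition: a minimal single-pass sketch, assuming plain text output, is to tag each record once and let a Hadoop MultipleTextOutputFormat route it to a per-key directory, so one job writes both groups. The class name SplitByQualityFormat, the threshold (<= 3) and the output path /tmp/cleaned below are placeholders, not anything defined in this issue.

import org.apache.hadoop.io.NullWritable
import org.apache.hadoop.mapred.lib.MultipleTextOutputFormat

// Routes each record into a sub-directory named after its key ("error" or "good").
class SplitByQualityFormat extends MultipleTextOutputFormat[Any, Any] {
  override def generateFileNameForKeyValue(key: Any, value: Any, name: String): String =
    s"$key/$name"
  // Drop the key so only the value text ends up in the output files.
  override def generateActualKey(key: Any, value: Any): Any =
    NullWritable.get()
}

val intRDD = sc.makeRDD(Array(1, 2, 3, 4, 5, 6))
intRDD
  .map(x => (if (x <= 3) "error" else "good", x.toString)) // tag each record once
  .saveAsHadoopFile("/tmp/cleaned",
    classOf[String], classOf[String], classOf[SplitByQualityFormat])

Because each task consumes its iterator once and the output format decides the destination per record, neither group waits for the other, which also sidesteps the bottleneck effect described above.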

> When I clean data, I hope one RDD can be split into two RDDs according to a data-cleaning rule
> ----------------------------------------------------------------------------------------------
>
>                 Key: SPARK-38812
>                 URL: https://issues.apache.org/jira/browse/SPARK-38812
>             Project: Spark
>          Issue Type: New Feature
>          Components: Spark Core
>    Affects Versions: 3.2.1
>            Reporter: gaokui
>            Priority: Major
>
> When I do data cleaning, one RDD is filtered by a value (> or <) and then 
> generates two different sets: one is an error-data file, the other is an 
> errorless-data file.
> Now I use filter, but this requires two Spark DAG jobs, which costs too much.
> What I want is something like iterator.span(predicate) that returns a 
> tuple (iter1, iter2).
> One dataset would be split into two datasets within a single data-cleaning pass.
> I hope to compute once, not twice.
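For reference, a minimal sketch of the common cache-based workaround, assuming the cleaned data fits the storage budget: persist the parent RDD once and run both filters against the cached result. Spark still schedules two jobs, but the expensive upstream computation runs only once. The input path, threshold, and output paths below are placeholders.

import org.apache.spark.storage.StorageLevel

// Placeholder upstream pipeline; the expensive work happens above the persist point.
val cleaned = sc.textFile("/tmp/raw").map(_.trim.toInt)
cleaned.persist(StorageLevel.MEMORY_AND_DISK)

// Two actions, but the parent is materialized only once thanks to the cache.
cleaned.filter(_ <= 3).saveAsTextFile("/tmp/error")
cleaned.filter(_ > 3).saveAsTextFile("/tmp/good")

cleaned.unpersist()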



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
