[ https://issues.apache.org/jira/browse/SPARK-12843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15112286#comment-15112286 ]
Maciej Bryński commented on SPARK-12843:
----------------------------------------

From the Spark UI I can observe the time of the tasks and the shuffle write.

> Spark should avoid scanning all partitions when limit is set
> ------------------------------------------------------------
>
>                 Key: SPARK-12843
>                 URL: https://issues.apache.org/jira/browse/SPARK-12843
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.6.0
>            Reporter: Maciej Bryński
>
> SQL Query:
> {code}
> select * from table limit 100
> {code}
> forces Spark to scan all partitions, even when enough data is available at
> the beginning of the scan.
> This behaviour should be avoided: the scan should stop as soon as enough
> data has been collected.
> Is it related to [SPARK-9850]?

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
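The behaviour the issue asks for, stopping the scan once the limit is satisfied rather than reading every partition, can be sketched in plain Python. This is a toy model only, not Spark's actual planner or execution code; the function name `scan_with_limit` and the partition layout are hypothetical:

```python
def scan_with_limit(partitions, limit):
    """Collect up to `limit` rows, scanning partitions lazily and
    stopping at the first partition that satisfies the limit."""
    rows = []
    scanned = 0  # how many partitions we actually touched
    for partition in partitions:
        scanned += 1
        for row in partition:
            rows.append(row)
            if len(rows) >= limit:
                # Enough rows collected: remaining partitions are never read.
                return rows, scanned
    return rows, scanned

# Example: 10 partitions of 100 rows each; a limit of 100 should
# only need to touch the first partition.
partitions = [[(p, i) for i in range(100)] for p in range(10)]
rows, scanned = scan_with_limit(partitions, 100)
```

With this early-stop strategy, a `limit 100` over data whose first partition already holds 100 rows reads one partition instead of all ten, which is the saving the issue is requesting for the SQL query above.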