[GitHub] spark pull request #19002: [SPARK-21790][TESTS][FOLLOW-UP] Add filter pushdo...
Github user asfgit closed the pull request at: https://github.com/apache/spark/pull/19002

---
If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA.
---
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
Github user wangyum commented on a diff in the pull request: https://github.com/apache/spark/pull/19002#discussion_r134144956

--- Diff: sql/core/src/test/scala/org/apache/spark/sql/test/SQLTestUtils.scala ---
@@ -39,7 +39,6 @@ import org.apache.spark.sql.catalyst.plans.PlanTest
 import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
 import org.apache.spark.sql.catalyst.util._
 import org.apache.spark.sql.execution.FilterExec
-import org.apache.spark.sql.internal.SQLConf
--- End diff --

OK, I'll revert it.
Github user HyukjinKwon commented on a diff in the pull request: https://github.com/apache/spark/pull/19002#discussion_r134140695

--- Diff: external/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/OracleIntegrationSuite.scala ---
@@ -255,6 +256,18 @@ class OracleIntegrationSuite extends DockerJDBCIntegrationSuite with SharedSQLCo
     val df = dfRead.filter(dfRead.col("date_type").lt(dt))
       .filter(dfRead.col("timestamp_type").lt(ts))
+    val parentPlan = df.queryExecution.executedPlan
+    assert(parentPlan.isInstanceOf[WholeStageCodegenExec])
+    val node = parentPlan.asInstanceOf[WholeStageCodegenExec]
+    val metadata = node.child.asInstanceOf[RowDataSourceScanExec].metadata
+    // The "PushedFilters" part should be exist in Dataframe's
--- End diff --

little nit: `should be exist` -> `should exist`
Github user HyukjinKwon commented on a diff in the pull request: https://github.com/apache/spark/pull/19002#discussion_r134140826

--- Diff: external/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/OracleIntegrationSuite.scala ---
@@ -255,6 +256,18 @@ class OracleIntegrationSuite extends DockerJDBCIntegrationSuite with SharedSQLCo
     val df = dfRead.filter(dfRead.col("date_type").lt(dt))
       .filter(dfRead.col("timestamp_type").lt(ts))
+    val parentPlan = df.queryExecution.executedPlan
+    assert(parentPlan.isInstanceOf[WholeStageCodegenExec])
+    val node = parentPlan.asInstanceOf[WholeStageCodegenExec]
+    val metadata = node.child.asInstanceOf[RowDataSourceScanExec].metadata
+    // The "PushedFilters" part should be exist in Dataframe's
+    // physical plan and the existence of right literals in
+    // "PushedFilters" is used to prove that the predicates
+    // pushing down have been effective.
+    assert(metadata.get("PushedFilters").ne(None))
--- End diff --

nit: Could we use `isDefined` instead of `ne(None)`?
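The reasoning behind this nit can be sketched with plain Scala (the values below are invented for illustration): `ne(None)` is a reference-inequality check that only happens to work because `None` is a singleton object, while `isDefined` states the intent directly.

```scala
// "ne(None)" compares object identity; it works only because None is a
// singleton. "isDefined" expresses "this Option holds a value" directly.
val present: Option[String] = Some("PushedFilters: [LessThan(date_type,...)]")
val absent: Option[String] = None

assert(present.isDefined)  // idiomatic
assert(present.ne(None))   // equivalent here, but an identity comparison
assert(absent.isEmpty)
assert(absent.eq(None))
```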
Github user HyukjinKwon commented on a diff in the pull request: https://github.com/apache/spark/pull/19002#discussion_r134114700

--- Diff: sql/core/src/test/scala/org/apache/spark/sql/test/SQLTestUtils.scala ---
@@ -39,7 +39,6 @@ import org.apache.spark.sql.catalyst.plans.PlanTest
 import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
 import org.apache.spark.sql.catalyst.util._
 import org.apache.spark.sql.execution.FilterExec
-import org.apache.spark.sql.internal.SQLConf
--- End diff --

But I guess it's an unrelated change. It wouldn't block this PR, but I think it's good to revert this one back.
Github user gatorsmile commented on a diff in the pull request: https://github.com/apache/spark/pull/19002#discussion_r134105895

--- Diff: external/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/OracleIntegrationSuite.scala ---
@@ -255,6 +256,18 @@ class OracleIntegrationSuite extends DockerJDBCIntegrationSuite with SharedSQLCo
     val df = dfRead.filter(dfRead.col("date_type").lt(dt))
       .filter(dfRead.col("timestamp_type").lt(ts))
+    val parentPlan = df.queryExecution.executedPlan
+    assert(parentPlan.isInstanceOf[WholeStageCodegenExec])
+    val node = parentPlan.asInstanceOf[WholeStageCodegenExec]
+    val metadata = node.child.asInstanceOf[RowDataSourceScanExec].metadata
+    // The "PushedFilters" part should be exist in Datafrome's
--- End diff --

`Datafrome's` -> `Dataframe's`
Github user wangyum commented on a diff in the pull request: https://github.com/apache/spark/pull/19002#discussion_r134104208

--- Diff: sql/core/src/test/scala/org/apache/spark/sql/test/SQLTestUtils.scala ---
@@ -39,7 +39,6 @@ import org.apache.spark.sql.catalyst.plans.PlanTest
 import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
 import org.apache.spark.sql.catalyst.util._
 import org.apache.spark.sql.execution.FilterExec
-import org.apache.spark.sql.internal.SQLConf
--- End diff --

Unused import
GitHub user wangyum opened a pull request: https://github.com/apache/spark/pull/19002

[SPARK-21790][TESTS][FOLLOW-UP] Add filter pushdown verification back.

## What changes were proposed in this pull request?

The previous PR (https://github.com/apache/spark/pull/19000) removed the filter pushdown verification; this PR adds it back.

## How was this patch tested?

Manual tests.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/wangyum/spark SPARK-21790-follow-up

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/19002.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #19002

commit 2b45129def7d190a3949c31b9341e071b63526bc
Author: Yuming Wang
Date: 2017-08-20T00:41:54Z
Get metadata from RowDataSourceScanExec

commit 71182ddcb16d8550bdbe2d946c451a47cea79af8
Author: Yuming Wang
Date: 2017-08-20T00:55:41Z
Remove unused import
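The verification pattern this PR restores can be sketched without a live Spark/Oracle setup by modeling the scan node's metadata as a plain Map. In the real test the map comes from `RowDataSourceScanExec.metadata` inside the `WholeStageCodegenExec` plan; the "PushedFilters" string below is an invented example of what a rendered pushed-down predicate list might look like, not output captured from Spark.

```scala
// Hypothetical stand-in for RowDataSourceScanExec.metadata; the real test
// obtains it via df.queryExecution.executedPlan. The values are invented
// examples for illustration only.
val metadata: Map[String, String] = Map(
  "ReadSchema" -> "struct<date_type:date,timestamp_type:timestamp>",
  "PushedFilters" -> "[LessThan(date_type,1983-08-04), LessThan(timestamp_type,1983-08-04 18:00:00.0)]"
)

// Mirrors the test's logic: "PushedFilters" must be present in the scan
// metadata, and it must mention the expected predicates and literals to
// prove that predicate pushdown actually took effect.
val pushed: Option[String] = metadata.get("PushedFilters")
assert(pushed.isDefined)
assert(pushed.get.contains("LessThan(date_type"))
```

The design point is that checking only for the key's presence is not enough; asserting on the rendered literals distinguishes "some filter was pushed" from "the right filter was pushed".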