[GitHub] spark pull request: [SPARK-9241] [SQL] [WIP] Supporting multiple D...
Github user hvanhovell commented on the pull request: https://github.com/apache/spark/pull/9280#issuecomment-154341025 Closing PR. Some of the code in here will probably re-emerge if we ever want distincts in window functions.
[GitHub] spark pull request: [SPARK-9241] [SQL] [WIP] Supporting multiple D...
Github user hvanhovell closed the pull request at: https://github.com/apache/spark/pull/9280
[GitHub] spark pull request: [SPARK-9241] [SQL] [WIP] Supporting multiple D...
Github user rxin commented on the pull request: https://github.com/apache/spark/pull/9280#issuecomment-154234680 @hvanhovell I think the approach we want to take is either aggregate expansion (#9406) or joins, rather than this one. Do you mind closing this one? (We already have a record of it in JIRA in case we need to reference it.)
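To make the join alternative concrete, here is a minimal sketch of that rewrite, assuming the `employee` table from the PR description below has been registered; it is illustrative only and not the approach implemented in #9406:

```scala
// Rewrite two DISTINCT aggregates as a join of two single-DISTINCT
// aggregations, so each aggregation stays on the single-distinct path.
val perGender = sqlContext.sql(
  """select department_id, count(distinct gender) as c1
    |from employee group by department_id""".stripMargin)
val perEducation = sqlContext.sql(
  """select department_id, count(distinct education_level) as c2
    |from employee group by department_id""".stripMargin)
perGender.join(perEducation, "department_id").show()
```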
[GitHub] spark pull request: [SPARK-9241] [SQL] [WIP] Supporting multiple D...
Github user SparkQA commented on the pull request: https://github.com/apache/spark/pull/9280#issuecomment-151529538

**[Test build #44430 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/44430/consoleFull)** for PR 9280 at commit [`bfbf829`](https://github.com/apache/spark/commit/bfbf829ad1e617053993423a11a76a4a85e8467b).
* This patch passes all tests.
* This patch merges cleanly.
* This patch adds the following public classes _(experimental)_:
  * `case class DistinctAggregateFallback(function: AggregateFunction2) extends DeclarativeAggregate`
  * `case class ReduceSetUsingImperativeAggregate(left: Expression, right: ImperativeAggregate)`
  * `case class ReduceSetUsingDeclarativeAggregate(left: Expression, right: DeclarativeAggregate)`
  * `case class DropAnyNull(child: Expression) extends UnaryExpression`
[GitHub] spark pull request: [SPARK-9241] [SQL] [WIP] Supporting multiple D...
Github user AmplabJenkins commented on the pull request: https://github.com/apache/spark/pull/9280#issuecomment-151529768 Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/44430/
[GitHub] spark pull request: [SPARK-9241] [SQL] [WIP] Supporting multiple D...
Github user AmplabJenkins commented on the pull request: https://github.com/apache/spark/pull/9280#issuecomment-151529767 Merged build finished. Test PASSed.
[GitHub] spark pull request: [SPARK-9241] [SQL] [WIP] Supporting multiple D...
Github user AmplabJenkins commented on the pull request: https://github.com/apache/spark/pull/9280#issuecomment-151481687 Merged build started.
[GitHub] spark pull request: [SPARK-9241] [SQL] [WIP] Supporting multiple D...
Github user AmplabJenkins commented on the pull request: https://github.com/apache/spark/pull/9280#issuecomment-151481604 Merged build triggered.
[GitHub] spark pull request: [SPARK-9241] [SQL] [WIP] Supporting multiple D...
Github user SparkQA commented on the pull request: https://github.com/apache/spark/pull/9280#issuecomment-151483407 **[Test build #44430 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/44430/consoleFull)** for PR 9280 at commit [`bfbf829`](https://github.com/apache/spark/commit/bfbf829ad1e617053993423a11a76a4a85e8467b).
[GitHub] spark pull request: [SPARK-9241] [SQL] [WIP] Supporting multiple D...
Github user hvanhovell commented on a diff in the pull request: https://github.com/apache/spark/pull/9280#discussion_r43116722

--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/distinctFallback.scala ---
@@ -0,0 +1,173 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.catalyst.expressions.aggregate
+
+import org.apache.spark.sql.catalyst.InternalRow
+import org.apache.spark.sql.catalyst.expressions._
+import org.apache.spark.sql.catalyst.expressions.codegen.{GeneratedExpressionCode, CodeGenContext, CodegenFallback}
+import org.apache.spark.sql.types.{AbstractDataType, DataType}
+import org.apache.spark.util.collection.OpenHashSet
+
+/**
+ * Fallback operator for distinct operators. This will be used when a user issues multiple
+ * different distinct expressions in a query.
+ *
+ * The operator uses the OpenHashSetUDT for de-duplicating values. It is, as a result, not possible
+ * to use UnsafeRow based aggregation.
+ */
+case class DistinctAggregateFallback(function: AggregateFunction2) extends DeclarativeAggregate {
+  override def inputTypes: Seq[AbstractDataType] = function.inputTypes
+  override def nullable: Boolean = function.nullable
+  override def dataType: DataType = function.dataType
+  override def children: Seq[Expression] = Seq(function)
+
+  private[this] val input = function.children match {
+    case child :: Nil => child
+    case children => CreateStruct(children) // TODO can we test this?
+  }
+  private[this] val items = AttributeReference("itemSet", new OpenHashSetUDT(input.dataType))()
+
+  override def aggBufferAttributes: Seq[AttributeReference] = Seq(items)
+  override val initialValues: Seq[Expression] = Seq(NewSet(input.dataType))
+  override val updateExpressions: Seq[Expression] = Seq(AddItemToSet(input, items))
+  override val mergeExpressions: Seq[Expression] = Seq(CombineSets(items.left, items.right))
+  override val evaluateExpression: Expression = function match {
+    case f: Count => CountSet(items)
+    case f: DeclarativeAggregate => ReduceSetUsingDeclarativeAggregate(items, f)
+    case f: ImperativeAggregate => ReduceSetUsingImperativeAggregate(items, f)
+  }
+}
+
+case class ReduceSetUsingImperativeAggregate(left: Expression, right: ImperativeAggregate)
+  extends BinaryExpression with CodegenFallback {
+
+  override def dataType: DataType = right.dataType
+
+  private[this] val single = right.children.size == 1
+
+  // TODO can we assume that the offsets are 0 when we haven't touched them yet?
+  private[this] val function = right
+    .withNewInputAggBufferOffset(0)
+    .withNewMutableAggBufferOffset(0)
+
+  @transient private[this] lazy val buffer =
+    new SpecificMutableRow(right.aggBufferAttributes.map(_.dataType))
+
+  @transient private[this] lazy val singleValueInput = new GenericMutableRow(1)
+
+  override def eval(input: InternalRow): Any = {
+    val result = left.eval(input).asInstanceOf[OpenHashSet[Any]]
+    if (result != null) {
+      right.initialize(buffer)
+      val iterator = result.iterator
+      if (single) {
+        while (iterator.hasNext) {
+          singleValueInput.update(0, iterator.next())
+          function.update(buffer, singleValueInput)
+        }
+      } else {
+        while (iterator.hasNext) {
+          function.update(buffer, iterator.next().asInstanceOf[InternalRow])
+        }
+      }
+      function.eval(buffer)
+    } else null
+  }
+}
+
+case class ReduceSetUsingDeclarativeAggregate(left: Expression, right: DeclarativeAggregate)
+  extends Expression with CodegenFallback {
+  override def children: Seq[Expression] = Seq(left)
+  override def nullable: Boolean = right.nullable
+  override def dataType: DataType = right.dataType
+
+  private[this] val single = right.c
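The life cycle the diff encodes declaratively can be summarized with a plain-Scala sketch (no Spark internals; all names here are illustrative): distinct inputs are collected into a set during update and merge, and the wrapped aggregate only runs over that set at evaluation time.

```scala
import scala.collection.mutable

// Conceptual sketch of the fallback's buffer life cycle, mirroring
// initialValues / updateExpressions / mergeExpressions / evaluateExpression.
object DistinctFallbackSketch {
  type Buffer = mutable.HashSet[Any]

  def initial(): Buffer = mutable.HashSet.empty[Any]                        // initialValues
  def update(buf: Buffer, input: Any): Buffer = { buf += input; buf }       // updateExpressions
  def merge(left: Buffer, right: Buffer): Buffer = { left ++= right; left } // mergeExpressions
  def evaluate(buf: Buffer, reduce: Iterator[Any] => Any): Any =            // evaluateExpression
    reduce(buf.iterator)

  def main(args: Array[String]): Unit = {
    // count(distinct gender) over two partial buffers that are later merged.
    val b1 = Seq("M", "F", "M").foldLeft(initial())(update)
    val b2 = Seq("F", "F").foldLeft(initial())(update)
    println(evaluate(merge(b1, b2), _.size)) // 2
  }
}
```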
[GitHub] spark pull request: [SPARK-9241] [SQL] [WIP] Supporting multiple D...
Github user hvanhovell commented on a diff in the pull request: https://github.com/apache/spark/pull/9280#discussion_r43067869

--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/distinctFallback.scala --- (same diff context as quoted above)
[GitHub] spark pull request: [SPARK-9241] [SQL] [WIP] Supporting multiple D...
Github user AmplabJenkins commented on the pull request: https://github.com/apache/spark/pull/9280#issuecomment-151306547 Merged build finished. Test PASSed.
[GitHub] spark pull request: [SPARK-9241] [SQL] [WIP] Supporting multiple D...
Github user SparkQA commented on the pull request: https://github.com/apache/spark/pull/9280#issuecomment-151306414

**[Test build #44376 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/44376/consoleFull)** for PR 9280 at commit [`b76b83b`](https://github.com/apache/spark/commit/b76b83b3b94fa585f8d8acae16a088f5bd2002ee).
* This patch passes all tests.
* This patch merges cleanly.
* This patch adds the following public classes _(experimental)_:
  * `case class DistinctAggregateFallback(function: AggregateFunction2) extends DeclarativeAggregate`
  * `case class ReduceSetUsingImperativeAggregate(left: Expression, right: ImperativeAggregate)`
  * `case class ReduceSetUsingDeclarativeAggregate(left: Expression, right: DeclarativeAggregate)`
  * `case class DropAnyNull(child: Expression) extends UnaryExpression`
[GitHub] spark pull request: [SPARK-9241] [SQL] [WIP] Supporting multiple D...
Github user AmplabJenkins commented on the pull request: https://github.com/apache/spark/pull/9280#issuecomment-151306549 Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/44376/
[GitHub] spark pull request: [SPARK-9241] [SQL] [WIP] Supporting multiple D...
Github user SparkQA commented on the pull request: https://github.com/apache/spark/pull/9280#issuecomment-151282269 **[Test build #44376 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/44376/consoleFull)** for PR 9280 at commit [`b76b83b`](https://github.com/apache/spark/commit/b76b83b3b94fa585f8d8acae16a088f5bd2002ee).
[GitHub] spark pull request: [SPARK-9241] [SQL] [WIP] Supporting multiple D...
Github user AmplabJenkins commented on the pull request: https://github.com/apache/spark/pull/9280#issuecomment-151281418 Merged build started.
[GitHub] spark pull request: [SPARK-9241] [SQL] [WIP] Supporting multiple D...
Github user AmplabJenkins commented on the pull request: https://github.com/apache/spark/pull/9280#issuecomment-151281398 Merged build triggered.
[GitHub] spark pull request: [SPARK-9241] [SQL] [WIP] Supporting multiple D...
Github user AmplabJenkins commented on the pull request: https://github.com/apache/spark/pull/9280#issuecomment-151243208 Merged build finished. Test FAILed.
[GitHub] spark pull request: [SPARK-9241] [SQL] [WIP] Supporting multiple D...
Github user AmplabJenkins commented on the pull request: https://github.com/apache/spark/pull/9280#issuecomment-151243209 Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/44363/
[GitHub] spark pull request: [SPARK-9241] [SQL] [WIP] Supporting multiple D...
Github user SparkQA commented on the pull request: https://github.com/apache/spark/pull/9280#issuecomment-151243088

**[Test build #44363 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/44363/consoleFull)** for PR 9280 at commit [`3bd6db5`](https://github.com/apache/spark/commit/3bd6db5390dee044ab4673e38329f584b0436a66).
* This patch **fails Spark unit tests**.
* This patch merges cleanly.
* This patch adds the following public classes _(experimental)_:
  * `case class DistinctAggregateFallback(function: AggregateFunction2) extends DeclarativeAggregate`
  * `case class ReduceSetUsingImperativeAggregate(left: Expression, right: ImperativeAggregate)`
  * `case class ReduceSetUsingDeclarativeAggregate(left: Expression, right: DeclarativeAggregate)`
[GitHub] spark pull request: [SPARK-9241] [SQL] [WIP] Supporting multiple D...
Github user SparkQA commented on the pull request: https://github.com/apache/spark/pull/9280#issuecomment-151205312 **[Test build #44363 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/44363/consoleFull)** for PR 9280 at commit [`3bd6db5`](https://github.com/apache/spark/commit/3bd6db5390dee044ab4673e38329f584b0436a66).
[GitHub] spark pull request: [SPARK-9241] [SQL] [WIP] Supporting multiple D...
Github user AmplabJenkins commented on the pull request: https://github.com/apache/spark/pull/9280#issuecomment-151203859 Merged build started.
[GitHub] spark pull request: [SPARK-9241] [SQL] [WIP] Supporting multiple D...
Github user AmplabJenkins commented on the pull request: https://github.com/apache/spark/pull/9280#issuecomment-151203811 Merged build triggered.
[GitHub] spark pull request: [SPARK-9241] [SQL] [WIP] Supporting multiple D...
Github user JoshRosen commented on the pull request: https://github.com/apache/spark/pull/9280#issuecomment-151203645 Jenkins, this is ok to test.
[GitHub] spark pull request: [SPARK-9241] [SQL] [WIP] Supporting multiple D...
Github user AmplabJenkins commented on the pull request: https://github.com/apache/spark/pull/9280#issuecomment-151174372 Can one of the admins verify this patch?
[GitHub] spark pull request: [SPARK-9241] [SQL] [WIP] Supporting multiple D...
GitHub user hvanhovell opened a pull request:

https://github.com/apache/spark/pull/9280

[SPARK-9241] [SQL] [WIP] Supporting multiple DISTINCT columns

This PR adds support for multiple distinct columns to the new aggregation code path. The implementation uses the ```OpenHashSet``` class and set expressions. As a result we can only use the slower sort-based aggregation code path, which also means the code will probably be slower than the old hash aggregation.

The PR is currently in the proof-of-concept phase, and I have submitted it to get some feedback and see if I am headed in the right direction. I'll add more tests if this is considered to be the way to go.

An example using the new code path:

    val df = sqlContext
      .range(1 << 25)
      .select(
        $"id".as("employee_id"),
        (rand(6321782L) * 4 + 1).cast("int").as("department_id"),
        when(rand(981293L) >= 0.5, "M").otherwise("F").as("gender"),
        (rand(7123L) * 3 + 1).cast("int").as("education_level")
      )
    df.registerTempTable("employee")

    // Regular query.
    sql("""
      select department_id as d,
             count(distinct gender, education_level) as c0,
             count(distinct gender) as c1,
             count(distinct education_level) as c2
      from employee
      group by department_id
      """).show()

cc @yhuai

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/hvanhovell/spark SPARK-9241

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/9280.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #9280

commit 256e1f6902b8adbc304c6e287d7cfdf2ef97b12b
Author: Herman van Hovell
Date: 2015-10-26T12:46:33Z

    Created distinct fallback mechanism.

commit 6a87384de8d934327ead72daf7210e29be8687b6
Author: Herman van Hovell
Date: 2015-10-26T13:35:01Z

    Added fallback distinct creation to aggregate conversion.

commit 3bd6db5390dee044ab4673e38329f584b0436a66
Author: Herman van Hovell
Date: 2015-10-26T15:07:22Z

    Fix style. Fix CG for OpenHashSetUDT. Fix bug.
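For comparison, the same three distinct aggregates can also be expressed against `df` through the DataFrame API; a minimal sketch, assuming the `df` defined above and `import sqlContext.implicits._` for the `$` syntax:

```scala
import org.apache.spark.sql.functions._

// Three different DISTINCT aggregates over one grouping -- the multi-distinct
// case this PR targets in the new aggregation path.
df.groupBy("department_id")
  .agg(
    countDistinct($"gender", $"education_level").as("c0"),
    countDistinct($"gender").as("c1"),
    countDistinct($"education_level").as("c2"))
  .show()
```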