Re: SPARK-5364
Thank you, Reynold.

Thanking you.
With regards,
Sree

On Sunday, April 12, 2015 11:18 AM, Reynold Xin r...@databricks.com wrote:
> I closed it. Thanks.

On Sun, Apr 12, 2015 at 11:08 AM, Sree V sree_at_ch...@yahoo.com.invalid wrote:
> Hi,
>
> I was browsing through the JIRAs and found that this one can be closed. If anyone has edit permissions on the Spark JIRA, please close it:
> https://issues.apache.org/jira/browse/SPARK-5364
>
> - It is still Open
> - Its pull request is already merged
> - Its parent and grandparent are Resolved
>
> Thanking you.
> With regards,
> Sree Vaddi
Re: [VOTE] Release Apache Spark 1.3.1 (RC3)
+1

- builds: check
- tests: check
- installs and sample run: check

Thanking you.
With regards,
Sree

On Friday, April 10, 2015 11:07 PM, Patrick Wendell pwend...@gmail.com wrote:
> Please vote on releasing the following candidate as Apache Spark version 1.3.1!
>
> The tag to be voted on is v1.3.1-rc2 (commit 3e83913):
> https://git-wip-us.apache.org/repos/asf?p=spark.git;a=commit;h=3e8391327ba586eaf54447043bd526d919043a44
>
> The list of fixes present in this release can be found at:
> http://bit.ly/1C2nVPY
>
> The release files, including signatures, digests, etc. can be found at:
> http://people.apache.org/~pwendell/spark-1.3.1-rc3/
>
> Release artifacts are signed with the following key:
> https://people.apache.org/keys/committer/pwendell.asc
>
> The staging repository for this release can be found at:
> https://repository.apache.org/content/repositories/orgapachespark-1088/
>
> The documentation corresponding to this release can be found at:
> http://people.apache.org/~pwendell/spark-1.3.1-rc3-docs/
>
> The patches on top of RC2 are:
> [SPARK-6851] [SQL] Create new instance for each converted parquet relation
> [SPARK-5969] [PySpark] Fix descending pyspark.rdd.sortByKey.
> [SPARK-6343] Doc driver-worker network reqs
> [SPARK-6767] [SQL] Fixed Query DSL error in spark sql Readme
> [SPARK-6781] [SQL] use sqlContext in python shell
> [SPARK-6753] Clone SparkConf in ShuffleSuite tests
> [SPARK-6506] [PySpark] Do not try to retrieve SPARK_HOME when not needed...
>
> Please vote on releasing this package as Apache Spark 1.3.1!
>
> The vote is open until Tuesday, April 14, at 07:00 UTC and passes if a majority of at least 3 +1 PMC votes are cast.
>
> [ ] +1 Release this package as Apache Spark 1.3.1
> [ ] -1 Do not release this package because ...
>
> To learn more about Apache Spark, please see http://spark.apache.org/

---
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org
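The "builds / tests / installs and sample run" checks above can be sketched roughly as the following shell session. This is a generic RC-verification outline, not the voter's actual commands; the tarball filename and the choice of SparkPi as the sample job are assumptions, and the downloads obviously require network access to the staging locations named in the vote email.

```shell
# Fetch the RC tarball and its detached signature (filename assumed).
wget http://people.apache.org/~pwendell/spark-1.3.1-rc3/spark-1.3.1.tgz
wget http://people.apache.org/~pwendell/spark-1.3.1-rc3/spark-1.3.1.tgz.asc

# Import the release manager's published key and verify the signature.
curl -s https://people.apache.org/keys/committer/pwendell.asc | gpg --import
gpg --verify spark-1.3.1.tgz.asc spark-1.3.1.tgz

# "builds - check": unpack and compile.
tar xzf spark-1.3.1.tgz && cd spark-1.3.1
mvn -DskipTests clean package

# "tests - check": run the test suite.
mvn test

# "installs and sample run - check": run a bundled example job locally.
./bin/run-example SparkPi 10
```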
Spark SQL reading Hive partitioned tables?
Hey,

I was trying out Spark SQL using the HiveContext, doing a SELECT on a partitioned table with lots of partitions (16,000+). It took over 6 minutes before the job even started. It looks like it was querying the Hive metastore and got a good chunk of data back, which I'm guessing is info on the partitions. Running the same query using Hive takes 45 seconds for the entire job.

I know Spark SQL doesn't support all the Hive optimizations. Is this a known limitation currently?

Thanks,
Tom
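For reference, the kind of job described above looks roughly like this against the Spark 1.3-era HiveContext API. This is a sketch, not the reporter's actual code: the table name `events` and partition column `dt` are made up, and it assumes a cluster with a Hive metastore configured, so it cannot run standalone.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

object PartitionedScan {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("partitioned-scan"))
    val hive = new HiveContext(sc)

    // Before any tasks launch, Spark SQL asks the Hive metastore for the
    // table's metadata. In this era that included metadata for every
    // partition, so a 16,000+-partition table means a long metastore round
    // trip before the job even starts -- consistent with the delay above.
    val df = hive.sql("SELECT * FROM events WHERE dt = '2015-04-10'")
    df.show()
  }
}
```

If memory serves, later releases moved partition pruning into the metastore call itself (so a WHERE clause on the partition column avoids fetching all partition metadata), but in the version discussed here the full fetch happened up front regardless.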
Re: [VOTE] Release Apache Spark 1.3.1 (RC3)
+1 (non-binding)

Tested the 2.6 build with standalone and YARN (no external shuffle service this time, although it does come up).

On Fri, Apr 10, 2015 at 11:05 PM, Patrick Wendell pwend...@gmail.com wrote:
> Please vote on releasing the following candidate as Apache Spark version 1.3.1!
> [...]
> To learn more about Apache Spark, please see http://spark.apache.org/

--
Marcelo