I can confirm that the patch fixed my issue. :-)
Cheers,
Stephanie
Awesome, thanks for testing!
On Thu, Jun 5, 2014 at 1:30 PM, dataginjaninja <rickett.stepha...@gmail.com> wrote:
> I can confirm that the patch fixed my issue. :-)
>
> Cheers,
> Stephanie
Can anyone verify which rc includes [SPARK-1360] Add Timestamp Support for SQL (#275,
https://github.com/apache/spark/pull/275)? I am running rc3, but I am receiving
errors with TIMESTAMP as a datatype in my Hive tables when trying to use them in
pyspark.

The error I get:

14/05/29 15:44:47
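
A minimal sketch of the failing pattern from the pyspark shell is below. The table
name events and the TIMESTAMP column ts are hypothetical stand-ins, and this assumes
the 1.0-era HiveContext API, where Hive queries are issued through hql():

    # Minimal repro sketch; `events` and `ts` are hypothetical names.
    # Run inside the pyspark shell, where `sc` is already defined.
    from pyspark.sql import HiveContext

    hc = HiveContext(sc)
    # Selecting a TIMESTAMP column through the Hive query path is what
    # surfaced the analysis error tracked as SPARK-1964.
    rows = hc.hql("SELECT ts FROM events LIMIT 5").collect()
    for row in rows:
        print(row)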
I can confirm that the commit is included in the 1.0.0 release candidates
(it was committed before branch-1.0 split off from master), but I can't
confirm that it works in PySpark. Generally the Python and Java interfaces
lag a little behind the Scala interface to Spark, but we're working to keep
them in sync.
Thanks for reporting this!

https://issues.apache.org/jira/browse/SPARK-1964
https://github.com/apache/spark/pull/913

If you could test out that PR and see if it fixes your problems I'd really
appreciate it!

Michael

On Thu, May 29, 2014 at 9:09 AM, Andrew Ash <and...@andrewash.com> wrote:

> I
Yes, I get the same error:

scala> val hc = new org.apache.spark.sql.hive.HiveContext(sc)
14/05/29 16:53:40 INFO deprecation: mapred.input.dir.recursive is
deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
14/05/29 16:53:40 INFO deprecation: mapred.max.split.size is
Michael,
Will I have to rebuild after adding the change? Thanks
Darn, I was hoping just to sneak it in that file. I am not the only person
working on the cluster; if I rebuild it, that means I have to redeploy
everything to all the nodes as well. So I cannot do that ... today. If
someone else doesn't beat me to it, I can rebuild at another time.
Yes, you'll need to download the code from that PR and reassemble Spark
(sbt/sbt assembly); see the sketch after this message.

On Thu, May 29, 2014 at 10:02 AM, dataginjaninja <rickett.stepha...@gmail.com> wrote:

> Michael,
>
> Will I have to rebuild after adding the change? Thanks
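A sketch of those rebuild steps, assuming a git checkout of apache/spark with
GitHub as the origin remote; the local branch name pr-913 is arbitrary, and
pull/913/head is GitHub's read-only ref for the pull request:

    # Fetch the PR under review into an arbitrary local branch.
    git fetch origin pull/913/head:pr-913
    git checkout pr-913
    # Rebuild the Spark assembly jar, as suggested above.
    sbt/sbt assembly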
You should be able to get away with only doing it locally. This bug is
happening during analysis, which only occurs on the driver.

On Thu, May 29, 2014 at 10:17 AM, dataginjaninja <rickett.stepha...@gmail.com> wrote:

> Darn, I was hoping just to sneak it in that file. I am not the only person