[jira] [Comment Edited] (SPARK-8332) NoSuchMethodError: com.fasterxml.jackson.module.scala.deser.BigDecimalDeserializer
[ https://issues.apache.org/jira/browse/SPARK-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14620975#comment-14620975 ]

Kevin Tham edited comment on SPARK-8332 at 7/9/15 6:19 PM:
---

I had no issue having play-json_2.11:2.4.0 (which depends on Jackson 2.5.3) in my SBT build with Spark 1.3. After upgrading to Spark 1.4, I can no longer do an sc.parallelize(someCollection) where the collection is a Seq[(TreeMap[String, Double], Double)], and I can reproduce the error Jonathan Kelly saw:

{code}
Exception in thread "main" com.fasterxml.jackson.databind.JsonMappingException: Could not find creator property with name 'id' (in class org.apache.spark.rdd.RDDOperationScope)
{code}

I'll help look into this more when I have time; I'm interested to see which commit caused this and the actual cause of the error. This is causing us to stick with Spark 1.3 for now. I hope we can prioritize this JIRA for the next 1.4.x release.

was (Author: ktham):
I had no issue having play-json_2.11:2.4.0 (which depends on Jackson 2.5.3) in my SBT build with Spark 1.3. After upgrading to Spark 1.4, I can no longer do an sc.parallelize(someCollection) where the collection is a Seq[(TreeMap[String, Double], Double)], and I can reproduce the error Jonathan Kelly saw (Exception in thread "main" com.fasterxml.jackson.databind.JsonMappingException: Could not find creator property with name 'id' (in class org.apache.spark.rdd.RDDOperationScope)). I would help when I have time to figure out why, but I'm interested to see which commit caused this and the actual cause of the error. This is causing us to stick with Spark 1.3 for now. I hope we can prioritize this JIRA for the next 1.4.x release.

NoSuchMethodError: com.fasterxml.jackson.module.scala.deser.BigDecimalDeserializer
--

Key: SPARK-8332
URL: https://issues.apache.org/jira/browse/SPARK-8332
Project: Spark
Issue Type: Bug
Components: Spark Core
Affects Versions: 1.4.0
Environment: spark 1.4, hadoop 2.3.0-cdh5.0.0
Reporter: Tao Li
Priority: Critical
Labels: 1.4.0, NoSuchMethodError, com.fasterxml.jackson

I compiled the new Spark 1.4.0 version, but when I run a simple WordCount demo it throws a NoSuchMethodError:
{code}
java.lang.NoSuchMethodError: com.fasterxml.jackson.module.scala.deser.BigDecimalDeserializer
{code}
I found out that the default fasterxml.jackson.version is 2.4.4. Is something wrong, or is there a conflict with the Jackson version? Or does some project Maven dependency possibly pull in the wrong version of Jackson?

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
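[Editor's note] Version clashes like the one above are commonly mitigated by pinning every Jackson artifact to a single version in the build. A minimal SBT sketch of that workaround, not an official fix; the choice of 2.4.4 (to match the Spark 1.4 default mentioned above) is an assumption and depends on which Spark version your build targets:

```scala
// build.sbt -- hypothetical workaround sketch: force all Jackson modules to
// one version so jackson-databind and jackson-module-scala stay
// binary-compatible on the classpath.
dependencyOverrides ++= Set(
  "com.fasterxml.jackson.core"    %  "jackson-core"          % "2.4.4",
  "com.fasterxml.jackson.core"    %  "jackson-databind"      % "2.4.4",
  "com.fasterxml.jackson.module"  %% "jackson-module-scala"  % "2.4.4"
)
```

A NoSuchMethodError at runtime almost always means two Jackson modules of different versions ended up on the classpath together; `sbt dependencyTree` (or `mvn dependency:tree`) shows which transitive dependency pulled in the mismatched version.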
[jira] [Commented] (SPARK-8332) NoSuchMethodError: com.fasterxml.jackson.module.scala.deser.BigDecimalDeserializer
[ https://issues.apache.org/jira/browse/SPARK-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14620975#comment-14620975 ]

Kevin Tham commented on SPARK-8332:
---

I had no issue having play-json_2.11:2.4.0 (which depends on Jackson 2.5.3) in my SBT build with Spark 1.3. After upgrading to Spark 1.4, I can no longer do an sc.parallelize(someCollection) where the collection is a Seq[(TreeMap[String, Double], Double)], and I can reproduce the error Jonathan Kelly saw:

{code}
Exception in thread "main" com.fasterxml.jackson.databind.JsonMappingException: Could not find creator property with name 'id' (in class org.apache.spark.rdd.RDDOperationScope)
{code}

I would help when I have time to figure out why, but I'm interested to see which commit caused this and the actual cause of the error. This is causing us to stick with Spark 1.3 for now.
[jira] [Comment Edited] (SPARK-8332) NoSuchMethodError: com.fasterxml.jackson.module.scala.deser.BigDecimalDeserializer
[ https://issues.apache.org/jira/browse/SPARK-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14620975#comment-14620975 ]

Kevin Tham edited comment on SPARK-8332 at 7/9/15 6:16 PM:
---

I had no issue having play-json_2.11:2.4.0 (which depends on Jackson 2.5.3) in my SBT build with Spark 1.3. After upgrading to Spark 1.4, I can no longer do an sc.parallelize(someCollection) where the collection is a Seq[(TreeMap[String, Double], Double)], and I can reproduce the error Jonathan Kelly saw:

{code}
Exception in thread "main" com.fasterxml.jackson.databind.JsonMappingException: Could not find creator property with name 'id' (in class org.apache.spark.rdd.RDDOperationScope)
{code}

I would help when I have time to figure out why, but I'm interested to see which commit caused this and the actual cause of the error. This is causing us to stick with Spark 1.3 for now. I hope we can prioritize this JIRA for the next 1.4.x release.

was (Author: ktham):
I had no issue having play-json_2.11:2.4.0 (which depends on Jackson 2.5.3) in my SBT build with Spark 1.3. After upgrading to Spark 1.4, I can no longer do an sc.parallelize(someCollection) where the collection is a Seq[(TreeMap[String, Double], Double)], and I can reproduce the error Jonathan Kelly saw (Exception in thread "main" com.fasterxml.jackson.databind.JsonMappingException: Could not find creator property with name 'id' (in class org.apache.spark.rdd.RDDOperationScope)). I would help when I have time to figure out why, but I'm interested to see which commit caused this and the actual cause of the error. This is causing us to stick with Spark 1.3 for now.
[jira] [Created] (SPARK-2006) Spark Sql example throws ClassCastException: Long -> Int
Kevin Tham created SPARK-2006:
---

Summary: Spark Sql example throws ClassCastException: Long -> Int
Key: SPARK-2006
URL: https://issues.apache.org/jira/browse/SPARK-2006
Project: Spark
Issue Type: Bug
Components: Examples
Affects Versions: 1.0.0
Reporter: Kevin Tham
Priority: Minor

getInt() is being called on a Spark SQL COUNT query whose output datatype is Long. Casting a Long to an Int is an illegal operation, and the example should use getLong() instead.

--
This message was sent by Atlassian JIRA (v6.2#6252)
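[Editor's note] The failure mode in this ticket is plain JVM boxing, independent of Spark: a COUNT(*) result comes back boxed as a java.lang.Long, and casting that box to Integer (which is what getInt() amounts to) fails at runtime. A minimal sketch using a bare Object as a stand-in for the Row accessor (an assumption for illustration; the real fix is calling getLong() on the Row):

```java
public class CountResultDemo {
    public static void main(String[] args) {
        // Spark SQL returns COUNT(*) as a Long; model the boxed value directly.
        Object countResult = Long.valueOf(42L);

        // The buggy example effectively did (Integer) countResult, i.e. getInt():
        try {
            Integer wrong = (Integer) countResult;  // cross-type cast of a boxed Long
            System.out.println("unexpected: " + wrong);
        } catch (ClassCastException e) {
            System.out.println("ClassCastException as expected");
        }

        // The fix: read the value as a Long, i.e. use getLong() on the Row.
        long n = (Long) countResult;
        System.out.println("count = " + n);  // prints "count = 42"
    }
}
```

Note that a widening primitive conversion (int to long) is legal, but a reference cast between the box types Long and Integer never is; that asymmetry is why the example compiled fine and only failed when run.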
[jira] [Commented] (SPARK-2006) Spark Sql example throws ClassCastException: Long -> Int
[ https://issues.apache.org/jira/browse/SPARK-2006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14017162#comment-14017162 ]

Kevin Tham commented on SPARK-2006:
---

Actually, there was an earlier pull request that just got merged, https://github.com/apache/spark/pull/949, so this is no longer an issue.
[jira] [Issue Comment Deleted] (SPARK-1438) Update RDD.sample() API to make seed parameter optional
[ https://issues.apache.org/jira/browse/SPARK-1438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kevin Tham updated SPARK-1438:
---

Comment: was deleted (was: I can work on this (I'd like to try to submit my first Spark contribution :-) ))

Update RDD.sample() API to make seed parameter optional
---

Key: SPARK-1438
URL: https://issues.apache.org/jira/browse/SPARK-1438
Project: Spark
Issue Type: Improvement
Components: Spark Core
Reporter: Matei Zaharia
Priority: Blocker
Labels: Starter
Fix For: 1.0.0

When a seed is not given, it should pick one based on Math.random(). This needs to be done in Java and Python as well.
[jira] [Commented] (SPARK-1438) Update RDD.sample() API to make seed parameter optional
[ https://issues.apache.org/jira/browse/SPARK-1438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13963291#comment-13963291 ]

Kevin Tham commented on SPARK-1438:
---

I can work on this (I'd like to try to submit my first Spark contribution :-) )
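[Editor's note] Since Java has no default parameter values, the "optional seed" change this ticket asks to mirror in the Java API would surface as an overload that picks a seed when none is given. A hedged sketch of that pattern; the method name, element type, and sampling logic are simplified stand-ins for the real RDD.sample signature, not Spark's implementation:

```java
import java.util.Arrays;
import java.util.Random;

public class SampleDemo {
    // Overload without a seed: choose one at random, mirroring the
    // Scala default-parameter version the ticket proposes.
    static int[] sample(int[] data, double fraction) {
        return sample(data, fraction, new Random().nextLong());
    }

    // Core implementation: deterministic for a fixed seed, so results
    // are reproducible when the caller does supply one.
    static int[] sample(int[] data, double fraction, long seed) {
        Random rng = new Random(seed);
        return Arrays.stream(data)
                .filter(x -> rng.nextDouble() < fraction)
                .toArray();
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
        // Same seed -> same sample; no seed -> one is picked for you.
        System.out.println(Arrays.equals(
                sample(data, 0.5, 42L), sample(data, 0.5, 42L)));  // true
        System.out.println(sample(data, 0.5).length <= data.length);  // true
    }
}
```

The overload pair keeps the seeded version as the single source of truth, which is the usual way a Scala default parameter is translated for Java callers.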