[jira] [Commented] (SPARK-1199) Type mismatch in Spark shell when using case class defined in shell

2016-02-24 Thread Oleksiy Dyagilev (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-1199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15163000#comment-15163000
 ] 

Oleksiy Dyagilev commented on SPARK-1199:
-----------------------------------------

Yes, I did. It doesn't help; the inner class still doesn't have a no-arg 
constructor visible via reflection.

> Type mismatch in Spark shell when using case class defined in shell
> -------------------------------------------------------------------
>
>                 Key: SPARK-1199
>                 URL: https://issues.apache.org/jira/browse/SPARK-1199
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 0.9.0
>            Reporter: Andrew Kerr
>            Assignee: Prashant Sharma
>            Priority: Blocker
>             Fix For: 1.1.0
>
>
> *NOTE: This issue was fixed in 1.0.1, but the fix was reverted in Spark 1.0.2 
> pending further testing. The final fix will be in Spark 1.1.0.*
> Define a class in the shell:
> {code}
> case class TestClass(a:String)
> {code}
> and an RDD
> {code}
> val data = sc.parallelize(Seq("a")).map(TestClass(_))
> {code}
> define a function on it and map over the RDD
> {code}
> def itemFunc(a:TestClass):TestClass = a
> data.map(itemFunc)
> {code}
> Error:
> {code}
> <console>:19: error: type mismatch;
>  found   : TestClass => TestClass
>  required: TestClass => ?
>   data.map(itemFunc)
> {code}
> Similarly with a mapPartitions:
> {code}
> def partitionFunc(a:Iterator[TestClass]):Iterator[TestClass] = a
> data.mapPartitions(partitionFunc)
> {code}
> {code}
> <console>:19: error: type mismatch;
>  found   : Iterator[TestClass] => Iterator[TestClass]
>  required: Iterator[TestClass] => Iterator[?]
> Error occurred in an application involving default arguments.
>   data.mapPartitions(partitionFunc)
> {code}
> The behavior is the same whether in local mode or on a cluster.
> This isn't specific to RDDs. A Scala collection in the Spark shell has the 
> same problem.
> {code}
> scala> Seq(TestClass("foo")).map(itemFunc)
> <console>:15: error: type mismatch;
>  found   : TestClass => TestClass
>  required: TestClass => ?
>   Seq(TestClass("foo")).map(itemFunc)
> ^
> {code}
> When run in the Scala console (not the Spark shell) there are no type 
> mismatch errors.

[jira] [Commented] (SPARK-1199) Type mismatch in Spark shell when using case class defined in shell

2016-02-23 Thread Prashant Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-1199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15160147#comment-15160147
 ] 

Prashant Sharma commented on SPARK-1199:
----------------------------------------

Did you try the :paste option?


[jira] [Commented] (SPARK-1199) Type mismatch in Spark shell when using case class defined in shell

2016-02-23 Thread Michael Armbrust (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-1199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15159458#comment-15159458
 ] 

Michael Armbrust commented on SPARK-1199:
-----------------------------------------

Not that I know of.  Also, please use the spark-user list instead of JIRA for 
tech support questions :)


[jira] [Commented] (SPARK-1199) Type mismatch in Spark shell when using case class defined in shell

2016-02-23 Thread Oleksiy Dyagilev (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-1199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15159448#comment-15159448
 ] 

Oleksiy Dyagilev commented on SPARK-1199:
-----------------------------------------

Thanks, any other options? I want to be able to define classes in the REPL.


[jira] [Commented] (SPARK-1199) Type mismatch in Spark shell when using case class defined in shell

2016-02-23 Thread Michael Armbrust (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-1199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15159436#comment-15159436
 ] 

Michael Armbrust commented on SPARK-1199:
-----------------------------------------

You will have to define your case classes in a jar instead of the REPL.
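
A sketch of that route (file, package, and jar paths here are illustrative, not from this issue): compile the case class ahead of time, so it becomes a plain top-level class with an ordinary constructor.

{code}
// models/src/main/scala/models/TestClass.scala
// Compiled outside the REPL, so no synthetic wrapper is involved and the
// class has the expected constructors.
package models

case class TestClass(a: String)
{code}

Package it (e.g. with sbt package), start the shell with something like spark-shell --jars models.jar, and then import models.TestClass behaves like any other library class.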


[jira] [Commented] (SPARK-1199) Type mismatch in Spark shell when using case class defined in shell

2016-02-23 Thread Oleksiy Dyagilev (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-1199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15159432#comment-15159432
 ] 

Oleksiy Dyagilev commented on SPARK-1199:
-----------------------------------------

Michael Armbrust, in my use case I have a library that relies on having a 
default constructor, and I want to use this library in the REPL. Any 
workaround for that?


[jira] [Commented] (SPARK-1199) Type mismatch in Spark shell when using case class defined in shell

2016-02-23 Thread Michael Armbrust (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-1199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15159362#comment-15159362
 ] 

Michael Armbrust commented on SPARK-1199:
-----------------------------------------

All classes defined in the REPL are inner classes due to the way compilation 
works.  Therefore there is not going to be a no-arg constructor.  This is 
expected behavior.
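
The same effect is visible in plain Scala, without Spark (a minimal sketch; the names are made up):

{code}
class Wrapper {
  case class Inner(a: String)
}

val w = new Wrapper
val inner = new w.Inner("x")

// On the JVM the enclosing instance becomes an extra leading constructor
// parameter, so no no-arg constructor exists to be found:
inner.getClass.getConstructors.foreach(println)
// e.g. public Wrapper$Inner(Wrapper, java.lang.String)
{code}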


[jira] [Commented] (SPARK-1199) Type mismatch in Spark shell when using case class defined in shell

2016-02-23 Thread Oleksiy Dyagilev (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-1199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15159338#comment-15159338
 ] 

Oleksiy Dyagilev commented on SPARK-1199:
-----------------------------------------

I have problems declaring case classes in the shell on Spark 1.6.

This doesn't work for me:

{code}
scala> case class ABCD()
defined class ABCD

scala> new ABCD()
res33: ABCD = ABCD()

scala> classOf[ABCD].getConstructor()
java.lang.NoSuchMethodException: $iwC$$iwC$ABCD.<init>()
 at java.lang.Class.getConstructor0(Class.java:3074)
 at java.lang.Class.getConstructor(Class.java:1817)

scala> classOf[ABCD].getConstructors()
res31: Array[java.lang.reflect.Constructor[_]] = Array(public 
$iwC$$iwC$ABCD($iwC$$iwC))
{code}
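
The one constructor that does exist takes the REPL's synthetic wrapper as its parameter. For a library that insists on reflective construction, a fragile sketch is to hand it that wrapper explicitly. The $outer field name is compiler-generated and version-specific, so this is illustration, not an API:

{code}
// Borrow the wrapper from an instance we can construct normally, then drive
// the (wrapper-taking) constructor reflectively:
val sample = new ABCD()
val outer = sample.getClass.getDeclaredFields.find(_.getName == "$outer").map { f =>
  f.setAccessible(true)
  f.get(sample)
}
val fresh = outer.map(o => classOf[ABCD].getConstructors()(0).newInstance(o))
{code}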


[jira] [Commented] (SPARK-1199) Type mismatch in Spark shell when using case class defined in shell

2015-12-10 Thread Apache Spark (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-1199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15051088#comment-15051088
 ] 

Apache Spark commented on SPARK-1199:
-------------------------------------

User 'ScrapCodes' has created a pull request for this issue:
https://github.com/apache/spark/pull/1176


[jira] [Commented] (SPARK-1199) Type mismatch in Spark shell when using case class defined in shell

2015-12-10 Thread Apache Spark (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-1199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15051087#comment-15051087
 ] 

Apache Spark commented on SPARK-1199:
-------------------------------------

User 'ScrapCodes' has created a pull request for this issue:
https://github.com/apache/spark/pull/1179


[jira] [Commented] (SPARK-1199) Type mismatch in Spark shell when using case class defined in shell

2014-07-21 Thread Patrick Wendell (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-1199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14069038#comment-14069038
 ] 

Patrick Wendell commented on SPARK-1199:
----------------------------------------

Just a note: I've reverted this fix in branch-1.0 because it caused other 
issues that were worse than the original bug (SPARK-2452). This will be fixed 
in 1.1.


[jira] [Commented] (SPARK-1199) Type mismatch in Spark shell when using case class defined in shell

2014-07-03 Thread Andrea Ferretti (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-1199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14051282#comment-14051282
 ] 

Andrea Ferretti commented on SPARK-1199:
----------------------------------------

More examples are in https://issues.apache.org/jira/browse/SPARK-2330, which 
should also be a duplicate.


[jira] [Commented] (SPARK-1199) Type mismatch in Spark shell when using case class defined in shell

2014-06-24 Thread Prashant Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-1199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041789#comment-14041789
 ] 

Prashant Sharma commented on SPARK-1199:
----------------------------------------

One workaround is to use the `:paste` command of the REPL for these kinds of 
scenarios: if you use :paste and put the whole thing in at once, it works 
nicely. I am just mentioning it because I found it; we also have a slightly 
better fix in a GitHub PR.
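
Roughly (a sketch of the behaviour; prompts and output abridged):

{code}
scala> :paste
// Entering paste mode (ctrl-D to finish)

case class TestClass(a: String)
def itemFunc(a: TestClass): TestClass = a
val data = sc.parallelize(Seq("a")).map(TestClass(_))
data.map(itemFunc).collect()

// Exiting paste mode, now interpreting.
{code}

Because the definitions and their uses are compiled together into one wrapper, the types agree and the mismatch goes away.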


[jira] [Commented] (SPARK-1199) Type mismatch in Spark shell when using case class defined in shell

2014-05-28 Thread Michael Malak (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-1199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14011492#comment-14011492
 ] 

Michael Malak commented on SPARK-1199:
--------------------------------------

See also the additional test cases in 
https://issues.apache.org/jira/browse/SPARK-1836, which has now been marked as 
a duplicate.


[jira] [Commented] (SPARK-1199) Type mismatch in Spark shell when using case class defined in shell

2014-04-25 Thread Andrew Kerr (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-1199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981187#comment-13981187
 ] 

Andrew Kerr commented on SPARK-1199:
------------------------------------

I have something of a workaround:

{code}
object MyTypes {
  case class TestClass(a:Int)
}

object MyLogic {
  import MyTypes._
  def fn(b:TestClass) = TestClass(b.a * 2)
  val result = Seq(TestClass(1)).map(fn)
}

MyLogic.result
// Seq[MyTypes.TestClass] = List(TestClass(2))
{code}

Still can't access TestClass outside an object.
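
Presumably it works because the definition and every use site are compiled into the same enclosing objects, so the types agree. A further sketch along the same lines (MyUse is an illustrative name):

{code}
object MyUse {
  import MyTypes._
  // Uses that stay inside an object keep compiling:
  val doubled = MyLogic.result.map(t => TestClass(t.a * 2))
}
MyUse.doubled
// Seq[MyTypes.TestClass] = List(TestClass(4))
{code}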
