[jira] [Commented] (SPARK-1199) Type mismatch in Spark shell when using case class defined in shell

2014-07-03 Thread Andrea Ferretti (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-1199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14051282#comment-14051282
 ] 

Andrea Ferretti commented on SPARK-1199:


More examples are at https://issues.apache.org/jira/browse/SPARK-2330, which should 
also be a duplicate of this issue.

> Type mismatch in Spark shell when using case class defined in shell
> ---
>
> Key: SPARK-1199
> URL: https://issues.apache.org/jira/browse/SPARK-1199
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 0.9.0
>Reporter: Andrew Kerr
>Assignee: Prashant Sharma
>Priority: Blocker
>
> Define a class in the shell:
> {code}
> case class TestClass(a:String)
> {code}
> and an RDD:
> {code}
> val data = sc.parallelize(Seq("a")).map(TestClass(_))
> {code}
> Then define a function on it and map over the RDD:
> {code}
> def itemFunc(a:TestClass):TestClass = a
> data.map(itemFunc)
> {code}
> Error:
> {code}
> :19: error: type mismatch;
>  found   : TestClass => TestClass
>  required: TestClass => ?
>   data.map(itemFunc)
> {code}
> Similarly with mapPartitions:
> {code}
> def partitionFunc(a:Iterator[TestClass]):Iterator[TestClass] = a
> data.mapPartitions(partitionFunc)
> {code}
> {code}
> :19: error: type mismatch;
>  found   : Iterator[TestClass] => Iterator[TestClass]
>  required: Iterator[TestClass] => Iterator[?]
> Error occurred in an application involving default arguments.
>   data.mapPartitions(partitionFunc)
> {code}
> The behavior is the same whether in local mode or on a cluster.
> This isn't specific to RDDs. A Scala collection in the Spark shell has the 
> same problem.
> {code}
> scala> Seq(TestClass("foo")).map(itemFunc)
> :15: error: type mismatch;
>  found   : TestClass => TestClass
>  required: TestClass => ?
>   Seq(TestClass("foo")).map(itemFunc)
> ^
> {code}
> When run in the Scala console (not the Spark shell), there are no type 
> mismatch errors.
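> A possible workaround (an editor's sketch, not part of the original report) is to 
> enter the case class and everything that uses it in a single :paste block, so the 
> REPL compiles all of the definitions into one wrapper and the references to 
> TestClass agree:
> {code}
> scala> :paste
> // Entering paste mode (ctrl-D to finish)
>
> case class TestClass(a: String)
> def itemFunc(a: TestClass): TestClass = a
> val data = sc.parallelize(Seq("a")).map(TestClass(_))
> data.map(itemFunc).collect()
>
> // Exiting paste mode, now interpreting.
> {code}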





[jira] [Commented] (SPARK-2330) Spark shell has weird scala semantics

2014-07-03 Thread Andrea Ferretti (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-2330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14051280#comment-14051280
 ] 

Andrea Ferretti commented on SPARK-2330:


I have tried this by pulling from git and I still have the same issues. I am 
not sure why you cannot reproduce it. What do you get when you open a shell and 
paste the lines above?

In any case, it does seem to be a duplicate of the issue you linked.

> Spark shell has weird scala semantics
> -
>
> Key: SPARK-2330
> URL: https://issues.apache.org/jira/browse/SPARK-2330
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 0.9.1, 1.0.0
> Environment: Ubuntu 14.04 with spark-x.x.x-bin-hadoop2
>Reporter: Andrea Ferretti
>  Labels: scala, shell
>
> Normal Scala expressions are interpreted in a strange way in the Spark shell. 
> For instance:
> {noformat}
> case class Foo(x: Int)
> def print(f: Foo) = f.x
> val f = Foo(3)
> print(f)
> :24: error: type mismatch;
>  found   : Foo
>  required: Foo
> {noformat}
> For another example:
> {noformat}
> trait Currency
> case object EUR extends Currency
> case object USD extends Currency
> import scala.util.Random.nextInt  // not in scope by default
> def nextCurrency: Currency = nextInt(2) match {
>   case 0 => EUR
>   case _ => USD
> }
> :22: error: type mismatch;
>  found   : EUR.type
>  required: Currency
>  case 0 => EUR
> :24: error: type mismatch;
>  found   : USD.type
>  required: Currency
>  case _ => USD
> {noformat}
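> The seemingly self-contradictory "found: Foo / required: Foo" makes sense if the 
> shell wraps each pasted snippet in its own enclosing object, so the two mentions 
> of Foo are distinct classes that merely share a simple name. A plain-Scala sketch 
> of the same situation (illustrative only, not the shell's actual generated code):
> {noformat}
> object Wrapper1 { case class Foo(x: Int) }  // the Foo that print was compiled against
> object Wrapper2 { case class Foo(x: Int) }  // the Foo the call site constructs
> def print(f: Wrapper1.Foo) = f.x
> print(Wrapper2.Foo(3))
> // error: type mismatch;
> //  found   : Wrapper2.Foo
> //  required: Wrapper1.Foo
> {noformat}
> If the shell drops the wrapper prefixes when printing the error, both types appear 
> as a bare Foo, producing the message above.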





[jira] [Updated] (SPARK-2330) Spark shell has weird scala semantics

2014-06-30 Thread Andrea Ferretti (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-2330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrea Ferretti updated SPARK-2330:
---

Description: 
Normal Scala expressions are interpreted in a strange way in the Spark shell. 
For instance:

{noformat}
case class Foo(x: Int)
def print(f: Foo) = f.x
val f = Foo(3)
print(f)
:24: error: type mismatch;
 found   : Foo
 required: Foo
{noformat}

For another example:

{noformat}
trait Currency
case object EUR extends Currency
case object USD extends Currency

import scala.util.Random.nextInt  // not in scope by default
def nextCurrency: Currency = nextInt(2) match {
  case 0 => EUR
  case _ => USD
}

:22: error: type mismatch;
 found   : EUR.type
 required: Currency
 case 0 => EUR

:24: error: type mismatch;
 found   : USD.type
 required: Currency
 case _ => USD
{noformat}

  was:
Normal Scala expressions are interpreted in a strange way in the Spark shell. 
For instance:

case class Foo(x: Int)
def print(f: Foo) = f.x
val f = Foo(3)
print(f)
:24: error: type mismatch;
 found   : Foo
 required: Foo

For another example:

trait Currency
case object EUR extends Currency
case object USD extends Currency

import scala.util.Random.nextInt  // not in scope by default
def nextCurrency: Currency = nextInt(2) match {
  case 0 => EUR
  case _ => USD
}

:22: error: type mismatch;
 found   : EUR.type
 required: Currency
 case 0 => EUR

:24: error: type mismatch;
 found   : USD.type
 required: Currency
 case _ => USD


> Spark shell has weird scala semantics
> -
>
> Key: SPARK-2330
> URL: https://issues.apache.org/jira/browse/SPARK-2330
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 0.9.1, 1.0.0
> Environment: Ubuntu 14.04 with spark-x.x.x-bin-hadoop2
>Reporter: Andrea Ferretti
>  Labels: scala, shell
>
> Normal Scala expressions are interpreted in a strange way in the Spark shell. 
> For instance:
> {noformat}
> case class Foo(x: Int)
> def print(f: Foo) = f.x
> val f = Foo(3)
> print(f)
> :24: error: type mismatch;
>  found   : Foo
>  required: Foo
> {noformat}
> For another example:
> {noformat}
> trait Currency
> case object EUR extends Currency
> case object USD extends Currency
> import scala.util.Random.nextInt  // not in scope by default
> def nextCurrency: Currency = nextInt(2) match {
>   case 0 => EUR
>   case _ => USD
> }
> :22: error: type mismatch;
>  found   : EUR.type
>  required: Currency
>  case 0 => EUR
> :24: error: type mismatch;
>  found   : USD.type
>  required: Currency
>  case _ => USD
> {noformat}





[jira] [Created] (SPARK-2330) Spark shell has weird scala semantics

2014-06-30 Thread Andrea Ferretti (JIRA)
Andrea Ferretti created SPARK-2330:
--

 Summary: Spark shell has weird scala semantics
 Key: SPARK-2330
 URL: https://issues.apache.org/jira/browse/SPARK-2330
 Project: Spark
  Issue Type: Bug
  Components: Spark Core
Affects Versions: 1.0.0, 0.9.1
 Environment: Ubuntu 14.04 with spark-x.x.x-bin-hadoop2
Reporter: Andrea Ferretti


Normal Scala expressions are interpreted in a strange way in the Spark shell. 
For instance:

case class Foo(x: Int)
def print(f: Foo) = f.x
val f = Foo(3)
print(f)
:24: error: type mismatch;
 found   : Foo
 required: Foo

For another example:

trait Currency
case object EUR extends Currency
case object USD extends Currency

import scala.util.Random.nextInt  // not in scope by default
def nextCurrency: Currency = nextInt(2) match {
  case 0 => EUR
  case _ => USD
}

:22: error: type mismatch;
 found   : EUR.type
 required: Currency
 case 0 => EUR

:24: error: type mismatch;
 found   : USD.type
 required: Currency
 case _ => USD
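
As with the examples above, a possible workaround (an editor's sketch, not part of 
the original report) is to enter the whole group of definitions in a single :paste 
block, so the trait, its case objects, and the function are compiled together:

{noformat}
scala> :paste
// Entering paste mode (ctrl-D to finish)

import scala.util.Random.nextInt

sealed trait Currency  // sealed also lets the compiler check match exhaustiveness
case object EUR extends Currency
case object USD extends Currency

def nextCurrency: Currency = nextInt(2) match {
  case 0 => EUR
  case _ => USD
}

// Exiting paste mode, now interpreting.
{noformat}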


