Github user aray commented on the issue:

    https://github.com/apache/spark/pull/16161
  
    Right now the following is not supported:
    ```
    case class Foo(a: Map[Int, Int])
    ```
    (using the Scala `Predef` version of `Map`)
    
    The [documented](http://spark.apache.org/docs/latest/sql-programming-guide.html#data-types) way to do this is:
    ```
    case class Foo(a: scala.collection.Map[Int, Int])
    ```
    and it works fine.
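    
    For context on why the two spellings differ (a minimal sketch in plain Scala, independent of any Spark internals): `Predef`'s `Map` is an alias for `scala.collection.immutable.Map`, which is a subtype of the more general `scala.collection.Map` — so values widen in one direction but not the other, which is the kind of mismatch the generated code can trip over:
    ```
    object MapAliasSketch {
      def main(args: Array[String]): Unit = {
        // A bare `Map` resolves via Predef to scala.collection.immutable.Map.
        val predefMap: Map[Int, Int] = Map(1 -> 2)
        assert(predefMap.isInstanceOf[scala.collection.immutable.Map[_, _]])
    
        // It widens to the documented scala.collection.Map...
        val general: scala.collection.Map[Int, Int] = predefMap
    
        // ...but not the other way around: the line below would not compile.
        // val narrowed: Map[Int, Int] = general
        println(general(1)) // prints 2
      }
    }
    ```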
    
    However, if someone did not read the documentation carefully and used the former, then instead of a reasonable error message they get a compile error in the code Spark generates. Therefore, IMHO there are two options:
    
    1. Make the generated code work with either version of Map (this PR)
    2. Intercept definitions that use the wrong version of `Map` and issue a proper error.
    
    If the consensus is that this PR is not worth it, then I'll be happy to work on option 2. But in my opinion, as a Spark user, option 1 is better.
