[ https://issues.apache.org/jira/browse/SPARK-26296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

M. Le Bihan updated SPARK-26296:
--------------------------------
    Description: 
I do not use _Scala_ when I program _Spark_, but plain _Java_. I believe 
those using _PySpark_ do not use it either.
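
For illustration, this is the kind of job I mean (a minimal sketch, assuming 
spark-sql 2.4.x on the classpath; the class name and the CSV input are just 
examples):

{code:java}
// A plain-Java Spark job: no Scala appears anywhere in the user code.
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class PlainJavaJob {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("plain-java-job")
                .master("local[*]")
                .getOrCreate();

        // Read a CSV file and count its rows, entirely through the Java API.
        Dataset<Row> rows = spark.read().option("header", "true").csv(args[0]);
        System.out.println("row count = " + rows.count());

        spark.stop();
    }
}
{code}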

 

But _Spark_ has been built on _Scala_ instead of plain _Java_, and this is a 
source of trouble, especially when upgrading the JDK. We are waiting to move 
to JDK 11, and _Scala_ is still holding _Spark_ back on the previous version 
of the JDK.

_Big Data_ programming should not force developers to get by with _Scala_ when 
it is not the language they have chosen.

 

Having a _Spark_ without _Scala_, just as it is possible to have a _Spark_ 
without _Hadoop_, would reassure me: a source of issues would disappear.

Providing an optional _spark-scala_ artifact would be fine, as those who do 
not need it would not download it. In the same move, you could return to 
generating standard Javadoc for the Java classes' documentation.
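
For example (the exact coordinates are hypothetical): today a Java project 
must depend on an artifact such as org.apache.spark:spark-core_2.11:2.4.0, 
with the Scala version baked into the artifact name. Under this proposal it 
could depend on a plain org.apache.spark:spark-core and add a separate 
spark-scala artifact only when the Scala API is actually wanted, much as the 
"without Hadoop" distribution lets users supply Hadoop themselves.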

 


> Spark built over Java and not over Scala, offering Scala as an option on 
> top of Spark
> --------------------------------------------------------------------------------
>
>                 Key: SPARK-26296
>                 URL: https://issues.apache.org/jira/browse/SPARK-26296
>             Project: Spark
>          Issue Type: Wish
>          Components: Spark Core
>    Affects Versions: 2.4.0
>            Reporter: M. Le Bihan
>            Priority: Minor
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
