Yeah, I agree with you all.

Just to let you know, this was fixed anyway in
https://github.com/apache/spark/commit/6fc3dc8839eaed673c64ec87af6dfe24f8cebe0c

On 14 Apr 2016 5:13 p.m., "Takeshi Yamamuro" <linguin....@gmail.com> wrote:

>

> The latter is simpler and involves less typing, I think.
> How about adding this as an example in these style guides?
>
> // maropu
>
> On Thu, Apr 14, 2016 at 4:35 PM, Mark Hamstra <m...@clearstorydata.com>
wrote:

>>

>> I don't believe the Scala compiler understands the difference between
your two examples the same way that you do.  Looking at a few similar
cases, I've only found the bytecode produced to be the same regardless of
which style is used.
>>
>> On Wed, Apr 13, 2016 at 7:46 PM, Hyukjin Kwon <gurwls...@gmail.com>
wrote:

>>>

>>> Hi all,
>>>
>>> I recently noticed that there are actually some usages of functional
transformations (e.g. map, foreach, etc.) with an extra anonymous closure.
>>>
>>> For example,
>>>
>>> ...map(item => {
>>>   ...
>>> })
>>>
>>> which can be written simply as below:
>>>
>>> ...map { item =>
>>>   ...
>>> }
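To make the comparison above concrete, here is a minimal, self-contained sketch (the object and value names are hypothetical, just for illustration) showing the two styles side by side; both express the same closure, and per Mark's observation later in the thread, the compiled bytecode appears to be identical:

```scala
// Minimal sketch contrasting the two closure styles discussed above.
// Names (ClosureStyles, items, doubledParens, doubledBraces) are
// hypothetical examples, not from the Spark codebase.
object ClosureStyles {
  def main(args: Array[String]): Unit = {
    val items = Seq(1, 2, 3)

    // Style 1: parentheses wrapping an extra anonymous-closure block
    val doubledParens = items.map(item => {
      item * 2
    })

    // Style 2: braces passing the closure body directly (less typing)
    val doubledBraces = items.map { item =>
      item * 2
    }

    // The results are identical either way
    assert(doubledParens == doubledBraces)
    println(doubledParens) // prints List(2, 4, 6)
  }
}
```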
>>>
>>> I wrote a regex to find all of them and corrected them for a PR (which I
have not submitted yet).
>>>
>>> However, I feel a bit hesitant because the only reasons I can think of
for this are:
>>>
>>>     firstly, Spark coding guides in both
https://github.com/databricks/scala-style-guide and
https://cwiki.apache.org/confluence/display/SPARK/Spark+Code+Style+Guide
are not using the examples as above
>>>
>>>     secondly, I feel like an extra anonymous closure can harm performance,
but I am not too sure,
>>>
>>> which I think are not persuasive enough.
>>>
>>>
>>>
>>> To cut it short, my questions are,
>>>
>>> 1. Would this be a proper change for a PR?
>>>
>>> 2. Would there be more explicit reasons to remove the extra closure,
beyond coding style?
>>>
>>>
>>> Thanks!
>>
>>
>
>
>
> --
> ---
> Takeshi Yamamuro
