Re: Exposing JIRA issue types at GitHub PRs

2019-06-13 Thread Marco Gaido
Hi Dongjoon,
Thanks for the proposal! I like the idea. Maybe we can extend it to the
component too, and to some JIRA labels, such as correctness, which may be
worth highlighting in PRs as well. My only concern is that in many cases
JIRAs are not created very carefully, so they may be incorrect at the moment
of PR creation and be updated later: keeping them in sync may be an extra
effort.

On Thu, 13 Jun 2019, 08:09 Reynold Xin,  wrote:

> Seems like a good idea. Can we test this with a component first?
>
> On Thu, Jun 13, 2019 at 6:17 AM Dongjoon Hyun 
> wrote:
>
>> Hi, All.
>>
>> Since we use both Apache JIRA and GitHub actively for Apache Spark
>> contributions, we consequently have lots of JIRAs and PRs. One specific
>> thing I've long wanted to see is the `Jira Issue Type` in GitHub.
>>
>> How about exposing JIRA issue types at GitHub PRs as GitHub `Labels`?
>> There are two main benefits:
>> 1. It gives contributors and reviewers more context, improving their
>> communication.
>> (Some people only visit GitHub to see the PR and its commits.)
>> 2. `Labels` are searchable, so we don't need to visit Apache JIRA to find
>> PRs of a specific type.
>> (For example, reviewers can see and review 'BUG' PRs first by
>> using `is:open is:pr label:BUG`.)
>>
>> Of course, this can be done automatically without human intervention.
>> Since we already have a Jenkins job that accesses JIRA and GitHub, that job
>> can add the labels from the beginning. If needed, I can volunteer to update
>> the script.
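>>
>> Roughly, the job could look up the issue type via the JIRA REST API and
>> add it as a label via the GitHub API. A minimal sketch (not the existing
>> script; real code would use a proper JSON library and error handling):
>>
>>   import java.net.{HttpURLConnection, URL}
>>   import scala.io.Source
>>
>>   def labelPr(prNumber: Int, prTitle: String): Unit = {
>>     // PR titles look like "[SPARK-12345][CORE] ...".
>>     val jiraId = "SPARK-\\d+".r.findFirstIn(prTitle).getOrElse(return)
>>     // JIRA REST API: fetch only the issuetype field.
>>     val json = Source.fromURL("https://issues.apache.org/jira/rest/api/2/issue/" +
>>       jiraId + "?fields=issuetype").mkString
>>     // Crude extraction of the issue type name from the JSON response.
>>     val issueType = """"name"\s*:\s*"([^"]+)"""".r
>>       .findFirstMatchIn(json).map(_.group(1).toUpperCase).getOrElse(return)
>>     // GitHub API: POST /repos/apache/spark/issues/:number/labels
>>     // (assumes a GITHUB_TOKEN environment variable with repo scope).
>>     val conn = new URL("https://api.github.com/repos/apache/spark/issues/" +
>>       prNumber + "/labels").openConnection().asInstanceOf[HttpURLConnection]
>>     conn.setRequestMethod("POST")
>>     conn.setRequestProperty("Authorization", "token " + sys.env("GITHUB_TOKEN"))
>>     conn.setDoOutput(true)
>>     conn.getOutputStream.write(("{\"labels\": [\"" + issueType + "\"]}").getBytes("UTF-8"))
>>     conn.getResponseCode  // actually send the request
>>   }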
>>
>> As a demo, I labeled several PRs manually. You can see the result right
>> now on the Apache Spark PR page.
>>
>>   - https://github.com/apache/spark/pulls
>>
>> If you were surprised by that manual activity, I apologize. I hope we can
>> take advantage of the existing GitHub features to serve the Apache Spark
>> community better than yesterday.
>>
>> What do you think about this specific suggestion?
>>
>> Bests,
>> Dongjoon
>>
>> PS. I saw that the `Request Review` and `Assign` features are already used
>> for some purposes, but these features are out of scope for this email.
>>
>


Re: Cartesian issue with user defined objects

2015-02-26 Thread Marco Gaido
Thanks,
my issue was exactly that: the function that extracted the objects from the
file reused the same instance, only mutating it. Creating a new object for
each item solved the issue.
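Roughly, the change was from something like this (MyRecord is just a
stand-in for my class):

  // Buggy: one shared instance, mutated for every line
  val shared = new MyRecord()
  def parse(line: String): MyRecord = { shared.set(line); shared }

to a version that allocates a fresh object per item:

  def parse(line: String): MyRecord = {
    val r = new MyRecord()
    r.set(line)
    r
  }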
Thank you very much for your reply.
Best regards.

 On 26 Feb 2015, at 22:25, Imran Rashid iras...@cloudera.com wrote:
 
 any chance your input RDD is being read from hdfs, and you are running into 
 this issue (in the docs on SparkContext#hadoopFile):
 
 * '''Note:''' Because Hadoop's RecordReader class re-uses the same Writable 
 object for each
 * record, directly caching the returned RDD or directly passing it to an 
 aggregation or shuffle
 * operation will create many references to the same object.
 * If you plan to directly cache, sort, or aggregate Hadoop writable objects, 
 you should first
 * copy them using a `map` function.
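 
 For instance, a minimal sketch of that copy (assuming LongWritable/Text
 records read via hadoopFile):
 
   import org.apache.hadoop.io.{LongWritable, Text}
   import org.apache.hadoop.mapred.TextInputFormat
 
   val raw = sc.hadoopFile[LongWritable, Text, TextInputFormat]("hdfs:///path")
   // Copy each record out of the reused Writable before caching, sorting,
   // or calling cartesian, so every element is a distinct object:
   val safe = raw.map { case (k, v) => (k.get, v.toString) }
   safe.cache()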
 
 
 
 On Thu, Feb 26, 2015 at 10:38 AM, mrk91 marcogaid...@gmail.com wrote:
 Hello,
 
 I have an issue with the cartesian method. When I use it with the Java types
 everything is OK, but when I use it with an RDD made of objects I defined, it
 shows very strange behavior, which depends on whether the RDD is cached or not
 (you can see what happens here:
 http://stackoverflow.com/questions/28727823/creating-a-matrix-of-neighbors-with-spark-cartesian-issue).
 
 Is this due to a bug in its implementation, or are there requirements on
 the objects passed to it?
 Thanks.
 Best regards.
 Marco 