> [The shepherd] can advise on technical and procedural considerations for
> people outside the community

The sentiment is good, but this doesn't justify requiring a shepherd for a
proposal. There are plenty of people who wouldn't need this, who would get
feedback during discussion anyway, or who would ask a committer or PMC member
directly even if it weren't a formal requirement.

> if no one is willing to be a shepherd, the proposed idea is probably not
> going to receive much traction in the first place.

This also doesn't sound like a reason for needing a shepherd. Saying that a
shepherd probably won't hurt the process doesn't tell me why a shepherd
should be required in the first place.

What was the motivation for adding a shepherd originally? A shepherd may not
be harmful and could well be helpful, but neither of those makes me think
that one should be required, or that a proposal should fail without one.

rb

On Thu, Feb 16, 2017 at 12:23 PM, Tim Hunter <timhun...@databricks.com>
wrote:

> The doc looks good to me.
>
> Ryan, the role of the shepherd is to make sure that someone
> knowledgeable about Spark processes is involved: this person can advise
> on technical and procedural considerations for people outside the
> community. Also, if no one is willing to be a shepherd, the proposed
> idea is probably not going to receive much traction in the first
> place.
>
> Tim
>
> On Thu, Feb 16, 2017 at 9:17 AM, Cody Koeninger <c...@koeninger.org>
> wrote:
> > Reynold, thanks, LGTM.
> >
> > Sean, great concerns.  I agree that behavior is largely cultural and
> > writing down a process won't necessarily solve any problems one way or
> > the other.  But one outwardly visible change I'm hoping for out of
> > this is a way for people who have a stake in Spark, but can't follow
> > JIRAs closely, to go to the Spark website, see the list of proposed
> > major changes, contribute discussion on issues that are relevant to
> > their needs, and see a clear direction once a vote has passed.  We
> > don't have that now.
> >
> > Ryan, realistically speaking any PMC member can and will stop any
> > changes they don't like anyway, so we might as well be up front about the
> > reality of the situation.
> >
> > On Thu, Feb 16, 2017 at 10:43 AM, Sean Owen <so...@cloudera.com> wrote:
> >> The text seems fine to me. Really, this is not describing a fundamentally
> >> new process, which is good. We've always had JIRAs, we've always been able
> >> to call a VOTE for a big question. This just writes down a sensible set of
> >> guidelines for putting those two together when a major change is proposed.
> >> I look forward to turning some big JIRAs into a request for a SPIP.
> >>
> >> My only hesitation is that this seems to be perceived by some as a new or
> >> different thing that is supposed to solve some problems that aren't
> >> otherwise solvable. I see mentioned problems like: a clear process for
> >> managing work, public communication, more committers, some sort of binding
> >> outcome and deadline.
> >>
> >> If SPIP is supposed to be a way to make people design in public and a way
> >> to force attention to a particular change, then this doesn't do that by
> >> itself. Therefore I don't want to let a detailed discussion of SPIP detract
> >> from the discussion about doing what SPIP implies. It's just a process
> >> document.
> >>
> >> Still, a fine step IMHO.
> >>
> >> On Thu, Feb 16, 2017 at 4:22 PM Reynold Xin <r...@databricks.com> wrote:
> >>>
> >>> Updated. Any feedback from other community members?
> >>>
> >>>
> >>> On Wed, Feb 15, 2017 at 2:53 AM, Cody Koeninger <c...@koeninger.org>
> >>> wrote:
> >>>>
> >>>> Thanks for doing that.
> >>>>
> >>>> Given that there are at least 4 different Apache voting processes,
> >>>> "typical Apache vote process" isn't meaningful to me.
> >>>>
> >>>> I think the intention is that in order to pass, it needs at least 3 +1
> >>>> votes from PMC members *and no -1 votes from PMC members*.  But the
> >>>> document doesn't explicitly say that second part.
> >>>>
> >>>> There's also no mention of the duration a vote should remain open.
> >>>> There's a mention of a month for finding a shepherd, but that's
> >>>> different.
> >>>>
> >>>> Other than that, LGTM.
> >>>>
> >>>> On Mon, Feb 13, 2017 at 9:02 AM, Reynold Xin <r...@databricks.com> wrote:
> >>>>>
> >>>>> Here's a new draft that incorporated most of the feedback:
> >>>>> https://docs.google.com/document/d/1-Zdi_W-wtuxS9hTK0P9qb2x-nRanvXmnZ7SUi4qMljg/edit#
> >>>>>
> >>>>> I added a specific role for SPIP Author and another one for SPIP
> >>>>> Shepherd.
> >>>>>
> >>>>> On Sat, Feb 11, 2017 at 6:13 PM, Xiao Li <gatorsm...@gmail.com> wrote:
> >>>>>>
> >>>>>> During the summit, I also had a lot of discussions over similar topics
> >>>>>> with multiple Committers and active users. I heard many fantastic
> >>>>>> ideas. I believe Spark improvement proposals are good channels to
> >>>>>> collect the requirements/designs.
> >>>>>>
> >>>>>>
> >>>>>> IMO, we also need to consider priority when working on these items.
> >>>>>> Even if a proposal is accepted, that does not mean it will be
> >>>>>> implemented and merged immediately. It is not a FIFO queue.
> >>>>>>
> >>>>>>
> >>>>>> Even if some PRs are merged, we sometimes still have to revert them
> >>>>>> if the design and implementation were not reviewed carefully. We have
> >>>>>> to ensure our quality. Spark is not application software; it is
> >>>>>> infrastructure software that is used by many, many companies. We have
> >>>>>> to be very careful in the design and implementation, especially when
> >>>>>> adding or changing external APIs.
> >>>>>>
> >>>>>>
> >>>>>> When I developed Mainframe infrastructure/middleware software over the
> >>>>>> past 6 years, I was involved in discussions with external and internal
> >>>>>> customers. The to-do feature list was always above 100 items. Sometimes
> >>>>>> the customers felt frustrated when we were unable to deliver on time
> >>>>>> due to resource limits and other constraints. Even if they paid us
> >>>>>> billions, we still needed to deliver phase by phase, or sometimes they
> >>>>>> had to accept workarounds. That is the reality everyone has to face, I
> >>>>>> think.
> >>>>>>
> >>>>>>
> >>>>>> Thanks,
> >>>>>>
> >>>>>>
> >>>>>> Xiao Li


-- 
Ryan Blue
Software Engineer
Netflix
