For the current release, maybe Holden could just sign the artifacts with
her own key manually, if this is a concern. I don't think that would
require modifying the release pipeline, except to remove or ignore the
existing signatures.
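A minimal sketch of what that manual step could look like, assuming a
GNU/Linux machine with GnuPG and coreutils; the artifact name below is
illustrative, not a real release file:

```shell
# Hypothetical artifact name, created here only so the commands have
# something to operate on.
ARTIFACT=spark-x.y.z-bin-example.tgz
echo "release contents" > "$ARTIFACT"

# The RM would produce a detached, ASCII-armored signature with their
# own key (commented out here because it needs a private key in the
# local keyring):
#   gpg --armor --detach-sign "$ARTIFACT"

# Publish a SHA-512 checksum alongside the artifact.
sha512sum "$ARTIFACT" > "$ARTIFACT.sha512"

# Anyone can then verify the checksum (and, with the RM's public key,
# the .asc signature via `gpg --verify`).
sha512sum -c "$ARTIFACT.sha512" && echo "checksum OK"
```

The detached `.asc` and the `.sha512` file would be uploaded next to the
artifact, replacing the signatures the existing pipeline produces.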

- Patrick

On Mon, Sep 18, 2017 at 7:56 PM, Reynold Xin <r...@databricks.com> wrote:

> Does anybody know whether this is a hard blocker? If it is not, we should
> probably push 2.1.2 forward quickly and do the infrastructure improvement
> in parallel.
>
> On Mon, Sep 18, 2017 at 7:49 PM, Holden Karau <hol...@pigscanfly.ca>
> wrote:
>
>> I'm more than willing to help migrate the scripts as part of either this
>> release or the next.
>>
>> It sounds like there is a consensus developing around changing the
>> process -- should we hold off on the 2.1.2 release or roll this into the
>> next one?
>>
>> On Mon, Sep 18, 2017 at 7:37 PM, Marcelo Vanzin <van...@cloudera.com>
>> wrote:
>>
>>> +1 to this. There should be a script in the Spark repo that has all
>>> the logic needed for a release. That script should take the RM's key
>>> as a parameter.
>>>
If there's a desire to keep the current Jenkins job to create the
release, it should be based on that script. But from what I'm seeing
there are currently too many unknowns in the release process.
>>>
>>> On Mon, Sep 18, 2017 at 4:55 PM, Ryan Blue <rb...@netflix.com.invalid>
>>> wrote:
> I don't understand why it is necessary to share a release key. If this
> is something that can be automated in a Jenkins job, then can it be a
> script with a reasonable set of build requirements for Mac and Ubuntu?
> That's the approach I've seen the most in other projects.
>
> I'm also not just concerned about release managers. Having a key stored
> persistently on outside infrastructure adds the most risk, as Luciano
> noted as well. We should also start publishing checksums in the Spark
> VOTE thread, which are currently missing. The risk I'm concerned about
> is that if the key were compromised, it would be possible to replace
> binaries with perfectly valid ones, at least on some mirrors. If the
> Apache copy were replaced, then we wouldn't even be able to catch that
> it had happened. Given the high profile of Spark and the number of
> companies that run it, I think we need to take extra care to make sure
> that can't happen, even if it is an annoyance for the release managers.
>>>
>>> --
>>> Marcelo
>>>
>>> ---------------------------------------------------------------------
>>> To unsubscribe e-mail: dev-unsubscr...@spark.apache.org
>>>
>>>
>>
>>
>> --
>> Twitter: https://twitter.com/holdenkarau
>>
>
>
