On Mon, Dec 21, 2015 at 2:29 PM, Bobby Holley <[email protected]> wrote:

> On Mon, Dec 21, 2015 at 2:21 PM, Yehuda Katz <[email protected]> wrote:
>
>> On Mon, Dec 21, 2015 at 2:14 PM, Bobby Holley <[email protected]>
>> wrote:
>>
>> > I don't think this is going to fly in Gecko, unfortunately.
>> >
>> > Gecko is a monolithic repo, and everything needs to be vendored in-tree
>> (a
>> > non-negotiable requirement from the build peers). This means that we'll
>> > already need an automated process for pulling in updates to the shared
>> > code, committing them, and running them through CI.
>> >
>>
>> Can you say a bit more about what the requirements are here? Is the reason
>> for including the code in-tree that you need to be 100% confident that
>> everyone is talking about the same code? Or is it more than that?
>>
>
> The reasons I've heard are:
> (1) Gecko/Firefox has a very complicated releng setup, and the builders
> are heavily firewalled from the outside, and not allowed to hit the
> network. So adding network dependencies to the build step would require a
> lot of operations work.
> (2) Gecko exists on a pretty long timescale, and we want to make sure that
> we can still build Firefox 50 ten years from now, even if Cargo has long
> since migrated to some other setup.
> (3) A general unease about depending on any third-party service without a
> contract and SLA in order to build and ship Firefox.
>
> There may be other reasons, or I may be getting some of these wrong. This
> all comes from gps, ted, etc, so you're probably better off discussing with
> them directly.
>

This is the gist of it. There are also implications for downstream
packagers. The more complicated our build mechanism is, the more work it is
for them. Having everything vendored makes it self-contained and more
manageable.
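The automated "pull in updates, commit them" process mentioned earlier in the thread can be sketched roughly as: snapshot the upstream source into the tree and record which revision it came from plus a checksum of what was copied. This is only an illustration of the idea, not Gecko's actual tooling; the function name, the `VENDOR_MANIFEST.json` file, and the layout are all made up for this example:

```python
import hashlib
import json
import shutil
from pathlib import Path

def vendor_snapshot(upstream_dir, vendor_dir, revision):
    """Copy an upstream source snapshot into the tree and record the
    revision it came from, so the vendored copy is self-contained and
    can be audited later. (Illustrative sketch only.)"""
    upstream_dir = Path(upstream_dir)
    vendor_dir = Path(vendor_dir)

    # Replace any previous snapshot wholesale.
    if vendor_dir.exists():
        shutil.rmtree(vendor_dir)
    shutil.copytree(upstream_dir, vendor_dir)

    # Hash every vendored file (paths sorted for determinism) so that
    # later local modifications can be detected.
    digest = hashlib.sha256()
    for path in sorted(vendor_dir.rglob("*")):
        if path.is_file():
            digest.update(path.relative_to(vendor_dir).as_posix().encode())
            digest.update(path.read_bytes())

    manifest = {"revision": revision, "sha256": digest.hexdigest()}
    (vendor_dir / "VENDOR_MANIFEST.json").write_text(
        json.dumps(manifest, indent=2)
    )
    return manifest
```

A CI job could run this against each upstream release, commit the result, and let the normal test infrastructure validate the update before it lands.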

There is also a general trend towards reproducible builds. Those are a bit
harder to attain when you are trying to cobble together pieces from
multiple repositories. Related to this are security and integrity concerns.
Could a malicious actor insert a vulnerability in Firefox by compromising a
3rd party repository/project? Would we necessarily have the audit trail in
place to detect this if things weren't vendored? (Yes, we have exposure to
this today.)
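One way to get the kind of audit trail asked about above is to re-hash the vendored tree and compare it against a checksum recorded at vendoring time. The sketch below assumes a hypothetical `VENDOR_MANIFEST.json` file holding the recorded `sha256`; none of these names come from Gecko's actual build system:

```python
import hashlib
import json
from pathlib import Path

MANIFEST_NAME = "VENDOR_MANIFEST.json"  # hypothetical manifest file

def tree_digest(vendor_dir):
    """Deterministic SHA-256 over every vendored file, paths sorted,
    with the manifest itself excluded from its own checksum."""
    digest = hashlib.sha256()
    for path in sorted(Path(vendor_dir).rglob("*")):
        if path.is_file() and path.name != MANIFEST_NAME:
            digest.update(path.relative_to(vendor_dir).as_posix().encode())
            digest.update(path.read_bytes())
    return digest.hexdigest()

def verify_vendored(vendor_dir):
    """True if the tree still matches the checksum recorded when it was
    vendored; False means some file was added, removed, or modified."""
    manifest = json.loads((Path(vendor_dir) / MANIFEST_NAME).read_text())
    return tree_digest(vendor_dir) == manifest["sha256"]
```

Run as a CI check, this turns "did a compromised upstream or a stray local edit change our vendored copy?" into a mechanical comparison rather than a manual review.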

Also, #3 is more important than #1. To add some perspective, we can't have
parts of automation clone from github.com because we've found GitHub to be
too unreliable. I'm not talking about the China-based DDoS from a few
months back - this has been a longstanding problem. In general, we don't
want to have a Firefox chemspill delayed because some random 3rd party
server isn't available.
_______________________________________________
dev-servo mailing list
[email protected]
https://lists.mozilla.org/listinfo/dev-servo
