On Tue, Dec 22, 2015 at 1:46 PM, Yehuda Katz <[email protected]> wrote:
> I want to make sure that I understand the core constraints here:
>
>   1. Gecko wants to be able to reliably do builds (especially urgent
>      ones) without relying on the uptime of third-party services.
>   2. Gecko's build bots do not have any access to the Internet.
>   3. Servo will continue to make use of third-party libraries like
>      rust-url that will be developed in parallel by the Rust community.
>   4. Browsers need to be sure that the upstream code they are using is
>      byte-for-byte compatible across builds.
>   5. Gecko would like to manually audit all changes to upstream
>      dependencies, and be sure that once a dependency (direct or
>      indirect) has been audited, the relevant source code cannot change
>      without additional intervention.
>   6. Both Gecko and Servo need the ability to quickly (and in a
>      lightweight fashion) make changes to upstream dependencies without
>      needing to immediately convince the upstream maintainer to accept
>      the change.
>   7. The Rust community assumes that the Cargo package manager will be
>      used to update dependencies, and packages lean on sophisticated
>      dependency-resolution algorithms to manage the process of updates.
>
> Have I missed any important constraints?

Looks mostly good to me. I would add "and time" to point #4 to drive home
the point of reproducible and deterministic builds. There might be room for
a point about the needs/desires of downstream packagers of Firefox. And
Bobby's points about forks being undesirable and about the developer
workflow implications are good too. (There's a concrete sketch of the
offline Bundler/Cargo workflow discussed further down at the end of this
message.)

> On Tue, Dec 22, 2015 at 9:25 AM, Bobby Holley <[email protected]> wrote:
>
>> Reposting this without the extraneous mailing lists. Please reply to
>> this one.
>>
>> On Tue, Dec 22, 2015 at 9:23 AM, Bobby Holley <[email protected]> wrote:
>>
>>> On Mon, Dec 21, 2015 at 2:47 PM, Alex Crichton <[email protected]> wrote:
>>>
>>>> I think one important aspect here is to separate out the concerns of
>>>> vendoring code in-tree vs. where that code is developed. It sounds
>>>> like Gecko has lots of technical requirements about why and how code
>>>> lives in-tree, but the more important bit here is how it changes over
>>>> time.
>>>>
>>>> What Yehuda was saying about forking or syncing having downsides is,
>>>> I think, quite important for Rust code in Gecko, as this may run the
>>>> risk of splitting the ecosystem along various boundaries (e.g. the
>>>> "gecko rust-url" and the "upstream rust-url", or something like
>>>> that). Along those lines, I think it's very important to drill down
>>>> into exactly what concerns there are about having these components
>>>> developed in the Servo style, as they are today.
>>>
>>> IME, "Servo-style today" would already be an enormous hassle if it
>>> weren't for the mitigating factor that most of the core browser
>>> components already live in a monolithic repo. In the cases where some
>>> more core-ish code lives in an external repo (like rust-selectors),
>>> making cross-coupled changes is a huge pain, even with
>>> super-responsive maintainers like Simon.
>>>
>>> Given that I hack on both, I am interested in solving this problem for
>>> Servo too. :-)
>>>
>>>> It sounds like there's some hesitation about this today, but Yehuda,
>>>> I think, is making a good case that tooling can make the process
>>>> quite easy. There's quite a lot to benefit from in having only "one
>>>> source of truth" for all this code!
>>>>
>>>> On Mon, Dec 21, 2015 at 2:45 PM, Yehuda Katz <[email protected]> wrote:
>>>>
>>>>> On Mon, Dec 21, 2015 at 2:28 PM, Josh Matthews <[email protected]> wrote:
>>>>>
>>>>>> On 2015-12-21 5:21 PM, Yehuda Katz wrote:
>>>>>>
>>>>>>> On Mon, Dec 21, 2015 at 2:14 PM, Bobby Holley <[email protected]> wrote:
>>>>>>>
>>>>>>>> I don't think this is going to fly in Gecko, unfortunately.
>>>>>>>>
>>>>>>>> Gecko is a monolithic repo, and everything needs to be vendored
>>>>>>>> in-tree (a non-negotiable requirement from the build peers). This
>>>>>>>> means that we'll already need an automated process for pulling in
>>>>>>>> updates to the shared code, committing them, and running them
>>>>>>>> through CI.
>>>>>>>
>>>>>>> Can you say a bit more about what the requirements are here? Is
>>>>>>> the reason for including the code in-tree that you need to be 100%
>>>>>>> confident that everyone is talking about the same code? Or is it
>>>>>>> more than that?
>>>>>>
>>>>>> The "non-negotiable requirement from the build peers" is that the
>>>>>> machines that build Firefox do not have internet access.
>>>>>
>>>>> That's a totally reasonable requirement, and one that I am very
>>>>> comfortable saying Cargo should support.
>>>>>
>>>>> To support this requirement, we plan to add a new command that
>>>>> packages up an application into a single tarball (or equivalent),
>>>>> complete with all dependencies, and another command that can build
>>>>> such a package without network connectivity.
>>>>>
>>>>> Bundler has equivalent facilities (bundle package and bundle install
>>>>> --deployment) that I consider indispensable for deployment
>>>>> scenarios, and which I championed early on in Bundler's development.
>>>>> (Heroku uses `bundle install --deployment` in their buildpacks.)
>>>>>
>>>>> -- Yehuda
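
To make the Bundler comparison concrete: the commands Yehuda mentions copy
every gem an application depends on into its own tree and then install from
that local cache. Here is a rough sketch of the two workflows side by side.
The Bundler commands are the real ones named above; the Cargo commands and
paths are hypothetical placeholders for the planned functionality, not
something Cargo ships today:

    # Bundler: copy all dependency gems into ./vendor/cache, then
    # install from that cache (no network needed once it is complete).
    bundle package
    bundle install --deployment

    # Hypothetical Cargo analogue (command names and flags are
    # illustrative only): vendor crate sources into the tree, then
    # build without touching the network.
    cargo vendor ./third_party/rust
    cargo build --offline

Something along these lines would address constraint #2 (no Internet access
on the build machines) while keeping the vendored crates byte-for-byte
identical across builds and over time, per point #4.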

