Re: GNOME Build situation and BuildStream

2017-04-28 Thread Sébastien Wilmet
On Thu, Apr 27, 2017 at 11:21:37PM +0900, Tristan Van Berkom wrote:
> On Thu, 2017-04-27 at 14:41 +0200, Sébastien Wilmet wrote:
> [...]
> > With jhbuild, when we enter into a jhbuild shell we are still in the
> > same directory, usually inside the git repository. With builddir ==
> > srcdir we have all the files that we can directly open with our
> > preferred text editor. When we open a new terminal tab, we are in the
> > same directory where we are able to 1) run git commands, 2) build
> > (with recursive make), 3) launch executables, 4) edit files, etc.
> > 
> > With BuildStream, will it be similar?

[...]

> So, this is a little bit fiddly compared to working entirely within one
> build sandbox, only because you really need to exit and enter a sandbox
> environment when you want to try something out; otherwise it's snappy
> (and maybe a convenience command to say "build + shell" in one go could
> reduce a bit of typing).
> 
> On the bright side, you never ever trust your host environment for
> anything, except for a display server and session bus in the case that
> you use `bst shell` to run things.

OK, thanks for your detailed explanation.

--
Sébastien


Re: GNOME Build situation and BuildStream

2017-04-27 Thread Tristan Van Berkom
On Thu, 2017-04-27 at 14:41 +0200, Sébastien Wilmet wrote:
> Hi Tristan,
> 

[...]
> With jhbuild, when we enter into a jhbuild shell we are still in the
> same directory, usually inside the git repository. With builddir ==
> srcdir we have all the files that we can directly open with our
> preferred text editor. When we open a new terminal tab, we are in the
> same directory where we are able to 1) run git commands, 2) build
> (with recursive make), 3) launch executables, 4) edit files, etc.
> 
> With BuildStream, will it be similar?

Hi Sébastien,

Good question :)

There are some things which will inevitably be different. I think the
most disruptive thing is that you will not have the experience of
having a single, persistent filesystem tree where the things you've
built "are".

This is because BuildStream does not have a serial build model but
rather will parallelize builds where possible; every build result is
stored in a separate "artifact", and sandboxed environments are created
on demand.

So, first of all, to talk about VMs: launching a full VM is the
preferred way to:

  o Test how some software interacts in a full GNOME environment,
    usually the bleeding edge of development.

  o Work on modules like GNOME Shell, GDM, GNOME Session etc., which
    are very difficult to isolate and work on in your host environment.


That said, today BuildStream has a `bst shell` option to stage a given
module's dependencies in a sandbox and run a shell on demand.

There are two semantics for this. First of all, let's assume that you
have a checkout of the GNOME build metadata (or "modulesets"), your
current working directory is at the root of that checkout, and the
module you want to hack on is called "foo".


  bst shell --scope build foo.bst
  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  Stage all of the build time dependencies for the module `foo.bst`,
  and also stage the sources for building `foo.bst`, and drop you
  into a shell in the build directory.

  Useful for debugging failed builds (however, when a build fails
  you will be presented with an option to shell into the actual failed
  build sandbox anyway).
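
  As a rough sketch of that case (output elided; `foo.bst` and its
  build commands are illustrative, not from the actual metadata):

    host$ bst build foo.bst
    # ... the build of foo.bst fails ...
    host$ bst shell --scope build foo.bst
    sandbox$ make        # or whatever build commands foo uses
    sandbox$ exit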


  bst shell --scope run foo.bst
  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  Stage all of the runtime dependencies for the module `foo.bst`,
  including the built `foo.bst` module itself, and drop you
  into a shell in that sandboxed environment.

  Useful for debugging applications to a certain degree; you can
  run gdb, strace and similar things in here.

  This shell differs from the actual build sandbox because it allows
  some pass-through of the host environment variables. This makes
  it possible to launch graphical applications like, say, gedit.

  This will _only_ work well on systems which have a somewhat
  conforming environment, i.e. your host should be running dbus,
  you should have DBUS_SESSION_BUS_ADDRESS set in your environment,
  and similarly you want to have DISPLAY in your environment.

  So essentially, launching graphical applications inside
  `bst shell --scope run foo.bst` should work only in the cases where
  it would have worked when using jhbuild, so no loss there really.


Now that part is already working, and don't worry about speed; even if
hundreds of "artifacts" need to be staged into a directory, this is
lightning fast and uses hardlinks to get it done.
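
(For intuition: staging a tree via hardlinks is metadata-only work, no
file data is copied, along the lines of the following; paths are
illustrative, and this is a simplification of whatever BuildStream
actually does internally.)

  host$ cp -al ~/.cache/artifacts/foo/ /path/to/sandbox-root/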


But what you will be more interested in is your edit/compile/debug
cycle. For that we have a hard blocker before BuildStream can be
really efficient for the type of workflow you want; we're calling this
"Workspaces"[0].

With workspaces, you will be able to use a directory of your choosing
to build a specific module from (and you can have more than one "active
workspace" at a time, so you might open a workspace to hack on glib,
and another one to hack on GTK+, and have your local trees for both be
effective for any builds).

This is not done yet but here's an approximate mock shell of what I
think the UX would look like:

  # First get a local copy of the modulesets
  host$ git clone 
  host$ cd gnome-modulesets

  # Now let's create some workspaces
  host$ bst init-workspace glib.bst ~/glib
  host$ bst init-workspace gtk+.bst ~/gtk+

  # Open your favorite text editor, and edit
  # files directly in ~/glib and/or ~/gtk+
  #
  # Now build something, maybe we want to just test with gtk3-demo
  host$ bst build gtk+.bst

  # Let's test it
  host$ bst shell --scope run gtk+.bst

  # We're in the sandbox now
  sandbox$ gtk3-demo

  # Hmmm, why did it crash ?
  sandbox$ gdb gtk3-demo
  
  # Ah, I see what I did there...
  sandbox$ exit

  # Edit some files in ~/glib and/or ~/gtk+ and try again
  #
  host$ bst build gtk+.bst
  host$ bst shell --scope run gtk+.bst
  sandbox$ gtk3-demo
  sandbox$ exit

  # Ok that worked !
  host$ cd ~/gtk+
  host$ git commit -a -m "It's a patch !"

  # Do appropriate thing, maybe you push, maybe you
  # do `git format-patch` and post some patch
  #
  # At this point you may want to continuously leave
  # the 

Re: GNOME Build situation and BuildStream

2017-04-27 Thread Sébastien Wilmet
Hi Tristan,

For application or library developers (libraries used by applications),
I'm struggling a bit to see what the workflow will look like with
BuildStream.

I've described two examples of my current workflow in this mail:
https://mail.gnome.org/archives/desktop-devel-list/2016-August/msg00047.html
"builddir != srcdir in jhbuild breaks my workflow"

See also:
https://mail.gnome.org/archives/desktop-devel-list/2017-February/msg00018.html
"Equivalent of recursive make with meson/ninja?"

With BuildStream you're talking about launching a VM. It's quite a big
change compared to how applications are launched with jhbuild.

So, can you describe a little more what the workflow would look like for
application developers using the terminal (not an IDE)?

With jhbuild, when we enter into a jhbuild shell we are still in the
same directory, usually inside the git repository. With builddir ==
srcdir we have all the files that we can directly open with our
preferred text editor. When we open a new terminal tab, we are in the
same directory where we are able to 1) run git commands, 2) build (with
recursive make), 3) launch executables, 4) edit files, etc.

With BuildStream, will it be similar?

--
Sébastien


Re: GNOME Build situation and BuildStream

2017-04-27 Thread Tristan Van Berkom
Hi Matthias,

I realize now that this was too much information at once (even for the
involved reader, as opposed to a fly-by reader).

So I'd like to thank you for your mind share.

On Wed, 2017-04-26 at 16:39 -0400, Matthias Clasen wrote:
> Tristan,
> 
> again, it is impossible to reply to an email of this length. I can
> only give a few general comments, beyond that, we really need to sit
> down face-to-face and discuss this. I hope you are going to be at
> Guadec ?

I will certainly be around all week at GUADEC to meet with you and
anyone who wants to discuss :)

I am preparing a talk on this subject, but perhaps I should also try to
organize something more hands-on; maybe a BoF or such would be good.

> My general comments:
> 
> What you are describing here (and in your previous communications)
> looks like a big, all-encompassing system, with lots of its own
> terminology and a complete worldview of how things should be built. I
> prefer a system that starts small and solves one problem initially,
> and then maybe grows over time.

I can see how it can come across this way; we are trying to break the
trend of having a build system be something that is tied to any
particular deployment/use case.

As such, I needed to give consideration to a lot of use cases to be
sure that this is something that fits, and is also an improvement over
what exists. These considerations are reflected in my communications,
and I can see how one might think this appears to be some kind of huge
monolith which does everything.

However, this is exactly the opposite of what I'm trying to achieve;
instead we are striving to "do one thing well" and are making an effort
to ensure we're doing it the right way, for any use case.

So, the core codebase itself should remain small over time, with really
the sole purpose of being:

   "A format and engine for modeling and executing pipelines of 
    elements which perform mutations on filesystem data within an
    isolated sandboxed environment"

In time, I expect that an ecosystem of plugins and projects will grow
around this, and use cases I had not even foreseen will come to light.
This has already started to happen in some ways, as Jochen Breuer
commented on my blog here:

   https://blogs.gnome.org/tvb/2017/02/06/introducing-buildstream-2/

As a result, he has started to work on a plugin which would allow
importing distro packages and building on top of those bases:

   https://gitlab.com/BuildStream/buildstream/issues/10

> The system you describe seems to be all about centralization, and
> about introducing a new language to describe what we build. That is
> by-and-large what we already have in various incarnations that you
> describe: jhbuild modulesets, the continuous manifest, flatpak
> runtimes. I can get behind the idea of unifying these into a single
> way of describing a multi-module build.
> 
> But I've also seen things mentioned like 'conversion scripts for
> flatpak'. And I think that is exactly the opposite of what we need
> for application building.

I may be mistaken, but I have a feeling you are getting the same
impression which Christian had last month, which I tried to explain in
this email:

   https://mail.gnome.org/archives/desktop-devel-list/2017-March/msg3.html

> If we want to convince 3rd party applications to use flatpak, we
> can't treat it as an intermediate format that we generate from some
> other source, and just use somewhere in a centralized build process.
> We need to embrace it ourselves and use it, just like we expect 3rd
> party applications to use it.

So at the risk of being repetitive, I am completely behind application
authors maintaining their own build metadata themselves, building
flatpaks themselves and/or submitting build metadata to a "flathub" to
have them built and published to users.

Of course this makes sense, because the application authors themselves
are usually best situated to know what should be in their bundling
build metadata.

So let me try to break down how I would see this work (I realize,
already a long email):


  GNOME core modules and services (excluding Flatpak apps)
  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  These can be expressed in a single format/repository for all the
  interesting purposes:

    o Performing CI
    o Creating bootable GNOME images on top of some base system,
      mostly for developers
    o Release modulesets
    o Producing the GNOME Flatpak runtime and SDK

  This is, of course, one centralization.


  Flatpak Applications
  ~~~~~~~~~~~~~~~~~~~~
  Considering the benefits which the GNOME core modules and services
  get by representing build metadata for multiple purposes, a given
  application developer team may also benefit in similar ways.

  This is without centralizing everything into one big build blob, or
  using intermediate formats or anything like this.

  Now, this may be more of a personal goal, a bit more ambitious,
  maybe best punted to later on, but 

Re: GNOME Build situation and BuildStream

2017-04-26 Thread Matthias Clasen
Tristan,

again, it is impossible to reply to an email of this length. I can only
give a few general comments, beyond that, we really need to sit down
face-to-face and discuss this. I hope you are going to be at Guadec ?

My general comments:

What you are describing here (and in your previous communications) looks
like a big, all-encompassing system, with lots of its own terminology and a
complete worldview of how things should be built. I prefer a system that
starts small and solves one problem initially, and then maybe grows over
time.

The system you describe seems to be all about centralization, and about
introducing a new language to describe what we build. That is by-and-large
what we already have in various incarnations that you describe: jhbuild
modulesets, the continuous manifest, flatpak runtimes. I can get behind the
idea of unifying these into a single way of describing a multi-module build.

But I've also seen things mentioned like 'conversion scripts for flatpak'.
And I think that is exactly the opposite of what we need for application
building.

If we want to convince 3rd party applications to use flatpak, we can't
treat it as an intermediate format that we generate from some other source,
and just use somewhere in a centralized build process. We need to embrace
it ourselves and use it, just like we expect 3rd party applications to use
it.

Re: GNOME Build situation and BuildStream

2017-04-26 Thread Sasa Ostrouska
On Wed, Apr 26, 2017 at 8:51 AM, Tristan Van Berkom
<tristan.vanber...@codethink.co.uk> wrote:

> Hi Sasa,

Hi Tristan !

> On Tue, 2017-04-25 at 17:45 +, Sasa Ostrouska wrote:
> > Woow, long one really. Ok, I think the idea is really good. Of course
> > a lot of work. I, as a maintainer of a GNOME desktop version for
> > Slackware, would like to ask how this would handle the distros which
> > do not use systemd ?
>
> I did not expect this question, but I'm glad you asked it :)

I just want to clarify that I do not intend to turn this discussion
into one about whether or not to use systemd; personally I have nothing
against it, it's just that the distro I use does not supply it. I think
the situation is similar with *BSD.

> Firstly, I can say that a new build tool is not going to magically make
> GNOME work better on all distros; however, it *can* help us to
> understand the problem better and raise awareness, both for GNOME
> developers and for distro developers/integrators.

Correct, I would personally like to see it become easier to build on my
distro. But there are some things which Slackware does not have, and
some parts of GNOME depend on them. In some cases I can add these to
Slackware, and I have done so for many years already. But in the case
of an init system it is a bit difficult, because it requires too many
low-level changes. Therefore the idea of minimal and well defined
dependencies is really good in my opinion.

> Here's how I think we can greatly improve the integrator's experience:
>
> > >   For CI
> > >   ~~~~~~
> [...]
> > >   Further than this, I should mention there is some movement to
> > >   implement Source plugins to represent distro packaging sources,
> > >   e.g. the dpkg source plugin[4]. With this plugin in place, I
> > >   suspect it will be very easy to run tests of building and running
> > >   GNOME against various third party distributions and versions of
> > >   those. Imagine we could have a board somewhere which displays
> > >   which distributions GNOME builds on without failing tests (we
> > >   could always know when GNOME master is failing against debian
> > >   sid, or latest fedora, etc).
>
> So, my vision of how we can improve communication and collaboration
> between GNOME and its consuming distros works something like this:
>
>   o We would have a dedicated CI system which would build and hopefully
>     run GNOME on a series of "subscribed" distros.
>
>   o An interested party (distro representative/contributor) would have
>     to ensure that BuildStream has a 'Source' plugin which handles
>     importing of distro packages in the package format which that
>     distro uses.

This is OK with me, and I am ready to pick it up, especially for doing
the needed packages.

>     The requirements to meet for implementing a Source plugin are
>     outlined in the comments of the 'dpkg source' issue[4].

OK, I will take a look at it.

>   o The interested party then subscribes to the CI by providing
>     a simple patch to some YAML which says:
>
>       - This is variant 'slackware'
>       - This is how you obtain the 'slackware' base to build on
>         (using the appropriate Source plugin to do the work).
>
>     and then adding 'slackware' to a list of variants that the
>     CI server should try building.

Perfect, seems fine to me.

>   o For every distro that passes some CI, a bootable image could
>     be downloadable too, so one could easily try out the latest GNOME
>     on a variety of bleeding edge versions of distros and compare the
>     experience (this could be fun pretty quickly :))

Yep, that's good.

> I think it would be great if this CI was centralized and hosted by
> GNOME in some way; even though I'm sure that most distros have their
> own forms of CI, this would provide a venue for GNOME developers to
> collaborate with distros directly and have a better understanding of
> what kind of changes break distros in what ways.

Agreed.

> Now of course in such a utopian future, it would be important to
> understand that GNOME running CI against a variety of distros does not
> equate to GNOME making a promise to never break distros.

Correct.

> If a CI fails in this context then it could be for any of the following
> reasons:
>
>   o It is a legitimate integration bug in the distro
>   o It is a legitimate bug somewhere in GNOME
>   o The distro did not provide what GNOME requires
>   o GNOME failed to communicate its requirements clearly enough
>
> So in closing, no, this would not magically make GNOME easier to work
> with when integrating on non-systemd distributions, at least not at
> first.

Yeah, my intention with the init system question was mostly to
understand how you plan to handle this. I can try to get some people in
the Slackware community to help bring the desired dependencies up to
date. GNOME already works quite fine without systemd; there are some
troubles with GDM, but other things mostly work up to 3.20, which I use

Re: GNOME Build situation and BuildStream

2017-04-26 Thread Tristan Van Berkom
Hi Sasa,

On Tue, 2017-04-25 at 17:45 +, Sasa Ostrouska wrote:
> Woow, long one really. Ok, I think the idea is really good. Of course
> a lot of work. I, as a maintainer of a GNOME desktop version for
> Slackware, would like to ask how this would handle the distros which
> do not use systemd ?

I did not expect this question, but I'm glad you asked it :)

Firstly, I can say that a new build tool is not going to magically make
GNOME work better on all distros; however, it *can* help us to
understand the problem better and raise awareness, both for GNOME
developers and for distro developers/integrators.

Here's how I think we can greatly improve the integrator's experience:

> >   For CI
> >   ~~~~~~
[...]
> >   Further than this, I should mention there is some movement to implement
> >   Source plugins to represent distro packaging sources, e.g. the dpkg
> >   source plugin[4]. With this plugin in place, I suspect it will be
> >   very easy to run tests of building and running GNOME against various
> >   third party distributions and versions of those. Imagine we could have
> >   a board somewhere which displays which distributions GNOME builds on
> >   without failing tests (we could always know when GNOME master is failing
> >   against debian sid, or latest fedora, etc).

So, my vision of how we can improve communication and collaboration
between GNOME and its consuming distros works something like this:

  o We would have a dedicated CI system which would build and hopefully
    run GNOME on a series of "subscribed" distros.

  o An interested party (distro representative/contributor) would have
    to ensure that BuildStream has a 'Source' plugin which handles
    importing of distro packages in the package format which that
    distro uses.

    The requirements to meet for implementing a Source plugin are
    outlined in the comments of the 'dpkg source' issue[4].

  o The interested party then subscribes to the CI by providing
    a simple patch to some YAML which says:

      - This is variant 'slackware'
      - This is how you obtain the 'slackware' base to build on
        (using the appropriate Source plugin to do the work).

    and then adding 'slackware' to a list of variants that the
    CI server should try building (a rough sketch of such a patch
    follows this list).

  o For every distro that passes some CI, a bootable image could
    be downloadable too, so one could easily try out the latest GNOME
    on a variety of bleeding edge versions of distros and compare the
    experience (this could be fun pretty quickly :))
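
To make that YAML patch concrete, here is a rough, purely hypothetical
sketch (the file layout, keys and plugin name are all made up for
illustration; the real schema would be whatever the CI project's
metadata defines):

  host$ cat ci/variants/slackware.yaml
  # Hypothetical CI metadata -- not a real BuildStream schema
  variant: slackware
  base:
    kind: slackware-pkg     # the distro's Source plugin
    url: https://mirror.example.com/slackware/slackware-current/
  host$ git commit -a -m "ci: subscribe the slackware variant"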

I think it would be great if this CI was centralized and hosted by
GNOME in some way, even though I'm sure that most distros have their
own forms of CI, this would provide a venue for GNOME developers to
collaborate with distros directly and have a better understanding of
what kind of changes break distros in what ways.

Now of course in such a utopian future, it would be important to
understand that GNOME running CI against a variety of distros does not
equate to GNOME making a promise to never break distros.

If a CI fails in this context then it could be for any of the following
reasons:

  o It is a legitimate integration bug in the distro
  o It is a legitimate bug somewhere in GNOME
  o The distro did not provide what GNOME requires
  o GNOME failed to communicate its requirements clearly enough

So in closing, no, this would not magically make GNOME easier to work
with when integrating on non-systemd distributions, at least not at
first.

However, it could help everyone understand the details and problems
surrounding integrating GNOME on any distro better, which would
contribute to a better experience for distro integrators in general
over time.

That is aside from the most obvious advantage: building the
bleeding edge of GNOME against the bleeding edge of 'foo distro'
continuously will of course help everyone catch integration bugs
earlier in the cycle.

Cheers,
    -Tristan

[4]: https://gitlab.com/BuildStream/buildstream/issues/10


Re: GNOME Build situation and BuildStream

2017-04-26 Thread Tristan Van Berkom
Hi Christian,

On Tue, 2017-04-25 at 10:07 -0700, Christian Hergert wrote:
> On 04/25/2017 09:38 AM, Tristan Van Berkom wrote:
> > 
> > Any questions about what we have created so far and how it works ?
> > Please reply and ask about these !
> 
> I don't think this was mentioned, apologies if I missed it.

No worries, apparently it was a very long email :)

> One thing we want in Builder is a simulator. Being able to take a
> BuildStream bootable image and overlay the project is a very desirable
> feature. It could be a patched gnome-shell, glib, or an application.
> It would be great if your "workspace" feature can allow us to do this
> on the developer host rather quickly (so we don't wait minutes to
> launch the simulator).

Reducing the image creation routine from minutes to seconds is a bit
difficult; for that purpose you might try reusing an already-made image
across multiple sessions.

The way this would normally work (without customization):

  o The user downloads or builds artifacts for all modules which go
    into the image (where 'artifact' is the output of a traditional
    `make DESTDIR=${artifact-root} install`).

    A build is only performed in the case that an artifact does not
    already exist.

  o The user can now build a "workspace" module on pre-built
    dependencies

  o The user can now create an image using their "workspace" artifact

  o The image is then created using all the required artifacts; this
    is an I/O-bound task which takes time proportional to the size of
    the image

The user story for the above should only be a matter of the steps
below (see the mock session after this list):

  o Creating a workspace (something like `bst workspace `)
  o Editing sources in the workspace
  o Running `bst build gnome-system-image.bst`, which would now take
    the active workspace into account
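
As a mock session of that story (reusing the `bst init-workspace`
spelling from the workspace mock elsewhere in this thread; the final
command name is not settled):

  host$ bst init-workspace glib.bst ~/glib
  # ... edit sources in ~/glib ...
  host$ bst build gnome-system-image.bst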

But this would take minutes...

So instead, for a really nifty Builder simulator feature, you might
prefer to work with the content of the image and have it cooperate with
Builder's simulator use case; this may mean either that Builder has a
fork of the upstream release modulesets (with only a few changes), or
that the upstream modulesets have some kind of support for this built in.

There are probably a few ways to get this to be lightning fast once you
have some cooperation from the already created image; here is one idea:

  o You have the image created with a kernel with virtualization
    features; specifically you will want the 9p 'virtfs' filesystem.

  o You have the initramfs `init` script check for the presence
    of the 9p device and mount it if it's there (this is where you
    will have the build output "prefix" of only the modules you want
    overridden, which are already staged on the host filesystem).

  o You ensure that the system ld.so.conf is configured in such
    a way that the mounted virtfs 'libdir' location takes precedence,
    and probably run ldconfig (not sure that's needed); see the
    sketch after this list.

    This would ensure that a rebuilt 'glib' is the effective glib
    for the whole system's boot sequence.

  o Something similar needs to be done for core system applications
    like gnome-control-center or the like, to ensure the execs in the
    thing you've mounted take precedence over the copies in the image.

    Probably you don't want to care about gdm, gnome-keyring and system
    services, from a Builder perspective.
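
As a sketch of the boot-time half of that idea (all of this is
hypothetical glue, not existing BuildStream functionality; the mount
tag and paths are made up):

  # Hypothetical initramfs `init` fragment: if a 9p virtfs export is
  # present (mount tag 'overrides'), mount it and give its libdir
  # precedence over the image's own libraries.
  if grep -q 9p /proc/filesystems; then
      mkdir -p /overrides
      mount -t 9p -o trans=virtio,version=9p2000.L overrides /overrides
      echo "/overrides/lib" > /etc/ld.so.conf.d/00-overrides.conf
      ldconfig
  fi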

Something along those lines would allow you to reuse the same image
across multiple Builder user sessions and always be able to rebuild
something against existing pre-built dependencies, and boot an image
immediately after the build completes.

> For the application case, we certainly just want to inject the
> Flatpak'd
> build of the app from Builder rather than a traditional build.
> 
> As you can imagine, it would be hell if we had users downloading full
> bootable images regularly. Do you anticipate a way to publicly expose
> the OSTree even for bootable images? Would it be reasonable for
> Builder to use that to keep things in sync?

From the BuildStream maintenance perspective I much prefer to keep the
artifact caching implementation details a "black box" (I want to keep
the door open for later supporting OSes other than just Linux).

However, it should be easy enough to have BuildStream download it for
you and check it out to a local directory, which would have the same
benefits.
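
In mock-shell terms, presumably something like this (assuming a
checkout-style subcommand exists; the exact name and arguments may
differ):

  host$ bst build gnome-system-image.bst
  host$ bst checkout gnome-system-image.bst ~/images/gnome-master
  # Builder could then boot or sync from the local checkout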

Makes sense ?

Cheers,
    -Tristan



Re: GNOME Build situation and BuildStream

2017-04-25 Thread Sasa Ostrouska
Woow, long one really. Ok, I think the idea is really good. Of course a
lot of work. I, as a maintainer of a GNOME desktop version for Slackware,
would like to ask how this would handle the distros which do not use
systemd ?

Rgds
Saxa
www.droplinegnome.org

On Tue, Apr 25, 2017 at 4:38 PM, Tristan Van Berkom
<tristan.vanber...@codethink.co.uk> wrote:

> [...]

Re: GNOME Build situation and BuildStream

2017-04-25 Thread Matthias Clasen
On Tue, Apr 25, 2017 at 12:38 PM, Tristan Van Berkom
<tristan.vanber...@codethink.co.uk> wrote:

> [...]
>
> Feedback and involvement in any form are greatly appreciated; are
> there parts of the picture you think we've missed ? Please reply and
> tell us about them :)

My feedback is: too long!!!

If it takes 20 pages to describe, nobody is ever going to get to the bottom
of it, and meaningful feedback will be hard to come by.

Re: GNOME Build situation and BuildStream

2017-04-25 Thread Christian Hergert
On 04/25/2017 09:38 AM, Tristan Van Berkom wrote:
> Any questions about what we have created so far and how it works ? Please
> reply and ask about these !

I don't think this was mentioned, apologies if I missed it.

One thing we want in Builder is a simulator. Being able to take a
BuildStream bootable image and overlay the project is a very desirable
feature. It could be a patched gnome-shell, glib, or an application. It
would be great if your "workspace" feature can allow us to do this on
the developer host rather quickly (so we don't wait minutes to launch
the simulator).

For the application case, we certainly just want to inject the Flatpak'd
build of the app from Builder rather than a traditional build.

As you can imagine, it would be hell if we had users downloading full
bootable images regularly. Do you anticipate a way to publicly expose
the OSTree even for bootable images? Would it be reasonable for Builder
to use that to keep things in sync?

-- Christian


GNOME Build situation and BuildStream

2017-04-25 Thread Tristan Van Berkom
TL;DR: We are working to improve build tooling for GNOME software
       development, and are very interested in GNOME developer community
       feedback on the BuildStream approach we are taking, and how to
       achieve a successful transition.


Hi all,

By now many participants of this list are already aware of our efforts
on the BuildStream tooling from reading my blog posts ([0] and [1]),
which we aspire to use in GNOME to improve GNOME's developer experience
as well as the process around maintaining GNOME releases.

At this time I would like to start a more open conversation about our plans,
ensure that we are all aligned on what the best approach should be for building
GNOME and also look at how we can implement a better build story for GNOME,
hopefully over the following months.

There is a lot to say here as it's a big topic, so to structure this a bit,
I'll talk about:

  o What are the existing use cases of building GNOME ?
    o What are the pain points related to these use cases ?
  o How do we plan to address these pain points using BuildStream ?
  o How would we implement BuildStream in GNOME with a smooth transition ?


What are the existing use cases of building GNOME ?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

  o A developer uses JHBuild to build and hack on parts of
    GNOME on their local laptop or desktop machine, which
    can be any Linux distro.

    Pain Points:

    * A non-deterministic set of dependencies causes builds to fail in
      random, unpredictable ways, which can be off-putting for newcomers
      and also an annoyance to seasoned developers who want to fix some
      specific bug, but get caught up fixing other bugs instead.

    * It's difficult to debug core components such as the interactions
      of gnome-keyring, Linux-PAM, GDM, GNOME Session, GNOME Shell.

      Manual tinkering is needed: you either need a separate machine or
      a VM you've set up manually to recreate login scenarios and
      gnome-initial-setup scenarios, ensuring a seamless handoff to the
      initial user with their online accounts set up and all of this.

  o The release team needs to build and publish GNOME releases

    Pain Points:

    * Non-deterministic dependencies again make things unclear as to
      what exactly we are releasing.

      E.g., we might know that this vector of GNOME module versions
      works well on some specific distro it has been tried with, but
      we can't know for sure that the result of a JHBuild of GNOME
      will behave the same on any person's distro.

      By the same logic, it becomes difficult as time passes to
      build older releases of GNOME in the future on more modern
      dependency sets.

    * Current tooling does not allow for any distinction between
      a specific version of something (be it a commit sha in a git
      repository or a specific tarball) and a symbolic branch name.

      With JHBuild (or flatpak-builder for that matter), you must
      either specify a symbolic branch, or a specific commit.

      Advertising a specific release involves a lot of tedious
      editing of build metadata to communicate new versions of a stable
      or development release set manually.

  o The Flatpak maintainers currently maintain their own set
    of build metadata to build the GNOME SDK and runtime.

    Pain Points:

    * Arguably the flatpak maintainership should not be accountable
      for maintaining the GNOME Runtime and SDK builds. I think we
      mostly agree by now that it would be great to have the GNOME
      release team in control of their own flatpak builds.

      There is however a large overlap of libraries and tooling
      which must appear in the GNOME Runtimes/SDKs and must also
      appear on an operating system running GNOME (libraries and
      services to support the display manager, the shell,
      the session, control center, etc.).

      Maintaining these sets of build metadata in separate formats
      and separate repositories is both burdensome and error prone:
      releases are not communicated atomically and nothing guarantees
      that GNOME 3.24 moduleset versions and GNOME 3.24 flatpak runtime
      versions coincide at all times.

  o Application module maintainers need to either build Flatpaks
    of their module or at least provide some flatpak-builder json
    for a third party to build it on their behalf.

    Pain Points:

    * Here there is not much in terms of pain points for the author
      of a Flatpak application. As long as the only bundled format
      your application wants to support is Flatpak, you'll only
      need to maintain one set of build metadata.

  o The CI story of building GNOME on automated build machines.

    Pain Points:

    * Here again the problem of maintaining multiple sets of
      build metadata in various formats comes up for the release team.

      We should ideally be using the same build metadata for building