+Thomas and Qian from our validation team, in case they have any insight
on build CI.

On Fri, Sep 08, 2017 at 02:55:08PM +0100, Bruce Richardson wrote:
> On Fri, Sep 08, 2017 at 07:57:06AM -0400, Neil Horman wrote:
> > On Fri, Sep 08, 2017 at 09:50:26AM +0100, Bruce Richardson wrote:
> > > On Thu, Sep 07, 2017 at 12:21:57PM -0400, Neil Horman wrote:
> > > > On Fri, Sep 01, 2017 at 11:04:00AM +0100, Bruce Richardson wrote:
> > > > > To build with meson and ninja, we need some initial infrastructure
> > > > > in place. The build files for meson always need to be called
> > > > > "meson.build", and options get placed in meson_options.txt
> > > > > 
> > > > > This commit adds a top-level meson.build file, which sets up the
> > > > > global variables for tracking drivers, libraries, etc., and then
> > > > > includes other build files, before finishing by writing the global
> > > > > build configuration header file and a DPDK pkgconfig file at the
> > > > > end, using some of those same globals.
> > > > > 
> > > > > From the top level build file, the only include file thus far is
> > > > > for the config folder, which does some other setup of global
> > > > > configuration parameters, including pulling in architecture-specific
> > > > > parameters from an architectural subdirectory. A number of
> > > > > configuration build options are provided for the project to tune a
> > > > > number of global variables which will be used later, e.g. max numa
> > > > > nodes, max cores, etc. These settings all make their way to the
> > > > > global build config header "rte_build_config.h". There is also a
> > > > > file "rte_config.h", which includes "rte_build_config.h", and this
> > > > > file is meant to hold other build-time values which are present in
> > > > > our current static build configuration but are not normally meant
> > > > > for user configuration. Ideally, over time, the values placed here
> > > > > should be moved to the individual libraries or drivers which want
> > > > > those values.
> > > > > 
> > > > > Signed-off-by: Bruce Richardson <bruce.richard...@intel.com>
> > > > > Reviewed-by: Harry van Haaren <harry.van.haa...@intel.com>
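[For anyone following along who hasn't used meson before, the layout the
commit message describes might look roughly like the sketch below. This is
purely illustrative -- the variable names, option names and pkg-config
details are assumptions, not necessarily what the patch itself contains.]

```meson
# --- meson.build (top level) -- illustrative sketch only ---
project('DPDK', 'c', version: '17.11.0')

# global state that subdirectories append to / modify
dpdk_libraries = []
dpdk_drivers = []
dpdk_conf = configuration_data()

# pull in global configuration, including per-architecture settings
subdir('config')

# ... subdir('lib'), subdir('drivers'), etc. would follow here ...

# write out the accumulated configuration as the global build header
configure_file(output: 'rte_build_config.h',
               configuration: dpdk_conf)

# and generate a pkg-config file from the same globals
pkg = import('pkgconfig')
pkg.generate(name: 'DPDK',
             description: 'The Data Plane Development Kit',
             libraries: dpdk_libraries)

# --- meson_options.txt -- user-tunable settings, sketch only ---
option('max_numa_nodes', type: 'string', value: '4',
       description: 'maximum number of NUMA nodes supported')
option('max_lcores', type: 'string', value: '128',
       description: 'maximum number of cores/threads supported')
```

Options declared in meson_options.txt are what a user would set via
"meson configure -Dmax_lcores=256", and the config/ subdirectory would be
responsible for copying them into dpdk_conf so they land in
rte_build_config.h.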
> > > > 
> > > > I feel like I need to underscore my previous concern here.  While
> > > > I'm not opposed per se to a new build system, I am very concerned
> > > > about the burden that switching places on downstream consumers, in
> > > > particular distributions (since I represent one of them).  Moving to
> > > > a new build system with new tools means those tools need to be
> > > > packaged, tested and shipped, which is a significant work effort.
> > > > While it might be a net gain long term, it's something you need to
> > > > keep in mind when making these changes.
> > > > 
> > > Understood. If there is anything we/I can do to make this transition
> > > easier, please flag it for consideration.
> > > 
> > Thank you, I appreciate that.
> > 
> > > > I know you've said that we will be keeping the existing build
> > > > system, I just need to be sure everyone understands just how
> > > > important that is.
> > > > 
> > > What is your feeling here, in terms of timescale. After any new system
> > > reaches feature parity, how long would you estimate that we would need
> > > to support the existing makefile system before it would be safe to
> > > deprecate it? Should we start a deprecation plan, or is it best just to
> > > commit to support both until we get all - or almost all - downstream
> > > consumers switched over? While I wouldn't push for deprecating the old
> > > system any time soon, and I wouldn't consider maintaining the two
> > > unduly burdensome, it's not something we want to do in the long term.
> > > 
> > I was hoping to avoid putting a specific time frame on it, but it's a
> > fair question to ask.  I feel like any particular timetable is somewhat
> > arbitrary.  Keith suggested a year, which is likely as good as any in
> > my mind.  To put a bit more detail behind it, a RHEL release cycle is
> > anywhere from 6 to 18 months, so a year fits well.  If we assume,
> > starting a few weeks back when you first proposed this change, that
> > it's going to be merged, that gives us time to package the build
> > components, build the new package using them, get it through a QA
> > cycle, and fix anything that pops up as a result.  That way, when the
> > switch is made, it can come with an immediate deprecation of the old
> > build system, with some confidence that even the more esoteric build
> > targets/configs will likely work.
> > 
> > > > Though perhaps the time frame for keeping the current build system
> > > > as primary is less concerning, as feature parity is even more
> > > > critical.  That is to say, the new build system must be able to
> > > > produce the same configurations that the current build system does.
> > > > Without that I don't think anyone will be able to use it
> > > > consistently, and that will leave a great number of users in a very
> > > > poor position.  I think getting a little closer to parity with the
> > > > current system is warranted.  I'd suggest as a gating factor:
> > > > 
> > > > 1) Building on all supported arches
> > > > 2) Cross building on all supported arches
> > > > 3) Proper identification of the targeted machine (i.e. the
> > > > equivalent of the machine component of the current build system)
> > > > 
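[For context on point 2: meson handles cross builds via a user-supplied
"cross file" passed on the command line, so the equivalent of today's
cross-compile targets would presumably look something like the fragment
below. The file name, toolchain triplet and CPU values here are
illustrative assumptions, not anything from the patch set.]

```ini
# arm64_cross.txt -- hypothetical meson cross file for an aarch64 target
[binaries]
c = 'aarch64-linux-gnu-gcc'
ar = 'aarch64-linux-gnu-ar'
strip = 'aarch64-linux-gnu-strip'

[host_machine]
system = 'linux'
cpu_family = 'aarch64'
cpu = 'armv8-a'
endian = 'little'
```

A build would then be configured with something like
"meson build --cross-file arm64_cross.txt && ninja -C build". The
[host_machine] section is also where point 3, machine identification,
would naturally be expressed for cross builds.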
> > > The question there is gating factor for what? Presumably not for
> > > merging into the staging tree, but for merging into the main tree for
> > > releases? I'd push back a little on that, as the new system does not
> > > interfere in any way with the old, and keeping it in a staging tree
> > > until it reaches full feature parity would make the job considerably
> > > harder. For example, it means that anyone submitting a new driver or
> > > library has to submit the code and makefiles in one set and the meson
> > > patches in a separate one for a separate build tree. It also makes it
> > > less likely that people will try out the new system, find the issues
> > > with it, and help fill in the gaps. While I can understand us not
> > > recommending the new build system until it reaches feature parity, I
> > > think there are a lot of benefits to be got by making it widely
> > > available, even if it's incomplete.
> > 
> > Yes, sorry, the implied "what" here is gating its introduction to
> > mainline.  I have no problem with this going into a development or
> > staging branch/tree, only with it getting merged to mainline and
> > becoming the primary build system today.  I get that it makes reaching
> > feature parity harder, but not doing so relegates anyone that hasn't
> > had a chance to test the new build system to second-class citizen
> > status (or at least potentially does so).  To be a bit more specific,
> > I can see how energized you might be to get this in place now because
> > you've tested it on a wide array of Intel hardware, but I'm guessing
> > that if it went in today, people at IBM and Linaro would have to drop
> > active development to start switching their build environments over to
> > the new system lest they get left out in the cold.  I think it's more
> > about balancing where the hardship lies here.
> > 
> > As I'm writing this, I wonder if a reasonable compromise couldn't
> > involve the use of CI?  That is to say, what if we integrated the
> > build system now-ish, and stood up an official CI instance, that both:
> > 1) builds DPDK in all supported configurations using the old build
> > system (i.e. we implicitly mandate that the current build system stays
> > working, and is not forgotten), and gates patch merges on that result;
> > and
> > 2) adds a test that any change to a meson file in mainline also
> > includes a change to a Makefile.
> > 
> > I'm just spitballing here, but I'm looking for ways to enforce the
> > continued use of the current build system above and beyond a verbal
> > promise to do so.  The idea is to ensure that it stays operational and
> > primary to the development of DPDK until build system parity is
> > reached.
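[A first cut of check (2) could be as simple as a script run over the
list of files a patch touches. The sketch below is hypothetical and the
pairing rule deliberately naive -- it only requires a Makefile change in
the same directory as a changed meson.build; a real check would need
whatever policy the project actually agrees on.]

```python
# Hypothetical CI helper: flag patches that touch a meson.build without
# also touching a Makefile in the same directory (naive pairing rule).
import os


def meson_changes_missing_makefile(changed_files):
    """Return directories where meson.build changed but no Makefile did."""
    meson_dirs = {os.path.dirname(f) for f in changed_files
                  if os.path.basename(f) == 'meson.build'}
    makefile_dirs = {os.path.dirname(f) for f in changed_files
                     if os.path.basename(f) == 'Makefile'}
    return sorted(meson_dirs - makefile_dirs)


# Example: lib/eal updates both build files, drivers/net/foo only meson
changed = ['lib/eal/meson.build', 'lib/eal/Makefile',
           'drivers/net/foo/meson.build']
print(meson_changes_missing_makefile(changed))  # ['drivers/net/foo']
```

In a real CI job the changed-file list would come from something like
"git diff --name-only" against the merge base, and gating would then be
a matter of failing the job whenever the returned list is non-empty.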
> > 
> 
> Yes, that all makes sense, and I agree that the main issue is how to
> ensure/enforce that the old build system stays as the main way to build
> DPDK, until such time as the new system reaches parity. A CI system
> like you describe would be the ideal case, and we have some parts of
> that, just not enough, I think, to satisfy our needs. On the other
> hand, while holding the new system in a staging tree till it reaches
> parity would definitely achieve that objective, I think the cost of
> that is too high.
> 
> Right now patchwork reports results of "Intel-compilation". I'm not
> aware of any immediate plans to expand that - somebody please shout if
> there are. There is also the planned DPDK lab, but I'm not sure of the
> schedule for that or whether it will help in this instance.
> 
> For now, I'll prepare a slightly-updated V2 patchset, with a view to
> getting work started on the build-next tree. While that is going on, we
> can maybe continue to think about how to put in suitable guard-rails for
> ensuring old system compatibility for when it's pulled to mainline.
> 
> Thanks again for raising your concerns about this. It's easy for me to
> sometimes push too hard for something I'm enthusiastic about! :-) I do
> want this migration done properly, so if you have the bandwidth I'd
> appreciate any reviews or feedback you can give, as the work goes on.
> 
> Regards,
> /Bruce
> 
