Developer awareness is absolutely part of it, and I don't think we're there
yet, based on this thread. But I'm also a perf-regression extremist :)

The tooling is the other part, and it sounds like the only tooling we have
right now is for self-service awareness.

If you need better tooling, and people to build the automation required to
ship performant software, then Gaia as a group should demand that from the
people doing the prioritization and hiring for the project.

In our previous experience, the ideal automation and tooling almost always
comes from the developers involved, or at least from people fully integrated
into that project. I recommend that, as a group, you design (and possibly
even build) the perf automation you want to see in the world. That will work
better than standing up a separate team and making it their responsibility.

On Tue, Oct 6, 2015 at 7:55 PM Etienne Segonzac <[email protected]> wrote:

>
> On Tue, Oct 6, 2015 at 7:21 PM, Eli Perelman <[email protected]>
> wrote:
>
>> On Tue, Oct 6, 2015 at 11:58 AM, Etienne Segonzac <[email protected]>
>> wrote:
>>
>>>
>>> So we're fine with the system that didn't work for 2.5 and we're making
>>> no promise for the future.
>>> Nice commitment to performance.
>>>
>>
>> I'd hardly say that what we've built doesn't have a positive impact
>> towards performance. The fact that this conversation can even exist with
>> real data is a testament to how far we've come. The intent of my email was
>> not to back people into corners and play blame games, but just to shine a
>> light as to what things look like right now so owners and peers have
>> ammunition to make decisions. Let me repeat what I just said, because it is
>> the crux of the problem:
>>
>>
>>
>> *The current scope of performance automation is not in a state where it
>> can be automatically sheriffed. We've built the tools and infrastructure
>> almost from the ground up, and they have accomplished exactly what they
>> set out to do: put knowledge of performance and its problems into the
>> hands of owners and peers so they can make their own decisions.*
>> Automating performance is usually not a binary decision like a unit test.
>> It takes analysis and guesswork, and even then it still needs human eyes.
>> Rob and I are working towards making this better, to automate as much as
>> possible, but right now the burden of making the tough calls still lies
>> with those landing patches. We equip you to make those determinations until
>> we have more tooling and automation in place for the sheriffing to actually
>> be an option, because right now *it is not*.
>>
>
>
> We're back to my first message on the thread.
> We don't have the adequate tooling to achieve our performance goal.
>
> Every release we talk about performance as if the issue were a "developer
> awareness" issue, and we take a strong stance on how "we should never
> regress".
> But if we meant it, we'd have more than 2 people working on the very
> challenging tooling work required. And believe me, I'm fully aware of how
> challenging it is.
>
> We can't hand every Gaia and Gecko developer a link to MozTrap (our manual
> test case tracker), remove all automated tests, and then be all high-minded
> about how we should never regress a feature. But that's exactly what we're
> doing with launch-time performance.
>
_______________________________________________
dev-fxos mailing list
[email protected]
https://lists.mozilla.org/listinfo/dev-fxos
