Re: Changing reftest required resolution

2012-08-28 Thread Benoit Girard
I've already done this work, but we decided to just increase the
resolution for our tegra boards:

See https://bugzilla.mozilla.org/show_bug.cgi?id=66
which includes an outdated patch that adds a screen(w,h) annotation to
each test and a patch to compute the required size per test.
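
A harness-side filter over such annotations could be sketched like this; the `screen(w,h)` manifest syntax shown here is a simplified stand-in for whatever format the actual patch uses:

```python
import re

# Hypothetical manifest lines carrying a screen(w,h) annotation per
# test; the real reftest manifest grammar differs from this.
ANNOTATION = re.compile(r"screen\((\d+),(\d+)\)\s+(\S+)")

def runnable_tests(manifest_lines, max_width, max_height):
    """Yield tests whose required resolution fits the target window."""
    for line in manifest_lines:
        m = ANNOTATION.match(line.strip())
        if not m:
            continue
        w, h, test = int(m.group(1)), int(m.group(2)), m.group(3)
        if w <= max_width and h <= max_height:
            yield test

manifest = [
    "screen(400,300) reftest-small.html",
    "screen(800,1000) reftest-full-page.html",
]
print(list(runnable_tests(manifest, 400, 400)))
```

With a 400x400 target the full-page test is skipped; with the original 800x1000 window both would run.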

On Tue, Aug 28, 2012 at 1:35 PM, Jeff Hammel  wrote:
> If the exact width/height could be found for each test then these could be
> marked in a manifest (theoretically, not speaking to the existing reftest
> manifest format per se).  Then reftest could be modified to take a (e.g.)
> --resolution 400x400 argument, with the test runner passing over any test
> that exceeds either dimension.
>
> Just a thought.  Not sure how feasible this is.
>
>
> On 08/28/2012 09:52 AM, Andrew Halberstadt wrote:
>>
>> == The Problem ==
>>
>> Reftests currently assume a window size of 800x1000. This is not possible
>> on mobile devices, and using this resolution on tegras and pandas consumes
>> too much memory and results in many timeouts and random-failures.
>>
>> Changing the resolution to something like 400x400 is very stable and only
>> results in a handful of failures. The problem is that it's difficult to
>> figure out how many tests become false positives as a result of lost coverage.
>>
>> See also: https://bugzilla.mozilla.org/show_bug.cgi?id=737961
>>
>>
>> == Possible Solutions ==
>>
>> * Somehow figure out the exact pixel width/height required by each test
>> and re-write the ones that are over the new target resolution
>>
>> * Scroll the contentView and use multiple drawWindow calls to stitch the
>> canvases together piece by piece
>>
>> * Disable the DRAW_WIDGET_LAYERS flag so content is rendered off screen
>> (I'm under the impression this is not a valid solution)
>>
>> * ?
>>
>> Things to keep in mind:
>>
>> * Needed for fennec and b2g (must work oop)
>> * Some of the tests are shared (apparently?) outside of mozilla
>>
>>
>> == Feedback ==
>>
>> I'm looking for feedback on the best way to move forward. Also, does
>> anyone have any opinions on what the new target resolution should be?
>>
>> Andrew
>> ___
>> dev-platform mailing list
>> dev-platform@lists.mozilla.org
>> https://lists.mozilla.org/listinfo/dev-platform


RE: HTML depth limit?

2012-10-29 Thread Benoit Girard
I actually ran into the same problem with the cleopatra tree widget.
Each tree level adds 2 or 3 levels to the DOM for that page, so I think
the expansion stops after 100 to 200 levels.

Ehsan mentions that we have a limit on our frame tree, as we use
recursion, but I don't know where this code lives.
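
A quick way to probe a limit like this from outside the engine is to generate a pathological page and see where expansion stops; a minimal generator (depth and file name are arbitrary):

```python
def nested_document(depth):
    """Build an HTML document with `depth` nested <div> elements,
    useful for probing an engine's frame-tree recursion limit."""
    open_tags = "<div>" * depth
    close_tags = "</div>" * depth
    return "<!DOCTYPE html><html><body>%sdeepest%s</body></html>" % (
        open_tags, close_tags)

# Write a 500-level page to load manually in the browser under test.
with open("depth-500.html", "w") as f:
    f.write(nested_document(500))
```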

On Mon, Oct 29, 2012 at 2:47 PM, Jan Honza Odvarko  wrote:
> Is there any depth limit for HTML elements in a document?
>
> Related Firebug bug report:
> http://code.google.com/p/fbug/issues/detail?id=5780
>
> Honza


Re: Try Server wait times - please cancel unwanted/busted runs

2012-12-07 Thread Benoit Girard
Is there an API we can query to know what the estimated wait time or load
for a slave pool is? Perhaps 'http://trychooser.pub.build.mozilla.org/'
could be modified to give an indication of the load for a particular
platform. I would be more mindful about balancing my load if the
information were provided.

On Fri, Dec 7, 2012 at 9:24 AM, Ed Morley wrote:

> On Thursday, 27 September 2012 14:30:01 UTC+1, Ed Morley  wrote:
> > > In the meantime please can everyone remember to cancel unwanted/busted
> Try runs, to help the overall wait time.
> > >
> > > Builds/tests can be cancelled all at once, or on a per job basis:
> > > http://people.mozilla.com/~emorley/misc/tbpl-cancel-buttons.png
> >
> > A quick reminder:
> >
> > Please can people cancel unwanted Try runs -- a glance at Try right now
> shows many runs that could be cancelled to help with wait times. See the
> image above for how to mass-cancel via TBPL.
> >
>
> A few months on, another quick reminder about this - many more people are
> now cancelling their builds (thank you <3), but looking at Try over the
> last few days still shows more people that could be.
>
> This is particularly crucial at the moment, due to the shortage of linux32
> test slaves (which run both the B2G emulator tests & the linux32 tests).
> See bug 818833 for more info.
>
> Thank you :-)
>
> Ed


Re: Try Server wait times - please cancel unwanted/busted runs

2012-12-07 Thread Benoit Girard
If we could expose the data via a cross-domain API in a text format, I could
modify trychooser to display loaded platforms.


On Fri, Dec 7, 2012 at 2:06 PM, Ehsan Akhgari wrote:

> On 2012-12-07 1:54 PM, Benoit Girard wrote:
>
>> Is there an API we can query to know what the estimated wait time or load
>> for a slave pool is? Perhaps 'http://trychooser.pub.build.mozilla.org/'
>> could be modified to give an indication of the load for a particular
>> platform. I would be more mindful at balancing my load if the information
>> was provided.
>>
>
> <http://build.mozilla.org/builds/pending/try.html>
> is one way of getting that information.
>
> Cheers,
> Ehsan
>
>


Re: Try Server wait times - please cancel unwanted/busted runs

2012-12-19 Thread Benoit Girard
I filed a bug for doing what I suggested. Turns out we don't need CORS
headers for a GET:
https://bugzilla.mozilla.org/show_bug.cgi?id=823135
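
Once such an endpoint exists, ranking the busiest platforms is trivial; a sketch, assuming a hypothetical JSON body mapping platform names to pending-job counts (the real build.mozilla.org pages serve HTML, so the endpoint shape here is an assumption):

```python
import json
from urllib.request import urlopen

def busiest_platforms(pending, limit=3):
    """Rank platforms by pending try jobs, busiest first."""
    return sorted(pending.items(), key=lambda kv: kv[1], reverse=True)[:limit]

def fetch_pending(url):
    """Fetch pending-job counts; assumes a JSON body such as
    {"linux32": 140, "win7": 12} -- that shape is hypothetical."""
    with urlopen(url) as resp:
        return json.loads(resp.read().decode("utf-8"))

print(busiest_platforms({"linux32": 140, "win7": 12, "osx": 30}))
# -> [('linux32', 140), ('osx', 30), ('win7', 12)]
```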


On Fri, Dec 7, 2012 at 5:02 PM, Benoit Girard  wrote:

> If we could expose the data via a cross domain API in text format I can
> modify trychooser to display loaded platforms.
>
>
>
> On Fri, Dec 7, 2012 at 2:06 PM, Ehsan Akhgari wrote:
>
>> On 2012-12-07 1:54 PM, Benoit Girard wrote:
>>
>>> Is there an API we can query to know what the estimated wait time or load
>>> for a slave pool is? Perhaps 'http://trychooser.pub.build.mozilla.org/'
>>> could be modified to give an indication of the load for a particular
>>> platform. I would be more mindful at balancing my load if the information
>>> was provided.
>>>
>>
>> <http://build.mozilla.org/builds/pending/try.html>
>> is one way of getting that information.
>>
>> Cheers,
>> Ehsan
>>
>>


GTest has landed

2013-02-25 Thread Benoit Girard
Hello dev.platform,

GTest has landed this weekend on mozilla-central[1]. It should now be
ready for developers to start writing tests. It will appear on
tinderbox once it is built with '--enable-tests'. For more details see
the documentation: https://developer.mozilla.org/en-US/docs/GTest

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=767231
[2] 
http://benoitgirard.wordpress.com/2013/02/25/gtest-has-landed-start-writing-your-unit-tests/

Benoit Girard


Re: GTest has landed

2013-02-25 Thread Benoit Girard
They're not, and that's a great suggestion. I filed bug 844869.

On Mon, Feb 25, 2013 at 11:04 AM, L. David Baron  wrote:
> On Monday 2013-02-25 10:57 -0500, Benoit Girard wrote:
>> GTest has landed this weekend on mozilla-central[1]. It should now be
>> ready for developers to start writing tests. It will appear on
>> tinderbox once it is built with '--enable-tests'. For more details see
>> the documentation: https://developer.mozilla.org/en-US/docs/GTest
>>
>> [1] https://bugzilla.mozilla.org/show_bug.cgi?id=767231
>> [2] 
>> http://benoitgirard.wordpress.com/2013/02/25/gtest-has-landed-start-writing-your-unit-tests/
>
> When tests are run in debug builds, are NS_ASSERTIONs fatal, as they
> are for |make check| tests?  They should be, and the fewer tests you
> currently have, the easier it is to make them fatal.
>
> -David
>
> --
> 𝄞   L. David Baron http://dbaron.org/   𝄂
> 𝄢   Mozilla   http://www.mozilla.org/   𝄂


Re: reorganizing some test directories

2013-04-10 Thread Benoit Girard
With the fix to bug 844288, gtests will also need their own directory. I was
planning on allowing people to use /tests for their tests,
but to fall in line with this upcoming change I'll update my patches and
suggest that anyone who adds gtests use /tests/gtest to
conform with this new style.

I was under the impression this would make our builds more recursive, and
therefore more expensive.


On Wed, Apr 10, 2013 at 3:07 PM, jmaher  wrote:

> There are a couple common directory structures used for storing tests in
> the tree:
> 1) /tests
> 2) /tests/
>
> I have a series of patches which will move most of the directory
> structures from #1 to a format of #2.  This means we would see:
> /tests/mochitest
> /tests/browser
> /tests/chrome
>
> I have noticed this happens naturally in the newer folders (e.g.
> browser/metro).  A couple weeks ago I heard a concern this might bitrot
> some patches; while that might happen, in the last couple weeks only a
> handful of directories had tests which changed (added/deleted/edited) such
> that my patches experienced bitrot. Please comment here or in bug 852065, as
> I would like to land these patches by this weekend.


Doxygen For Mozilla-Central Modules

2013-05-01 Thread Benoit Girard
I've started to run doxygen on a fresh mozilla-central by cron once a day
in the hopes of encouraging source code documentation. I run doxygen on
submodules only for users that are interested in the output. You can see the
latest results here:

http://people.mozilla.com/~bgirard/doxygen

And here is an example of a non-trivial class, GLContext:
http://people.mozilla.com/~bgirard/doxygen/gfx/classmozilla_1_1gl_1_1GLContext.html

You can see my script and configuration files here:
https://github.com/bgirard/doxygen-mozilla

I'll be accepting requests to run doxygen on additional submodules. There
are several problems with the configuration (e.g. the include path) that,
if fixed, could improve the results; I do NOT plan on fixing these myself
but will gladly accept a pull request. Note that the results appear to be
sufficiently useful even without these fixes.

-Benoit Girard


Re: Doxygen For Mozilla-Central Modules

2013-05-01 Thread Benoit Girard
Yes I had found that one but it was last updated in 2011. I want something
that is updated daily.


On Wed, May 1, 2013 at 12:31 PM, Andrew McCreight wrote:

> I've also come across somebody running doxygen on mozilla code here:
>  http://doxygen.db48x.net/mozilla/html/
>
> It shows up in google searches when you search Mozilla-y class names, but
> I don't know who or what runs it.
>
> Andrew
>
> - Original Message -
> > I've started to run doxygen on a fresh mozilla-central by cron once a
> > day
> > in the hopes of encouraging source code documentation. I run doxygen
> > on sub
> > modules only for users that are interested in the output. You can see
> > the
> > latest results here:
> >
> > http://people.mozilla.com/~bgirard/doxygen
> >
> > And here is an example of a non trivial class GLContext:
> >
> http://people.mozilla.com/~bgirard/doxygen/gfx/classmozilla_1_1gl_1_1GLContext.html
> >
> > You can see my script and configuration files here:
> > https://github.com/bgirard/doxygen-mozilla
> >
> > I'll be accepting requests to run doxygen on additional submodules.
> > There
> > are several problems with the configuration files that could improve
> > the
> > results (e.g. include path) that I do NOT plan on fixing but will
> > gladly
> > accept a pull request. Note that the results appear to be
> > sufficiently
> > useful without these fixes.
> >
> > -Benoit Girard


Re: Doxygen For Mozilla-Central Modules

2013-05-01 Thread Benoit Girard
Right now doxygen runs directly on the source code, so it's not trivial to
run it over generated files. I'd be happy to accept a pull request that
builds and indexes dist/idl.


On Wed, May 1, 2013 at 12:35 PM, Joshua Cranmer 🐧 wrote:

> On 5/1/2013 11:21 AM, Benoit Girard wrote:
>
>> I'll be accepting requests to run doxygen on additional submodules. There
>> are several problems with the configuration files that could improve the
>> results (e.g. include path) that I do NOT plan on fixing but will gladly
>> accept a pull request. Note that the results appear to be sufficiently
>> useful without these fixes.
>>
>
> Would it be possible to run doxygen just on the entire dist/idl
> directory?
>
> --
> Joshua Cranmer
> Thunderbird and DXR developer
> Source code archæologist
>
>


Re: Doxygen For Mozilla-Central Modules

2013-05-01 Thread Benoit Girard
On Wed, May 1, 2013 at 12:47 PM, Ralph Giles  wrote:

> You might consider putting only the variables you've
> changed in your Doxyfiles, relying on the defaults for everything else.


Thanks for the feedback. I started with the config in
config/doxygen.cfg.in, but it does seem significantly out of date. I'll
take a look if I notice significant problems with the current results.


Re: Standalone GTests

2013-05-08 Thread Benoit Girard
If you plan on adding more tests to GTest, please build against the patches
in bug 844288, which will hopefully land soon. Note that I have one
outstanding problem left on Windows, where the linking fails only on TBPL
jobs.

I personally have a strong preference for keeping tests, particularly unit
tests, running fast. My workflow is to develop new code by testing it via
unit tests, so longer turnaround times slow down my development workflow.
Sadly we don't have a good way of dealing with slow-but-efficient tests,
and they end up making it difficult to test locally. But this isn't a
reason to reject a useful test, so feel free to check it in. Perhaps we
should consider introducing a concept of smoke tests for developers to run
locally, where we can aim to maintain a good trade-off between code
coverage and turnaround times. But that's orthogonal to your request. I
think splitting GTest into a separate test job should be a prerequisite, so
we don't slow down the build job significantly; as I understand it, a
longer build job delays all test jobs from starting and increases overall
test turnaround times.




On Wed, May 8, 2013 at 1:43 PM, Adam Roach  wrote:

> On 5/8/13 12:10, Gregory Szorc wrote:
>
>> I think this is more a question for sheriffs and people closer to
>> automation. Generally, you need to be cognizant of timeouts enforced by our
>> automation infrastructure and the scorn people will give you for running a
>> test that isn't efficient. But if it is efficient and passes try, you're
>> generally on semi-solid ground.
>>
>
> The issue with the signaling tests is that there are necessarily a lot of
> timers involved, so they'll always take a long time to run. They're pretty
> close to optimally "efficient" inasmuch as there's really no faster way to
> do what they're doing. I suspect you mean "runs in a very short amount of
> time" rather than "efficient."
>
> It should be noted that not running the signaling_unittests on checkin has
> bitten us several times, as we'll go for long period of times with
> regressions that would have been caught if they'd been running (and then
> it's a lot more work to track down where the regression was introduced).
>
> /a
>


Re: Code Review Session

2013-05-24 Thread Benoit Girard
On Fri, May 24, 2013 at 10:30 AM, Benjamin Smedberg
wrote:

> * Automated tools: mhoye has identified lack of automated review as one of
> our biggest blockers to getting more mentors involved and having successful
> mentoring for new volunteers. It turns out that nobody wants to mentor bugs
> when most of the interaction involves "please fix this
> whitespace/style/etc". cc'ing him so he can provide more details.
>

I've got some patches that import WebKit's check-style script to check the
style[1]. It just runs regexes rather than doing a real parse of the code,
but importing it turns out to be low effort/high reward. I've already
started working on a robot to post to bugzilla. Perhaps later we can
replace it with a more intelligent tool.

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=875605
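
The regex approach can be sketched in a few lines; the rules below are illustrative examples, not the actual check-style rule set:

```python
import re

# Example rules in the spirit of WebKit's check-style: each is a
# (regex, message) pair applied per line. Not the real rule set.
RULES = [
    (re.compile(r"[ \t]+$"), "trailing whitespace"),
    (re.compile(r"^\t"), "tab used for indentation"),
    (re.compile(r"if\("), "missing space between 'if' and '('"),
]

def check_style(source):
    """Return (line_number, message) pairs for every rule violation."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RULES:
            if pattern.search(line):
                findings.append((lineno, message))
    return findings

print(check_style("if(x) {   \n\treturn;\n}\n"))
```

No parsing, just per-line pattern matching, which is exactly why it is cheap to import and easy to get wrong on corner cases.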

-Benoit Girard


Re: Code Review Session

2013-05-29 Thread Benoit Girard
On Mon, May 27, 2013 at 10:54 PM, Anthony Jones  wrote:
> A pre-upload check would give the fastest feedback.

I'll be checking in a script in the mozilla repo that can be run offline
and produce the same results.

On Tue, May 28, 2013 at 10:44 AM, Mike Hoye  wrote:

> On 2013-05-27 10:54 PM, Anthony Jones wrote:
>
>> A pre-upload check would give the fastest feedback.
>>
>> It would help me (and those who review my code) if there is an easy way
>> to check my patches from the command line before doing hg bzexport. Even
>> if it is only for white space. What we need is a way to specify which
>> file paths the standard formatting rules apply to. I just need to figure
>> out how to install clang-format.
>>
>
> The clang-format list tells me that there are near-term plans for a
> standalone "clang-tidy" utility built on clang-format that will do much of
> what we're looking for as far as basic code-cleanup goes. I'm asking around
> about what "near-term" means, and if the answer isn't good enough I'm going
> to try to add something to clang-format to give users some moz-style
> guidance as a temporary measure.
>
>
I'd be happy to replace what I'll be rolling out in bug 875605 once
something better comes along. But C++ isn't new, so I'm not holding my
breath :).


Re: You want faster builds, don't you?

2013-09-23 Thread Benoit Girard
On Mon, Sep 23, 2013 at 12:49 AM, Robert O'Callahan wrote:

> I observe that Visual Studio builds do not spawn one cl process per
> translation unit. Knowing how slow Windows is at spawning processes, I
> suspect the build would be a lot faster if we used a single cl process to
> compile multiple translation units.
>
> /MP apparently spawns multiple processes to compile a set of files. That's
> kind of the opposite of what we want here.
>
> I see how compiling several files in the same cl invocation would mess up
> using /showincludes to track dependencies, making this difficult to fix.
> The only possibility I can think of for fixing this is to emit a Visual
> Studio project and make the Visual Studio builder responsible for tracking
> #include dependencies.
>
>
Vlad, Ehsan, and I did just that. With hacky.mk you can generate a fully
working Visual Studio project which will let you make changes to XUL as
necessary. The problem with MSBuild, as used by VS, is that it invokes one
process at a time. As soon as a group of files requires different flags
(and we aren't even close to having unified build flags across the tree),
they require a separate invocation of cl.exe by MSBuild, so you end up with
several invocations of cl.exe running serially; your CPU utilization is far
from 100% while the first invocation winds down to its last 1-2 remaining
object files before the next invocation can start. It takes over an hour
to build with VS using hacky.mk. The benefit of generating a working VS
solution is perfect IntelliSense. For building, it's better to call a good
external build system from your IDE.


Re: unified shader for layer rendering

2013-10-10 Thread Benoit Girard
On Thu, Oct 10, 2013 at 7:59 AM, Andreas Gal  wrote:

> Rationale:
> switching shaders tends to be expensive.
>

In my opinion this is the only argument for working on this at the moment.
Particularly now, when we're overwhelmed with high-priority desktop and
mobile graphics work, I'd like to see numbers before we consider a change.
I have seen no indication that we get hurt by switching shaders. I
suspected it might matter once we start to have 100s of layers in a single
page, but we always fall over for another reason before this can become a
problem. I'd like to be able to answer 'In which use cases would fixing
this lead to a user-measurable improvement?' before working on it. Right
now we have a long list of bugs where we have a clear answer to that
question. Fixing this would let us check off that we're using the GPU
optimally per the GPU best-practice dev guides, and it will later help us
batch draw calls more aggressively, but I'd like to have data to support
this first.

Also, old Android drivers are a bit touchy with shaders, so I recommend
budgeting some dev time for resolving those issues.

I know that roc and nrc have some plans for introducing more shaders, which
will make a unified shader approach more difficult. I'll let them weigh in
here.

On the flip side, I suspect a single unified shader will be faster to
compile than the several shaders we have on the start-up path.


Re: Measuring power usage

2013-11-06 Thread Benoit Girard
You might be interested in bug 769431 where Intel modified power gadget to
export symbols that the profiler can use to sample the power state and
correlate it with execution.


On Tue, Nov 5, 2013 at 11:02 AM, jmaher  wrote:

> I am working on using intel power gadget to measure the power usage.
>  Currently this is on windows with an idle test.  Our test slaves have
> older CPUs which do not support the intel power gadget.


Re: PSA: The profiling branch has shut down

2013-11-12 Thread Benoit Girard
That's correct. It means that benchmarking on Nightlies isn't really
accurate, so beware when running web/JS benchmarks. Also, it is wrong to
assume an average performance cost and scale the Nightly results by a
factor.

We made this decision in the hope that we could better gather performance
data that would, in the end, translate to shipping performance improvements
to release users faster. This comes at a slight cost to Nightly users.


On Sun, Nov 10, 2013 at 6:01 PM, Ehsan Akhgari wrote:

> On 2013-11-09 6:30 PM, Boris Zbarsky wrote:
>
>> On 11/9/13 12:53 PM, Philip Chee wrote:
>>
>>> Not directly related but. Some time back I wanted to turn on profiling
>>> for SeaMonkey on our trunk builds but was vetoed because turning on
>>> profiling (I was told) causes a pref hit.
>>>
>>
>> It does, but a pretty small one, and only on x86-32 (because there is
>> one less register to work with).
>>
>
> Also note that we only enable profiling by default on Nightly and not the
> channels that matter to our users.  But that does mean that performance
> comparisons between Nightly and Aurora on Windows for example are not
> possible unless you do your own non-profiling Nightly builds.
>
> Cheers,
> Ehsan
>
>


Re: OMTC for Windows users (nightly only)

2013-12-04 Thread Benoit Girard
Congratulations! This is a major step forward for modernizing our rendering
on desktop, removing old code and simplifying how we render. This will
unblock important optimizations such as OMTAnimation and APZC. I'm omitting
many benefits. Great work!


On Wed, Dec 4, 2013 at 3:05 AM, Nicholas Cameron
wrote:

> I just landed a patch flipping the switch for all Windows users with HWA
> (d3d9/10/11) to use off-main-thread compositing. This is a fairly big
> change to our rendering pipeline, so if you notice rendering issues on
> Windows, please file bugs.
>
> For now, only nightly users will get this change. Riding the trains
> depends on bugs 913503 and 904890, and general stability. We wanted to land
> this early to get some extra testing time and because without being tested,
> it has been rotting super quickly. I will arrange for a branch to keep
> testing main thread composition asap.
>
> One known issue is windowed plugins lagging during scrolling (913503), so
> please ignore that (for now) if you observe it.
>
> OMTC can be disabled by setting the
> 'layers.offmainthreadcomposition.enabled' pref to false. If there are more
> problems than we can fix relatively quickly, we can do this for all users
> very easily.
>
> Cheers, Nick


Re: Should we build a new in-process unwind library?

2014-01-05 Thread Benoit Girard
My goal is to make SPS's stacks easier to grab. SPS provides a native stack
(if possible) plus a pseudo stack, so it generally has more data than just
a native stack and is much more portable.

That being said, making the unwind library independent is a win-win.


On Thu, Jan 2, 2014 at 11:03 AM, Jim Chen  wrote:

> This is great! Thanks for working on it! Can the new library be used
> independently outside of SPS? For hang detection (bug 909974) we'd
> like to have the ability to unwind the hung thread's stack, and this
> library would be perfect for that.
>
> Cheers,
> Jim
>
>
> On 12/19/13 2:04 PM, Julian Seward wrote:
> >
> > Here's a status update.
> >
> > Recap: I proposed to create a new library to do fast in process
> > CFI and EXIDX based stack unwinding, so as to be able to profile
> > on tablets and phones with less overhead than using Breakpad.
> >
> > The library now exists.  Tracking bug is 938157.  It does
> > CFI unwinding on x86_64-linux and arm-android, EXIDX on arm-android,
> > and stack scanning (as a last resort) on both.  Initial
> > integration with SPS has been completed and it produces plausible
> > profiles at least on x86_64-linux.
> >
> > Compared with the best Breakpad based schemes, this library gives
> > easily a factor of 10 cost reduction for CFI unwinding.  My best
> > attempts with Breakpad achieved a cost of about 6600 insns/frame
> > on x86_64-linux.  The new library comes in at around 470 insns/frame,
> > without much attempt at optimisation.
> >
> > It also facilitates implementation of the kind of space-saving
> > techniques documented in
> >
> https://blog.mozilla.org/jseward/2013/09/03/how-compactly-can-cfiexidx-stack-unwinding-info-be-represented/
> >
> > J
> >


Re: Please give ask.mozilla.org for a spin

2014-02-04 Thread Benoit Girard
I notice that right now we need 5 karma to upvote, so there's a bit of a
catch-22 for the upvoting to start. I think for now it's up to the admins
to seed a pool of users to break the catch-22.


On Mon, Feb 3, 2014 at 5:32 PM, Taras Glek  wrote:

> Hi,
> A few people noticed that we do not have a nice, searchable knowledge base
> for Gecko tech. We have places to ask questions such as various newsgroups,
> irc and places to document things like the wikis. It is hard to search
> through all of that, so questions get repeated.
>
> Lets give ask.mozilla.org a try. If you see someone asking questions on
> IRC or newsgroups, please ask them to write the question on ask.m.o and
> answer it there. If the answer is already documented elsewhere, provide a
> link in the answer or duplicate it, up to you.
>
> See http://ask.mozilla.org/question/3/why-use-
> askmozillaorg/?answer=4#post-id-4 for more info :)
>
> If you would like to help customize the theme and admin* this, please ping
> me.
>
> Taras
>
> * You only get admin rights if I know you
>
> ** Please note, this is a tech preview to get a feel if people will
> appreciate this. If we find this service to be useful, we'll invest more
> into this(eg SSL, custom theme, persona login, etc)
>
>


Re: Tagging legitimate main thread I/O

2014-02-07 Thread Benoit Girard
With the profiler's IO-tracking feature we have a few options:

We match known signatures after the data is collected.
+ Doesn't require changes to gecko; adjustments are cheap
- Matching signatures can be tricky/unreliable

We instrument gecko to allow IO between two calls.
+ Similar to shutdown write poisoning; more reliable
+ Can benefit other tools instead of just the profiler
- Requires patching gecko any time we need to adjust this.
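
The second option, explicitly bracketing the legitimate I/O, could look like the following; Gecko's real instrumentation is C++, so this Python context manager only illustrates the "allow IO between two calls" pattern:

```python
import threading

_state = threading.local()

class allow_main_thread_io:
    """Scoped whitelist: I/O checks pass only inside the `with` block,
    mirroring the 'allow IO between two calls' instrumentation idea."""
    def __enter__(self):
        _state.allowed = getattr(_state, "allowed", 0) + 1
        return self
    def __exit__(self, *exc):
        _state.allowed -= 1
        return False

def assert_io_allowed():
    """Raise unless we are inside an allow_main_thread_io scope."""
    if not getattr(_state, "allowed", 0):
        raise RuntimeError("unexpected main-thread I/O")

with allow_main_thread_io():
    assert_io_allowed()  # fine: early-startup read, explicitly tagged
```

Anything performing I/O outside such a scope trips the check, which is the same shape as the shutdown write poisoning mentioned above.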


On Fri, Feb 7, 2014 at 11:23 AM, Jeff Muizelaar wrote:

>
> On Feb 7, 2014, at 10:31 AM, David Rajchenbach-Teller 
> wrote:
>
> > When we encounter main thread I/O, most of the time, it is something
> > that should be rooted out. However, in a few cases (e.g. early during
> > startup, late during shutdown), these pieces of I/O should actually be
> > left untouched.
> >
> > Since main thread I/O keeps being added to the tree, for good or bad
> > reasons, I believe that we should adopt a convention of tagging
> > legitimate main thread I/O.
> >
> > e.g. :
> > - << Reading on the main thread as threads are not up yet >>.
> > - << Reading on the main thread as we may need to install XPCOM
> > components required before profile-after-change. >>
> > - ...
> >
> > Any thoughts?
>
> I think this is a good idea.
>
> Another example of main thread I/O that we don't have a lot of control
> over is some of the font reading that happens during rasterization or in
> other system APIs that we call.
>
> -Jeff


Re: Using preferences off the main thread now asserts

2014-03-06 Thread Benoit Girard
Thanks for doing this.

However, I feel like our options for code that needs preferences off the
main thread are a bit poor. The first option is to send an IPC message to
the main thread, but that has very poor performance, requires a lot of
boilerplate code, and needs either a sync message or an async one plus
restructuring of the control flow. The second option is to read the
preferences on startup and have them as read-only shared globals like
gfxPrefs:
http://mxr.mozilla.org/mozilla-central/source/gfx/thebes/gfxPrefs.h#106

Ideally I'd like to only read in the preference when/if I need to access
it, particularly as we have increasingly more code living off the main
thread.



On Thu, Mar 6, 2014 at 11:20 AM, Kyle Huey  wrote:

> It's taken over 3 years, but Bug 619487 is now fixed, and the
> preferences service will assert (fatally) if you try to use it off the
> main thread.  This is currently disabled on b2g while I track down the
> last misuses of the pref service in b2g-specific code.
>
> After the next train leaves I plan to upgrade these to release mode
> assertions so we can start catching addons that misuse the pref
> service.
>
> - Kyle


Re: Allowing web apps to delay layout/rendering on startup

2015-07-31 Thread Benoit Girard
It should be represented as a color layer which is very cheap. We should
only composite it once. We will use a bit of memory bandwidth but nothing
major, the main thread impact should be very small.

I agree, we should really have some data to support that drawing something
like a display:none is causing a measurable slow down if this is the road
we're going to go down. If we're just inefficient at it then I agree we
should do better. I'd be surprised if we could not afford the main thread
paint tick of a display:none page assuming we're not just hitting a
performance problem. We'd have a nearly empty displaylist, no rasterization
and it would use an async transaction to the compositor.

On Thu, Jul 30, 2015 at 8:37 PM, Bobby Holley  wrote:

> On Thu, Jul 30, 2015 at 4:27 PM, James Burke  wrote:
>
> > On Thu, Jul 30, 2015 at 1:28 PM, Jeff Muizelaar 
> > wrote:
> > > Can't you just make everything display:none until you're ready to show
> > it?
> >
> > Just using display: none seems like it will run into the same problem
> > that prompted bug 863499, where the browser did some render/paints of
> > a white page, which took time away from the async work completing.
> >
> > So maybe I should not frame it as just pausing layout? I was hoping it
> > would also delay render and maybe paints that happen during startup,
> > so more time is given up front to the async activities.
> >
>
> Painting a document with display:none on the body should be more or less
> free, I'd think. If it isn't, please file a bug.


Re: Can we make a plan to retire Universal Mac builds?

2015-08-06 Thread Benoit Girard
Is this the data for people who are running only the latest release, or for
arbitrary Firefox releases where FHR/data collection is enabled? I ask
because this data doesn't include any 10.4 and 10.5 usage, so it's not an
overall population snapshot. Sampling the crash data (very noisy, I know)
puts 10.5 at ~3% of our OS X users and 10.4 at ~1%. Of course, those users
are stranded on an outdated and insecure Firefox build, so they wouldn't be
impacted by this discussion.

My point is that we need more context to these numbers if we're going to
make a decision based on them.

On Wed, Aug 5, 2015 at 6:02 PM, Syd Polk  wrote:

> So, in March of 2015, these were our usage stats:
>
> 32.20%  10.10 (14.0.x) (Yosemite)
> 27.98%  10.9 (13.0.x) (Mavericks)
> 19.22%  10.6 (10.0.x) (Snow Leopard)
> 11.06%  10.7 (11.0.x) (Lion)
> 9.53%   10.8 (12.0.x) (Mountain Lion)
>
> I have requested a more modern run from Brendan, who gave Clint Talbert
> and me these numbers. Let’s see what current data tells us. I am also
> curious if we can tell 32 vs. 64-bit in our numbers.
>
> Syd Polk
> sp...@mozilla.com
> +1-512-905-9904
> irc: sydpolk
>
>
>
>
>
> > On Aug 5, 2015, at 16:49, Eric Shepherd  wrote:
> >
> > Syd Polk wrote:
> >> I don’t think we can do this until we stop supporting Mac OS X 10.6.
> Last time we calculated percentage of users, this was still over 15%. I
> don’t think that very many of them would be running 64-bit, either. 10.7
> has that problem as well, but it is a very small percentage of users.
> > Those are worthwhile stats to double-check.
> >
> > --
> >
> > Eric Shepherd
> > Senior Technical Writer
> > Mozilla 
> > Blog: http://www.bitstampede.com/ 
> > Twitter: http://twitter.com/sheppy 
> > Check my Availability 
>


Re: mach mozregression command

2015-09-16 Thread Benoit Girard
This probably doesn't need to be mentioned, but I'd like to discuss it
anyway:

We often ask bug reporters and various non-developers to run bisections for
us. Maintaining mozregression so it works well without a code checkout (i.e.
standalone) is important. Since we ask non-developers to use it, it should
be so easy to run standalone that ideally there would be little to no
benefit to having a mach wrapper. That's not a pragmatic position, though,
and I can see the value of putting it in mach. I just hope that we continue
to maintain mozregression as a standalone tool and that this wrapper
doesn't cause us to miss regressions in it.

On Mon, Sep 14, 2015 at 12:43 PM, Julien Pagès  wrote:

> Hey everyone,
>
> I'm pleased to announce that we just added a "mach mozregression" command
> that allow to run mozregression (a regression range finder for Mozilla
> nightly
> and inbound builds) directly from your checkout of mozilla-central. To
> learn more about
> how to use it, just run:
>
> ./mach mozregression --help
>
> See http://mozilla.github.io/mozregression/ if you don't know about the
> tool.
>
> I hope you'll find this useful!
>
> Julien


Re: mach mozregression command

2015-09-17 Thread Benoit Girard
Yes, that's a good point, and perfectly sensible. Thanks for the handy
wrapper!

On Wed, Sep 16, 2015 at 5:37 PM, J. Ryan Stinnett  wrote:

> On Wed, Sep 16, 2015 at 1:42 PM, Benoit Girard 
> wrote:
> > I just
> > hope that we continue to maintain mozregression as a standalone tool and
> > that this wrapper doesn't cause us to miss regressions in it.
>
> The mach wrapper essentially just calls "pip install mozregression"[1]
> and passes args along, so the standalone tool should be safe.
>
> [1]:
> https://dxr.mozilla.org/mozilla-central/rev/9ed17db42e3e46f1c712e4dffd62d54e915e0fac/tools/mach_commands.py#398
>
> - Ryan
>


Re: Is APZ meant to be a permanent part of the platform?

2015-10-04 Thread Benoit Girard
On Sun, Oct 4, 2015 at 2:20 PM, Marcus Cavanaugh  wrote:

> we can't
> achieve flawless 60fps performance without APZ. We can get close, but any
> nontrival-but-reasonable demo will encounter jank, ostensibly due to
> compositing and rendering taking too much time. (APZ pathways, rendering
> the same content, provide consistent 60fps without frame drops, leading me
> to believe that some part of the JS-driven pipeline incurs substantial
> cost.)


> This means that on Firefox OS, the only way to achieve buttery-smooth
> touch-driven animations is to use overflow-scrollable containers rather
> than touch events. Scrollable containers provide a reasonable abstraction
> for user-driven fluid touch animations. If we had synchronous control over
> scroll events, we could do a lot more with just this; but because of APZ,
> we can only do so much:
>

This isn't technically true, but I'll admit it's very difficult. It means
that you must be aware of your budget for everything and make sure that you
can render at 60 FPS (16 ms per frame). It means limiting styles, limiting
your DOM complexity, limiting your display items, making sure you're
getting a correct layer tree, and making sure your display items can be
painted easily. Also don't forget to minimize your JS garbage to avoid
long, predictable GC pauses. So it's possible, but very difficult for
complex apps, because it means designing with performance and clear budgets
in mind right from the start. However, it's not a binary goal, so every bit
counts: the better you manage your budget, the better your app will be. As
Roc says, there are also improvements we can make on the platform side to
make this easier, which will continue to happen, giving more flexibility in
what is allowable in the budget.


> On Firefox OS, the "utility tray" (swipe down from the top of the screen)
> is now implemented with native scrolling. However, the tray's footer, which
> is intended to only appear when the tray reaches the bottom, cannot be
> displayed without visual glitches due to APZ -- the user can scroll the
> container before JS has a chance to react.
>
>
Even with APZ, the closer to budget you are, the smaller this visual glitch
will be. If you're running at 60 FPS the glitch shouldn't happen (or barely,
since the compositor might still output a frame before you can respond).
The further you are from the budget, the more noticeable the glitch will be.


> My question is this: Is APZ intended to be a stopgap measure, to compensate
> for platform deficiencies in rendering performance, or is it intended to
> become a permanent part of the web? Put another way: Is "onscroll" forever
> redefined to be "an estimate of the current scroll position", rather than a
> synchronous position callback? (I thought I overheard talk about how
> WebRender might obsolete APZ due to its performance, but I may have
> misheard.)
>

I'd imagine that it will remain a permanent part of the web platform, at
least medium term, since it's moving to all browsers, even on desktop.

Basically, keeping 60 FPS is still important; only the reason has changed.
Instead of getting jittery scrolling, your app will take longer to respond.


>
> If APZ is with us forever (and 60fps JS-based animation is out of the
> question), then I think we need to create an API that allows sites to
> provide more fine-grained control over scroll motion. I have more thoughts
> on this, if APZ is the way forward.
>

I think CSS scroll snap points are a good example of a recent-ish new API,
but there's still a lot that can't be expressed currently, so yes, I agree.

For your original problem you might be able to get creative by using masks
which will be implemented by APZ. This is a bit complex to describe so bear
with me.

You want to position your content at the bottom of the screen, but you only
want the part of the footer that the pulled-down utility tray overlaps to
be visible. You create the footer absolutely positioned at the bottom, then
create a mask for the footer that scrolls with the utility tray. When the
mask starts to overlap the footer, the footer becomes visible. This means
exploiting mask layers and containerless APZ to reveal your footer at the
bottom while scrolling is occurring. Not overly simple, but I *think* it
could work.


>
> Marcus [:mcav]


Re: Building js/xul/css from Firefox faster

2015-10-05 Thread Benoit Girard
This is great progress!

I had hoped that something like this would also include the 'build
binaries' DAG. It might make it slightly slower, but it should still be
very fast and would lessen the cognitive load. I was under the impression
that 'build binaries' was at some point a single DAG, but that doesn't seem
to be the case anymore (I see make entering/exiting directories now)?

It would be nice if we got to a state where we had a 'build faster'-like
target that supported all the most frequently modified extensions
all in one (say cpp/h/js/jsm/idl/css/xul).

On Mon, Oct 5, 2015 at 12:30 PM, Gregory Szorc  wrote:

> Basically a set of build actions related to frontend development and known
> by moz.build files are assembled in a single make file that contains a
> single DAG and doesn't need to recurse into directories. See
> python/mozbuild/mozbuild/backend/fastermake.py and /faster/Makefile
> for the low-level details.
>
> On Fri, Oct 2, 2015 at 12:18 PM, Justin Dolske  wrote:
>
> > Nice! Out of curiosity, how does "faster" work? Does it just ignore build
> > targets/directories that involve native code?
> >
> > FWIW, I benchmarked various no-changes builds with yesterday's m-c, on my
> > low-end Surface Pro 3 (i3, 4GB)...
> >
> > mach build: 7:38
> > mach build browser: 0:43
> > mach build toolkit: 1:42
> > mach build faster: 0:22
> >
> > Big wins!
> >
> > Justin
> >
> > On Thu, Oct 1, 2015 at 5:23 PM, Mike Hommey  wrote:
> >
> >> Hi,
> >>
> >> I recently landed a new build backend that, if you opt-in to running it,
> >> will make your non-C++ changes to Firefox more readily available in
> >> local builds.
> >>
> >> After you built Firefox normally once, and assuming you only changed
> >> non-C++ code, you can now use the following command for a faster
> >> incremental build:
> >>   ./mach build faster
> >>
> >> Now, since this is fresh out of the oven, there are a few things to
> >> know:
> >> - it doesn't handle removing files
> >> - it doesn't handle files that would end up outside of dist/bin
> >> - it doesn't handle a few things like the files from the default profile
> >> - it currently only works for desktop Firefox
> >>
> >> Obviously, this is not an end state, so things will improve in the
> >> future, but it should work well enough for most day-to-day work for
> >> desktop Firefox, thus this message.
> >>
> >> On my machine, `mach build faster` completes in about 4 seconds for an
> >> incremental build. This should get even faster very soon.
> >>
> >> Additionally, while requiring some manual steps (which bug 1207888 and
> >> bug 1207890 will help with), it is also possible to use this to build
> >> desktop Firefox without actually building any C++. Here is how that
> >> goes:
> >> - Run `./mach configure` with a mozconfig containing:
> >>   ac_add_options --disable-compile-environment
> >> - Download and unpack a nightly
> >> - Use `./mach python toolkit/mozapps/installer/unpack.py `, where
> >>is the nightly's firefox/ directory.
> >> - Move that fully unpacked nightly to $objdir/dist/bin (for mac, that
> >>   involves more fiddling, because dist/bin is a somewhat flattened
> >>   version of the .app directory)
> >> - Ensure the files in $objdir/dist/bin are older than the source files.
> >> - Run `./mach build faster`.
> >> - On mac, you will need to run something like (untested)
> >> ./mach build browser/app/repackage
> >>
> >> After that $objdir/dist/bin should contain a bastardized Firefox, with
> >> xul, js, etc. coming from the source tree, and the remainder still being
> >> there from the original nightly.
> >>
> >> `mach run` should work with that.
> >>
> >> Cheers,
> >>
> >> Mike
> >> ___
> >> firefox-dev mailing list
> >> firefox-...@mozilla.org
> >> https://mail.mozilla.org/listinfo/firefox-dev
> >>
> >
> >


Re: Decommissioning "dumbmake"

2015-10-15 Thread Benoit Girard
+1

For my use case, breaking dumbmake is preferable given that we now have
'build binaries'. When touching a commonly included header I often like to
run ./mach build gfx && ./mach build binaries. This effectively lets me
say 'make sure my gfx changes are good before you recompile the rest of
Gecko', giving me control over the recompile order. It notifies me quickly
if there's a compile error, without having to rebuild everything that
includes the header I changed. With the current behavior, './mach build
gfx' will try to link when I haven't yet recompiled all the files that I
need.

On Thu, Oct 15, 2015 at 8:15 PM, Mike Hommey  wrote:

> Hi,
>
> I started a thread with the same subject almost two years ago. The
> motivation hasn't changed, but the context surely has, so it's probably
> time to reconsider.
>
> As a reminder, "dumbmake" is the feature that makes "mach build foo/bar"
> sometimes rebuild in some other directories as well. For example, "mach
> build gfx" will build gfx, as well as toolkit/library.
>
> OTOH, it is pretty limited, and, for instance, "mach build gfx/2d" will
> only build gfx/2d.
>
> There are however now two build targets that can do the right thing for
> most use cases:
> - mach build binaries, which will build C/C++ related things
>   appropriately
> - mach build faster, which will build JS, XUL, CSS, etc. (iow, non
>   C/C++) (although it skips what doesn't end up in dist/bin)
>
> At this point, I think "dumbmake" is more harmful than helpful, and the
> above two targets should be used instead. Removing "dumbmake" would mean
> that "mach build foo/bar" would still work, but would stop to "magically"
> do something else than what was requested (or fail to do that thing for
> all the cases it doesn't know about).
>
> Are there still objections to go forward, within the new context?
>
> Cheers,
>
> Mike


Re: Finding out if the main thread is currently animating

2015-10-29 Thread Benoit Girard
We've explored several different ways of measuring this, and several of
them are in the tree. Generally, what I have found the most useful is to
measure how we're servicing the content process's main thread. This
measurement is great because it captures how responsive Firefox is not
only for scrolling/animations but for nearly all use cases, like typing
latency.

There's EventTracer.h, which is our best general responsiveness measurement
at the moment. However, it only probes the event loop every 20 ms, so it's
a laggy indicator.

On Thu, Oct 29, 2015 at 9:47 AM, David Rajchenbach-Teller <
dtel...@mozilla.com> wrote:

> The main thread of the current chrome/content process.
>
> Indeed, animations are one of my two use cases, the other one being
> user-input latency, but I'm almost sure I know how to deal with the latter.
>
> Cheers,
>  David
>
>
> On 29/10/15 14:32, Benjamin Smedberg wrote:
> > On the main thread of which process?
> >
> > Please consider non-"animation" use-cases. In particular, users do
> > notice the latency of typing into edit boxes as much as anything else.
> > So let's make sure that editing latency triggers this as much as a
> > current animation.
> >
> > --BDS
>
>
> --
> David Rajchenbach-Teller, PhD
>  Performance Team, Mozilla
>
>


Re: Updates to Chrome platform support

2015-11-10 Thread Benoit Girard
There's been discussion of dropping 10.6.0 to 10.6.2 (there's a free
upgrade path for everyone to 10.6.3+) in the hope of removing a graphics
workaround, but it stalled and the upside wasn't very high:
https://bugzilla.mozilla.org/show_bug.cgi?id=1003270

On Tue, Nov 10, 2015 at 4:37 PM, Chris Peterson 
wrote:

> http://chrome.blogspot.com/2015/11/updates-to-chrome-platform-support.html
>
> * Chrome’s support for Windows XP will be extended from December 2015 to
> April 2016 (after previously extending from April 2015 to December 2015).
>
> * Chrome will also drop support for Windows Vista and Mac OS X 10.6, 10.7,
> and 10.8 in April 2016. "Chrome will continue to function on these
> platforms but will no longer receive updates and security fixes."
>
> What is Mozilla's current plan for supporting XP, Vista, and OS X
> 10.6–10.8?


Re: SPS Profiles are now captured for entire subprocess lifetime

2015-12-04 Thread Benoit Girard
Thanks, Mike, for your hard work pushing this through!

In theory it does let us profile e10s on Talos, but I'm sure we will find
more usability issues. It's unclear whether they will be blockers or not.
If there are outstanding issues, I don't think we know about them, so
please file them and CC me on any issues you run into. I want us to
continue to make Talos profiling easier.

Once we've reached a good baseline we can have another look at how to
display 'comparative profiles' and at making it easier to highlight
differences. For instance, it took us way too long to identify that the
image cache was disabled on e10s, and I'd like to get us to the point where
this type of regression would be trivial to spot from a before/after
profile.

On Thu, Dec 3, 2015 at 5:27 PM, Kartikaya Gupta  wrote:

> \o/
>
> Does this get us all the way to "profile talos runs with e10s
> enabled", or are there still pieces missing for that? IIRC this set of
> patches was a prerequisite for being able to do that.
>
> On Thu, Dec 3, 2015 at 4:52 PM, Mike Conley  wrote:
> > Just a heads up that there have been recent developments with regards to
> > gathering SPS profiles from multiple processes.
> >
> > Bug 1103094[1] recently landed in mozilla-central, which makes it so that
> > if a subprocess starts up _after_ profiling has already been started in
> the
> > parent, then the subprocess will start profiling as well using the same
> > features and settings as the parent.
> >
> > Bug 1193838[2], which is still on inbound, will make it so that if we are
> > profiling while a process exits, we will hold onto that profile until the
> > profiles are all requested by the parent process for analysis. Right now
> we
> > hold these "exit profiles" in a circular buffer that's hardcoded at an
> > arbitrary limit of 5 profiles.
> >
> > The upshot is that in many cases, if you start profiling, you'll not lose
> > any profiles for subprocesses that start or finish before you choose to
> > analyze the profiles. \o/
> >
> > Just wanted to point those out. Thanks to BenWa for the reviews! Happy
> > profiling,
> >
> > -Mike
> >
> > [1]: https://bugzilla.mozilla.org/show_bug.cgi?id=1103094
> > [2]: https://bugzilla.mozilla.org/show_bug.cgi?id=1193838


Re: mozregression – Engineering Productivity Project of the Month

2016-01-11 Thread Benoit Girard
I wanted to chime in and emphasize this: one of the first things you should
do when looking at a bug is establish whether it's a regression, and start
with mozregression right away if it is!

From my experience, running mozregression for easily reproduced regressions
can be done in about 10 minutes, and it really gets the bug going in the
right direction.

On Mon, Jan 11, 2016 at 7:21 AM, Julien Pagès  wrote:

> Hello from Engineering Productivity! Once a month we highlight one of our
> projects to help the Mozilla community discover a useful tool or an
> interesting contribution opportunity.
>
> This month’s project is mozregression!
>
> mozregression helps to find regressions in Mozilla projects like Firefox or
> Firefox on Android. It downloads and runs the builds between two dates (or
> changesets) known to be good and bad, and lets you test each build to
> finally find by bisection the smallest possible range of changesets where
> the regression appears.
>
> You can read more about it on my blog post:
>
>
> https://parkouss.wordpress.com/2016/01/11/mozregression-engineering-productivity-project-of-the-month/
>
>
> Julien - on behalf of the Engineering Productivity team.


Re: rr chaos mode update

2016-02-14 Thread Benoit Girard
I've got rr working under Digital Ocean and it works great there.

We've built a harness for generating replays. Once a replay is generated, I
match the replay with the bug and comment in the bug looking for developers
to investigate. When they respond, they can investigate by ssh'ing in.
Example:
https://bugzilla.mozilla.org/show_bug.cgi?id=1223249#c12

If we can, we should prefer to have an ssh endpoint running rather than
ship a large VM image. It's also my understanding that while rr works
inside a VM, the trace will not replay if the VM has changed hosts.

However, right now we've decided that it's really overkill for the time
being. Producing interesting rr replays is trivial at the moment; finding
enough engineers to analyze them is not.

On Mon, Feb 15, 2016 at 1:21 AM, Mike Hommey  wrote:

> On Sun, Feb 14, 2016 at 09:25:58PM -0800, Bobby Holley wrote:
> > This is so. Damn. Exciting. Thank you roc for having the vision and
> > persistence to bring this dream to reality.
> >
> > How far are we from being able to use cloud (rather than local) machine
> > time to produce a trace of an intermittently-failing bug? Some one-click
> > procedure to produce a trace from a failure on treeherder seems like it
> > would lower the activation energy significantly.
>
> One limiting factor is the CPU features required, that are not
> virtualized on AWS (they are on digital ocean, and that's about the only
> cloud provider where they are ttbomk).
>
> Relatedly, roc, is it possible to replay, on a different host, with
> possibly a different CPU, a record that would have been taken on the
> cloud? Does using a VM make it possible? If yes, having "the cloud" (or
> a set of developers) try to reproduce intermittents, and then have
> developers download the records and corresponding VM would be very
> useful. If not, we'd need a system like we have for build/test slave
> loaners.
>
> Mike


Testing Wanted: APZ Scrollbar dragging

2016-02-17 Thread Benoit Girard
Currently APZ does not cause scrollbar-initiated scrolling to be async.
I've been working on fixing this and I'd like some help testing it out
before enabling it on Nightly. If you're interested, please flip
'apz.drag.enabled' to true and restart. If you find any issues, please make
them block https://bugzilla.mozilla.org/show_bug.cgi?id=1211610.

I've got a test page here to confirm that it's turned on properly:
http://people.mozilla.org/~bgirard/scroll_no_paint.html

Scrolling on this page is slow in FF because it hits a performance bug in
layer construction code. However, with APZ the scrolling is smooth and a bit
checkerboard-y instead. If you have 'layers.acceleration.draw-fps' set you
should notice the leftmost FPS counter at 60 FPS while the middle counter
will be much lower. This is APZ handling the scroll.

Happy Testing,
Benoit


Re: Testing Wanted: APZ Scrollbar dragging

2016-02-17 Thread Benoit Girard
This is mouse-based. I don't believe we support mice at all on mobile. I
also don't think we support touch interaction with the scroll thumb on
mobile. So unless there's an unusual device configuration, I don't think
this applies to mobile.

On Wed, Feb 17, 2016 at 4:34 PM, Nicholas Alexander 
wrote:

> Benoit, (possibly kats),
>
> On Wed, Feb 17, 2016 at 10:35 AM, Benoit Girard 
> wrote:
>
>> Currently APZ does not cause scrollbar initiated scrolling to be async.
>> I've been working in fixing this and I'd like some help testing it out
>> before enabling it on Nightly. If you're interested please flip
>> 'apz.drag.enabled' to true and restart. If you find any issue please make
>> it block https://bugzilla.mozilla.org/show_bug.cgi?id=1211610.
>>
>
> Does this apply to Fennec?  Do you also want testing on Fennec?  A
> cross-post to mobile-firefox-dev might be in order.
>
> Nick
>


Re: Proposing preferring Clang over GCC for developer buidls

2016-03-02 Thread Benoit Girard
Note that, as you say, the debugging information produced by the compiler
and the debugger that consumes it are completely orthogonal. I've tried
several times to use lldb but I keep coming back to GDB. Particularly now
with RR+GDB it's light years ahead.

I find that GDB works quite well with the information that clang generates.
That's what I use day-to-day.

On Wed, Mar 2, 2016 at 6:48 PM, Bill McCloskey 
wrote:

> Is the debugging information generated by clang as good or better than
> GCC's? My experience with lldb has been terrible, but that may have more to
> do with the debugger itself than with the information clang generates.
>
> -Bill
>
> On Wed, Mar 2, 2016 at 2:50 PM, Gregory Szorc  wrote:
>
> > Over in bug 1253064 I'm proposing switching developer builds to prefer
> > Clang over GCC because the limited numbers we have say that Clang can
> build
> > mozilla-central several minutes faster than GCC (13 minutes vs 17.5 on my
> > I7-6700K). I'm not proposing switching what we use to produce builds in
> > automation. I'm not proposing dropping GCC support. This is all about
> > choosing faster defaults so people don't spend as much time waiting for
> > builds to complete and become more productive as a result.
> >
> > Please comment on the bug if you have something to add.


Introducing MozGTestBench - Platform micro-benchmarks

2016-03-18 Thread Benoit Girard
In bug 1256408 I've landed code to allow adding *in-tree* platform
micro-benchmarks for XUL code in less than 10 lines. These are just GTests
where the execution time is reported to Perfherder. This makes it easy to
add low-level platform micro-benchmarks for testing things that Talos is
not well suited for. Maybe a bit too easy, in fact, so please keep in mind
that there is a cost to collecting perf data and particularly to monitoring
the alerts, so try to use them sparingly.

You can find more information on this here:
https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Build_Instructions/GTest#MozGTestBench

A big caveat is that we're rolling this out, at the moment, as part of the
GTest job, which doesn't run on a stable hardware configuration like the
Talos job does. If the results are too noisy we might look into fixing this.
However, since we're running micro-benchmarks we also lose a lot of the
noise that Talos has to deal with, like the event queue, GC, paint
scheduling, etc.

Here's an example of micro-benchmarks for nsRegion And()/Or() performance
which is critical for layout and graphics performance and basic headless
compositing:
https://treeherder.allizom.org/perf.html#/graphs?series=%5Bmozilla-inbound,db7b8908a950b105f475ead838f8f472c89b20ad,1%5D&series=%5Bmozilla-inbound,ad41ac00e3191623cd89ed0c7df7464a5faf86c2,1%5D&series=%5Bmozilla-inbound,feded2f5510c634c95b0810fb35e7f633ffa6443,1%5D



Alerts will be posted here [1]. I should mention that micro-benchmark
regressions shouldn't and won't be treated the same way as a Talos
regression. More details here:
https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Build_Instructions/GTest#Sheriffing_policy

Changes should be deployed to treeherder.mozilla.org next week.

[1] https://treeherder.allizom.org/perf.html#/alerts?status=0&framework=6


Re: #include sorting: case-sensitive or -insensitive?

2016-03-28 Thread Benoit Girard
a) It's explained in the style docs:

   1. The main header: Foo.h in Foo.cpp
   2. Standard library includes: #include 
   3. Mozilla includes: #include "mozilla/dom/Element.h"

Thus you'd want the second option.

b) I'm assuming it includes the path. That's what I've seen most of the
code do too, and it means that once you've split the includes into the three
sections you can use ':sort -i' or similar on each section.
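For reference, the effect of a case-insensitive, path-inclusive sort (what ':sort -i' does to a section) can be sketched in Python:

```python
def sort_includes(lines):
    """Sort #include lines case-insensitively, comparing the whole
    line including the path (the equivalent of vim's ':sort -i')."""
    return sorted(lines, key=str.lower)

section = [
    '#include "mozilla/dom/Element.h"',
    '#include "Foo.h"',
    '#include "bar.h"',
]
for line in sort_includes(section):
    print(line)
# "bar.h" sorts before "Foo.h" because 'b' < 'f' once case is ignored.
```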


On Mon, Mar 28, 2016 at 6:54 PM, Cameron McCormack  wrote:

> David Keeler:
> > The style guidelines at
> >
> https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Coding_Style
> > indicate that #includes are to be sorted. It does not say whether or not
> > to consider case when doing so (and if so, which case goes first?). That
> > is, should it be:
> >
> > #include "Foo.h"
> > #include "bar.h"
> >
> > or
> >
> > #include "bar.h"
> > #include "Foo.h"
>
> If you are preparing to make some changes to the Coding style document
> around #include order, can you also please prescribe (a) where system-
> includes get placed, e.g.
>
> #include "aaa.h"
> #include 
> #include "ccc.h"
> #include 
>
> or
>
> #include 
> #include 
> #include "aaa.h"
> #include "ccc.h"
>
> and (b) how includes with paths are sorted, e.g.
>
> #include "aaa.h"
> #include "bbb/bbb.h"
> #include "bbb/ccc/ddd.h"
> #include "bbb/eee/fff.h"
> #include "bbb/ggg.h"
> #include "ccc.h"
>
> or
>
> #include "bbb/ccc/ddd.h"
> #include "bbb/eee/fff.h"
> #include "bbb/bbb.h"
> #include "bbb/ggg.h"
> #include "aaa.h"
> #include "ccc.h"
>
> or some other order that makes sense.
>
> --
> Cameron McCormack ≝ http://mcc.id.au/


Re: Testing Wanted: APZ Scrollbar dragging

2016-03-29 Thread Benoit Girard
Thanks for helping test! Feel free to disable the feature for now, since
enabling it has been de-prioritized while we focus on shipping APZ and
E10S.

On Tue, Mar 29, 2016 at 4:55 PM, Jonas Sicking  wrote:

> Hi Benoit,
>
> There's two problems that I've seen related to scrolling, and so
> possibly the result of APZ.
>
> The most obvious one is that if the child process is busy doing
> things, then the scrollable area seems "sticky". When you start
> scrolling there's a noticeable delay before the page starts scrolling.
>

> My guess is that we send some message to the child process when
> scrolling is initiated and don't start scrolling until the child
> process responds (possibly with some form of timeout)?
>
> This isn't noticeable on the example page that you provided, possibly
> because the child process is quite responsive before we start the
> heavy painting operations.
>

Yes, this is how it's implemented, and it's a design limitation. Given that
the design has changed a lot since this was planned, it might be possible to
remove this limitation down the road and have APZ initiate the scroll.


> The other problem, which is more serious, is that every now and then
> the browser seems to get into a state where I can't use the trackpad
> at all to do scrolling. The scroll bar is displayed, as is common when
> using the trackpad to scroll, but no scrolling actually happens.
>
> The only way I can scroll when the browser is in this state is to
> hover the scrollbar and click below the scroll thumb, which enables
> scrolling one page at a time.
>
> The only way to get out of the state is by restarting the browser.
>
> Sadly this happens very infrequently, so I haven't been able to create
> steps to reproduce, nor seen any pattern in where it triggers.
>

As Felipe points out this is a recent APZ regression and isn't tied to this
feature.

>
> / Jonas
>
>
> On Wed, Feb 17, 2016 at 10:35 AM, Benoit Girard 
> wrote:
> > Currently APZ does not cause scrollbar initiated scrolling to be async.
> > I've been working in fixing this and I'd like some help testing it out
> > before enabling it on Nightly. If you're interested please flip
> > 'apz.drag.enabled' to true and restart. If you find any issue please make
> > it block https://bugzilla.mozilla.org/show_bug.cgi?id=1211610.
> >
> > I've got a test page here to confirm that it's turned on properly:
> > http://people.mozilla.org/~bgirard/scroll_no_paint.html
> >
> > Scrolling on this page is slow in FF because it hits a performance bugs
> in
> > layer construction code. However with APZ the scrolling is smooth and a
> bit
> > checkerboard-y instead. If you have 'layers.acceleration.draw-fps' set
> you
> > should notice the left most FPS counter at 60 FPS while the middle
> counter
> > will be much lower. This is APZ handling the scroll.
> >
> > Happy Testing,
> > Benoit


Re: Run firefox as headless???

2016-04-07 Thread Benoit Girard
Check out slimerjs: https://slimerjs.org/

On Thu, Apr 7, 2016 at 6:50 PM, Devan Shah  wrote:

> Is it possible to run firefox in headless form to fetch url and get the
> full Dom as rendered. Same way it would render on normal foredox


Re: Run firefox as headless???

2016-04-07 Thread Benoit Girard
If xvfb doesn't work for you then, as far as I know, there's no way for it
to be truly headless, unfortunately. It looks like SlimerJS is trying to
solve this issue:
https://github.com/laurentj/slimerjs/issues/80

They mention using createWindowlessBrowser but I'm not familiar with it.

On Thu, Apr 7, 2016 at 10:11 PM, Rik Cabanier  wrote:

> If the problem is that you're running on a windowless system, you can use
> xfvb-run to launch your application
>
> On Thu, Apr 7, 2016 at 5:11 PM, Devan Shah  wrote:
>
> > slimerjs is not fully headless unless using xvfb,  is there anything else
> > that can he used.


Re: Documentation on how to read crash reports

2016-05-26 Thread Benoit Girard
There's some information I've learned about reading crash reports (obvious
now, but it wasn't when I was an intern many years ago) that isn't really
covered by these.

Here's my workflow when looking at crashes:
- Windows tells you if the exception occurred during a write or read. Look
at the exception type. This is often the first thing I look at and
important to keep in mind for the next step.
- Look at the crashing source first, check the blame here and up the stack
as well for a recent change.
- Correlate the crash start date with the blame dates in the crashing stack
and nearby. A strong match points to a regression that's easy to deal with
if identified quickly.
- Crash address can give you a hint as to what's going on: 0x0 -> null
crash, 0x8 -> null offset accessing a member like this->mFoo, 0xfd8
-> accessing stack on some platforms, 0x80cdefd3 -> heap access, (the
jemalloc poison values)
- With the above look at the crash address distribution. Look at the
exception type as well. If they are not all in the same bucket you *may*
have a bad vtable or corruption occurring. Might want to do this per
platform.
- If you have a minidump see:
https://developer.mozilla.org/en-US/docs/Mozilla/Debugging/Debugging_a_minidump
- If you don't have a minidump (non windows), and the line level info is
insufficient, you can grab the instruction pointer and disassemble the
binary. This will often tell you exactly what you were doing and which
value is bad but is a bit tricky to follow with the optimizer.
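The address heuristics above can be condensed into a small classifier. This is a sketch of the rule of thumb, not anything Socorro actually runs; the near-null cutoff and poison value are assumptions (0x1000 as a typical unmapped first page, 0xe5 as mozjemalloc's free-poison byte):

```python
def classify_crash_address(addr):
    """Rough bucketing of a crash address per the heuristics above.

    Illustrative only: the 0x1000 cutoff and the 0xe5 poison byte are
    assumptions, not Socorro's exact rules.
    """
    if addr == 0x0:
        return "null deref"
    if addr < 0x1000:
        return "null plus member offset (e.g. this->mFoo)"
    if all(((addr >> shift) & 0xFF) == 0xE5 for shift in (0, 8, 16, 24)):
        return "jemalloc poison (use of freed memory)"
    return "other heap/stack access"

print(classify_crash_address(0x8))
```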

That's my rough workflow when looking at Socorro. I'm leaving out the more
specific searches, like driver correlation for gfx crashers, for instance.

On Thu, May 26, 2016 at 9:52 AM, Benjamin Smedberg 
wrote:

> Here is a selection of docs that we've written over the years. Many are
> incomplete.
>
> https://developer.mozilla.org/en-US/docs/Mozilla/Projects/Crash_reporting
> https://developer.mozilla.org/en-US/docs/Crash_Data_Analysis
>
> https://developer.mozilla.org/en-US/docs/Mozilla/Debugging/Debugging_a_minidump
>
> --BDS
>
>
> On Wed, May 25, 2016 at 2:06 AM, Nicholas Nethercote <
> n.netherc...@gmail.com
> > wrote:
>
> > Hi,
> >
> > Do we have documentation on how to read crash reports? The only thing
> > I have found is this page:
> >
> > https://support.mozilla.org/en-US/kb/helping-crashes
> >
> > which is not bad, but is lacking some details. I suspect there is
> > quite a bit of crash report interpretation wisdom that is in various
> > people's head, but not written down anywhere...
> >
> > Nick


Re: Faster gecko builds with IceCC on Mac and Linux

2016-07-04 Thread Benoit Girard
This barely works in an office with a 10MB/sec wireless uplink. Ideally you
want machines to be accessible on a gigabit LAN. It's more about bandwidth
throughput than latency AFAIK, i.e. can you *upload* dozens of 2-4MB
compressed pre-processed files faster than you can compile them? I'd imagine
unless you can get reliable 50MB/sec upload throughput you probably
won't benefit from connecting to a remote cluster.
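That break-even point is easy to put rough numbers on. A back-of-envelope sketch (every figure here is an assumption for illustration, not a measurement):

```python
def remote_compile_pays_off(file_mb, upload_mb_per_s, local_compile_s):
    """True when just shipping the preprocessed file to a remote node
    takes less time than compiling it locally. This ignores remote
    compile time and latency, so it's an optimistic upper bound."""
    upload_s = file_mb / upload_mb_per_s
    return upload_s < local_compile_s

# Assumed figures: a ~3 MB preprocessed file, a ~2 s local compile.
print(remote_compile_pays_off(3.0, 50.0, 2.0))   # fast LAN
print(remote_compile_pays_off(3.0, 1.25, 2.0))   # slow shared uplink
```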

However, the good news is you can see a lot of benefits from having a
network of just one machine! In my case my Linux desktop can compile a Mac
build faster than my top-of-the-line 2013 MacBook Pro, and with a network
of 2 machines it's drastically faster. A cluster of 12 machines is nice,
but you're getting diminishing returns on that until the build system gets
better.

I'd imagine distributed object caching will have a similar bandwidth
problem, however users tend to have better download speeds than upload
speeds.

So to emphasize: if you compile a lot and only have one or two machines on
your 100 Mbps or 1 Gbps LAN, you'll still see big benefits.

On Mon, Jul 4, 2016 at 4:39 PM, Gijs Kruitbosch 
wrote:

> What about people not lucky enough to (regularly) work in an office,
> including but not limited to our large number of volunteers? Do we intend
> to set up something public for people to use?
>
> ~ Gijs
>
>
> On 04/07/2016 20:09, Michael Layzell wrote:
>
>> If you saw the platform lightning talk by Jeff and Ehsan in London, you
>> will know that in the Toronto office, we have set up a distributed
>> compiler
>> called `icecc`, which allows us to perform a clobber build of
>> mozilla-central in around 3:45. After some work, we have managed to get it
>> so that macOS computers can also dispatch cross-compiled jobs to the
>> network, have streamlined the macOS install process, and have refined the
>> documentation some more.
>>
>> If you are in the Toronto office, and running a macOS or Linux machine,
>> getting started using icecream is as easy as following the instructions on
>> the wiki:
>>
>> https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Using_Icecream
>>
>> If you are in another office, then I suggest that your office starts an
>> icecream cluster! Simply choose one linux desktop in the office, run the
>> scheduler on it, and put its IP in the Wiki, then everyone can connect to
>> the network and get fast builds!
>>
>> If you have questions, myself, BenWa, and jeff are probably the ones to
>> talk to.
>>
>>


Re: Faster gecko builds with IceCC on Mac and Linux

2016-07-04 Thread Benoit Girard
In my case I'm noticing an improvement with my Mac distributing jobs to a
single Ubuntu machine but not compiling itself. (Right now we don't support
distributing Mac jobs to other Macs, primarily because we just want to
maintain one homogeneous cluster.)

On Mon, Jul 4, 2016 at 5:12 PM, Gijs Kruitbosch 
wrote:

> On 04/07/2016 22:06, Benoit Girard wrote:
>
>> So to emphasize, if you compile a lot and only have one or two machines
>> on your 100mps or 1gbps LAN you'll still see big benefits.
>>
>
> I don't understand how this benefits anyone with just one machine (that's
> compatible...) - there's no other machines to delegate compile tasks to (or
> to fetch prebuilt blobs from). Can you clarify? Do you just mean "one extra
> machine"? Am I misunderstanding how this works?
>
> ~ Gijs
>
>
>
>> On Mon, Jul 4, 2016 at 4:39 PM, Gijs Kruitbosch > >
>> wrote:
>>
>> What about people not lucky enough to (regularly) work in an office,
>>> including but not limited to our large number of volunteers? Do we intend
>>> to set up something public for people to use?
>>>
>>> ~ Gijs
>>>
>>>
>>> On 04/07/2016 20:09, Michael Layzell wrote:
>>>
>>> If you saw the platform lightning talk by Jeff and Ehsan in London, you
>>>> will know that in the Toronto office, we have set up a distributed
>>>> compiler
>>>> called `icecc`, which allows us to perform a clobber build of
>>>> mozilla-central in around 3:45. After some work, we have managed to get
>>>> it
>>>> so that macOS computers can also dispatch cross-compiled jobs to the
>>>> network, have streamlined the macOS install process, and have refined
>>>> the
>>>> documentation some more.
>>>>
>>>> If you are in the Toronto office, and running a macOS or Linux machine,
>>>> getting started using icecream is as easy as following the instructions
>>>> on
>>>> the wiki:
>>>>
>>>>
>>>> https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Using_Icecream
>>>>
>>>> If you are in another office, then I suggest that your office starts an
>>>> icecream cluster! Simply choose one linux desktop in the office, run the
>>>> scheduler on it, and put its IP in the Wiki, then everyone can connect
>>>> to
>>>> the network and get fast builds!
>>>>
>>>> If you have questions, myself, BenWa, and jeff are probably the ones to
>>>> talk to.
>>>>
>>>>
>>>


Re: Do we have some documents listing up when we need to touch CLOBBER?

2016-12-16 Thread Benoit Girard
One of my goals when introducing CLOBBER was to document what was causing
us to CLOBBER, so that we could audit and fix the causes if we ever found
the time. You can get a pretty good idea by going through the history of the
file.

I don't believe anyone has taken the time to go through the CLOBBER hg
history to find the causes and document them. That could be interesting.

On Fri, Dec 16, 2016 at 12:16 AM, Masayuki Nakano 
wrote:

> Hi,
>
> I'm looking for some documents which explain when we need to touch CLOBBER
> to avoid build failure. However, I've not found such document yet. I see
> such information only in CLOBBER about WebIDL change.
>
> So, is there any document which lists up when we need to touch CLOBBER?
>
> --
> Masayuki Nakano 
> Manager, Internationalization, Mozilla Japan.


Re: Doxygen output?

2017-02-24 Thread Benoit Girard
I find it a bit unfortunate that pages people link to are removed, since
Mozilla loses a lot of valuable resources and small test cases. With that
being said, I have a repo with the scripts I used to run doxygen. I think
mstange might be running it somewhere.

https://github.com/bgirard/doxygen-mozilla

It will clone a tree and run doxygen on a list of modules, each with its
own doxygen configuration. The last time I tried, a year or so ago, doxygen
would hang if I ran it on the top-level tree.

On Wed, Feb 22, 2017 at 12:55 AM, Henri Sivonen 
wrote:

> On Tue, Feb 21, 2017 at 12:42 AM,   wrote:
> > My short (<2yr) experience of the code gave me the impression that only
> a small amount of it has proper doxygen comments.
> > We must be frequenting different circles; or I'm somehow blind to them.
> :-)
>
> I get to look at stuff like:
> /**
>  * Cause parser to parse input from given URL
>  * @updategess5/11/98
>  * @param   aURL is a descriptor for source document
>  * @param   aListener is a listener to forward notifications to
>  * @return  TRUE if all went well -- FALSE otherwise
>  */
>
> > Anyway, they're mainly useful when generated websites/documents are
> readily available, which it seems isn't the case (anymore).
>
> Right. I'm trying to assess how much effort I should put into writing
> Doxygen-formatted docs, and if we aren't really publishing Doxygen
> output, I feel like it's probably good to write /** ... */ in case we
> start using Doxygen again but probably not worthwhile to use the @
> tags.
>
> On Tue, Feb 21, 2017 at 10:13 PM, Bill McCloskey 
> wrote:
> > I've been thinking about how to integrate documentation into Searchfox.
> One
> > obvious thing is to allow it to display Markdown files and
> > reStructuredText. I wonder if it could do something useful with Doxygen
> > comments though? Is this something people would be interested in?
>
> I think integrating docs with Searchfox would be more useful than
> having unintegrated Doxygen output somewhere. Compared to just reading
> a .h with comments, I think a documentation view would be particularly
> useful for templates and headers with a lot of inline definitions as a
> means to let the reader focus on the interface and hide the
> implementation (including hiding whatever is in a namespace with the
> substring "detail" in the name of the namespace for templates).


Re: Recommendations on source control and code review

2014-04-13 Thread Benoit Girard
I didn't know this existed. I filed bug 995763 to get this link added to
the 'review requested' email to hopefully increase visibility.


On Sat, Apr 12, 2014 at 12:10 PM, Kartikaya Gupta wrote:

> Just a reminder that this page exists:
>
> https://developer.mozilla.org/en-US/docs/Developer_Guide/Reviewer_Checklist
>
> and you should feel free to add things to it, and use it when reviewing
> any code (your own or other people's).
>
> kats
>
>
> On 11/4/2014, 17:47, Mike Conley wrote:
>
>> Whoa, didn't expect to see a blog post I wrote in grad school to get
>> called out here. :) Interesting to see it show up on the radar.
>>
>> Re-reading it, I think the most interesting thing about the Cohen study
>> that I absorbed was the value of reviewing my own code before requesting
>> review from other people. I've found that when I look at my own code
>> using Splinter or Review Board, my brain switches into critique mode,
>> and I'm able to notice and flag the obvious things.
>>
>> This has the dual benefit of making the code better, and making it
>> easier for my reviewer to not get distracted by minor things that I
>> could have caught on my own. I almost made this a topic for my graduate
>> study research paper[1], but then did this[2] instead.
>>
>> Always happy to talk about code review,
>>
>> -Mike
>>
>> [1]:
>> http://mikeconley.ca/blog/2010/03/04/research-proposal-
>> 1-the-effects-of-author-preparation-in-peer-code-review/
>> [2]:
>> http://mikeconley.ca/blog/2010/12/23/the-wisdom-of-
>> peers-a-motive-for-exploring-peer-code-review-in-the-classroom/
>>
>> On 11/04/2014 5:32 PM, Chris Peterson wrote:
>>
>>> Code review tool company SmartBear published an interesting study [1] of
>>> the effectiveness of code reviews at Cisco. (They used SmartBear's
>>> tools, of course.) Mozillian Mike Conley reviewed SmartBear's study on
>>> his blog [2].
>>>
>>> The results are interesting and actionable. Some highlights:
>>>
>>> * Review fewer than 200-400 lines of code at a time.
>>> * Spend no more than 60-90 minutes per review session.
>>> * Authors should pre-review their own code before submitting a review
>>> request and add explanations and questions to guide reviewers.
>>>
>>>
>>> chris
>>>
>>>
>>> [1]
>>> http://smartbear.com/SmartBear/media/pdfs/WP-CC-11-
>>> Best-Practices-of-Peer-Code-Review.pdf
>>>
>>>
>>> [2]
>>> http://mikeconley.ca/blog/2009/09/14/smart-bear-cisco-
>>> and-the-largest-study-on-code-review-ever/
>>>
>>>
>>>
>>>
>>>
>>> On 4/11/14, 1:29 PM, Gregory Szorc wrote:
>>>
 I came across the following articles on source control and code review:

 *
 https://secure.phabricator.com/book/phabflavor/article/
 recommendations_on_revision_control/


 *
 https://secure.phabricator.com/book/phabflavor/article/
 writing_reviewable_code/


 *
 https://secure.phabricator.com/book/phabflavor/article/
 recommendations_on_branching/



 I think everyone working on Firefox should take the time to read them as
 they prescribe what I perceive to be a very rational set of best
 practices for working with large and complex code bases.

 The articles were written by a (now former) Facebooker and the
 recommendations are significantly influenced by Facebook's experiences.
 They have many of the same problems we do (size and scale of code base,
 hundreds of developers, etc). Some of the pieces on feature development
 don't translate easily, but most of the content is relevant.

 I would be thrilled if we started adopting some of the recommendations
 such as more descriptive commit messages and many, smaller commits over
 fewer, complex commits.

>>>
>>>
>>


Re: Experiment with running debug tests less often on mozilla-inbound the week of August 25

2014-08-19 Thread Benoit Girard
I completely agree with Jeff Gilbert on this one.

I think we should try to coalesce -better-. I just checked the current
state of mozilla-inbound, and it doesn't feel like any of the current
patches really need their own set of tests, because they're not
time-sensitive or sufficiently complex. Right now developers are asked to
create bugs for their own changes with their own patches. This leads to a
lot of little patches being landed by individual developers, which seems to
reflect the current state of mozilla-inbound.

Perhaps we should instead promote checkin-needed (or a similar simple
mechanism) to coalesce simple changes together. Opting into this means that
your patch may take significantly longer to get merged if it's landed with
another bad patch, so it should only be used when that's acceptable.
Right now developers with commit access are not encouraged to make use
of checkin-needed AFAIK. If we started recommending against individual
landings for simple changes, and improved the process, we could
probably significantly cut the number of test jobs by cutting the
number of pushes.

On Tue, Aug 19, 2014 at 3:57 PM, Jeff Gilbert  wrote:
> I would actually say that debug tests are more important for continuous 
> integration than opt tests. At least in code I deal with, we have a ton of 
> asserts to guarantee behavior, and we really want test coverage with these 
> via CI. If a test passes on debug, it should almost certainly pass on opt, 
> just faster. The opposite is not true.
>
> "They take a long time and then break" is part of what I believe caused us to 
> not bother with debug testing on much of Android and B2G, which we still 
> haven't completely fixed. It should be unacceptable to ship without CI on 
> debug tests, but here we are anyways. (This is finally nearly fixed, though 
> there is still some work to do)
>
> I'm not saying running debug tests less often is on the same scale of bad, 
> but I would like to express my concerns about heading in that direction.
>
> -Jeff
>
> - Original Message -
> From: "Jonathan Griffin" 
> To: dev-platform@lists.mozilla.org
> Sent: Tuesday, August 19, 2014 12:22:21 PM
> Subject: Experiment with running debug tests less often on mozilla-inbound
>   the week of August 25
>
> Our pools of test slaves are often at or over capacity, and this has the
> effect of increasing job coalescing and test wait times.  This, in turn,
> can lead to longer tree closures caused by test bustage, and can cause
> try runs to be very slow to complete.
>
> One of the easiest ways to mitigate this is to run tests less often.
>
> To assess the impact of doing this, we will be performing an experiment
> the week of August 25, in which we will run debug tests on
> mozilla-inbound on most desktop platforms every other run, instead of
> every run as we do now.  Debug tests on linux64 will continue to run
> every time.  Non-desktop platforms and trees other than mozilla-inbound
> will not be affected.
>
> This approach is based on the premise that the number of debug-only
> platform-specific failures on desktop is low enough to be manageable,
> and that the extra burden this imposes on the sheriffs will be small
> enough compared to the improvement in test slave metrics to justify the
> cost.
>
> While this experiment is in progress, we will be monitoring job
> coalescing and test wait times, as well as impacts on sheriffs and
> developers.  If the experiment causes sheriffs to be unable to perform
> their job effectively, it can be terminated prematurely.
>
> We intend to use the data we collect during the experiment to inform
> decisions about additional tooling we need to make this or a similar
> plan permanent at some point in the future, as well as validating the
> premise on which this experiment is based.
>
> After the conclusion of this experiment, a follow-up post will be made
> which will discuss our findings.  If you have any concerns, feel free to
> reach out to me.
>
> Jonathan
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform


Re: gtests that start XPCOM

2014-10-23 Thread Benoit Girard
Like Ted mentions, GTest doesn't support running tests in parallel (in
the same process); you have to launch multiple processes, which the
./mach gtest command helps you do.

Currently GTest has a ScopedXPCOM thing. I'm not sure exactly what
this implies, however:
http://mxr.mozilla.org/mozilla-central/source/testing/gtest/mozilla/GTestRunner.cpp#86

On Wed, Oct 22, 2014 at 2:07 PM, Benjamin Smedberg
 wrote:
>
> On 10/22/2014 10:49 AM, Kyle Huey wrote:
>>
>> I've been wanting this too. I was thinking about just making the gtest
>> harness itself start XPCOM. - Kyle
>
> I don't think that's quite right. 1) We'd have to serialize a bunch of tests,
> and 2) it would be really easy for tests to interfere with each other.
>
> What I'd like us to do is split gtests into two: keep the existing no-XPCOM
> "pure" parallelizable gtests, and have a separate suite which is more like
> binary xpcshell tests: one process per test, using no profile or a
> throwaway profile, with any parallelization handled by the harness. It
> would use the same gtest-libxul so that we could compile in arbitrary test
> code which uses internal linkage.
>
> I'm not sure how hard that would be to implement, though.
>
>
> --BDS
>


Announcing Eclipse CDT IDE Support

2014-10-23 Thread Benoit Girard
A new command has now landed: './mach ide eclipse'. All you need is
the Eclipse CDT binary on your path. This is your ideal
pre-coffee/lunch command; it will perform the following:

- Rebuild the project
- Generate an Eclipse workspace + Gecko project
- Import the Mozilla coding style
- Launch the Eclipse workspace
- Index the code base in the background

I've blogged about the benefits of using this before here:
http://benoitgirard.wordpress.com/2014/03/10/cc-eclipse-project-generation/

Bug 973770 tracks outstanding improvements.

Let me know if you have any comments or questions.

- BenWa


Re: Announcing Eclipse CDT IDE Support

2014-10-24 Thread Benoit Girard
I believe for B2G you can use the following, which will only rebuild Gecko:
cd objdir-gecko/
../gecko/mach ide eclipse

On Fri, Oct 24, 2014 at 12:22 AM, Botond Ballo  wrote:
>> A new command has now landed: './mach ide eclipse'
>
> Nice! Thanks for all your work on this.
>
>> will perform the following:
>>
>> - Rebuild the project
>
> Does this work on B2G?
>
> Cheers,
> Botond


Re: profiler in TB

2014-10-29 Thread Benoit Girard
The profiler addon on TB shouldn't be using the panel. It has another
piece of UI because Jetpack doesn't support the panel in TB; adding
that support will make these issues go away, of course. Since it's the
same code base, it's likely just a regression where the panel code is
used in shared code.

Currently there's an old build of the addon that should still work:

https://dl.dropboxusercontent.com/u/2921989/geckoprofiler.xpi

However it sounds like not all the TB symbols are being uploaded to
the symbol server.

On Sun, Oct 26, 2014 at 2:56 AM, Philip Chee  wrote:
> On 26/10/2014 10:48, ISHIKAWA,chiaki wrote:
>> Hi,
>>
>> I thought I try profiler to see how it works in TB, but
>> I get the following error.
>> It looks a module called |panel| is not usable in
>> TB.
>> I looked at jetpack-panel-apps web page noted in the message, but
>> am clueless.
>>
>> Error Message:
>> error: geckoprofiler: An exception occurred.
>> Error: The panel module currently supports only Firefox.  In the future
>> we would like it to support other applications, however.  Please see
>> https://bugzilla.mozilla.org/show_bug.cgi?id=jetpack-panel-apps for more
>> information.
>> resource://jid0-edalmuivkozlouyij0lpdx548bc-at-jetpack/addon-sdk/lib/sdk/panel.js
>> 12
>> Traceback (most recent call last):
>>File "resource://gre/modules/NetUtil.jsm:123", line 17, in
>> NetUtil_asyncOpen/<.onStopRequest
>>File
>> "resource://jid0-edalmuivkozlouyij0lpdx548bc-at-jetpack/addon-sdk/lib/sdk/net/url.js:49",
>> line 9, in readAsync/<
>>   [...]
>
> For panel.js etc please see:
>
> Bug 1023661 - Add support for SeaMonkey (and Thunderbird)
> https://bugzilla.mozilla.org/show_bug.cgi?id=1023661#c3
>> After these smoke tests I think context-menu.js, selection.js and
>> panel.js can be marked SeaMonkey compatible.
>
> Bug 1071048 - update sdk/tabs metadata to work in SeaMonkey (for Ghostery)
> https://bugzilla.mozilla.org/show_bug.cgi?id=1071048
>
> Phil
>
> --
> Philip Chee , 
> http://flashblock.mozdev.org/ http://xsidebar.mozdev.org
> Guard us from the she-wolf and the wolf, and guard us from the thief,
> oh Night, and so be good for us to pass.


Intent to ship: CSS will-change

2014-10-31 Thread Benoit Girard
As of next week I intend to turn will-change on by default on all
platforms. It has been developed behind the
layout.css.will-change.enabled=true preference since Firefox 31 and
has been enabled for certified FirefoxOS apps since 1.4 [1] [2]. Blink
has already shipped this [3], and IE lists the feature as 'under
consideration' [4] with 163 votes [5].

Our MDN page for the feature is being finalized [6].

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=977757
[2] http://comments.gmane.org/gmane.comp.mozilla.devel.gaia/6463
[3] 
https://groups.google.com/a/chromium.org/d/msg/blink-dev/LwvyVCMQx1k/R_cPDxwEtEYJ
[4] https://status.modern.ie/csswillchange?term=will-change
[5] 
https://wpdev.uservoice.com/forums/257854-internet-explorer-platform/suggestions/6261294-css-will-change
[6] https://developer.mozilla.org/en-US/docs/Web/CSS/will-change

- Benoit Girard


Re: Intent to ship: CSS will-change

2014-10-31 Thread Benoit Girard
Yes, it's implemented in parts 1-4 of my patch queue in bug 961871.

Here's how it works (but this is subject to change at any time):
- The following are all in untransformed CSS pixel units. This makes
the implementation *much* simpler [1] and more predictable for web
authors [2].
- We look at the scrollport area (the bounds of the visible area) of
the document that will-change is used within to set the budget. This
is multiplied by some constant, currently 3, but I'll open a
discussion about raising that to perhaps as high as 9 within the
Firefox 36 time-frame.
- When a frame that uses will-change is seen, the area of the frame is
added to the usage for that document.
- If the total usage for that document is within budget, then all
will-change optimizations are performed. Otherwise none are performed.

[1] We need to decide the will-change budget at the end of the display
list phase, which is too early to know the will-change costs in layer
pixels; those only become known in the following layer-building phase.
Using CSS pixels prevents us from requiring a second pass in some cases
to properly implement the will-change budget in terms of layer pixels.
[2] If layer pixels were used, an author could lose their will-change
optimizations because we decided to re-rasterize a scaled layer at a
higher resolution. This would happen seemingly unpredictably from the
author's point of view.
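The budget logic described above can be sketched roughly as follows. This
is only an illustration of the scheme, not the actual Gecko code from bug
961871; all names (WillChangeBudget, AddFrame, InBudget) are hypothetical,
and areas are in untransformed CSS pixels as described:

```cpp
#include <cstdint>

// Hypothetical sketch of the per-document will-change budget. All
// dimensions are untransformed CSS pixels; names are illustrative only.
struct CssRect {
  int32_t width;
  int32_t height;
};

class WillChangeBudget {
public:
  // Budget = scrollport area * multiplier (currently 3, possibly raised
  // to as high as 9 in a later release).
  WillChangeBudget(const CssRect& aScrollport, uint64_t aMultiplier = 3)
    : mBudget(uint64_t(aScrollport.width) * aScrollport.height * aMultiplier)
    , mUsage(0) {}

  // Called for each frame using will-change during display list building:
  // the frame's area is added to the document's usage.
  void AddFrame(const CssRect& aFrame) {
    mUsage += uint64_t(aFrame.width) * aFrame.height;
  }

  // All-or-nothing: will-change optimizations apply only while the total
  // usage for the document fits within the budget.
  bool InBudget() const { return mUsage <= mBudget; }

private:
  uint64_t mBudget;
  uint64_t mUsage;
};
```

For an 800x600 scrollport with the current 3x multiplier, the budget is
1,440,000 CSS px^2, so three full-viewport will-change frames fit exactly
and a fourth would disable the optimizations for the whole document.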


On Fri, Oct 31, 2014 at 3:10 PM, L. David Baron  wrote:
> On Friday 2014-10-31 14:17 -0400, Benoit Girard wrote:
>> As of next week I intend to turn will-change on by default on all
>> platform. It has been developed behind the
>> layout.css.will-change.enabled;true preference since Firefox 31 and
>> has been enabled for certified FirefoxOS apps since 1.4[1] [2]. Blink
>> has already shipped this [3], IE lists the feature as
>> 'under-consideration [4] with 163 votes [5].
>
> Have we implemented protections to deal with overuse of will-change
> (e.g., just ignoring all will-change uses in a document that uses it
> too much, for some definition of too much)?
>
> -David
>
> --
> 𝄞   L. David Baron http://dbaron.org/   𝄂
> 𝄢   Mozilla  https://www.mozilla.org/   𝄂
>  Before I built a wall I'd ask to know
>  What I was walling in or walling out,
>  And to whom I was like to give offense.
>- Robert Frost, Mending Wall (1914)


Re: Intent to ship: CSS will-change

2014-10-31 Thread Benoit Girard
That's a good idea. Filed bug 1092320.

On Fri, Oct 31, 2014 at 3:48 PM, Ehsan Akhgari  wrote:
> Can we make sure to log something to the web console when we choose to
> dishonor will-change?  That will help web developers to be able to reason
> about why will-change doesn't give them the performance benefits that they
> expect.
>
>
> On 2014-10-31 3:36 PM, Benoit Girard wrote:
>>
>> Yes, it's implemented in part 1-4 of my patch queue in bug 961871.
>>
>> Here's how it works -but is subject to change at any time-:
>> - The following are all in untransformed CSS pixel unit. This makes
>> the implementation *much* simpler[1] and more predicable for web
>> authors[2].
>> - We look at the scrollport area (the bounds of the visible area) of
>> the document that will-change is used within to set the budget. This
>> is multiplied by some constant which is currently 3 times but I'll
>> open a discussion to change that to perhaps as high as 9 times within
>> the Firefox 36 time-frame.
>> - When a frame is seen that uses will-change the area of the frame is
>> added to the budget for that document.
>> - If the total usage for that document is in budget then all
>> will-change optimizations are performed. Otherwise none are performed.
>>
>> [1] We need to decide the will-change budget at the end of the display
>> list phase which is too early to know the will-change costs in layer
>> pixel which happens in the follow layer building phase. Using CSS
>> pixels prevents us from requiring a second pass in some cases to
>> properly implement the will-change budget in terms of layer pixels.
>> [2] If layer pixel were used an author could lose their will-change
>> optimizations because we decided to re-rasterize a scaled layer at a
>> higher resolution. This happens seemingly unpredictably from an
>> author' point of view.
>>
>>
>> On Fri, Oct 31, 2014 at 3:10 PM, L. David Baron  wrote:
>>>
>>> On Friday 2014-10-31 14:17 -0400, Benoit Girard wrote:
>>>>
>>>> As of next week I intend to turn will-change on by default on all
>>>> platform. It has been developed behind the
>>>> layout.css.will-change.enabled;true preference since Firefox 31 and
>>>> has been enabled for certified FirefoxOS apps since 1.4[1] [2]. Blink
>>>> has already shipped this [3], IE lists the feature as
>>>> 'under-consideration [4] with 163 votes [5].
>>>
>>>
>>> Have we implemented protections to deal with overuse of will-change
>>> (e.g., just ignoring all will-change uses in a document that uses it
>>> too much, for some definition of too much)?
>>>
>>> -David
>>>
>>> --
>>> 𝄞   L. David Baron http://dbaron.org/   𝄂
>>> 𝄢   Mozilla  https://www.mozilla.org/   𝄂
>>>   Before I built a wall I'd ask to know
>>>   What I was walling in or walling out,
>>>   And to whom I was like to give offense.
>>> - Robert Frost, Mending Wall (1914)
>>


Re: NS_StackWalk is totally broken on Win64

2014-11-06 Thread Benoit Girard
Off the top of my head:
- Are you compiling with --enable-profiling?
- The actual unwind is performed by the DbgHelp library. Make sure it's
up-to-date. We have had this issue on Windows XP, which required a
DbgHelp update. Unlikely for Win8, but a good thing to check.
- The call is made here:
http://mxr.mozilla.org/mozilla-central/source/xpcom/base/nsStackWalk.cpp#381
but I don't see anything wrong.
- Are you compiling with frame pointers? Apparently on AMD64 the
defaults may be different. Try compiling explicitly with frame
pointers (or making sure they are there). If that's it, we need to
update --enable-profiling to do the right thing.

On Thu, Nov 6, 2014 at 12:26 AM, Nicholas Nethercote
 wrote:
> Hi,
>
> NS_StackWalk is totally broken on Win64. I've been looking into this
> because it prevents DMD from working usefully, but I am stuck. Details
> are in https://bugzilla.mozilla.org/show_bug.cgi?id=1088343.
>
> You can see examples of this in debug mochitest logs when assertions
> failures occur. E.g. here's one on Windows 7 (32-bit):
>
> 11:53:45 INFO -  [Parent 2180] ###!!! ASSERTION: Invalid value
> (157286400 / 102760448) for
> explicit/js/compartment(http://too-big.com/)/stuff: 'false', file
> aboutMemory.js, line 0
> 11:53:45 INFO -  #01: NS_InvokeByIndex
> [xpcom/reflect/xptcall/md/win32/xptcinvoke.cpp:71]
> 11:53:45 INFO -  #02: CallMethodHelper::Invoke()
> [js/xpconnect/src/XPCWrappedNative.cpp:2394]
> 11:53:45 INFO -  #03: XPCWrappedNative::CallMethod(XPCCallContext
> &,XPCWrappedNative::CallMode)
> [js/xpconnect/src/XPCWrappedNative.cpp:1713]
> 11:53:45 INFO -  #04: XPC_WN_CallMethod(JSContext *,unsigned
> int,JS::Value *) [js/xpconnect/src/XPCWrappedNativeJSOps.cpp:1250]
> 11:53:45 INFO -  #05: js::CallJSNative(JSContext *,bool
> (*)(JSContext *,unsigned int,JS::Value *),JS::CallArgs const &)
> [js/src/jscntxtinlines.h:231]
>
> (And so on for another 96 frames.)
>
> Compare it with the corresponding one on Windows 8 (64-bit):
>
> 12:22:36 INFO -  [Parent 3484] ###!!! ASSERTION: Invalid value
> (157286400 / 102760448) for
> explicit/js/compartment(http://too-big.com/)/stuff: 'false', file
> aboutMemory.js, line 0
> 12:22:36 INFO -  #01: KERNELBASE + 0x26ea
>
> Yeah, a single unhelpful frame is all you get.
>
> It seems that the first two frames are gotten correctly and then
> things go haywire on the third frame. Typically we skip at least the
> first two frames so often the stack traces end up empty or almost
> empty.
>
> This is a bad situation; stack traces are used in lots of different
> places. If anyone has any idea what might be wrong, I'd love to hear
> about it. Thank you.
>
> Nick


Re: MozReview ready for general use

2014-11-06 Thread Benoit Girard
Cool. I'm eager to try this out. Sadly
https://hg.mozilla.org/hgcustom/version-control-tools is giving me a
503 error at this time.

On Wed, Nov 5, 2014 at 11:50 PM, Mark Côté  wrote:
> A couple months ago I gave a sneak peak into our new repository-based
> code-review tool based on Review Board.  I'm excited to announce that
> this tool, now named (descriptively but unimaginatively) MozReview, is
> ready for general use.
>
> In the interests of beta-testing our documentation at the same time as
> our code, I want to mostly confine my post to a link to our docs:
> http://mozilla-version-control-tools.readthedocs.org/en/latest/mozreview.html
>
> Everything you need should be in there.  The only item I want to
> highlight here is that, yes, we know the UI is currently not great.  We
> bolted some stuff onto the basic Review Board interface for our
> repository-centric work flow.  We're working on integrating that into a
> whole new look & feel that will also strip out the bits that aren't as
> pertinent to our approach to code review.  Please bear with us!  We hope
> that, UI warts notwithstanding, you'll still enjoy this fresh approach
> to code review at Mozilla.
>
> Mark


Re: Profiling on Linux

2014-11-13 Thread Benoit Girard
Thanks for pointing this out; there's no single all-purpose tool.

Just a reminder that we have documentation on how to look into performance
problems here:
https://developer.mozilla.org/en-US/docs/Mozilla/Performance

Zoom already has a page there. If there's any Mozilla-specific
information about using Zoom, it should probably live here:
https://developer.mozilla.org/en-US/docs/Mozilla/Performance/Profiling_with_Zoom

On Thu, Nov 13, 2014 at 1:03 PM, smaug  wrote:

> On 11/13/2014 08:01 PM, smaug wrote:
>
>> Hi all,
>>
>>
>> looks like Zoom profiler[1] is now free.
>> It has rather good UI on top of oprofile/rrprofile
>>
> perf/oprofile/rrprofile
>
>
>
> making profiling quite easy.
>
>> I've found it easier to use than Gecko profiler and it gives different
>> kinds of views to the same
>> data. However it does lack the JS specific bits Gecko profiler has.
>> Anyhow, anyone hacking Gecko on Linux[2], I suggest you to give a try.
>>
>>
>>
>>
>> -Olli
>>
>>
>>
>>
>> [1] http://www.rotateright.com/zoom/
>> [2] Zoom should run on OSX too, but never tried.
>>
>


Re: Capturing additional metadata in moz.build files

2014-12-10 Thread Benoit Girard
On Tue, Dec 9, 2014 at 1:46 PM, Gregory Szorc  wrote:

> * Building a subscription service for watching code and reviews
>

They all sound great, except I'm not sure what you mean by this one. Are
you suggesting that we have something like a list of emails in moz.build to
register for updates to a component? I'm not sure I'd like to see people
committing to the tree to register/CC themselves. But maybe you had
something else in mind?


Re: Is MOZ_SHARK still used?

2015-04-06 Thread Benoit Girard
+1 for removing it. While a great tool at the time, it's now a dead end
with good alternatives.

On Mon, Apr 6, 2015 at 11:11 AM, Eric Shepherd (Sheppy) <
esheph...@mozilla.com> wrote:

> Is it worth talking to someone from the TenFourFox project
> (http://www.floodgap.com/software/tenfourfox/)  to see if they have an
> opinion on this? They may be using Shark still for their debugging and
> testing processes.


Re: PSA: Xcode+gecko for newbies instructional video

2015-05-22 Thread Benoit Girard
I did the Eclipse generation. It's not really meant to compile; it's only
meant for writing code ATM. The challenge with Eclipse is dealing with the
CDT's limitations and quirks. There are parts of our code base that are
correct C++ that the CDT does not understand, some of which is that way for
performance reasons, particularly in the string code, which is pretty
common throughout Gecko. I've tried to submit patches to convert from
'valid C++ that the CDT doesn't like' to 'valid C++ that the CDT does
like', but those patches were rejected.

Given the limitations described, we can still make further improvements. I
still haven't investigated all the failure cases. Feel free to sync up with
me and Botond if you're interested in helping.

> I wouldn't recommend this for newbies

I'm not sure I agree. The current Eclipse project is not perfect, but it is
very functional. For people wanting to write a lot of cross-platform
patches, as is a must with graphics patches, the Eclipse backend is the
clear choice even with its limitations.

On Fri, May 22, 2015 at 2:49 PM,  wrote:

> On Friday, May 22, 2015 at 2:37:14 PM UTC-4, Mike Hoye wrote:
> > On 2015-05-22 2:19 PM, Jet Villegas wrote:
> > > Do we have instructions for Visual Studio anywhere?
> > Video for Visual Studio Community Edition and Eclipse would both be very
> > much welcomed by the community.
> >
> > - mhoye
>
> The Eclipse project generator is a WIP/incomplete (doesn't compile fully,
> code-complete or navigate fully, incorrect defines causing wrong
> completions), which we need to make very clear if we are promoting it. I
> wouldn't recommend this for newbies, the current state is far too confusing.
> That said, I think the approach I am planning to use for Xcode (using the
> build backend directly to get the build commands) is what the Eclipse
> project needs to do. Hopefully I can help with that.


Re: Per-test chaos mode now available, use it to help win the war on orange!

2015-07-10 Thread Benoit Girard
I've filed a bug for enabling Chaos Mode without recompiling:

https://bugzilla.mozilla.org/show_bug.cgi?id=1182516

On Mon, Jun 8, 2015 at 9:12 AM,  wrote:

> On Thursday, June 4, 2015 at 6:15:35 PM UTC-4, Chris AtLee wrote:
> > Very interesting, thank you!
> >
> > Would there be a way to add an environment variable or harness flag to
> run
> > all tests in chaos mode?
> >
>
> There isn't at the moment but one can definitely be added if there is a
> use for it. At the moment the easiest way to enable chaos mode globally is
> to change the value of sChaosFeatures in ChaosMode.h and recompile.
>
> kats


Re: Summary of e10s performance (Talos + Telemetry + crash-stats)

2015-07-15 Thread Benoit Girard
For the e10s Talos regressions, see
https://bugzilla.mozilla.org/show_bug.cgi?id=1174776 and
https://bugzilla.mozilla.org/show_bug.cgi?id=1184277. We've already
diagnosed one source of the regression: a difference in GC/CC
behavior when running e10s Talos.

On Fri, Jul 10, 2015 at 5:44 PM, Vladan Djeric  wrote:

> Yup, the median shutdown duration for Release 39 users on Windows with
> Telemetry is 2.3 seconds for example: http://mzl.la/1HSHiD8
> Those are also the kinds of shutdown times I see on my Windows machines
> when I have 3-5 windows open with 5-10 tabs each.
>
> What is your experience?
> Btw, you can go to about:telemetry and look through your archived Telemetry
> pings to see a history of your own shutdownDurations. Open about:telemetry,
> select "Archived ping data", open the "Simple Measurements" section, and
> use the next-previous arrows to look through your Telemetry submissions.
> Focus on the "saved-session" pings.
>
> On Fri, Jul 10, 2015 at 5:33 PM, Mike Hommey  wrote:
>
> > On Fri, Jul 10, 2015 at 03:59:43PM -0400, Vladan Djeric wrote:
> > > A few of us on the perf team (+ Joel Maher) looked at e10s performance
> &
> > > stability using Talos, Telemetry, and crash-stats. I wrote up the
> > > conclusions below.
> > >
> > > Notable improvements in Talos tests [1]:
> > >
> > > * Hot startup time in Talos improved by about 50% across all platforms
> > > (ts_paint [2]). This test measures time from Firefox launch until a
> > Firefox
> > > window is first painted (ts_paint); I/O read costs are not accounted
> for,
> > > as data is already cached in the OS disk buffer before the test.
> > > * The tsvgr_opacity test improved 50-80% across all platforms. This is
> a
> > > sign of a reduction in the overhead of loading a page, instead of an
> > > improvement in actual SVG performance.
> > > * Linux scrolling performance improved 5-15%
> > > * The long-standing e10s WebGL performance regression has been fixed
> > > * SVG rendering performance (tsvgx) is ~25% better on Windows 7 & 8,
> but
> > it
> > > is 10% worse on Windows XP and 25% worse on Linux
> > >
> > > Notable regressions in Talos tests [1]:
> > >
> > > * There are several large regressions unique to Windows XP. Scrolling
> > > smoothness regressed significantly (5-6 times worse on tp5o_scroll and
> > > tscrollx [2]), resizing of Firefox windows is 150% worse (tresize), SVG
> > > rendering performance is 25% worse (tsvgx)
> > > * Page loading time regressed across all platforms (tp5o). Linux
> > regressed
> > > ~30%, OS X 10.10 regressed 20%, WinXP/Win8/Win7 all regressed ~10%.
> > > Page-loading with accessibility enabled (a11yr) saw similar
> regressions.
> > > * Time to open a new Firefox window (tpaint) regressed 30% on Linux,
> and
> > > across different versions of Windows (<10%)
> > > * Resizing of Firefox windows (tresize) is ~15% worse on Linux
> > > * Note: not all tests are compatible with e10s yet (e.g.
> session-restore
> > > performance test) so this list isn't complete
> > >
> > > Notable improvements from Telemetry data [3]:
> > >
> > > * Overall tab animation smoothness improved significantly: 50% vs 30%
> of
> > > tab animation frames are hitting the target 16ms inter-frame interval.
> > See
> > > FX_TAB_ANIM_* graphs in [3] to see the distribution of frame intervals.
> > > Note that not all tab animations benefited equally.
> > > * e10s significantly decreased jank caused by GC & CC, both in parent &
> > > content processes (GC_MAX_PAUSE_MS, GC_SCC_SWEEP_MAX_PAUSE_MS,
> > > CYCLE_COLLECTOR_MAX_PAUSE, etc [3])
> > > * Unlike Talos, Telemetry suggests that the time to open a new Firefox
> > > window improved with e10s (FX_NEW_WINDOW_MS)
> > > * Median time to restore a saved session improved by 40ms or 20%
> > > ("simpleMeasurements/sessionRestored")
> > > * Median shutdown duration improved by 120ms or 10%
> > > ("simpleMeasurements/shutdownDuration")
> >
> > Wait. What? Median shutdown duration is 1.2s ?!?
> >


Vim Users: Introducing YouCompleteMe Support

2015-07-20 Thread Benoit Girard
YouCompleteMe support has landed in Gecko. It doesn't require any Mozilla
configuration changes on your part. We've landed a file at the root of the
tree, '.ycm_extra_conf.py', that lets YCM query the build system for the
configuration of each file, giving accurate results that match your
environment without any hassle.

I've been using YCM for over a week locally without any issues. The
benefits of YCM: (1) it doesn't need to maintain an index like CTags
(cognitive load), (2) the completion is snappy, (3) you can jump to
definitions/declarations, (4) it shows compile errors while editing, and
(5) it's accurate.

Follow this guide to set up YCM with the clang option; it should take ~15
minutes:
https://developer.mozilla.org/en-US/docs/YouCompleteMe

-Benoit Girard