Re: mach has landed

2012-10-05 Thread Neil

Nicholas Nethercote wrote:

 On Thu, Oct 4, 2012 at 10:18 AM, Justin Lebar justin.le...@gmail.com wrote:

  1) Build errors are hard to identify with make. Parallel execution can make
  them even harder to track down. Poor output from invoked processes is also a
  problem.

 I have a script [1] which works well enough for my purposes in the normal
 Mozilla build (I haven't tried it with mach).  It highlights stderr in red
 and summarizes all the errors after make finishes, so you don't have to go
 searching for them.

I redirect output to a file and then search for the first "error:" string, but
I'm kind of low-tech like that.
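The highlight-and-summarize idea can be sketched in a few lines of bash (an illustration only, not the actual script referenced as [1]; `red_make` is a made-up name, and unlike the real script this version replays stderr after the build rather than colouring it live):

```shell
#!/bin/bash
# Sketch: run a build command, capture its stderr, replay it in red, and
# print a plain summary block at the end so errors need not be hunted
# down in the scrollback.
red_make() {
  local errlog status
  errlog=$(mktemp)
  "$@" 2>"$errlog"            # stdout passes through untouched
  status=$?
  # Replay captured stderr in red on the terminal...
  sed $'s/^/\e[31m/;s/$/\e[0m/' "$errlog" >&2
  # ...then once more, uncoloured, as a summary.
  echo "=== stderr summary ==="
  cat "$errlog"
  rm -f "$errlog"
  return $status
}
```

Invoked as e.g. `red_make make -j8`, it leaves stdout alone and funnels all diagnostics into the trailing summary.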
 

I run a parallel make, then if that fails I run a serial make in the 
deepest folder that I can identify (not always easy with parallel make, 
but better than doing a top-level serial make). If there are too many 
errors to deal with then (except on Windows) I run make from inside vim.
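That workflow can be captured in a tiny helper (a sketch; `build_with_fallback` is a made-up name, and the real arguments would be the usual invocations, e.g. `build_with_fallback "make -j8" "make -C dom/base -j1"`):

```shell
#!/bin/bash
# Sketch of the workflow: run the parallel build; if it fails, re-run a
# serial (-j1) build in the deepest directory the failure could be pinned
# to, where the error output comes out in order.
build_with_fallback() {
  local parallel_cmd=$1 serial_cmd=$2
  if eval "$parallel_cmd"; then
    echo "parallel build succeeded"
  else
    echo "parallel build failed; retrying serially"
    eval "$serial_cmd"
  fi
}
```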


--
Warning: May contain traces of nuts.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: IMPORTANT: Do not pull from inbound

2012-10-05 Thread Neil

Ehsan Akhgari wrote:

 use a better revision control system

Or a better file system perhaps ;-)



Re: Flash Player freezes XULRunner

2012-10-05 Thread Philipp Wagner
Am 05.10.2012 03:54, schrieb James Newell:
 Loading any page with a SWF into a browser element causes XULRunner
 15.0.1 to freeze on Win7 32bit and 64bit. I've encountered the same
 problem with a few versions of Flash Player, both v10 and v11.
 
 Firefox 15.0.1 works fine. What do I need to do to get Flash to
 work?

Did you enable OOP (out-of-process) plugins? I remember that setting not being
on by default in XULRunner, but the non-OOP codepath is no longer tested and is
broken with newer Flash/XULRunner versions (I don't remember since when).
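If OOP plugins are indeed off, the pref to look at would be something like the following (assuming the standard `dom.ipc.plugins.enabled` pref applies to your XULRunner version; per-plugin variants of this pref also existed on some platforms):

```js
// defaults/preferences/prefs.js (or flip it in about:config):
// enable out-of-process plugins
pref("dom.ipc.plugins.enabled", true);
```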

Philipp


Re: NS_New$THING vs. new $THING

2012-10-05 Thread Jonas Sicking
On Mon, Oct 1, 2012 at 9:27 AM, Nathan Froyd froy...@mozilla.com wrote:
 I recently filed a bug (bug 792169) for adding NS_NewIMutableArray, in
 service of deleting nsISupportsArray.  The reviewer asked me to use
 more standard C++ instead of perpetuating the NS_New* idiom and I did
 so, with a static |Create| member function.

 However, looking through other code--especially the various bits under
 dom/ that have been added for B2G--I see that the NS_New* style is alive
 and well.

 Points for NS_New*:

 + Avoid exporting internal data structures;
 + Possibly more efficient than do_CreateInstance and friends;
 - Usually requires %{C++ blocks in IDL files;
 - Less C++-y.

 Points for new and/or static member functions:

 + More C++-y;
 + Less function call overhead compared to NS_New* (modulo LTO);
 - Drags in more headers.

 ...and there are probably other things that I haven't thought of for
 both of them.

 So which style do we want to embrace?  Or are there good reasons to
 continue with both?

First off, using do_CreateInstance should generally be very far down
the list of alternatives. You should only really use that if you're
writing a component that you're expecting external developers to be
able to override.

Using NS_New* makes sense if you want to avoid include hell and are
going to be using an XPCOM interface rather than the concrete class to
interact with your object.

However generally speaking, we have been overusing XPCOM interfaces
*way* too much. Ask yourself if you really need an interface or if you
could just as well be using the concrete class. With WebIDL you don't
even need to use XPCOM interfaces to expose something to javascript!

Using concrete classes has enough advantages in code simplicity and
clarity that, if you can do it (which is often the case), it can be
worth adding an extra #include or two. And if you're #including the
header for the concrete class rather than the xpidl-generated interface,
you haven't even added any extra #includes!
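As a rough illustration in plain C++ (no real XPCOM here; `IMutableArray`, `MutableArray`, and `NS_NewIMutableArray` are simplified stand-ins for the pattern, with `std::unique_ptr` standing in for refcounted pointers), the two styles look like:

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// Stand-in for an XPCOM-style interface.
struct IMutableArray {
  virtual ~IMutableArray() = default;
  virtual void AppendElement(const std::string& aValue) = 0;
  virtual size_t Length() const = 0;
};

class MutableArray final : public IMutableArray {
 public:
  // Style 2: a static Create member function on the concrete class.
  static std::unique_ptr<MutableArray> Create() {
    return std::unique_ptr<MutableArray>(new MutableArray());
  }
  void AppendElement(const std::string& aValue) override {
    mItems.push_back(aValue);
  }
  size_t Length() const override { return mItems.size(); }

 private:
  MutableArray() = default;
  std::vector<std::string> mItems;
};

// Style 1: an NS_New* factory function. Callers compile against the
// interface header only, so the concrete class stays unexported.
std::unique_ptr<IMutableArray> NS_NewIMutableArray() {
  return MutableArray::Create();
}
```

The trade-off from the list above is visible here: callers of `NS_NewIMutableArray` only see the interface, while callers of `MutableArray::Create` need the concrete class's header but can then use the class directly.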

/ Jonas


Re: NS_New$THING vs. new $THING

2012-10-05 Thread Boris Zbarsky

On 10/5/12 8:55 PM, Jonas Sicking wrote:

 With WebIDL you don't even need to use XPCOM interfaces to expose something
 to javascript!

Indeed.  As of a few days ago, you don't even need to inherit from
nsISupports.  ;)


-Boris


Re: An object that corresponds to the life of a document load?

2012-10-05 Thread Jonas Sicking
On Mon, Oct 1, 2012 at 7:25 AM, Henri Sivonen hsivo...@iki.fi wrote:
 Do we have an object that represents the life of a document load from
 the very beginning of deciding to load a URL in a browsing context to
 the firing of the load event?

 nsIChannel/nsIRequest is not it, because it only lasts until the end
 of the network stream, after which there is still time until the parse
 completes, and after that still more time until the load event fires.

I thought we held on to the nsIChannel for as long as the nsDocument
was alive. See mChannel.

But it might still not be what you're looking for since we create a
new nsIChannel on each redirect.

/ Jonas


Re: Proposed policy change: reusability of tests by other browsers

2012-10-05 Thread Jonas Sicking
Sorry to bring back an old thread, but the upcoming "Test the Web
Forward" meeting reminded me of it.

In general I really approve of this idea, however I have one major concern.

 2) Write an introduction to testharness.js targeted at people familiar
 with mochitest.  testharness.js is the de facto standard testing
 harness in the web standards world, and we already can run such tests
 as mochitests automatically (see dom/imptests/), so JavaScript tests
 meant to be usable by other browsers should be written in that format.

As others have pointed out, testharness.js is much less convenient to
use than mochitest.

Simply wrapping

test(function() {
  // test here
});

only works for the most simple tests. Most tests that I write use lots
of synchronous and asynchronous callbacks. Each one of those needs to
be wrapped to catch exceptions. There's also a lot more overhead in
the harness due to trying to count how much of a test you pass.
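To illustrate the wrapping problem (a self-contained sketch, not testharness.js itself; `test`, `stepFunc`, and `results` are made-up names standing in for the real harness):

```javascript
// Minimal stand-in for a testharness.js-style harness: each test either
// passes or records the exception that failed it.
const results = [];
function test(name, fn) {
  try {
    fn();
    results.push({ name, status: "PASS" });
  } catch (e) {
    results.push({ name, status: "FAIL", message: String(e.message || e) });
  }
}

// A callback invoked *outside* test() -- from an event or timer -- is not
// covered by the try/catch above, so it has to be wrapped by hand.
function stepFunc(name, fn) {
  return (...args) => {
    try {
      fn(...args);
    } catch (e) {
      results.push({ name, status: "FAIL", message: String(e.message || e) });
    }
  };
}

// Synchronous body: the harness catches the failure for us.
test("sync assertion", () => {
  if (1 + 1 !== 2) throw new Error("arithmetic is broken");
});

// "Asynchronous" callback (invoked directly here to keep the sketch
// synchronous): without stepFunc the throw would escape the harness.
const onLoad = stepFunc("async assertion", () => {
  throw new Error("load handler failed");
});
onLoad();
```

testharness.js provides `t.step()` / `t.step_func()` for exactly this, but every callback in an asynchronous test still has to be routed through them by hand.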

In general, testharness.js seems more optimized for producing a results
report that measures how close an implementation is to implementing a
feature than for making it easy to write tests.

I believe many developers right now spend as much time writing tests as
they do implementing features. That is a very high cost, but one that is
definitely worth paying. However, we should be working towards lowering
that cost rather than increasing it.

Rather than trying to convince developers that testharness.js would in
fact not increase the cost of writing tests, I think we should try to
get the W3C to adjust testharness.js so that it's easier to write tests
for. If we make writing W3C tests as easy as writing mochitests, then I
would absolutely agree with your proposal. I would imagine that would
also make it easier to get other browser vendors to do the same, as well
as members of the web development community.

Another problem I think we'd have is that many of our tests use
generators and yield. This *dramatically* cuts down on the complexity
of writing complex tests that have lots of asynchronous callbacks. For
example, [1] and [2] would have been much harder to write without them.
I think our approach here could be to migrate these tests to ES6-based
generators as soon as we have them implemented in Gecko, and then
submit them to the W3C once enough browsers implement ES6.

I don't think that we should be telling people to not use generators
in the meantime. My experience is that rewriting tests to use
generators both cuts down on the test writing time, and makes it much
less likely that the test ends up with intermittent orange bugs.
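The shape of the pattern (a self-contained sketch, not the actual mochitest helpers; `runTest`, `log`, and the doubling stand-in for an async API are all made up): a small driver pumps the generator and feeds each asynchronous result back in through `next()`, so the test body reads top to bottom with no nested callbacks:

```javascript
const log = [];

// Driver: pump the generator; each yielded value stands in for an async
// request whose result is fed back in as the value of the yield expression.
function runTest(genFn) {
  const it = genFn();
  let step = it.next();
  while (!step.done) {
    const result = step.value * 2;  // pretend this came back from an async API
    step = it.next(result);
  }
  log.push("done");
}

runTest(function* () {
  // Ordinary control flow: each yield suspends until the "async" result
  // is available, with no callback nesting.
  const a = yield 1;              // "request" 1, get back 2
  log.push(`first result: ${a}`);
  const b = yield a + 1;          // "request" 3, get back 6
  log.push(`second result: ${b}`);
});
```

In the real tests the driver resumes the generator from asynchronous callbacks rather than a synchronous loop, but the test body looks the same: straight-line code with one `yield` per asynchronous step.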

[1] 
http://mxr.mozilla.org/mozilla-central/source/content/base/test/test_xhr_progressevents.html?force=1
[2] 
http://mxr.mozilla.org/mozilla-central/source/dom/indexedDB/test/unit/test_add_put.js

/ Jonas