Re: Include dependencies in Gecko

2013-09-08 Thread Neil

Nicholas Nethercote wrote:

 On Wed, Aug 8, 2012 at 12:36 PM, Nicolas Silva <nsi...@mozilla.com> wrote:

  I have an ugly script that goes through the dependency files generated by make
  to collect information about dependencies. I'll clean it up if you are
  interested (and rewrite it in python because I suppose people in here don't
  want to deal with D code).

 There's a clang tool called include-what-you-use that does this properly
 (http://code.google.com/p/include-what-you-use/).

 https://bugzilla.mozilla.org/show_bug.cgi?id=634839 is a bug for using it to
 clean up SpiderMonkey's headers, which made progress but stalled out.
 https://bugzilla.mozilla.org/show_bug.cgi?id=772807 was a bug for cleaning up
 editor/, which actually completed, though the build time improvements were
 minor.

 IME, this is one of those things that seems easy and worthwhile but usually ends
 up being a real pain, and doesn't seem worth it. Getting it 90% right isn't too
 bad, but there's often some trouble in the last 10% that torpedoes it.
 

Someone pointed out a small problem with include-what-you-use: it tries 
to include mozilla-config.h although that is of course already 
force-included on the command line.


--
Warning: May contain traces of nuts.


Re: Include dependencies in Gecko

2013-09-08 Thread Nicholas Cameron
Yes. One of many mistakes it can helpfully make for you (along with including 
impl headers instead of the API ones, only being correct for the current build, 
etc.).

I believe you can set up rules to stop it doing this particular thing. But in 
general, IWYU is a semi-automatic process and requires some checking on the 
part of the user.
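For illustration only (a rough sketch from memory, not something I've checked
against our tree or a particular IWYU version), the kind of in-source
annotations I mean look like this; mapping files can express the same thing
more centrally:

  // In a file that IWYU keeps telling to #include mozilla-config.h
  // (which the build already force-includes on the command line):
  // IWYU pragma: no_include "mozilla-config.h"

  // And in a private implementation header, to make IWYU suggest the
  // public header instead of this one (SomethingPublic.h is a made-up name):
  // IWYU pragma: private, include "SomethingPublic.h"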

 
 Someone pointed out a small problem with include-what-you-use: it tries
 to include mozilla-config.h although that is of course already
 force-included on the command line.
 
 
 



Re: Including <algorithm> just to get std::min and std::max

2013-09-08 Thread Nicholas Cameron
I timed builds to see if this makes a significant difference and it did not.

I timed a clobber debug build using clang with no ccache on Linux on a fast 
laptop. I timed using a pull from m-c about a week old (I am using this pull 
because I have a lot of other stats on it). I then applied bjacob's nscoord 
patch from the bug and a patch of my own which does a similar thing for some 
Moz2D headers which get pulled into a lot of files (~900 other headers, 
presumably more cpp files). For both runs I did a full build, then clobbered, 
then timed a build. I avoided doing any other work on the laptop. n=1, so there 
might be variation, but my experience with build times is that there usually 
isn't much.

Before changes:

real  38m54.373s
user  234m48.508s
sys 7m18.708s

after changes:

real  39m11.123s
user  234m26.864s
sys 7m10.336s

The removed headers are also the ideal case for ccache, so incremental builds 
or real life clobber builds should be affected even less by these changes.

I don't think these kinds of time improvements make it worth duplicating std 
library code into mfbt; we may as well just pull in the headers and forget 
about it. A caveat would be if it makes a significant difference on slower 
systems.

Given that improving what gets included via headers can make a significant 
difference to build time, this makes me wonder exactly what aspect of header 
inclusion (if not size, which we should catch here) makes the difference.

Nick.

On Sunday, September 8, 2013 3:22:01 PM UTC+12, Benoit Jacob wrote:
 Hi,

 It seems that we have some much-included header files including <algorithm>
 just to get std::min and std::max.

 That seems like an extreme case of a low ratio between lines of code included
 (9,290 on my system, see Appendix below) and lines of code actually used
 (say 6 with whitespace).

 I ran into this issue while trying to minimize nsCoord.h (
 https://bugzilla.mozilla.org/show_bug.cgi?id=913868 ) and in my patch, I
 resorted to defining my own min/max functions in a nsCoords_details
 namespace.

 This prompted comments on that bug suggesting that it might be better to
 have that in MFBT. But that, in turn, sounds like overturning our recent
 decision to switch to std::min / std::max, which I feel is material for
 this mailing list.

 It is also conceivable to keep saying that we should use std::min /
 std::max *except* in headers that don't otherwise include <algorithm>,
 where it may be more reasonable to use the cheap-to-#include variant
 instead.

 What do you think?

 Benoit

 === Appendix: how big and long to compile is <algorithm>? ===

 On my Ubuntu 12.04 64bit system, with GCC 4.6.3, including <algorithm>
 means recursively including 9,290 lines of code:

 $ echo '#include <algorithm>' > a.cpp && g++ -save-temps -c a.cpp && wc -l a.ii
 9290 a.ii

 One may wonder what this implies in terms of compilation times; here is a
 naive answer. I'm timing 10 successive compilations of a file that just
 includes <iostream>, and then I do the same with a file that also includes
 <algorithm>.

 $ echo '#include <iostream>' > a.cpp && time (g++ -c a.cpp && g++ -c a.cpp
 && g++ -c a.cpp && g++ -c a.cpp && g++ -c a.cpp && g++ -c a.cpp && g++ -c
 a.cpp && g++ -c a.cpp && g++ -c a.cpp && g++ -c a.cpp)

 real    0m1.391s
 user    0m1.108s
 sys     0m0.212s

 $ echo '#include <algorithm>' > a.cpp && echo '#include <iostream>' >> a.cpp &&
 time (g++ -c a.cpp && g++ -c a.cpp && g++ -c a.cpp && g++ -c a.cpp && g++
 -c a.cpp && g++ -c a.cpp && g++ -c a.cpp && g++ -c a.cpp && g++ -c a.cpp &&
 g++ -c a.cpp)

 real    0m1.617s
 user    0m1.324s
 sys     0m0.244s

 (I actually repeated this many times and kept the best result for each; my
 hardware is a Thinkpad W520 with a 2.5GHz, 8M cache Core i7.)

 So we see that adding the #include <algorithm> made each compilation 23 ms
 longer on average (226 ms for 10 compilations).



"Unable to restore focus" and 32-bit Linux builds

2013-09-08 Thread Nicholas Nethercote
Hi,

I want to land https://bugzilla.mozilla.org/show_bug.cgi?id=910517,
which is just a clean-up of memory reporters (and is blocking a bunch
of follow-up work.)  But I'm blocked by some baffling time-outs
occurring only on 32-bit opt Linux builds.  (32-bit debug Linux builds
are fine.)

About 90% of mochitest-1 runs hit this time-out.  The problem is that
in a few tests this error from TestRunner.js is hit:

  Error: Unable to restore focus, expect failures and timeouts.

and then the test times out.  This happens reliably;  it's shown up
once on inbound (the patch was backed out) and twice on try, all on
different revisions.

Does anyone know what this error means, and how I might fix it?  I
have no idea how my patch could be having this effect.  The code I've
changed shouldn't even be running in mochitest-1.



In order to try to debug it, I've been attempting to do 32-bit Linux
builds. I have an Ubuntu 13.04 machine and I'm following the
Instructions for Ubuntu at
https://developer.mozilla.org/en/docs/Compiling_32-bit_Firefox_on_a_Linux_64-bit_OS.

The build gets quite a long way, but linking of libnptest.so (whatever
that is) fails due to lots of errors like this one:

0:19.91 /usr/bin/ld.gold.real: warning: skipping incompatible
//usr/lib/x86_64-linux-gnu/libgtk-x11-2.0.so while searching for
gtk-x11-2.0
 0:19.91 /usr/bin/ld.gold.real: error: cannot find -lgtk-x11-2.0

(The full list is below.)  I apparently have a copy of a 32-bit
libgtk-x11-2.0.so:

[bayou:~] locate libgtk-x11
/usr/lib/i386-linux-gnu/libgtk-x11-2.0.so.0
/usr/lib/i386-linux-gnu/libgtk-x11-2.0.so.0.2400.17
/usr/lib/x86_64-linux-gnu/libgtk-x11-2.0.a
/usr/lib/x86_64-linux-gnu/libgtk-x11-2.0.so
/usr/lib/x86_64-linux-gnu/libgtk-x11-2.0.so.0
/usr/lib/x86_64-linux-gnu/libgtk-x11-2.0.so.0.2400.17

but I guess it's not being found.

Has anyone had success with 32-bit builds on Ubuntu 13.04?  I've
included the mozconfig I used below, in case it's useful.

Thanks.

Nick


export CROSS_COMPILE=1
ac_add_options --enable-optimize='-O'
ac_add_options --enable-tests

mk_add_options MOZ_OBJDIR=@TOPSRCDIR@/o32
#mk_add_options MOZ_MAKE_FLAGS=-j8 --quiet --no-print-directory
mk_add_options MOZ_MAKE_FLAGS=-j8

export CC="clang -m32"
export CXX="clang++ -m32"
AR=ar

ac_add_options --x-libraries=/usr/lib32
ac_add_options --target=i686-pc-linux
ac_add_options --disable-crashreporter  # no 32-bit curl-dev lib
ac_add_options --disable-libnotify  # no 32-bit libinotify-dev
ac_add_options --disable-crashreporter  # no 32-bit libgnomevfs-dev
ac_add_options --disable-gstreamer  # no 32-bit libgstreamer, etc



 0:19.91 /usr/bin/ld.gold.real: warning: skipping incompatible
//usr/lib/x86_64-linux-gnu/libgtk-x11-2.0.so while searching for
gtk-x11-2.0
 0:19.91 /usr/bin/ld.gold.real: error: cannot find -lgtk-x11-2.0
 0:19.91 /usr/bin/ld.gold.real: warning: skipping incompatible
//usr/lib/x86_64-linux-gnu/libatk-1.0.so while searching for atk-1.0
 0:19.91 /usr/bin/ld.gold.real: error: cannot find -latk-1.0
 0:19.91 /usr/bin/ld.gold.real: warning: skipping incompatible
//usr/lib/x86_64-linux-gnu/libgio-2.0.so while searching for gio-2.0
 0:19.91 /usr/bin/ld.gold.real: error: cannot find -lgio-2.0
 0:19.91 /usr/bin/ld.gold.real: warning: skipping incompatible
//usr/lib/x86_64-linux-gnu/libpangoft2-1.0.so while searching for
pangoft2-1.0
 0:19.91 /usr/bin/ld.gold.real: error: cannot find -lpangoft2-1.0
 0:19.91 /usr/bin/ld.gold.real: warning: skipping incompatible
//usr/lib/x86_64-linux-gnu/libfreetype.so while searching for freetype
 0:19.91 /usr/bin/ld.gold.real: error: cannot find -lfreetype
 0:19.92 /usr/bin/ld.gold.real: warning: skipping incompatible
//usr/lib/x86_64-linux-gnu/libfontconfig.so while searching for
fontconfig
 0:19.92 /usr/bin/ld.gold.real: error: cannot find -lfontconfig
 0:19.92 /usr/bin/ld.gold.real: warning: skipping incompatible
//usr/lib/x86_64-linux-gnu/libgdk-x11-2.0.so while searching for
gdk-x11-2.0
 0:19.92 /usr/bin/ld.gold.real: error: cannot find -lgdk-x11-2.0
 0:19.92 /usr/bin/ld.gold.real: warning: skipping incompatible
//usr/lib/x86_64-linux-gnu/libpangocairo-1.0.so while searching for
pangocairo-1.0
 0:19.92 /usr/bin/ld.gold.real: error: cannot find -lpangocairo-1.0
 0:19.92 /usr/bin/ld.gold.real: warning: skipping incompatible
//usr/lib/x86_64-linux-gnu/libgdk_pixbuf-2.0.so while searching for
gdk_pixbuf-2.0
 0:19.92 /usr/bin/ld.gold.real: error: cannot find -lgdk_pixbuf-2.0
 0:19.92 /usr/bin/ld.gold.real: warning: skipping incompatible
//usr/lib/x86_64-linux-gnu/libpango-1.0.so while searching for
pango-1.0
 0:19.92 /usr/bin/ld.gold.real: error: cannot find -lpango-1.0
 0:19.92 /usr/bin/ld.gold.real: warning: skipping incompatible
//usr/lib/x86_64-linux-gnu/libcairo.so while searching for cairo
 0:19.92 /usr/bin/ld.gold.real: error: cannot find -lcairo
 0:19.92 /usr/bin/ld.gold.real: warning: skipping incompatible
//usr/lib/x86_64-linux-gnu/libgobject-2.0.so while searching for
gobject-2.0
 0:19.92 /usr/bin/ld.gold.real: 

Re: "Unable to restore focus" and 32-bit Linux builds

2013-09-08 Thread Mike Hommey
On Sun, Sep 08, 2013 at 05:29:03PM -0700, Nicholas Nethercote wrote:
 0:19.91 /usr/bin/ld.gold.real: warning: skipping incompatible
 //usr/lib/x86_64-linux-gnu/libgtk-x11-2.0.so while searching for
 gtk-x11-2.0
  0:19.91 /usr/bin/ld.gold.real: error: cannot find -lgtk-x11-2.0
 
 (The full list is below.)  I apparently have a copy of a 32-bit
 libgtk-x11-2.0.so:

In fact, you *don't* have a copy of a 32-bit libgtk-x11-2.0.so, which is
required for linking. You have libgtk-x11-2.0.so.0*. Those won't be used
by the linker.

Mike


Re: Including <algorithm> just to get std::min and std::max

2013-09-08 Thread Nicholas Nethercote
On Sun, Sep 8, 2013 at 4:29 PM, Nicholas Cameron
<nick.r.came...@gmail.com> wrote:

 I don't think these kind of time improvements make it worth duplicating std 
 library code into mfbt, we may as well just pull in the headers and forget 
 about it. A caveat would be if it makes a significant difference on slower 
 systems.

 Given that improving what gets included via headers can make significant 
 difference to build time, this makes me wonder exactly what aspect of header 
 inclusion (if not size, which we should catch here) makes the difference.

My gut feeling is that the wins come from (a) cases where an
additional include causes *huge* amounts of extra code to be pulled
in, and (b) faster incremental builds due to fewer dependencies.
(I've been focusing on (b) with the jsapi.h-dependence minimization.)
In contrast, the inclusion of <algorithm> causes only a moderate
amount of extra code to be pulled in.

Nick


Re: Including <algorithm> just to get std::min and std::max

2013-09-08 Thread Benoit Jacob
We have many other headers including <algorithm>; it would be interesting
to compare the percentage of our cpp files that recursively include
<algorithm> before and after that patch; I suppose that just a single patch
like that is not enough to move that needle much, because there are other
ways that <algorithm> gets included in the same cpp files.

I do expect, though, that the 23 ms overhead from including <algorithm> is
real (at least as an order of magnitude), so I still expect that we can
save 23 ms times the number of cpp files that currently include <algorithm>
and could avoid doing so.

Those 9,000 lines of code are indeed a moderate amount of extra code
compared to the million lines of code that we have in many compilation
units, and yes, I think we all expect that there are bigger wins to be made.
This one is a relatively easy one though, and 9,000 lines, while moderate,
is not negligible. How many other times are we neglecting 9,000 lines, and
how much does that add up to? In the end, I believe that the ratio

(number of useful lines of code) / (total lines of code included)

is a very meaningful metric, and including <algorithm> for min/max scores
less than 1e-3 on that metric.
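For concreteness, the cheap variant I have in mind is only a few lines. A
rough sketch along the lines of my nsCoord.h patch (not necessarily what an
MFBT version would end up looking like):

  // Local substitutes for std::min / std::max, so that this header does
  // not need to pull in <algorithm>.
  namespace nsCoords_details {
    template<typename T>
    inline const T& min(const T& a, const T& b) { return b < a ? b : a; }
    template<typename T>
    inline const T& max(const T& a, const T& b) { return a < b ? b : a; }
  }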

Benoit



2013/9/8 Nicholas Cameron <nick.r.came...@gmail.com>

 I timed builds to see if this makes a significant difference and it did
 not.

 I timed a clobber debug build using clang with no ccache on Linux on a
 fast laptop. I timed using a pull from m-c about a week old (I am using
 this pull because I have a lot of other stats on it). I then applied
 bjacob's nscoord patch from the bug and a patch of my own which does a
 similar thing for some Moz2D headers which get pulled into a lot of files
 (~900 other headers, presumably more cpp files). For both runs I did a full
 build, then clobbered, then timed a build. I avoided doing any other work
 on the laptop. n=1, so there might be variation, but my experience with
 build times is that there usually isn't much.

 Before changes:

 real  38m54.373s
 user  234m48.508s
 sys 7m18.708s

 after changes:

 real  39m11.123s
 user  234m26.864s
 sys 7m10.336s

 The removed headers are also the ideal case for ccache, so incremental
 builds or real life clobber builds should be affected even less by these
 changes.

 I don't think these kind of time improvements make it worth duplicating
 std library code into mfbt, we may as well just pull in the headers and
 forget about it. A caveat would be if it makes a significant difference on
 slower systems.

 Given that improving what gets included via headers can make significant
 difference to build time, this makes me wonder exactly what aspect of
 header inclusion (if not size, which we should catch here) makes the
 difference.

 Nick.

 On Sunday, September 8, 2013 3:22:01 PM UTC+12, Benoit Jacob wrote:

  Hi,

  It seems that we have some much-included header files including <algorithm>
  just to get std::min and std::max.

  That seems like an extreme case of a low ratio between lines of code included
  (9,290 on my system, see Appendix below) and lines of code actually used
  (say 6 with whitespace).

  I ran into this issue while trying to minimize nsCoord.h (
  https://bugzilla.mozilla.org/show_bug.cgi?id=913868 ) and in my patch, I
  resorted to defining my own min/max functions in a nsCoords_details
  namespace.

  This prompted comments on that bug suggesting that it might be better to
  have that in MFBT. But that, in turn, sounds like overturning our recent
  decision to switch to std::min / std::max, which I feel is material for
  this mailing list.

  It is also conceivable to keep saying that we should use std::min /
  std::max *except* in headers that don't otherwise include <algorithm>,
  where it may be more reasonable to use the cheap-to-#include variant
  instead.

  What do you think?

  Benoit

  === Appendix: how big and long to compile is <algorithm>? ===

  On my Ubuntu 12.04 64bit system, with GCC 4.6.3, including <algorithm>
  means recursively including 9,290 lines of code:

  $ echo '#include <algorithm>' > a.cpp && g++ -save-temps -c a.cpp && wc -l a.ii
  9290 a.ii

  One may wonder what this implies in terms of compilation times; here is a
  naive answer. I'm timing 10 successive compilations of a file that just
  includes <iostream>, and then I do the same with a file that also includes
  <algorithm>.

  $ echo '#include <iostream>' > a.cpp && time (g++ -c a.cpp && g++ -c a.cpp
  && g++ -c a.cpp && g++ -c a.cpp && g++ -c a.cpp && g++ -c a.cpp && g++ -c
  a.cpp && g++ -c a.cpp && g++ -c a.cpp && g++ -c a.cpp)

  real    0m1.391s
  user    0m1.108s
  sys     0m0.212s

  $ echo '#include <algorithm>' > a.cpp && echo '#include <iostream>' >> a.cpp &&
  time (g++ -c a.cpp && g++ -c a.cpp && g++ -c a.cpp && g++ -c a.cpp && g++
  -c a.cpp && g++ -c a.cpp && g++ -c a.cpp && g++ -c a.cpp && g++ -c a.cpp &&
  g++ -c a.cpp)

  real    0m1.617s
  user    0m1.324s
  sys

Re: Including <algorithm> just to get std::min and std::max

2013-09-08 Thread Mike Hommey
On Sun, Sep 08, 2013 at 08:52:23PM -0400, Benoit Jacob wrote:
 We have many other headers including <algorithm>; it would be interesting
 to compare the percentage of our cpp files that recursively include
 <algorithm> before and after that patch; I suppose that just a single patch
 like that is not enough to move that needle much, because there are other
 ways that <algorithm> gets included in the same cpp files.

 I do expect, though, that the 23 ms overhead from including <algorithm> is
 real (at least as an order of magnitude), so I still expect that we can
 save 23 ms times the number of cpp files that currently include <algorithm>
 and could avoid to.

23ms times 6000 sources is about 2 minutes and 20 seconds, if you don't
account for parallelism. If you count 6 processes compiling at the same
time on average, that's about 23s on a clobber build.
And according to the .o.pp files in my recently built fennec, we include
<algorithm> in less than 3000 files. So we'd be looking at about 10s of
overhead including <algorithm> on a clobber build. On a 20-something
minutes build.
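Spelling out the arithmetic: 6000 x 0.023s ≈ 138s of extra CPU time, which
with 6 compilers running on average is 138 / 6 ≈ 23s of wall-clock time; with
only ~3000 files actually pulling in <algorithm>, that becomes roughly
3000 x 0.023 / 6 ≈ 11s, i.e. on the order of 10s.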
I'd say there's not much to worry about here.

Mike


Re: Including <algorithm> just to get std::min and std::max

2013-09-08 Thread Benoit Jacob
Again, how many other similar wins are we leaving on the table because
they're only 10s on a clobber build? It's of course hard to know, which is
why I've suggested the (number of useful lines of code) / (total lines of
code included) ratio as a meaningful metric.

But I'm completely OK with focusing on the bigger wins in the short term
and only reopening this conversation once we're done with the big items.

Benoit


2013/9/8 Mike Hommey <m...@glandium.org>

 On Sun, Sep 08, 2013 at 08:52:23PM -0400, Benoit Jacob wrote:
  We have many other headers including <algorithm>; it would be interesting
  to compare the percentage of our cpp files that recursively include
  <algorithm> before and after that patch; I suppose that just a single patch
  like that is not enough to move that needle much, because there are other
  ways that <algorithm> gets included in the same cpp files.

  I do expect, though, that the 23 ms overhead from including <algorithm> is
  real (at least as an order of magnitude), so I still expect that we can
  save 23 ms times the number of cpp files that currently include <algorithm>
  and could avoid to.

 23ms times 6000 sources is about 2 minutes and 20 seconds, if you don't
 account for parallelism. If you count 6 processes compiling at the same
 time on average, that's about 23s on a clobber build.
 And according to the .o.pp files in my recently built fennec, we include
 <algorithm> in less than 3000 files. So we'd be looking at about 10s of
 overhead including <algorithm> on a clobber build. On a 20-something
 minutes build.
 I'd say there's not much to worry about here.

 Mike



Re: Including <algorithm> just to get std::min and std::max

2013-09-08 Thread Mike Hommey
On Mon, Sep 09, 2013 at 10:12:35AM +0900, Mike Hommey wrote:
 On Sun, Sep 08, 2013 at 08:52:23PM -0400, Benoit Jacob wrote:
  We have many other headers including <algorithm>; it would be interesting
  to compare the percentage of our cpp files that recursively include
  <algorithm> before and after that patch; I suppose that just a single patch
  like that is not enough to move that needle much, because there are other
  ways that <algorithm> gets included in the same cpp files.

  I do expect, though, that the 23 ms overhead from including <algorithm> is
  real (at least as an order of magnitude), so I still expect that we can
  save 23 ms times the number of cpp files that currently include <algorithm>
  and could avoid to.
 
 23ms times 6000 sources is about 2 minutes and 20 seconds, if you don't
 account for parallelism. If you count 6 processes compiling at the same
 time on average, that's about 23s on a clobber build.
 And according to the .o.pp files in my recently built fennec, we include
 <algorithm> in less than 3000 files. So we'd be looking at about 10s of
 overhead including <algorithm> on a clobber build. On a 20-something
 minutes build.
 I'd say there's not much to worry about here.

FWIW the average build time for a source file, as just gotten from a
clobber x86-64 linux build, is < 1.5s.

Mike


Re: Including <algorithm> just to get std::min and std::max

2013-09-08 Thread Boris Zbarsky

On 9/8/13 7:29 PM, Nicholas Cameron wrote:

I timed builds to see if this makes a significant difference and it did not.


The other thing that reducing .i size helps with is Windows PGO memory usage.
See the graph at
http://graphs.mozilla.org/graph.html#tests=[[205,63,8]]&sel=none&displayrange=90&datatype=running
showing the impact of the header reductions recently.

The tradeoff may still not be worth it, of course.

-Boris


JavaScript Style Guide. Emacs mode line.

2013-09-08 Thread ishikawa
I have recently been editing JavaScript files to
reduce warnings, but ran into the question of which style is adopted in
comm-central Thunderbird code.

I checked for the preferred style:

[1] I found one reference here:
http://autonome.wordpress.com/2006/03/24/javascript-style-guide-for-mozilla-projects/

[2] I found another which looks official.
https://developer.mozilla.org/en-US/docs/Developer_Guide/Coding_Style

It is not entirely clear which indentation level is preferred:
 - two spaces, as in [1], or
 - four spaces, as in [2] (the Java style section of that official-looking
   page states that it intentionally deviates from the Java style guide).

It seems that existing JavaScript files were created with different style
ideas. We need to remedy this when a file is revisited for modification.
But trying to stick to some ideal style without any guideline or tool
support is difficult.

Concrete tool requirement example:
I am using Emacs for editing.
Does anyone have a good Emacs mode line that I can embed at the beginning
of JavaScript source files to help coders who write new code or modify
existing code stick to the suggested style guideline?

These days, Emacs defaults to the so-called JavaScript mode for JavaScript
editing, and it is now rather difficult with its default settings to
uniformly apply 2-space indentation throughout.

I recall there was a mode line in
  ... /mozilla/toolkit/mozapps/downloads/nsHelperAppDlg.js

quoted below:
/* -*- Mode: javascript; tab-width: 8; indent-tabs-mode: nil;
c-basic-offset: 2 -*- */
/* vim: set ts=2 et sw=2 tw=80: */

I found out I can get 2-space indentation for a statement that follows {
by adding js-curly-indent-offset: -2. However, if the previous line does
not end with {, I get 4-space indentation:

function f () {
  if (a) {
sss
  }
  if (b)
  xxx;
}

I can ask the gnu-emacs-help mailing list for help, but without an exact
specification of the preferred style, I will not be able to obtain a
satisfactory answer.

So my question boils down to:
 - What is the preferred style for JavaScript in mozilla source code now?
 - Does anyone have a mode-line (or .emacs) setting that makes the
   indentation in Emacs follow the preferred style?

TIA


Re: Creating mock nsIPrintingPromptService

2013-09-08 Thread Gavin Sharp
Here are a few examples of mocked components:

http://mxr.mozilla.org/mozilla-central/source/testing/specialpowers/content/MockPermissionPrompt.jsm?force=1
mocks nsIContentPermissionPrompt

http://mxr.mozilla.org/mozilla-central/source/dom/tests/mochitest/bugs/test_bug61098.html?raw=1
mocks nsIPrompt and nsIPromptService/nsIPromptFactory.

Gavin

On Sun, Sep 8, 2013 at 8:21 AM, Sunil Agrawal <su...@armor5.com> wrote:
 (Apologies if this is not the right forum, in which case please direct me to 
 one).

 I am looking at creating a mock nsIPrintingPromptService (so that I can bring 
 up my own Print dialog) for my testing purpose, preferably in Javascript.

 I looked around Mozilla code base but couldn't find a starting point. Is 
 there an existing test that could be doing something similar that I can use?

 Thanks in advance,
 Sunil


Re: JavaScript Style Guide. Emacs mode line.

2013-09-08 Thread Gavin Sharp
On Mon, Sep 9, 2013 at 10:15 AM, ishikawa <ishik...@yk.rim.or.jp> wrote:
 So my question boils down to
  - what is the preferred style for JavaScript now for mozilla source code?

There isn't one that applies across all of Mozilla, and I think that's
not a problem.

(https://developer.mozilla.org/en-US/docs/User:GavinSharp_JS_Style_Guidelines
is my personal style, and I think  the one that is most commonly used
for Firefox code. But even amongst Firefox front-end developers there
are minor variations, and as you've noted there is legacy code that
does not follow current style.)

Gavin


Re: JavaScript Style Guide. Emacs mode line.

2013-09-08 Thread Karl Tomlinson
ishikawa <ishik...@yk.rim.or.jp> writes:

  - Has anyone have mode-line (or .emacs) setting to make the indentation in
 Emacs to follow the prefered style?

I've got by so far with M-x set-variable js-indent-level 2 when
necessary, but this doesn't automatically become buffer-local, so
I find myself manually changing back to 4 for other files.

A mode line would be helpful and perhaps js-indent-level is the
right variable there.  I don't know if there is an agreed style.


Re: JavaScript Style Guide. Emacs mode line.

2013-09-08 Thread ishikawa
On (2013-09-09 12:45), Karl Tomlinson wrote:
 ishikawa <ishik...@yk.rim.or.jp> writes:
 
   - Has anyone have mode-line (or .emacs) setting to make the indentation in
 Emacs to follow the prefered style?
 
 I've got by so-far with M-x set-variable js-indent-level 2 when
 necessary, but this doesn't automatically become buffer-local, so
 I find myself manually changing back to 4 for other files.
 
 A mode line would be helpful and perhaps js-indent-level is the
 right variable there.  I don't know if there is an agreed style.
 

Thank you for the information about js-indent-level.
(Funny, I thought I had tinkered with this variable, but maybe some other
setting of mine interfered with it. Emacs, being made up of emacs-lisp
functions, variables, etc., can indeed behave in an unexpected manner
sometimes.)

I will try to create a mode line with an explicit setting of js-indent-level
and such, so that we can tweak it to fit the various indentation styles of
legacy files.
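For example, building on the nsHelperAppDlg.js lines quoted earlier,
something like this might work (assuming Emacs honours js-indent-level as a
file-local variable in the mode line):

  /* -*- Mode: javascript; tab-width: 8; indent-tabs-mode: nil; js-indent-level: 2; c-basic-offset: 2 -*- */
  /* vim: set ts=2 et sw=2 tw=80: */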

Thank you again.



Re: JavaScript Style Guide. Emacs mode line.

2013-09-08 Thread ishikawa
On (2013-09-09 12:44), Gavin Sharp wrote:
 On Mon, Sep 9, 2013 at 10:15 AM, ishikawa <ishik...@yk.rim.or.jp> wrote:
 So my question boils down to
   - what is the preferred style for JavaScript now for mozilla source code?
 
 There isn't one that applies across all of Mozilla, and I think that's
 not a problem.
 
 (https://developer.mozilla.org/en-US/docs/User:GavinSharp_JS_Style_Guidelines
 is my personal style, and I think  the one that is most commonly used
 for Firefox code. But even amongst Firefox front-end developers there
 are minor variations, and as you've noted there is legacy code that
 does not follow current style.)
 
 Gavin
 

Thank you for the pointer.

Your guidelines page is more comprehensive and detailed and so I will try to
follow it.

I wonder if trying to re-indent legacy code according to the current style
is a good idea or not. (It is a little irritating to modify a part of legacy
code and find my editor trying to follow the current style while the rest of
the file is in a different style.)

TIA
