Re: PSA: RIP MOZ_ASSUME_UNREACHABLE

2014-09-22 Thread Benoit Jacob
Great work, Chris! Thanks for linking to the study; that link gives me a
400 error, though, since raw GitHub links are tricky:

2014-09-22 4:06 GMT-04:00 Chris Peterson :

> [1] https://raw.githubusercontent.com/bjacob/builtin-unreachable-study
>

Repo link: https://github.com/bjacob/builtin-unreachable-study
Notes file:
https://raw.githubusercontent.com/bjacob/builtin-unreachable-study/master/notes

Benoit




Re: Getting rid of already_AddRefed?

2014-08-12 Thread Benoit Jacob
As far as I know, the only downside of replacing already_AddRefed with
nsCOMPtr would be incurring more useless calls to AddRef and Release. In
the case of "threadsafe", i.e. atomic, refcounting, these use atomic
instructions, which may be expensive enough on certain ARM CPUs to matter.
So if you're interested, you could take a low-end ARM CPU that we care
about and see whether replacing already_AddRefed with nsCOMPtr causes any
measurable performance regression.
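
To make that concrete, here is a minimal, self-contained sketch (standard
C++ with a toy refcounted class and a toy smart pointer; this is not actual
Gecko code and all names are illustrative). With a move constructor on the
smart pointer, returning it by value transfers ownership without any extra
AddRef/Release pair, which is the cost being discussed:

  #include <atomic>
  #include <cstdio>

  struct Node {
    std::atomic<int> mRefCnt{0};
    void AddRef() {
      mRefCnt.fetch_add(1, std::memory_order_relaxed);
      std::puts("AddRef");
    }
    void Release() {
      if (mRefCnt.fetch_sub(1, std::memory_order_acq_rel) == 1) {
        std::puts("Release (delete)");
        delete this;
      } else {
        std::puts("Release");
      }
    }
  };

  template <typename T>
  class RefPtr {  // stand-in for nsCOMPtr/nsRefPtr
    T* mPtr;
  public:
    explicit RefPtr(T* aPtr) : mPtr(aPtr) { if (mPtr) mPtr->AddRef(); }
    RefPtr(RefPtr&& aOther) : mPtr(aOther.mPtr) { aOther.mPtr = nullptr; }
    ~RefPtr() { if (mPtr) mPtr->Release(); }
    RefPtr(const RefPtr&) = delete;
    RefPtr& operator=(const RefPtr&) = delete;
  };

  RefPtr<Node> NewNode() { return RefPtr<Node>(new Node); }  // one AddRef

  int main() {
    RefPtr<Node> node = NewNode();  // moved or elided: no refcount traffic
    return 0;
  }  // exactly one Release here

Running this prints a single AddRef/Release pair, so the question is only
about call sites that do incur extra pairs today.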

Benoit


2014-08-12 10:59 GMT-04:00 Benjamin Smedberg :

> Just reading bug 1052477, and I'm wondering what our intentions are for
> already_AddRefed.
>
> In that bug it's proposed to change the return type of NS_NewAtom from
> already_AddRefed to nsCOMPtr. I don't think that actually saves any
> addref/release pairs if done properly, since you'd typically .forget() into
> the return value anyway. But it does make it slightly safer at callsites,
> because the compiler will guarantee that the return value is always
> released instead of us relying on every already_AddRefed being saved into a
> nsCOMPtr.
>
> But now that nsCOMPtr/nsRefPtr support proper move constructors, is there
> any reason for already_AddRefed to exist at all in our codebase? Could we
> replace every already_AddRefed return value with a nsCOMPtr?
>
> --BDS


Re: Building with a RAM disk

2014-07-18 Thread Benoit Jacob
I have a plain old mechanical disk too, but I have 32 G of RAM, enough that
disk access is not relevant to my (unified) build times. After a build, I
have 12 to 14 G of RAM used for cache, so I suppose that disk performance
is still relevant if you have <= 16 G of RAM, yeah.

Benoit


2014-07-19 1:40 GMT-04:00 Geoff Lankow :

> Ah yes, I forgot to say. I am on Linux. I've found using RAM instead of my
> (mechanical) disk saves about 5 minutes of a roughly half-hour build.
>
> GL
>
>
> On 19/07/14 16:24, Benoit Jacob wrote:
>
>> What OS are we talking about?
>>
>> (On Linux, ramdisks are mountpoints like any other, so that would be
>> trivial; but then again, on Linux, the kernel is good enough at using
>> extra RAM for disk cache that you get the benefits of a RAM disk
>> automatically.)
>>
>> Benoit
>>
>>
>> 2014-07-18 22:39 GMT-04:00 Geoff Lankow :
>>
>>  Today I tried to build Firefox on a RAM disk for the first time, and
>>> although I succeeded through trial and error, it occurs to me that there
>>> are probably things I could do better. Could someone who regularly does
>>> this make a blog post or an MDN page about their workflow and some tips
>>> and
>>> tricks? I think it'd be useful to many people but I (read: Google)
>>> couldn't
>>> find anything helpful.
>>>
>>> Thanks!
>>> GL


Re: Building with a RAM disk

2014-07-18 Thread Benoit Jacob
What OS are we talking about?

(On Linux, ramdisks are mountpoints like any other, so that would be
trivial; but then again, on Linux, the kernel is good enough at using extra
RAM for disk cache that you get the benefits of a RAM disk automatically.)

Benoit


2014-07-18 22:39 GMT-04:00 Geoff Lankow :

> Today I tried to build Firefox on a RAM disk for the first time, and
> although I succeeded through trial and error, it occurs to me that there
> are probably things I could do better. Could someone who regularly does
> this make a blog post or an MDN page about their workflow and some tips and
> tricks? I think it'd be useful to many people but I (read: Google) couldn't
> find anything helpful.
>
> Thanks!
> GL


Re: PSA: DebugOnly<> fields aren't zero-sized in non-DEBUG builds

2014-07-16 Thread Benoit Jacob
That sounds like a good idea, if possible.


2014-07-16 14:41 GMT-04:00 Ehsan Akhgari :

> Should we make DebugOnly MOZ_STACK_CLASS?
>
>
> On 2014-07-15, 9:21 PM, Nicholas Nethercote wrote:
>
>> Hi,
>>
>> The comment at the top of mfbt/DebugOnly.h includes this text:
>>
>>   * Note that DebugOnly instances still take up one byte of space, plus
>> padding,
>>   * when used as members of structs.
>>
>> I'm in the process of making js::HashTable (a very common class)
>> smaller by converting some DebugOnly fields to instead be guarded by
>> |#ifdef DEBUG| (bug 1038601).
>>
>> Below is a list of remaining DebugOnly members that I found using
>> grep. People who are familiar with them should inspect them to see if
>> they belong to classes that are commonly instantiated, and thus if
>> some space savings could be made.
>>
>> Thanks.
>>
>> Nick
>>
>>
>> uriloader/exthandler/ExternalHelperAppParent.h:  DebugOnly
>> mDiverted;
>> layout/style/CSSVariableResolver.h:  DebugOnly mResolved;
>> layout/base/DisplayListClipState.h:  DebugOnly mClipUsed;
>> layout/base/DisplayListClipState.h:  DebugOnly mRestored;
>> layout/base/DisplayListClipState.h:  DebugOnly mExtraClipUsed;
>> gfx/layers/Layers.h:  DebugOnly mDebugColorIndex;
>> ipc/glue/FileDescriptor.h:  mutable DebugOnly
>> mHandleCreatedByOtherProcessWasUsed;
>> ipc/glue/MessageChannel.cpp:DebugOnly mMoved;
>> ipc/glue/BackgroundImpl.cpp:  DebugOnly mActorDestroyed;
>> content/media/MediaDecoderStateMachine.h:  DebugOnly
>> mInRunningStateMachine;
>> dom/indexedDB/ipc/IndexedDBParent.h:  DebugOnly
>> mRequestType;
>> dom/indexedDB/ipc/IndexedDBParent.h:  DebugOnly
>> mRequestType;
>> dom/indexedDB/ipc/IndexedDBParent.h:  DebugOnly
>> mRequestType;
>> dom/indexedDB/ipc/IndexedDBChild.h:  DebugOnly mRequestType;
>> dom/indexedDB/ipc/IndexedDBChild.h:  DebugOnly mRequestType;
>> dom/indexedDB/ipc/IndexedDBChild.h:  DebugOnly mRequestType;


Re: PSA: DebugOnly<> fields aren't zero-sized in non-DEBUG builds

2014-07-15 Thread Benoit Jacob
It may be worth reminding people that this is not specific to DebugOnly but
applies to all C++ classes: in C++, there is no such thing as a class with
size 0. So expecting DebugOnly to be of size 0 is not misunderstanding
DebugOnly, it is misunderstanding C++. The only way to have empty classes
behave as if they had size 0 is to inherit from them instead of having them
as the types of members. That's called the Empty Base Class Optimization.
http://en.wikibooks.org/wiki/More_C%2B%2B_Idioms/Empty_Base_Optimization
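
To make this concrete, here is a small standalone demonstration (standard
C++; the empty struct stands in for what a DebugOnly member compiles down
to in a non-DEBUG build):

  #include <cstdio>

  struct Empty {};  // stand-in for DebugOnly<T> in a non-DEBUG build

  struct WithMember {
    Empty e;   // occupies at least 1 byte, plus padding next to the pointer
    void* p;
  };

  struct WithBase : Empty {  // Empty Base Class Optimization applies here
    void* p;
  };

  int main() {
    std::printf("sizeof(Empty)      = %zu\n", sizeof(Empty));       // 1
    std::printf("sizeof(WithMember) = %zu\n", sizeof(WithMember));  // typically 16 on 64-bit
    std::printf("sizeof(WithBase)   = %zu\n", sizeof(WithBase));    // typically 8 on 64-bit
    return 0;
  }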

Since DebugOnly incurs a size overhead in non-debug builds, maybe we
should officially consider it bad practice to have any DebugOnly class
members. Having to guard them in #ifdef DEBUG takes away much of the point
of DebugOnly, doesn't it?

Benoit


2014-07-15 21:21 GMT-04:00 Nicholas Nethercote :

> Hi,
>
> The comment at the top of mfbt/DebugOnly.h includes this text:
>
>  * Note that DebugOnly instances still take up one byte of space, plus
> padding,
>  * when used as members of structs.
>
> I'm in the process of making js::HashTable (a very common class)
> smaller by converting some DebugOnly fields to instead be guarded by
> |#ifdef DEBUG| (bug 1038601).
>
> Below is a list of remaining DebugOnly members that I found using
> grep. People who are familiar with them should inspect them to see if
> they belong to classes that are commonly instantiated, and thus if
> some space savings could be made.
>
> Thanks.
>
> Nick
>
>
> uriloader/exthandler/ExternalHelperAppParent.h:  DebugOnly mDiverted;
> layout/style/CSSVariableResolver.h:  DebugOnly mResolved;
> layout/base/DisplayListClipState.h:  DebugOnly mClipUsed;
> layout/base/DisplayListClipState.h:  DebugOnly mRestored;
> layout/base/DisplayListClipState.h:  DebugOnly mExtraClipUsed;
> gfx/layers/Layers.h:  DebugOnly mDebugColorIndex;
> ipc/glue/FileDescriptor.h:  mutable DebugOnly
> mHandleCreatedByOtherProcessWasUsed;
> ipc/glue/MessageChannel.cpp:DebugOnly mMoved;
> ipc/glue/BackgroundImpl.cpp:  DebugOnly mActorDestroyed;
> content/media/MediaDecoderStateMachine.h:  DebugOnly
> mInRunningStateMachine;
> dom/indexedDB/ipc/IndexedDBParent.h:  DebugOnly mRequestType;
> dom/indexedDB/ipc/IndexedDBParent.h:  DebugOnly mRequestType;
> dom/indexedDB/ipc/IndexedDBParent.h:  DebugOnly mRequestType;
> dom/indexedDB/ipc/IndexedDBChild.h:  DebugOnly mRequestType;
> dom/indexedDB/ipc/IndexedDBChild.h:  DebugOnly mRequestType;
> dom/indexedDB/ipc/IndexedDBChild.h:  DebugOnly mRequestType;


Re: Firefox heap-textures usage

2014-07-03 Thread Benoit Jacob
Please file a bug on bugzilla, product: Core, component: Graphics, and CC
bas.schouten. This about:memory report says you have 4G of textures; that
seems too much; and the fact that 'explicit' is above 4G suggests that that
is for real and not just a bug in this counter-based memory reporter.

Benoit


2014-07-03 15:13 GMT-04:00 Wesley Hardman :

> I was wondering why I was running low on memory, then noticed Firefox.
>
> Heap textures seems rather large (can't drill down any).  I don't have
> that many tabs open (Window1 3 + Window2 2) for a total of 5.  CC/GC didn't
> help any.  I also closed the only tab that might be heavy on the graphics.
>  I did have the dev tools open for a while with Log Request and Response
> Bodies turned on.
>
> Any ideas?
>
> Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:30.0) Gecko/20100101
> Firefox/30.0
>
> 4,783.33 MB (100.0%) -- explicit
> ├──4,073.86 MB (85.17%) -- gfx
> │  ├──4,073.18 MB (85.15%) ── heap-textures
> │  └──────0.68 MB (00.01%) ++ (5 tiny)
> ├────185.48 MB (03.88%) -- js-non-window
> │    ├──145.96 MB (03.05%) -- zones
> │    │  ├──102.89 MB (02.15%) ++ zone(0x76cb800)
> │    │  └───43.07 MB (00.90%) ++ (31 tiny)
> │    └───39.52 MB (00.83%) ++ (2 tiny)
> ├────182.37 MB (03.81%) -- window-objects
> │    ├───67.81 MB (01.42%) -- top(none)/detached
> │    │   ├──63.47 MB (01.33%) ++ window(chrome://browser/content/browser.xul)
> │    │   └────4.34 MB (00.09%) ++ (5 tiny)
> │    ├───60.18 MB (01.26%) ++ (10 tiny)
> │    └───54.39 MB (01.14%) ++ top([URL], id=1954)
> ├────134.19 MB (02.81%) -- heap-overhead
> │    ├──131.69 MB (02.75%) ── waste
> │    └────2.50 MB (00.05%) ++ (2 tiny)
> ├────114.44 MB (02.39%) ++ (21 tiny)
> └─────92.99 MB (01.94%) ── heap-unclassified
>
>
> 0.22 MB ── canvas-2d-pixels
> 0.00 MB ── gfx-d2d-surface-cache
> 4.00 MB ── gfx-d2d-surface-vram
>   208.90 MB ── gfx-d2d-vram-draw-target
> 5.34 MB ── gfx-d2d-vram-source-surface
> 1.38 MB ── gfx-surface-win32
> 0.00 MB ── gfx-textures
>   0 ── ghost-windows
> 4,394.14 MB ── heap-allocated
> 4,528.33 MB ── heap-committed
>   3.05% ── heap-overhead-ratio
>   0 ── host-object-urls
> 0.00 MB ── imagelib-surface-cache
>24.58 MB ── js-main-runtime-temporary-peak
> 5,082.21 MB ── private
> 5,221.40 MB ── resident
> 6,651.32 MB ── vsize
> 8,376,692.50 MB ── vsize-max-contiguous


Re: What are the most important new APIs to document in Q3/Q4?

2014-06-26 Thread Benoit Jacob
2014-06-26 9:09 GMT-04:00 Eric Shepherd :

> Hi! The docs team is trying to build our schedule for the next quarter or
> two, and part of that is deciding which APIs to spend lots of our time
> writing about. I'd like to know what y'all think the most important APIs
> are for docs attention in the next few months.
>
> Here are a few possibilities we've heard of. I'd like your opinions on
> which of these are the most important -- for Mozilla, the open Web, and of
> course for Firefox OS. PLEASE feel free to suggest others. I'm sure there
> are APIs we don't know about at all, or aren't on this list.
>
> DO NOT ASSUME WE KNOW YOUR API EXISTS. Not even if it should be obvious.
> Especially not then. :)
>
> * WebRTC
> * WebGL (our current docs are very weak and out of date)
>

Not expressing any opinion on whether WebGL should be prioritized, but
recently I had to teach some WebGL and, not being fully satisfied with
existing tutorials, I put together a code-only "tutorial" consisting of 12
increasingly involved WebGL examples:

http://bjacob.github.io/webgl-tutorial/

and I would be happy to work a bit with someone to turn it into proper
documentation.

Benoit



> * Service Workers
> * Shared Workers
> * Web Activities
> * ??
>
> What are your top five APIs that you think need documentation attention?
> For the purposes of this discussion, consider any that you know aren't
> already documented (you don't have to search MDN -- if there happen to be
> any you're annoyed by lack of/sucky docs, list 'em). Also consider any that
> will land in Q3 or Q4.
>
> We will collate the input we get to build our plan for the next quarter
> and to start a rough sketch for Q4!
>
> Thanks in advance!
>
> --
> Eric Shepherd
> Developer Documentation Lead
> Mozilla
> Blog: http://www.bitstampede.com/
> Twitter: @sheppy
>


Re: PSA: Refcounted classes should have a non-public destructor & should be MOZ_FINAL where possible

2014-06-20 Thread Benoit Jacob
Here's an update on this front.

In Bug 1027251 we added a static_assert as discussed in this thread, which
discovered all remaining instances, and we fixed the easy ones, which were
the majority.

The harder ones have been temporarily whitelisted. See
HasDangerousPublicDestructor.
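
For the curious, here is a rough sketch (standard C++, much simplified
compared to what nsISupportsImpl.h actually does) of the kind of check
involved: std::is_destructible is only true when the destructor is
accessible, so refcounting code can assert that it is not.

  #include <type_traits>

  template <typename T>
  void AssertNoDangerousPublicDestructor() {
    static_assert(!std::is_destructible<T>::value,
                  "Refcounted classes should have a non-public destructor");
  }

  class Good {
  public:
    void AddRef();
    void Release();
  private:
    ~Good() {}  // non-public: 'Good foo;' on the stack is a compile error
  };

  class Bad {
  public:
    void AddRef();
    void Release();
    // Implicit public destructor: instantiating
    // AssertNoDangerousPublicDestructor<Bad>() fails to compile.
  };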

There are 11 such classes. Follow-up bugs have been filed for each of them,
blocking the tracking Bug 1028132.

Help is very welcome to fix these 11 classes! I won't have more time to
work on this for now.

The trickiest one is probably going to be mozilla::ipc::SharedMemory, which
is refcounted but to which IPDL-generated code takes nsAutoPtr's... so if
you have data that you care about, don't put it in a SharedMemory, for
now... the bug for this one is Bug 1028148.

This is only about nsISupportsImpl.h refcounting. We considered doing the
same for MFBT RefCounted (Bug 1028122) but we can't, because as C++ base
classes have no access to protected derived-class members, RefCounted
inherently forces making destructors public (unless we add friend
declarations everywhere), which is also the reason why we had concluded
earlier that RefCounted is a bad idea.

We haven't started checking final-ness yet. It's an open question AFAIK how
we would enforce that, as there are legitimate and widespread uses for
non-final refcounting. We would probably have to offer separate _NONFINAL
refcounting macros, or something like that.

Thanks,
Benoit




2014-05-28 16:24 GMT-04:00 Daniel Holbert :

> Hi dev-platform,
>
> PSA: if you are adding a concrete class with AddRef/Release
> implementations (e.g. via NS_INLINE_DECL_REFCOUNTING), please be aware
> of the following best-practices:
>
>  (a) Your class should have an explicitly-declared non-public
> destructor. (should be 'private' or 'protected')
>
>  (b) Your class should be labeled as MOZ_FINAL (or, see below).
>
>
> WHY THIS IS A GOOD IDEA
> ===
> We'd like to ensure that refcounted objects are *only* deleted via their
> ::Release() methods.  Otherwise, we're potentially susceptible to
> double-free bugs.
>
> We can go a long way towards enforcing this rule at compile-time by
> giving these classes non-public destructors.  This prevents a whole
> category of double-free bugs.
>
> In particular: if your class has a public destructor (the default), then
> it's easy for you or someone else to accidentally declare an instance on
> the stack or as a member-variable in another class, like so:
> MyClass foo;
> This is *extremely* dangerous. If any code wraps 'foo' in a nsRefPtr
> (say, if some function that we pass 'foo' or '&foo' into declares a
> nsRefPtr to it for some reason), then we'll get a double-free. The
> object will be freed when the nsRefPtr goes out of scope, and then again
> when the MyClass instance goes out of scope. But if we give MyClass a
> non-public destructor, then it'll make it a compile error (in most code)
> to declare a MyClass instance on the stack or as a member-variable.  So
> we'd catch this bug immediately, at compile-time.
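
(To make the scenario above concrete, here is a self-contained sketch in
standard C++, with toy refcounting standing in for nsISupportsImpl.h; all
names are illustrative:)

  struct MyClass {
    int mRefCnt = 0;
    void AddRef() { ++mRefCnt; }
    void Release() { if (--mRefCnt == 0) delete this; }  // assumes heap allocation!
    // The destructor is public by default -- that is the bug being described.
  };

  template <typename T>
  struct RefPtr {  // stand-in for nsRefPtr
    T* mPtr;
    explicit RefPtr(T* aPtr) : mPtr(aPtr) { mPtr->AddRef(); }
    ~RefPtr() { mPtr->Release(); }
  };

  void SomeFunction(MyClass* aObj) {
    RefPtr<MyClass> holder(aObj);  // wraps the raw pointer "for some reason"
  }  // holder calls Release(), which calls 'delete' on a stack object

  int main() {
    MyClass foo;         // compiles, because the destructor is public
    SomeFunction(&foo);  // frees 'foo' once, incorrectly
    return 0;
  }                      // foo's destructor runs again here: double free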
>
> So, that explains why a non-public destructor is a good idea. But why
> MOZ_FINAL?  If your class isn't MOZ_FINAL, then that opens up another
> route to trigger the same sort of bug -- someone can come along and add
> a subclass, perhaps not realizing that they're subclassing a refcounted
> class, and the subclass will (by default) have a public destructor,
> which means then that anyone can declare
>   MySubclass foo;
> and run into the exact same problem with the subclass.  A MOZ_FINAL
> annotation will prevent that by keeping people from naively adding
> subclasses.
>
> BUT WHAT IF I NEED SUBCLASSES
> =
> First, if your class is abstract, then it shouldn't have AddRef/Release
> implementations to begin with.  Those belong on the concrete subclasses
> -- not on your abstract base class.
>
> But if your class is concrete and refcounted and needs to have
> subclasses, then:
>  - Your base class *and each of its subclasses* should have virtual,
> protected destructors, to prevent the "MySubclass foo;" problem
> mentioned above.
>  - Your subclasses themselves should also probably be declared as
> "MOZ_FINAL", to keep someone from naively adding another subclass
> without recognizing the above.
>  - Your subclasses should definitely *not* declare their own
> AddRef/Release methods. (They should share the base class's methods &
> refcount.)
>
> For more information, see
> https://bugzilla.mozilla.org/show_bug.cgi?id=984786 , where I've fixed
> this sort of thing in a bunch of existing classes.  I definitely didn't
> catch everything there, so please feel encouraged to continue this work
> in other bugs. (And if you catch any cases that look like potential
> double-frees, mark them as security-

Re: C++ standards proposals of potential interest, and upcoming committee meeting

2014-06-09 Thread Benoit Jacob
2014-06-09 16:27 GMT-04:00 Jet Villegas :

> It seems healthy for the core C++ language to explore new territory here.
> Modern primitives for things like pixels and colors would be a good thing,
> I think. Let the compiler vendors compete to boil it down to the CPU/GPU.


In the Web world, we have such an API, Canvas 2D, and the "compiler
vendors" are the browser vendors. After years of intense competition
between browser vendors, at a very high cost to all of them, nobody has yet
figured out how to make Canvas2D efficiently utilize GPUs. There are
basically two kinds of Canvas2D applications: those for which GPUs have
been useless so far, and those which have benefited much more from getting
ported to WebGL than they did from accelerated Canvas 2D.

Benoit





> There will always be the argument for keeping such things out of Systems
> languages, but that school of thought won't use those features anyway. I
> was taught to not divide by 2 because bit-shifting is how you do fast
> graphics in C/C++. I sure hope the compilers have caught up and such
> trickery is no longer required--Graphics shouldn't be such a black art.
>
> --Jet
>


Re: C++ standards proposals of potential interest, and upcoming committee meeting

2014-06-09 Thread Benoit Jacob
2014-06-09 16:12 GMT-04:00 Benoit Jacob :

>
>
>
> 2014-06-09 15:56 GMT-04:00 Botond Ballo :
>
> - Original Message -
>> > From: "Benoit Jacob" 
>> > To: "Botond Ballo" 
>> > Cc: "dev-platform" 
>> > Sent: Monday, June 9, 2014 3:45:20 PM
>> > Subject: Re: C++ standards proposals of potential interest, and
>> upcoming committee meeting
>> >
>> > 2014-06-09 15:31 GMT-04:00 Botond Ballo :
>> >
>> > > Cairo-based 2D drawing API (latest revision):
>> > >   http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n4021.pdf
>> > >
>> >
>> > I would like the C++ committee's attention to be drawn to the dangers,
>> > for a committee, of trying to make decisions outside its domain of
>> > expertise. I see more potential for harm than for good in having the
>> > C++ committee join the ranks of non-graphics specialists thinking they
>> > know how to do graphics...
>>
>> Does this caution apply even if the explicit goal of this API is to allow
>> people learning C++ and/or creating simple graphical applications to be
>> able to do so with minimal overhead (setting up third-party libraries and
>> such), rather than necessarily provide a tool for expert-level/heavy-duty
>> graphics work?
>>
>
> That would ease my concerns a lot, if that were the case, but skimming
> through the proposal, it explicitly seems not to be the case.
>
> The "Motivation and Scope" section shows that this aims to target drawing
> GUIs and cover other needs of graphical applications, so it's not just
> about learning or tiny use cases.
>
> Even more worryingly, the proposal talks about GPUs and Direct3D and
> OpenGL and even Mantle, and that scares me, given what we know about how
> sad it is to have to take an API like Cairo (or Skia, or Moz2D, or Canvas
> 2D, it doesn't matter) and try to make it efficiently utilize GPUs. The
> case of a Cairo-like or Skia-like API could totally be made, but the only
> mention of GPUs should be to say that they are mostly outside of its scope;
> anything more enthusiastic than that confirms fears that the proposal's
> authors are not talking out of experience.
>

It's actually even worse than I realized: the proposal is peppered with
performance-related comments about GPUs. Just search for "GPU" in it; there
are 42 matches, most of them scarily talking about GPU performance
characteristics (a typical one is "GPU resources are expensive to copy").

This proposal should either not care at all about GPU details, which would
be totally fine for a basic software 2D renderer and could already cover
the needs of many applications; or, if it were to seriously care about
running fast on GPUs, it would not use Cairo as its starting point and it
would look totally different (it would try to lend itself to seamlessly
batching and reordering drawing primitives; typically, a declarative /
scene-graph API would be a better starting point).

Benoit




>
> Benoit
>
>
>
>
>>
>> Cheers,
>> Botond
>>
>
>


Re: C++ standards proposals of potential interest, and upcoming committee meeting

2014-06-09 Thread Benoit Jacob
2014-06-09 15:56 GMT-04:00 Botond Ballo :

> - Original Message -
> > From: "Benoit Jacob" 
> > To: "Botond Ballo" 
> > Cc: "dev-platform" 
> > Sent: Monday, June 9, 2014 3:45:20 PM
> > Subject: Re: C++ standards proposals of potential interest, and upcoming
> committee meeting
> >
> > 2014-06-09 15:31 GMT-04:00 Botond Ballo :
> >
> > > Cairo-based 2D drawing API (latest revision):
> > >   http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n4021.pdf
> > >
> >
> > I would like the C++ committee's attention to be drawn to the dangers,
> > for a committee, of trying to make decisions outside its domain of
> > expertise. I see more potential for harm than for good in having the
> > C++ committee join the ranks of non-graphics specialists thinking they
> > know how to do graphics...
>
> Does this caution apply even if the explicit goal of this API is to allow
> people learning C++ and/or creating simple graphical applications to be
> able to do so with minimal overhead (setting up third-party libraries and
> such), rather than necessarily provide a tool for expert-level/heavy-duty
> graphics work?
>

That would ease my concerns a lot, if that were the case, but skimming
through the proposal, it explicitly seems not to be the case.

The "Motivation and Scope" section shows that this aims to target drawing
GUIs and cover other needs of graphical applications, so it's not just
about learning or tiny use cases.

Even more worryingly, the proposal talks about GPUs and Direct3D and OpenGL
and even Mantle, and that scares me, given what we know about how sad it is
to have to take an API like Cairo (or Skia, or Moz2D, or Canvas 2D, it
doesn't matter) and try to make it efficiently utilize GPUs. The case of a
Cairo-like or Skia-like API could totally be made, but the only mention of
GPUs should be to say that they are mostly outside of its scope; anything
more enthusiastic than that confirms fears that the proposal's authors are
not talking out of experience.

Benoit




>
> Cheers,
> Botond
>


Re: C++ standards proposals of potential interest, and upcoming committee meeting

2014-06-09 Thread Benoit Jacob
2014-06-09 15:31 GMT-04:00 Botond Ballo :

> Cairo-based 2D drawing API (latest revision):
>   http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n4021.pdf
>

I would like the C++ committee's attention to be drawn to the dangers, for
a committee, of trying to make decisions outside its domain of expertise. I
see more potential for harm than for good in having the C++ committee join
the ranks of non-graphics specialists thinking they know how to do
graphics...

If that helps, we can give a pile of evidence for how having generalist Web
circles trying to standardize graphics APIs has repeatedly yielded
unnecessarily poor APIs...

Benoit





>
> Reflection proposals (these are very early-stage proposals, but they
> give an idea of the directions people are exploring):
>   http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n3987.pdf
>   http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n3996.pdf
>   http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n4027.pdf
>
>
> The Committee is meeting next week in Rapperswil, Switzerland. I will be
> attending.
>
> If anyone has any feedback on the above proposals, or any other proposals,
> or anything else you'd like me to communicate at the meeting, or anything
> I can find out for you at the meeting, please let me know!
>
> Shortly after the meeting I will blog about what happened there - stay
> tuned!
>
> Cheers,
> Botond


Re: Intent to implement: DOMMatrix

2014-06-08 Thread Benoit Jacob
2014-06-08 8:56 GMT-04:00 :

> On Monday, June 2, 2014 12:11:29 AM UTC+2, Benoit Jacob wrote:
> > My ROI for arguing on standards mailing on matrix math topics lists has
> > been very low, presumably because these are specialist topics outside of
> > the area of expertise of these groups.
> >
> > Here are a couple more objections by the way:
> >
> > [...]
> >
> > Benoit
>
> Benoit, would you mind producing a strawman for ES7, or advising someone
> who can? Brendan Eich is doing some type stuff which is probably relevant
> to this (also for SIMD etc.). I firmly believe proper Matrix handling &
> APIs for JS are wanted by quite a few people. DOMMatrix-using APIs may then
> be altered to accept JS matrices (or provide a way to translate from
> JSMatrix to DOMMatrix and back again). This may help in the long term while
> the platform can have the proposed APIs. Thanks!
>
>
Don't put matrix arithmetic concepts directly in a generalist language like
JS, or in its standard library. That's too much of a specialist topic, with
too many compromises to decide on.

Instead, at the language level, simply make sure that the language offers
the right features to allow third parties to build good matrix classes on
top of it.

For example, C++'s templates, OO concepts, alignment/SIMD extensions, etc.,
make it a decent language to implement matrix libraries on top of, and as a
result, C++ programmers are much better served by the offering of
independent matrix libraries than they would be by a standard-library
attempt at matrix library design. Another example is Fortran, which IIRC
has specific features enabling fast array arithmetic, but leaves the actual
matrix arithmetic up to 3rd-party libraries (BLAS, LAPACK). I think that
history shows that leaving matrix arithmetic up to 3rd parties is best, but
there are definitely language-level issues to discuss to enable 3rd parties
to do that well.
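
As a tiny illustration of that point, here is a sketch (standard C++; a
real library such as Eigen does this seriously) of how templates alone let
a third party provide fixed-size matrix types, with dimension checking at
compile time and no language-level matrix support at all:

  #include <array>
  #include <cstddef>

  template <typename Scalar, std::size_t Rows, std::size_t Cols>
  struct Matrix {
    std::array<Scalar, Rows * Cols> m{};  // row-major storage
    Scalar& operator()(std::size_t r, std::size_t c) { return m[r * Cols + c]; }
    Scalar operator()(std::size_t r, std::size_t c) const { return m[r * Cols + c]; }
  };

  // Inner dimensions must match, enforced by the template machinery.
  template <typename S, std::size_t R, std::size_t K, std::size_t C>
  Matrix<S, R, C> operator*(const Matrix<S, R, K>& a, const Matrix<S, K, C>& b) {
    Matrix<S, R, C> out;
    for (std::size_t i = 0; i < R; ++i)
      for (std::size_t j = 0; j < C; ++j)
        for (std::size_t k = 0; k < K; ++k)
          out(i, j) += a(i, k) * b(k, j);
    return out;
  }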

Benoit







Re: Intent to implement: DOMMatrix

2014-06-07 Thread Benoit Jacob
2014-06-07 12:49 GMT-04:00 L. David Baron :

> On Monday 2014-06-02 20:45 -0700, Rik Cabanier wrote:
> > - change isIdentity() so it's a flag.
>
> I'm a little worried about this one at first glance.
>
> I suspect isIdentity is going to be used primarily for optimization.
> But we want optimizations on the Web to be good -- we should care
> about making it easy for authors to care about performance.  And I'm
> worried that a flag-based isIdentity will not be useful for
> optimization because it won't hit many of the cases that authors
> care about, e.g., translating and un-translating, or scaling and
> un-scaling.
>

Note that the current way that isIdentity() works also fails to offer that
characteristic, outside of accidental cases, due to how floating point
works.

The point of this optimization is not so much to detect when a generic
transformation happens to be of a special form; it is rather to represent
transformations as a kind of variant type, where "matrix transformation" is
one possible variant type and exists alongside the default, more optimized
type, "identity transformation".

Earlier in this thread I pleaded for the removal of isIdentity(). What I
mean is that, as it is only defensible as a "variant" optimization as
described above, it doesn't make sense in a _matrix_ class. If we want to
have such a variant type, we should give it a name that does not contain
the word "matrix", and we should have it one level above where we actually
do matrix arithmetic.

Strawman class diagram:

            Transformation
           /       |       \
          /        |        \
         /         |         \
   Identity     Matrix     Other transform types
                           e.g. Translation

In such a world, the class containing the word "Matrix" in its name would
not have an isIdentity() method; and for use cases where having a "variant
type" that can avoid being a full blown matrix is meaningful, we would have
such a variant type, like "Transformation" in the above diagram, and the
isIdentity() method there would be merely asking the variant type for its
type field.
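
To make the strawman concrete, here is a rough sketch (standard C++;
entirely hypothetical, not a proposed API) of such a variant type, where
isIdentity() is merely a tag check:

  struct Matrix4x4 { double m[16]; };
  struct Translation { double x, y, z; };

  class Transformation {
  public:
    enum class Type { Identity, Matrix, Translation };

    Transformation() : mType(Type::Identity) {}
    explicit Transformation(const Matrix4x4& aMatrix)
        : mType(Type::Matrix), mMatrix(aMatrix) {}
    explicit Transformation(const Translation& aTranslation)
        : mType(Type::Translation), mTranslation(aTranslation) {}

    // No matrix entries are inspected; this just reads the type field.
    bool IsIdentity() const { return mType == Type::Identity; }

  private:
    Type mType;
    union {  // only the member matching mType is meaningful
      Matrix4x4 mMatrix;
      Translation mTranslation;
    };
  };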

Benoit



>
> -David
>
> --
> 𝄞   L. David Baron http://dbaron.org/   𝄂
> 𝄢   Mozilla  https://www.mozilla.org/   𝄂
>  Before I built a wall I'd ask to know
>  What I was walling in or walling out,
>  And to whom I was like to give offense.
>- Robert Frost, Mending Wall (1914)
>


Re: Intent to implement: DOMMatrix

2014-06-05 Thread Benoit Jacob
2014-06-05 18:59 GMT-04:00 Matt Woodrow :

> On 6/06/14 12:05 am, Benoit Jacob wrote:
>
>>
>> The situation isn't symmetric: radians are inherently simpler to implement
>> (thus slightly faster), basically because only in radians is it true that
>> sin(x) ~= x for small x.
>>
>> I also doubt that degrees are simpler to understand, and if anything you
>> might just want to provide a simple name for the constant 2*pi:
>>
>> var turn = Math.PI * 2;
>>
>> Now, what is easier to understand:
>>
>> rotate(turn / 5)
>>
>> or
>>
>> rotate(72)
>>
>> ?
>>
>>
>>
> I don't think this is a fair comparison; you used a fraction of a constant
> for one and a raw number for the other.
>
> Which is easier to understand:
>
> var turn = 360;
>
> rotate(turn / 5)
>
> or
>
> rotate(1.25663706143592)
>
> ?
>
>
I just meant that neither radians nor degrees are significantly easier than
the other, since in practice this is just changing the value for the "turn"
constant that people shouldn't be writing manually, i.e. even in degrees
people should IMHO write turn/4 instead of 90.

Benoit


Re: Intent to implement: DOMMatrix

2014-06-05 Thread Benoit Jacob
2014-06-05 9:08 GMT-04:00 Rik Cabanier :

>
>
>
> On Thu, Jun 5, 2014 at 5:05 AM, Benoit Jacob 
> wrote:
>
>>
>>
>>
>> 2014-06-05 2:48 GMT-04:00 Rik Cabanier :
>>
>>
>>>
>>>
>>> On Wed, Jun 4, 2014 at 2:20 PM, Milan Sreckovic 
>>> wrote:
>>>
>>>> In general, is “this is how it worked with SVGMatrix” one of the design
>>>> principles?
>>>>
>>>> I was hoping this would be the time matrix rotate() method goes to
>>>> radians, like the canvas rotate, and unlike SVGMatrix version that takes
>>>> degrees...
>>>>
>>>
>>> "degrees" is easier to understand for authors.
>>> With the new DOMMatrix constructor, you can specify radians:
>>>
>>> var m = new DOMMatrix('rotate(1.75rad)');
>>>
>>> Not specifying the unit will make it default to degrees (like angles in
>>> SVG)
>>>
>>
>>
>> The situation isn't symmetric: radians are inherently simpler to
>> implement (thus slightly faster), basically because only in radians is it
>> true that sin(x) ~= x for small x.
>>
>> I also doubt that degrees are simpler to understand, and if anything you
>> might just want to provide a simple name for the constant 2*pi:
>>
>> var turn = Math.PI * 2;
>>
>> Now, what is easier to understand:
>>
>> rotate(turn / 5)
>>
>> or
>>
>> rotate(72)
>>
>
> The numbers don't lie :-)
> Just do a google search for "CSS transform rotate". I went over 20 pages
> of results and they all used "deg".
>

The other problem is that outside of SVG, other parts of the platform that
are being proposed to use SVGMatrix were using radians. For example, the
Canvas 2D context uses radians

http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#dom-context-2d-rotate

Not to mention that JavaScript also uses radians, e.g. in Math.cos().

Benoit


Re: Intent to implement: DOMMatrix

2014-06-05 Thread Benoit Jacob
2014-06-05 2:48 GMT-04:00 Rik Cabanier :

>
>
>
> On Wed, Jun 4, 2014 at 2:20 PM, Milan Sreckovic 
> wrote:
>
>> In general, is “this is how it worked with SVGMatrix” one of the design
>> principles?
>>
>> I was hoping this would be the time matrix rotate() method goes to
>> radians, like the canvas rotate, and unlike SVGMatrix version that takes
>> degrees...
>>
>
> "degrees" is easier to understand for authors.
> With the new DOMMatrix constructor, you can specify radians:
>
> var m = new DOMMatrix('rotate(1.75rad)');
>
> Not specifying the unit will make it default to degrees (like angles in
> SVG)
>


The situation isn't symmetric: radians are inherently simpler to implement
(thus slightly faster), basically because only in radians is it true that
sin(x) ~= x for small x.

I also doubt that degrees are simpler to understand, and if anything you
might just want to provide a simple name for the constant 2*pi:

var turn = Math.PI * 2;

Now, what is easier to understand:

rotate(turn / 5)

or

rotate(72)

?
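
As a quick numeric check of the sin(x) ~= x claim (a standalone standard
C++ snippet, since the implementation side is where radians pay off):

  #include <cmath>
  #include <cstdio>

  int main() {
    const double pi = std::acos(-1.0);
    const double turn = 2.0 * pi;  // the "turn" constant suggested above

    // sin(x) ~= x for small x -- but only when x is in radians.
    for (double x : {1e-2, 1e-4, 1e-6}) {
      std::printf("x = %g   sin(x) = %.12g   relative error = %.2g\n",
                  x, std::sin(x), (std::sin(x) - x) / x);
    }

    // One fifth of a turn, as in the rotate(turn / 5) example above:
    std::printf("turn / 5 = %g radians (= 72 degrees)\n", turn / 5.0);
    return 0;
  }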

Benoit


Re: Intent to implement: DOMMatrix

2014-06-04 Thread Benoit Jacob
2014-06-04 20:28 GMT-04:00 Cameron McCormack :

> On 05/06/14 07:20, Milan Sreckovic wrote:
>
>> In general, is “this is how it worked with SVGMatrix” one of the
>> design principles?
>>
>> I was hoping this would be the time matrix rotate() method goes to
>> radians, like the canvas rotate, and unlike SVGMatrix version that
>> takes degrees...
>>
>
> By the way, in the SVG Working Group we have been discussing (but haven't
> decided yet) whether to perform a wholesale overhaul of the SVG DOM.
>
> http://dev.w3.org/SVG/proposals/improving-svg-dom/
>
> If we go through with that, then we could drop SVGMatrix and use DOMMatrix
> (which wouldn't then need to be compatible with SVGMatrix) for all the SVG
> DOM methods we wanted to retain that deal with matrices. I'm hoping we'll
> resolve whether to go ahead with this at our next meeting, in August.
>

Thanks, that's very interesting input in this thread, as the entire
conversation here has been based on the axiom that we have to keep
compatibility with SVGMatrix.

Benoit


Re: Intent to implement: DOMMatrix

2014-06-03 Thread Benoit Jacob
2014-06-03 18:29 GMT-04:00 Robert O'Callahan :

> On Wed, Jun 4, 2014 at 10:26 AM, Rik Cabanier  wrote:
>
>> That would require try/catch around all the "invert()" calls. This is ugly
>> but more importantly, it will significantly slow down javascript
>> execution.
>> I'd prefer that we don't throw at all but we have to because SVGMatrix
>> did.
>>
>
> Are you sure that returning a special value (e.g. all NaNs) would not fix
> more code than it would break?
>
> I think returning all NaNs instead of throwing would be much better
> behavior.
>

FWIW, I totally agree! That is exactly what NaN is there for, and floating
point would be a nightmare if division-by-zero threw.

To summarize, my order of preference is:

  1. (my first choice) have no inverse() / invert() / isInvertible()
methods at all.

  2. (second choice) have inverse() returning NaN on non-invertible
matrices and possibly somehow returning a second boolean return value (e.g.
an out-parameter or a structured return value) to indicate whether the
matrix was invertible. Do not have a separate isInvertible().

  3. (worst case #1) keep inverse() throwing. Do not have a separate
isInvertible().

  4. (worst case #2) offer isInvertible() method separate from inverse().
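
For what option 2 could look like, here is a rough sketch (standard C++
with a 2x2 matrix for brevity; all names are hypothetical, this is not a
spec proposal). The determinant is computed exactly once, and the caller
queries a flag instead of calling a separate isInvertible():

  #include <limits>

  struct Matrix2x2 { double a, b, c, d; };

  struct InversionResult {
    Matrix2x2 inverse;
    bool invertible;
  };

  InversionResult Invert(const Matrix2x2& m) {
    const double det = m.a * m.d - m.b * m.c;
    if (det == 0.0) {
      // The NaN variant: no exception, a poisoned result plus a flag.
      const double nan = std::numeric_limits<double>::quiet_NaN();
      return InversionResult{Matrix2x2{nan, nan, nan, nan}, false};
    }
    const double s = 1.0 / det;
    return InversionResult{Matrix2x2{m.d * s, -m.b * s, -m.c * s, m.a * s}, true};
  }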

Benoit


>
> Rob
> --
> Jtehsauts  tshaei dS,o n" Wohfy  Mdaon  yhoaus  eanuttehrotraiitny  eovni
> le atrhtohu gthot sf oirng iyvoeu rs ihnesa.r"t sS?o  Whhei csha iids  teoa
> stiheer :p atroa lsyazye,d  'mYaonu,r  "sGients  uapr,e  tfaokreg iyvoeunr,
> 'm aotr  atnod  sgaoy ,h o'mGee.t"  uTph eann dt hwea lmka'n?  gBoutt  uIp
> waanndt  wyeonut  thoo mken.o w
>


Re: Intent to implement: DOMMatrix

2014-06-03 Thread Benoit Jacob
2014-06-03 18:26 GMT-04:00 Rik Cabanier :

>
>
>
> On Tue, Jun 3, 2014 at 2:40 PM, Benoit Jacob 
> wrote:
>
>>
>>
>>
>> 2014-06-03 17:34 GMT-04:00 Benoit Jacob :
>>
>>
>>>
>>>
>>> 2014-06-03 16:20 GMT-04:00 Rik Cabanier :
>>>
>>>
>>>>
>>>>
>>>> On Tue, Jun 3, 2014 at 6:06 AM, Benoit Jacob 
>>>> wrote:
>>>>
>>>>>
>>>>>
>>>>>
>>>>> 2014-06-03 3:34 GMT-04:00 Dirk Schulze :
>>>>>
>>>>>
>>>>>> On Jun 2, 2014, at 12:11 AM, Benoit Jacob 
>>>>>> wrote:
>>>>>>
>>>>>> > Objection #6:
>>>>>> >
>>>>>> > The determinant() method, being in this API the only easy way to get
>>>>>> > something that looks roughly like a measure of invertibility, will
>>>>>> probably
>>>>>> > be (mis-)used as a measure of invertibility. So I'm quite confident
>>>>>> that it
>>>>>> > has a strong mis-use case. Does it have a strong good use case?
>>>>>> Does it
>>>>>> > outweigh that? Note that if the intent is precisely to offer some
>>>>>> kind of
>>>>>> > measure of invertibility, then that is yet another thing that would
>>>>>> be best
>>>>>> > done with a singular values decomposition (along with solving, and
>>>>>> with
>>>>>> > computing a polar decomposition, useful for interpolating
>>>>>> matrices), by
>>>>>> > returning the ratio between the lowest and the highest singular
>>>>>> value.
>>>>>>
>>>>>> Looking at use cases, then determinant() is indeed often used for:
>>>>>>
>>>>>> * Checking if a matrix is invertible.
>>>>>> * Part of actually inverting the matrix.
>>>>>> * Part of some decomposing algorithms as the one in CSS Transforms.
>>>>>>
>>>>>> I should note that the determinant is the most common way to check
>>>>>> for invertibility of a matrix and part of actually inverting the matrix.
>>>>>> Even Cairo Graphics, Skia and Gecko’s representation of matrix3x3 do use
>>>>>> the determinant for these operations.
>>>>>>
>>>>>
>>>>> I didn't say that determinant had no good use case. I said that it had
>>>>> more bad use cases than it had good ones. If its only use case is checking
>>>>> whether the cofactors formula will succeed in computing the inverse, then
>>>>> make that part of the inversion API so you don't compute the determinant
>>>>> twice.
>>>>>
>>>>> Here is a good use case of determinant, except it's bad because it
>>>>> computes the determinant twice:
>>>>>
>>>>>   if (matrix.determinant() != 0) {  // once
>>>>>     result = matrix.inverse();      // twice
>>>>>   }
>>>>>
>>>>> If that's the only thing we use the determinant for, then we're better
>>>>> served by an API like this, allowing to query success status:
>>>>>
>>>>>   var matrixInversionResult = matrix.inverse();   // once
>>>>>   if (matrixInversionResult.invertible()) {
>>>>>     result = matrixInversionResult.inverse();
>>>>>   }
>>>>>
>>>>
>>>> This seems to be the main use case for Determinant(). Any objections if
>>>> we add isInvertible to DOMMatrixReadOnly?
>>>>
>>>
>>> Can you give an example of how this API would be used and how it would
>>> *not* force the implementation to compute the determinant twice if people
>>> call isInvertible() and then inverse() ?
>>>
>>
>> Actually, inverse() is already spec'd to throw if the inversion fails. In
>> that case (assuming we keep it that way) there is no need at all for any
>> isInvertible kind of method. Note that in floating-point arithmetic there
>> is no absolute notion of invertibility; there just are different matrix
>> inversion algorithms each failing on different matrices, so "invertibility"
>> only makes sense with respect to one inversion algorithm, so it is actually
>> better to keep the current exception-throwing API than to introduce a
>> separate isInvertible getter.

Re: Intent to implement: DOMMatrix

2014-06-03 Thread Benoit Jacob
2014-06-03 17:34 GMT-04:00 Benoit Jacob :

>
>
>
> 2014-06-03 16:20 GMT-04:00 Rik Cabanier :
>
>
>>
>>
>> On Tue, Jun 3, 2014 at 6:06 AM, Benoit Jacob 
>> wrote:
>>
>>>
>>>
>>>
>>> 2014-06-03 3:34 GMT-04:00 Dirk Schulze :
>>>
>>>
>>>> On Jun 2, 2014, at 12:11 AM, Benoit Jacob 
>>>> wrote:
>>>>
>>>> > Objection #6:
>>>> >
>>>> > The determinant() method, being in this API the only easy way to get
>>>> > something that looks roughly like a measure of invertibility, will
>>>> probably
>>>> > be (mis-)used as a measure of invertibility. So I'm quite confident
>>>> that it
>>>> > has a strong mis-use case. Does it have a strong good use case? Does
>>>> it
>>>> > outweigh that? Note that if the intent is precisely to offer some
>>>> kind of
>>>> > measure of invertibility, then that is yet another thing that would
>>>> be best
>>>> > done with a singular values decomposition (along with solving, and
>>>> with
>>>> > computing a polar decomposition, useful for interpolating matrices),
>>>> by
>>>> > returning the ratio between the lowest and the highest singular value.
>>>>
>>>> Looking at use cases, then determinant() is indeed often used for:
>>>>
>>>> * Checking if a matrix is invertible.
>>>> * Part of actually inverting the matrix.
>>>> * Part of some decomposing algorithms as the one in CSS Transforms.
>>>>
>>>> I should note that the determinant is the most common way to check for
>>>> invertibility of a matrix and part of actually inverting the matrix. Even
>>>> Cairo Graphics, Skia and Gecko’s representation of matrix3x3 do use the
>>>> determinant for these operations.
>>>>
>>>
>>> I didn't say that determinant had no good use case. I said that it had
>>> more bad use cases than it had good ones. If its only use case if checking
>>> whether the cofactors formula will succeed in computing the inverse, then
>>> make that part of the inversion API so you don't compute the determinant
>>> twice.
>>>
>>> Here is a good use case of determinant, except it's bad because it
>>> computes the determinant twice:
>>>
>>>   if (matrix.determinant() != 0) {  // once
>>>     result = matrix.inverse();      // twice
>>>   }
>>>
>>> If that's the only thing we use the determinant for, then we're better
>>> served by an API like this, allowing to query success status:
>>>
>>>   var matrixInversionResult = matrix.inverse();   // once
>>>   if (matrixInversionResult.invertible()) {
>>>     result = matrixInversionResult.inverse();
>>>   }
>>>
>>
>> This seems to be the main use case for Determinant(). Any objections if
>> we add isInvertible to DOMMatrixReadOnly?
>>
>
> Can you give an example of how this API would be used and how it would
> *not* force the implementation to compute the determinant twice if people
> call isInvertible() and then inverse() ?
>

Actually, inverse() is already spec'd to throw if the inversion fails. In
that case (assuming we keep it that way) there is no need at all for any
isInvertible kind of method. Note that in floating-point arithmetic there
is no absolute notion of invertibility; there just are different matrix
inversion algorithms each failing on different matrices, so "invertibility"
only makes sense with respect to one inversion algorithm, so it is actually
better to keep the current exception-throwing API than to introduce a
separate isInvertible getter.

Benoit


>
> Benoit
>
>
>>
>>
>>> Typical bad uses of the determinant as "measures of invertibility"
>>> typically occur in conjunction with people thinking they do the right thing
>>> with "fuzzy compares", like this typical bad pattern:
>>>
>>>   if (matrix.determinant() < 1e-6) {
>>>     return error;
>>>   }
>>>   result = matrix.inverse();
>>>
>>> Multiple things are wrong here:
>>>
>>>  1. First, as mentioned above, the determinant is being computed twice
>>> here.
>>>
>>>  2. Second, floating-point scale invariance is broken: floating point
>>> computations should generally work for all values across the whole exponent
>> range, which for doubles goes from 1e-300 to 1e+300 roughly.

Re: Intent to implement: DOMMatrix

2014-06-03 Thread Benoit Jacob
2014-06-03 16:20 GMT-04:00 Rik Cabanier :

>
>
>
> On Tue, Jun 3, 2014 at 6:06 AM, Benoit Jacob 
> wrote:
>
>>
>>
>>
>> 2014-06-03 3:34 GMT-04:00 Dirk Schulze :
>>
>>
>>> On Jun 2, 2014, at 12:11 AM, Benoit Jacob 
>>> wrote:
>>>
>>> > Objection #6:
>>> >
>>> > The determinant() method, being in this API the only easy way to get
>>> > something that looks roughly like a measure of invertibility, will
>>> probably
>>> > be (mis-)used as a measure of invertibility. So I'm quite confident
>>> that it
>>> > has a strong mis-use case. Does it have a strong good use case? Does it
>>> > outweigh that? Note that if the intent is precisely to offer some kind
>>> of
>>> > measure of invertibility, then that is yet another thing that would be
>>> best
>>> > done with a singular values decomposition (along with solving, and with
>>> > computing a polar decomposition, useful for interpolating matrices), by
>>> > returning the ratio between the lowest and the highest singular value.
>>>
>>> Looking at use cases, then determinant() is indeed often used for:
>>>
>>> * Checking if a matrix is invertible.
>>> * Part of actually inverting the matrix.
>>> * Part of some decomposing algorithms as the one in CSS Transforms.
>>>
>>> I should note that the determinant is the most common way to check for
>>> invertibility of a matrix and part of actually inverting the matrix. Even
>>> Cairo Graphics, Skia and Gecko’s representation of matrix3x3 do use the
>>> determinant for these operations.
>>>
>>
>> I didn't say that determinant had no good use case. I said that it had
>> more bad use cases than it had good ones. If its only use case is checking
>> whether the cofactors formula will succeed in computing the inverse, then
>> make that part of the inversion API so you don't compute the determinant
>> twice.
>>
>> Here is a good use case of determinant, except it's bad because it
>> computes the determinant twice:
>>
>>   if (matrix.determinant() != 0) {  // once
>>     result = matrix.inverse();      // twice
>>   }
>>
>> If that's the only thing we use the determinant for, then we're better
>> served by an API like this, allowing to query success status:
>>
>>   var matrixInversionResult = matrix.inverse();   // once
>>   if (matrixInversionResult.invertible()) {
>>     result = matrixInversionResult.inverse();
>>   }
>>
>
> This seems to be the main use case for Determinant(). Any objections if we
> add isInvertible to DOMMatrixReadOnly?
>

Can you give an example of how this API would be used and how it would
*not* force the implementation to compute the determinant twice if people
call isInvertible() and then inverse() ?

Benoit


>
>
>> Typical bad uses of the determinant as "measures of invertibility"
>> typically occur in conjunction with people thinking they do the right thing
>> with "fuzzy compares", like this typical bad pattern:
>>
>>   if (matrix.determinant() < 1e-6) {
>> return error;
>>   }
>>   result = matrix.inverse();
>>
>> Multiple things are wrong here:
>>
>>  1. First, as mentioned above, the determinant is being computed twice
>> here.
>>
>>  2. Second, floating-point scale invariance is broken: floating point
>> computations should generally work for all values across the whole exponent
>> range, which for doubles goes from 1e-300 to 1e+300 roughly. Take the
>> matrix that's 0.01*identity, and suppose we're dealing with 4x4 matrices.
>> The determinant of that matrix is 1e-8, so that matrix is incorrectly
>> treated as non-invertible here.
>>
>>  3. Third, if the primary use for the determinant is invertibility and
>> inversion is implemented by cofactors (as it would be for 4x4 matrices)
>> then in that case only an exact comparison of the determinant to 0 is
>> relevant. That's a case where no fuzzy comparison is meaningful. If one
>> wanted to guard against cancellation-induced imprecision, one would have to
>> look at cofactors themselves, not just at the determinant.
>>
>> In full generality, the determinant is just the volume of the unit cube
>> under the matrix transformation. It is exactly zero if and only if the
>> matrix is singular. That doesn't by itself give any interpretation of other
>> nonzero values of the determinant, not even "very small" ones.
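
(To make the scale-invariance point above concrete, here is a standalone
check in standard C++: for an n-by-n scaled identity s*I the determinant is
s^n, so det(0.01 * I_4) = 1e-8 and the fuzzy threshold misclassifies a
perfectly invertible matrix:)

  #include <cstdio>

  // det(s * I) for an n-by-n identity scaled by s is simply s^n.
  double DetOfScaledIdentity(double s, int n) {
    double det = 1.0;
    for (int i = 0; i < n; ++i) det *= s;
    return det;
  }

  int main() {
    const double det = DetOfScaledIdentity(0.01, 4);
    std::printf("det(0.01 * I_4) = %g\n", det);  // prints 1e-08
    std::printf("fuzzy check 'det < 1e-6' says: %s\n",
                det < 1e-6 ? "non-invertible (wrong!)" : "invertible");
    return 0;
  }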
>>
>> For special classes of matrices, things are different. Some classes of
>> matrices have a specific determinant, for example rotations have
>> determinant one, which can be used to do useful things. So in a
>> sufficiently advanced or specialized matrix API, the determinant is useful
>> to expose. DOMMatrix is special in that it is not advanced and not
>> specialized.
>>
>> Benoit
>>
>>
>>> Greetings,
>>> Dirk
>>
>>
>>
>


Re: Intent to implement: DOMMatrix

2014-06-03 Thread Benoit Jacob
I also think that now that determinant() is being removed, it is a good
time to revisit my Objection #4, which I don't think has been addressed at
all: please remove inverse() too.

Indeed, without its companion determinant() method, the inverse() method is
now standing out as by far the most advanced and footgun-ish feature in
this API, and the concerns that I expressed about it in Objection #4
earlier in this thread still stand.

With inverse() removed, the feature set would look more consistent as it
would then purely be about creating and composing (multiplying)
transformations.

Benoit


2014-06-03 13:23 GMT-04:00 Rik Cabanier :

>
>
>
> On Tue, Jun 3, 2014 at 6:06 AM, Benoit Jacob 
> wrote:
>
>>
>>
>>
>> 2014-06-03 3:34 GMT-04:00 Dirk Schulze :
>>
>>
>>> On Jun 2, 2014, at 12:11 AM, Benoit Jacob 
>>> wrote:
>>>
>>> > Objection #6:
>>> >
>>> > The determinant() method, being in this API the only easy way to get
>>> > something that looks roughly like a measure of invertibility, will
>>> probably
>>> > be (mis-)used as a measure of invertibility. So I'm quite confident
>>> that it
>>> > has a strong mis-use case. Does it have a strong good use case? Does it
>>> > outweigh that? Note that if the intent is precisely to offer some kind
>>> of
>>> > measure of invertibility, then that is yet another thing that would be
>>> best
>>> > done with a singular values decomposition (along with solving, and with
>>> > computing a polar decomposition, useful for interpolating matrices), by
>>> > returning the ratio between the lowest and the highest singular value.
>>>
>>> Looking at use cases, then determinant() is indeed often used for:
>>>
>>> * Checking if a matrix is invertible.
>>> * Part of actually inverting the matrix.
>>> * Part of some decomposing algorithms as the one in CSS Transforms.
>>>
>>> I should note that the determinant is the most common way to check for
>>> invertibility of a matrix and part of actually inverting the matrix. Even
>>> Cairo Graphics, Skia and Gecko’s representation of matrix3x3 do use the
>>> determinant for these operations.
>>>
>>
>> I didn't say that determinant had no good use case. I said that it had
>> more bad use cases than it had good ones. If its only use case is checking
>> whether the cofactors formula will succeed in computing the inverse, then
>> make that part of the inversion API so you don't compute the determinant
>> twice.
>>
>> Here is a good use case of determinant, except it's bad because it
>> computes the determinant twice:
>>
>>   if (matrix.determinant() != 0) {  // once
>>     result = matrix.inverse();      // twice
>>   }
>>
>> If that's the only thing we use the determinant for, then we're better
>> served by an API like this, allowing to query success status:
>>
>>   var matrixInversionResult = matrix.inverse();   // once
>>   if (matrixInversionResult.invertible()) {
>>     result = matrixInversionResult.inverse();
>>   }
>>
>> Typical bad uses of the determinant as "measures of invertibility"
>> typically occur in conjunction with people thinking they do the right thing
>> with "fuzzy compares", like this typical bad pattern:
>>
>>   if (matrix.determinant() < 1e-6) {
>> return error;
>>   }
>>   result = matrix.inverse();
>>
>> Multiple things are wrong here:
>>
>>  1. First, as mentioned above, the determinant is being computed twice
>> here.
>>
>>  2. Second, floating-point scale invariance is broken: floating point
>> computations should generally work for all values across the whole exponent
>> range, which for doubles goes from 1e-300 to 1e+300 roughly. Take the
>> matrix that's 0.01*identity, and suppose we're dealing with 4x4 matrices.
>> The determinant of that matrix is 1e-8, so that matrix is incorrectly
>> treated as non-invertible here.
>>
>>  3. Third, if the primary use for the determinant is invertibility and
>> inversion is implemented by cofactors (as it would be for 4x4 matrices)
>> then in that case only an exact comparison of the determinant to 0 is
>> relevant. That's a case where no fuzzy comparison is meaningful. If one
>> wanted to guard against cancellation-induced imprecision, one would have to
>> look at cofactors themselves, not just at the determinant.
>>
>> In full generality, the determinant is just the volume of the unit cube
>> under the matrix transformation.

Re: Intent to implement: DOMMatrix

2014-06-03 Thread Benoit Jacob
2014-06-02 23:45 GMT-04:00 Rik Cabanier :

> To recap I think the following points have been resolved:
> - remove determinant (unless someone comes up with a strong use case)
> - change is2D() so it's a flag instead of calculated on the fly
> - change isIdentity() so it's a flag.
> - update constructors so they set/copy the flags appropriately
>
> Still up for discussion:
> - rename isIdentity
> - come up with better way for the in-place transformations as opposed to
> "by"
> - is premultiply needed?
>
>

This list misses some of the points that I care more about:
 - Should DOMMatrix really try to represent both 3D projective
transformations and 2D affine transformations, or should that be split into
separate classes?
 - Should we really take SVG's matrix and other existing bad matrix APIs
and bless them and engrave them in the marble of The New HTML5 That Is Good
By Definition?

Benoit
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: DOMMatrix

2014-06-03 Thread Benoit Jacob
2014-06-03 3:34 GMT-04:00 Dirk Schulze :

>
> On Jun 2, 2014, at 12:11 AM, Benoit Jacob 
> wrote:
>
> > Objection #6:
> >
> > The determinant() method, being in this API the only easy way to get
> > something that looks roughly like a measure of invertibility, will
> probably
> > be (mis-)used as a measure of invertibility. So I'm quite confident that
> it
> > has a strong mis-use case. Does it have a strong good use case? Does it
> > outweigh that? Note that if the intent is precisely to offer some kind of
> > measure of invertibility, then that is yet another thing that would be
> best
> > done with a singular values decomposition (along with solving, and with
> > computing a polar decomposition, useful for interpolating matrices), by
> > returning the ratio between the lowest and the highest singular value.
>
> Looking at use cases, then determinant() is indeed often used for:
>
> * Checking if a matrix is invertible.
> * Part of actually inverting the matrix.
> * Part of some decomposing algorithms as the one in CSS Transforms.
>
> I should note that the determinant is the most common way to check for
> invertibility of a matrix and part of actually inverting the matrix. Even
> Cairo Graphics, Skia and Gecko’s representation of matrix3x3 do use the
> determinant for these operations.
>

I didn't say that determinant had no good use case. I said that it had more
bad use cases than it had good ones. If its only use case is checking
whether the cofactors formula will succeed in computing the inverse, then
make that part of the inversion API so you don't compute the determinant
twice.

Here is a good use case of determinant, except it's bad because it computes
the determinant twice:

  if (matrix.determinant() != 0) {// once
result = matrix.inverse(); // twice
  }

If that's the only thing we use the determinant for, then we're better
served by an API like this, allowing to query success status:

  var matrixInversionResult = matrix.inverse();   // once
  if (matrixInversionResult.invertible()) {
result = matrixInversionResult.inverse();
  }
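
To make the point concrete, here is a minimal C++ sketch of an inversion API
that reports invertibility without a second determinant pass (the names and
the 2x2 shape are hypothetical, chosen only to keep the sketch
self-contained):

// Hypothetical sketch: the determinant is computed exactly once and reused
// for both the invertibility check and the inversion itself.
struct Matrix2x2 {
  double a, b, c, d;  // row-major: [a b; c d]
};

struct InversionResult {
  Matrix2x2 inverse;  // meaningful only when invertible is true
  bool invertible;
};

InversionResult Invert(const Matrix2x2& m) {
  double det = m.a * m.d - m.b * m.c;  // computed once
  InversionResult r;
  r.invertible = (det != 0.0);
  if (r.invertible) {
    double s = 1.0 / det;
    r.inverse = Matrix2x2{ m.d * s, -m.b * s, -m.c * s, m.a * s };
  }
  return r;
}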

Typical bad uses of the determinant as "measures of invertibility"
typically occur in conjunction with people thinking they do the right thing
with "fuzzy compares", like this typical bad pattern:

  if (matrix.determinant() < 1e-6) {
return error;
  }
  result = matrix.inverse();

Multiple things are wrong here:

 1. First, as mentioned above, the determinant is being computed twice here.

 2. Second, floating-point scale invariance is broken: floating point
computations should generally work for all values across the whole exponent
range, which for doubles goes from 1e-300 to 1e+300 roughly. Take the
matrix that's 0.01*identity, and suppose we're dealing with 4x4 matrices.
The determinant of that matrix is 1e-8, so that matrix is incorrectly
treated as non-invertible here.

 3. Third, if the primary use for the determinant is invertibility and
inversion is implemented by cofactors (as it would be for 4x4 matrices)
then in that case only an exact comparison of the determinant to 0 is
relevant. That's a case where no fuzzy comparison is meaningful. If one
wanted to guard against cancellation-induced imprecision, one would have to
look at cofactors themselves, not just at the determinant.

In full generality, the determinant is just the volume of the unit cube
under the matrix transformation. It is exactly zero if and only if the
matrix is singular. That doesn't by itself give any interpretation of other
nonzero values of the determinant, not even "very small" ones.
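
For illustration, the scale-invariance failure from point 2 above can be seen
in a few lines of standalone C++ (hypothetical demo code, not tied to any DOM
API):

#include <cmath>
#include <cstdio>

int main() {
  // The determinant of s * Identity for a 4x4 matrix is s^4.
  double s = 0.01;
  double det = std::pow(s, 4);           // 1e-8
  bool fuzzySaysSingular = det < 1e-6;   // true  -- wrong: the matrix is invertible
  bool exactSaysSingular = det == 0.0;   // false -- right
  std::printf("det = %g, fuzzy: %d, exact: %d\n",
              det, fuzzySaysSingular, exactSaysSingular);
  return 0;
}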

For special classes of matrices, things are different. Some classes of
matrices have a specific determinant, for example rotations have
determinant one, which can be used to do useful things. So in a
sufficiently advanced or specialized matrix API, the determinant is useful
to expose. DOMMatrix is special in that it is not advanced and not
specialized.

Benoit


> Greetings,
> Dirk
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: DOMMatrix

2014-06-02 Thread Benoit Jacob
2014-06-02 17:13 GMT-04:00 Rik Cabanier :

>
>
>
> On Mon, Jun 2, 2014 at 11:08 AM, Benoit Jacob 
> wrote:
>
>>
>>
>>
>> 2014-06-02 14:06 GMT-04:00 Benoit Jacob :
>>
>>
>>>
>>>
>>> 2014-06-02 13:56 GMT-04:00 Nick Alexander :
>>>
>>> On 2014-06-02, 9:59 AM, Rik Cabanier wrote:
>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Mon, Jun 2, 2014 at 9:05 AM, Nick Alexander >>>> <mailto:nalexan...@mozilla.com>> wrote:
>>>>>
>>>>> On 2014-06-02, 4:59 AM, Robert O'Callahan wrote:
>>>>>
>>>>> On Mon, Jun 2, 2014 at 3:19 PM, Rik Cabanier <
>>>>> caban...@gmail.com
>>>>> <mailto:caban...@gmail.com>> wrote:
>>>>>
>>>>> isIdentity() indeed suffers from rounding errors but since
>>>>> it's useful, I'm
>>>>> hesitant to remove it.
>>>>> In our rendering libraries at Adobe, we check if a matrix
>>>>> is
>>>>> *almost*
>>>>> identity. Maybe we can do the same here?
>>>>>
>>>>>
>>>>> One option would be to make "isIdentity" and "is2D" state bits
>>>>> in the
>>>>> object rather than predicates on the matrix coefficients. Then
>>>>> for each
>>>>> matrix operation, we would define how it affects the isIdentity
>>>>> and is2D
>>>>> bits. For example we could say translate(tx, ty, tz)'s result
>>>>> isIdentity if
>>>>> and only if the source matrix isIdentity and tx, ty and tz are
>>>>> all exactly
>>>>> 0.0, and the result is2D if and only if the source matrix is2D
>>>>> and tz is
>>>>> exactly 0.0.
>>>>>
>>>>> With that approach, isIdentity and is2D would be much less
>>>>> sensitive to
>>>>> precision issues. In particular they'd be independent of the
>>>>> precision used
>>>>> to compute and store matrix elements, which would be
>>>>> helpful I think.
>>>>>
>>>>>
>>>>> I agree that most mathematical ways of determining a matrix (as a
>>>>> rotation, or a translation, etc) come with isIdentity for free; but
>>>>> are most matrices derived from some underlying transformation, or
>>>>> are they given as a list of coefficients?
>>>>>
>>>>>
>>>>> You can do it either way. Here are the constructors:
>>>>> http://dev.w3.org/fxtf/geometry/#dom-dommatrix-dommatrix
>>>>>
>>>>> So you can do:
>>>>>
>>>>> var m = new DOMMatrix(); // identity = true, 2d = true
>>>>> var m = new DOMMatrix("translate(20 20) scale(4 4) skewX"); //
>>>>> identity = depends, 2d = depends
>>>>> var m = new DOMMatrix(otherdommatrix);  // identity = inherited, 2d
>>>>> =
>>>>> inherited
>>>>> var m = new DOMMatrix([a b c d e f]); // identity = depends, 2d =
>>>>> true
>>>>> var m = new DOMMatrix([m11 m12... m44]); // identity = depends, 2d
>>>>> =
>>>>> depends
>>>>>
>>>>> If the latter, the isIdentity flag needs to be determined by the
>>>>> constructor, or fed as a parameter.  Exactly how does the
>>>>> constructor determine the parameter?  Exactly how does the user?
>>>>>
>>>>>
>>>>> The constructor would check the incoming parameters as defined:
>>>>>
>>>>> http://dev.w3.org/fxtf/geometry/#dom-dommatrixreadonly-is2d
>>>>> http://dev.w3.org/fxtf/geometry/#dom-dommatrixreadonly-isidentity
>>>>>
>>>>
>>>> Thanks for providing these references.  As an aside -- it worries me
>>>> that these are defined rather differently:  is2d says "are equal to 0",
>>>> while isIdentity says "are '0'".  Is this a syntactic or a semantic
>>>> difference?
>>>>
>>>> But, to the point, the idea of "carrying around the isIdentity flag" is
>>>> looking bad, because we either have that A*A.inverse() will never have
>>>> isIdentity() == true; or we promote the idiom that to check for identity,
>>>> one always creates a new DOMMatrix, so that the constructor determines
>>>> isIdentity, and then we query it.  This is no better than just having
>>>> isIdentity do the (badly-rounded) check.

Re: Intent to implement: DOMMatrix

2014-06-02 Thread Benoit Jacob
2014-06-02 13:56 GMT-04:00 Nick Alexander :

> On 2014-06-02, 9:59 AM, Rik Cabanier wrote:
>
>>
>>
>>
>> On Mon, Jun 2, 2014 at 9:05 AM, Nick Alexander > > wrote:
>>
>> On 2014-06-02, 4:59 AM, Robert O'Callahan wrote:
>>
>> On Mon, Jun 2, 2014 at 3:19 PM, Rik Cabanier > > wrote:
>>
>> isIdentity() indeed suffers from rounding errors but since
>> it's useful, I'm
>> hesitant to remove it.
>> In our rendering libraries at Adobe, we check if a matrix is
>> *almost*
>> identity. Maybe we can do the same here?
>>
>>
>> One option would be to make "isIdentity" and "is2D" state bits
>> in the
>> object rather than predicates on the matrix coefficients. Then
>> for each
>> matrix operation, we would define how it affects the isIdentity
>> and is2D
>> bits. For example we could say translate(tx, ty, tz)'s result
>> isIdentity if
>> and only if the source matrix isIdentity and tx, ty and tz are
>> all exactly
>> 0.0, and the result is2D if and only if the source matrix is2D
>> and tz is
>> exactly 0.0.
>>
>> With that approach, isIdentity and is2D would be much less
>> sensitive to
>> precision issues. In particular they'd be independent of the
>> precision used
>> to compute and store matrix elements, which would be
>> helpful I think.
>>
>>
>> I agree that most mathematical ways of determining a matrix (as a
>> rotation, or a translation, etc) come with isIdentity for free; but
>> are most matrices derived from some underlying transformation, or
>> are they given as a list of coefficients?
>>
>>
>> You can do it either way. Here are the constructors:
>> http://dev.w3.org/fxtf/geometry/#dom-dommatrix-dommatrix
>>
>> So you can do:
>>
>> var m = new DOMMatrix(); // identity = true, 2d = true
>> var m = new DOMMatrix("translate(20 20) scale(4 4) skewX"); //
>> identity = depends, 2d = depends
>> var m = new DOMMatrix(otherdommatrix);  // identity = inherited, 2d =
>> inherited
>> var m = new DOMMatrix([a b c d e f]); // identity = depends, 2d = true
>> var m = new DOMMatrix([m11 m12... m44]); // identity = depends, 2d =
>> depends
>>
>> If the latter, the isIdentity flag needs to be determined by the
>> constructor, or fed as a parameter.  Exactly how does the
>> constructor determine the parameter?  Exactly how does the user?
>>
>>
>> The constructor would check the incoming parameters as defined:
>>
>> http://dev.w3.org/fxtf/geometry/#dom-dommatrixreadonly-is2d
>> http://dev.w3.org/fxtf/geometry/#dom-dommatrixreadonly-isidentity
>>
>
> Thanks for providing these references.  As an aside -- it worries me that
> these are defined rather differently:  is2d says "are equal to 0", while
> isIdentity says "are '0'".  Is this a syntactic or a semantic difference?
>
> But, to the point, the idea of "carrying around the isIdentity flag" is
> looking bad, because we either have that A*A.inverse() will never have
> isIdentity() == true; or we promote the idiom that to check for identity,
> one always creates a new DOMMatrix, so that the constructor determines
> isIdentity, and then we query it.  This is no better than just having
> isIdentity do the (badly-rounded) check.
>

The way that propagating an "is identity" flag is better than determining
that from the matrix coefficients, is that it's predictable. People are
going to have matrices that are the result of various arithmetic
operations, that are close to identity but most of the time not exactly
identity. On these matrices, I would like isIdentity() to consistently
return false, instead of returning false 99.99% of the time and then
suddenly accidentally returning true when a little miracle happens and a
matrix happens to be exactly identity.
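
To make the flag semantics concrete, here is a hedged C++ sketch of the kind
of propagation rule being discussed (all names are hypothetical, and
MultiplyByTranslation stands in for the actual matrix arithmetic):

// The flags are derived from the *inputs* of each operation, never
// re-derived from the resulting coefficients.
struct Mat {
  double m[16];
  bool isIdentity;
  bool is2D;
};

Mat MultiplyByTranslation(const Mat& src, double tx, double ty, double tz);

Mat Translate(const Mat& src, double tx, double ty, double tz) {
  Mat result = MultiplyByTranslation(src, tx, ty, tz);
  result.isIdentity =
      src.isIdentity && tx == 0.0 && ty == 0.0 && tz == 0.0;
  result.is2D = src.is2D && tz == 0.0;
  return result;
}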

Benoit



>
> Nick
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: DOMMatrix

2014-06-02 Thread Benoit Jacob
2014-06-02 14:06 GMT-04:00 Benoit Jacob :

>
>
>
> 2014-06-02 13:56 GMT-04:00 Nick Alexander :
>
> On 2014-06-02, 9:59 AM, Rik Cabanier wrote:
>>
>>>
>>>
>>>
>>> On Mon, Jun 2, 2014 at 9:05 AM, Nick Alexander >> <mailto:nalexan...@mozilla.com>> wrote:
>>>
>>> On 2014-06-02, 4:59 AM, Robert O'Callahan wrote:
>>>
>>> On Mon, Jun 2, 2014 at 3:19 PM, Rik Cabanier >> <mailto:caban...@gmail.com>> wrote:
>>>
>>> isIdentity() indeed suffers from rounding errors but since
>>> it's useful, I'm
>>> hesitant to remove it.
>>> In our rendering libraries at Adobe, we check if a matrix is
>>> *almost*
>>> identity. Maybe we can do the same here?
>>>
>>>
>>> One option would be to make "isIdentity" and "is2D" state bits
>>> in the
>>> object rather than predicates on the matrix coefficients. Then
>>> for each
>>> matrix operation, we would define how it affects the isIdentity
>>> and is2D
>>> bits. For example we could say translate(tx, ty, tz)'s result
>>> isIdentity if
>>> and only if the source matrix isIdentity and tx, ty and tz are
>>> all exactly
>>> 0.0, and the result is2D if and only if the source matrix is2D
>>> and tz is
>>> exactly 0.0.
>>>
>>> With that approach, isIdentity and is2D would be much less
>>> sensitive to
>>> precision issues. In particular they'd be independent of the
>>> precision used
>>> to compute and store store matrix elements, which would be
>>> helpful I think.
>>>
>>>
>>> I agree that most mathematical ways of determining a matrix (as a
>>> rotation, or a translation, etc) come with isIdentity for free; but
>>> are most matrices derived from some underlying transformation, or
>>> are they given as a list of coefficients?
>>>
>>>
>>> You can do it either way. Here are the constructors:
>>> http://dev.w3.org/fxtf/geometry/#dom-dommatrix-dommatrix
>>>
>>> So you can do:
>>>
>>> var m = new DOMMatrix(); // identity = true, 2d = true
>>> var m = new DOMMatrix("translate(20 20) scale(4 4) skewX"); //
>>> identity = depends, 2d = depends
>>> var m = new DOMMatrix(otherdommatrix);  // identity = inherited, 2d =
>>> inherited
>>> var m = new DOMMatrix([a b c d e f]); // identity = depends, 2d =
>>> true
>>> var m = new DOMMatrix([m11 m12... m44]); // identity = depends, 2d =
>>> depends
>>>
>>> If the latter, the isIdentity flag needs to be determined by the
>>> constructor, or fed as a parameter.  Exactly how does the
>>> constructor determine the parameter?  Exactly how does the user?
>>>
>>>
>>> The constructor would check the incoming parameters as defined:
>>>
>>> http://dev.w3.org/fxtf/geometry/#dom-dommatrixreadonly-is2d
>>> http://dev.w3.org/fxtf/geometry/#dom-dommatrixreadonly-isidentity
>>>
>>
>> Thanks for providing these references.  As an aside -- it worries me that
>> these are defined rather differently:  is2d says "are equal to 0", while
>> isIdentity says "are '0'".  Is this a syntactic or a semantic difference?
>>
>> But, to the point, the idea of "carrying around the isIdentity flag" is
>> looking bad, because we either have that A*A.inverse() will never have
>> isIdentity() == true; or we promote the idiom that to check for identity,
>> one always creates a new DOMMatrix, so that the constructor determines
>> isIdentity, and then we query it.  This is no better than just having
>> isIdentity do the (badly-rounded) check.
>>
>
> The way that propagating an "is identity" flag is better than determining
> that from the matrix coefficients, is that it's predictable. People are
> going to have matrices that are the result of various arithmetic
> operations, that are close to identity but most of the time not exactly
> identity. On these matrices, I would like isIdentity() to consistently
> return false, instead of returning false 99.99% of the time and then
> suddenly accidentally returning true when a little miracle happens and a
> matrix happens to be exactly identity.
>

...but, to not lose sight of what I really want:  I am still not convinced
that we should have a isIdentity() method at all, and by default I would
prefer no such method to exist. I was only saying the above _if_ we must
have a isIdentity method.

Benoit


>
> Benoit
>
>
>
>>
>> Nick
>>
>> ___
>> dev-platform mailing list
>> dev-platform@lists.mozilla.org
>> https://lists.mozilla.org/listinfo/dev-platform
>>
>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: DOMMatrix

2014-06-02 Thread Benoit Jacob
2014-06-02 7:59 GMT-04:00 Robert O'Callahan :

> On Mon, Jun 2, 2014 at 3:19 PM, Rik Cabanier  wrote:
>
>> isIdentity() indeed suffers from rounding errors but since it's useful,
>> I'm
>> hesitant to remove it.
>> In our rendering libraries at Adobe, we check if a matrix is *almost*
>> identity. Maybe we can do the same here?
>>
>
> One option would be to make "isIdentity" and "is2D" state bits in the
> object rather than predicates on the matrix coefficients. Then for each
> matrix operation, we would define how it affects the isIdentity and is2D
> bits. For example we could say translate(tx, ty, tz)'s result isIdentity if
> and only if the source matrix isIdentity and tx, ty and tz are all exactly
> 0.0, and the result is2D if and only if the source matrix is2D and tz is
> exactly 0.0.
>
> With that approach, isIdentity and is2D would be much less sensitive to
> precision issues. In particular they'd be independent of the precision used
> to compute and store matrix elements, which would be helpful I think.
>

+1! If we do want to keep the isIdentity and is2D methods, then indeed
this is the right way to implement them.

Benoit


>
> Rob
> --
> Jtehsauts  tshaei dS,o n" Wohfy  Mdaon  yhoaus  eanuttehrotraiitny  eovni
> le atrhtohu gthot sf oirng iyvoeu rs ihnesa.r"t sS?o  Whhei csha iids  teoa
> stiheer :p atroa lsyazye,d  'mYaonu,r  "sGients  uapr,e  tfaokreg iyvoeunr,
> 'm aotr  atnod  sgaoy ,h o'mGee.t"  uTph eann dt hwea lmka'n?  gBoutt  uIp
> waanndt  wyeonut  thoo mken.o w
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: DOMMatrix

2014-06-02 Thread Benoit Jacob
2014-06-01 23:19 GMT-04:00 Rik Cabanier :

>
>
>
> On Sun, Jun 1, 2014 at 3:11 PM, Benoit Jacob 
> wrote:
>
>>
>>
>>
>> 2014-05-31 0:40 GMT-04:00 Rik Cabanier :
>>
>>  Objection #3:
>>>>
>>>> I dislike the way that this API exposes multiplication order. It's not
>>>> obvious enough which of A.multiply(B) and A.multiplyBy(B) is doing A=A*B
>>>> and which is doing A=B*A.
>>>>
>>>
>>> The "by" methods do the transformation in-place. In this case, both are
>>> A = A * B
>>> Maybe you're thinking of preMultiply?
>>>
>>
>> Ah, I was totally confused by the method names. "Multiply" is already a
>> verb, and the method name "multiply" already implicitly means "multiply
>> *by*". So it's very confusing that there is another method named multiplyBy.
>>
>
> Yeah, we had discussion on that. 'by' is not ideal, but it is much shorter
> than 'InPlace'. Do you have a suggestion to improve the name?
>

My suggestion was the one below that part (multiply->product,
multiplyBy->multiply) but it seems that that's moot because:


>
>
>> Methods on DOMMatrixReadOnly are inconsistently named: some, like
>> "multiply", are named after the /verb/ describing what they /do/, while
>> others, like "inverse", are named after the /noun/ describing what they
>> /return/.
>>
>> Choose one and stick to it; my preference goes to the latter, i.e. rename
>> "multiply" to "product" in line with the existing "inverse" and then the
>> DOMMatrix.multiplyBy method can drop the "By" and become "multiply".
>>
>> If you do rename "multiply" to "product" that leads to the question of
>> what "preMultiply" should become.
>>
>> In an ideal world (not commenting on whether that's a thing we can get on
>> the Web), "product" would be a global function, not a class method, so you
>> could let people write product(X, Y) or product(Y, X) and not have to worry
>> about naming differently the two product orders.
>>
>
> Unfortunately, we're stuck with the API names that SVG gave to its matrix.
> The only way to fix this is to duplicate the API and support both old and
> new names which is very confusing,
>

Sounds like the naming is not even up for discussion, then? In that case,
what is up for discussion?

That's basically the core disagreement here: I'm not convinced that just
because something is in SVG implies that it should be propagated as a
"blessed" abstraction for the rest of the Web. Naming and branding matter:
something named "SVGMatrix" clearly suggests "should be used for dealing
with SVG" while something named "DOMMatrix" sounds like it's recommended
for use everywhere on the Web.

I would rather have SVG keep its own matrix class while the rest of the Web
gets something nicer.



>
>
>>  Objection #4:
>>>>
>>>> By exposing a inverse() method but no solve() method, this API will
>>>> encourage people who have to solve linear systems to do so by doing
>>>> matrix.inverse().transformPoint(...), which is inefficient and can be
>>>> numerically unstable.
>>>>
>>>> But then of course once we open the Pandora's box of exposing solvers,
>>>> the API grows a lot more. My point is not to suggest to grow the API more.
>>>> My point is to discourage you and the W3C from getting into the matrix API
>>>> design business. Matrix APIs are bound to either grow big or be useless. I
>>>> believe that the only appropriate Matrix interface at the Web API level is
>>>> a plain storage class, with minimal getters (basically a thin wrapper
>>>> around a typed array without any nontrivial arithmetic built in).
>>>>
>>>
>>> We already went over this at length about a year ago.
>>> Dirk's been asking for feedback on this interface on www-style and
>>> public-fx so can you raise your concerns there? Just keep in mind that we
>>> have to support the SVGMatrix and CSSMatrix interfaces.
>>>
>>
>> My ROI for arguing on standards mailing on matrix math topics lists has
>> been very low, presumably because these are specialist topics outside of
>> the area of expertise of these groups.
>>
>
> It is a constant struggle. We need to strike a balance between
> mathematicians and average authors. Stay with it and prepare to repeat
> yourself.

Re: Intent to implement: DOMMatrix

2014-06-01 Thread Benoit Jacob
2014-05-31 0:40 GMT-04:00 Rik Cabanier :

> Objection #3:
>>
>> I dislike the way that this API exposes multiplication order. It's not
>> obvious enough which of A.multiply(B) and A.multiplyBy(B) is doing A=A*B
>> and which is doing A=B*A.
>>
>
> The "by" methods do the transformation in-place. In this case, both are A
> = A * B
> Maybe you're thinking of preMultiply?
>

Ah, I was totally confused by the method names. "Multiply" is already a
verb, and the method name "multiply" already implicitly means "multiply
*by*". So it's very confusing that there is another method named multiplyBy.

Methods on DOMMatrixReadOnly are inconsistently named: some, like
"multiply", are named after the /verb/ describing what they /do/, while
others, like "inverse", are named after the /noun/ describing what they
/return/.

Choose one and stick to it; my preference goes to the latter, i.e. rename
"multiply" to "product" in line with the existing "inverse" and then the
DOMMatrix.multiplyBy method can drop the "By" and become "multiply".

If you do rename "multiply" to "product" that leads to the question of what
"preMultiply" should become.

In an ideal world (not commenting on whether that's a thing we can get on
the Web), "product" would be a global function, not a class method, so you
could let people write product(X, Y) or product(Y, X) and not have to worry
about naming differently the two product orders.
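
To illustrate the noun/verb convention proposed above, here are purely
illustrative C++-style declarations (not from any spec draft):

class DOMMatrix {
public:
  // Nouns: return a new matrix, leave *this untouched.
  DOMMatrix product(const DOMMatrix& other) const;
  DOMMatrix inverse() const;

  // Verb: mutates *this in place.
  DOMMatrix& multiply(const DOMMatrix& other);
};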




>
>> Objection #4:
>>
>> By exposing a inverse() method but no solve() method, this API will
>> encourage people who have to solve linear systems to do so by doing
>> matrix.inverse().transformPoint(...), which is inefficient and can be
>> numerically unstable.
>>
>> But then of course once we open the Pandora's box of exposing solvers, the
>> API grows a lot more. My point is not to suggest to grow the API more. My
>> point is to discourage you and the W3C from getting into the matrix API
>> design business. Matrix APIs are bound to either grow big or be useless. I
>> believe that the only appropriate Matrix interface at the Web API level is
>> a plain storage class, with minimal getters (basically a thin wrapper
>> around a typed array without any nontrivial arithmetic built in).
>>
>
> We already went over this at length about a year ago.
> Dirk's been asking for feedback on this interface on www-style and
> public-fx so can you raise your concerns there? Just keep in mind that we
> have to support the SVGMatrix and CSSMatrix interfaces.
>

My ROI for arguing on standards mailing on matrix math topics lists has
been very low, presumably because these are specialist topics outside of
the area of expertise of these groups.

Here are a couple more objections by the way:

Objection #5:

The isIdentity() method has the same issue as was described about is2D()
above: as matrices get computed, they are going to jump unpredicably
between being exactly identity and not. People using isIdentity() to jump
between code paths are going to get unexpected jumps between code paths
i.e. typically performance cliffs, or worse if they start asserting that a
matrix should or should not be exactly identity. For that reason, I would
remove the isIdentity method.

Objection #6:

The determinant() method, being in this API the only easy way to get
something that looks roughly like a measure of invertibility, will probably
be (mis-)used as a measure of invertibility. So I'm quite confident that it
has a strong mis-use case. Does it have a strong good use case? Does it
outweigh that? Note that if the intent is precisely to offer some kind of
measure of invertibility, then that is yet another thing that would be best
done with a singular values decomposition (along with solving, and with
computing a polar decomposition, useful for interpolating matrices), by
returning the ratio between the lowest and the highest singular value.

Either that, or explain how tricky it is to correctly use the determinant
in a measure of invertibility, and integrate a code example about that.
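
For illustration, a hedged sketch of what such a code example could look like
for a 2x2 matrix: comparing |det| against the product of the row norms
(Hadamard's inequality bounds that ratio by 1) gives a scale-invariant test,
unlike a bare "det < epsilon" check. This is sketch code, not part of any
proposed API:

#include <cmath>

// Scaling the whole matrix [a b; c d] by a factor s scales both |det| and
// the row-norm product by s^2, so the ratio below is scale-invariant.
bool IsNearSingular(double a, double b, double c, double d,
                    double tol = 1e-12) {
  double det = a * d - b * c;
  double rowNormProduct = std::sqrt((a * a + b * b) * (c * c + d * d));
  if (rowNormProduct == 0.0) {
    return true;  // the zero matrix is singular
  }
  return std::abs(det) <= tol * rowNormProduct;
}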

Benoit



>
>
>
>> 2014-05-30 20:02 GMT-04:00 Rik Cabanier :
>>
>>> Primary eng emails
>>> caban...@adobe.com, dschu...@adobe.com
>>>
>>> *Proposal*
>>> *http://dev.w3.org/fxtf/geometry/#DOMMatrix
>>> *
>>>
>>> *Summary*
>>> Expose new global objects named 'DOMMatrix' and 'DOMMatrixReadOnly' that
>>> offer a matrix abstraction.
>>>
>>> *Motivation*
>>>
>>> The DOMMatrix and DOMMatrixReadOnly interfaces represent a mathematical
>>> matrix with the purpose of describing transformations in a graphical
>>> context. The following sections describe the details of the interface.
>>> The DOMMatrix and DOMMatrixReadOnly interfaces replace the SVGMatrix
>>> interface from SVG.
>>>
>>> In addition, DOMMatrix will be part of CSSOM where it will simplify
>>> getting
>>> and setting CSS transforms.
>>>
>>> *Mozilla bug*
>>>
>>> https://bugzilla.mozilla.org/show_bug.cgi?id=1018497
>>> I will implement this behind the flag: layout.css.DOMMatrix

Re: Intent to implement: DOMMatrix

2014-05-30 Thread Benoit Jacob
I never seem to be able to discourage people from dragging the W3C into
specialist topics that are outside its area of expertise. Let me try again.

Objection #1:

The skew* methods are out of place there, because, contrary to the rest,
they are not geometric transformations, they are just arithmetic on matrix
coefficients whose geometric impact depends entirely on the choice of a
coordinate system. I'm afraid that leaving them there will propagate
widespread confusion about "skews" --- see e.g. the authors of
http://dev.w3.org/csswg/css-transforms/#matrix-interpolation who seemed to
think that decomposing a matrix into a product of things including a skew
would have geometric significance, leading to clearly unwanted behavior as
demonstrated in
http://people.mozilla.org/~bjacob/transform-animation-not-covariant.html

Objection #2:

This DOMMatrix interface tries to be simultaneously about 4x4 matrices
representing projective 3D transformations and about 2x3 matrices
representing affine 2D transformations; this mode switch corresponds to the
is2D() getter. I have a long list of objections to this mode switch:
 - I believe that, being based on exact floating point comparisons, it is
going to be fragile. For example, people will assert that !is2D() when they
expect a 3D transformation, and that will intermittently fail when for
whatever reason their 3D matrix is going to be exactly 2D.
 - I believe that these two classes of transformations (projective 3D and
affine 2D) should be separate classes entirely, that that will make the API
simpler and more efficiently implementable and that forcing authors to
think about that choice more explicitly is doing them a favor.
 - I believe that that feature set, with this choice of two classes of
transformations (projective 3D and affine 2D), is arbitrary and
inconsistent. Why not support affine 3D or projective 2D, for instance?

Objection #3:

I dislike the way that this API exposes multiplication order. It's not
obvious enough which of A.multiply(B) and A.multiplyBy(B) is doing A=A*B
and which is doing A=B*A.

Objection #4:

By exposing a inverse() method but no solve() method, this API will
encourage people who have to solve linear systems to do so by doing
matrix.inverse().transformPoint(...), which is inefficient and can be
numerically unstable.

But then of course once we open the Pandora's box of exposing solvers, the
API grows a lot more. My point is not to suggest to grow the API more. My
point is to discourage you and the W3C from getting into the matrix API
design business. Matrix APIs are bound to either grow big or be useless. I
believe that the only appropriate Matrix interface at the Web API level is
a plain storage class, with minimal getters (basically a thin wrapper
around a typed array without any nontrivial arithmetic built in).

Benoit





2014-05-30 20:02 GMT-04:00 Rik Cabanier :

> Primary eng emails
> caban...@adobe.com, dschu...@adobe.com
>
> *Proposal*
> *http://dev.w3.org/fxtf/geometry/#DOMMatrix
> *
>
> *Summary*
> Expose new global objects named 'DOMMatrix' and 'DOMMatrixReadOnly' that
> offer a matrix abstraction.
>
> *Motivation*
> The DOMMatrix and DOMMatrixReadOnly interfaces represent a mathematical
> matrix with the purpose of describing transformations in a graphical
> context. The following sections describe the details of the interface.
> The DOMMatrix and DOMMatrixReadOnly interfaces replace the SVGMatrix
> interface from SVG.
>
> In addition, DOMMatrix will be part of CSSOM where it will simplify getting
> and setting CSS transforms.
>
> *Mozilla bug*
> https://bugzilla.mozilla.org/show_bug.cgi?id=1018497
> I will implement this behind the flag: layout.css.DOMMatrix
>
> *Concerns*
> None.
> Mozilla already implemented DOMPoint and DOMQuad
>
> *Compatibility Risk*
> Blink: unknown
> WebKit: in development [1]
> Internet Explorer: No public signals
> Web developers: unknown
>
> 1: https://bugs.webkit.org/show_bug.cgi?id=110001
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: Refcounted classes should have a non-public destructor & should be MOZ_FINAL where possible

2014-05-28 Thread Benoit Jacob
Actually that test program contradicts what I said --- my
IsDestructorPrivateOrDeleted produces exactly the same result as
!is_destructible,  and is_destructible does return 0 for the class with
private destructor. So you could just use that!

Benoit


2014-05-28 16:51 GMT-04:00 Benoit Jacob :

> Awesome work!
>
> By the way, I just figured a way that you could static_assert so that at
> least on supporting C++11 compilers, we would automatically catch this.
>
> The basic C++11 tool here is std::is_destructible from <type_traits>, but
> it has a problem: it only returns false if the destructor is deleted, it
> doesn't return false if the destructor is private. However, the example
> below shows how we can still achieve what we want by wrapping the
> class that we are interested in as a member of a helper templated struct:
>
>
>
> #include <iostream>
> #include <type_traits>
>
> class ClassWithDeletedDtor {
>   ~ClassWithDeletedDtor() = delete;
> };
>
> class ClassWithPrivateDtor {
>   ~ClassWithPrivateDtor() {}
> };
>
> class ClassWithPublicDtor {
> public:
>   ~ClassWithPublicDtor() {}
> };
>
> template <typename T>
> class IsDestructorPrivateOrDeletedHelper {
>   T x;
> };
>
> template <typename T>
> struct IsDestructorPrivateOrDeleted
> {
>   static const bool value =
>     !std::is_destructible<IsDestructorPrivateOrDeletedHelper<T>>::value;
> };
>
> int main() {
> #define PRINT(x) std::cerr << #x " = " << (x) << std::endl;
>
>   PRINT(std::is_destructible<ClassWithDeletedDtor>::value);
>   PRINT(std::is_destructible<ClassWithPrivateDtor>::value);
>   PRINT(std::is_destructible<ClassWithPublicDtor>::value);
>
>   std::cerr << std::endl;
>
>   PRINT(IsDestructorPrivateOrDeleted<ClassWithDeletedDtor>::value);
>   PRINT(IsDestructorPrivateOrDeleted<ClassWithPrivateDtor>::value);
>   PRINT(IsDestructorPrivateOrDeleted<ClassWithPublicDtor>::value);
> }
>
>
> Output:
>
>
> std::is_destructible<ClassWithDeletedDtor>::value = 0
> std::is_destructible<ClassWithPrivateDtor>::value = 0
> std::is_destructible<ClassWithPublicDtor>::value = 1
>
> IsDestructorPrivateOrDeleted<ClassWithDeletedDtor>::value = 1
> IsDestructorPrivateOrDeleted<ClassWithPrivateDtor>::value = 1
> IsDestructorPrivateOrDeleted<ClassWithPublicDtor>::value = 0
>
>
> If you also want to require classes to be final, <type_traits> also
> has std::is_final for that (strictly speaking, std::is_final only arrived in C++14).
>
> Cheers,
> Benoit
>
>
> 2014-05-28 16:24 GMT-04:00 Daniel Holbert :
>
> Hi dev-platform,
>>
>> PSA: if you are adding a concrete class with AddRef/Release
>> implementations (e.g. via NS_INLINE_DECL_REFCOUNTING), please be aware
>> of the following best-practices:
>>
>>  (a) Your class should have an explicitly-declared non-public
>> destructor. (should be 'private' or 'protected')
>>
>>  (b) Your class should be labeled as MOZ_FINAL (or, see below).
>>
>>
>> WHY THIS IS A GOOD IDEA
>> ===
>> We'd like to ensure that refcounted objects are *only* deleted via their
>> ::Release() methods.  Otherwise, we're potentially susceptible to
>> double-free bugs.
>>
>> We can go a long way towards enforcing this rule at compile-time by
>> giving these classes non-public destructors.  This prevents a whole
>> category of double-free bugs.
>>
>> In particular: if your class has a public destructor (the default), then
>> it's easy for you or someone else to accidentally declare an instance on
>> the stack or as a member-variable in another class, like so:
>> MyClass foo;
>> This is *extremely* dangerous. If any code wraps 'foo' in a nsRefPtr
>> (say, if some function that we pass 'foo' or '&foo' into declares a
>> nsRefPtr to it for some reason), then we'll get a double-free. The
>> object will be freed when the nsRefPtr goes out of scope, and then again
>> when the MyClass instance goes out of scope. But if we give MyClass a
>> non-public destructor, then it'll make it a compile error (in most code)
>> to declare a MyClass instance on the stack or as a member-variable.  So
>> we'd catch this bug immediately, at compile-time.
>>
>> So, that explains why a non-public destructor is a good idea. But why
>> MOZ_FINAL?  If your class isn't MOZ_FINAL, then that opens up another
>> route to trigger the same sort of bug -- someone can come along and add
>> a subclass, perhaps not realizing that they're subclassing a refcounted
>> class, and the subclass will (by default) have a public destructor,
>> which means then that anyone can declare
>>   MySubclass foo;
>> and run into the exact same problem with the subclass.  A MOZ_FINAL
>> annotation will prevent that by keeping people from naively adding
>> subclasses.
>>
>> BUT WHAT IF I NEED SUBCLASSES
>> =
>> First, if your class is abstract, then it shouldn't have AddRef/Release
>> implementations to begin with.

Re: PSA: Refcounted classes should have a non-public destructor & should be MOZ_FINAL where possible

2014-05-28 Thread Benoit Jacob
Awesome work!

By the way, I just figured a way that you could static_assert so that at
least on supporting C++11 compilers, we would automatically catch this.

The basic C++11 tool here is std::is_destructible from <type_traits>, but
it has a problem: it only returns false if the destructor is deleted, it
doesn't return false if the destructor is private. However, the example
below shows how we can still achieve what we want by wrapping the
class that we are interested in as a member of a helper templated struct:



#include <iostream>
#include <type_traits>

class ClassWithDeletedDtor {
  ~ClassWithDeletedDtor() = delete;
};

class ClassWithPrivateDtor {
  ~ClassWithPrivateDtor() {}
};

class ClassWithPublicDtor {
public:
  ~ClassWithPublicDtor() {}
};

template <typename T>
class IsDestructorPrivateOrDeletedHelper {
  T x;
};

template <typename T>
struct IsDestructorPrivateOrDeleted
{
  static const bool value =
    !std::is_destructible<IsDestructorPrivateOrDeletedHelper<T>>::value;
};

int main() {
#define PRINT(x) std::cerr << #x " = " << (x) << std::endl;

  PRINT(std::is_destructible<ClassWithDeletedDtor>::value);
  PRINT(std::is_destructible<ClassWithPrivateDtor>::value);
  PRINT(std::is_destructible<ClassWithPublicDtor>::value);

  std::cerr << std::endl;

  PRINT(IsDestructorPrivateOrDeleted<ClassWithDeletedDtor>::value);
  PRINT(IsDestructorPrivateOrDeleted<ClassWithPrivateDtor>::value);
  PRINT(IsDestructorPrivateOrDeleted<ClassWithPublicDtor>::value);
}


Output:


std::is_destructible<ClassWithDeletedDtor>::value = 0
std::is_destructible<ClassWithPrivateDtor>::value = 0
std::is_destructible<ClassWithPublicDtor>::value = 1

IsDestructorPrivateOrDeleted<ClassWithDeletedDtor>::value = 1
IsDestructorPrivateOrDeleted<ClassWithPrivateDtor>::value = 1
IsDestructorPrivateOrDeleted<ClassWithPublicDtor>::value = 0


If you also want to require classes to be final, <type_traits> also
has std::is_final for that (strictly speaking, std::is_final only arrived in C++14).
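
A hedged sketch of how such a trait could be put to work (the macro and class
names here are hypothetical):

// Hypothetical usage: fail the build if a refcounted class leaves its
// destructor public.
#define ASSERT_NONPUBLIC_DTOR(T) \
  static_assert(IsDestructorPrivateOrDeleted<T>::value, \
                #T " must have a private or deleted destructor")

// e.g. next to the class definition or inside a refcounting macro:
// ASSERT_NONPUBLIC_DTOR(MyRefCountedClass);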

Cheers,
Benoit


2014-05-28 16:24 GMT-04:00 Daniel Holbert :

> Hi dev-platform,
>
> PSA: if you are adding a concrete class with AddRef/Release
> implementations (e.g. via NS_INLINE_DECL_REFCOUNTING), please be aware
> of the following best-practices:
>
>  (a) Your class should have an explicitly-declared non-public
> destructor. (should be 'private' or 'protected')
>
>  (b) Your class should be labeled as MOZ_FINAL (or, see below).
>
>
> WHY THIS IS A GOOD IDEA
> ===
> We'd like to ensure that refcounted objects are *only* deleted via their
> ::Release() methods.  Otherwise, we're potentially susceptible to
> double-free bugs.
>
> We can go a long way towards enforcing this rule at compile-time by
> giving these classes non-public destructors.  This prevents a whole
> category of double-free bugs.
>
> In particular: if your class has a public destructor (the default), then
> it's easy for you or someone else to accidentally declare an instance on
> the stack or as a member-variable in another class, like so:
> MyClass foo;
> This is *extremely* dangerous. If any code wraps 'foo' in a nsRefPtr
> (say, if some function that we pass 'foo' or '&foo' into declares a
> nsRefPtr to it for some reason), then we'll get a double-free. The
> object will be freed when the nsRefPtr goes out of scope, and then again
> when the MyClass instance goes out of scope. But if we give MyClass a
> non-public destructor, then it'll make it a compile error (in most code)
> to declare a MyClass instance on the stack or as a member-variable.  So
> we'd catch this bug immediately, at compile-time.
>
> So, that explains why a non-public destructor is a good idea. But why
> MOZ_FINAL?  If your class isn't MOZ_FINAL, then that opens up another
> route to trigger the same sort of bug -- someone can come along and add
> a subclass, perhaps not realizing that they're subclassing a refcounted
> class, and the subclass will (by default) have a public destructor,
> which means then that anyone can declare
>   MySubclass foo;
> and run into the exact same problem with the subclass.  A MOZ_FINAL
> annotation will prevent that by keeping people from naively adding
> subclasses.
>
> BUT WHAT IF I NEED SUBCLASSES
> =
> First, if your class is abstract, then it shouldn't have AddRef/Release
> implementations to begin with.  Those belong on the concrete subclasses
> -- not on your abstract base class.
>
> But if your class is concrete and refcounted and needs to have
> subclasses, then:
>  - Your base class *and each of its subclasses* should have virtual,
> protected destructors, to prevent the "MySubclass foo;" problem
> mentioned above.
>  - Your subclasses themselves should also probably be declared as
> "MOZ_FINAL", to keep someone from naively adding another subclass
> without recognizing the above.
>  - Your subclasses should definitely *not* declare their own
> AddRef/Release methods. (They should share the base class's methods &
> refcount.)
>
> For more information, see
> https://bugzilla.mozilla.org/show_bug.cgi?id=984786 , where I've fixed
> this sort of thing in a bunch of existing classes.  I definitely didn't
> catch everything there, so please feel encouraged to continue this work
> in other bugs. (And if you catch any cases that look like potential
> double-frees, mark them as security-sensitive.)
>
> Thanks!
> ~Daniel
> _
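
The double-free pattern described in this PSA, as a minimal hedged sketch
(MyClass and the surrounding code are hypothetical illustrations, not code
from the tree):

class MyClass {
public:
  NS_INLINE_DECL_REFCOUNTING(MyClass)
  ~MyClass() {}  // public destructor: this is the bug
};

void Danger() {
  MyClass foo;                // stack instance of a refcounted class
  nsRefPtr<MyClass> p(&foo);  // refcount goes to 1
}  // 'p' is destroyed first and Release() deletes &foo; then 'foo' itself
   // goes out of scope and its destructor runs again: a double free.
   // With a non-public destructor, 'MyClass foo;' fails to compile.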

Re: Do we still need Trace Malloc?

2014-05-20 Thread Benoit Jacob
2014-05-19 23:19 GMT-04:00 L. David Baron :

> On Monday 2014-05-19 20:09 -0700, Nicholas Nethercote wrote:
> > On Mon, May 19, 2014 at 5:32 PM, L. David Baron 
> wrote:
> > > Another is being able to find the root strongly connected components
> > > of the memory graph, which is useful for finding leaks in other
> > > systems (e.g., leaks of trees of GTK widget objects) that aren't
> > > hooked up to cycle collection.  It's occasionally even a faster way
> > > of debugging non-CC but nsTraceRefcnt-logged reference counted
> > > objects.
> >
> > How does trace-malloc do that? It sounds like it would need to know
> > about object and struct layout.
>
> Roughly the same way a conservative collector would -- assuming any
> word-aligned memory in one object in the heap that contains
> something that's the address of something else in the heap
> (including in the interior of the allocation) is a pointer to that
> object in the heap.
>
> (It's actually done in the leaksoup tool outside of trace-malloc.)
>

For that, I believe that the right approach at this point would be to use
DMD's memory/replace tool (maybe evolving it to suit your needs),

http://hg.mozilla.org/mozilla-central/file/cb9f34f73ebe/memory/replace/dmd

You could also write your own memory/replace tool sitting next to that one,
but it seems that every such tool needs to do roughly the same things, i.e.
store metadata around heap blocks and allow iterating over them, so they
might as well be the same tool.

Notice that I needed to do the same things in refgraph (
https://github.com/bjacob/mozilla-central/wiki/Refgraph, a tool to
investigate the graph of strong references between heap blocks) and at that
time wrote my own memory/replace tool (memory/replace/refgraph in that
fork). But it's been the most expensive part of that fork, in terms of
maintenability and portability. If I had time to continue work on this, the
first thing I'd do would be to drop that custom memory/replace tool and
instead just use DMD's.

Benoit


>
> -David
>
> --
> 𝄞   L. David Baron http://dbaron.org/   𝄂
> 𝄢   Mozilla  https://www.mozilla.org/   𝄂
>  Before I built a wall I'd ask to know
>  What I was walling in or walling out,
>  And to whom I was like to give offense.
>- Robert Frost, Mending Wall (1914)
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement and ship: navigator.hardwareConcurrency

2014-05-20 Thread Benoit Jacob
2014-05-19 23:37 GMT-04:00 Rik Cabanier :

>
>
>
> On Mon, May 19, 2014 at 6:46 PM, Benoit Jacob wrote:
>
>> +1000! Thanks for articulating so clearly the difference between the
>> Web-as-an-application-platform and other application platforms.
>>
>
> It really surprises me that you would make this objection.
>

I didn't think of this as specifically an objection to the
hardwareConcurrency proposal, but rather as a criticism of the argument
that "the Web should have this feature because other application platforms
have this feature".


> WebGL certainly would *not* fall into this
> "Web-as-an-application-platform" category since it exposes machine
> information [1] and is generally insecure [2] according to Apple and (in
> the past) Microsoft.
>
> Please note that I really like WebGL and not worried about these issues.
> Just pointing out your double standard.
>
> 1: http://renderingpipeline.com/webgl-extension-viewer/
> 2: http://lists.w3.org/Archives/Public/public-fx/2012JanMar/0136.html
>

I'm not going to reply here to all of the things stated or implied in your
above sentence, as that would be off-topic in this thread.

The problem in the present conversation, that Jonas objected to, is that
the present proposal is being pushed with, among other arguments, the one
that "native platforms already have this API, so the Web should too".

I don't remember WebGL being pushed with this argument.

WebGL had a much more basic and solid argument for itself: the Web needed
_some_ way to do serious, general-purpose, realtime graphics. Among the
existing prior art, OpenGL ES 2 appeared as a reasonable starting point.
Web-ification ensued, and involved dropping or adapting parts of the API
that weren't good fits for the Web. As a result, WebGL exposes some machine
information when that's deemed the right trade-off, but the bar for that is
much higher than in OpenGL. Privacy is not the only issue at hand. More
basic issues include "is this the right API in the first place?", and "is
this the right balance between features and portability/sustainability" ?

Likewise here. I don't think anyone is saying that "hardwareConcurrency" is
failing on the grounds of exposing too much system information alone. The
way I read this thread, people either aren't convinced that it's the right
compromise given its usefulness, or that it's the right API for the task at
hand in the first place.

Benoit


>
>
>> 2014-05-19 21:35 GMT-04:00 Jonas Sicking :
>>
>>>  On Mon, May 19, 2014 at 4:10 PM, Rik Cabanier 
>>> wrote:
>>> > I don't see why the web platform is special here and we should trust
>>> that
>>> > authors can do the right thing.
>>>
>>> I'm fairly sure people have already pointed this out to you. But the
>>> reason the web platform is different is that because we allow
>>> arbitrary application logic to run on the user's device without any
>>> user opt-in.
>>>
>>> I.e. the web is designed such that it is safe for a user to go to any
>>> website without having to consider the risks of doing so.
>>>
>>> This is why we for example don't allow websites to have arbitrary
>>> read/write access to the user's filesystem. Something that all the
>>> other platforms that you have pointed out do.
>>>
>>> Those platforms instead rely on that users make a security decision
>>> before allowing any code to run. This has both advantages (easier to
>>> design APIs for those platforms) and disadvantages (malware is pretty
>>> prevalent on for example Windows).
>>>
>>> / Jonas
>>> ___
>>> dev-platform mailing list
>>> dev-platform@lists.mozilla.org
>>> https://lists.mozilla.org/listinfo/dev-platform
>>>
>>
>>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement and ship: navigator.hardwareConcurrency

2014-05-19 Thread Benoit Jacob
+1000! Thanks for articulating so clearly the difference between the
Web-as-an-application-platform and other application platforms.

Benoit




2014-05-19 21:35 GMT-04:00 Jonas Sicking :

> On Mon, May 19, 2014 at 4:10 PM, Rik Cabanier  wrote:
> > I don't see why the web platform is special here and we should trust that
> > authors can do the right thing.
>
> I'm fairly sure people have already pointed this out to you. But the
> reason the web platform is different is that we allow
> arbitrary application logic to run on the user's device without any
> user opt-in.
>
> I.e. the web is designed such that it is safe for a user to go to any
> website without having to consider the risks of doing so.
>
> This is why we for example don't allow websites to have arbitrary
> read/write access to the user's filesystem. Something that all the
> other platforms that you have pointed out do.
>
> Those platforms instead rely on that users make a security decision
> before allowing any code to run. This has both advantages (easier to
> design APIs for those platforms) and disadvantages (malware is pretty
> prevalent on for example Windows).
>
> / Jonas
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement and ship: navigator.hardwareConcurrency

2014-05-13 Thread Benoit Jacob
Also note that even some popular desktop APIs that in practice expose the
"hardware" thread count choose not to call it that. For example, Qt
calls it the "ideal" thread count.
  http://qt-project.org/doc/qt-4.8/qthread.html#idealThreadCount

IMO this suggests that we're not the only ones feeling uncomfortable about
committing to "hardware thread count" as being forever a well-defined and
useful thing to expose to applications.
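
The C++11 standard library hedges in the same way:
std::thread::hardware_concurrency() is specified as only a hint and may
return 0 when the value is not computable, so portable callers have to handle
that case themselves. A small sketch:

#include <algorithm>
#include <thread>

unsigned IdealThreadCount() {
  // hardware_concurrency() may return 0 ("unknown"); treat that as 1.
  return std::max(1u, std::thread::hardware_concurrency());
}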

Benoit


2014-05-13 13:58 GMT-04:00 Joshua Cranmer 🐧 :

> On 5/13/2014 12:35 PM, Eli Grey wrote:
>
>> Can you back that up with a real-world example desktop application
>> that behaves as such?
>>
>
> The OpenMP framework?
>
>
> --
> Joshua Cranmer
> Thunderbird and DXR developer
> Source code archæologist
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: nsRefPtr vs RefPtr

2014-05-13 Thread Benoit Jacob
Oh and regarding 3rd-party code, that is not a different problem from what
we have already discussed above:

The only hard constraint here is that some 3rd-party code does not want to
depend on XPCOM. That's fine; as discussed above, we could easily move out
of XPCOM (presumably into MFBT) the parts of nsRefPtr that they need. They
don't need the CC parts, since they're not currently using it. The CC parts
are the only parts that have a real hard dependency on XPCOM. The CC parts
(ImplCycleCollection{Unlink|Traverse}) are very easy to isolate from the
rest, as they are just overloads of global functions.

Benoit


2014-05-13 8:04 GMT-04:00 Benoit Jacob :

> If the semantics difference between TemporaryRef and already_AddRefed are
> the main factor blocking us here, could we at least make progress by
> temporarily having the two coexist, i.e. a transition plan roughly as
> follows:
>
>  1) Introduce a nsTemporaryRef behaving like TemporaryRef but cooperating
> with nsRefPtr.
>  2) Mechanically port from RefPtr+TemporaryRef to nsRefPtr+nsTemporaryRef
>  3) Remove RefPtr and TemporaryRef
>  4) Be left with some nsTemporaryRef usage; maybe manually go over it and
> switch to already_AddRefed, or maybe live with it.
>
> Benoit
>
>
> 2014-05-12 19:48 GMT-04:00 Mike Hommey :
>
> On Mon, May 12, 2014 at 04:46:18PM -0700, Kyle Huey wrote:
>> > On Mon, May 12, 2014 at 2:46 PM, Mike Hommey  wrote:
>> > > On Mon, May 12, 2014 at 09:36:22AM -0700, Kyle Huey wrote:
>> > >> We should get rid of RefPtr, just like we did the MFBT refcounting
>> classes.
>> > >>
>> > >> The main thing stopping a mechanical search and replace is that the
>> > >> two smart pointers have different semantics around
>> > >> already_AddRefed/TemporaryRef :(
>> > >
>> > > Another part of the problem is third party code that uses RefPtr.
>> > >
>> > > Mike
>> >
>> > Are you referring to moz2d or something else?
>>
>> I'm referring to the webkit/chromium-imported code that needs it as a
>> replacement for wtf::RefPtr (aiui).
>>
>> Mike
>>
>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: nsRefPtr vs RefPtr

2014-05-13 Thread Benoit Jacob
If the semantics difference between TemporaryRef and already_AddRefed are
the main factor blocking us here, could we at least make progress by
temporarily having the two coexist, i.e. a transition plan roughly as
follows:

 1) Introduce a nsTemporaryRef behaving like TemporaryRef but cooperating
with nsRefPtr.
 2) Mechanically port from RefPtr+TemporaryRef to nsRefPtr+nsTemporaryRef
 3) Remove RefPtr and TemporaryRef
 4) Be left with some nsTemporaryRef usage; maybe manually go over it and
switch to already_AddRefed, or maybe live with it.

Benoit


2014-05-12 19:48 GMT-04:00 Mike Hommey :

> On Mon, May 12, 2014 at 04:46:18PM -0700, Kyle Huey wrote:
> > On Mon, May 12, 2014 at 2:46 PM, Mike Hommey  wrote:
> > > On Mon, May 12, 2014 at 09:36:22AM -0700, Kyle Huey wrote:
> > >> We should get rid of RefPtr, just like we did the MFBT refcounting
> classes.
> > >>
> > >> The main thing stopping a mechanical search and replace is that the
> > >> two smart pointers have different semantics around
> > >> already_AddRefed/TemporaryRef :(
> > >
> > > Another part of the problem is third party code that uses RefPtr.
> > >
> > > Mike
> >
> > Are you referring to moz2d or something else?
>
> I'm referring to the webkit/chromium-imported code that needs it as a
> replacement for wtf::RefPtr (aiui).
>
> Mike
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: nsRefPtr vs RefPtr

2014-05-12 Thread Benoit Jacob
2014-05-11 23:40 GMT-04:00 Boris Zbarsky :

> On 5/11/14, 7:50 PM, Chris Pearce wrote:
>
>> Should we be preferring mozilla::RefPtr in new code?
>>
>> Should we be replacing nsRefPtr with mozilla::RefPtr?
>>
>
> I would err on "no" for both, given https://bugzilla.mozilla.org/
> show_bug.cgi?id=820257
>
>
And to quote the description of this bug:

Since bug 806279  it's
> fairly trivial to extend CC support to new pointer and container types.
> Just implement ImplCycleCollectionUnlink and ImplCycleCollectionTraverse.
> The possibly bigger difficulty here is not so much with RefPtr but with
> RefCounted, as it provides its own AddRef and Release, and for cycle
> collection we need custom AddRef and Release.
>

Now that we have deprecated RefCounted in favor of nsISupportsImpl.h's
refcounting-implementing macros, that bigger difficulty is going away.

We could easily: either add CC support for MFBT RefPtr, or, on the
contrary, remove MFBT RefPtr in favor of nsRefPtr, if needed by isolating
the CC bits of nsRefPtr (the overloads of ImplCycleCollectionUnlink and
ImplCycleCollectionTraverse for it) to make it independent of the rest of
XPCOM.

It's only a matter of knowing which end result we would prefer :)
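
For concreteness, the two hooks mentioned above are plain free-function
overloads; a hedged sketch of their rough shape (signatures written from
memory, so they may not match the tree verbatim):

template <typename T>
void ImplCycleCollectionUnlink(RefPtr<T>& aField)
{
  aField = nullptr;  // drop the strong reference
}

template <typename T>
void ImplCycleCollectionTraverse(nsCycleCollectionTraversalCallback& aCallback,
                                 RefPtr<T>& aField,
                                 const char* aName,
                                 uint32_t aFlags = 0)
{
  CycleCollectionNoteChild(aCallback, aField.get(), aName, aFlags);
}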

Benoit
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: nsTArray lengths and indices are now size_t (were uint32_t)

2014-05-11 Thread Benoit Jacob
Yes, absolutely, it is worth catching this at compile-time.

But what you describe above wouldn't catch that at compile-time, because
the compiler would use this operator to get an implicit conversion to
size_t followed by implicit conversion from size_t to uint32_t.

You could use MFBT's new MOZ_EXPLICIT_CONVERSION to annotate this
conversion operator with 'explicit' on supporting C++11 compilers. But
instead, I would not bother trying to make this a conversion operator, I
would just make this a plain old getter method.

I would also not call that Size, but IndexOfResult. The problem is that
what IndexOf() returns is not necessarily a "size" or "index", it may also
be this special thing called "NoIndex". So I don't think that its return
type should "feel like" it _is_ just an index. Instead, its return type
should be something that is _either_ an index or NoIndex.

Something like this:

class IndexOfResult {

  const index_type mIndex;  // index_type is nsTArray's index typedef, i.e. size_t
  static const index_type NoIndex = index_type(-1);

public:

  IndexOfResult(index_type index) : mIndex(index) {}
  IndexOfResult(const IndexOfResult& other) : mIndex(other.mIndex) {}

  // Whether IndexOf() found the element at all.
  bool Found() const {
    return mIndex != NoIndex;
  }

  // The index itself; only meaningful when Found() returns true.
  index_type Value() const {
    MOZ_ASSERT(Found());
    return mIndex;
  }
};
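
Usage would then look like this (a sketch; DoSomethingWith is a placeholder):

  IndexOfResult result = array.IndexOf(thing);
  if (result.Found()) {
    DoSomethingWith(result.Value());
  }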

Benoit



2014-05-11 10:25 GMT-04:00 Ehsan Akhgari :

> On 2014-05-11, 10:15 AM, Benoit Jacob wrote:
>
>> Hi,
>>
>> Since Bug 1004098 landed, the type of nsTArray lengths and indices is now
>> size_t.
>>
>> Code using nsTArrays is encouraged to use size_t for indexing them; in
>> most
>> cases, this does not really matter; however there is one case where this
>> does matter, which is when user code stores the result of
>> nsTArray::IndexOf().
>>
>> Indeed, nsTArray::NoIndex used to be uint32_t(-1), which has the value
>> 2^32
>> - 1.  Now, nsTArray::NoIndex is size_t(-1) which, on x86-64, has the value
>> 2^64 - 1.
>>
>> This means that code like this is no longer correct:
>>
>>uint32_t index = array.IndexOf(thing);
>>
>> Such code should be changed to:
>>
>>size_t index = array.IndexOf(thing);
>>
>> Or, better still (slightly pedantic but would have been correct all
>> along):
>>
>>ArrayType::index_type index = array.IndexOf(thing);
>>
> >
>
>> Where ArrayType is the type of that 'array' variable (one could use
>> decltype(array) too).
>>
>
> Do you think it's worth trying to make the bad code above not compile, by
> returning an object from IndexOf which only provides an implicit conversion
> to size_t and not uint32_t?  Like:
>
> template <typename E>
> class nsTArray {
>   // ...
>   struct Size {
>     Size(size_t);
>     operator size_t() const;
>   };
>   Size IndexOf(...);
> };
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


PSA: nsTArray lengths and indices are now size_t (were uint32_t)

2014-05-11 Thread Benoit Jacob
Hi,

Since Bug 1004098 landed, the type of nsTArray lengths and indices is now
size_t.

Code using nsTArrays is encouraged to use size_t for indexing them; in most
cases, this does not really matter; however there is one case where this
does matter, which is when user code stores the result of
nsTArray::IndexOf().

Indeed, nsTArray::NoIndex used to be uint32_t(-1), which has the value 2^32
- 1.  Now, nsTArray::NoIndex is size_t(-1) which, on x86-64, has the value
2^64 - 1.

This means that code like this is no longer correct:

  uint32_t index = array.IndexOf(thing);

Such code should be changed to:

  size_t index = array.IndexOf(thing);

Or, better still (slightly pedantic but would have been correct all along):

  ArrayType::index_type index = array.IndexOf(thing);

Where ArrayType is the type of that 'array' variable (one could use
decltype(array) too).
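
To spell out the failure mode (a sketch; note that the comparison below
silently promotes the truncated uint32_t back to size_t):

  nsTArray<int> array;

  uint32_t badIndex = array.IndexOf(42);  // truncates size_t(-1) to 2^32 - 1
  if (badIndex != nsTArray<int>::NoIndex) {
    // Always taken on x86-64, even when nothing was found: the truncated
    // value 2^32 - 1 never compares equal to size_t(-1).
  }

  size_t goodIndex = array.IndexOf(42);
  if (goodIndex != nsTArray<int>::NoIndex) {
    // Correct: only taken when 42 was actually found.
  }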

Thanks,
Benoit
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Time to revive the "require SSE2" discussion

2014-05-09 Thread Benoit Jacob
2014-05-09 13:24 GMT-04:00 Rik Cabanier :

>
>
>
> On Fri, May 9, 2014 at 10:14 AM, Benoit Jacob wrote:
>
>> Totally agree that 1% is probably still too much to drop, but the 4x drop
>> over the past two years makes me hopeful that we'll be able to drop
>> non-SSE2, eventually.
>>
>> SSE2 is not just about SIMD. The most important thing it buys us IMHO is
>> to
>> be able to not use x87 instructions anymore and instead use SSE2 (scalar)
>> instructions. That removes entire classes of bugs caused by x87 being
>> non-IEEE754-compliant with its crazy 80-bit registers.
>>
>
> Out of interest, do you have links to bugs for this issue?
>

No: there are the bugs we probably have but don't know about; and there are
the things that we caught in time, which caused us to give up on approaches
before they would become patches... I don't have an example of a bug we
found after the fact.


>
> Also, can't you ask the compiler to produce both sse and non-sse code and
> make a decision at runtime?
>

Not that I know of. At least the GCC documentation does not list anything
about that here: http://gcc.gnu.org/onlinedocs/gcc/i386-and-x86-64-Options.html

-mfpmath=both or -mfpmath=sse+387 does not seem to be doing that; instead
it seems to be about using both in the same code path.
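
What one can do, though, is write both code paths by hand and dispatch at
runtime. A minimal sketch, assuming GCC >= 4.8 where the "target" attribute
and __builtin_cpu_supports are available:

#include <cstddef>

__attribute__((target("sse2")))
void AddAll_SSE2(float* aDst, const float* aSrc, size_t aLen)
{
  for (size_t i = 0; i < aLen; ++i) {
    aDst[i] += aSrc[i];  // compiled with SSE2 float math
  }
}

void AddAll_Default(float* aDst, const float* aSrc, size_t aLen)
{
  for (size_t i = 0; i < aLen; ++i) {
    aDst[i] += aSrc[i];  // compiled with the default (x87) float math
  }
}

void AddAll(float* aDst, const float* aSrc, size_t aLen)
{
  // CPUID-based runtime choice. Note that the two paths may round
  // differently, which is the very problem discussed in this thread.
  if (__builtin_cpu_supports("sse2")) {
    AddAll_SSE2(aDst, aSrc, aLen);
  } else {
    AddAll_Default(aDst, aSrc, aLen);
  }
}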

Benoit


>
>
>> 2014-05-09 13:01 GMT-04:00 Chris Peterson :
>>
>> > What does requiring SSE2 buy us? 1% of hundreds of millions of Firefox
>> > users is still millions of people.
>> >
>> > chris
>> >
>> >
>> >
>> > On 5/8/14, 5:42 PM, matthew.br...@gmail.com wrote:
>> >
>> >> On Tuesday, January 3, 2012 4:37:53 PM UTC-8, Benoit Jacob wrote:
>> >>> 2012/1/3 Jeff Muizelaar :
>> >>>> On 2012-01-03, at 2:01 PM, Benoit Jacob wrote:
>> >>>>> 2012/1/2 Robert Kaiser :
>> >>>>>> Jean-Marc Desperrier schrieb:
>> >>>>>>> According to https://bugzilla.mozilla.org/show_bug.cgi?id=594160#c6,
>> >>>>>>> the Raw Dump tab on crash-stats.mozilla.com shows the needed
>> >>>>>>> information, you need to sort out from the info on the second line
>> >>>>>>> CPU maker, family, model, and stepping information whether SSE2 is
>> >>>>>>> there or not (With a little search, I can find that info again, bug
>> >>>>>>> 593117 gives a formula that's correct for most of the cases).
>> >>>>>> https://crash-analysis.mozilla.com/crash_analysis/ holds
>> >>>>>> *-pub-crashdata.csv.gz files that have that info from all Firefox
>> >>>>>> desktop/mobile crashes on a given day, you should be able to analyze
>> >>>>>> that for this info - with a bias, of course, as it's only people
>> >>>>>> having crashes that you see there. No idea if the less biased
>> >>>>>> telemetry samples have that info as well.
>> >>>>> On yesterday's crash data, assuming t

Re: Time to revive the "require SSE2" discussion

2014-05-09 Thread Benoit Jacob
Again (see my previous email), I don't think that performance is the primary
factor here. I care more about not having to worry about two different
flavors of floating point semantics.

Just 2 days ago a colleague had a clever implementation of something he
needed to do in gecko gfx code, and had to back out from that because it
would give the wrong result on x87. I don't know how many other things we
already do, that silently fail on x87 without us realizing. That's what I
worry about.
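
For a concrete illustration of the kind of silent divergence involved, here
is a small example; the exact outcome depends on compiler flags and on
whether the intermediate gets spilled to memory, which is precisely the
problem:

#include <cstdio>

int main()
{
  volatile double x = 1e308;
  double y = x * 10.0 / 10.0;
  // With SSE2 scalar math (strict IEEE-754 doubles), x * 10.0 overflows
  // to +inf, so y is +inf. With x87, the intermediate may live in an
  // 80-bit register with a much wider exponent range, so no overflow
  // happens and y comes out as 1e308.
  printf("%g\n", y);
  return 0;
}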

Benoit


2014-05-09 13:19 GMT-04:00 Bobby Holley :

> Can somebody get us less-circumstantial evidence than the stuff from
> http://www.palemoon.org/technical.shtml#speed , which AFAICT has the only
> perf numbers that have been cited in this thread?
>
>
> On Fri, May 9, 2014 at 10:14 AM, Benoit Jacob wrote:
>
>> Totally agree that 1% is probably still too much to drop, but the 4x drop
>> over the past two years makes me hopeful that we'll be able to drop
>> non-SSE2, eventually.
>>
>> SSE2 is not just about SIMD. The most important thing it buys us IMHO is
>> to
>> be able to not use x87 instructions anymore and instead use SSE2 (scalar)
>> instructions. That removes entire classes of bugs caused by x87 being
>> non-IEEE754-compliant with its crazy 80-bit registers.
>>
>> Benoit
>>
>>
>> 2014-05-09 13:01 GMT-04:00 Chris Peterson :
>>
>> > What does requiring SSE2 buy us? 1% of hundreds of millions of Firefox
>> > users is still millions of people.
>> >
>> > chris
>> >
>> >
>> >
>> > On 5/8/14, 5:42 PM, matthew.br...@gmail.com wrote:
>> >
>> >> On Tuesday, January 3, 2012 4:37:53 PM UTC-8, Benoit Jacob wrote:
>> >>> 2012/1/3 Jeff Muizelaar :
>> >>>> On 2012-01-03, at 2:01 PM, Benoit Jacob wrote:
>> >>>>> 2012/1/2 Robert Kaiser :
>> >>>>>> Jean-Marc Desperrier schrieb:
>> >>>>>>> According to https://bugzilla.mozilla.org/show_bug.cgi?id=594160#c6,
>> >>>>>>> the Raw Dump tab on crash-stats.mozilla.com shows the needed
>> >>>>>>> information, you need to sort out from the info on the second line
>> >>>>>>> CPU maker, family, model, and stepping information whether SSE2 is
>> >>>>>>> there or not (With a little search, I can find that info again, bug
>> >>>>>>> 593117 gives a formula that's correct for most of the cases).
>> >>>>>> https://crash-analysis.mozilla.com/crash_analysis/ holds
>> >>>>>> *-pub-crashdata.csv.gz files that have that info from all Firefox
>> >>>>>> desktop/mobile crashes on a given day, you should be able to analyze
>> >>>>>> that for this info - with a bias, of course, as it's only people
>> >>>>>> having crashes that you see there. No idea if the less biased
>> >>>>>> telemetry samples have that info as well.
>> >>>>> On yesterday's crash data, assuming that Auth

Re: Time to revive the "require SSE2" discussion

2014-05-09 Thread Benoit Jacob
Totally agree that 1% is probably still too much to drop, but the 4x drop
over the past two years makes me hopeful that we'll be able to drop
non-SSE2, eventually.

SSE2 is not just about SIMD. The most important thing it buys us IMHO is to
be able to not use x87 instructions anymore and instead use SSE2 (scalar)
instructions. That removes entire classes of bugs caused by x87 being
non-IEEE754-compliant with its crazy 80-bit registers.

Benoit


2014-05-09 13:01 GMT-04:00 Chris Peterson :

> What does requiring SSE2 buy us? 1% of hundreds of millions of Firefox
> users is still millions of people.
>
> chris
>
>
>
> On 5/8/14, 5:42 PM, matthew.br...@gmail.com wrote:
>
>> On Tuesday, January 3, 2012 4:37:53 PM UTC-8, Benoit Jacob wrote:
>>> 2012/1/3 Jeff Muizelaar :
>>>> On 2012-01-03, at 2:01 PM, Benoit Jacob wrote:
>>>>> 2012/1/2 Robert Kaiser :
>>>>>> Jean-Marc Desperrier schrieb:
>>>>>>> According to https://bugzilla.mozilla.org/show_bug.cgi?id=594160#c6,
>>>>>>> the Raw Dump tab on crash-stats.mozilla.com shows the needed
>>>>>>> information, you need to sort out from the info on the second line
>>>>>>> CPU maker, family, model, and stepping information whether SSE2 is
>>>>>>> there or not (With a little search, I can find that info again, bug
>>>>>>> 593117 gives a formula that's correct for most of the cases).
>>>>>> https://crash-analysis.mozilla.com/crash_analysis/ holds
>>>>>> *-pub-crashdata.csv.gz files that have that info from all Firefox
>>>>>> desktop/mobile crashes on a given day, you should be able to analyze
>>>>>> that for this info - with a bias, of course, as it's only people
>>>>>> having crashes that you see there. No idea if the less biased
>>>>>> telemetry samples have that info as well.
>>>>> On yesterday's crash data, assuming that AuthenticAMD\ family\
>>>>> [1-6][^0-9] is the proper way to identify these old AMD CPUs (I
>>>>> didn't check that very well), I get these results:
>>>> The measurement I have used in the past was:
>>>>
>>>> CPUs have sse2 if:
>>>> if vendor == AuthenticAMD and family >= 15
>>>> if vendor == GenuineIntel and family >= 15 or (family == 6 and
>>>>   (model == 9 or model > 11))
>>>> if vendor == CentaurHauls and family >= 6 and model >= 10
>>>
>>> Thanks.
>>>
>>> AMD and Intel CPUs amount to 296362 crashes:
>>>
>>> bjacob@cahouette:~$ egrep AuthenticAMD\|GenuineIntel 20120102-pub-crashdata.csv | wc -l
>>> 296362
>>>
>>> Counting SSE2-capable CPUs:
>>>
>>> bjacob@cahouette:~$ egrep GenuineIntel\ family\ 1[5-9] 20120102-pub-crashdata.csv | wc -l
>>> 58490
>>> bjacob@cahouette:~$ egrep GenuineIntel\ family\ [2-9][0-9] 20120102-pub-crashdata.csv | wc -l
>>> 0
>>> bjacob@cahouette:~$ egrep GenuineIntel\ family\ 6\ model\ 9 20120102-pub-crashdata.csv | wc -l
>>> 792
>>> bjacob@cahouette:~$ egrep GenuineIntel\ family\ 6\ model\ 1[2-9] 20120102-pub-crashdata.csv | wc -l
>>> 52473
>>> bjacob@cahouette:~$ egrep GenuineIntel\ family\ 6\ model\ [2-9][0-9] 20120102-pub-crashdata.csv | wc -l
>>> 103655
>>> bjacob@cahouette:~$ egrep AuthenticAMD\ family\ 1[5-9] 20120102-pub-crashdata.csv | wc -l
>>> 59463
>>> bjacob@cahouette:~$ egrep AuthenticAMD\ family\ [2-9][0-9] 20120102-pub-crashdata.csv | wc -l
>>> 8120
>>>
>>> Total SSE2 capable CPUs:
>>>
>>> 58490 + 792 + 52473 + 103655 + 59463 + 8120 = 282993
>>>
>>> 1 - 282993 / 296362 = 0.045
>>>
>>> So the proportion of non-SSE2-capable CPUs among crash reports is 4.5 %.
>>
>> Just for the record, I coded this analysis up here:
>> https://gist.github.com/matthew-brett/9cb5274f7451a3eb8fc0
>>
>> SSE2 apparently now at about one percent:
>>
>>  20120102-pub-crashdata.csv.gz: 4.53
>>  20120401-pub-crashdata.csv.gz: 4.24
>>  20120701-pub-crashdata.csv.gz: 2.77
>>  20121001-pub-crashdata.csv.gz: 2.83
>>  20130101-pub-crashdata.csv.gz: 2.66
>>  20130401-pub-crashdata.csv.gz: 2.59
>>  20130701-pub-crashdata.csv.gz: 2.20
>>  20131001-pub-crashdata.csv.gz: 1.92
>>  20140101-pub-crashdata.csv.gz: 1.86
>>  20140401-pub-crashdata.csv.gz: 1.12
>>
>> Cheers,
>>
>> Matthew
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Time to revive the "require SSE2" discussion

2014-05-08 Thread Benoit Jacob
Wonderful, thanks Matthew!

@Stability-team: ^^^ see the value of public crashdata CSV files in action!
Thanks!

Benoit


2014-05-08 20:42 GMT-04:00 :

> On Tuesday, January 3, 2012 4:37:53 PM UTC-8, Benoit Jacob wrote:
>> 2012/1/3 Jeff Muizelaar :
>>> On 2012-01-03, at 2:01 PM, Benoit Jacob wrote:
>>>> 2012/1/2 Robert Kaiser :
>>>>> Jean-Marc Desperrier schrieb:
>>>>>> According to https://bugzilla.mozilla.org/show_bug.cgi?id=594160#c6,
>>>>>> the Raw Dump tab on crash-stats.mozilla.com shows the needed
>>>>>> information, you need to sort out from the info on the second line
>>>>>> CPU maker, family, model, and stepping information whether SSE2 is
>>>>>> there or not (With a little search, I can find that info again, bug
>>>>>> 593117 gives a formula that's correct for most of the cases).
>>>>> https://crash-analysis.mozilla.com/crash_analysis/ holds
>>>>> *-pub-crashdata.csv.gz files that have that info from all Firefox
>>>>> desktop/mobile crashes on a given day, you should be able to analyze
>>>>> that for this info - with a bias, of course, as it's only people
>>>>> having crashes that you see there. No idea if the less biased
>>>>> telemetry samples have that info as well.
>>>> On yesterday's crash data, assuming that AuthenticAMD\ family\
>>>> [1-6][^0-9] is the proper way to identify these old AMD CPUs (I
>>>> didn't check that very well), I get these results:
>>> The measurement I have used in the past was:
>>>
>>> CPUs have sse2 if:
>>> if vendor == AuthenticAMD and family >= 15
>>> if vendor == GenuineIntel and family >= 15 or (family == 6 and
>>>   (model == 9 or model > 11))
>>> if vendor == CentaurHauls and family >= 6 and model >= 10
>>
>> Thanks.
>>
>> AMD and Intel CPUs amount to 296362 crashes:
>>
>> bjacob@cahouette:~$ egrep AuthenticAMD\|GenuineIntel 20120102-pub-crashdata.csv | wc -l
>> 296362
>>
>> Counting SSE2-capable CPUs:
>>
>> bjacob@cahouette:~$ egrep GenuineIntel\ family\ 1[5-9] 20120102-pub-crashdata.csv | wc -l
>> 58490
>> bjacob@cahouette:~$ egrep GenuineIntel\ family\ [2-9][0-9] 20120102-pub-crashdata.csv | wc -l
>> 0
>> bjacob@cahouette:~$ egrep GenuineIntel\ family\ 6\ model\ 9 20120102-pub-crashdata.csv | wc -l
>> 792
>> bjacob@cahouette:~$ egrep GenuineIntel\ family\ 6\ model\ 1[2-9] 20120102-pub-crashdata.csv | wc -l
>> 52473
>> bjacob@cahouette:~$ egrep GenuineIntel\ family\ 6\ model\ [2-9][0-9] 20120102-pub-crashdata.csv | wc -l
>> 103655
>> bjacob@cahouette:~$ egrep AuthenticAMD\ family\ 1[5-9] 20120102-pub-crashdata.csv | wc -l
>> 59463
>> bjacob@cahouette:~$ egrep AuthenticAMD\ family\ [2-9][0-9] 20120102-pub-crashdata.csv | wc -l
>> 8120
>>
>> Total SSE2 capable CPUs:
>>
>> 58490 + 792 + 52473 + 103655 + 59463 + 8120 = 282993
>>
>> 1 - 282993 / 296362 = 0.045
>>
>> So the proportion of non-SSE2-capable CPUs among crash reports is 4.5 %.
>
> Just for the record, I coded this analysis up here:
> https://gist.github.com/matthew-brett/9cb5274f7451a3eb8fc0
>
> SSE2 apparently now at about one percent:
>
> 20120102-pub-crashdata.csv.gz: 4.53
> 20120401-pub-crashdata.csv.gz: 4.24
> 20120701-pub-crashdata.csv.gz: 2.77
> 20121001-pub-crashdata.csv.gz: 2.83
> 20130101-pub-crashdata.csv.gz: 2.66
> 20130401-pub-crashdata.csv.gz: 2.59
> 20130701-pub-crashdata.csv.gz: 2.20
> 20131001-pub-crashdata.csv.gz: 1.92
> 20140101-pub-crashdata.csv.gz: 1.86
> 20140401-pub-crashdata.csv.gz: 1.12
>
> Cheers,
>
> Matthew
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: WebGL 2.0

2014-05-08 Thread Benoit Jacob
2014-05-08 5:53 GMT-04:00 Anne van Kesteren :

> It seems like you want to be able to do that going forward so you
> don't have to maintain a large matrix forever, but at some point say
> you drop the idea that people will want 1 and simply return N if they
> ask for 1.


Yes, that's what we agreed on in the last conversation mentioned by Ehsan
yesterday. In the near future (for the next decade), there will be
webgl-1-only devices around, so allowing getContext("webgl") to
automatically give webgl2 would create accidental compatibility problems.
But in the longer term, there will (probably) eventually be a time when
webgl-1-only devices won't exist anymore, and then, we could decide to
allow that.

2014-05-08 5:53 GMT-04:00 Anne van Kesteren :

> Are we forever going to mint new version strings or are we going to
> introduce a version parameter which is observed (until we decide to
> prune the matrix a bit), this time around?


Agreed: if we still think that a version parameter would have been
desirable if not for the issue noted above, then now would be a good time
to fix it.

> If we're doing the latter,
> maybe we should call the context id "3d" this time around...
>

WebGL is low-level and general-purpose enough that it is not specifically a
"3d" graphics API. I prefer to call it a low-level, general-purpose graphics
API.

(*plug*) this might be useful reading:
https://hacks.mozilla.org/2013/04/the-concepts-of-webgl/

Benoit




>
>
> --
> http://annevankesteren.nl/
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: WebGL 2.0

2014-05-07 Thread Benoit Jacob
2014-05-07 15:09 GMT-04:00 Ehsan Akhgari :

> We had a meeting about this today, and there is one big issue with my
> proposal above.  Because of the fact that extra dictionary members in the
> contextOptions arguments are ignored, this means that UA engines which have
> already shipped their implementation will happily accept
> |canvas.getContext("webgl", {version: 2})| and give you a context object
> which doesn't support what the author would expect, which would fail
> requirement 1 above.
>
> After going through the options a bit, it seems like the only sensible
> thing to do would be to use a new context name string, so that code which
> is written against WebGL2 will not work against an implementation which is
> unaware of this.  We seemed to agree that "webgl2" would probably be as
> good of an option as any.  So basically the current state of the proposal
> is to accept "webgl2" as the name of the context, return a
> WebGLRenderingContext, and extend that interface in the spec through a
> partial interface, making those methods throw "NotSupportedError" if you
> have received the context with the name "webgl".
>
> Sorry for the back and forth on this!  What do people think of this
> proposal version N?  :-)


I agree with this plan.

The change back to a "webgl2" context id is mostly a detail --- the bigger
part of the plan, which is basically to avoid introducing separate Web
interfaces for each new WebGL version or feature, remains unchanged.

Benoit



>
>
> Cheers,
> Ehsan
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: WebGL 2.0

2014-05-07 Thread Benoit Jacob
2014-05-07 14:14 GMT-04:00 Boris Zbarsky :

> On 5/7/14, 2:00 PM, Benoit Jacob wrote:
>
>> The idea is that if getContext("webgl", {version : N}) returns non-null,
>> then the resulting context is guaranteed to be WebGL version N, so that
>> no other versioning mechanism is needed.
>>
>
> Sure, but say some code calls getContext("webgl", { version: 1 }) and then
> passes the context to other code (from a library, say).
>
> How is that other code supposed to know whether it can use the webgl2
> bits?  The methods are there no matter what, so it can't detect based on
> that.
>

Right, so there is a mechanism for that. The second parameter to getContext
is called "context creation parameters". After creation, you can still
query context attributes, and that has to return the "actual context
parameters", which are not necessarily the same as what you requested at
getContext time. For example, that's what you do today if you want to query
whether your WebGL context actually has antialiasing (the antialias
attribute defaults to true, but actual antialiasing support is
non-mandatory).

See:
http://www.khronos.org/registry/webgl/specs/latest/1.0/#2.1

Benoit



>
> -Boris
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: WebGL 2.0

2014-05-07 Thread Benoit Jacob
2014-05-07 13:41 GMT-04:00 Boris Zbarsky :

> On 5/7/14, 12:34 PM, Ehsan Akhgari wrote:
>
>> Implementations are free to return a context that implements a higher
>> version, should that be appropriate in the future, but never lower.
>>
>
> As pointed out, this fails the explicit opt-in bit.
>
> There is also another problem here.  If we go with this setup but drop the
> "may return higher version" bit, how can a consumer handed a
> WebGLRenderingContext tell whether v2 APIs are OK to use with it short of
> trying one and having it throw?  Do we want to expose the context version
> on the context somehow?
>

The idea is that if getContext("webgl", {version : N}) returns non-null,
then the resulting context is guaranteed to be WebGL version N, so that no
other versioning mechanism is needed.

Benoit



>
> This is only an issue if the code being handed the context and the code
> that did the getContext() call are not tightly cooperating, of course.
>
> -Boris
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: WebGL 2.0

2014-05-07 Thread Benoit Jacob
This looks great, modulo one detail:


2014-05-07 12:34 GMT-04:00 Ehsan Akhgari :

>
> I discussed the requirements of this API in person with vlad and bjacob.
>  Here are two key things to keep in mind:
>
> 1. The WebGL working group wants web pages to opt in to the WebGL2
> specific parts of the functionality explicitly.  The issue is that some of
> the functionality exposed by WebGL2 is not currently available on mobile
> GPUs, and we don't want to design the API in a way that enables people to
> write code oblivious to this which works in their desktop browsers and then
> have the code break when people view those pages on mobile devices which do
> not provide hardware support for this.
>
> 2. Because of the way that the underlying OpenGL API works, we need to
> have the information on what kind of OpenGL context options we want to pass
> to the underlying platform APIs at the point that we create the underlying
> OpenGL context.  That place is HTMLCanvasElement.getContext.  Because the
> existing semantics of getContext mandates the implementation to return null
> if it doesn't support the functionality requested by the web page.
>
> I think both of these requirements are reasonable.  We discussed something
> that might be a solution:
>
> 1. The context name remains "webgl".
>
> 2. We add a |version| member to the contextOptions dictionary, that
> specifies the minimum required version of WebGL that's requested.
> Implementations are free to return a context that implements a higher
> version, should that be appropriate in the future, but never lower.
>


The "Implementations are free to return a context that implements a higher
version" part violates the above requirement 1. in your email, "The WebGL
working group wants web pages to opt in to the WebGL2 specific parts of the
functionality explicitly."

For that reason, a necessary modification to this plan is to remove the
sentence, "Implementations are free to return a context that implements a
higher version [...]".

I agree with everything else.

Benoit
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: WebGL 2.0

2014-05-06 Thread Benoit Jacob
2014-05-06 18:41 GMT-04:00 Jonas Sicking :

> I disagree with several points of this email.
>
> On Tue, May 6, 2014 at 2:18 PM, Benoit Jacob 
> wrote:
> > So, thinking more about this, here's what I think is the deeper concern
> > here.
> >
> > If we make a feature available, we have to expect that people will write
> > code assuming support for it. We can't honestly believe that all Web
> > developers write careful feature checks for all the features they depend
> on.
>
> I agree that not everyone will properly check that a feature is
> available before using it. But I don't think that's much different
> from people that simply use a feature wrong. I.e. it's a bug. It's
> going to result in some users having a broken experience.
>

Sure! I'm not disputing that.

All I'm saying is people will forget to feature-check things that just work
for them.


> > What is keeping compatibility problems tractable with other Web APIs, in
> > presence of code that doesn't do careful feature checks, is probably that
> > support for most Web APIs depends only on the browser and browser
> version.
>
> This is something we're working hard to get away from as it's
> something that has caused a lot of trouble on the web.


Indeed! But if we make WebGL2 features automatically available on
getContext("webgl"), then we're not getting away from it. People will rely
on it as it works for them locally.



> I.e. websites
> checking what browser or browser version you have, and then making
> assumptions about what that browser can or can't do.
>
> There's been a strong movement lately towards "test for features, not
> for browsers". This is a good change for the web.
>

Agree! And we're not discussing changing that here.


>
> > That's where WebGL is different: support for WebGL features also depends
> on
> > hardware details.
>
> WebGL is not unique in this. The vibration API and the
> DeviceOrientation events are two other examples of this. In FirefoxOS
> we have a whole host more.
>

That's why I didn't say it's unique in this. Just different from the
majority of Web APIs that have seen an increase of their feature set over
time.


>
> For each of these APIs we always design the API such that a page can
> check if the API is available, so that it can fall back to something
> else if the API is not available.
>

Likewise with WebGL. WebGL has a core feature set, and the rest are
extensions, that can be queried like you describe.

It just happens that we have a bulk of new features coming from OpenGL ES
3.0, that
  1) have complex inter-dependencies
  2) are available all at once on current/new hardware

So we're thinking that it makes sense to just offer them all at once as
"WebGL2". The alternative that was discussed above would be to offer them
as separate WebGL 1 extensions. That would require a lot of spec and
conformance-testing work, because of the interactions between these
features. That work wouldn't benefit many people in the long term, because:
 1) The WebGL 2 features that are the most anticipated and that can run on
the most non-OpenGL-ES-3.0 devices, are already available as WebGL 1
extensions (EXT_draw_buffers, OES_texture_float...).
 2) The other WebGL 2 features tend to be available on devices that support
OpenGL ES 3.0 fully.
So the target audience for using many more of WebGL2's features as
individual extensions to WebGL 1, is small. If a particular extension comes
up that has a compelling case for being exposed to WebGL 1, we can
definitely consider it.

It just seems that consistently slicing all of WebGL2's feature set into
WebGL 1 extensions would not be worth the effort.



>
> > That's why if we just expose different features on the object returned by
> > getContext("webgl") depending on client hardware details, we will create
> a
> > compatibility mess, unlike other Web APIs.
>
> The main problem that you have is that you haven't designed the API
> so as to allow authors to test if the API is available.
>
> If you had, this discussion would be moot.
>
> But since you haven't, you're now stuck having to find some other way
> of detecting if these features are implemented or not.
>
> What you are proposing is that people use
>
> canvas.getContext("webgl2") != null
>
> What Anne is requesting is that we use
>
> canvas.getContext("webgl").someAPI != undefined
>

That's exactly

  canvas.getContext("webgl").getExtension("someAPI") != null

That's what we've discussed earlier in this thread, that I explained was
considered not worth the nontrivi

Re: Intent to implement: WebGL 2.0

2014-05-06 Thread Benoit Jacob
So, thinking more about this, here's what I think is the deeper concern
here.

If we make a feature available, we have to expect that people will write
code assuming support for it. We can't honestly believe that all Web
developers write careful feature checks for all the features they depend on.

What is keeping compatibility problems tractable with other Web APIs, in
presence of code that doesn't do careful feature checks, is probably that
support for most Web APIs depends only on the browser and browser version.

That's where WebGL is different: support for WebGL features also depends on
hardware details.

That's why if we just expose different features on the object returned by
getContext("webgl") depending on client hardware details, we will create a
compatibility mess, unlike other Web APIs.

That's why so far the WebGL WG has been very careful with
hardware-dependent feature availability. For example, in WebGL, to rely on
an available extension, you have to first call getExtension to explicitly
enable that extension. That's unlike OpenGL where all available extensions
are readily usable.

Requiring people to explicitly write getContext("webgl2") is similar. It
prevents a situation where in 2016 most application developers have WebGL2
available on their own machine and start writing code depending on it
without really needing it. That matters, because non-WebGL2-capable devices
are going to remain common for a decade.

Benoit




2014-05-06 13:07 GMT-04:00 Boris Zbarsky :

> On 5/6/14, 12:53 PM, Benoit Jacob wrote:
>
>> Ah, I see the confusion now. So the first reason why what you're
>> suggesting wouldn't work for WebGL is that WebGL extensions may add
>> functionality without changing any IDL at all.
>>
>
> Sure, but we're not talking about arbitrary WebGL extensions.  We're
> talking about specifically the set of things we want to expose in WebGL2,
> which do include new methods.
>
> In particular, the contract would be that if any of the new methods are
> supported, then FLOAT texture upload is also supported.
>
> The fact that these may be extensions under the hood doesn't seem really
> relevant, as long as the contract is that the support is all-or-nothing.
>
>
> -Boris
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: WebGL 2.0

2014-05-06 Thread Benoit Jacob
2014-05-06 13:15 GMT-04:00 Ralph Giles :

> On 2014-05-06 9:53 AM, Benoit Jacob wrote:
>
> > By default, WebGL does not allow FLOAT to be passed for
> > the type parameter of the texImage2D method. The OES_texture_float
> > extension makes that allowed.
>
> I have trouble seeing how this could break current implementations. If a
> page somehow looks for the error as a feature or version check it should
> still get the correct answer.
>

I didn't say it would break anything. I just commented on why enabling this
feature wasn't just a switch at the DOM interface level.


>
> > There are more examples. Even when OES_texture_float is supported, FLOAT
> > textures don't support linear filtering by default. That is, in turn,
> > enabled by the OES_texture_float_linear extension,
> >
> http://www.khronos.org/registry/webgl/extensions/OES_texture_float_linear/
>
> This looks similar. Are there extensions which cause rendering
> differences merely by enabling them?
>

No. WebGL2 does not break any WebGL 1 API. WebGL extensions do not break
any API.


>
> E.g. everything in webgl2 is exposed on a 'webgl' context, and calling
> getExtension to enable extensions which are also webgl2 core features is
> a no-op? I guess the returned interface description would need new spec
> language in webgl2 if there are ever extensions with the same name
> written against different versions of the spec.
>

No, there just won't be different extensions for WebGL2 vs WebGL1 with the
same name.

> Is this what you mean
> about considering (and writing tests for) all the interactions?

No, I meant something different. Different extensions add different spec
language, and sometimes it's nontrivial to work out the details of how
these additions to the spec interplay. For example, if you start with an
API that allows doing only additions, and only supports integers; if you
then specify an extension for floating-point numbers, and another extension
for multiplication, then you need to work out the interaction between the
two: are you allowing multiplication of floating-point numbers? Do you
specify it in the multiplication spec or in the floating-point spec?



>
> It looks like doing so would violate to webgl1 spec. "An attempt to use
> any features of an extension without first calling getExtension to
> enable it must generate an appropriate GL error and must not make use of
> the feature." https://www.khronos.org/registry/webgl/specs/1.0/#5.14.14
> It would be like getExtension was silently called on context creation.
>

That is indeed the only difference between a WebGL2 rendering context and a
WebGL1 rendering context (in a theoretical world where all of WebGL2 would
be specced as WebGL1 extensions).

Benoit
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: WebGL 2.0

2014-05-06 Thread Benoit Jacob
2014-05-06 13:07 GMT-04:00 Boris Zbarsky :

> On 5/6/14, 12:53 PM, Benoit Jacob wrote:
>
>> Ah, I see the confusion now. So the first reason why what you're
>> suggesting wouldn't work for WebGL is that WebGL extensions may add
>> functionality without changing any IDL at all.
>>
>
> Sure, but we're not talking about arbitrary WebGL extensions.  We're
> talking about specifically the set of things we want to expose in WebGL2,
> which do include new methods.
>
> In particular, the contract would be that if any of the new methods are
> supported, then FLOAT texture upload is also supported.
>
> The fact that these may be extensions under the hood doesn't seem really
> relevant, as long as the contract is that the support is all-or-nothing.


Our last emails crossed, obviously :)

My point is it would be a clunky API if, in order to test for feature X
that does not affect the DOM interface, one had to test for another
unrelated feature Y.

Anyway I've shared what I think I know on this topic; I'll let other people
(who contrary to me are working on WebGL at the moment) give their own
answers.

Benoit
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: WebGL 2.0

2014-05-06 Thread Benoit Jacob
2014-05-06 12:53 GMT-04:00 Benoit Jacob :

>
>
>
> 2014-05-06 12:32 GMT-04:00 Boris Zbarsky :
>
> On 5/6/14, 12:25 PM, Benoit Jacob wrote:
>>
>>> To what extent does what I wrote in my previous email, regarding
>>> interactions between different extensions, answer your question?
>>>
>>
>> I'm not sure it answers it at all.
>>
>>
>>  With the example approach you suggested above, one would have to specify
>>> extensions separately and for each of them, their possible interactions
>>> with other extensions.
>>>
>>
>> Why?  This is the part I don't get.
>>
>> The approach I suggest is that a UA is allowed to expose the new methods
>> on the return value of getContext("webgl"), but only if it exposes all of
>> them.  In other words, it's functionally equivalent to the WebGL2 spec,
>> except the way you get a context continues to be getContext("webgl") (which
>> is the part that Anne was concerned about, afaict).
>
>
> Ah, I see the confusion now. So the first reason why what you're
> suggesting wouldn't work for WebGL is that WebGL extension my add
> functionality without changing any IDL at all.
>
> A good example (that is a WebGL 1 extension and that is part of WebGL 2)
> is float textures.
> http://www.khronos.org/registry/webgl/extensions/OES_texture_float/
>
> WebGL has a method, texImage2D, that allows uploading texture data; and it
> has various enum values, like BYTE and INT and FLOAT, that allow specifying
> the type of data. By default, WebGL does not allow FLOAT to be passed for
> the type parameter of the texImage2D method. The OES_texture_float
> extension make that allowed. So this adds real functionality (enables using
> textures with floating-point RGB components) without changing anything at
> the DOM interface level.
>
> There are more examples. Even when OES_texture_float is supported, FLOAT
> textures don't support linear filtering by default. That is, in turn,
> enabled by the OES_texture_float_linear extension,
> http://www.khronos.org/registry/webgl/extensions/OES_texture_float_linear/
>
> Both these WebGL extensions are part of core WebGL 2, so they are relevant
> examples.
>

Well, I guess the "but only if it exposes all of them" part of your
proposal would make this work, because other parts of WebGL2 do add new
methods and constants.

But, suppose that an application relies on WebGL2 features that don't
change IDL (there are many more, besides the two I mentioned). In your
proposal, they would have to either check for unrelated things on the WebGL
interface, which seems clunky, or go ahead and try to use the
feature and use WebGL.getError to check for errors if that's unsupported,
which would be slow and error-prone.

Benoit


>
> Benoit
>
>
>
>
>>
>>
>> -Boris
>> ___
>> dev-platform mailing list
>> dev-platform@lists.mozilla.org
>> https://lists.mozilla.org/listinfo/dev-platform
>>
>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: WebGL 2.0

2014-05-06 Thread Benoit Jacob
2014-05-06 12:32 GMT-04:00 Boris Zbarsky :

> On 5/6/14, 12:25 PM, Benoit Jacob wrote:
>
>> To what extent does what I wrote in my previous email, regarding
>> interactions between different extensions, answer your question?
>>
>
> I'm not sure it answers it at all.
>
>
>  With the example approach you suggested above, one would have to specify
>> extensions separately and for each of them, their possible interactions
>> with other extensions.
>>
>
> Why?  This is the part I don't get.
>
> The approach I suggest is that a UA is allowed to expose the new methods
> on the return value of getContext("webgl"), but only if it exposes all of
> them.  In other words, it's functionally equivalent to the WebGL2 spec,
> except the way you get a context continues to be getContext("webgl") (which
> is the part that Anne was concerned about, afaict).


Ah, I see the confusion now. So the first reason why what you're suggesting
wouldn't work for WebGL is that WebGL extensions may add functionality
without changing any IDL at all.

A good example (that is a WebGL 1 extension and that is part of WebGL 2) is
float textures.
http://www.khronos.org/registry/webgl/extensions/OES_texture_float/

WebGL has a method, texImage2D, that allows uploading texture data; and it
has various enum values, like BYTE and INT and FLOAT, that allow specifying
the type of data. By default, WebGL does not allow FLOAT to be passed for
the type parameter of the texImage2D method. The OES_texture_float
extension make that allowed. So this adds real functionality (enables using
textures with floating-point RGB components) without changing anything at
the DOM interface level.

There are more examples. Even when OES_texture_float is supported, FLOAT
textures don't support linear filtering by default. That is, in turn,
enabled by the OES_texture_float_linear extension,
http://www.khronos.org/registry/webgl/extensions/OES_texture_float_linear/

Both these WebGL extensions are part of core WebGL 2, so they are relevant
examples.

Benoit




>
>
> -Boris
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: WebGL 2.0

2014-05-06 Thread Benoit Jacob
2014-05-06 12:11 GMT-04:00 Boris Zbarsky :

> On 5/6/14, 12:03 PM, Benoit Jacob wrote:
>
>> Indeed, the alternative to doing WebGL2
>> is to expose the same functionality as a collection of WebGL 1 extensions
>>
>
> I think Anne's question, if I understood it right, is why this requires a
> new context ID.
>
> I assume the argument is that if you ask for the WebGL2 context id and get
> something back that guarantees that all the new methods are implemented.
>  But one could do something similar via implementations simply guaranteeing
> that if you ask for the WebGL context ID and get back an object and it has
> any of the new methods on it, then they're all present and work.
>
> Are there other reasons there's a separate context id for WebGL2?
>

To what extent does what I wrote in my previous email, regarding
interactions between different extensions, answer your question?

With the example approach you suggested above, one would have to specify
extensions separately and for each of them, their possible interactions
with other extensions.

Moreover, most of the effort spent doing that would be of little use in
practice as current desktop hardware / newer mobile hardware supports all
of that functionality. And realistically, the primary target audience there
is games, and games already have their code paths written for ES2 and/or
for ES3, i.e. they already expect the mode switch.

Benoit
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: WebGL 2.0

2014-05-06 Thread Benoit Jacob
2014-05-06 11:04 GMT-04:00 Anne van Kesteren :

> On Tue, May 6, 2014 at 3:57 PM, Thomas Zimmermann
>  wrote:
> > I think Khronos made a bad experience with backwards compatible APIs
> > during OpenGL's history. They maintained a compatible API for OpenGL for
> > ~15 years until it was huge and crufty. Mode switches are their solution
> > to the problem.
>
> Yes, but Khronos can say that at some point v1 is no longer supported.
> Or particular GPU vendors can say that. But this API is for the web,
> where we can't and where we've learned repeatedly that mode switches
> are terrible in the long run.
>

For the record, the only time Khronos broke compatibility with a new GL
API version was with the release of OpenGL ES 2.0 (on which WebGL 1 is
based), which dropped support for the OpenGL ES 1.0 API. That's the only
instance ever: newer ES versions (like ES 3.0, on which WebGL 2 is based) are
strict supersets, and regular non-ES OpenGL versions are always supersets
--- all the way from 1992's OpenGL 1.0 to the latest OpenGL 4.4.

This is just to provide a data point that OpenGL has a long track record of
strictly preserving long-term API compatibility.

The other point I'm reading above is about mode switches. I think you're
making a valid point here. I also think that the particulars of WebGL2
still make it a decent trade-off. Indeed, the alternative to doing WebGL2
is to expose the same functionality as a collection of WebGL 1 extensions
(1) (In fact, some of that functionality is already exposed (2)). We could
take that route. However, that would require figuring out the interactions
for all possible subsets of that set of extensions. There would be
nontrivial details to sort out in writing the specs, and in writing
conformance tests. To get a feel of the complexity of interactions between
different OpenGL extensions (3). Just exposing this entire set of
extensions at once as "WebGL2" reduces a lot of the complexity of
interactions.

Some more particulars of WebGL2 may be useful to spell out here to clarify
why this is a reasonable thing for us to implement.

WebGL2 follows ES 3.0 which loosely follows OpenGL 3.2 from 2009, and most
of it is OpenGL 3.0 from 2008. So this API has been well adopted and tested
in the real world for five years now.

ES 3.0 functionality is universally supported on current desktop hardware,
and is the standard for newer mobile hardware too, even in the low end (for
example, all Adreno 300 mobile GPUs support it).

We have received consistent feedback from game developers that WebGL 2
would make it much easier for them to port their newer rendering paths to
the Web.

The spec process is already well on its way with other browser vendors on
board (Google, Apple) as one can see from public_webgl mailing list
archives.

Benoit

(1) http://www.khronos.org/registry/webgl/extensions/
(2) E.g.
http://www.khronos.org/registry/webgl/extensions/WEBGL_draw_buffers/ and
http://www.khronos.org/registry/webgl/extensions/ANGLE_instanced_arrays/
(3) Search for the phrase "affects the definition of this extension" in the
language of OpenGL extension specs such as
http://www.khronos.org/registry/gles/extensions/EXT/EXT_draw_buffers.txt to
mention just one extension that's become a WebGL extension and part of
WebGL 2.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: NS_IMPL_ISUPPORTS and friends are now variadic

2014-04-28 Thread Benoit Jacob
2014-04-28 14:18 GMT-04:00 Trevor Saunders :

> On Mon, Apr 28, 2014 at 02:07:07PM -0400, Benoit Jacob wrote:
> > 2014-04-28 12:17 GMT-04:00 Birunthan Mohanathas <
> birunt...@mohanathas.com>:
> >
> > > On 28 April 2014 14:18, Benoit Jacob  wrote:
> > > > Question: is there a plan to switch to an implementation based on
> > > variadic
> > > > templates when we will stop supporting compilers that don't support
> > > them? Do
> > > > you know when that would be (of the compilers that we currently
> support,
> > > > which ones don't support variadic templates?)
> > >
> > > I don't think a purely variadic template based solution is possible
> > > (e.g. due to argument stringification employed by NS_IMPL_ADDREF and
> > > others).
> > >
> >
> > Would it be possible to have a variadic macro that takes N arguments,
> > stringifies them, and passes all 2N resulting values (the original N
> > arguments and their N stringifications) to a variadic template?
>
> Well, the bigger problem is that those macros are defining member
> functions, so I don't see how you could do that with a variadic
> template, except perhaps for cycle collection if we can have a struct
> that takes a variadic template, and then use the variadic template args
> in member functions.
>

Right, NS_IMPL_CYCLE_COLLECTION and its variants are what I have in mind
here.

Benoit


>
> Trev
>
>
> >
> > Benoit
> >
> >
> > >
> > > As for compiler support, I believe our current MSVC version is the
> > > only one lacking variadic templates. I don't know if/when we are going
> > > to switch to VS2013.
> > >
> > > On 28 April 2014 12:07, Henri Sivonen  wrote:
> > > > Cool. Is there a script that rewrites mq patches whose context has
> > > > numbered macros to not expect numbered macros?
> > >
> > > Something like this should work (please use with caution because it's
> > > Perl and because I only did a quick test):
> > >
> > > perl -i.bak -0777 -pe '
> > > $names = join("|", (
> > > "NS_IMPL_CI_INTERFACE_GETTER#",
> > > "NS_IMPL_CYCLE_COLLECTION_#",
> > > "NS_IMPL_CYCLE_COLLECTION_INHERITED_#",
> > > "NS_IMPL_ISUPPORTS#",
> > > "NS_IMPL_ISUPPORTS#_CI",
> > > "NS_IMPL_ISUPPORTS_INHERITED#",
> > > "NS_IMPL_QUERY_INTERFACE#",
> > > "NS_IMPL_QUERY_INTERFACE#_CI",
> > > "NS_IMPL_QUERY_INTERFACE_INHERITED#",
> > > "NS_INTERFACE_TABLE#",
> > > "NS_INTERFACE_TABLE_INHERITED#",
> > > )) =~ s/#/[1-9]\\d?/gr;
> > >
> > > sub rep {
> > > my ($name, $args) = @_;
> > > my $unnumbered_name = $name =~ s/_?\d+//r;
> > > my $spaces_to_remove = length($name) -
> length($unnumbered_name);
> > > $args =~ s/^(. {16}) {$spaces_to_remove}/\1/gm;
> > > return $unnumbered_name . $args;
> > > }
> > >
> > > s/($names)(\(.*?\))/rep($1, $2)/ges;
> > > ' some-patch.diff
> > >
> > ___
> > dev-platform mailing list
> > dev-platform@lists.mozilla.org
> > https://lists.mozilla.org/listinfo/dev-platform
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: NS_IMPL_ISUPPORTS and friends are now variadic

2014-04-28 Thread Benoit Jacob
2014-04-28 12:17 GMT-04:00 Birunthan Mohanathas :

> On 28 April 2014 14:18, Benoit Jacob  wrote:
> > Question: is there a plan to switch to an implementation based on
> variadic
> > templates when we will stop supporting compilers that don't support
> them? Do
> > you know when that would be (of the compilers that we currently support,
> > which ones don't support variadic templates?)
>
> I don't think a purely variadic template based solution is possible
> (e.g. due to argument stringification employed by NS_IMPL_ADDREF and
> others).
>

Would it be possible to have a variadic macro that takes N arguments,
stringifies them, and passes all 2N resulting values (the original N
arguments and their N stringifications) to a variadic template?
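
Here is a sketch of that idea, on a compiler with variadic templates. All
names are made up, and the per-argument stringification is shown only up to
three arguments; a real version would generate more arities:

#include <cstdio>

// The variadic template receives one string per template argument.
template <typename... Interfaces>
void PrintInterfaceNames(const char* const (&aNames)[sizeof...(Interfaces)])
{
  for (const char* name : aNames) {
    printf("interface: %s\n", name);
  }
}

// A tiny FOR_EACH-style layer stringifies each argument individually.
#define STR_1(a) #a
#define STR_2(a, b) #a, #b
#define STR_3(a, b, c) #a, #b, #c
#define STR_SELECT(_1, _2, _3, NAME, ...) NAME
#define STR_EACH(...) STR_SELECT(__VA_ARGS__, STR_3, STR_2, STR_1)(__VA_ARGS__)

// The macro passes the N arguments as template parameters and their N
// stringifications as data: 2N values in total.
#define PRINT_INTERFACES(...)                                      \
  do {                                                             \
    static const char* const kNames[] = { STR_EACH(__VA_ARGS__) }; \
    PrintInterfaceNames<__VA_ARGS__>(kNames);                      \
  } while (0)

struct nsIBar {};
struct nsIBaz {};

int main()
{
  PRINT_INTERFACES(nsIBar, nsIBaz);  // two types plus "nsIBar", "nsIBaz"
  return 0;
}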

Benoit


>
> As for compiler support, I believe our current MSVC version is the
> only one lacking variadic templates. I don't know if/when we are going
> to switch to VS2013.
>
> On 28 April 2014 12:07, Henri Sivonen  wrote:
> > Cool. Is there a script that rewrites mq patches whose context has
> > numbered macros to not expect numbered macros?
>
> Something like this should work (please use with caution because it's
> Perl and because I only did a quick test):
>
> perl -i.bak -0777 -pe '
> $names = join("|", (
> "NS_IMPL_CI_INTERFACE_GETTER#",
> "NS_IMPL_CYCLE_COLLECTION_#",
> "NS_IMPL_CYCLE_COLLECTION_INHERITED_#",
> "NS_IMPL_ISUPPORTS#",
> "NS_IMPL_ISUPPORTS#_CI",
> "NS_IMPL_ISUPPORTS_INHERITED#",
> "NS_IMPL_QUERY_INTERFACE#",
> "NS_IMPL_QUERY_INTERFACE#_CI",
> "NS_IMPL_QUERY_INTERFACE_INHERITED#",
> "NS_INTERFACE_TABLE#",
> "NS_INTERFACE_TABLE_INHERITED#",
> )) =~ s/#/[1-9]\\d?/gr;
>
> sub rep {
> my ($name, $args) = @_;
> my $unnumbered_name = $name =~ s/_?\d+//r;
> my $spaces_to_remove = length($name) - length($unnumbered_name);
> $args =~ s/^(. {16}) {$spaces_to_remove}/\1/gm;
> return $unnumbered_name . $args;
> }
>
> s/($names)(\(.*?\))/rep($1, $2)/ges;
> ' some-patch.diff
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: NS_IMPL_ISUPPORTS and friends are now variadic

2014-04-28 Thread Benoit Jacob
2014-04-28 0:18 GMT-04:00 Birunthan Mohanathas :

> Bugs 900903 and 900908 introduced variadic variants of
> NS_IMPL_ISUPPORTS, NS_IMPL_QUERY_INTERFACE, NS_IMPL_CYCLE_COLLECTION,
> etc. and removed the old numbered macros. So, instead of e.g.
> NS_IMPL_ISUPPORTS2(nsFoo, nsIBar, nsIBaz), simply use
> NS_IMPL_ISUPPORTS(nsFoo, nsIBar, nsIBaz) instead. Right now, the new
> macros support up to 50 variadic arguments.
>

Awesome, congrats, and thanks!

Question: is there a plan to switch to an implementation based on variadic
templates when we will stop supporting compilers that don't support them?
Do you know when that would be (of the compilers that we currently support,
which ones don't support variadic templates?)

Benoit


>
> Note that due to technical details, the new macros will reject uses
> with zero variadic arguments. In such cases, you will want to continue
> to use the zero-numbered macro, e.g. NS_IMPL_ISUPPORTS0(nsFoo).
>
> Cheers,
> Biru
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Policing dead/zombie code in m-c

2014-04-25 Thread Benoit Jacob
2014-04-25 3:31 GMT-04:00 Henri Sivonen :

> On Thu, Apr 24, 2014 at 4:20 PM, Benoit Jacob 
> wrote:
> > 2014-04-24 8:31 GMT-04:00 Henri Sivonen :
> >
> >> I have prepared a queue of patches that removes Netscape-era (circa
> >> 1999) internationalization code that efforts to implement the Encoding
> >> Standard have shown unnecessary to have in Firefox. This makes libxul
> >> on ARMv7 smaller by 181 KB, so that's a win.
> >
> > Have we measured the impact of this change on actual memory usage (as
> > opposed to virtual address space size) ?
>
> No, we haven't. I don't have a B2G phone, but I could give my whole
> patch queue in one diff to someone who wants to try.
>
> > Have we explored how much this problem could be automatically helped by
> > the linker being smart about locality?
>
> Not to my knowledge, but I'm very skeptical about getting these
> benefits by having the linker be smart so that the dead code ends up
> on memory pages that aren't actually mapped to real RAM.
>
> The code that is no longer in use is sufficiently intermingled with
> code that's still in use. Useful and useless plain old C data is
> included side-by-side. Useful and useless classes are included next to
> each other in unified compilation units. Since the classes are
> instantiated via XPCOM, a linker that's unaware of XPCOM couldn't tell
> that some classes are in use and some aren't via static analysis. All
> of them would look equally dead or alive depending on what view you
> take on the root of the caller chain being function pointers in a
> contract ID table.
>
> Using PGO to determine what's dead code and what's not wouldn't work,
> either, if the profiling run was "load mozilla.org", because the run
> would exercise too little code, or if the profiling run was "all the
> unit tests", because the profiling run would exercise too much code.
>

Thanks for this answer, it does totally make sense (and shed light on the
specifics here that make this hard to solve automatically).

Benoit



>
> On Fri, Apr 25, 2014 at 2:03 AM, Ehsan Akhgari 
> wrote:
> >> * Are we building and shipping dead code in ICU on B2G?
> >
> > No.  That is at least partly covered by bug 864843.
>
> Using system ICU seems wrong in terms of correctness. That's the
> reason why we don't use system ICU on Mac and desktop Linux, right?
>
> For a given phone, the Android base system practically never updates,
> so for a given Firefox version, the Web-exposed APIs would have as
> many behaviors as there are differing ICU snapshots on different
> Android versions out there.
>
> As for B2G, considering that Gonk is supposed to update less often
> than Gecko, it seems like a bad idea to have ICU be part of Gonk
> rather than part of Gecko on B2G.
>
> > In my experience, ICU is unfortunately a hot potato. :(  The real blocker
> > there is finding someone who can tell us what bits of ICU _are_ used in
> the
> > JS engine.
>
> Apart from ICU initialization/shutdown, the callers seem to be
> http://mxr.mozilla.org/mozilla-central/source/js/src/builtin/Intl.cpp
> and http://mxr.mozilla.org/mozilla-central/source/js/src/jsstr.cpp#852
> .
>
> So the JS engine uses:
>  * Collation
>  * Number formatting
>  * Date and time formatting
>  * Normalization
>
> It looks like the JS engine has its own copy of the Unicode database
> for other purposes. It seems like that should be unified with ICU so
> that there'd be only one copy of the Unicode database.
>
> Additionally, we should probably rewrite nsCollation users to use ICU
> collation and delete nsCollation.
>
> Therefore, it looks like we should turn off (if we haven't already):
>  * The ICU LayoutEngine.
>  * Ustdio
>  * ICU encoding converters and their mapping tables.
>  * ICU break iterators and their data.
>  * ICU transliterators and their data.
>
> http://apps.icu-project.org/datacustom/ gives a good idea of what
> there is to turn off.
>
> > The parts used in Gecko for  are pretty
> > small.  And of course someone needs to figure out the black magic of
> > conveying the information to the ICU build system.
>
> So it looks like we already build with UCONFIG_NO_LEGACY_CONVERSION:
>
> http://mxr.mozilla.org/mozilla-central/source/intl/icu/source/common/unicode/uconfig.h#264
>
> However, that flag is misdesigned in the sense that it considers
> US-ASCII, ISO-8859-1, UTF-7, UTF-32, CESU-8, SCSU and BOCU-1 as
> non-legacy, even though, frankly, those are legacy, too. (UTF-16 is
> legacy also, but it's legacy we need, since b

Re: Policing dead/zombie code in m-c

2014-04-24 Thread Benoit Jacob
2014-04-24 8:31 GMT-04:00 Henri Sivonen :

> I have prepared a queue of patches that removes Netscape-era (circa
> 1999) internationalization code that efforts to implement the Encoding
> Standard have shown unnecessary to have in Firefox. This makes libxul
> on ARMv7 smaller by 181 KB, so that's a win.
>

Have we measured the impact of this change on actual memory usage (as
opposed to virtual address space size) ?

Have we explored how much this problem could be automatically helped by the
linker being smart about locality?

I totally agree about the value of removing dead code (if only to make the
codebase easier to read and maintain), I just wonder if there might be a
shortcut to get the short-term memory usage benefits that you mention.

Benoit



>
> However, especially in the context of slimming down our own set of
> encoding converters, it's rather demotivating to see that at least on
> desktop, we are building ICU encoding converters that we don't use.
> See https://bugzilla.mozilla.org/show_bug.cgi?id=944348 . This isn't
> even a matter of building code that some might argue we maybe should
> use (https://bugzilla.mozilla.org/show_bug.cgi?id=724540). We are even
> building ICU encoding converters that we should never use even if we
> gave up on our own converters. We're building stuff like BOCU-1 that's
> explicitly banned by the HTML spec! (In general, it's uncool that
> abandoned researchy stuff like BOCU-1 is included by default in a
> widely-used production library like ICU.)
>
> Questions:
>  * Are we building and shipping dead code in ICU on B2G?
>  * The bug about building useless code in ICU has been open since
> November. Whose responsibility is it to make sure we stop building
> stuff that we don't use in ICU?
>  * Do we have any mechanisms in place for preventing stuff like the
> ICU encoding converters becoming part of the build in the future? When
> people propose to import third-party code, do reviewers typically ask
> if we are importing more than we intend to use? Clearly, considering
> that it is hard to get people to remove unused code from the build
> after the code has landed, we shouldn't have allowed code like the ICU
> encoding converters to become part of the build in the first place?
>  * How should we identify code that we build but that isn't used anywhere?
>  * How should we identify code that we build as part of Firefox but is
> used only in other apps (Thunderbird, SeaMonkey, etc.)?
>  * Are there obvious places that people should inspect for code that's
> being built but not used? Some libs that got imported for WebRTC
> maybe?
>
> --
> Henri Sivonen
> hsivo...@hsivonen.fi
> https://hsivonen.fi/
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Oculus VR support & somehwat-non-free code in the tree

2014-04-15 Thread Benoit Jacob
2014-04-15 18:28 GMT-04:00 Andreas Gal :

>
> You can’t beat the competition by fast following the competition. Our
> competition are native, closed, proprietary ecosystems. To beat them, the
> Web has to be on the bleeding edge of technology. I would love to see VR
> support in the Web platform before its available as a builtin capability in
> any major native platform.
>

Can't we?   (referring to: "You can’t beat the competition by fast
following the competition.")

The Web has a huge advantage over the competition ("native, closed,
proprietary ecosystems"):

The web only needs to be good enough.

Look at all the wins that we're currently scoring with Web games. (I
mention games because that's relevant to this thread). My understanding of
this year's GDC announcements is that we're winning. To achieve that, we
didn't really give the web any technical superiority over other platforms;
in fact, we didn't even need to achieve parity. We merely made it good
enough. For example, the competition is innovating with a completely new
platform to "run native code on the web", but with asm.js and emscripten
we're showing that javascript is in fact good enough, so we end up winning
anyway.

What we need to ensure to keep winning is 1) that the Web remains good
enough and 2) that it remains true that the Web only needs to be good
enough.

In this respect, more innovation is not necessarily better, and in fact,
the cost of innovating in the wrong direction could be particularly high
for the Web compared to other platforms. We need to understand the above 2)
point and make sure that we don't regress it. 2) probably has something to
do with the fact that the Web is the one "write once, run anywhere"
platform and, on top of that, also offers "run forever". Indeed, compared
to other platforms, we care much more about portability and we are much
more serious about committing to long-term platform stability. Now my point
is that we can only do that by being picky with what we support. There's no
magic here; we don't get the above 2) point for free.

Benoit


>
> Andreas
>
> On Apr 15, 2014, at 2:57 PM, Robert O'Callahan 
> wrote:
>
> > On Wed, Apr 16, 2014 at 3:14 AM, Benoit Jacob  >wrote:
> >
> >> If VR is not yet a thing on the Web, could you elaborate on why you
> think
> >> it should be?
> >>
> >> I'm asking because the Web has so far mostly been a common denominator,
> >> conservative platform. For example, WebGL stays at a distance behind the
> >> forefront of OpenGL innovation. I thought of that as being intentional.
> >>
> >
> > That is not intentional. There are historical and pragmatic reasons why
> the
> > Web operates well in "fast follow" mode, but there's no reason why we
> can't
> > lead as well. If the Web is going to be a strong platform it can't always
> > be the last to get shiny things. And if Firefox is going to be strong we
> > need to lead on some shiny things.
> >
> > So we need to solve Vlad's problem.
> >
> > Rob
> > --
> > Jtehsauts  tshaei dS,o n" Wohfy  Mdaon  yhoaus  eanuttehrotraiitny  eovni
> > le atrhtohu gthot sf oirng iyvoeu rs ihnesa.r"t sS?o  Whhei csha iids
>  teoa
> > stiheer :p atroa lsyazye,d  'mYaonu,r  "sGients  uapr,e  tfaokreg
> iyvoeunr,
> > 'm aotr  atnod  sgaoy ,h o'mGee.t"  uTph eann dt hwea lmka'n?  gBoutt
>  uIp
> > waanndt  wyeonut  thoo mken.o w
> > ___
> > dev-platform mailing list
> > dev-platform@lists.mozilla.org
> > https://lists.mozilla.org/listinfo/dev-platform
>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Oculus VR support & somehwat-non-free code in the tree

2014-04-15 Thread Benoit Jacob
2014-04-14 18:41 GMT-04:00 Vladimir Vukicevic :

> 3. We do nothing.  This option won't happen: I'm tired of not having Gecko
> and Firefox at the forefront of web technology in all aspects.
>

Is VR already "Web technology" i.e. is another browser vendor already
exposing this, or would we be the first to?

If VR is not yet a thing on the Web, could you elaborate on why you think
it should be?

I'm asking because the Web has so far mostly been a common denominator,
conservative platform. For example, WebGL stays at a distance behind the
forefront of OpenGL innovation. I thought of that as being intentional. Is
VR a departure from this, or is it already much more mainstream than I
thought it was?

Benoit
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: mozilla::Atomic considered harmful

2014-04-02 Thread Benoit Jacob
2014-04-02 11:03 GMT-04:00 Honza Bambas :

> On 4/2/2014 11:33 AM, Nicolas B. Pierron wrote:
>
>>
>> --lock(mRefCnt);
>> if (lock(mRefCnt) == 0) {
>>delete this;
>> }
>>
>> This way, this is more obvious that we might not be doing the right
>> things, as long as we are careful to refuse AtomicHandler references in
>> reviews.
>>
>>
> I personally don't think this will save us.  This can easily slip through
> review as well.
>
> Also, I'm using our mozilla::Atomic<> not just for refcounting but also as
> easy lock-less thread-safe counters.  If I had to change the code from mMyCounter
> += something; to mozilla::Unused << AtomicFetchAndAdd(&mMyCounter,
> something); I would not be happy :)
>

I hope that here on dev-platform we all agree that what we're really
interested in is making it easier to *read* code, much more than making it
easier to *write* code!

Assuming that we do, then the above argument weighs very little against the
explicitness of AtomicFetchAndAdd saving the person reading this code from
missing the "atomic" part!

Benoit
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: mozilla::Atomic considered harmful

2014-04-01 Thread Benoit Jacob
2014-04-01 18:40 GMT-04:00 Jeff Walden :

> On 04/01/2014 02:32 PM, Ehsan Akhgari wrote:
> > What do people feel about my proposal?  Do you think it improves writing
> > and reviewing thread safe code to be less error prone?
>
> As I said in the bug, not particularly.  I don't think you can program
> with atomics in any sort of brain-off way, and I don't think the
> boilerplate difference of += versus fetch-and-add or whatever really
> affects that.  To the extent things should be done differently, it should
> be that *template* functions that deal with atomic/non-atomic versions of
> the same algorithm deserve extra review and special care, and perhaps even
> should be implemented twice, rather than sharing a single implementation.
>  And I think the cases in question here are flavors of approximately a
> single issue, and we do not have a fundamental problem here to be solved by
> making the API more obtuse in practice.
>

How are we going to enforce (and ensure that future people enforce) that?
(The part about "functions that deal with atomic/non-atomic versions of the
same algorithm deserve extra review and special care") ?

I like Ehsan's proposal because, as far as I am concerned, explicit
function names help me very well to remember to check atomic semantics;
especially if we follow the standard <atomic> naming, where the functions
start with atomic_, e.g. std::atomic_fetch_add.

On the other hand, if the function name that we use for that is just
"operator +" then it becomes very hard for me as a reviewer, because I have
to remember, every time I see a "+", to check if the variables at
hand are atomics.
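
To make the contrast concrete, here is a minimal sketch against the standard
<atomic> API (mRefCnt being a hypothetical counter):

  #include <atomic>

  std::atomic<int> mRefCnt(0);

  void AddRefImplicit()
  {
    ++mRefCnt;   // atomic, but reads exactly like a plain increment
  }

  void AddRefExplicit()
  {
    std::atomic_fetch_add(&mRefCnt, 1);   // the atomicity is visible at the call site
  }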

Benoit
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: MOZ_ASSUME_UNREACHABLE is being misused

2014-04-01 Thread Benoit Jacob
2014-04-01 10:57 GMT-04:00 Benjamin Smedberg :

> On 4/1/2014 10:54 AM, Benoit Jacob wrote:
>
>> Let's see if we can wrap up this conversation soon now. How about:
>>
>>  MOZ_MAKE_COMPILER_BELIEVE_IS_UNREACHABLE
>>
> I counter-propose that we remove the macro entirely. I don't believe that
> the potential performance benefits we've identified are worth the risk.
>

I certainly don't object to that, but I didn't suppose that I could easily
get consensus around that.

This macro is especially heavily used in SpiderMonkey. Maybe SpiderMonkey
developers could weigh in on how large are the benefits brought by
UNREACHABLE there?

Benoit
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: MOZ_ASSUME_UNREACHABLE is being misused

2014-04-01 Thread Benoit Jacob
2014-03-28 17:14 GMT-04:00 Benoit Jacob :

>
> 2014-03-28 16:48 GMT-04:00 L. David Baron :
>
> On Friday 2014-03-28 13:41 -0700, Jeff Gilbert wrote:
>> > My vote is for MOZ_ASSERT_UNREACHABLE and MOZ_OPTIMIZE_FOR_UNREACHABLE.
>> >
>> > It's really handy to have something like MOZ_ASSERT_UNREACHABLE,
>> instead of having a bunch of MOZ_ASSERT(false, "Unreachable.") lines.
>> >
>> > Consider MOZ_ASSERT_UNREACHABLE being the same as
>> MOZ_OPTIMIZE_FOR_UNREACHABLE in non-DEBUG builds.
>>
>> I agree on the first (adding a MOZ_ASSERT_UNREACHABLE), but I don't
>> think MOZ_OPTIMIZE_FOR_UNREACHABLE sounds dangerous enough -- the
>> name should make it clear that it's dangerous for the code to be
>> reachable (i.e., the compiler can produce undefined behavior).
>> MOZ_DANGEROUSLY_ASSUME_UNREACHABLE is one idea I've thought of for
>> that, though it's a bit of a mouthful.
>>
>
> I too agree on MOZ_ASSERT_UNREACHABLE, and on the need to make the new
> name of MOZ_ASSUME_UNREACHABLE sound really scary.
>
> I don't mind if the new name of MOZ_ASSUME_UNREACHABLE is really long, as
> it should rarely be used. If SpiderMonkey gurus find that they need it
> often, they can always alias it in some local header.
>
> I think that _ASSUME_ is too hard to understand, probably because this
> doesn't explicitly say what would happen if the assumption were violated.
> One has to understand that this is introducing a *compiler* assumption to
> understand that violating it would be Undefined Behavior.
>
> How about  MOZ_ALLOW_COMPILER_TO_GO_CRAZY  ;-) This is technically
> correct, and explicit!
>

Let's see if we can wrap up this conversation soon now. How about:

MOZ_MAKE_COMPILER_BELIEVE_IS_UNREACHABLE

The idea of _COMPILER_ here is to clarify that this macro is tweaking the
compiler's own view of the surrounding code; and the idea of _BELIEVE_ here
is that the compiler is just going to believe us, even if we say something
absurd, which I believe underlines our responsibility. I'm not a native
English speaker so don't hesitate to point out any awkwardness in this
construct...

And as agreed above, we will also introduce a MOZ_ASSERT_UNREACHABLE macro
doing MOZ_ASSERT(false, msg) and will recommend it for most users.

If anyone has a better proposal or a tweak to this one, speak up! I'd like
to be able to proceed with this soon.

Benoit




>
> Benoit
>
>
>
>>
>> -David
>>
>> --
>> 𝄞   L. David Baron http://dbaron.org/   𝄂
>> 𝄢   Mozilla  https://www.mozilla.org/   𝄂
>>  Before I built a wall I'd ask to know
>>  What I was walling in or walling out,
>>  And to whom I was like to give offense.
>>- Robert Frost, Mending Wall (1914)
>>
>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: MOZ_ASSUME_UNREACHABLE is being misused

2014-04-01 Thread Benoit Jacob
2014-04-01 3:58 GMT-04:00 Benoit Jacob :

>   * Remove jump table bounds checks (see a.cpp; allowing a jump table to
> be abused to jump to an arbitrary address);
>

Just got an idea: we could market this as "WebJmp, exposing the jmp
instruction to the Web" ?

We could build a pretty strong case for it, "we already ship it".
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: MOZ_ASSUME_UNREACHABLE is being misused

2014-04-01 Thread Benoit Jacob
Test program 2/4 - b.cpp

Source code:

  #ifdef USE_UNREACHABLE
  #define UNREACHABLE() __builtin_unreachable()
  #else
  #define UNREACHABLE()
  #endif

  unsigned int foo(unsigned int x, unsigned int y)
  {
    unsigned int result;

    if (x == 0)       result = 1;
    else if (x == 1)  result = y;
    else if (x == 2)  result = y*y + y;
    else if (x == 3)  result = y*y*y;
    else if (x == 4)  result = (y*y + 1) / (y + 10);
    else if (x == 5)  result = y*y*y + y;
    else if (x == 6)  result = 2*y;
    else if (x == 7)  result = y*y + 3*y;
    else if (x == 8)  result = 5*y*y*y + y*y;
    else if (x == 9)  result = 7*y*y + 1;
    else if (x == 10) result = 3*y*y*y - y + 1;
    else {
      UNREACHABLE();
    }

    return result;
  }

Results:

Clang3.4 implements this using a jump table and does the range check
even with unreachable. Unreachable still has the effect of merging the
last case with the default case, which allows it to generate slightly
smaller code.

GCC4.6 implements this as a dumb chain of conditional jumps;
unreachable has the effect of jumping to the end of the function
without a 'ret' instruction, i.e. continuing execution outside of the
function! (I actually verified that that's what happens in GDB).

Conclusion: if we hit the 'unreachable' path, with clang3.4 we're
safe, but with gcc4.6 we end up running code of unrelated functions,
typically crashing, very possibly doing exploitable things first!

*
*

Test program 3/4 - c.cpp

If-branch on a condition already known to be never met due to an
earlier unreachable statement.

Source code:

  #ifdef USE_UNREACHABLE
  #define UNREACHABLE() __builtin_unreachable()
  #else
  #define UNREACHABLE()
  #endif

  unsigned int foo(unsigned int x)
  {
bool b = x > 100;

if (b) {
  UNREACHABLE();
}

if (b) {
  return (x*x*x*x + x*x + 1) / (x*x*x + x + 1234);
}

return x;
  }

Results:

Clang3.4: unreachable has no effect.

GCC4.6: the unreachable statement is fully understood to make this
condition never met, and GCC4.6 uses this to omit it entirely.

Without unreachable:

.cfi_startproc
cmpl$100, %edi
movl%edi, %eax
jbe .L2
movl%edi, %ecx
xorl%edx, %edx
imull   %edi, %ecx
addl$1, %ecx
imull   %edi, %ecx
imull   %ecx, %eax
addl$1234, %ecx
addl$1, %eax
divl%ecx
.L2:
rep
ret
.cfi_endproc

With unreachable:

.cfi_startproc
movl%edi, %eax
ret
.cfi_endproc


*
*

Test program 4/4 - d.cpp

If-branch on a condition already known to be always met due to an
earlier unreachable statement on the opposite condition.

Source code:

  #ifdef USE_UNREACHABLE
  #define UNREACHABLE() __builtin_unreachable()
  #else
  #define UNREACHABLE()
  #endif

  unsigned int foo(unsigned int x)
  {
bool b = x > 100;

if (!b) {
  UNREACHABLE();
}

if (b) {
  return (x*x*x*x + x*x + 1) / (x*x*x + x + 1234);
}

return x;
  }

Clang3.4: unreachable has no effect.

GCC4.6: the unreachable statement is fully understood to make this
condition always met, and GCC4.6 uses this to remove the conditional
branch and unconditionally take this branch.

Without unreachable:

.cfi_startproc
cmpl$100, %edi
movl%edi, %eax
jbe .L2
movl%edi, %ecx
xorl%edx, %edx
imull   %edi, %ecx
addl$1, %ecx
imull   %edi, %ecx
imull   %ecx, %eax
addl$1234, %ecx
addl$1, %eax
divl%ecx
.L2:
rep
ret
.cfi_endproc

With unreachable:

.cfi_startproc
movl%edi, %ecx
xorl%edx, %edx
imull   %edi, %ecx
addl$1, %ecx
imull   %edi, %ecx
movl%ecx, %eax
addl$1234, %ecx
imull   %edi, %eax
addl$1, %eax
divl%ecx
ret
.cfi_endproc







2014-03-28 12:25 GMT-04:00 Benoit Jacob :

> Hi,
>
> Despite a helpful, scary comment above its definition in
> mfbt/Assertions.h, MOZ_ASSUME_UNREACHABLE is being misused. Not pointing
> fingers to anything specific here, but see
> http://dxr.mozilla.org/mozilla-central/search?q=MOZ_ASSUME_UNREACHABLE&case=true.
>
> The only reason why one might want an unreachability marker instead of
> simply doing nothing, is as an optimization --- a rather arcane, dangerous,
> I-know-what-I-am-doing kind of optimization.
>
> How can we help people not misuse?
>
> Should we rename it to something more explicit about what it is doing,
> such as perhaps MOZ_UNREACHABLE_UNDEFINED_BEHAVIOR ?
>
> Should we give typical code a macro that does what they want and sounds
> like what they want? Really, what typical code wants is a no-operation
>

Re: MOZ_ASSUME_UNREACHABLE is being misused

2014-03-31 Thread Benoit Jacob
2014-03-31 15:22 GMT-04:00 Chris Peterson :

> On 3/28/14, 7:03 PM, Joshua Cranmer 🐧 wrote:
>
>> I included MOZ_ASSUME_UNREACHABLE_MARKER because that macro is the
>>> compiler-specific "optimize me" intrinsic, which I believe was the
>>> whole point of the original MOZ_ASSUME_UNREACHABLE.
>>>
>>> AFAIU, MOZ_ASSUME_UNREACHABLE_MARKER crashes on all Gecko platforms,
>>> but I included MOZ_CRASH to ensure the behavior was consistent for all
>>> platforms.
>>>
>>
>> No, MOZ_ASSUME_UNREACHABLE_MARKER tells the compiler that this code and
>> everything after it can't be reached, so it needn't do anything. Clang will
>> delete the code after this branch and decide to not emit any control
>> flow. It may crash, but this is in the same vein as reading an
>> uninitialized variable may crash: it can certainly do a lot of wrong and
>> potentially exploitable things first.
>>
>
> So what is an example of an appropriate use of MOZ_ASSUME_UNREACHABLE in
> Gecko today?


That's a very good question to ask at this point!

Good examples are examples where 1) it is totally guaranteed that the
location is unreachable, and 2) the surrounding code is
performance-critical for at least some caller.

Example 1:

Right *after* (not *before* !) a guaranteed crash in generic code, like
this one:

http://hg.mozilla.org/mozilla-central/file/df7b26e90378/build/annotationProcessors/CodeGenerator.java#l329

I'm not familiar with this code, but, being in a code generator, I can
trust that this might be performance critical, and is really unreachable.

Example 2:

In the default case of a performance-critical switch statement that we have
an excellent reason of thinking is completely unreachable. Example:

http://hg.mozilla.org/mozilla-central/file/df7b26e90378/js/src/gc/RootMarking.cpp#l42

Again I'm not familiar with this code, but I can trust that it's
performance-critical, and since that function is static to this cpp file, I
can trust that the callers of this function are only a few local functions
that are aware of the fact that it would be very dangerous to call this
function with a bad 'kind' (though I wish that were said in a big scary
warning). The UNREACHABLE here would typically allow the compiler to skip
checking that 'kind' is in range before implementing this switch statement
with a jump-table, so, if this code is performance-critical to the point
that the cost of checking that 'kind' is in range is significant, then the
UNREACHABLE here is useful.
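
To put the two flavors side by side, here is a hedged sketch (the enum and
function are hypothetical; the real-world case is the RootMarking.cpp switch
above):

  #include "mozilla/Assertions.h"

  enum Kind { KIND_A, KIND_B, KIND_C };

  static int Process(Kind kind) {
    switch (kind) {
      case KIND_A: return 1;
      case KIND_B: return 2;
      case KIND_C: return 3;
      default:
        // Safe default for typical code: an assertion in DEBUG builds, a
        // harmless no-op in release builds.
        MOZ_ASSERT(false, "bad kind");
        return 0;
        // Dangerous variant, only when every caller guarantees 'kind' is in
        // range: lets the compiler omit the jump-table bounds check, but
        // reaching it is Undefined Behavior.
        //   MOZ_ASSUME_UNREACHABLE("bad kind");
    }
  }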

Benoit
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: MOZ_ASSUME_UNREACHABLE is being misused

2014-03-28 Thread Benoit Jacob
2014-03-28 17:14 GMT-04:00 Benoit Jacob :

>
> 2014-03-28 16:48 GMT-04:00 L. David Baron :
>
> On Friday 2014-03-28 13:41 -0700, Jeff Gilbert wrote:
>> > My vote is for MOZ_ASSERT_UNREACHABLE and MOZ_OPTIMIZE_FOR_UNREACHABLE.
>> >
>> > It's really handy to have something like MOZ_ASSERT_UNREACHABLE,
>> instead of having a bunch of MOZ_ASSERT(false, "Unreachable.") lines.
>> >
>> > Consider MOZ_ASSERT_UNREACHABLE being the same as
>> MOZ_OPTIMIZE_FOR_UNREACHABLE in non-DEBUG builds.
>>
>> I agree on the first (adding a MOZ_ASSERT_UNREACHABLE), but I don't
>> think MOZ_OPTIMIZE_FOR_UNREACHABLE sounds dangerous enough -- the
>> name should make it clear that it's dangerous for the code to be
>> reachable (i.e., the compiler can produce undefined behavior).
>> MOZ_DANGEROUSLY_ASSUME_UNREACHABLE is one idea I've thought of for
>> that, though it's a bit of a mouthful.
>>
>
> I too agree on MOZ_ASSERT_UNREACHABLE, and on the need to make the new
> name of MOZ_ASSUME_UNREACHABLE sound really scary.
>
> I don't mind if the new name of MOZ_ASSUME_UNREACHABLE is really long, as
> it should rarely be used. If SpiderMonkey gurus find that they need it
> often, they can always alias it in some local header.
>
> I think that _ASSUME_ is too hard to understand, probably because this
> doesn't explicitly say what would happen if the assumption were violated.
> One has to understand that this is introducing a *compiler* assumption to
> understand that violating it would be Undefined Behavior.
>
> How about  MOZ_ALLOW_COMPILER_TO_GO_CRAZY  ;-) This is technically
> correct, and explicit!
>

By the way, here is an anecdote. In some very old versions of GCC, when the
compiler identified Undefined Behavior, it emitted system commands to try
launching some video games that might be present on the system (see:
http://feross.org/gcc-ownage/ ). That actually helped more to raise
awareness of what Undefined Behavior means, than any serious explanation...

So... maybe MOZ_MAYBE_PLAY_STARCRAFT?

Benoit




>
> Benoit
>
>
>
>>
>> -David
>>
>> --
>> 𝄞   L. David Baron http://dbaron.org/   𝄂
>> 𝄢   Mozilla  https://www.mozilla.org/   𝄂
>>  Before I built a wall I'd ask to know
>>  What I was walling in or walling out,
>>  And to whom I was like to give offense.
>>- Robert Frost, Mending Wall (1914)
>>
>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: MOZ_ASSUME_UNREACHABLE is being misused

2014-03-28 Thread Benoit Jacob
2014-03-28 17:19 GMT-04:00 Mike Habicher :

>   MOZ_UNDEFINED_BEHAVIOUR() ? Undefined behaviour is usually enough to
> get C/C++ programmers' attention.
>

I thought about that too; then I remembered that it was only after at least
a year at Mozilla working on Gecko that I started appreciating
how scary "Undefined Behavior" is. If I remember correctly, before that, I
was misunderstanding this concept as just "Implementation-defined behavior"
i.e. not affecting the behavior of other C++ statements, like Undefined
Behavior does.

Benoit



>
> --m.
>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: MOZ_ASSUME_UNREACHABLE is being misused

2014-03-28 Thread Benoit Jacob
2014-03-28 16:48 GMT-04:00 L. David Baron :

> On Friday 2014-03-28 13:41 -0700, Jeff Gilbert wrote:
> > My vote is for MOZ_ASSERT_UNREACHABLE and MOZ_OPTIMIZE_FOR_UNREACHABLE.
> >
> > It's really handy to have something like MOZ_ASSERT_UNREACHABLE, instead
> of having a bunch of MOZ_ASSERT(false, "Unreachable.") lines.
> >
> > Consider MOZ_ASSERT_UNREACHABLE being the same as
> MOZ_OPTIMIZE_FOR_UNREACHABLE in non-DEBUG builds.
>
> I agree on the first (adding a MOZ_ASSERT_UNREACHABLE), but I don't
> think MOZ_OPTIMIZE_FOR_UNREACHABLE sounds dangerous enough -- the
> name should make it clear that it's dangerous for the code to be
> reachable (i.e., the compiler can produce undefined behavior).
> MOZ_DANGEROUSLY_ASSUME_UNREACHABLE is one idea I've thought of for
> that, though it's a bit of a mouthful.
>

I too agree on MOZ_ASSERT_UNREACHABLE, and on the need to make the new name
of MOZ_ASSUME_UNREACHABLE sound really scary.

I don't mind if the new name of MOZ_ASSUME_UNREACHABLE is really long, as
it should rarely be used. If SpiderMonkey gurus find that they need it
often, they can always alias it in some local header.

I think that _ASSUME_ is too hard to understand, probably because this
doesn't explicitly say what would happen if the assumption were violated.
One has to understand that this is introducing a *compiler* assumption to
understand that violating it would be Undefined Behavior.

How about  MOZ_ALLOW_COMPILER_TO_GO_CRAZY  ;-) This is technically
correct, and explicit!

Benoit



>
> -David
>
> --
> 𝄞   L. David Baron http://dbaron.org/   𝄂
> 𝄢   Mozilla  https://www.mozilla.org/   𝄂
>  Before I built a wall I'd ask to know
>  What I was walling in or walling out,
>  And to whom I was like to give offense.
>- Robert Frost, Mending Wall (1914)
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: MOZ_ASSUME_UNREACHABLE is being misused

2014-03-28 Thread Benoit Jacob
2014-03-28 13:23 GMT-04:00 Chris Peterson :

> On 3/28/14, 12:25 PM, Benoit Jacob wrote:
>
>> Should we give typical code a macro that does what they want and sounds
>> like what they want? Really, what typical code wants is a no-operation
>> instead of undefined-behavior; now, that is exactly the same as
>> MOZ_ASSERT(false, "error"). Maybe this syntax is unnecessarily annoying,
>> and it would be worth adding a macro for that, i.e. similar to MOZ_CRASH
>> but only affecting DEBUG builds? What would be a good name for it? Is it
>> worth keeping a close analogy with the unreachable-marker macro to steer
>> people away from it --- e.g. maybe MOZ_UNREACHABLE_NO_OPERATION or even
>> just MOZ_UNREACHABLE? So that people couldn't miss it when they look for
>> "UNREACHABLE" macros?
>>
>
> How about replacing MOZ_ASSUME_UNREACHABLE with two new macros like:
>
> #define MOZ_ASSERT_UNREACHABLE() \
>   MOZ_ASSERT(false, "MOZ_ASSERT_UNREACHABLE")
>
> #define MOZ_CRASH_UNREACHABLE() \
>   do {  \
> MOZ_ASSUME_UNREACHABLE_MARKER();\
> MOZ_CRASH("MOZ_CRASH_UNREACHABLE"); \
>   } while (0)
>

MOZ_ASSUME_UNREACHABLE_MARKER tells the compiler "feel free to arbitrarily
miscompile this, and anything from that point on in this branch, as you may
assume that this code is unreachable". So it doesn't really serve any
purpose to add a MOZ_CRASH after a MOZ_ASSUME_UNREACHABLE_MARKER.

Benoit



>
> chris
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


MOZ_ASSUME_UNREACHABLE is being misused

2014-03-28 Thread Benoit Jacob
Hi,

Despite a helpful, scary comment above its definition in mfbt/Assertions.h,
MOZ_ASSUME_UNREACHABLE is being misused. Not pointing fingers to anything
specific here, but see
http://dxr.mozilla.org/mozilla-central/search?q=MOZ_ASSUME_UNREACHABLE&case=true.

The only reason why one might want an unreachability marker instead of
simply doing nothing, is as an optimization --- a rather arcane, dangerous,
I-know-what-I-am-doing kind of optimization.

How can we help people not misuse?

Should we rename it to something more explicit about what it is doing, such
as perhaps MOZ_UNREACHABLE_UNDEFINED_BEHAVIOR ?

Should we give typical code a macro that does what they want and sounds
like what they want? Really, what typical code wants is a no-operation
instead of undefined-behavior; now, that is exactly the same as
MOZ_ASSERT(false, "error"). Maybe this syntax is unnecessarily annoying,
and it would be worth adding a macro for that, i.e. similar to MOZ_CRASH
but only affecting DEBUG builds? What would be a good name for it? Is it
worth keeping a close analogy with the unreachable-marker macro to steer
people away from it --- e.g. maybe MOZ_UNREACHABLE_NO_OPERATION or even
just MOZ_UNREACHABLE? So that people couldn't miss it when they look for
"UNREACHABLE" macros?

Benoit
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: We live in a memory-constrained world

2014-02-28 Thread Benoit Jacob
http://en.wikipedia.org/wiki/Plain_Old_Data_Structures


confirms that POD can't have a vptr :-)
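
Here is a two-struct sketch of that point, using C++11 <type_traits>:

  #include <type_traits>

  struct Pod    { int x; };                // no vptr: statically initializable
  struct NotPod { virtual ~NotPod() {} };  // vptr: a constructor must run

  static_assert(std::is_pod<Pod>::value, "plain old data");
  static_assert(!std::is_pod<NotPod>::value, "the vptr disqualifies it");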

Benoit


2014-02-28 7:39 GMT-05:00 Henri Sivonen :

> On Fri, Feb 28, 2014 at 1:04 PM, Neil  wrote:
> > At least under MSVC, they have vtables, so they need to be constructed,
> so
> > they're not static.
>
> So structs that inherit at least one virtual method can't be plain old
> C data? That surprises me. And we still don't want to give the dynamic
> linker initializer code to run, right?
>
> --
> Henri Sivonen
> hsivo...@hsivonen.fi
> https://hsivonen.fi/
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Cairo being considered as the basis of a standard C++ drawing API

2014-02-24 Thread Benoit Jacob
P.S. The 1% figure there was a bit harsh, make it 5%...


2014-02-24 7:49 GMT-05:00 Benoit Jacob :

> From your blog post:
>
>> SG 13′s main intended purpose for such an API is to allow people learning
>> C++ and writing graphical applications to do so easily, without having to
>> rely on third-party libraries or learning complex APIs. In the long-term,
>> however, they would like the drawing API to be useful for people writing
>> application frameworks and high-performance applications as well.
>
>
> The first goal --- to make learning and simple development easier ---
> seems like a decent goal for the standard library, and one that could
> indeed conceivably be achieved by an API derived from Cairo (or Moz2D, or
> any other typical imperative 2D API).
>
> But the long-term goal mentioned above, "to be useful for people writing
> [...] high-performance applications as well", assuming that
> "high-performance" means being not too far from hardware limits, isn't
> really compatible with that, and if that is ever to be achieved, it would
> have to be thought about in depth from day one, and in particular, nobody
> really knows an incremental path from "a familiar imperative 2D API like
> Cairo (or Moz2D)" to "enable high-performance applications".
>
> The browser community is stuck with a 2D API that is convenient for casual
> usage and learning --- Canvas2D --- but has now to optimize that as much as
> possible. And with implementations such as Moz2D using Direct2D, or
> Skia/GL, there is no denying that a great deal of effort goes into that,
> with very significant advances being made towards running Canvas2D as fast
> as possible. That might confuse outside observers into believing that that
> means that these 2D APIs can be efficient, which they can't. "As fast as
> possible" here generally means 1% of hardware limits, and already using
> costly trade-offs such as large texture caches which are already a source
> of problems on memory-constrained devices.
>
> For example, people might be confused by browser-based demos using
> Canvas2D to boast "1000 flying kittens at 60 FPS" or some similar feat on a
> typical desktop computer. While that may seem impressive to people, that is
> typically only 1% of what that typical desktop hardware can do, and the way
> to get the remaining 99% of performance is to stop using Canvas2D (or any
> similar 2D API, such as Moz2D or Cairo or Direct2D) and instead use
> directly a lower-level graphics API like WebGL (or OpenGL or Direct3D). See
> e.g. Jeff's post,
> http://muizelaar.blogspot.ca/2011/02/drawing-sprites-canvas-2d-vs-webgl.html.
>
> So we are already at the point where, when people ask "how can I make this
> Canvas2D game run more efficiently" the best answer we can give them is
> "rewrite it to use WebGL instead".
>
> Similarly, if the C++ committee accepts a traditional (e.g.
> Cairo-inspired) 2D API with the eventual goal of enabling high-performance
> applications, they will probably have to give up that goal and tell
> application developers that were hoping for it, that they should rewrite
> their applications using OpenGL instead. Does the C++ committee realize
> that?
>
> To summarize, I believe that for that project to be reasonable, the C++
> committee should at least explicitly make high performance a non-goal, even
> in the long run.
>
> Benoit
>
>
> 2014-02-23 21:53 GMT-05:00 Botond Ballo :
>
> >> I would like to learn more about the tradeoffs between stateful
>> >> and stateless drawing APIs. If anyone can point me to a resource
>> >> about this, I would be grateful.
>>
>> > http://robert.ocallahan.org/2011/09/graphics-api-design.html
>>
>> > FWIW I don't think any of this affects Mozilla. We aren't going to use
>> the
>> > standard C++ graphics library ... unless it ends up being Moz2D or
>> something
>> > exactly like it :-).
>>
>> I summarized what happened at the SG13 meeting in my blog post
>> about the committee meeting [1].
>>
>> One interesting point that came up is that, since the thing being
>> standardized is cairo's interface, not its implementation, the
>> inefficiency that arises by having to go through cairo's public
>> stateful layer, then its internal stateless layer, and then a
>> backend library's stateful layer, can be avoided - an implementer
>> can simply implement the standard (stateful) interface directly
>> in terms of the stateful backend interface.
>>
>> Also, as I mention in my blog post, while the cairo proposal is
>> bein

Re: Cairo being considered as the basis of a standard C++ drawing API

2014-02-24 Thread Benoit Jacob
From your blog post:

> SG 13′s main intended purpose for such an API is to allow people learning
> C++ and writing graphical applications to do so easily, without having to
> rely on third-party libraries or learning complex APIs. In the long-term,
> however, they would like the drawing API to be useful for people writing
> application frameworks and high-performance applications as well.


The first goal --- to make learning and simple development easier --- seems
like a decent goal for the standard library, and one that could indeed
conceivably be achieved by an API derived from Cairo (or Moz2D, or any
other typical imperative 2D API).

But the long-term goal mentioned above, "to be useful for people writing
[...] high-performance applications as well", assuming that
"high-performance" means being not too far from hardware limits, isn't
really compatible with that, and if that is ever to be achieved, it would
have to be thought about in depth from day one, and in particular, nobody
really knows an incremental path from "a familiar imperative 2D API like
Cairo (or Moz2D)" to "enable high-performance applications".

The browser community is stuck with a 2D API that is convenient for casual
usage and learning --- Canvas2D --- but has now to optimize that as much as
possible. And with implementations such as Moz2D using Direct2D, or
Skia/GL, there is no denying that a great deal of effort goes into that,
with very significant advances being made towards running Canvas2D as fast
as possible. That might confuse outside observers into believing that that
means that these 2D APIs can be efficient, which they can't. "As fast as
possible" here generally means 1% of hardware limits, and already using
costly trade-offs such as large texture caches which are already a source
of problems on memory-constrained devices.

For example, people might be confused by browser-based demos using Canvas2D
to boast "1000 flying kittens at 60 FPS" or some similar feat on a typical
desktop computer. While that may seem impressive to people, that is
typically only 1% of what that typical desktop hardware can do, and the way
to get the remaining 99% of performance is to stop using Canvas2D (or any
similar 2D API, such as Moz2D or Cairo or Direct2D) and instead use
directly a lower-level graphics API like WebGL (or OpenGL or Direct3D). See
e.g. Jeff's post,
http://muizelaar.blogspot.ca/2011/02/drawing-sprites-canvas-2d-vs-webgl.html.

So we are already at the point where, when people ask "how can I make this
Canvas2D game run more efficiently" the best answer we can give them is
"rewrite it to use WebGL instead".

Similarly, if the C++ committee accepts a traditional (e.g. Cairo-inspired)
2D API with the eventual goal of enabling high-performance applications,
they will probably have to give up that goal and tell application
developers that were hoping for it, that they should rewrite their
applications using OpenGL instead. Does the C++ committee realize that?

To summarize, I believe that for that project to be reasonable, the C++
committee should at least explicitly make high performance a non-goal, even
in the long run.

Benoit


2014-02-23 21:53 GMT-05:00 Botond Ballo :

> >> I would like to learn more about the tradeoffs between stateful
> >> and stateless drawing APIs. If anyone can point me to a resource
> >> about this, I would be grateful.
>
> > http://robert.ocallahan.org/2011/09/graphics-api-design.html
>
> > FWIW I don't think any of this affects Mozilla. We aren't going to use
> the
> > standard C++ graphics library ... unless it ends up being Moz2D or
> something
> > exactly like it :-).
>
> I summarized what happened at the SG13 meeting in my blog post
> about the committee meeting [1].
>
> One interesting point that came up is that, since the thing being
> standardized is cairo's interface, not its implementation, the
> inefficiency that arises by having to go through cairo's public
> stateful layer, then its internal stateless layer, and then a
> backend library's stateful layer, can be avoided - an implementer
> can simply implement the standard (stateful) interface directly
> in terms of the stateful backend interface.
>
> Also, as I mention in my blog post, while the cairo proposal is
> being actively worked on and SG13 likes its direction, it's not
> too late to bring forward a competing proposal. If someone is
> interesting in writing a proposal based on Moz2D, I would be
> happy to present it at the next meeting.
>
> Cheers,
> Botond
>
> [1]
> http://botondballo.wordpress.com/2014/02/22/trip-report-c-standards-committee-meeting-in-issaquah-february-2014/#sg13
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: List of deprecated constructs [was Re: A proposal to reduce the number of styles in Mozilla code]

2014-01-07 Thread Benoit Jacob
2014/1/7 Kyle Huey 

> On Tue, Jan 7, 2014 at 11:29 AM, Benoit Jacob wrote:
>
>> For example, if I'm scanning a function for possible early returns (say
>> I'm
>> debugging a bug where we're forgetting to close or delete a thing before
>> returning), I now need to scan for NS_ENSURE_SUCCESS in addition to
>> scanning for return. That's why hiding control flow in macros is, in my
>> opinion, never acceptable.
>>
>
> If you care about that 9 times out of 10 you are failing to use an RAII
> class when you should be.
>

I was talking about reading code, not writing code. I spend more time
reading code that I didn't write, than writing code. Of course I do use
RAII helpers when I write this kind of code myself, in fact just today I
landed two more such helpers in gfx/gl/ScopedGLHelpers.* ...
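
For readers following along, here is a minimal sketch of the RAII pattern in
question (the resource type is hypothetical; this is not the actual
ScopedGLHelpers code):

  struct Thing;
  void CloseThing(Thing* aThing);

  class ScopedThing
  {
    Thing* mThing;
  public:
    explicit ScopedThing(Thing* aThing) : mThing(aThing) {}
    // The destructor runs on every exit path, early returns included, so
    // nothing can be forgotten before a 'return'.
    ~ScopedThing() { if (mThing) CloseThing(mThing); }
  };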

Benoit


>
> Since we seem to be voting now, I am moderately opposed to making XPCOM
> method calls more boilerplate-y, and very opposed to removing
> NS_ENSURE_SUCCESS without some sort of easy shorthand to test an nsresult
> and print to the console if it is a failure.  I know for sure that some of
> the other DOM peers (smaug and bz come to mind) feel similarly about the
> latter.
>
> - Kyle
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: List of deprecated constructs [was Re: A proposal to reduce the number of styles in Mozilla code]

2014-01-07 Thread Benoit Jacob
2014/1/7 Neil 

> Benoit Jacob wrote:
>
>  I would like to add a random voice in favor of deprecating NS_ENSURE_* for the
>> reason that hiding control flow behind macros is IMO one of the most evil
>> usage patterns of macros.
>>
>>  So you're saying that
>
> nsresult rv = Foo();
> NS_ENSURE_SUCCESS(rv, rv);
>
> is hiding the control flow of the equivalent JavaScript
>
> try {
>Foo();
> } catch (e) {
>throw e;
> }
>
> except of course that nobody writes JavaScript like that...
>

All I mean is that NS_ENSURE_SUCCESS hides a 'return' statement.

#define NS_ENSURE_SUCCESS(res, ret)                                  \
  do {                                                               \
    nsresult __rv = res; /* Don't evaluate |res| more than once */   \
    if (NS_FAILED(__rv)) {                                           \
      NS_ENSURE_SUCCESS_BODY(res, ret)                               \
      return ret;                                                    \
    }                                                                \
  } while(0)


For example, if I'm scanning a function for possible early returns (say I'm
debugging a bug where we're forgetting to close or delete a thing before
returning), I now need to scan for NS_ENSURE_SUCCESS in addition to
scanning for return. That's why hiding control flow in macros is, in my
opinion, never acceptable.
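
For comparison, here is a sketch of the same check written with the control
flow spelled out (Foo is a hypothetical fallible call; NS_WARN_IF is the
replacement that has been suggested elsewhere in this thread):

  nsresult Bar()
  {
    nsresult rv = Foo();
    NS_ENSURE_SUCCESS(rv, rv);   // the 'return rv;' is hidden in the macro

    rv = Foo();
    if (NS_WARN_IF(NS_FAILED(rv))) {
      return rv;                 // the early return is visible when scanning
    }
    return NS_OK;
  }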

Benoit


>
> --
> Warning: May contain traces of nuts.
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: List of deprecated constructs [was Re: A proposal to reduce the number of styles in Mozilla code]

2014-01-07 Thread Benoit Jacob
2014/1/7 L. David Baron 

> On Tuesday 2014-01-07 09:13 +0100, Ms2ger wrote:
> > On 01/07/2014 01:11 AM, Joshua Cranmer 🐧 wrote:
> > >Since Benjamin's message of November 22:
> > > news.mozilla.org/mailman.11861.1385151580.23840.dev-platf...@lists.mozilla.org
> >
> > >(search for "Please use NS_WARN_IF instead of NS_ENSURE_SUCCESS" if you
> > >don't have an NNTP client). Although there wasn't any prior discussion
> > >of the wisdom of this change, whether or not to use
> > >NS_ENSURE_SUCCESS-like patterns has been the subject of very acrimonious
> > >debates in the past and given the voluminous discussion on style guides
> > >in recent times, I'm not particularly inclined to start yet another one.
> >
> > FWIW, I've never seen much support for this change from anyone else
> > than Benjamin, and only in his modules the NS_ENSURE_* macros have
> > been effectively deprecated.
>
> I'm happy about getting rid of NS_ENSURE_*.
>

I would like to add a random voice in favor of deprecating NS_ENSURE_* for the
reason that hiding control flow behind macros is IMO one of the most evil
usage patterns of macros.

Benoit


>
> -David
>
> --
> 𝄞   L. David Baron http://dbaron.org/   𝄂
> 𝄢   Mozilla  https://www.mozilla.org/   𝄂
>  Before I built a wall I'd ask to know
>  What I was walling in or walling out,
>  And to whom I was like to give offense.
>- Robert Frost, Mending Wall (1914)
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Can we start using C++ STL containers in Mozilla code?

2013-12-10 Thread Benoit Jacob
2013/12/10 Chris Pearce 

> It seems to me that we should be optimizing for developer productivity
> first, and use profiling tools to find code that needs to be optimized.
>
> i.e. we should be able to use STL containers where we need basic ADTs in
> day-to-day coding, and if instances of these containers show up in profiles
> then we should look at moving indivdual instances over to more optimized
> data structures.
>
>
> On 12/11/2013 4:42 AM, Benjamin Smedberg wrote:
>
>> njn already mentioned the memory-reporting issue.
>>
>
> We already have this problem with third party libraries that we use. We
> should work towards having a port of the STL that uses our memory
> reporters, so that we can solve this everywhere, and influence the size of
> generated code for these templates.


I agree with the above.

I would also like to underline an advantage of the STL's design: the API is
very consistent across containers, which makes it easy to switch
containers (e.g. switch between map and unordered_map) and recompile.

This has sometimes been derided as a footgun, since one can unintentionally use
a container with an algorithm that isn't efficient with it.

But this also has a really nice, important effect: that one can avoid
worrying too early about optimization details, such as whether a binary
tree is efficient enough for a given use case or whether a hash table is
needed instead.

By contrast, our current Mozilla containers have each their own API and no
equivalent of the STL's iterators, so code using one container becomes
"married" to it. I believe that this circumstance is why optimization
details have been brought up IMHO prematurely in this thread, needlessly
complicating this conversation.

2013/12/10 Robert O'Callahan 

> Keep in mind that proliferation of different types for the same
> functionality hurts developer productivity in various ways, especially when
> they have quite different APIs. That's the main reason I'm not excited
> about widespread usage of a lot of new (to us) container types.
>

For the same reason as described above, I believe that adopting STL
containers is the solution, not the problem! The STL shows how to design
containers that have a sufficiently similar API that, in most cases where
that makes sense (e.g. between a map and an unordered_map), you can switch
containers without having to adapt to a different API.
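
Here is a sketch of that interchangeability; the typedef is the only line
that needs to change:

  #include <map>
  #include <string>
  // #include <unordered_map>   // the drop-in alternative

  typedef std::map<std::string, int> Table;
  // typedef std::unordered_map<std::string, int> Table;  // same call sites

  int Lookup(const Table& aTable, const std::string& aKey)
  {
    Table::const_iterator it = aTable.find(aKey);  // identical iterator API
    return it == aTable.end() ? 0 : it->second;
  }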

Benoit
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Can we start using C++ STL containers in Mozilla code?

2013-12-10 Thread Benoit Jacob
2013/12/10 smaug 

> On 12/10/2013 11:28 AM, Chris Pearce wrote:
>
>> Hi All,
>>
>> Can we start using C++ STL containers like std::set, std::map, std::queue
>> in Mozilla code please? Many of the STL containers are more convenient to
>> use than our equivalents, and more familiar to new contributors.
>>
>> I understand that we used to have a policy of not using STL in mozilla
>> code since some older compilers we wanted to support didn't have very good
>> support, but I'd assume that that argument no longer holds since already
>> build and ship a bunch of third party code that uses std containers (angle,
>> webrtc, chromium IPC, crashreporter), and the sky hasn't fallen.
>>
>> I'm not proposing a mass rewrite converting nsTArray to std::vector, just
>> that we allow STL in new code.
>>
>> Are there valid reasons why should we not allow C++ STL containers in
>> Mozilla code?
>>
>> Cheers,
>> Chris P.
>>
>
>
> std::map/set may not have the performance characteristics people think.
> They are significantly slower than
> xpcom hashtables (since they are usually trees), yet people tend to use
> them as HashTables or HashSets.
>
> Someone should compare the performance of std::unordered_map to our
> hashtables.
> (Problem is that the comparison needs to be done on all the platforms).
>

There are many non-performance-sensitive use cases where std::set or
std::map does what we need, and it has a better API than pldhash or our
wrappers around it. Often, it makes for easier-to-read code, especially
std::set compared to using a hashtable to do the same thing, since AFAIK we
don't have a HashSet class in mfbt or xpcom.
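
For instance, here is a sketch of the set-membership idiom that std::set
gives us directly:

  #include <set>

  static std::set<int> sSeen;

  // insert() reports whether the element was newly added, which is exactly
  // the "have we seen this before?" pattern that would otherwise need a
  // hashtable used as a set.
  bool CheckAndRecord(int aId)
  {
    return sSeen.insert(aId).second;
  }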

std::map and std::set are also free of any size restriction, and do not
require a contiguous part of our address space (see
https://bugzilla.mozilla.org/show_bug.cgi?id=944810#c12 ) , so they won't
run OOM before we are actually OOM.

They can even have performance advantages in some cases. Their worst-case
complexity is actually a lot better than a pldhash, as they never require a
big realloc, and IIUC they should typically have less overhead than
hashtables for small numbers of elements.

Also, I'm not aware of any mozilla equivalent of std::bitset.

Just mentioning a few reasons why people may want to use STL containers.

Benoit






>
>
>
> -Olli
>
>
>
>
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Can we start using C++ STL containers in Mozilla code?

2013-12-10 Thread Benoit Jacob
Also note that IIUC, the only thing that prevents us from solving the
memory-reporting problem using an STL allocator is that the spec doesn't
allow us to rely on storing per-object member data on an STL allocator.

Even without that, we could at least have an STL allocator doing
per-STL-container-class memory reporting, so that we can at least know how
much memory is taken by all std::set's together. Just so we know if that
ever becomes a significant portion of dark matter.
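
Here is a minimal sketch of what such a counting allocator could look like
(illustrative only, not actual memory-reporter integration):

  #include <cstddef>
  #include <memory>

  static size_t gSetBytes = 0;   // one counter per container class, not per object

  template <typename T>
  struct CountingAlloc : std::allocator<T>
  {
    template <typename U> struct rebind { typedef CountingAlloc<U> other; };
    CountingAlloc() {}
    template <typename U> CountingAlloc(const CountingAlloc<U>&) {}

    T* allocate(size_t aCount)
    {
      gSetBytes += aCount * sizeof(T);
      return std::allocator<T>::allocate(aCount);
    }
    void deallocate(T* aPtr, size_t aCount)
    {
      gSetBytes -= aCount * sizeof(T);
      std::allocator<T>::deallocate(aPtr, aCount);
    }
  };

  // Usage: std::set<int, std::less<int>, CountingAlloc<int> > gSet;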

Benoit


2013/12/10 Benoit Jacob 

> Note that we already do use at least those STL containers for which we
> don't have an equivalent in the tree. I've seen usage of at least:
> std::map, std::set, and std::bitset.
>
> I think that Nick has a good point about reporting memory usage, but I
> think that the right solution to this problem is to add Mozilla equivalents
> for the STL data structures that we need to "fork", not to skew all your
> design to use the data structures that we have instead of the ones you need.
>
> "Forking" STL data structures into Mozilla code seems reasonable to me.
> Besides memory reporting, it also gives us another benefit: guarantee of
> consistent implementation across platforms and compilers.
>
> Benoit
>
>
>
> 2013/12/10 Nicholas Nethercote 
>
>> On Tue, Dec 10, 2013 at 8:28 PM, Chris Pearce 
>> wrote:
>> > Hi All,
>> >
>> > Can we start using C++ STL containers like std::set, std::map,
>> std::queue in
>> > Mozilla code please?
>>
>>
>> https://developer.mozilla.org/en-US/docs/Using_CXX_in_Mozilla_code#C.2B.2B_and_Mozilla_standard_libraries
>> has the details.
>>
>> "As a general rule of thumb, prefer the use of MFBT or XPCOM APIs to
>> standard C++ APIs. Some of our APIs include extra methods not found in
>> the standard API (such as those reporting the size of data
>> structures). "
>>
>> I'm particularly attuned to that last point.  Not all structures grow
>> large enough to be worth reporting, but many are.  In the past I've
>> converted STL containers to Mozilla containers just to get memory
>> reporting.
>>
>> Nick
>> ___
>> dev-platform mailing list
>> dev-platform@lists.mozilla.org
>> https://lists.mozilla.org/listinfo/dev-platform
>>
>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Can we start using C++ STL containers in Mozilla code?

2013-12-10 Thread Benoit Jacob
Note that we already do use at least those STL containers for which we
don't have an equivalent in the tree. I've seen usage of at least:
std::map, std::set, and std::bitset.

I think that Nick has a good point about reporting memory usage, but I
think that the right solution to this problem is to add Mozilla equivalents
for the STL data structures that we need to "fork", not to skew all your
design to use the data structures that we have instead of the ones you need.

"Forking" STL data structures into Mozilla code seems reasonable to me.
Besides memory reporting, it also gives us another benefit: guarantee of
consistent implementation across platforms and compilers.

Benoit



2013/12/10 Nicholas Nethercote 

> On Tue, Dec 10, 2013 at 8:28 PM, Chris Pearce  wrote:
> > Hi All,
> >
> > Can we start using C++ STL containers like std::set, std::map,
> > std::queue in
> > Mozilla code please?
>
>
> https://developer.mozilla.org/en-US/docs/Using_CXX_in_Mozilla_code#C.2B.2B_and_Mozilla_standard_libraries
> has the details.
>
> "As a general rule of thumb, prefer the use of MFBT or XPCOM APIs to
> standard C++ APIs. Some of our APIs include extra methods not found in
> the standard API (such as those reporting the size of data
> structures). "
>
> I'm particularly attuned to that last point.  Not all structures grow
> large enough to be worth reporting, but many are.  In the past I've
> converted STL containers to Mozilla containers just to get memory
> reporting.
>
> Nick
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Deciding whether to change the number of unified sources

2013-12-03 Thread Benoit Jacob
Here, stripping a non-opt debug Linux 64-bit libxul brings it from 534 MB
down to 117 MB.

Benoit


2013/12/3 L. David Baron 

> On Tuesday 2013-12-03 10:18 -0800, Brian Smith wrote:
> > Also, I would be very interested in seeing "size of libxul.so" for
> > fully-optimized (including PGO, where we normally do PGO) builds. Do
> > unified builds help or hurt libxul size for release builds? Do unified
> > builds help or hurt performance in release builds?
>
> I'd certainly hope that nearly all of the difference in size of
> libxul.so is debugging info that wouldn't be present in a non-debug
> build.  But it's worth testing, because if that's not the case,
> there are some serious improvements that could be made in the C/C++
> toolchain...
>
> -David
>
> --
> 𝄞   L. David Baron http://dbaron.org/   𝄂
> 𝄢   Mozilla  https://www.mozilla.org/   𝄂
>  Before I built a wall I'd ask to know
>  What I was walling in or walling out,
>  And to whom I was like to give offense.
>- Robert Frost, Mending Wall (1914)
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Deciding whether to change the number of unified sources

2013-12-03 Thread Benoit Jacob
2013/12/3 Chris Peterson 

> On 12/3/13, 8:53 AM, Ted Mielczarek wrote:
>
>> On 12/2/2013 11:39 PM, Mike Hommey wrote:
>>
>>> Current setup (16):
>>>real11m7.986s
>>>user63m48.075s
>>>sys 3m24.677s
>>>Size of the objdir: 3.4GiB
>>>Size of libxul.so: 455MB
>>>
>> Just out of curiosity, did you try with greater than 16?
>>
>
> I tested unifying 99 files. On my not-super-fast MacBook Pro, I saw no
> significant difference (up or down) in real time compared to 16 files. This
> result is in line with Mike's results showing only small improvements
> between 8, 12, and 16 files.
>

See my email earlier in this thread.  Until we know the effective
unification ratio (as opposed to the one we request) we can't draw
conclusions from that.

Benoit



>
> chris
>
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Deciding whether to change the number of unified sources

2013-12-03 Thread Benoit Jacob
I would like to know the *effective* average number of original source
files per unified source file, and see how it compares to the *requested*
one (which you are adjusting here).

Because many unified directories have a low number of source files, the
effective number of sources per unified source will be lower than the
requested one.
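
For example, with a requested ratio of 16 and some made-up directory sizes
(every directory rounds up to a whole number of unified files):

#include <iostream>
#include <vector>

int main() {
  const int requested = 16;
  // Hypothetical per-directory source counts; small directories dominate.
  const std::vector<int> dirSizes{3, 5, 17, 2, 40, 16};

  int sources = 0, unifiedFiles = 0;
  for (int n : dirSizes) {
    sources += n;
    unifiedFiles += (n + requested - 1) / requested;  // ceil(n / 16)
  }
  // 83 sources / 9 unified files ~ 9.2, well below the requested 16.
  std::cout << "effective ratio: "
            << static_cast<double>(sources) / unifiedFiles << "\n";
  return 0;
}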

Benoit


2013/12/2 Mike Hommey 

> Hi,
>
> It was already mentioned that unified builds might be causing memory
> issues. Since the number of unified sources (16) was decided more or
> less arbitrarily (in fact, it's just using whatever was used for
> ipdl/webidl builds, which, in turn just used whatever seemed a good
> tradeoff between clobber build and incremental build with a single .cpp
> changing), it would be good to make an informed decision about the
> number of unified sources.
>
> So, now that mozilla-inbound (finally) builds with different numbers of
> unified sources (after fixing bugs 944844 and 945563, but how long
> before another problem slips in?[1]), I got some build time numbers on my
> machine (linux, old i7, 16GB RAM) to give some perspective:
>
> Current setup (16):
>   real11m7.986s
>   user63m48.075s
>   sys 3m24.677s
>   Size of the objdir: 3.4GiB
>   Size of libxul.so: 455MB
>
> 12 unified sources (requires additional patches for yet-to-be-filed bugs
> (yes, plural)):
>   real  11m18.572s
>   user  65m24.145s
>   sys   3m28.113s
>   Size of the objdir: 3.5GiB
>   Size of libxul.so: 464MB
>
> 8 unified sources:
>   real11m47.825s
>   user68m21.888s
>   sys 3m39.406s
>   Size of the objdir: 3.6GiB
>   Size of libxul.so: 476MB
>
> 4 unified sources:
>   real  12m52.630s
>   user  76m41.208s
>   sys   4m2.783s
>   Size of the objdir: 3.9GiB
>   Size of libxul.so: 509MB
>
> 2 unified sources:
>   real  14m59.050s
>   user  90m44.928s
>   sys   4m45.418s
>   Size of the objdir: 4.3GiB
>   Size of libxul.so: 561MB
>
> disabled unified sources:
>   real  18m1.001s
>   user  113m0.524s
>   sys   5m57.970s
>   Size of the objdir: 4.9GiB
>   Size of libxul.so: 628MB
>
> Mike
>
> 1. By the way, the types of bugs that show up at different numbers of
> unified sources are existing problems waiting to arise when we
> add source files, and running non-unified builds doesn't catch them.
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Mitigating unified build side effects Was: Thinking about the merge with unified build

2013-11-30 Thread Benoit Jacob
I'm all for reducing usage of 'using', and in .cpp files I've been
switching to doing

namespace foo {
// my code
}

instead of

using namespace foo;
// my code

where possible, as the latter leaks to other .cpp files in unified builds
and the former doesn't.
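
To see why, recall that a unified file is effectively just sequential
inclusion of its sources. A self-contained sketch, with the a.cpp/b.cpp
boundaries simulated by comments (file names hypothetical):

#include <iostream>

// --- effectively the contents of a.cpp ---
namespace widgets { int Count() { return 3; } }
using namespace widgets;  // file-scope 'using': stays in effect below

// --- effectively the contents of b.cpp ---
int main() {
  // Unqualified Count() compiles only because a.cpp's 'using' is still
  // live here; in a non-unified build this line would fail to compile.
  std::cout << Count() << '\n';
  return 0;
}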

Regarding the proposal to ban 'using' only at root scope only, keep in mind
that we have conflicting *nested* namespaces too:

mozilla::ipc
mozilla::dom::ipc

so at least that class of problems won't be solved by this proposal. But I
still agree that it's a step in the right direction.
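
A hypothetical sketch of that conflict (stand-in class names):

namespace mozilla {
namespace ipc { struct Channel {}; }
namespace dom { namespace ipc { struct Channel {}; } }
}

namespace client {
using namespace mozilla;       // both scoped, per the proposal above
using namespace mozilla::dom;

// ipc::Channel c;             // error: 'ipc' is ambiguous here
mozilla::dom::ipc::Channel c;  // full qualification is the only fix
}

int main() { return 0; }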

Benoit


2013/11/29 Mike Hommey 

> On Sat, Nov 30, 2013 at 12:39:59PM +0900, Mike Hommey wrote:
> > Incidentally, in those two weeks, I did two attempts at building
> > without unified sources, resulting in me filing 4 bugs in different
> > modules for problems caused by 6 different landings[1]. I think it is
> > time
> > to seriously think about having regular non-unified builds (bug 942167).
> > If that helps, I can do that on birch until that bug is fixed.
>
> Speaking of which, there are essentially two classes of such errors:
> - missing headers.
> - namespace spilling.
>
> The latter is due to one source doing "using namespace foo", and some
> other source forgetting the same because, in the unified case, they
> benefit from the other source doing it. I think in the light of unified
> sources, we should forbid non-scoped use of "using".
>
> That is:
>
> using namespace foo;
>
> would be forbidden, but
>
> namespace bar {
> using namespace foo;
> }
>
> wouldn't. In most cases, bar could be mozilla anyways.
>
> Thoughts?
>
> Mike
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: HWA and OMTC on Linux

2013-11-26 Thread Benoit Jacob
Congrats, Nick! After all is said and done, this is a very nice milestone
to cross!


2013/11/26 Nicholas Cameron 

> This has finally happened. If it sticks, then after this commit (
> https://tbpl.mozilla.org/?tree=Mozilla-Inbound&rev=aa0066b3865c) there
> will be no more main thread OpenGL compositing on any platform. See my blog
> post (
> http://featherweightmusings.blogspot.co.nz/2013/11/no-more-main-thread-opengl-in-firefox.html)
> for details (basically what I proposed at the beginning of this thread).
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Recent build time improvements due to unified sources

2013-11-20 Thread Benoit Jacob
2013/11/20 Ehsan Akhgari 

> On 2013-11-20 5:27 PM, Robert O'Callahan wrote:
>
>> On Thu, Nov 21, 2013 at 11:06 AM, Zack Weinberg  wrote:
>>
>>> On 2013-11-20 12:37 PM, Benoit Jacob wrote:
>>>
>>>> Talking about ideas for further extending the impact of UNIFIED_SOURCES,
>>>> it
>>>> seems that the biggest limitation at the moment is that sources can't be
>>>> unified between different moz.build's. Because of that, source
>>>> directories
>>>> that consist of many small sub-directories do not benefit much from
>>>> UNIFIED_SOURCES at the moment. I would love to have the ability to
>>>> declare
>>>> in a moz.build that UNIFIED_SOURCES from here downwards, including
>>>> subdirectories, are to be unified with each other. Does that sound
>>>> reasonable?
>>>>
>>>>
>>> ... Maybe this should be treated as an excuse to reduce directory
>>> nesting?
>>>
>>>
>> We don't need an excuse!
>>
>> layout/xul/base/src, and pretty much everything under content/, I'm
>> looking
>> at you.
>>
>
> How do you propose that we know which directory contains the "source" then?
>

And I always thought that all public: methods had to go in the public/
directory!

Benoit


>
> Ehsan
>
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Recent build time improvements due to unified sources

2013-11-20 Thread Benoit Jacob
2013/11/20 Gregory Szorc 

> On Nov 20, 2013, at 9:37, Benoit Jacob  wrote:
>
> > Talking about ideas for further extending the impact of UNIFIED_SOURCES,
> > it
> > seems that the biggest limitation at the moment is that sources can't be
> > unified between different moz.build's. Because of that, source
> > directories
> > that consist of many small sub-directories do not benefit much from
> > UNIFIED_SOURCES at the moment. I would love to have the ability to
> > declare
> > in a moz.build that UNIFIED_SOURCES from here downwards, including
> > subdirectories, are to be unified with each other. Does that sound
> > reasonable?
>
> You can do this today by having a parent moz.build list sources in child
> directories.
>

From the perspective of someone porting a directory to UNIFIED_SOURCES,
wanting to make minimal changes at this point to see how much compile-time
improvement we can get without making too-intrusive changes everywhere,
that is not the same: switching an entire directory to listing all sources
in the parent moz.build is a very intrusive change to the build
system. I've been refraining from doing that for now.

Benoit


>
> Keep in mind some moz.build/Makefile.in still customize directories on a
> directory-by-directory basis.
>
> I would love to see a project that consolidated data into fewer moz.build
> files. The recent work around how libraries are defined should have made
> that easier. But there are still things you can only do once per directory.
> Those limitations will disappear eventually.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Recent build time improvements due to unified sources

2013-11-20 Thread Benoit Jacob
Talking about ideas for further extending the impact of UNIFIED_SOURCES, it
seems that the biggest limitation at the moment is that sources can't be
unified across different moz.build files. Because of that, source directories
that consist of many small sub-directories do not benefit much from
UNIFIED_SOURCES at the moment. I would love to have the ability to declare
in a moz.build that UNIFIED_SOURCES from here downwards, including
subdirectories, are to be unified with each other. Does that sound
reasonable?

Benoit


2013/11/20 Chris Peterson 

> On 11/19/13, 10:08 PM, Gregory Szorc wrote:
>
>> And 24 hours later, m-c (4f993fa378eb) is getting faster:
>>
>> Wall   8:47  (527)
>> User  52:41 (3161)
>> Sys4:38  (278)
>> Total 57:19 (3439)
>>
>
> Unified builds currently coalesce source files in batches of 16.
>
> It might be useful to add a files_per_unified_file parameter to mozconfig
> or mach build. People could benchmark different values of
> files_per_unified_file (trading off clobber vs incremental build times).
> The same parameter could also be used to disable unified builds with
> files_per_unified_file = 1.
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Graphics leaks(?)

2013-11-19 Thread Benoit Jacob
2013/11/19 Nicholas Nethercote 

> And the comment at https://bugzilla.mozilla.org/show_bug.cgi?id=915940#c13
> is worrying:
> "... once allocated the memory is only referenced via a SurfaceDescriptor,
> which is a generated class (from IPDL). These are then passed around from
> thread to thread and not really kept track of - the lifetime management
> for
> them and their resources is an ongoing nightmare and is why we were leaking
> this image memory until Friday."
>
> Is my perception wrong -- is graphics code especially leak-prone?  If not,
> could we be doing more and/or different things to make such leaks less
> likely?
> https://bugzilla.mozilla.org/show_bug.cgi?id=935778 (hook
> RefCounted/RefPtr
> into the leak checking) is one idea.  Any others?
>

The problem is that SurfaceDescriptor is a non-refcounted IPDL wrapper, and
as such it should only ever have been used to reference surfaces short-term
in IPDL-related code, where "short-term" means over a period of time not
extending across the handling of more than one IPDL message. Otherwise, as
the thing it wraps is often a non-refcounted IPDL actor, it can have been
deleted at any time by the IPDL code (e.g. on channel error). So the
problem here is not just that SurfaceDescriptor makes it easy to write
leaky code; it also makes it super easy to write crashy code if one
doesn't stick to the precise usage pattern that it is safe for (as said
above, only use it to process one IPDL message).
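
As a language-level analogy (generic stand-in types, not the actual
SurfaceDescriptor or IPDL API), the hazard is the classic non-owning
handle problem:

#include <iostream>
#include <memory>

struct Surface { int width = 256; };

// Analogous to stashing a descriptor long-term: nothing keeps the
// underlying surface alive.
struct WeakDescriptor { Surface* raw; };

// Refcounted handle: safe to stash and pass from thread to thread.
struct StrongHandle { std::shared_ptr<Surface> surface; };

int main() {
  auto owner = std::make_shared<Surface>();
  WeakDescriptor weak{owner.get()};
  StrongHandle strong{owner};

  owner.reset();  // think: IPDL deletes the actor on channel error

  // weak.raw now dangles; dereferencing it would be the crash.
  std::cout << strong.surface->width << '\n';  // still valid: prints 256
  return 0;
}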

But it was very much used outside of that safe use case, mainly because
there was no other platform-generic surface type that people could use
that would cover all the surface types a SurfaceDescriptor can describe.
In a nutshell, new platform-specific surface types were added, but
platform-generic code needs a platform-generic surface type that can
specialize to any of them, and for a while the only such type covering
all the newly added surface types was SurfaceDescriptor.

Because of the way it ended up being used in many places, SurfaceDescriptor
was involved in maybe half of the b2g 1.2 blocking (koi+) graphics crashes
that we have gone over during the past few months.

During the Paris work week we had extended sessions (I think they totalled
about 10 hours) about what a "right" platform-generic surface type would
be, and how they would be passed around. Obviously, it would be
reference-counted, but we worked out the details, and you can see the
results of these sessions here:

https://wiki.mozilla.org/Platform/GFX/Surfaces

In a nutshell, there is a near-term-but-not-immediately-trivial plan to get
such a "right" surface type, and it would come from unifying the existing
TextureClient and TextureHost classes.

Meanwhile, I had initially also been working on an even more near-term plan
to provide a drop-in safe replacement for SurfaceDescriptor, and wrote
patches on this bug, https://bugzilla.mozilla.org/show_bug.cgi?id=932537 ,
but have since been told that this is considered not worth it anymore since
we should get the "right" surface type described above soon enough.

Hope that answers some of your questions / eases some of your concerns.
Benoit
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Unified builds

2013-11-18 Thread Benoit Jacob
2013/11/18 Boris Zbarsky 

> On 11/17/13 5:26 PM, Ehsan Akhgari wrote:
>
>> I don't think that we need to try to fix this problem any more than the
>> general problem of denoting our dependencies explicitly.  It's common for
>> you to remove an #include from a header and find dozens of .cpp files in
>> the tree that implicitly depended on it.  And that is much more likely to
>> happen than people adding/removing cpp files.
>>
>
> While true, in the new setup we have a different problem: adding or
> removing a .cpp file makes other random .cpp files not compile.
>
> This is especially a problem where windows.h is involved.  For bindings we
> simply disallowed including it in binding .cpp files, but for other .cpp
> files that's not necessarily workable.  Maybe we need a better solution for
> windows.h bustage.  :(
>

While working on porting directories to UNIFIED_SOURCES, I too have found
that the main problem is that system headers (not just windows.h but also
Mac and X11 headers) tend to define very polluting symbols in the root
namespace, which we then collide with thanks to "using namespace"
statements.

The solution I've employed so far has been to:
 1) minimize the number of cpp files that need to #include such system
headers, by typically moving code out of header files and only #including
system headers in a few implementation files;
 2) Keep these cpp files, that #include system headers, in plain old
SOURCES, not in UNIFIED_SOURCES.

In other words, I've been doing partial ports to UNIFIED_SOURCES only, not
full ports, but in this way we can still get 90% of the benefits and
sidestep problems caused by system headers. And 1) is generally considered
a good thing regardless.
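
Here is a portable, self-contained sketch of the collision mechanism,
with stand-in names (windows.h is nastier still, because many of its
offenders are macros, and macros ignore namespaces entirely):

// Pretend this next line came from a system header: a root-scope name.
struct Region { int id; };

namespace mozilla {
namespace gfx { struct Region { int area; }; }
}

using namespace mozilla::gfx;  // leaked in from an earlier unified source

int main() {
  // Region r;                 // error: ::Region or mozilla::gfx::Region?
  mozilla::gfx::Region r{42};  // full qualification resolves it
  return r.area - 42;
}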

Benoit



>
> -Boris
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform

