Re: Intent to implement: cryptomining and fingerprinting resource blocking

2019-03-22 Thread Rik Cabanier
On Fri, Mar 22, 2019 at 6:07 AM Ehsan Akhgari 
wrote:

> On Thu, Mar 21, 2019, 9:39 PM Rik Cabanier,  wrote:
>
>> Why are these sites not included in the "safe browsing" service that is
>> used by most browsers?
>> That way, everyone would be protected.
>>
>
> Because the relevant part of the safe browsing service covers a different set
> of criteria: https://www.google.com/about/unwanted-software-policy.html.
>

I think this page has the 3 criteria:
https://safebrowsing.google.com/#policies
It seems that origins that fingerprint users or run cryptomining scripts fall
under categories 1 and 3.


> But more importantly, Google's safe browsing is far from the only block
> list of bad URLs, based on various criteria, that various browsers and
> extensions use to improve the user's browsing experience. To answer your
> actual question here, the block lists we're working with Disconnect to
> create here are available for everyone to use under a permissive license at
> https://github.com/disconnectme/disconnect-tracking-protection. We
> actually ingest the list using the safe browsing protocol so other browsers
> that have implemented that protocol could do the same today.
>

Good to know. Thanks for that link!


>
>> On Thu, Mar 21, 2019 at 2:59 PM Steven Englehardt <
>> sengleha...@mozilla.com>
>> wrote:
>>
>> > Summary:
>> > We are expanding the set of resources blocked by Content Blocking to
>> > include domains found to participate in cryptomining and fingerprinting.
>> > Cryptomining has a significant impact on a device’s resources [0], and
>> the
>> > scripts are almost exclusively deployed without notice to the user [1].
>> > Fingerprinting has long been used to track users, and is in violation of
>> our
>> > anti-tracking policy [2].
>> >
>> > In support of this, we’ve worked with Disconnect to introduce two new
>> > categories of resources to their list: cryptominers [3] and
>> fingerprinters
>> > [4]. As of Firefox 67, we have exposed options to block these
>> categories of
>> > domains under the “Custom” section of the Content Blocking in
>> > about:preferences#privacy. We are actively working with Disconnect to
>> > discover new domains that participate in these practices, and expect the
>> > lists to grow over time. A full description of the lists is given here
>> [5].
>> >
>> > Bugs:
>> > Implementation: https://bugzilla.mozilla.org/show_bug.cgi?id=1513159
>> > Breakage:
>> > Cryptomining: https://bugzilla.mozilla.org/show_bug.cgi?id=1527015
>> > Fingerprinting: https://bugzilla.mozilla.org/show_bug.cgi?id=1527013
>> >
>> > We plan to test the impact of blocking these categories during the
>> Firefox
>> > 67 release cycle [6][7]. We are currently targeting Firefox 69 to block
>> > both categories by default, however this may change depending on the
>> > results of our user studies.
>> >
>> > To further field test the new lists, we expect to enable the blocking of
>> > both categories by default in Nightly within the coming month. If you do
>> > discover breakage related to this feature, we ask that you report it in
>> one
>> > of the cryptomining or fingerprinting blocking breakage bugs above.
>> >
>> > Link to standard: These are additions to Content Blocking/Tracking
>> > Protection which is not a feature we've standardized.
>> >
>> > Platform coverage:
>> > Desktop for now. It is being considered for geckoview: (
>> > https://bugzilla.mozilla.org/show_bug.cgi?id=1530789) but is on hold
>> until
>> > the feature is more thoroughly tested.
>> >
>> > Estimated release:
>> > Disabled by default and available for testing in Firefox 67. We expect
>> to
>> > ship this on by default in a future release, pending user testing
>> results.
>> > An intent to ship will be sent later.
>> >
>> > Preferences:
>> > * privacy.trackingprotection.fingerprinting.enabled - controls whether
>> > fingerprinting blocking is enabled
>> > * privacy.trackingprotection.cryptomining.enabled - controls whether
>> > cryptomining blocking is enabled
>> >
>> > These can also be enabled using the checkboxes under the Custom section
>> of
>> > Content Blocking in about:preferences#privacy for Firefox 67+.
>> >
>> > Is this feature enabled by default in sandboxed iframes?: Blocking
>> applies
>> > to all resources, regardless of their source.
>> >
>> > DevTools bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1537627

Re: Intent to implement: cryptomining and fingerprinting resource blocking

2019-03-21 Thread Rik Cabanier
Why are these sites not included in the "safe browsing" service that is
used by most browsers?
That way, everyone would be protected.

On Thu, Mar 21, 2019 at 2:59 PM Steven Englehardt 
wrote:

> Summary:
> We are expanding the set of resources blocked by Content Blocking to
> include domains found to participate in cryptomining and fingerprinting.
> Cryptomining has a significant impact on a device’s resources [0], and the
> scripts are almost exclusively deployed without notice to the user [1].
> Fingerprinting has long been used to track users, and is in violation of our
> anti-tracking policy [2].
>
> In support of this, we’ve worked with Disconnect to introduce two new
> categories of resources to their list: cryptominers [3] and fingerprinters
> [4]. As of Firefox 67, we have exposed options to block these categories of
> domains under the “Custom” section of the Content Blocking in
> about:preferences#privacy. We are actively working with Disconnect to
> discover new domains that participate in these practices, and expect the
> lists to grow over time. A full description of the lists is given here [5].
>
> Bugs:
> Implementation: https://bugzilla.mozilla.org/show_bug.cgi?id=1513159
> Breakage:
> Cryptomining: https://bugzilla.mozilla.org/show_bug.cgi?id=1527015
> Fingerprinting: https://bugzilla.mozilla.org/show_bug.cgi?id=1527013
>
> We plan to test the impact of blocking these categories during the Firefox
> 67 release cycle [6][7]. We are currently targeting Firefox 69 to block
> both categories by default, however this may change depending on the
> results of our user studies.
>
> To further field test the new lists, we expect to enable the blocking of
> both categories by default in Nightly within the coming month. If you do
> discover breakage related to this feature, we ask that you report it in one
> of the cryptomining or fingerprinting blocking breakage bugs above.
>
> Link to standard: These are additions to Content Blocking/Tracking
> Protection which is not a feature we've standardized.
>
> Platform coverage:
> Desktop for now. It is being considered for geckoview: (
> https://bugzilla.mozilla.org/show_bug.cgi?id=1530789) but is on hold until
> the feature is more thoroughly tested.
>
> Estimated release:
> Disabled by default and available for testing in Firefox 67. We expect to
> ship this on by default in a future release, pending user testing results.
> An intent to ship will be sent later.
>
> Preferences:
> * privacy.trackingprotection.fingerprinting.enabled - controls whether
> fingerprinting blocking is enabled
> * privacy.trackingprotection.cryptomining.enabled - controls whether
> cryptomining blocking is enabled
>
> These can also be enabled using the checkboxes under the Custom section of
> Content Blocking in about:preferences#privacy for Firefox 67+.
>
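For reference, the same two prefs can be flipped from a user.js file (standard Firefox prefs syntax; the pref names are taken from the message above):

```js
// user.js — turn both new Content Blocking categories on (Firefox 67+).
user_pref("privacy.trackingprotection.fingerprinting.enabled", true);
user_pref("privacy.trackingprotection.cryptomining.enabled", true);
```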
> Is this feature enabled by default in sandboxed iframes?: Blocking applies
> to all resources, regardless of their source.
>
> DevTools bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1537627
> When blocking of either category is enabled, any blocked resources will be
> logged to the console with the following message: `The resource at “
> example.com” was blocked because content blocking is enabled.`
>
> Do other browser engines implement this?
> Opera and Brave block cryptominers using the no-coin cryptomining list
> [8][9]. The cryptomining list supplied by Disconnect is, in part, created
> by matching web crawl data against no-coin and other crowdsourced lists.
> No other browsers currently block the fingerprinting list, as we are
> working with Disconnect to build it for this feature. However, many of the
> domains on the fingerprinting list are likely to appear on other
> crowdsourced adblocking lists.
>
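The list-based blocking described above boils down to suffix-matching a request's hostname against the listed domains. A minimal sketch (the real pipeline ships hashed URL prefixes over the safe browsing protocol; the list entries here are illustrative):

```javascript
// Hypothetical sketch of domain-suffix matching against a blocklist like
// Disconnect's cryptomining list. Entries are illustrative, not the list.
const cryptominingList = new Set(["coinhive.com", "crypto-loot.com"]);

function isBlocked(hostname, list) {
  // Match the hostname itself and any parent domain, so
  // "cdn.coinhive.com" matches the listed "coinhive.com".
  const labels = hostname.split(".");
  for (let i = 0; i < labels.length - 1; i++) {
    if (list.has(labels.slice(i).join("."))) return true;
  }
  return false;
}
```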
> Web-platform-tests: Since content blocking is not a standardized feature,
> there are no wpts.
>
> Is this feature restricted to secure contexts? No. Users benefit from
> blocking in all contexts.
>
> [0] https://arxiv.org/pdf/1806.01994.pdf
> [1] https://nikita.ca/papers/outguard-www19.pdf
> [2] https://wiki.mozilla.org/Security/Anti_tracking_policy
> [3]
>
> https://github.com/mozilla-services/shavar-prod-lists/blob/7eaadac98bc9dcc95ce917eff7bbb21cb71484ec/disconnect-blacklist.json#L9537
> [4]
>
> https://github.com/mozilla-services/shavar-prod-lists/blob/7eaadac98bc9dcc95ce917eff7bbb21cb71484ec/disconnect-blacklist.json#L9316
> [5] https://wiki.mozilla.org/Security/Tracking_protection#Lists
> [6] https://bugzilla.mozilla.org/show_bug.cgi?id=1533778
> [7] https://bugzilla.mozilla.org/show_bug.cgi?id=1530080
> [8]
>
> https://www.zdnet.com/article/opera-just-added-a-bitcoin-mining-blocker-to-its-browser/
> [9] https://github.com/brave/adblock-lists/blob/master/coin-miners.txt
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>

Re: Run firefox as headless???

2016-04-07 Thread Rik Cabanier
If the problem is that you're running on a windowless system, you can use
xvfb-run to launch your application.

On Thu, Apr 7, 2016 at 5:11 PM, Devan Shah  wrote:

> slimerjs is not fully headless unless using xvfb. Is there anything else
> that can be used?


Re: Skia Content on OS X

2016-03-22 Thread Rik Cabanier
On Tue, Mar 22, 2016 at 12:21 PM, Chris Peterson 
wrote:

> On 3/22/16 9:03 AM, Mason Chang wrote:
>
>> It’s also quite nice that micro-level optimizations at the backend level
>> can mostly be done for us as Skia is optimizing performance with Chrome as
>> one of their use cases.
>>
>
> On which platforms does Chrome use Skia?


It's used on all its platforms. AFAIK GPU rendering is turned on for mobile,
while desktop uses CPU-based rendering.


Re: Proposal: revisit and implement navigator.hardwareConcurrency

2015-09-08 Thread Rik Cabanier
On Tue, Sep 8, 2015 at 2:14 PM, Robert O'Callahan 
wrote:

> Yes, I think we should do this.
>

Happy to hear the positive responses.
I implemented a patch for this last year. Since the code is trivial, it
probably still applies: https://bugzilla.mozilla.org/show_bug.cgi?id=1008453
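A typical use of navigator.hardwareConcurrency is sizing a worker pool; a minimal sketch (the fallback and cap values are arbitrary choices, not part of the proposal):

```javascript
// Sketch: size a worker pool from navigator.hardwareConcurrency.
// The fallback (2) and cap (8) are illustrative choices.
function poolSize(hardwareConcurrency, cap = 8) {
  const cores = hardwareConcurrency || 2;       // fallback when unsupported
  return Math.max(1, Math.min(cores - 1, cap)); // leave a core for the page
}

// In a browser:
//   const workers = Array.from(
//     { length: poolSize(navigator.hardwareConcurrency) },
//     () => new Worker("task.js"));
```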


Re: Intent to implement and ship: CSS and SVG filters for canvas 2D

2015-06-15 Thread Rik Cabanier
Great news! I'm super excited to see this go in!

On Mon, Jun 15, 2015 at 10:15 AM, Markus Stange msta...@themasta.com
wrote:

 Summary: The filter property on CanvasRenderingContext2D allows authors
 to specify a filter that will get applied during canvas drawing. It accepts
 the same values as the CSS filter property, that is: CSS filter shorthand
 functions, references to SVG filters, and chained combinations of the two.

 Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=927892 and
 https://bugzilla.mozilla.org/show_bug.cgi?id=1173545

 Link to standard: This would be added to the canvas spec. There has been
 some discussion [1] of this feature on the WhatWG mailing list, but no
 actual spec has been written yet.

 Platform coverage: All

 Estimated or target release: Firefox 41 or 42

 Preference behind which this will be implemented:
 This feature has been implemented behind the preference
 canvas.filters.enabled. We are planning to flip this preference to true by
 default in the very near future. The pref flip is being tracked in bug
 1173545.

 [1]
 https://lists.w3.org/Archives/Public/public-whatwg-archive/2014Sep/0110.html
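Since the feature is pref-gated, pages would typically feature-detect it before use. A hedged sketch (detecting via presence of the `filter` attribute is a common pattern, not something mandated by the thread):

```javascript
// Sketch: feature-detect canvas filters before using them, since the
// feature ships behind the canvas.filters.enabled pref.
function supportsCanvasFilters(ctx) {
  return typeof ctx === "object" && ctx !== null && "filter" in ctx;
}

// In a browser:
//   const ctx = document.createElement("canvas").getContext("2d");
//   if (supportsCanvasFilters(ctx)) {
//     ctx.filter = "grayscale(50%) blur(2px)"; // chained CSS filter values
//     ctx.fillRect(0, 0, 100, 100);
//   }
```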



intent to ship: isolation

2014-10-30 Thread Rik Cabanier
Primary eng emails
caban...@adobe.com, mih...@adobe.com

*Spec*
http://www.w3.org/TR/compositing-1/#isolation
The spec has been in CR since Feb 20. On Oct 30 the CSS and SVG working
groups approved it to advance to PR.

*Summary*
The 'isolation' property makes an element create a new stacking context.
The main reason for this property is to limit the scope of the blending of
one of its children.

This property landed in gecko last month behind the
layout.css.isolation flag.

Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1091885

*WebKit:*
This feature is currently shipping in Safari 7.1 and 8

*Blink:*
Blink implemented this feature but it's still behind a flag. We're making
good progress on extending their compositor and are getting ready to ask
for shipping.


Re: Intent to ship: CSS Filters

2014-09-16 Thread Rik Cabanier
On Tue, Sep 16, 2014 at 2:44 PM, L. David Baron dba...@dbaron.org wrote:

 On Tuesday 2014-09-16 21:29 +0000, Max Vujovic wrote:
  == Interop ==
  Safari, Chrome, and Opera currently ship an interoperable implementation
 of CSS Filters behind a -webkit prefix.

 Do they have plans to ship without the prefix?  It's not really
 interoperable if it requires different syntax.


Are you suggesting that we should add a '-webkit-filter' property so
Firefox is interoperable?
A cursory glance on GitHub [1] shows that roughly 2 out of 3 sites only use
the prefixed version.

1:
https://github.com/search?l=css&o=desc&p=22&q=-webkit-filter&ref=searchresults&s=&type=Code&utf8=%E2%9C%93


Re: C++ standards proposals of potential interest, and upcoming committee meeting

2014-06-09 Thread Rik Cabanier
On Mon, Jun 9, 2014 at 1:28 PM, Benoit Jacob jacob.benoi...@gmail.com
wrote:

 2014-06-09 16:12 GMT-04:00 Benoit Jacob jacob.benoi...@gmail.com:

 
 
 
  2014-06-09 15:56 GMT-04:00 Botond Ballo bba...@mozilla.com:
 
  - Original Message -
   From: Benoit Jacob jacob.benoi...@gmail.com
   To: Botond Ballo bba...@mozilla.com
   Cc: dev-platform dev-platform@lists.mozilla.org
   Sent: Monday, June 9, 2014 3:45:20 PM
   Subject: Re: C++ standards proposals of potential interest, and
  upcoming committee meeting
  
   2014-06-09 15:31 GMT-04:00 Botond Ballo bba...@mozilla.com:
  
Cairo-based 2D drawing API (latest revision):
  http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n4021.pdf
   
  
    I would like the C++ committee's attention to be drawn to the dangers,
    for a committee, of trying to make decisions outside of its domain of
    expertise. I see more potential for harm than for good in having the
    C++ committee join the ranks of non-graphics specialists thinking they
    know how to do graphics...
 
   Does this caution apply even if the explicit goal of this API is to
   allow people learning C++ and/or creating simple graphical applications
   to be able to do so with minimal overhead (setting up third-party
   libraries and such), rather than necessarily provide a tool for
   expert-level/heavy-duty graphics work?
 
 
  That would ease my concerns a lot, if that were the case, but skimming
  through the proposal, it explicitly seems not to be the case.
 
  The Motivation and Scope section shows that this aims to target drawing
  GUIs and cover other needs of graphical applications, so it's not just
  about learning or tiny use cases.
 
  Even more worryingly, the proposal talks about GPUs and Direct3D and
  OpenGL and even Mantle, and that scares me, given what we know about how
  sad it is to have to take an API like Cairo (or Skia, or Moz2D, or Canvas
  2D, it doesn't matter) and try to make it efficiently utilize GPUs. The
  case of a Cairo-like or Skia-like API could totally be made, but the only
  mention of GPUs should be to say that they are mostly outside of its
 scope;
  anything more enthusiastic than that confirms fears that the proposal's
  authors are not talking out of experience.
 

 It's actually even worse than I realized: the proposal is peppered with
  performance-related comments about GPUs. Just search for "GPU" in it; there
  are 42 matches, most of them scarily talking about GPU performance
  characteristics (a typical one is "GPU resources are expensive to copy").

 This proposal should either not care at all about GPU details, which would
 be totally fine for a basic software 2D renderer, which could already cover
 the needs of many applications; or, if it were to seriously care about
 running fast on GPUs, it would not use Cairo as its starting point and it
  would look totally different (it would try to lend itself to seamlessly
 batching and reordering drawing primitives; typically, a declarative /
 scene-graph API would be a better starting point).


This is a terrible proposal. For instance:

class surface {

    ...

    void finish();
    void flush();
    ::std::shared_ptr<device> get_device();
    content get_content();
    void mark_dirty();
    void mark_dirty_rectangle(const rectangle& rect);
    void set_device_offset(const point& offset);
    void get_device_offset(point& offset);
    void write_to_png(const ::std::string& filename);
    image_surface map_to_image(const rectangle& extents);
    void unmap_image(image_surface& image);
    bool has_surface_resource() const;

    ...
};


They simply took the Cairo C library and wrapped it up in classes.
Taking something like the Canvas 2D API and adding support for layers,
filters and proper text support would be a better way to go.
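The declarative / display-list alternative Benoit alludes to can be sketched as follows (all names are illustrative; a real renderer would also have to respect overlap before reordering):

```javascript
// Sketch of a declarative display list: record primitives first, then sort
// by GPU state (e.g. texture/shader) so draws batch instead of forcing a
// state change per call. All names here are illustrative.
class DisplayList {
  constructor() { this.items = []; }
  push(stateKey, draw) { this.items.push({ stateKey, draw }); }
  flush() {
    // Reorder by state key; a real renderer must also preserve z-order
    // for overlapping primitives.
    const sorted = [...this.items].sort((a, b) =>
      a.stateKey < b.stateKey ? -1 : a.stateKey > b.stateKey ? 1 : 0);
    let batches = 0, last = null;
    for (const item of sorted) {
      if (item.stateKey !== last) { batches++; last = item.stateKey; }
      item.draw();
    }
    this.items.length = 0;
    return batches; // number of state changes actually issued
  }
}
```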


Re: C++ standards proposals of potential interest, and upcoming committee meeting

2014-06-09 Thread Rik Cabanier
On Mon, Jun 9, 2014 at 1:50 PM, Benoit Jacob jacob.benoi...@gmail.com
wrote:

 2014-06-09 16:27 GMT-04:00 Jet Villegas j...@mozilla.com:

  It seems healthy for the core C++ language to explore new territory here.
  Modern primitives for things like pixels and colors would be a good
 thing,
  I think. Let the compiler vendors compete to boil it down to the CPU/GPU.


 In the Web world, we have such an API, Canvas 2D, and the compiler
 vendors are the browser vendors. After years of intense competition
 between browser vendors, and very high cost to all browser vendors, nobody
 has figured yet how to make Canvas2D efficiently utilize GPUs.


Chrome, IE, and Safari all have GPU-accelerated backends with good success.
Deferred rendering is working very well.


 There are
 basically two kinds of Canvas2D applications: those for which GPUs have
 been useless so far, and those which have benefited much more from getting
 ported to WebGL, than they did from accelerated Canvas 2D.


That is not true. For instance, do you think Mozilla's Shumway would be
better and more reliable if it was written in WebGL?


  There will always be the argument for keeping such things out of Systems
  languages, but that school of thought won't use those features anyway. I
  was taught to not divide by 2 because bit-shifting is how you do fast
  graphics in C/C++. I sure hope the compilers have caught up and such
  trickery is no longer required--Graphics shouldn't be such a black art.
 
  --Jet
 



Re: C++ standards proposals of potential interest, and upcoming committee meeting

2014-06-09 Thread Rik Cabanier
On Mon, Jun 9, 2014 at 2:38 PM, Jeff Gilbert jgilb...@mozilla.com wrote:

 - Original Message -
  From: Rik Cabanier caban...@gmail.com
  To: Benoit Jacob jacob.benoi...@gmail.com
  Cc: Botond Ballo bba...@mozilla.com, dev-platform 
 dev-platform@lists.mozilla.org, Jet Villegas
  j...@mozilla.com
  Sent: Monday, June 9, 2014 2:08:37 PM
  Subject: Re: C++ standards proposals of potential interest, and upcoming
  committee meeting
 
  On Mon, Jun 9, 2014 at 1:50 PM, Benoit Jacob jacob.benoi...@gmail.com
  wrote:
 
   2014-06-09 16:27 GMT-04:00 Jet Villegas j...@mozilla.com:
  
It seems healthy for the core C++ language to explore new territory
 here.
Modern primitives for things like pixels and colors would be a good
   thing,
I think. Let the compiler vendors compete to boil it down to the
 CPU/GPU.
  
  
   In the Web world, we have such an API, Canvas 2D, and the compiler
   vendors are the browser vendors. After years of intense competition
   between browser vendors, and very high cost to all browser vendors,
 nobody
   has figured yet how to make Canvas2D efficiently utilize GPUs.
 
 
  Chrome, IE and Safari all have GPU accelerated backends with good
 success.
  Deferred rendering is working very well.
 We also use Skia's GL backend on some platforms, and D2D on windows.
 They're strictly slower than reimplementing the app in WebGL.


Sure, if you're just animating bitmaps with optional filters/compositing,
WebGL is faster. Drawing paths and text on the GPU requires complex shaders
or tessellation, and that requires a lot of finessing.


   There are
   basically two kinds of Canvas2D applications: those for which GPUs have
   been useless so far, and those which have benefited much more from
 getting
   ported to WebGL, than they did from accelerated Canvas 2D.
 
 
  That is not true. For instance, do you think mozilla's shumway would be
  better and reliable if it was written in WebGL?
 It depends on the primitives shumway receives. If Flash uses a 2d-like API
 like skia/cairo/etc., then it is probably not worth the effort to
 reimplement the awful-for-GL parts of those APIs.
 If there's some performant subset that can be identified, then yes, doing
 a WebGL path for that would have a higher performance ceiling.


Likely a hybrid approach where vectors are rendered by canvas 2D into a
bitmap and those bitmaps are then animated by WebGL.
A bitmap would be generated when a movieclip has:
- a filter
- a blend mode
- cache as bitmap
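That decision can be sketched as a simple predicate (field names are hypothetical, not Shumway's actual model):

```javascript
// Sketch: should this movieclip be rasterized by canvas 2D into a bitmap
// (then composited/animated by WebGL)? Field names are hypothetical.
function needsBitmapCache(movieclip) {
  return Boolean(
    movieclip.filter ||                                        // has a filter
    (movieclip.blendMode && movieclip.blendMode !== "normal") || // blend mode
    movieclip.cacheAsBitmap                                    // author opt-in
  );
}
```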


There will always be the argument for keeping such things out of
 Systems
languages, but that school of thought won't use those features
 anyway. I
was taught to not divide by 2 because bit-shifting is how you do fast
graphics in C/C++. I sure hope the compilers have caught up and such
trickery is no longer required--Graphics shouldn't be such a black
 art.
   
--Jet
   
  
 



Re: Intent to implement: DOMMatrix

2014-06-08 Thread Rik Cabanier
On Sat, Jun 7, 2014 at 9:38 PM, Benoit Jacob jacob.benoi...@gmail.com
wrote:




 2014-06-07 12:49 GMT-04:00 L. David Baron dba...@dbaron.org:

 On Monday 2014-06-02 20:45 -0700, Rik Cabanier wrote:
  - change isIdentity() so it's a flag.

 I'm a little worried about this one at first glance.

 I suspect isIdentity is going to be used primarily for optimization.
 But we want optimizations on the Web to be good -- we should care
 about making it easy for authors to care about performance.  And I'm
 worried that a flag-based isIdentity will not be useful for
 optimization because it won't hit many of the cases that authors
 care about, e.g., translating and un-translating, or scaling and
 un-scaling.


 Note that the current way that isIdentity() works also fails to offer that
 characteristic, outside of accidental cases, due to how floating point
 works.

 The point of this optimization is not so much to detect when a generic
 transformation happens to be of a special form, it is rather to represent
 transformations as a kind of variant type where matrix transformation is
 one possible variant type, and exists alongside the default, more optimized
 type, identity transformation.

 Earlier in this thread I pleaded for the removal of isIdentity(). What I
 mean is that, as it is only defensible as a variant optimization as
 described above, it doesn't make sense in a _matrix_ class. If we want to
 have such a variant type, we should call it a name that does not contain
 the word matrix, and we should have it one level above where we actually
 do matrix arithmetic.

 Strawman class diagram:

               Transformation
              /      |      \
       Identity   Matrix   Other transform types
                           (e.g. Translation)

 In such a world, the class containing the word Matrix in its name would
 not have a isIdentity() method; and for use cases where having a variant
 type that can avoid being a full blown matrix is meaningful, we would have
 such a variant type, like Transformation in the above diagram, and the
 isIdentity() method there would be merely asking the variant type for its
 type field.


Note that DOMMatrix implements the following under the hood:

              DOMMatrix
             /    |    \
      Identity 2D-Matrix 3D-Matrix
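A sketch of such a tagged variant, showing why a flag-based isIdentity() stays false even after offsets cancel (names and the translation-only update are illustrative):

```javascript
// Tagged variant: isIdentity() reads a type field instead of comparing values.
const MatrixType = { Identity: 0, Matrix2D: 1, Matrix3D: 2 };

class TaggedMatrix {
  constructor() {
    this.type = MatrixType.Identity;
    this.m = [1, 0, 0, 1, 0, 0]; // 2D affine components [a, b, c, d, e, f]
  }
  translate(tx, ty) {
    if (tx !== 0 || ty !== 0) {
      this.type = MatrixType.Matrix2D; // demoted once any transform applies
      this.m[4] += tx;
      this.m[5] += ty;
    }
    return this;
  }
  isIdentity() {
    return this.type === MatrixType.Identity;
  }
}
```

Note the trade-off debated in this thread: after translate(1, 1).translate(-1, -1) the stored values are back to the identity, but the flag, and therefore isIdentity(), stays false.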


Re: Intent to implement: DOMMatrix

2014-06-08 Thread Rik Cabanier
On Sun, Jun 8, 2014 at 8:13 AM, Benoit Jacob jacob.benoi...@gmail.com
wrote:

 2014-06-08 8:56 GMT-04:00 fb.01...@gmail.com:

  On Monday, June 2, 2014 12:11:29 AM UTC+2, Benoit Jacob wrote:
   My ROI for arguing on standards mailing on matrix math topics lists has
   been very low, presumably because these are specialist topics outside
 of
   the area of expertise of these groups.
  
   Here are a couple more objections by the way:
  
   [...]
  
   Benoit
 
  Benoit, would you mind producing a strawman for ES7, or advising someone
  who can? Brendan Eich is doing some type stuff which is probably relevant
   to this (also for SIMD etc.). I firmly believe proper matrix-handling
  APIs for JS are wanted by quite a few people. DOMMatrix-using APIs may
 then
  be altered to accept JS matrices (or provide a way to translate from
  JSMatrix to DOMMatrix and back again). This may help in the long term
 while
  the platform can have the proposed APIs. Thanks!
 
 
 Don't put matrix arithmetic concepts directly in a generalist language like
 JS, or in its standard library. That's too much of a specialist topic and
 with too many compromises to decide on.

 Instead, at the language level, simply make sure that the language offers
 the right features to allow third parties to build good matrix classes on
 top of it.

 For example, C++'s templates, OO concepts, alignment/SIMD extensions, etc,
 make it a decent language to implement matrix libraries on top of, and as a
  result, C++ programmers are much better served by the offering of
 independent matrix libraries, than they would be by a standard library
 attempt at matrix library design. Another example is Fortran, which IIRC
 has specific features enabling fast array arithmetic, but lets the actual
 matrix arithmetic up to 3rd-party libraries (BLAS, LAPACK). I think that
 all the history shows that leaving matrix arithmetic up to 3rd parties is
 best, but there are definitely language-level issues to discuss to enable
 3rd parties to do that well.


I agree. Please keep in mind that DOMMatrix is not designed as a generic
high-performance solution for matrix math. A pure JS solution will easily
outperform it.
It's designed to replace SVGMatrix and fix some of its deficiencies. It
also adds a couple of helper functions.

Once it lands, we plan to extend CSS transforms so you can get to the
current matrix without stringifying and reparsing. (this will replace
WebKitCSSMatrix and MSCSSMatrix).
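The stringify-and-reparse dance referred to above looks roughly like this today (a sketch handling only the 2D `matrix(a, b, c, d, e, f)` form):

```javascript
// Parse getComputedStyle(el).transform, e.g. "matrix(1, 0, 0, 1, 10, 20)".
function parseMatrix(transform) {
  const m = /^matrix\(([^)]+)\)$/.exec(transform.trim());
  if (!m) return null; // "none", "matrix3d(...)" etc. not handled here
  return m[1].split(",").map(Number); // [a, b, c, d, e, f]
}

// In a browser: const [a, b, c, d, e, f] =
//   parseMatrix(getComputedStyle(element).transform) || [1, 0, 0, 1, 0, 0];
```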


Re: Intent to implement: DOMMatrix

2014-06-06 Thread Rik Cabanier
On Thu, Jun 5, 2014 at 9:58 PM, Dirk Schulze dschu...@adobe.com wrote:


 On Jun 6, 2014, at 6:52 AM, Rik Cabanier caban...@gmail.com wrote:

 
 
 
  On Thu, Jun 5, 2014 at 9:40 PM, Dirk Schulze dschu...@adobe.com wrote:
 
  On Jun 6, 2014, at 6:27 AM, Robert O'Callahan rob...@ocallahan.org
 wrote:
 
   On Fri, Jun 6, 2014 at 4:22 PM, Dirk Schulze dschu...@adobe.com
 wrote:
   What about
  
   DOMMatrix(1,0,0,1,0,0) or
   DOMMatrix(1,0,0,0,0,1,0,0,0,0,1,0,0,0,0,1)
  
   Do we check the values and determine if the matrix is identity or not?
 If we do, then authors could write DOMMatrix(other.a, other.b, other.c,
 other.d, other.e, other.f) and check any kind of matrix after transforming
  for identity. In this case, a real isIdentity check wouldn’t be the worst IMO.
  
   I would lean towards just setting isIdentity to false for that case,
 but I could go either way. If authors try really hard to shoot themselves
 in the foot, they can.
  
   Rob
 
   Just as a comparison: Gecko checks for IsIdentity 75 times (excluding the
 definitions in matrix and matrix4x4). Every time the values are simply
 checked for 0 and 1. Means Gecko is shooting itself in the foot quite often
 :P. (In WebKit it is about ~70 times as well.)
 
  The question is not that 'isIdentity' is bad. Benoit's issue was that
 checking for 'isIdentity' after doing transformations might cause jittery
  results (i.e. switch to true or false depending on the conditions).
   Quickly scanning Mozilla's codebase, our current definition of
 'isIdentity' would return the correct result in all cases.
 

 Just take the first result of many:

  static PathInterpolationResult
  CanInterpolate(const SVGPathDataAndInfo& aStart,
                 const SVGPathDataAndInfo& aEnd)
  {
    if (aStart.IsIdentity()) {
      return eCanInterpolate;
    }
  …

 Where can you guarantee that you don’t see jittering? aStart could be
 modified


That one does not check for an identity matrix:

  /**
   * Returns true if this object is an identity value, from the perspective
   * of SMIL. In other words, returns true until the initial value set up in
   * SVGPathSegListSMILType::Init() has been changed with a SetElement() call.
   */
  bool IsIdentity() const {
    if (!mElement) {
      NS_ABORT_IF_FALSE(IsEmpty(), "target element propagation failure");
      return true;
    }
    return false;
  }

Maybe you and I should take this offline...


Re: Intent to implement: DOMMatrix

2014-06-06 Thread Rik Cabanier
On Fri, Jun 6, 2014 at 8:57 AM, Milan Sreckovic msrecko...@mozilla.com
wrote:

 On Jun 5, 2014, at 18:34 , Rik Cabanier caban...@gmail.com wrote:

 On Thu, Jun 5, 2014 at 3:28 PM, Robert O'Callahan rob...@ocallahan.org
 wrote:

 On Fri, Jun 6, 2014 at 9:07 AM, Rik Cabanier caban...@gmail.com wrote:

 ...



 Then there's this case:
 var m = new DOMMatrix();
 m.translate(-1,-1);
 m.translate(1,1);
 m.isIdentity() == false

 I'm OK with that. Maybe we do need a better name though. Invert the
 meaning and call it maybeHasTransform()?


 That sounds good to me.


 That just feels very wrong.  I understand not having an isIdentity()
 method as Benoit proposes.  The argument being “is identity question is
 more complicated than you think, so we won’t let you ask it, and instead
 you have to do it manually, which means you understand what’s going on”.

 I don’t understand having isIdentity() method and having it return false
 when you actually have an identity transform.  If it was
 “hasBeenModified()” or some such, I can understand having it behave that
 way.


I could live with that name as well. The problem is what 'modified' means.

var m = new DOMMatrix(2, 0, 0, 1, 0, 0);

m.hasBeenModified(); // ?


I've been thinking more and I'm leaning back towards isIdentity.


 I’d much rather have “isIdentityExactly()” or isCloseToIdentity(float
 tolerance)” for a given definition of tolerance.  Or not have it at all and
 write the JS utility myself.


Yes, you can do this yourself. You should ask yourself, though, whether you
really need to... As Benoit said, this might cause inconsistent behavior.
Moreover, exact identity matrices are very rare, so you should ask yourself
whether the fixed cost of always checking for true identity is worth it.
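For what it's worth, such a tolerance check is only a few lines of author code. A sketch (the plain-object matrix shape `{a, b, c, d, e, f}` is an assumption for illustration, mirroring SVGMatrix's attribute names, not part of any proposed API):

```javascript
// Author-side utility: tolerance-based identity check for a 2D affine
// matrix {a, b, c, d, e, f}.
function isCloseToIdentity(m, tolerance) {
  return Math.abs(m.a - 1) <= tolerance &&
         Math.abs(m.b)     <= tolerance &&
         Math.abs(m.c)     <= tolerance &&
         Math.abs(m.d - 1) <= tolerance &&
         Math.abs(m.e)     <= tolerance &&
         Math.abs(m.f)     <= tolerance;
}

// A computed matrix may carry tiny residues that an exact comparison
// would reject:
const m = { a: 1, b: 0, c: 0, d: 1, e: 1e-12, f: -1e-12 };
isCloseToIdentity(m, 1e-6); // true
isCloseToIdentity(m, 0);    // false
```

As the thread notes, the hard part is not writing this utility but picking a tolerance that is right for a particular application.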


Re: Intent to implement: DOMMatrix

2014-06-06 Thread Rik Cabanier
On Fri, Jun 6, 2014 at 12:18 PM, Kip Gilbert kgilb...@mozilla.com wrote:

 Hello,

 From a game programmer's perspective, isIdentity would normally be used to
 cull out work.  In these cases, it is expected that isIdentity will return
 true only when the matrix is exactly equal to the identity matrix.  Due to
 the nature of floating point, the isIdentity will only likely be true when
 it has been initialized or assigned identity directly.  This is what occurs
 in existing game / math libraries.


Yes, this is why storing it as an internal flag is the correct solution.
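A minimal sketch of the flag approach (class and method names are hypothetical, and only translation is shown):

```javascript
// Track identity with a flag that is cleared on the first effective
// mutation, instead of comparing elements on every isIdentity() call.
class TrackedMatrix {
  constructor() {
    // 2D affine components, starting at identity.
    this.a = 1; this.b = 0; this.c = 0;
    this.d = 1; this.e = 0; this.f = 0;
    this._pristine = true; // never left the identity state
  }
  translate(tx, ty) {
    if (tx !== 0 || ty !== 0) {
      this._pristine = false;
      this.e += this.a * tx + this.c * ty;
      this.f += this.b * tx + this.d * ty;
    }
    return this;
  }
  isIdentity() { return this._pristine; }
}

const m = new TrackedMatrix();
m.translate(0, 0).isIdentity();  // true: a no-op leaves the flag set
m.translate(-1, -1).translate(1, 1);
m.isIdentity();                  // false, even though e and f are back to 0
```

This is exactly the trade-off the thread keeps circling: the flag makes the check O(1), but a round-trip transformation reports non-identity even when the elements are numerically back at identity.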


 If we allow some amount of tolerance in the comparison, the issue is of
 how much tolerance is right for a particular application.  One application
 may benefit from having small amount of variance as it may be rotating
 items around a pivot that is close to their origins with units that are
 screen space coordinates; however, another application may use a pivot /
 origin that has a real-world relation to the objects represented in a
 scene.  The same amount of variance that would allow an unnoticeable
 sub-pixel difference in the first case would result in an object not
 appearing at all in the second case.  IMHO, acceptable level of tolerance
 is something that the platform should never dictate and that users of the
 library would never expect.  It is considered safe to error on the side of
 returning false but never safe to return true when the value is not exactly
 identity.

 Another use case is physics calculation.  The vast majority of simulated
 objects will be at their resting position until they are hit by a collider
 in which case their transform becomes non-identity.  The moving objects
 live for a short time until they respawn in which case their transform is
 set back to the origin with the assignment of an identity matrix.
  Expensive calculations such as IK (Inverse Kinematics) chains are executed
 only on the objects that are not at rest and thus are transformed by a
 matrix that is not identity.

 tl;dr..  I would only see real-world benefit in a simple IsIdentity
 function which returns true only when the matrix is exactly identity.


I agree. At this point, it's clear that isIdentity() is too confusing and
doesn't do what its name implies.
Let's go with roc's proposal and replace the API with 'maybeHasTransform'.



  It is a good point that checking all 16 elements every time is costly. But
 that is exactly what authors would expect the UA to do.

 I still don’t buy the argument with unexpected results though. We never
 can guarantee exact results. 1 divided by 3 doesn’t give exact results but
 at least interoperable results as long as platforms follow IEEE. This is
 not the case for trigonometric functions. It is not possible to guarantee
 that sin(Math.PI/3) is the same on all platforms since implementations vary
 across platforms. This of course affects DOMMatrix when one uses rotate. So
 none of the values can be guaranteed to be interoperable across all
 platforms. That means that isIdentity might not be guaranteed to give exact
 results either. And this usually is fine. If isIdentity does return false,
 well then the engine has to do a bit more work and can’t return earlier…
 That is what isIdentity is used for anyway. Make sure that engines don’t do
 unnecessary transformations.

 It is good that DOMMatrix can be extended by users to add this basic
 functionality that all drawing engines and browser engines use under the
 hood already.

 Greetings,
 Dirk



Re: Intent to implement: DOMMatrix

2014-06-06 Thread Rik Cabanier
On Fri, Jun 6, 2014 at 1:52 PM, Kip Gilbert kgilb...@mozilla.com wrote:


 On 2014-06-06, 1:23 PM, Rik Cabanier wrote:




 On Fri, Jun 6, 2014 at 12:18 PM, Kip Gilbert kgilb...@mozilla.com
 mailto:kgilb...@mozilla.com wrote:

 Hello,

 From a game programmer's perspective, isIdentity would normally be
 used to cull out work.  In these cases, it is expected that
 isIdentity will return true only when the matrix is exactly equal
 to the identity matrix.  Due to the nature of floating point, the
 isIdentity will only likely be true when it has been initialized
 or assigned identity directly.  This is what occurs in existing
 game / math libraries.


 Yes, this is why storing it as an internal flag is the correct solution.

 Perhaps you wish to use an internal flag as an optimization?  This is
 usually not necessary as the comparison function will usually early-out
 after the first row of comparisons.  If the matrix class is implemented
 with SIMD instructions, the comparison becomes even cheaper.  There is
 nothing inherently wrong with mirroring the equality with a separate flag;
 however, this is potentially unneeded complexity.


Since a DOMMatrix can be 2x3 or 4x4, it will store a pointer to either type:

gfx::Matrix*  mMatrix2D;
gfx::Matrix4x4*   mMatrix3D;


The check then becomes:

bool
DOMMatrixReadOnly::maybeHasTransform() const
{
  return (mMatrix2D != nullptr) || (mMatrix3D != nullptr);
}


 If we allow some amount of tolerance in the comparison, the issue
 is of how much tolerance is right for a particular application.
  One application may benefit from having small amount of variance
 as it may be rotating items around a pivot that is close to their
 origins with units that are screen space coordinates; however,
 another application may use a pivot / origin that has a real-world
 relation to the objects represented in a scene.  The same amount
 of variance that would allow an unnoticeable sub-pixel difference
 in the first case would result in an object not appearing at all
 in the second case.  IMHO, acceptable level of tolerance is
 something that the platform should never dictate and that users of
 the library would never expect.  It is considered safe to error on
 the side of returning false but never safe to return true when the
 value is not exactly identity.

 Another use case is physics calculation.  The vast majority of
 simulated objects will be at their resting position until they are
 hit by a collider in which case their transform becomes
 non-identity.  The moving objects live for a short time until they
 respawn in which case their transform is set back to the origin
 with the assignment of an identity matrix.  Expensive calculations
 such as IK (Inverse Kinematics) chains are executed only on the
 objects that are not at rest and thus are transformed by a matrix
 that is not identity.

 tl;dr..  I would only see real-world benefit in a simple
 IsIdentity function which returns true only when the matrix is
 exactly identity.


 I agree. At this point, it's clear that isIdentity() is too confusing and
 doesn't do what its name implies.
 Let's go with roc's proposal and replace the API with 'maybeHasTransform'.

  I don't believe isIdentity to be confusing.  For example, the Unity3D
 Matrix4x4 class also has an isIdentity function:

 http://docs.unity3d.com/ScriptReference/Matrix4x4-isIdentity.html

 Given the estimated 2 million developers using Unity, there seems to be a
 lack of issues raised about the isIdentity function.


The issue is not that isIdentity() is confusing. The problem is that you
shouldn't make decisions based on it. From earlier in the thread:

The isIdentity() method has the same issue as was described about is2D()
above: as matrices get computed, they are going to jump unpredictably
between being exactly identity and not. People using isIdentity() to jump
between code paths are going to get unexpected jumps between code paths
i.e. typically performance cliffs, or worse if they start asserting that a
matrix should or should not be exactly identity. For that reason, I would
remove the isIdentity method.


Re: Intent to implement: DOMMatrix

2014-06-06 Thread Rik Cabanier
On Fri, Jun 6, 2014 at 2:10 PM, Neil n...@parkwaycc.co.uk wrote:

 Rik Cabanier wrote:

  1. isIdentity()
 We settled that this should mean that the matrix was never changed to a
 non identity state.

  Are you doing something similar for the 2d/3d case?


Yes, once a matrix becomes 3D, it will always be a 3D matrix.
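A sketch of that "sticky" behavior (names are hypothetical; only the z-translation is modeled):

```javascript
// Once any 3D operation touches the matrix, is2D stays false for good,
// even if the values later fall back into the 2D plane.
class StickyMatrix {
  constructor() {
    this.is2D = true;
    this.z = 0; // only the z translation is modeled here
  }
  translate3d(tx, ty, tz) {
    this.is2D = false; // promotion to 3D is permanent
    this.z += tz;
    return this;
  }
}

const m = new StickyMatrix();
m.translate3d(0, 0, 5).translate3d(0, 0, -5);
// m.z is back to 0, but m.is2D remains false
```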


Re: Intent to implement: DOMMatrix

2014-06-06 Thread Rik Cabanier
On Fri, Jun 6, 2014 at 2:34 PM, Dirk Schulze dschu...@adobe.com wrote:


 On Jun 6, 2014, at 11:22 PM, Rik Cabanier caban...@gmail.com wrote:

 
  The issue is not that isIdentity() is confusing. The problem is that
 you
  shouldn't make decisions based on it. From earlier in the thread:
 
  The isIdentity() method has the same issue as was described about is2D()
  above: as matrices get computed, they are going to jump unpredictably
  between being exactly identity and not. People using isIdentity() to jump
  between code paths are going to get unexpected jumps between code paths
  i.e. typically performance cliffs, or worse if they start asserting that
 a
  matrix should or should not be exactly identity. For that reason, I would
  remove the isIdentity method.

 And as real world examples show (browser implementations, graphic
 libraries and even game engines). This is not really an issue in practice.


It *is* an issue. Check all the cases in WebKit where it checks for
almost-identity.


Re: Intent to implement: DOMMatrix

2014-06-05 Thread Rik Cabanier
On Wed, Jun 4, 2014 at 5:47 PM, Benoit Jacob jacob.benoi...@gmail.com
wrote:




 2014-06-04 20:28 GMT-04:00 Cameron McCormack c...@mcc.id.au:

 On 05/06/14 07:20, Milan Sreckovic wrote:

 In general, is “this is how it worked with SVGMatrix” one of the
 design principles?

 I was hoping this would be the time matrix rotate() method goes to
 radians, like the canvas rotate, and unlike SVGMatrix version that
 takes degrees...


 By the way, in the SVG Working Group we have been discussing (but haven't
 decided yet) whether to perform a wholesale overhaul of the SVG DOM.

 http://dev.w3.org/SVG/proposals/improving-svg-dom/

 If we go through with that, then we could drop SVGMatrix and use
 DOMMatrix (which wouldn't then need to be compatible with SVGMatrix) for
 all the SVG DOM methods we wanted to retain that deal with matrices. I'm
 hoping we'll resolve whether to go ahead with this at our next meeting, in
 August.


 Thanks, that's very interesting input in this thread, as the entire
 conversation here has been based on the axiom that we have to keep
 compatibility with SVGMatrix...


As Dirk says, we can't throw SVGMatrix away.
The rewrite of SVG is just a proposal from Cameron at the moment. Nobody
has signed off on its implementation and we don't know if other browsers
will buy into implementing a new interface while maintaining the old one.


Re: Intent to implement: DOMMatrix

2014-06-05 Thread Rik Cabanier
On Wed, Jun 4, 2014 at 2:20 PM, Milan Sreckovic msrecko...@mozilla.com
wrote:

 In general, is “this is how it worked with SVGMatrix” one of the design
 principles?

 I was hoping this would be the time matrix rotate() method goes to
 radians, like the canvas rotate, and unlike SVGMatrix version that takes
 degrees...


Degrees are easier for authors to understand.
With the new DOMMatrix constructor, you can specify radians:

var m = new DOMMatrix('rotate(1.75rad)');

Not specifying the unit will make it default to degrees (like angles in SVG).
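For authors who do want radians, the conversion is a one-liner (plain Math, no DOMMatrix involved):

```javascript
// Degree/radian conversion helpers.
const degToRad = deg => deg * Math.PI / 180;
const radToDeg = rad => rad * 180 / Math.PI;

degToRad(180);             // ≈ Math.PI
radToDeg(Math.PI / 2);     // ≈ 90
radToDeg(1.75).toFixed(2); // "100.27" — the angle in the example above
```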


Re: Intent to implement: DOMMatrix

2014-06-05 Thread Rik Cabanier
On Thu, Jun 5, 2014 at 5:05 AM, Benoit Jacob jacob.benoi...@gmail.com
wrote:




 2014-06-05 2:48 GMT-04:00 Rik Cabanier caban...@gmail.com:




 On Wed, Jun 4, 2014 at 2:20 PM, Milan Sreckovic msrecko...@mozilla.com
 wrote:

 In general, is “this is how it worked with SVGMatrix” one of the design
 principles?

 I was hoping this would be the time matrix rotate() method goes to
 radians, like the canvas rotate, and unlike SVGMatrix version that takes
 degrees...


 degrees is easier to understand for authors.
 With the new DOMMatrix constructor, you can specify radians:

 var m = new DOMMatrix('rotate(1.75rad)');

 Not specifying the unit will make it default to degrees (like angles in
 SVG)



 The situation isn't symmetric: radians are inherently simpler to implement
 (thus slightly faster), basically because only in radians is it true that
 sin(x) ~= x for small x.

 I also doubt that degrees are simpler to understand, and if anything you
 might just want to provide a simple name for the constant 2*pi:

 var turn = Math.PI * 2;

 Now, what is easier to understand:

 rotate(turn / 5)

 or

 rotate(72)


The numbers don't lie :-)
Just do a Google search for 'CSS transform rotate'. I went over 20 pages of
results and they all used 'deg'.


Re: Intent to implement: DOMMatrix

2014-06-05 Thread Rik Cabanier
On Thu, Jun 5, 2014 at 7:08 AM, Benoit Jacob jacob.benoi...@gmail.com
wrote:




 2014-06-05 9:08 GMT-04:00 Rik Cabanier caban...@gmail.com:




 On Thu, Jun 5, 2014 at 5:05 AM, Benoit Jacob jacob.benoi...@gmail.com
 wrote:




 2014-06-05 2:48 GMT-04:00 Rik Cabanier caban...@gmail.com:




 On Wed, Jun 4, 2014 at 2:20 PM, Milan Sreckovic msrecko...@mozilla.com
  wrote:

 In general, is “this is how it worked with SVGMatrix” one of the
 design principles?

 I was hoping this would be the time matrix rotate() method goes to
 radians, like the canvas rotate, and unlike SVGMatrix version that takes
 degrees...


 degrees is easier to understand for authors.
 With the new DOMMatrix constructor, you can specify radians:

  var m = new DOMMatrix('rotate(1.75rad)');

 Not specifying the unit will make it default to degrees (like angles in
 SVG)



 The situation isn't symmetric: radians are inherently simpler to
 implement (thus slightly faster), basically because only in radians is it
 true that sin(x) ~= x for small x.

 I also doubt that degrees are simpler to understand, and if anything you
 might just want to provide a simple name for the constant 2*pi:

 var turn = Math.PI * 2;

 Now, what is easier to understand:

 rotate(turn / 5)

 or

 rotate(72)


 The numbers don't lie :-)
 Just do a google search for CSS transform rotate. I went over 20 pages
 of results and they all used deg.


 The other problem is that outside of SVG, other parts of the platform that
 are being proposed to use SVGMatrix were using radians. For example, the
 Canvas 2D context uses radians


 http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#dom-context-2d-rotate

 Not to mention that JavaScript also uses radians, e.g. in Math.cos().


DOMMatrix is designed for interaction with CSS and SVG, both of which use
degrees predominantly.
It's a bit weird that Canvas 2D decided to use radians since it's not
consistent with the platform. Google 'canvas rotate radians' to see all the
Stack Overflow and blog posts on how to use it with degrees.


Re: Intent to implement: DOMMatrix

2014-06-05 Thread Rik Cabanier
On Wed, Jun 4, 2014 at 12:21 AM, Dirk Schulze dschu...@adobe.com wrote:


 On Jun 4, 2014, at 12:42 AM, Rik Cabanier caban...@gmail.com wrote:

 
 
 
  On Tue, Jun 3, 2014 at 3:29 PM, Robert O'Callahan rob...@ocallahan.org
 wrote:
  On Wed, Jun 4, 2014 at 10:26 AM, Rik Cabanier caban...@gmail.com
 wrote:
  That would require try/catch around all the invert() calls. This is
 ugly
  but more importantly, it will significantly slow down javascript
 execution.
  I'd prefer that we don't throw at all but we have to because SVGMatrix
 did.
 
  Are you sure that returning a special value (e.g. all NaNs) would not
 fix more code than it would break?
 
  No, I'm not sure :-)
  It is very likely that people are just calling invert and don't check
 for the exception. Returning NaN might indeed make thing more stable.
 
  I think returning all NaNs instead of throwing would be much better
 behavior.
 
  I completely agree.
  Going with Benoit's suggestion, we can change the idl for invert to:
  bool invert();
  and change inverse to return a matrix with NaN for all its elements.

 I don’t think that this is really getting the point. You seem to have the
 assumption that this is the most common pattern:

 if (matrix.isInvertible())
 matrix.invert();

 It isn’t in browser engines and I don’t think it will be in web
 applications. The actual use case would be to stop complex computations and
 graphical drawing operations right away if the author knows that nothing
 will be drawn anyway.

 function drawComplexStuff() {
 if (!matrix.isInvertible())
 return; // Stop here before doing complex stuff.
 … complex stuff...
 }


I think the argument is that you just do the complex logic regardless of
whether the matrix is invertible.
I did some sleuthing on GitHub [1] into how people use 'inverse' and could
find very few places that actually check whether a matrix is invertible. It
seems the vast majority don't catch the exception, so the program crashes.

I only found 2 instances where invertibility is checked.
One is in Google's Closure library [2] and the other is in another library.
I could not find any instances of people actually calling these methods.
Mozilla's codebase doesn't seem to do anything special for non-invertible
matrices. WebKit does what you describe, though (mainly in the canvas code).

Given this, let's stay with the decision to leave out 'isInvertible()'.
People can polyfill it, and we can always add it later if needed.
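Under the all-NaN behavior discussed earlier in the thread, the author-side check becomes a simple NaN test instead of a try/catch. A sketch with a stand-in 2D matrix class (Mat2D is not a real API; it only mimics the proposed inverse() behavior):

```javascript
// Stand-in 2D affine matrix [a c e; b d f; 0 0 1] whose inverse()
// returns an all-NaN matrix when the matrix is singular, instead of
// throwing.
class Mat2D {
  constructor(a, b, c, d, e, f) {
    Object.assign(this, { a, b, c, d, e, f });
  }
  inverse() {
    const det = this.a * this.d - this.b * this.c;
    if (det === 0) {
      return new Mat2D(NaN, NaN, NaN, NaN, NaN, NaN);
    }
    return new Mat2D(
      this.d / det, -this.b / det,
      -this.c / det, this.a / det,
      (this.c * this.f - this.d * this.e) / det,
      (this.b * this.e - this.a * this.f) / det
    );
  }
}

const singular = new Mat2D(0, 0, 0, 0, 0, 0);
const inv = singular.inverse();
Number.isNaN(inv.a); // true: one NaN check, no exception handling required
```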


 There was an argument that:

 if (matrix.isInvertible())
 matrix.invert();

 would force UAs to compute the determinant twice. Actually, UAs can be
 very smart about that. The determinant is a simple double. It can be stored
 and invalidated as needed internally. (If it even turns out to be an issue.)
 I don't think that the argument about numerical stability counts either.
 If the determinant is not exactly 0, then the matrix is invertible. It
 doesn’t really matter if it is a result of numerical precision or not.


Caching the determinant will be much slower because it will force us to add
an internal flag that will need to be checked every time you change the
matrix.
It would also make the DOMMatrix object bigger by the size of the flag and
a double.


 To get back to

 bool invert()
 DOMMatrix inverse()

 invert() does a matrix inversion in place. So it is not particularly
 useful as a simple check for singularity.
 inverse() currently throws an exception. If it doesn’t anymore, then
 authors need to know that they need to check the elements of DOMMatrix for
 NaN. On the other hand, relying on exception checking isn’t great either.
 Both make life more difficult for authors.

 So I am not arguing that inverse() must throw and I do not argue that
 invert() should return a boolean. Changing this is fine with me. I am
 arguing that isInvertible() makes a lot of sense. Why wouldn’t it on the
 web platform if it is useful in our engines? determinant() is a way to
 check for singularity. Having either determinant() or isInvertible() or
 both makes a lot of sense to me. determinant() will be used internally a
 lot anyway. Being smarter and store the result of determinant() should
 solve some of the concerns.


1:
https://github.com/search?l=JavaScript&p=1&q=svg+matrix+inverse&ref=advsearch&type=Code

2: https://github.com/google/closure-library


Re: Intent to implement: DOMMatrix

2014-06-05 Thread Rik Cabanier
It seems like we're reaching agreement. (Please tell me if I'm wrong
about this.)
There are 2 things that I have questions about:
1. isIdentity()
We settled that this should mean that the matrix was never changed to a
non-identity state.
This means that the following code:

var m = new DOMMatrix();

m.rotate(0);

m.isIdentity() == false; //!

Given this, I feel that maybe we should rename it to 'hasChanged' or
'isInitial'.

2. xxxBy
DOMMatrix contains a bunch of xxxBy methods (i.e. translateBy, rotateBy) that
apply the transformation to the object itself.
Benoit was confused by them and I agree that the name is not ideal. Should we
rename them to 'xxxInPlace'?

Thoughts?
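Whatever the final names, the distinction being debated is returning a new matrix versus mutating in place. A toy sketch (only the x-translation is modeled; the class is hypothetical):

```javascript
// Contrast the two method families: non-mutating vs in-place.
class Mat {
  constructor(e = 0) { this.e = e; }  // only track the x-translation
  translate(tx)   { return new Mat(this.e + tx); } // returns a new matrix
  translateBy(tx) { this.e += tx; return this; }   // mutates in place
}

const m = new Mat();
const m2 = m.translate(10); // m unchanged, m2.e === 10
m.translateBy(10);          // m itself modified, m.e === 10
```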


Re: Intent to implement: DOMMatrix

2014-06-05 Thread Rik Cabanier
On Thu, Jun 5, 2014 at 2:14 PM, Dirk Schulze dschu...@adobe.com wrote:


 On Jun 5, 2014, at 11:07 PM, Rik Cabanier caban...@gmail.com wrote:

  It seems like we're getting to agreement. (Please tell me if I'm wrong
 about this)
  There are 2 things that I have questions about:
  1. isIdentity()
  We settled that this should mean that the matrix was never changed to a
 non identity state.
  This means that the following code:
  var m = new DOMMatrix();
  m.rotate(0);
  m.isIdentity() == false; //!
  Given this, I feel that maybe we should rename it to hasChanged or
 isInitial,

 How would that be useful for authors? hasChanged or isInitial wouldn’t
 reveal any information.


It would. For instance:

var m = getMatrixFromCSSOM();

if (m.hasChanged()) {
  // apply matrix
}



 The idea of isIdentity is to know if the transformation matrix will have
 any effect. If we should not be able to check this then it should
 definitely not be named isIdentity but moreover: it seems to be irrelevant.


Benoit already went over this earlier in this thread:

The isIdentity() method has the same issue as was described about is2D()
above: as matrices get computed, they are going to jump unpredicably
between being exactly identity and not. People using isIdentity() to jump
between code paths are going to get unexpected jumps between code paths
i.e. typically performance cliffs, or worse if they start asserting that a
matrix should or should not be exactly identity. For that reason, I would
remove the isIdentity method.



  2. xxxby
  DOMMatrix contains a bunch of xxxby methods (ie translateBy, rotateBy)
 that apply the transformation to the object itself.
  Benoit was confused by it and I agree that the name is not ideal. Should
 we rename it to InPlace’ ?

 It is less likely that authors are willing to write translateInPlace then
 translateBy IMO.


translateMe?


Re: Intent to implement: DOMMatrix

2014-06-05 Thread Rik Cabanier
On Thu, Jun 5, 2014 at 3:28 PM, Robert O'Callahan rob...@ocallahan.org
wrote:

 On Fri, Jun 6, 2014 at 9:07 AM, Rik Cabanier caban...@gmail.com wrote:

 There are 2 things that I have questions about:
 1. isIdentity()
 We settled that this should mean that the matrix was never changed to a
 non
 identity state.
 This means that the following code:

 var m = new DOMMatrix();

 m.rotate(0);

 m.isIdentity() == false; //!

 Given this, I feel that maybe we should rename it to hasChanged or
 isInitial,


 We can define rotate(v) to set isIdentity to false if v != 0.0 (and
 similarly for other methods such as translate). Then, in your case,
 isIdentity would still be true. That was my original intent.


Works for me. This is how I implemented it in Mozilla (except for the
rotate part, which I will address next).


 Then there's this case:
 var m = new DOMMatrix();
 m.translate(-1,-1);
 m.translate(1,1);
 m.isIdentity() == false

 I'm OK with that. Maybe we do need a better name though. Invert the
 meaning and call it maybeHasTransform()?


That sounds good to me.


Re: Intent to implement: DOMMatrix

2014-06-05 Thread Rik Cabanier
On Thu, Jun 5, 2014 at 9:40 PM, Dirk Schulze dschu...@adobe.com wrote:


 On Jun 6, 2014, at 6:27 AM, Robert O'Callahan rob...@ocallahan.org
 wrote:

  On Fri, Jun 6, 2014 at 4:22 PM, Dirk Schulze dschu...@adobe.com wrote:
  What about
 
  DOMMatrix(1,0,0,1,0,0) or
  DOMMatrix(1,0,0,0,0,1,0,0,0,0,1,0,0,0,0,1)
 
  Do we check the values and determine if the matrix is identity or not?
 If we do, then authors could write DOMMatrix(other.a, other.b, other.c,
 other.d, other.e, other.f) and check any kind of matrix after transforming
 for identity. In this case, a real isIdentity check wouldn’t be worst IMO.
 
  I would lean towards just setting isIdentity to false for that case, but
 I could go either way. If authors try really hard to shoot themselves in
 the foot, they can.
 
  Rob

 Just as a comparison: Gecko checks for IsIdentity 75 times (excluding the
 definitions in matrix and matrix4x4). Every time the values are simply
 checked for 0 and 1. Means Gecko is shooting itself in the foot quite often
 :P. (In WebKit it is about ~70 times as well.)


The question is not that 'isIdentity' is bad. Benoit's issue was that
checking for 'isIdentity' after doing transformations might cause jittery
results (i.e. switch to true or false depending on the conditions).
Quickly scanning Mozilla's codebase, our current definition of 'isIdentity'
would return the correct result in all cases.


Re: Intent to implement: DOMMatrix

2014-06-03 Thread Rik Cabanier
On Tue, Jun 3, 2014 at 6:13 AM, Benoit Jacob jacob.benoi...@gmail.com
wrote:




 2014-06-02 23:45 GMT-04:00 Rik Cabanier caban...@gmail.com:

 To recap I think the following points have been resolved:
 - remove determinant (unless someone comes up with a strong use case)
 - change is2D() so it's a flag instead of calculated on the fly
 - change isIdentity() so it's a flag.
 - update constructors so they set/copy the flags appropriately

 Still up for discussion:
 - rename isIdentity
 - come up with better way for the in-place transformations as opposed to
 by
 - is premultiply needed?



 This list misses some of the points that I care more about:
  - Should DOMMatrix really try to be both 3D projective transformations
 and 2D affine transformations or should that be split into separate classes?


Yes, DOMMatrix reflects the transform of DOM elements so it should contain
both.
I think the underlying implementation should move to use a 2d or 3d matrix
so it avoids the pitfalls you mentioned.


  - Should we really take SVG's matrix and other existing bad matrix APIs
 and bless them and engrave them in the marble of The New HTML5 That Is Good
 By Definition?


The platform is not helped by having multiple objects doing the same thing.
We *could* support but deprecate the old API if people feel that having
better names is worth the effort.


Re: Intent to implement: DOMMatrix

2014-06-03 Thread Rik Cabanier
On Tue, Jun 3, 2014 at 6:06 AM, Benoit Jacob jacob.benoi...@gmail.com
wrote:




 2014-06-03 3:34 GMT-04:00 Dirk Schulze dschu...@adobe.com:


 On Jun 2, 2014, at 12:11 AM, Benoit Jacob jacob.benoi...@gmail.com
 wrote:

  Objection #6:
 
  The determinant() method, being in this API the only easy way to get
  something that looks roughly like a measure of invertibility, will
 probably
  be (mis-)used as a measure of invertibility. So I'm quite confident
 that it
  has a strong mis-use case. Does it have a strong good use case? Does it
  outweigh that? Note that if the intent is precisely to offer some kind
 of
  measure of invertibility, then that is yet another thing that would be
 best
  done with a singular values decomposition (along with solving, and with
  computing a polar decomposition, useful for interpolating matrices), by
  returning the ratio between the lowest and the highest singular value.

 Looking at use cases, then determinant() is indeed often used for:

 * Checking if a matrix is invertible.
 * Part of actually inverting the matrix.
 * Part of some decomposing algorithms as the one in CSS Transforms.

 I should note that the determinant is the most common way to check for
 invertibility of a matrix and part of actually inverting the matrix. Even
 Cairo Graphics, Skia and Gecko’s representation of matrix3x3 do use the
 determinant for these operations.


 I didn't say that determinant had no good use case. I said that it had
 more bad use cases than it had good ones. If its only use case if checking
 whether the cofactors formula will succeed in computing the inverse, then
 make that part of the inversion API so you don't compute the determinant
 twice.

 Here is a good use case of determinant, except it's bad because it
 computes the determinant twice:

  if (matrix.determinant() != 0) { // once
    result = matrix.inverse();     // twice
  }

 If that's the only thing we use the determinant for, then we're better
 served by an API like this, allowing to query success status:

  var matrixInversionResult = matrix.inverse(); // once
  if (matrixInversionResult.invertible()) {
    result = matrixInversionResult.inverse();
  }

 Bad uses of the determinant as a measure of invertibility typically occur
 in conjunction with people thinking they do the right thing with fuzzy
 compares, as in this typical bad pattern:

  if (matrix.determinant() < 1e-6) {
    return error;
  }
  result = matrix.inverse();

 Multiple things are wrong here:

  1. First, as mentioned above, the determinant is being computed twice
 here.

  2. Second, floating-point scale invariance is broken: floating point
 computations should generally work for all values across the whole exponent
 range, which for doubles goes from 1e-300 to 1e+300 roughly. Take the
 matrix that's 0.01*identity, and suppose we're dealing with 4x4 matrices.
 The determinant of that matrix is 1e-8, so that matrix is incorrectly
 treated as non-invertible here.

  3. Third, if the primary use for the determinant is invertibility and
 inversion is implemented by cofactors (as it would be for 4x4 matrices)
 then in that case only an exact comparison of the determinant to 0 is
 relevant. That's a case where no fuzzy comparison is meaningful. If one
 wanted to guard against cancellation-induced imprecision, one would have to
 look at cofactors themselves, not just at the determinant.

 In full generality, the determinant is just the volume of the unit cube
 under the matrix transformation. It is exactly zero if and only if the
 matrix is singular. That doesn't by itself give any interpretation of other
 nonzero values of the determinant, not even very small ones.

 For special classes of matrices, things are different. Some classes of
 matrices have a specific determinant, for example rotations have
 determinant one, which can be used to do useful things. So in a
 sufficiently advanced or specialized matrix API, the determinant is useful
 to expose. DOMMatrix is special in that it is not advanced and not
 specialized.
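
 The scale-invariance point above can be checked numerically. A minimal
 sketch (a plain helper function for illustration, not the DOMMatrix API):

 ```javascript
 // The determinant of a diagonal 4x4 matrix is the product of its
 // diagonal entries, so det(0.01 * identity) is about 1e-8 even though
 // the matrix is perfectly invertible (its inverse is 100 * identity).
 function detDiagonal4(d) {
   return d[0] * d[1] * d[2] * d[3];
 }

 const det = detDiagonal4([0.01, 0.01, 0.01, 0.01]);
 console.log(det < 1e-6);   // true: a fixed 1e-6 cutoff wrongly rejects it
 console.log(det !== 0);    // true: the exact-zero test correctly accepts it
 ```

 Any fixed epsilon threshold fails the same way for some uniform scale,
 which is why only the exact comparison to 0 is meaningful here.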


I agree with your points. Let's drop determinant for now.
If authors start to demand it, we can add it back in later.
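
For reference, the success-reporting inversion API discussed above could be
sketched on a toy 2x2 matrix like this (the `invertible()`/`inverse()` names
follow the pseudocode in the quoted message; the 2x2 math is purely
illustrative, not DOMMatrix):

```javascript
// Toy 2x2 matrix inversion (row-major [a, b, c, d]) that reports
// success instead of requiring a separate determinant check, so the
// determinant is only ever computed once.
function inverse2x2(m) {
  const det = m[0] * m[3] - m[1] * m[2];   // computed exactly once
  if (det === 0) {
    return { invertible: () => false, inverse: () => null };
  }
  const inv = [m[3] / det, -m[1] / det, -m[2] / det, m[0] / det];
  return { invertible: () => true, inverse: () => inv };
}

const result = inverse2x2([2, 0, 0, 2]);
if (result.invertible()) {
  console.log(result.inverse());  // inverse of 2*identity is 0.5*identity
}
```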
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: DOMMatrix

2014-06-03 Thread Rik Cabanier
On Tue, Jun 3, 2014 at 6:06 AM, Benoit Jacob jacob.benoi...@gmail.com
wrote:




 2014-06-03 3:34 GMT-04:00 Dirk Schulze dschu...@adobe.com:


 On Jun 2, 2014, at 12:11 AM, Benoit Jacob jacob.benoi...@gmail.com
 wrote:

  Objection #6:
 
  The determinant() method, being in this API the only easy way to get
  something that looks roughly like a measure of invertibility, will
 probably
  be (mis-)used as a measure of invertibility. So I'm quite confident
 that it
  has a strong mis-use case. Does it have a strong good use case? Does it
  outweigh that? Note that if the intent is precisely to offer some kind
 of
  measure of invertibility, then that is yet another thing that would be
 best
  done with a singular values decomposition (along with solving, and with
  computing a polar decomposition, useful for interpolating matrices), by
  returning the ratio between the lowest and the highest singular value.

 Looking at use cases, then determinant() is indeed often used for:

 * Checking if a matrix is invertible.
 * Part of actually inverting the matrix.
 * Part of some decomposing algorithms as the one in CSS Transforms.

 I should note that the determinant is the most common way to check for
 invertibility of a matrix and part of actually inverting the matrix. Even
 Cairo Graphics, Skia and Gecko’s representation of matrix3x3 do use the
 determinant for these operations.


 I didn't say that determinant had no good use case. I said that it had
 more bad use cases than it had good ones. If its only use case is checking
 whether the cofactors formula will succeed in computing the inverse, then
 make that part of the inversion API so you don't compute the determinant
 twice.

 Here is a good use case of determinant, except it's bad because it
 computes the determinant twice:

   if (matrix.determinant() != 0) { // once
     result = matrix.inverse();     // twice
   }

 If that's the only thing we use the determinant for, then we're better
 served by an API like this, allowing to query success status:

   var matrixInversionResult = matrix.inverse();   // once
   if (matrixInversionResult.invertible()) {
     result = matrixInversionResult.inverse();
   }


This seems to be the main use case for Determinant(). Any objections if we
add isInvertible to DOMMatrixReadOnly?


 Typical bad uses of the determinant as measures of invertibility
 typically occur in conjunction with people thinking they do the right thing
 with fuzzy compares, like this typical bad pattern:

   if (matrix.determinant() < 1e-6) {
     return error;
   }
   result = matrix.inverse();

 Multiple things are wrong here:

  1. First, as mentioned above, the determinant is being computed twice
 here.

  2. Second, floating-point scale invariance is broken: floating point
 computations should generally work for all values across the whole exponent
 range, which for doubles goes from 1e-300 to 1e+300 roughly. Take the
 matrix that's 0.01*identity, and suppose we're dealing with 4x4 matrices.
 The determinant of that matrix is 1e-8, so that matrix is incorrectly
 treated as non-invertible here.

  3. Third, if the primary use for the determinant is invertibility and
 inversion is implemented by cofactors (as it would be for 4x4 matrices)
 then in that case only an exact comparison of the determinant to 0 is
 relevant. That's a case where no fuzzy comparison is meaningful. If one
 wanted to guard against cancellation-induced imprecision, one would have to
 look at cofactors themselves, not just at the determinant.

 In full generality, the determinant is just the volume of the unit cube
 under the matrix transformation. It is exactly zero if and only if the
 matrix is singular. That doesn't by itself give any interpretation of other
 nonzero values of the determinant, not even very small ones.

 For special classes of matrices, things are different. Some classes of
 matrices have a specific determinant, for example rotations have
 determinant one, which can be used to do useful things. So in a
 sufficiently advanced or specialized matrix API, the determinant is useful
 to expose. DOMMatrix is special in that it is not advanced and not
 specialized.

 Benoit


 Greetings,
 Dirk





Re: Intent to implement: DOMMatrix

2014-06-03 Thread Rik Cabanier
On Tue, Jun 3, 2014 at 2:34 PM, Benoit Jacob jacob.benoi...@gmail.com
wrote:




 2014-06-03 16:20 GMT-04:00 Rik Cabanier caban...@gmail.com:




 On Tue, Jun 3, 2014 at 6:06 AM, Benoit Jacob jacob.benoi...@gmail.com
 wrote:




 2014-06-03 3:34 GMT-04:00 Dirk Schulze dschu...@adobe.com:


 On Jun 2, 2014, at 12:11 AM, Benoit Jacob jacob.benoi...@gmail.com
 wrote:

  Objection #6:
 
  The determinant() method, being in this API the only easy way to get
  something that looks roughly like a measure of invertibility, will
 probably
  be (mis-)used as a measure of invertibility. So I'm quite confident
 that it
  has a strong mis-use case. Does it have a strong good use case? Does
 it
  outweigh that? Note that if the intent is precisely to offer some
 kind of
  measure of invertibility, then that is yet another thing that would
 be best
  done with a singular values decomposition (along with solving, and
 with
  computing a polar decomposition, useful for interpolating matrices),
 by
  returning the ratio between the lowest and the highest singular value.

 Looking at use cases, then determinant() is indeed often used for:

 * Checking if a matrix is invertible.
 * Part of actually inverting the matrix.
 * Part of some decomposing algorithms as the one in CSS Transforms.

 I should note that the determinant is the most common way to check for
 invertibility of a matrix and part of actually inverting the matrix. Even
 Cairo Graphics, Skia and Gecko’s representation of matrix3x3 do use the
 determinant for these operations.


 I didn't say that determinant had no good use case. I said that it had
 more bad use cases than it had good ones. If its only use case is checking
 whether the cofactors formula will succeed in computing the inverse, then
 make that part of the inversion API so you don't compute the determinant
 twice.

 Here is a good use case of determinant, except it's bad because it
 computes the determinant twice:

   if (matrix.determinant() != 0) { // once
     result = matrix.inverse();     // twice
   }

 If that's the only thing we use the determinant for, then we're better
 served by an API like this, allowing to query success status:

   var matrixInversionResult = matrix.inverse();   // once
   if (matrixInversionResult.invertible()) {
     result = matrixInversionResult.inverse();
   }


 This seems to be the main use case for Determinant(). Any objections if
 we add isInvertible to DOMMatrixReadOnly?


 Can you give an example of how this API would be used and how it would
 *not* force the implementation to compute the determinant twice if people
 call isInvertible() and then inverse() ?


I cannot.
Calculating the determinant is fast though. I bet crossing the DOM boundary
is more expensive.


  Typical bad uses of the determinant as measures of invertibility
 typically occur in conjunction with people thinking they do the right thing
 with fuzzy compares, like this typical bad pattern:

   if (matrix.determinant() < 1e-6) {
     return error;
   }
   result = matrix.inverse();

 Multiple things are wrong here:

  1. First, as mentioned above, the determinant is being computed twice
 here.

  2. Second, floating-point scale invariance is broken: floating point
 computations should generally work for all values across the whole exponent
 range, which for doubles goes from 1e-300 to 1e+300 roughly. Take the
 matrix that's 0.01*identity, and suppose we're dealing with 4x4 matrices.
 The determinant of that matrix is 1e-8, so that matrix is incorrectly
 treated as non-invertible here.

  3. Third, if the primary use for the determinant is invertibility and
 inversion is implemented by cofactors (as it would be for 4x4 matrices)
 then in that case only an exact comparison of the determinant to 0 is
 relevant. That's a case where no fuzzy comparison is meaningful. If one
 wanted to guard against cancellation-induced imprecision, one would have to
 look at cofactors themselves, not just at the determinant.

 In full generality, the determinant is just the volume of the unit cube
 under the matrix transformation. It is exactly zero if and only if the
 matrix is singular. That doesn't by itself give any interpretation of other
 nonzero values of the determinant, not even very small ones.

 For special classes of matrices, things are different. Some classes of
 matrices have a specific determinant, for example rotations have
 determinant one, which can be used to do useful things. So in a
 sufficiently advanced or specialized matrix API, the determinant is useful
 to expose. DOMMatrix is special in that it is not advanced and not
 specialized.

 Benoit


 Greetings,
 Dirk







Re: Intent to implement: DOMMatrix

2014-06-03 Thread Rik Cabanier
On Tue, Jun 3, 2014 at 2:40 PM, Benoit Jacob jacob.benoi...@gmail.com
wrote:




 2014-06-03 17:34 GMT-04:00 Benoit Jacob jacob.benoi...@gmail.com:




 2014-06-03 16:20 GMT-04:00 Rik Cabanier caban...@gmail.com:




 On Tue, Jun 3, 2014 at 6:06 AM, Benoit Jacob jacob.benoi...@gmail.com
 wrote:




 2014-06-03 3:34 GMT-04:00 Dirk Schulze dschu...@adobe.com:


 On Jun 2, 2014, at 12:11 AM, Benoit Jacob jacob.benoi...@gmail.com
 wrote:

  Objection #6:
 
  The determinant() method, being in this API the only easy way to get
  something that looks roughly like a measure of invertibility, will
 probably
  be (mis-)used as a measure of invertibility. So I'm quite confident
 that it
  has a strong mis-use case. Does it have a strong good use case? Does
 it
  outweigh that? Note that if the intent is precisely to offer some
 kind of
  measure of invertibility, then that is yet another thing that would
 be best
  done with a singular values decomposition (along with solving, and
 with
  computing a polar decomposition, useful for interpolating matrices),
 by
  returning the ratio between the lowest and the highest singular
 value.

 Looking at use cases, then determinant() is indeed often used for:

 * Checking if a matrix is invertible.
 * Part of actually inverting the matrix.
 * Part of some decomposing algorithms as the one in CSS Transforms.

 I should note that the determinant is the most common way to check for
 invertibility of a matrix and part of actually inverting the matrix. Even
 Cairo Graphics, Skia and Gecko’s representation of matrix3x3 do use the
 determinant for these operations.


 I didn't say that determinant had no good use case. I said that it had
 more bad use cases than it had good ones. If its only use case is checking
 whether the cofactors formula will succeed in computing the inverse, then
 make that part of the inversion API so you don't compute the determinant
 twice.

 Here is a good use case of determinant, except it's bad because it
 computes the determinant twice:

   if (matrix.determinant() != 0) { // once
     result = matrix.inverse();     // twice
   }

 If that's the only thing we use the determinant for, then we're better
 served by an API like this, allowing to query success status:

   var matrixInversionResult = matrix.inverse();   // once
   if (matrixInversionResult.invertible()) {
     result = matrixInversionResult.inverse();
   }


 This seems to be the main use case for Determinant(). Any objections if
 we add isInvertible to DOMMatrixReadOnly?


 Can you give an example of how this API would be used and how it would
 *not* force the implementation to compute the determinant twice if people
 call isInvertible() and then inverse() ?


 Actually, inverse() is already spec'd to throw if the inversion fails. In
 that case (assuming we keep it that way) there is no need at all for any
 isInvertible kind of method. Note that in floating-point arithmetic there
 is no absolute notion of invertibility; there just are different matrix
 inversion algorithms each failing on different matrices, so invertibility
 only makes sense with respect to one inversion algorithm, so it is actually
 better to keep the current exception-throwing API than to introduce a
 separate isInvertible getter.


That would require try/catch around all the invert() calls. This is ugly
but more importantly, it will significantly slow down javascript execution.
I'd prefer that we don't throw at all but we have to because SVGMatrix did.



 Typical bad uses of the determinant as measures of invertibility
 typically occur in conjunction with people thinking they do the right thing
 with fuzzy compares, like this typical bad pattern:

   if (matrix.determinant() < 1e-6) {
     return error;
   }
   result = matrix.inverse();

 Multiple things are wrong here:

  1. First, as mentioned above, the determinant is being computed twice
 here.

  2. Second, floating-point scale invariance is broken: floating point
 computations should generally work for all values across the whole exponent
 range, which for doubles goes from 1e-300 to 1e+300 roughly. Take the
 matrix that's 0.01*identity, and suppose we're dealing with 4x4 matrices.
 The determinant of that matrix is 1e-8, so that matrix is incorrectly
 treated as non-invertible here.

  3. Third, if the primary use for the determinant is invertibility and
 inversion is implemented by cofactors (as it would be for 4x4 matrices)
 then in that case only an exact comparison of the determinant to 0 is
 relevant. That's a case where no fuzzy comparison is meaningful. If one
 wanted to guard against cancellation-induced imprecision, one would have to
 look at cofactors themselves, not just at the determinant.

 In full generality, the determinant is just the volume of the unit cube
 under the matrix transformation. It is exactly zero if and only if the
 matrix is singular. That doesn't by itself give any interpretation of other
 nonzero values of the determinant

Re: Intent to implement: DOMMatrix

2014-06-03 Thread Rik Cabanier
On Tue, Jun 3, 2014 at 3:29 PM, Robert O'Callahan rob...@ocallahan.org
wrote:

 On Wed, Jun 4, 2014 at 10:26 AM, Rik Cabanier caban...@gmail.com wrote:

 That would require try/catch around all the invert() calls. This is ugly
 but more importantly, it will significantly slow down javascript
 execution.
 I'd prefer that we don't throw at all but we have to because SVGMatrix
 did.


 Are you sure that returning a special value (e.g. all NaNs) would not fix
 more code than it would break?


No, I'm not sure :-)
It is very likely that people are just calling invert and don't check for
the exception. Returning NaN might indeed make things more stable.

I think returning all NaNs instead of throwing would be much better
 behavior.


I completely agree.
Going with Benoit's suggestion, we can change the idl for invert to:

bool invert();

and change inverse to return a matrix with NaN for all its elements.
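
A rough sketch of the proposed behavior, on a hypothetical toy class (the
real interface would be spec'd in IDL; the 2x2 math is only illustrative):

```javascript
// invert() mutates in place and returns a success flag; inverse()
// never throws, returning a matrix of all NaNs when inversion fails.
class ToyMatrix2x2 {
  constructor(a, b, c, d) { this.m = [a, b, c, d]; }
  invert() {
    const [a, b, c, d] = this.m;
    const det = a * d - b * c;
    if (det === 0) {
      this.m = [NaN, NaN, NaN, NaN];
      return false;                // singular: signal failure
    }
    this.m = [d / det, -b / det, -c / det, a / det];
    return true;
  }
  inverse() {
    const copy = new ToyMatrix2x2(...this.m);
    copy.invert();                 // on failure, copy is all NaNs
    return copy;
  }
}

const singular = new ToyMatrix2x2(1, 2, 2, 4).inverse();
console.log(Number.isNaN(singular.m[0]));  // true — and no exception thrown
```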


Re: Intent to implement: DOMMatrix

2014-06-03 Thread Rik Cabanier
On Tue, Jun 3, 2014 at 3:48 PM, Till Schneidereit t...@tillschneidereit.net
 wrote:

 On Wed, Jun 4, 2014 at 12:26 AM, Rik Cabanier caban...@gmail.com wrote:

  Actually, inverse() is already spec'd to throw if the inversion fails.
 In
  that case (assuming we keep it that way) there is no need at all for any
  isInvertible kind of method. Note that in floating-point arithmetic
 there
  is no absolute notion of invertibility; there just are different matrix
  inversion algorithms each failing on different matrices, so
 invertibility
  only makes sense with respect to one inversion algorithm, so it is
 actually
  better to keep the current exception-throwing API than to introduce a
  separate isInvertible getter.
 

 That would require try/catch around all the invert() calls. This is ugly
 but more importantly, it will significantly slow down javascript
 execution.
 I'd prefer that we don't throw at all but we have to because SVGMatrix
 did.


 That isn't really true in modern engines. Just having a try/catch doesn't
 meaningfully slow down code anymore. If an exception is actually thrown, a
 (very) slow path is taken, but otherwise things are good.

 (I can only say this with certainty about SpiderMonkey and V8, but would
 assume that other engines behave similarly. And even if not, it doesn't
 make sense to make decisions like this based on their current performance
 characteristics.)


Interesting!
I wrote a small experiment: http://jsfiddle.net/G83mW/14/
Gecko is indeed impervious but Chrome, Safari and IE are not. V8 is between
4 and 6 times slower if there's a try/catch.

I agree that we shouldn't make decision on the current state. (FWIW I think
that exceptions should only be used for exceptional cases.)
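
A minimal version of such an experiment looks like this (the absolute
timings depend entirely on the engine and its version; only the shape of
the comparison matters, not the numbers):

```javascript
// Micro-benchmark sketch: the same arithmetic loop with and without a
// try/catch that never actually throws.
function plainLoop(n) {
  let sum = 0;
  for (let i = 0; i < n; i++) sum += i * 3;
  return sum;
}

function tryCatchLoop(n) {
  let sum = 0;
  for (let i = 0; i < n; i++) {
    try { sum += i * 3; } catch (e) { /* never taken */ }
  }
  return sum;
}

const N = 1e6;
console.time('plain');     plainLoop(N);     console.timeEnd('plain');
console.time('try/catch'); tryCatchLoop(N);  console.timeEnd('try/catch');
console.log(plainLoop(N) === tryCatchLoop(N));  // true: same result either way
```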


Re: Intent to implement: DOMMatrix

2014-06-02 Thread Rik Cabanier
On Mon, Jun 2, 2014 at 8:04 AM, Martin Thomson m...@mozilla.com wrote:

 On 2014-05-30, at 21:00, Benoit Jacob jacob.benoi...@gmail.com wrote:

  2x3 matrices
  representing affine 2D transformations; this mode switch corresponds to
 the
  is2D() getter

 Am I the only one that finds this method entirely unintuitive?  After
 looking at only the IDL, admittedly, is2D() === true.

 Is the name intended to convey the fact that this is a highly specialised
 matrix, because it doesn’t really do that for me.


It conveys that this is a 2D matrix and that you can ignore the 3D
components.


Re: Intent to implement: DOMMatrix

2014-06-02 Thread Rik Cabanier
On Mon, Jun 2, 2014 at 10:56 AM, Nick Alexander nalexan...@mozilla.com
wrote:

 On 2014-06-02, 9:59 AM, Rik Cabanier wrote:




 On Mon, Jun 2, 2014 at 9:05 AM, Nick Alexander nalexan...@mozilla.com
 mailto:nalexan...@mozilla.com wrote:

 On 2014-06-02, 4:59 AM, Robert O'Callahan wrote:

 On Mon, Jun 2, 2014 at 3:19 PM, Rik Cabanier caban...@gmail.com
 mailto:caban...@gmail.com wrote:

 isIdentity() indeed suffers from rounding errors but since
 it's useful, I'm
 hesitant to remove it.
 In our rendering libraries at Adobe, we check if a matrix is
 *almost*
 identity. Maybe we can do the same here?


 One option would be to make isIdentity and is2D state bits
 in the
 object rather than predicates on the matrix coefficients. Then
 for each
 matrix operation, we would define how it affects the isIdentity
 and is2D
 bits. For example we could say translate(tx, ty, tz)'s result
 isIdentity if
 and only if the source matrix isIdentity and tx, ty and tz are
 all exactly
 0.0, and the result is2D if and only if the source matrix is2D
 and tz is
 exactly 0.0.

 With that approach, isIdentity and is2D would be much less
 sensitive to
 precision issues. In particular they'd be independent of the
 precision used
 to compute and store matrix elements, which would be
 helpful I think.


 I agree that most mathematical ways of determining a matrix (as a
 rotation, or a translation, etc) come with isIdentity for free; but
 are most matrices derived from some underlying transformation, or
 are they given as a list of coefficients?


 You can do it either way. Here are the constructors:
 http://dev.w3.org/fxtf/geometry/#dom-dommatrix-dommatrix

 So you can do:

 var m = new DOMMatrix(); // identity = true, 2d = true
 var m = new DOMMatrix("translate(20 20) scale(4 4) skewX"); // identity = depends, 2d = depends
 var m = new DOMMatrix(otherdommatrix); // identity = inherited, 2d = inherited
 var m = new DOMMatrix([a, b, c, d, e, f]); // identity = depends, 2d = true
 var m = new DOMMatrix([m11, m12, ... m44]); // identity = depends, 2d = depends

 If the latter, the isIdentity flag needs to be determined by the
 constructor, or fed as a parameter.  Exactly how does the
 constructor determine the parameter?  Exactly how does the user?


 The constructor would check the incoming parameters as defined:

 http://dev.w3.org/fxtf/geometry/#dom-dommatrixreadonly-is2d
 http://dev.w3.org/fxtf/geometry/#dom-dommatrixreadonly-isidentity


 Thanks for providing these references.  As an aside -- it worries me that
 these are defined rather differently:  is2d says are equal to 0, while
 isIdentity says are '0'.  Is this a syntactic or a semantic difference?


It looks like an oversight. I'll ask Dirk to update it.


 But, to the point, the idea of carrying around the isIdentity flag is
 looking bad, because we either have that A*A.inverse() will never have
 isIdentity() == true; or we promote the idiom that to check for identity,
 one always creates a new DOMMatrix, so that the constructor determines
 isIdentity, and then we query it.  This is no better than just having
 isIdentity do the (badly-rounded) check.

 Nick



Re: Intent to implement: DOMMatrix

2014-06-02 Thread Rik Cabanier
On Mon, Jun 2, 2014 at 11:08 AM, Benoit Jacob jacob.benoi...@gmail.com
wrote:




 2014-06-02 14:06 GMT-04:00 Benoit Jacob jacob.benoi...@gmail.com:




 2014-06-02 13:56 GMT-04:00 Nick Alexander nalexan...@mozilla.com:

 On 2014-06-02, 9:59 AM, Rik Cabanier wrote:




 On Mon, Jun 2, 2014 at 9:05 AM, Nick Alexander nalexan...@mozilla.com
 mailto:nalexan...@mozilla.com wrote:

 On 2014-06-02, 4:59 AM, Robert O'Callahan wrote:

 On Mon, Jun 2, 2014 at 3:19 PM, Rik Cabanier 
 caban...@gmail.com
 mailto:caban...@gmail.com wrote:

 isIdentity() indeed suffers from rounding errors but since
 it's useful, I'm
 hesitant to remove it.
 In our rendering libraries at Adobe, we check if a matrix is
 *almost*
 identity. Maybe we can do the same here?


 One option would be to make isIdentity and is2D state bits
 in the
 object rather than predicates on the matrix coefficients. Then
 for each
 matrix operation, we would define how it affects the isIdentity
 and is2D
 bits. For example we could say translate(tx, ty, tz)'s result
 isIdentity if
 and only if the source matrix isIdentity and tx, ty and tz are
 all exactly
 0.0, and the result is2D if and only if the source matrix is2D
 and tz is
 exactly 0.0.

 With that approach, isIdentity and is2D would be much less
 sensitive to
 precision issues. In particular they'd be independent of the
 precision used
 to compute and store matrix elements, which would be
 helpful I think.


 I agree that most mathematical ways of determining a matrix (as a
 rotation, or a translation, etc) come with isIdentity for free; but
 are most matrices derived from some underlying transformation, or
 are they given as a list of coefficients?


 You can do it either way. Here are the constructors:
 http://dev.w3.org/fxtf/geometry/#dom-dommatrix-dommatrix

 So you can do:

 var m = new DOMMatrix(); // identity = true, 2d = true
 var m = new DOMMatrix("translate(20 20) scale(4 4) skewX"); // identity = depends, 2d = depends
 var m = new DOMMatrix(otherdommatrix); // identity = inherited, 2d = inherited
 var m = new DOMMatrix([a, b, c, d, e, f]); // identity = depends, 2d = true
 var m = new DOMMatrix([m11, m12, ... m44]); // identity = depends, 2d = depends

 If the latter, the isIdentity flag needs to be determined by the
 constructor, or fed as a parameter.  Exactly how does the
 constructor determine the parameter?  Exactly how does the user?


 The constructor would check the incoming parameters as defined:

 http://dev.w3.org/fxtf/geometry/#dom-dommatrixreadonly-is2d
 http://dev.w3.org/fxtf/geometry/#dom-dommatrixreadonly-isidentity


 Thanks for providing these references.  As an aside -- it worries me
 that these are defined rather differently:  is2d says are equal to 0,
 while isIdentity says are '0'.  Is this a syntactic or a semantic
 difference?

 But, to the point, the idea of carrying around the isIdentity flag is
 looking bad, because we either have that A*A.inverse() will never have
 isIdentity() == true; or we promote the idiom that to check for identity,
 one always creates a new DOMMatrix, so that the constructor determines
 isIdentity, and then we query it.  This is no better than just having
 isIdentity do the (badly-rounded) check.


 The way that propagating an is identity flag is better than determining
 that from the matrix coefficients, is that it's predictable. People are
 going to have matrices that are the result of various arithmetic
 operations, that are close to identity but most of the time not exactly
 identity. On these matrices, I would like isIdentity() to consistently
 return false, instead of returning false 99.99% of the time and then
 suddenly accidentally returning true when a little miracle happens and a
 matrix happens to be exactly identity.


 ...but, to not lose sight of what I really want:  I am still not convinced
 that we should have a isIdentity() method at all, and by default I would
 prefer no such method to exist. I was only saying the above _if_ we must
 have a isIdentity method.


Scanning through the mozilla codebase, IsIdentity is used to make decisions
if objects were transformed. This seems to match how we use Identity()
internally.
Since this seems useful for native applications, there's no reason why this
wouldn't be the case for the web platform (aka blink's rational web
platform principle). If for some reason the author *really* wants to know
if the matrix is identity, he can calculate it manually.

I would be fine with keeping this as an internal flag and defining this
behavior normative.
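
The propagated-flag approach under discussion could be sketched like this
(a hypothetical toy class, not the spec'd DOMMatrix):

```javascript
// isIdentity as a propagated state bit: each operation decides the
// result's flag from the inputs' flags and the exact arguments, rather
// than re-checking coefficients after rounding.
class ToyMatrix {
  constructor(coeffs, isIdentity) {
    this.coeffs = coeffs;          // 2D affine [a, b, c, d, e, f]
    this._isIdentity = isIdentity;
  }
  static identity() {
    return new ToyMatrix([1, 0, 0, 1, 0, 0], true);
  }
  translate(tx, ty) {
    const [a, b, c, d, e, f] = this.coeffs;
    return new ToyMatrix(
      [a, b, c, d, e + tx, f + ty],
      // result is identity only if the source was AND both offsets are
      // exactly 0.0 — no fuzzy coefficient comparison involved.
      this._isIdentity && tx === 0 && ty === 0
    );
  }
  isIdentity() { return this._isIdentity; }
}

const m = ToyMatrix.identity().translate(5, 0).translate(-5, 0);
console.log(m.isIdentity());  // false: the flag propagates, even though
                              // the coefficients are back to identity
```

This makes the predicate predictable: a matrix that merely lands on the
identity by arithmetic accident still reports false, which is the
behavior Benoit argues for above.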

Re: Intent to implement: DOMMatrix

2014-06-02 Thread Rik Cabanier
On Mon, Jun 2, 2014 at 3:03 PM, Benoit Jacob jacob.benoi...@gmail.com
wrote:




 2014-06-02 17:13 GMT-04:00 Rik Cabanier caban...@gmail.com:




 On Mon, Jun 2, 2014 at 11:08 AM, Benoit Jacob jacob.benoi...@gmail.com
 wrote:




 2014-06-02 14:06 GMT-04:00 Benoit Jacob jacob.benoi...@gmail.com:




 2014-06-02 13:56 GMT-04:00 Nick Alexander nalexan...@mozilla.com:

 On 2014-06-02, 9:59 AM, Rik Cabanier wrote:




 On Mon, Jun 2, 2014 at 9:05 AM, Nick Alexander 
 nalexan...@mozilla.com
 mailto:nalexan...@mozilla.com wrote:

 On 2014-06-02, 4:59 AM, Robert O'Callahan wrote:

 On Mon, Jun 2, 2014 at 3:19 PM, Rik Cabanier 
 caban...@gmail.com
 mailto:caban...@gmail.com wrote:

 isIdentity() indeed suffers from rounding errors but since
 it's useful, I'm
 hesitant to remove it.
 In our rendering libraries at Adobe, we check if a matrix
 is
 *almost*
 identity. Maybe we can do the same here?


 One option would be to make isIdentity and is2D state bits
 in the
 object rather than predicates on the matrix coefficients. Then
 for each
 matrix operation, we would define how it affects the
 isIdentity
 and is2D
 bits. For example we could say translate(tx, ty, tz)'s result
 isIdentity if
 and only if the source matrix isIdentity and tx, ty and tz are
 all exactly
 0.0, and the result is2D if and only if the source matrix is2D
 and tz is
 exactly 0.0.

 With that approach, isIdentity and is2D would be much less
 sensitive to
 precision issues. In particular they'd be independent of the
 precision used
 to compute and store matrix elements, which would be
 helpful I think.


 I agree that most mathematical ways of determining a matrix (as a
 rotation, or a translation, etc) come with isIdentity for free;
 but
 are most matrices derived from some underlying transformation, or
 are they given as a list of coefficients?


 You can do it either way. Here are the constructors:
 http://dev.w3.org/fxtf/geometry/#dom-dommatrix-dommatrix

 So you can do:

 var m = new DOMMatrix(); // identity = true, 2d = true
 var m = new DOMMatrix("translate(20 20) scale(4 4) skewX"); // identity = depends, 2d = depends
 var m = new DOMMatrix(otherdommatrix); // identity = inherited, 2d = inherited
 var m = new DOMMatrix([a, b, c, d, e, f]); // identity = depends, 2d = true
 var m = new DOMMatrix([m11, m12, ... m44]); // identity = depends, 2d = depends

 If the latter, the isIdentity flag needs to be determined by the
 constructor, or fed as a parameter.  Exactly how does the
 constructor determine the parameter?  Exactly how does the user?


 The constructor would check the incoming parameters as defined:

 http://dev.w3.org/fxtf/geometry/#dom-dommatrixreadonly-is2d
 http://dev.w3.org/fxtf/geometry/#dom-dommatrixreadonly-isidentity


 Thanks for providing these references.  As an aside -- it worries me
 that these are defined rather differently:  is2d says are equal to 0,
 while isIdentity says are '0'.  Is this a syntactic or a semantic
 difference?

 But, to the point, the idea of carrying around the isIdentity flag
 is looking bad, because we either have that A*A.inverse() will never have
 isIdentity() == true; or we promote the idiom that to check for identity,
 one always creates a new DOMMatrix, so that the constructor determines
 isIdentity, and then we query it.  This is no better than just having
 isIdentity do the (badly-rounded) check.


 The way that propagating an is identity flag is better than
 determining that from the matrix coefficients, is that it's predictable.
 People are going to have matrices that are the result of various arithmetic
 operations, that are close to identity but most of the time not exactly
 identity. On these matrices, I would like isIdentity() to consistently
 return false, instead of returning false 99.99% of the time and then
 suddenly accidentally returning true when a little miracle happens and a
 matrix happens to be exactly identity.


 ...but, to not lose sight of what I really want:  I am still not
 convinced that we should have a isIdentity() method at all, and by default
 I would prefer no such method to exist. I was only saying the above _if_ we
 must have a isIdentity method.


 Scanning through the mozilla codebase, IsIdentity is used to make
 decisions if objects were transformed. This seems to match how we use
 Identity() internally.
  Since this seems useful for native applications, there's no reason why
 this wouldn't be the case for the web platform (aka blink's rational web
 platform principle). If for some reason the author *really* wants to know
 if the matrix is identity, he can calculate it manually.

 I would be fine with keeping this as an internal flag and defining

Re: Intent to implement: DOMMatrix

2014-06-02 Thread Rik Cabanier
On Mon, Jun 2, 2014 at 7:07 PM, Robert O'Callahan rob...@ocallahan.org
wrote:

 Off the top of my head, the places in Gecko I know of that use isIdentity
 or is2D fall into two categories:
 1) math performance optimizations
 2) (is2D only) we're going to take an implementation approach that only
 works for 2D affine transforms, and either a) there is no support for 3D
 perspective transforms at all, or b) that support is implemented very
 differently (e.g. transforming Bezier control points vs rendering to a
 bitmap and applying 3D transform to that).

 For category #1 we can perhaps avoid having Web developers call
 isIdentity/is2D, by optimizing internally. We simply haven't bothered to do
 those optimizations in Gecko matrix classes, we let the callers do it (but
 perhaps we should reconsider that).


Yes, isIdentity is used as an indication that nothing needs to be done or
that the transform hasn't changed.
Maybe we should rename it to isDefault, isInitial or isNoOp?


 For category #2, I can't see a way around exposing is2D to the Web in some
 form.


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: DOMMatrix

2014-06-02 Thread Rik Cabanier
To recap I think the following points have been resolved:
- remove determinant (unless someone comes up with a strong use case)
- change is2D() so it's a flag instead of calculated on the fly
- change isIdentity() so it's a flag.
- update constructors so they set/copy the flags appropriately

Still up for discussion:
- rename isIdentity
- come up with better way for the in-place transformations as opposed to
by
- is premultiply needed?


Re: Intent to implement: DOMMatrix

2014-06-02 Thread Rik Cabanier
On Mon, Jun 2, 2014 at 8:32 PM, Robert O'Callahan rob...@ocallahan.org
wrote:

 On Tue, Jun 3, 2014 at 3:28 PM, Rik Cabanier caban...@gmail.com wrote:

 Yes, isIdentity is used as an indication that nothing needs to be done or
 that the transform hasn't changed.
 Maybe we should rename it to isDefault, isInitial or isNoOp?


 I think isIdentity is the right name.

 Glancing through Gecko I see places where we're able to entirely skip
 possibly-expensive transformation steps if we have an identity matrix, so I
 guess isIdentity is useful to have since even if we make
 multiplication-by-identity free, checking isIdentity might mean you can
 completely avoid traversing some application data structure.


Yes, that's what I'm seeing in WebKit and Blink as well.
For instance:
const AffineTransform transform = context->getCTM();
if (m_shadowsIgnoreTransforms && !transform.isIdentity()) {
    FloatQuad transformedPolygon = transform.mapQuad(FloatQuad(shadowedRect));
    transformedPolygon.move(m_offset);
    layerRect = transform.inverse().mapQuad(transformedPolygon).boundingBox();
} else {
    layerRect = shadowedRect;
    layerRect.move(m_offset);
}


and:
if (!currentTransform.isIdentity()) {
    FloatPoint3D absoluteAnchorPoint(anchorPoint());
    absoluteAnchorPoint.scale(size().width(), size().height(), 1);
    transform.translate3d(absoluteAnchorPoint.x(), absoluteAnchorPoint.y(),
        absoluteAnchorPoint.z());
    transform.multiply(currentTransform);
    transform.translate3d(-absoluteAnchorPoint.x(), -absoluteAnchorPoint.y(),
        -absoluteAnchorPoint.z());
}


Re: Intent to implement: DOMMatrix

2014-06-01 Thread Rik Cabanier
On Sun, Jun 1, 2014 at 3:11 PM, Benoit Jacob jacob.benoi...@gmail.com
wrote:




 2014-05-31 0:40 GMT-04:00 Rik Cabanier caban...@gmail.com:

  Objection #3:

 I dislike the way that this API exposes multiplication order. It's not
 obvious enough which of A.multiply(B) and A.multiplyBy(B) is doing A=A*B
 and which is doing A=B*A.


 The by methods do the transformation in-place. In this case, both are A
 = A * B
 Maybe you're thinking of preMultiply?
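To make the ordering concrete, here is a minimal sketch (not the spec algorithm; the six-coefficient 2D layout and the `mul` helper are assumptions of this example) showing why A * B and B * A differ, which is what `multiply` versus `preMultiply` exposes:

```javascript
// Illustrative only: 2D affine matrices as [a, b, c, d, e, f], i.e.
//   | a c e |
//   | b d f |
//   | 0 0 1 |
// mul(A, B) computes the product A * B, the order of A.multiply(B);
// A.preMultiply(B) would then correspond to mul(B, A).
function mul(A, B) {
  const [a1, b1, c1, d1, e1, f1] = A;
  const [a2, b2, c2, d2, e2, f2] = B;
  return [
    a1 * a2 + c1 * b2,
    b1 * a2 + d1 * b2,
    a1 * c2 + c1 * d2,
    b1 * c2 + d1 * d2,
    a1 * e2 + c1 * f2 + e1,
    b1 * e2 + d1 * f2 + f1,
  ];
}

const scale2 = [2, 0, 0, 2, 0, 0];   // scale by 2
const move10 = [1, 0, 0, 1, 10, 0];  // translate x by 10

mul(scale2, move10); // [2, 0, 0, 2, 20, 0] -- the translation gets scaled
mul(move10, scale2); // [2, 0, 0, 2, 10, 0] -- it does not
```

The two results differ only in the translation column, which is exactly the kind of subtle bug an unclear multiply/preMultiply naming invites.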


 Ah, I was totally confused by the method names. Multiply is already a
 verb, and the method name multiply already implicitly means multiply
 *by*. So it's very confusing that there is another method named multiplyBy.


Yeah, we had discussion on that. 'by' is not ideal, but it is much shorter
than 'InPlace'. Do you have a suggestion to improve the name?


 Methods on DOMMatrixReadOnly are inconsistently named: some, like
 multiply, are named after the /verb/ describing what they /do/, while
 others, like inverse, are named after the /noun/ describing what they
 /return/.

 Choose one and stick to it; my preference goes to the latter, i.e. rename
 multiply to product in line with the existing inverse and then the
 DOMMatrix.multiplyBy method can drop the By and become multiply.

 If you do rename multiply to product that leads to the question of
 what preMultiply should become.

 In an ideal world (not commenting on whether that's a thing we can get on
 the Web), product would be a global function, not a class method, so you
 could let people write product(X, Y) or product(Y, X) and not have to worry
 about naming differently the two product orders.


Unfortunately, we're stuck with the API names that SVG gave to its matrix.
The only way to fix this is to duplicate the API and support both old and
new names, which is very confusing.


  Objection #4:

 By exposing a inverse() method but no solve() method, this API will
 encourage people who have to solve linear systems to do so by doing
 matrix.inverse().transformPoint(...), which is inefficient and can be
 numerically unstable.

 But then of course once we open the pandora box of exposing solvers, the
 API grows a lot more. My point is not to suggest to grow the API more. My
 point is to discourage you and the W3C from getting into the matrix API
 design business. Matrix APIs are bound to either grow big or be useless. I
 believe that the only appropriate Matrix interface at the Web API level is
 a plain storage class, with minimal getters (basically a thin wrapper
 around a typed array without any nontrivial arithmetic built in).


 We already went over this at length about a year ago.
 Dirk's been asking for feedback on this interface on www-style and
 public-fx so can you raise your concerns there? Just keep in mind that we
 have to support the SVGMatrix and CSSMatrix interfaces.


 My ROI for arguing about matrix math topics on standards mailing lists has
 been very low, presumably because these are specialist topics outside of
 the area of expertise of these groups.


It is a constant struggle. We need to strike a balance between
mathematicians and average authors. Stay with it and prepare to repeat
yourself; it's frustrating for everyone involved.
If you really don't want to participate anymore, we can get to an agreement
here and I can try to convince the others.


 Here are a couple more objections by the way:

 Objection #5:

 The isIdentity() method has the same issue as was described about is2D()
 above: as matrices get computed, they are going to jump unpredictably
 between being exactly identity and not. People using isIdentity() to jump
 between code paths are going to get unexpected jumps between code paths
 i.e. typically performance cliffs, or worse if they start asserting that a
 matrix should or should not be exactly identity. For that reason, I would
 remove the isIdentity method.


isIdentity() indeed suffers from rounding errors but since it's useful, I'm
hesitant to remove it.
In our rendering libraries at Adobe, we check if a matrix is *almost*
identity. Maybe we can do the same here?
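A minimal sketch of what such an "almost identity" check could look like (the `isNearIdentity` helper and its tolerance are illustrative assumptions, not part of the Geometry spec):

```javascript
// Epsilon-based "almost identity" test on the six coefficients of a 2D
// matrix [a, b, c, d, e, f]. With eps = 0 this degenerates into the exact
// comparison the spec's isIdentity() would do; a small tolerance keeps the
// answer stable under accumulated rounding error.
function isNearIdentity(m, eps = 1e-6) {
  const identity = [1, 0, 0, 1, 0, 0];
  return m.every((v, i) => Math.abs(v - identity[i]) <= eps);
}

// A matrix produced by A * A.inverse() is rarely *exactly* identity:
const almost = [1, 0, 1.1e-16, 1, 0, -2.2e-16];
isNearIdentity(almost); // true with the default tolerance
almost.every((v, i) => v === [1, 0, 0, 1, 0, 0][i]); // exact check: false
```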


 Objection #6:

 The determinant() method, being in this API the only easy way to get
 something that looks roughly like a measure of invertibility, will probably
 be (mis-)used as a measure of invertibility. So I'm quite confident that it
 has a strong mis-use case. Does it have a strong good use case? Does it
 outweigh that? Note that if the intent is precisely to offer some kind of
 measure of invertibility, then that is yet another thing that would be best
 done with a singular values decomposition (along with solving, and with
 computing a polar decomposition, useful for interpolating matrices), by
 returning the ratio between the lowest and the highest singular value.

 Either that, or explain how tricky it is to correctly use the determinant
 in a measure of invertibility, and integrate a code example about that.


I don't know why the determinant method was added and I would be fine with
removing
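One way to illustrate the mis-use risk Benoit describes (a hypothetical sketch; `det2` and the row-major 2x2 layout are assumptions of this example, not spec behavior): the determinant scales with the matrix, so its magnitude alone cannot distinguish a small but perfectly well-conditioned transform from a genuinely near-singular one.

```javascript
// Determinant of a row-major 2x2 matrix [a, b, c, d]:
//   | a b |
//   | c d |
function det2(m) {
  const [a, b, c, d] = m;
  return a * d - b * c;
}

det2([1e-3, 0, 0, 1e-3]);  // ~1e-6: tiny det, but trivially invertible
det2([1, 1, 1, 1 + 1e-9]); // ~1e-9: tiny det, genuinely near-singular
// A naive `Math.abs(det) < eps` test rejects both in the same way, even
// though only the second matrix is numerically dangerous to invert.
```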

Intent to implement: DOMMatrix

2014-05-30 Thread Rik Cabanier
Primary eng emails
caban...@adobe.com, dschu...@adobe.com

*Proposal*
http://dev.w3.org/fxtf/geometry/#DOMMatrix

*Summary*
Expose new global objects named 'DOMMatrix' and 'DOMMatrixReadOnly' that
offer a matrix abstraction.

*Motivation*
The DOMMatrix and DOMMatrixReadOnly interfaces represent a mathematical
matrix with the purpose of describing transformations in a graphical
context. The following sections describe the details of the interface.
The DOMMatrix and DOMMatrixReadOnly interfaces replace the SVGMatrix
interface from SVG.

In addition, DOMMatrix will be part of CSSOM where it will simplify getting
and setting CSS transforms.

*Mozilla bug*
https://bugzilla.mozilla.org/show_bug.cgi?id=1018497
I will implement this behind the flag: layout.css.DOMMatrix

*Concerns*
None.
Mozilla already implemented DOMPoint and DOMQuad

*Compatibility Risk*
Blink: unknown
WebKit: in development [1]
Internet Explorer: No public signals
Web developers: unknown

1: https://bugs.webkit.org/show_bug.cgi?id=110001


Re: Intent to implement: DOMMatrix

2014-05-30 Thread Rik Cabanier
Since DOMMatrix is replacing SVGMatrix, I don't see a way to implement it
behind a flag.
Should I wait to make that change and leave both SVGMatrix and DOMMatrix in
the code for now?


On Fri, May 30, 2014 at 8:53 PM, Robert O'Callahan rob...@ocallahan.org
wrote:

 I'm all for it! :-)

 Rob
 --



Re: Intent to implement: DOMMatrix

2014-05-30 Thread Rik Cabanier
On Fri, May 30, 2014 at 9:00 PM, Benoit Jacob jacob.benoi...@gmail.com
wrote:

 I never seem to be able to discourage people from dragging the W3C into
 specialist topics that are outside its area of expertise. Let me try again.

 Objection #1:

 The skew* methods are out of place there, because, contrary to the rest,
 they are not geometric transformations, they are just arithmetic on matrix
 coefficients whose geometric impact depends entirely on the choice of a
 coordinate system. I'm afraid that leaving them there will propagate
 widespread confusion about skews --- see e.g. the authors of
 http://dev.w3.org/csswg/css-transforms/#matrix-interpolation who seemed
 to think that decomposing a matrix into a product of things including a
 skew would have geometric significance, leading to clearly unwanted
 behavior as demonstrated in
 http://people.mozilla.org/~bjacob/transform-animation-not-covariant.html


Many people think that the skew* methods were a mistake.
However, DOMMatrix is meant as a drop-in replacement for SVGMatrix which
unfortunately has these methods:
http://www.w3.org/TR/SVG11/coords.html#InterfaceSVGMatrix

I would note though that skewing is very popular among animators so I would
object to their removal.


 Objection #2:

 This DOMMatrix interface tries to be simultaneously about 4x4 matrices
 representing projective 3D transformations, and about 2x3 matrices
 representing affine 2D transformations; this mode switch corresponds to the
 is2D() getter. I have a long list of objections to this mode switch:
  - I believe that, being based on exact floating point comparisons, it is
 going to be fragile. For example, people will assert that !is2D() when they
 expect a 3D transformation, and that will intermittently fail when for
 whatever reason their 3D matrix is going to be exactly 2D.
  - I believe that these two classes of transformations (projective 3D and
 affine 2D) should be separate classes entirely, that that will make the API
 simpler and more efficiently implementable and that forcing authors to
 think about that choice more explicitly is doing them a favor.
  - I believe that that feature set, with this choice of two classes of
 transformations (projective 3D and affine 2D), is arbitrary and
 inconsistent. Why not support affine 3D or projective 2D, for instance?


These objections sound valid.
However WebKit, Blink and Microsoft already expose CSSMatrix that combines
a 4x4 and 2x3 matrix:
https://developer.apple.com/library/safari/documentation/AudioVideo/Reference/WebKitCSSMatrixClassReference/WebKitCSSMatrix/WebKitCSSMatrix.html
and it is used extensively by authors.
The spec is standardizing that existing class so we can remove the prefix.


 Objection #3:

 I dislike the way that this API exposes multiplication order. It's not
 obvious enough which of A.multiply(B) and A.multiplyBy(B) is doing A=A*B
 and which is doing A=B*A.


The by methods do the transformation in-place. In this case, both are A =
A * B
Maybe you're thinking of preMultiply?


 Objection #4:

 By exposing a inverse() method but no solve() method, this API will
 encourage people who have to solve linear systems to do so by doing
 matrix.inverse().transformPoint(...), which is inefficient and can be
 numerically unstable.

 But then of course once we open the pandora box of exposing solvers, the
 API grows a lot more. My point is not to suggest to grow the API more. My
 point is to discourage you and the W3C from getting into the matrix API
 design business. Matrix APIs are bound to either grow big or be useless. I
 believe that the only appropriate Matrix interface at the Web API level is
 a plain storage class, with minimal getters (basically a thin wrapper
 around a typed array without any nontrivial arithmetic built in).


We already went over this at length about a year ago.
Dirk's been asking for feedback on this interface on www-style and
public-fx so can you raise your concerns there? Just keep in mind that we
have to support the SVGMatrix and CSSMatrix interfaces.
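As a hedged illustration of the solve-versus-invert point above (the `solve2` helper is an assumption of this example, not part of the proposal): solving A x = b directly avoids forming the inverse at all. For a 2x2 system, Cramer's rule is enough to show the shape of such an API:

```javascript
// Solve the 2x2 linear system A x = b directly, where A is row-major
// [a, b, c, d] and b is [bx, by]. No intermediate inverse matrix is
// formed, unlike the matrix.inverse().transformPoint(...) idiom.
function solve2([a, b, c, d], [bx, by]) {
  const det = a * d - b * c;
  if (det === 0) throw new Error('singular system');
  return [(bx * d - b * by) / det, (a * by - bx * c) / det];
}

// x + 2y = 5, 3x + 4y = 6  =>  x = -4, y = 4.5
solve2([1, 2, 3, 4], [5, 6]); // [-4, 4.5]
```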



 2014-05-30 20:02 GMT-04:00 Rik Cabanier caban...@gmail.com:

 Primary eng emails
 caban...@adobe.com, dschu...@adobe.com

 *Proposal*
 http://dev.w3.org/fxtf/geometry/#DOMMatrix

 *Summary*
 Expose new global objects named 'DOMMatrix' and 'DOMMatrixReadOnly' that
 offer a matrix abstraction.

 *Motivation*

 The DOMMatrix and DOMMatrixReadOnly interfaces represent a mathematical
 matrix with the purpose of describing transformations in a graphical
 context. The following sections describe the details of the interface.
 The DOMMatrix and DOMMatrixReadOnly interfaces replace the SVGMatrix
 interface from SVG.

 In addition, DOMMatrix will be part of CSSOM where it will simplify
 getting
 and setting CSS transforms.

 *Mozilla bug*

 https://bugzilla.mozilla.org/show_bug.cgi?id=1018497
 I will implement this behind the flag: layout.css.DOMMatrix

 *Concerns*

 None.
 Mozilla already implemented

Re: Intent to implement and ship: navigator.hardwareConcurrency

2014-05-19 Thread Rik Cabanier
On Mon, May 19, 2014 at 6:35 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Mon, May 19, 2014 at 4:10 PM, Rik Cabanier caban...@gmail.com wrote:
  I don't see why the web platform is special here and we should trust that
  authors can do the right thing.

 I'm fairly sure people have already pointed this out to you. But the
 reason the web platform is different is that we allow
 arbitrary application logic to run on the user's device without any
 user opt-in.

 I.e. the web is designed such that it is safe for a user to go to any
 website without having to consider the risks of doing so.

 This is why we for example don't allow websites to have arbitrary
 read/write access to the user's filesystem. Something that all the
 other platforms that you have pointed out do.

 Those platforms instead rely on that users make a security decision
 before allowing any code to run. This has both advantages (easier to
 design APIs for those platforms) and disadvantages (malware is pretty
 prevalent on for example Windows)


I'm unsure what point you are trying to make.
This is not an API that exposes any more information than a user agent
sniffer can approximate.
It will just be more precise and less wasteful. For high-value systems
(= lots of cores), we intentionally limited the number of cores to 8. This
number of cores is very common and most applications won't see much use
above 8 anyway.


Re: Intent to implement and ship: navigator.hardwareConcurrency

2014-05-19 Thread Rik Cabanier
On Mon, May 19, 2014 at 6:46 PM, Benoit Jacob jacob.benoi...@gmail.com wrote:

 +1000! Thanks for articulating so clearly the difference between the
 Web-as-an-application-platform and other application platforms.


It really surprises me that you would make this objection.
WebGL certainly would *not* fall into this Web-as-an-application-platform
category since it exposes machine information [1] and is generally insecure
[2] according to Apple and (in the past) Microsoft.

Please note that I really like WebGL and am not worried about these issues.
Just pointing out your double standard.

1: http://renderingpipeline.com/webgl-extension-viewer/
2: http://lists.w3.org/Archives/Public/public-fx/2012JanMar/0136.html


 2014-05-19 21:35 GMT-04:00 Jonas Sicking jo...@sicking.cc:

  On Mon, May 19, 2014 at 4:10 PM, Rik Cabanier caban...@gmail.com
 wrote:
  I don't see why the web platform is special here and we should trust
 that
  authors can do the right thing.

 I'm fairly sure people have already pointed this out to you. But the
  reason the web platform is different is that we allow
  arbitrary application logic to run on the user's device without any
 user opt-in.

 I.e. the web is designed such that it is safe for a user to go to any
 website without having to consider the risks of doing so.

 This is why we for example don't allow websites to have arbitrary
 read/write access to the user's filesystem. Something that all the
 other platforms that you have pointed out do.

 Those platforms instead rely on that users make a security decision
 before allowing any code to run. This has both advantages (easier to
 design APIs for those platforms) and disadvantages (malware is pretty
 prevalent on for example Windows).

 / Jonas





Re: Intent to implement and ship: navigator.hardwareConcurrency

2014-05-18 Thread Rik Cabanier
FYI this attribute landed in WebKit today:
http://trac.webkit.org/changeset/169017


On Thu, May 15, 2014 at 1:26 AM, Rik Cabanier caban...@gmail.com wrote:




 On Wed, May 14, 2014 at 11:39 AM, Ehsan Akhgari 
 ehsan.akhg...@gmail.com wrote:

 On 2014-05-13, 9:01 PM, Rik Cabanier wrote:

 ...

 The problem is that the API doesn't really make it obvious that
 you're not supposed to take the value that the getter returns and
 just spawn N workers.  IOW, the API encourages the wrong behavior by
 design.


 That is simply untrue.


 I'm assuming that the goal of this API is to allow authors to spawn as
 many workers as possible so that they can exhaust all of the cores in the
 interest of finishing their computation faster.


 That is one way of using it but not the only one.
 For instance, let's say that I'm writing on a cooperative game. I might
 want to put all my network logic in a worker and want to make sure that
 worker is scheduled. This worker consumes little (if any) cpu, but I want
 it to be responsive.
 NumCores = 1 - do everything in the main thread and try to make sure the
 network code executes
 NumCores = 2 - spin up a worker for the network code. Everything else in
 the main thread
 NumCores = 3 - spin up a worker for the network code + another one for
 physics and image decompression. Everything else in the main thread


  I have provided reasons why any thread which is running at a higher
 priority on the system busy doing work is going to make this number an over
 approximation, I have given you two examples of higher priority threads
 that we're currently shipping in Firefox (Chrome Workers and the
 MediaStreamGraph thread)


 You're arguing against basic multithreading functionality. I'm unsure how
 ANY thread framework in a browser could fix this since there might be other
 higher priority tasks in the system.
 For your example of Chrome Workers and MediaStreamGraph, I assume those
 don't run at a constant 100% so a webapp that grabs all cores will still
 get more work done.


 and have provided you with experimental evidence of running Eli's test
 cases trying to exhaust as many cores as it can fails to predict the number
 of cores in these situations.


 Eli's code is an approximation. It doesn't prove anything.
 I don't understand your point here.


  If you don't find any of this convincing, I'd respectfully ask us to
 agree to disagree on this point.


 OK.


  For the sake of argument, let's say you are right. How are things worse
 than before?


 I don't think we should necessarily try to find a solution that is just
 not worse than the status quo, I'm more interested in us implementing a
 good solution here (and yes, I'm aware that there is no concrete proposal
 out there that is better at this point.)


 So, worst case, there's no harm.
 Best case, we have a more responsive application.

  ...


 That's fine but we're coming right back to the start: there is no way
 for informed authors to make a decision today.


 Yes, absolutely.


  The let's build something complex that solves everything proposal
 won't be done in a long time. Meanwhile apps can make responsive UI's
 and fluid games.


 That's I think one fundamental issue we're disagreeing on.  I think that
 apps can build responsive UIs and fluid games without this today on the Web.


 Sure. You can build apps that don't tax the system or that are
 specifically tailored to work well on a popular system.


  There were 24,000 hits for java which is on the web and a VM but now you
 say that it's not a vote of popularity?


 We may have a different terminology here, but to me, positive feedback
 from web developers should indicate a large amount of demand from the web
 developer community for us to solve this problem at this point, and also a
 strong positive signal from them on this specific solution with the flaws
 that I have described above in mind.  That simply doesn't map to searching
 for API names on non-Web technologies on github. :-)


 This was not a simple search. Please look over the examples especially the
 node.js ones and see how it's being used.
 This is what we're trying to achieve with this attribute.


 Also, FTR, I strongly disagree that we should implement all popular Java
 APIs just because there is a way to run Java code on the web.  ;-)

  ...

 Can you restate the actual problem? I reread your message but didn't
 find anything that indicates this is a bad idea.


 See above where I re-described why this is not a good technical solution
 to achieve the goal of the API.

 Also, as I've mentioned several times, this API basically ignores the
 fact that there are AMP systems shipping *today* and does not take into
 account the fact that future Web engines may try to use as many cores as
 they can at a higher priority (Servo being one example.)


 OK. They're free to do so. This is not a problem (see previous messages)
 It seems like you're arguing against basic multithreading again.


Others

Re: Intent to implement and ship: navigator.hardwareConcurrency

2014-05-18 Thread Rik Cabanier
On Sun, May 18, 2014 at 4:51 PM, Xidorn Quan quanxunz...@gmail.com wrote:

 IMO, though we may have a better model in the future, it is at least not
 harmful to have such an attribute with some limitations. The WebKit guys
 think it is not a fingerprinting concern when limiting the max value to 8.
 I think it might be meaningful to also limit the number to a power of 2
 (very few people have 3 or 6 cores, so those counts would be a
 fingerprinting signal as well).


There are CPUs from Intel [1], AMD [2], Samsung [3] and possibly others
that have 6 cores. I'm unsure why we would treat them differently since
they're not high-value systems.

And I think it makes sense to announce that the UA does not guarantee this
 value to be constant, so that the UA can return whatever value it is
 comfortable with when the getter is invoked. Maybe in the future, we can
 even have an event to notify the script that the number has changed.


Yes, if a user agent wants to return a lower number (ie so a well-behaved
application leaves a CPU free), it's free to do so.
I'm unsure if the event is needed but that can be addressed later.


 In addition, considering that WebKit has landed this feature, and Blink is
 also going to implement that, it is not a bad idea for us to have the
 attribute as well.


The WebKit patch limits the maximum number to 8. The blink patch currently
does not limit what it returns.
My proposed mozilla patch [4] makes the maximum return value configurable
through a dom.maxHardwareConcurrency preference key. It currently has a
default value of 8.

1: http://ark.intel.com/products/63697 http://ark.intel.com/products/77780
2:
http://products.amd.com/en-us/DesktopCPUDetail.aspx?id=811f1=f2=f3=f4=f5=f6=f7=f8=f9=f10=f11=f12=
3:
http://www.samsung.com/global/business/semiconductor/minisite/Exynos/products5hexa.html
4: https://bugzilla.mozilla.org/show_bug.cgi?id=1008453
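The clamping described above could be sketched as follows (the function and parameter names are illustrative assumptions; the actual Gecko patch reads the `dom.maxHardwareConcurrency` pref):

```javascript
// The value the page sees is the real core count capped by a configurable
// ceiling (8 in the proposed default), which limits fingerprinting value
// on unusual high-core machines while staying accurate for common ones.
function reportedConcurrency(actualCores, maxPref = 8) {
  return Math.max(1, Math.min(actualCores, maxPref));
}

reportedConcurrency(4);  // 4
reportedConcurrency(16); // 8 (capped)
```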


 On Mon, May 19, 2014 at 9:23 AM, Rik Cabanier caban...@gmail.com wrote:

 FYI this attribute landed in WebKit today:
 http://trac.webkit.org/changeset/169017


 On Thu, May 15, 2014 at 1:26 AM, Rik Cabanier caban...@gmail.com wrote:

 
 
 
  [quoted text snipped; it repeats the 2014-05-15 message below]
Re: Intent to implement and ship: navigator.hardwareConcurrency

2014-05-16 Thread Rik Cabanier
On Fri, May 16, 2014 at 11:03 AM, lrb...@gmail.com wrote:

 Do you think it would be feasible that the browser fires events every time
 the number of cores available for a job changes? That might allow building
 an efficient event-based worker pool.


I think this will be very noisy and might cause a lot of confusion.
Also I'm unsure how we could even implement this since the operating
systems don't give us such information.


 In the meantime, there are developers out there who are downloading
 micro-benchmarks on every client to stress-test the browser and determine
 the number of physical core. This is nonsense, we can all agree, but unless
 you give them a short-term alternative, they'll keep doing exactly that.
 And native will keep looking a lot more usable than the web.


I agree.
Do you have pointers where people are describing this?


Re: Intent to implement and ship: navigator.hardwareConcurrency

2014-05-15 Thread Rik Cabanier
On Wed, May 14, 2014 at 11:39 AM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote:

 On 2014-05-13, 9:01 PM, Rik Cabanier wrote:

 ...

 The problem is that the API doesn't really make it obvious that
 you're not supposed to take the value that the getter returns and
 just spawn N workers.  IOW, the API encourages the wrong behavior by
 design.


 That is simply untrue.


 I'm assuming that the goal of this API is to allow authors to spawn as
 many workers as possible so that they can exhaust all of the cores in the
 interest of finishing their computation faster.


That is one way of using it but not the only one.
For instance, let's say that I'm writing on a cooperative game. I might
want to put all my network logic in a worker and want to make sure that
worker is scheduled. This worker consumes little (if any) cpu, but I want
it to be responsive.
NumCores = 1 - do everything in the main thread and try to make sure the
network code executes
NumCores = 2 - spin up a worker for the network code. Everything else in
the main thread
NumCores = 3 - spin up a worker for the network code + another one for
physics and image decompression. Everything else in the main thread
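The tiering above can be sketched as a simple planning function (the subsystem names are hypothetical; the point is that the core count informs worker allocation rather than dictating "spawn N workers"):

```javascript
// Decide which subsystems of a cooperative game get a dedicated worker,
// based on the reported core count, instead of blindly spawning N workers.
function planWorkers(numCores) {
  if (numCores <= 1) return [];             // everything on the main thread
  if (numCores === 2) return ['network'];   // keep the network code responsive
  return ['network', 'physics-and-decode']; // 3+ cores: offload heavy work too
}

planWorkers(1); // []
planWorkers(2); // ['network']
planWorkers(4); // ['network', 'physics-and-decode']
```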


  I have provided reasons why any thread which is running at a higher
 priority on the system busy doing work is going to make this number an over
 approximation, I have given you two examples of higher priority threads
 that we're currently shipping in Firefox (Chrome Workers and the
 MediaStreamGraph thread)


You're arguing against basic multithreading functionality. I'm unsure how
ANY thread framework in a browser could fix this since there might be other
higher priority tasks in the system.
For your example of Chrome Workers and MediaStreamGraph, I assume those
don't run at a constant 100% so a webapp that grabs all cores will still
get more work done.


 and have provided you with experimental evidence of running Eli's test
 cases trying to exhaust as many cores as it can fails to predict the number
 of cores in these situations.


Eli's code is an approximation. It doesn't prove anything.
I don't understand your point here.


  If you don't find any of this convincing, I'd respectfully ask us to
 agree to disagree on this point.


OK.


  For the sake of argument, let's say you are right. How are things worse
 than before?


 I don't think we should necessarily try to find a solution that is just
 not worse than the status quo, I'm more interested in us implementing a
 good solution here (and yes, I'm aware that there is no concrete proposal
 out there that is better at this point.)


So, worst case, there's no harm.
Best case, we have a more responsive application.

...

 That's fine but we're coming right back to the start: there is no way
 for informed authors to make a decision today.


 Yes, absolutely.


  The let's build something complex that solves everything proposal
 won't be done in a long time. Meanwhile apps can make responsive UI's
 and fluid games.


 That's I think one fundamental issue we're disagreeing on.  I think that
 apps can build responsive UIs and fluid games without this today on the Web.


Sure. You can build apps that don't tax the system or that are specifically
tailored to work well on a popular system.


  There were 24,000 hits for Java, which runs on the web in a VM, but now
 you say that it's not a vote of popularity?


 We may have a different terminology here, but to me, positive feedback
 from web developers should indicate a large amount of demand from the web
 developer community for us to solve this problem at this point, and also a
 strong positive signal from them on this specific solution with the flaws
 that I have described above in mind.  That simply doesn't map to searching
 for API names on non-Web technologies on github. :-)


This was not a simple search. Please look over the examples especially the
node.js ones and see how it's being used.
This is what we're trying to achieve with this attribute.


 Also, FTR, I strongly disagree that we should implement all popular Java
 APIs just because there is a way to run Java code on the web.  ;-)

...

 Can you restate the actual problem? I reread your message but didn't
 find anything that indicates this is a bad idea.


 See above where I re-described why this is not a good technical solution
 to achieve the goal of the API.

 Also, as I've mentioned several times, this API basically ignores the fact
 that there are AMP systems shipping *today* and dies not take the fact that
 future Web engines may try to use as many cores as they can at a higher
 priority (Servo being one example.)


OK. They're free to do so. This is not a problem (see previous messages)
It seems like you're arguing against basic multithreading again.


"Others do this" is just not going to convince me here.

 What would convince you? The fact that every other framework provides
 this and people use it, is not a strong indication?
 It's not possible for me

Re: Intent to implement and ship: navigator.hardwareConcurrency

2014-05-13 Thread Rik Cabanier
On Tue, May 13, 2014 at 8:20 AM, Ehsan Akhgari ehsan.akhg...@gmail.comwrote:

 On Tue, May 13, 2014 at 2:37 AM, Rik Cabanier caban...@gmail.com wrote:

 On Mon, May 12, 2014 at 10:15 PM, Joshua Cranmer  pidgeo...@gmail.com
 wrote:

  On 5/12/2014 7:03 PM, Rik Cabanier wrote:
 
  *Concerns*
 
  The original proposal required that a platform must return the exact
  number
  of logical CPU cores. To mitigate the fingerprinting concern, the
 proposal
  was updated so a user agent can lie about this.
  In the case of WebKit, it will return a maximum of 8 logical cores so
 high
  value machines can't be discovered. (Note that it's already possible
 to do
  a rough estimate of the number of cores)
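 
  That mitigation can be sketched in one line (the helper name is ours;
  WebKit's actual cap logic lives in its C++ implementation):
 
```javascript
// Cap the reported logical core count so high-value machines can't be
// singled out; the cap of 8 matches the WebKit behavior mentioned above.
// Illustrative sketch only.
function reportedCores(actualCores, cap = 8) {
  return Math.max(1, Math.min(actualCores, cap));
}
```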
 
 
  The discussion on the WHATWG mailing list covered a lot more than the
  fingerprinting concern. Namely:
  1. The user may not want to let web applications hog all of the cores
 on a
  machine, and exposing this kind of metric makes it easier for
 (good-faith)
  applications to inadvertently do this.
 

 Web applications can already do this today. There's nothing stopping them
 from figuring out the CPU's and trying to use them all.
 Worse, I think they will likely optimize for popular platforms which
 either
 overtax or underutilize non-popular ones.


 Can you please provide some examples of actual web applications that do
 this, and what they're exactly trying to do with the number once they
 estimate one?  (Eli's timing attack demos don't count. ;-)


Eli's listed some examples:
http://wiki.whatwg.org/wiki/NavigatorCores#Example_use_cases
I don't have any other cases where this is done. Maybe PDF.js would be
interested. They use workers to render pages and decompress images so I
could see how this is useful to them.


  2. It's not clear that this feature is necessary to build high-quality
  threading workload applications. In fact, it's possible that this
 technique
  makes it easier to build inferior applications, relying on a potentially
  inferior metric. (Note, for example, the disagreement on figuring out
 what
  you should use for make -j if you have N cores).


 Everyone is in agreement that that is a hard problem to fix and that there
 is no clear answer.
 Whatever solution is picked (maybe like Grand Central or Intel TBB), most
 solutions will still want to know how many cores are available.
 Looking at the native platform (and Adobe's applications), many query the
 operating system for this information to balance the workload. I don't see
 why this would be different for the web platform.


 I don't think that the value exposed by the native platforms is
 particularly useful.  Really if the use case is to try to adapt the number
 of workers to a number that will allow you to run them all concurrently,
 that is not the same number as reported traditionally by the native
 platforms.


Why not? How is the web platform different?


 If you try Eli's test case in Firefox under different workloads (for
 example, while building Firefox, doing a disk intensive operation, etc.),
 the utter inaccuracy of the results is proof in the ineffectiveness of this
 number in my opinion.


As Eli mentioned, you can run the algorithm for longer and get a more
accurate result. Again, if the native platform didn't support this, doing
this in C++ would result in the same.


 Also, I worry that this API is too focused on the past/present.  For
 example, I don't think anyone sufficiently addressed Boris' concern on the
 whatwg thread about AMP vs SMP systems.


Can you provide a link to that? Are there systems that expose this to the
user? (AFAIK slow cores are substituted with fast ones on the fly.)


  This proposal also assumes that the UA itself is mostly content with
  using a single core, which is true for the current browser engines, but
  we're working on changing that assumption in Servo.  It also doesn't take
  into account the possibility of several of these web applications running
  at the same time.


How is this different from the native platform?


 Until these issues are addressed, I do not think we should implement or
 ship this feature.


FWIW these issues were already discussed in the WebKit bug.
I find it odd that we don't want to give authors access to such a basic
feature. Not everything needs to be solved by a complex framework.


Re: Time to revive the require SSE2 discussion

2014-05-09 Thread Rik Cabanier
On Fri, May 9, 2014 at 10:14 AM, Benoit Jacob jacob.benoi...@gmail.comwrote:

 Totally agree that 1% is probably still too much to drop, but the 4x drop
 over the past two years makes me hopeful that we'll be able to drop
 non-SSE2, eventually.

 SSE2 is not just about SIMD. The most important thing it buys us IMHO is to
 be able to not use x87 instructions anymore and instead use SSE2 (scalar)
 instructions. That removes entire classes of bugs caused by x87 being
 non-IEEE754-compliant with its crazy 80-bit registers.


Out of interest, do you have links to bugs for this issue?

Also, can't you ask the compiler to produce both SSE and non-SSE code and
make the decision at runtime?


 2014-05-09 13:01 GMT-04:00 Chris Peterson cpeter...@mozilla.com:

  What does requiring SSE2 buy us? 1% of hundreds of millions of Firefox
  users is still millions of people.
 
  chris
 
 
 
  On 5/8/14, 5:42 PM, matthew.br...@gmail.com wrote:
 
   On Tuesday, January 3, 2012 4:37:53 PM UTC-8, Benoit Jacob wrote:
  
    2012/1/3 Jeff Muizelaar jmuizel...@mozilla.com:
  
     On 2012-01-03, at 2:01 PM, Benoit Jacob wrote:
  
      2012/1/2 Robert Kaiser ka...@kairo.at:
  
       Jean-Marc Desperrier schrieb:
  
        According to https://bugzilla.mozilla.org/show_bug.cgi?id=594160#c6 ,
        the Raw Dump tab on crash-stats.mozilla.com shows the needed
        information; you need to sort out from the CPU maker, family, model,
        and stepping information on the second line whether SSE2 is there or
        not. (With a little search, I can find that info again; bug 593117
        gives a formula that's correct for most of the cases.)
  
       https://crash-analysis.mozilla.com/crash_analysis/ holds
       *-pub-crashdata.csv.gz files that have that info from all Firefox
       desktop/mobile crashes on a given day. You should be able to analyze
       that for this info - with a bias, of course, as it's only people
       having crashes that you see there. No idea if the less biased
       telemetry samples have that info as well.
  
      On yesterday's crash data, assuming that AuthenticAMD\ family\
      [1-6][^0-9] is the proper way to identify these old AMD CPUs (I
      didn't check that very well), I get these results:
  
     The measurement I have used in the past was:
  
     CPUs have sse2 if:
      if vendor == AuthenticAMD and family >= 15
      if vendor == GenuineIntel and family >= 15 or (family == 6 and
        (model == 9 or model > 11))
      if vendor == CentaurHauls and family >= 6 and model >= 10
  
    Thanks.
  
    AMD and Intel CPUs amount to 296362 crashes:
  
    bjacob@cahouette:~$ egrep AuthenticAMD\|GenuineIntel 20120102-pub-crashdata.csv | wc -l
    296362
  
    Counting SSE2-capable CPUs:
  
    bjacob@cahouette:~$ egrep GenuineIntel\ family\ 1[5-9] 20120102-pub-crashdata.csv | wc -l
    58490
    bjacob@cahouette:~$ egrep GenuineIntel\ family\ [2-9][0-9] 20120102-pub-crashdata.csv | wc -l
    0
    bjacob@cahouette:~$ egrep GenuineIntel\ family\ 6\ model\ 9 20120102-pub-crashdata.csv | wc -l
    792
    bjacob@cahouette:~$ egrep GenuineIntel\ family\ 6\ model\ 1[2-9] 20120102-pub-crashdata.csv | wc -l
    52473
    bjacob@cahouette:~$ egrep GenuineIntel\ family\ 6\ model\ [2-9][0-9] 20120102-pub-crashdata.csv | wc -l
    103655
    bjacob@cahouette:~$ egrep AuthenticAMD\ family\ 1[5-9] 20120102-pub-crashdata.csv | wc -l
    59463
    bjacob@cahouette:~$ egrep AuthenticAMD\ family\ [2-9][0-9] 20120102-pub-crashdata.csv | wc -l
    8120
  
    Total SSE2-capable CPUs:
  
    58490 + 792 + 52473 + 103655 + 59463 + 8120 = 282993
  
    1 - 282993 / 296362 = 0.045
  
    So the proportion of non-SSE2-capable CPUs among crash reports is 4.5%.
 
  Just for the record, I coded this analysis up here:
  https://gist.github.com/matthew-brett/9cb5274f7451a3eb8fc0
 
  Non-SSE2 CPUs apparently now down to about one percent:
 
   20120102-pub-crashdata.csv.gz: 4.53
   20120401-pub-crashdata.csv.gz: 4.24
   20120701-pub-crashdata.csv.gz: 2.77
   20121001-pub-crashdata.csv.gz: 2.83
   20130101-pub-crashdata.csv.gz: 2.66
   20130401-pub-crashdata.csv.gz: 2.59
   20130701-pub-crashdata.csv.gz: 2.20
   20131001-pub-crashdata.csv.gz: 1.92
   20140101-pub-crashdata.csv.gz: 1.86
   20140401-pub-crashdata.csv.gz: 1.12
 
  Cheers,
 
  Matthew
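
The heuristic quoted above can be restated as a small checkable function (a sketch only; the actual analysis ran as egrep over the crash CSVs):

```javascript
// SSE2 heuristic from the quoted thread: vendor string plus CPUID
// family/model numbers. Mirrors the egrep patterns above; illustrative only.
function hasSSE2(vendor, family, model) {
  if (vendor === 'AuthenticAMD') return family >= 15;
  if (vendor === 'GenuineIntel') {
    return family >= 15 || (family === 6 && (model === 9 || model > 11));
  }
  if (vendor === 'CentaurHauls') return family >= 6 && model >= 10;
  return false; // Unknown vendor: conservatively assume no SSE2.
}
```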
 
 


Re: intent to ship: drawFocusIfNeeded

2014-05-05 Thread Rik Cabanier
On Mon, May 5, 2014 at 9:28 PM, Robert O'Callahan rob...@ocallahan.orgwrote:

 On Tue, May 6, 2014 at 4:13 PM, Ian Hickson i...@hixie.ch wrote:

 On Tue, 6 May 2014, Robert O'Callahan wrote:

  We could probably come up with a slightly better name, but only very
  slightly better, so at this point I would rather not reopen the
  discussion. If someone else wants to, that's up to them.

 There hasn't been a discussion at all, so far.


 Please don't be too pedantic. There has been a discussion between vendors,
 it just wasn't public.


FWIW the discussion was public.
See
http://lists.w3.org/Archives/Public/public-canvas-api/2014JanMar/0003.html


 For which I have already apologized.

 Right now the spec says it's drawSystemFocusRing() and
 drawCustomFocusRing(), because there hasn't been a request to change it.


 The WHATWG spec says that. The W3C spec was changed in January as a result
 of the aforementioned discussion. It's sad that we need to qualify which
 spec we're talking about, but we do.


Dominic brought up that the name was confusing on WHATWG:
http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2013-September/252545.html

I like the new name better because it describes what the API is doing.
'drawSystemFocusRing' doesn't always draw and 'drawCustomFocusRing' returns
true if the author should draw (unless it draws itself!)


Re: intent to ship: drawFocusIfNeeded

2014-05-01 Thread Rik Cabanier
On Thu, May 1, 2014 at 11:58 AM, Ehsan Akhgari ehsan.akhg...@gmail.comwrote:

 On 2014-05-01, 2:22 PM, Rik Cabanier wrote:




  On Thu, May 1, 2014 at 10:49 AM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote:

 Hi Rik,

 How extensive is our testing of this feature?  I'm very surprised
 that bug 1004499 was not caught by a test when you landed this.


  I wrote a number of mochitests. In addition, Dominic (from Google) wrote
  a more real-world example [1] and I checked that the blink test files
  also work in Firefox.
 There are also a number of W3C tests that check that the implementation
 follows the spec. [2]


 Sounds good!  If you think the existing mochitests don't give us enough
 coverage, we should try to add more tests.


  Can you give me an example where 1004499 would be triggered? I will
 update that bug with an extra test.


 We only draw the focus ring if something is focused by the keyboard.  In
 order to simulate that in a mochitest, you can give your DOM element a
 @tabindex attribute, and then use synthesizeKey to generate VK_TAB events
 to focus that DOM element.  In order to test the opposite case, you can use
 synthesizeMouse to simulate setting the focus using the mouse.


Is that always the case?
In our tests, we can also use elements that can get the focus by default
(such as input and a) and we can focus them by calling 'focus()' on the
element.



  1: http://www.w3.org/2013/09/accessible_canvas_clock.html
 2:
 https://github.com/w3c/web-platform-tests/tree/master/
 2dcontext/drawing-paths-to-the-canvas


 On 2014-04-30, 8:44 PM, Rik Cabanier wrote:

 Primary eng emails
  caban...@adobe.com

 *Spec*
  http://www.w3.org/html/wg/drafts/2dcontext/html5_canvas_CR/#dom-context-2d-drawfocusifneeded

 *Summary*

 The drawFocusIfNeeded API is a method on the canvas context that
 allows a
 user to draw a focus ring when a fallback element is focused.
 See
  http://www.w3c-test.org/2dcontext/drawing-paths-to-the-canvas/drawFocusIfNeeded_001.html
 for
 an example.

 *Blink:*

  This is currently behind a runtime flag but engineers from Samsung are
  going to send an intent to ship to the blink mailing list.







intent to ship: drawFocusIfNeeded

2014-04-30 Thread Rik Cabanier
Primary eng emails
caban...@adobe.com

*Spec*
http://www.w3.org/html/wg/drafts/2dcontext/html5_canvas_CR/#dom-context-2d-drawfocusifneeded

*Summary*
The drawFocusIfNeeded API is a method on the canvas context that allows a
user to draw a focus ring when a fallback element is focused.
See
http://www.w3c-test.org/2dcontext/drawing-paths-to-the-canvas/drawFocusIfNeeded_001.html
for
an example.
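
Since the method was renamed across spec revisions, a page might feature-detect before calling it. A hedged sketch (the helper name is ours, not part of any spec):

```javascript
// Feature-detect the focus-ring API under the names used by different
// spec revisions, preferring the current drawFocusIfNeeded. Pass a canvas
// 2D context (or a mock); returns which API was used, or null.
function drawFocusRing(ctx, element) {
  if (typeof ctx.drawFocusIfNeeded === 'function') {
    ctx.drawFocusIfNeeded(element);
    return 'drawFocusIfNeeded';
  }
  if (typeof ctx.drawSystemFocusRing === 'function') {
    ctx.drawSystemFocusRing(element);
    return 'drawSystemFocusRing';
  }
  return null; // No focus-ring API available in this context.
}
```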

*Blink:*
This is currently behind a runtime flag but engineers from Samsung are going
to send an intent to ship to the blink mailing list.


Re: Intent to ship: Path2D + addition canvas APIs

2014-03-25 Thread Rik Cabanier
On Tue, Mar 25, 2014 at 5:08 AM, Dirk Schulze dschu...@adobe.com wrote:

 Hi Anne,
 On Mar 25, 2014, at 11:29 AM, Anne van Kesteren ann...@annevk.nl wrote:

  On Tue, Mar 25, 2014 at 3:53 AM, Rik Cabanier caban...@gmail.com
 wrote:
  It's defined as the Path object in the WhatWG spec [1] but after a
 long
  discussion and author feedback that name was considered too generic and
  renamed.
 
  Pointer? Almost none of the other canvas-related interfaces have a
  suffix. Also, if there was agreement, why isn't the specification
  updated?

 The main concern for most discussion participants was a name collision
 with JS libraries that introduce Path in the global space. In fact that
 happened for many authors with Paper.js. Just to name one example.

 The discussion about the name collision was brought up by Elliot Sprehn in
 September 2012[1].

  Other threads started about the same topic and eventually most
  implementers agreed on Path2D in November 2013[2].
 
  I can not say why the spec did not adapt to Path2D yet. It certainly was
  requested multiple times. Maybe Ian wanted to wait for two interoperable
  implementations before changing it. Now we have WebKit, Blink and Gecko
  (the latter two behind a runtime flag) that implement Path2D with the
  described subset from Rik.


From Ian on the thread for Path2D on blink-dev:

There wasn't a decision, it's just awaiting implementations. Right now
there's one shipping implementation, and it calls the interface Path,
so that's why the spec is still saying Path. It's not a compliant
implementation, so it's not a strong vote, but it's the only vote so far.

This is being tracked here:
   https://www.w3.org/Bugs/Public/show_bug.cgi?id=23918

If an implementation ships a compliant implementation of the API with the
name Path2D, then it would move the balance over and the spec would
change.

The one shipping implementation was Safari and it was implemented last year
by Dirk. He renamed it last week to Path2D.


 [1]
 http://lists.w3.org/Archives/Public/public-whatwg-archive/2012Sep/0288.html
 [2]
 http://lists.w3.org/Archives/Public/public-whatwg-archive/2013Nov/0204.html


 
 
  --
  http://annevankesteren.nl/




Re: Intent to ship: Path2D + addition canvas APIs

2014-03-25 Thread Rik Cabanier
On Tue, Mar 25, 2014 at 9:43 AM, Anne van Kesteren ann...@annevk.nl wrote:

 On Tue, Mar 25, 2014 at 4:22 PM, Rik Cabanier caban...@gmail.com wrote:
  The one shipping implementation was Safari and it was implemented last
 year
  by Dirk. He renamed it last week to Path2D.

 I see, if it has been in Safari for over a year as Path, the claims it
 causes compatibility issues seem somewhat overblown?


We did find a couple of paper.js websites that were broken by that change.


 It just seems
 weird to name this interface differently from the others. I guess it
 does not matter much...


yes, some people felt strongly that it should not be so generic and others
didn't have a strong opinion.

I posted a small example that shows the Path2D object in action:
http://cabanier.github.io/canvg/convert.html
It will work in any browser, but to get the best performance, you need to
install the latest nightly and turn canvas.path on in about:config.

The test redraws the SVG in a loop. Without Path2D support, the test takes
up 14% of my CPU but with support turned on, it drops down to 5%


Re: Intent to ship: Path2D + addition canvas APIs

2014-03-25 Thread Rik Cabanier
I'm not too familiar with the intent-to-ship process for Mozilla [1].
Given that no one objected to my proposal, can I submit a patch to remove
the runtime flag and put it up for review?

1: https://wiki.mozilla.org/WebAPI/ExposureGuidelines




Intent to ship: Path2D + addition canvas APIs

2014-03-24 Thread Rik Cabanier
Path2D is a new object in the global namespace. Its main use case is to
cache path segments. For instance, Shumway could use it to cache the Flash
edgelists, which should increase its drawing performance.

Eventually it will act as a bridge between SVG and Canvas but for now its
use is limited to canvas (except that you can construct a path with an SVG
path string).

It's defined as the Path object in the WhatWG spec [1] but after a long
discussion and author feedback that name was considered too generic and
renamed.

The signature for the new object is as follows:

[Constructor,
Constructor(Path2D path),
Constructor(DOMString d)] interface Path2D {
}
Path2D implements CanvasPathMethods;


and the new methods on the Canvas 2D context are:

void fill(Path2D path, optional CanvasFillRule fillRule = nonzero);
void stroke(Path2D path);
void clip(Path2D path, optional CanvasFillRule fillRule = nonzero);
boolean isPointInPath(Path2D path, unrestricted double x, unrestricted
double y, optional CanvasFillRule fillRule = nonzero);
boolean isPointInStroke(Path2D path, unrestricted double x, unrestricted
double y);
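
For illustration, the caching idea behind Path2D can be modeled with a tiny command recorder. This is not the real (native) implementation, just a sketch of the record-and-replay pattern, ignoring the SVG-path-data constructor and other spec details:

```javascript
// Records CanvasPathMethods-style calls and replays them onto any 2D
// context (or mock). Only a few methods shown; illustrative sketch only.
class RecordedPath {
  constructor() {
    this.ops = []; // Each entry: [methodName, ...arguments]
  }
  moveTo(x, y) { this.ops.push(['moveTo', x, y]); }
  lineTo(x, y) { this.ops.push(['lineTo', x, y]); }
  closePath() { this.ops.push(['closePath']); }
  // Replay the cached segments onto a context, as ctx.fill(path) would
  // conceptually consume them.
  replayOn(ctx) {
    for (const [op, ...args] of this.ops) ctx[op](...args);
  }
}
```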


WebKit and Blink implemented the same subset of the Path2D API.

Blink also implemented the 'addPath' API [2] but Dirk and I feel that that
API is not ready for shipping yet. (ie Blink and WK treat non-invertible
matrices different from IE/FF + it's unclear if there is interop if we
append paths that don't start with a 'moveto')

The Path2D object is currently behind the canvas.path flag.
Can we turn this feature on by default?

1:
http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#path-objects
2:
http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#dom-path-addpath


Re: Intent to ship: DOMRect, DOMPoint, DOMQuad, and GeometryUtils

2014-03-18 Thread Rik Cabanier
Great to see this going in!

Hopefully we can make progress on DOMMatrix in the near future so authors
can avoid parsing strings with CSS transforms.


On Tue, Mar 18, 2014 at 10:02 AM, Robert O'Callahan rob...@ocallahan.orgwrote:

 DOMRect, DOMPoint and DOMQuad are defined in the Geometry Interfaces spec:
 http://dev.w3.org/fxtf/geometry/Overview.html
 GeometryUtils is defined in the CSSOM View spec:
 http://dev.w3.org/csswg/cssom-view/#geometry
 The spec for the GeometryUtils methods is quite incomplete but we've
 discussed the syntax and semantics in www-style and reached consensus
 AFAIK. If there are unresolved issues, they would be for edge cases
 unlikely to be hit by Web authors.

 I am unaware of other engine plans to implement these APIs, but I am also
 unaware of any objections. Tab Atkins (Google), Simon Pieters (Opera) and
 Rik Cabanier (Adobe) were all involved in the design discussions and I
 believe they all approve of the current design.

 These are implemented in bugs 917755 and 918189. DOMPoint and DOMQuad are
 behind independent prefs, default on. DOMRect is not behind a pref because
 the interface already exists (with a bit less functionality). getBoxQuads
 and the convert*FromNode functions are behind prefs, currently defaulting
 to on in non-release builds. But I expect to switch them to default-on as
 soon as the spec is updated, which will hopefully happen in the next few
 weeks so this can ship in FF31.

 FYI these methods subsume the functionality of WebkitPoint and
 webkitConvertPointFromPageToNode/FromNodeToPage, which are used by some
 mobile applications and currently have no Gecko equivalent. They are also
 wanted by devtools.

 Rob


Re: Including Adobe CMaps

2014-02-24 Thread Rik Cabanier
On Mon, Feb 24, 2014 at 3:01 PM, Andreas Gal andreas@gmail.com wrote:

 Is this something we could load dynamically and offline cache?


That should be possible. The CMap name is in the PDF so Firefox could
download it on demand.
Also, if the user has acrobat, the CMaps are already on their machine.

 On Feb 24, 2014, at 23:41, Brendan Dahl bd...@mozilla.com wrote:
 
  PDF.js plans to soon start including and using Adobe CMap files for
 converting character codes to character id's(CIDs) and mapping character
 codes to unicode values. This will fix a number of bugs in PDF.js and will
 improve our support for Chinese, Korean, and Japanese(CJK) documents.
 
  I wanted to inform dev-platform because there are quite a few files and
 they are large. The files are loaded lazily as needed so they shouldn't
 affect the size of Firefox when running, but they will affect the
 installation size. There are 168 files with an average size of ~40KB, and
 all of the files together are roughly:
  6.9M
  2.2M when gzipped
 
  http://sourceforge.net/adobe/cmap/wiki/Home/
 


Re: Intent to ship background-blend-mode

2014-02-06 Thread Rik Cabanier
On Thu, Feb 6, 2014 at 1:28 AM, L. David Baron dba...@dbaron.org wrote:

 On Thursday 2014-02-06 22:02 +1300, Robert O'Callahan wrote:
  On Thu, Feb 6, 2014 at 6:58 PM, Rik Cabanier caban...@gmail.com wrote:
 
   We would like to ship mix-blend-mode too. In our experiments, Firefox
 has
   the most stable implementation. (We only have 1 open bug.)
   However, we're taking a slow path today and it might take us a while to
   implement the fast path.
   We're facing similar problems on the other browsers so we're doing the
 same
   staggered approach there.
  
 
  I think what we have is reasonable to ship. It's the same situation as
 our
  SVG filters implementation basically.

 OK with me, then.


Thanks!
We will create a patch to turn this on by default in the coming days.


Intent to ship background-blend-mode

2014-02-05 Thread Rik Cabanier
Primary eng emails

caban...@adobe.com, ol...@adobe.com, scrac...@adobe.com

Spec
http://www.w3.org/TR/compositing-1/#background-blend-mode


*Summary*

This new CSS property allows you to specify blending between background images
and the background color.
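
A minimal usage sketch (the file name, selector, and colors are made up for illustration):

```css
/* Blend the background image with the background color underneath it. */
.banner {
  background-image: url("texture.png");
  background-color: #3355aa;
  background-blend-mode: multiply;
}
```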

The specification just transitioned to CR (see
http://lists.w3.org/Archives/Public/public-fx/2014JanMar/0048.html) and is
scheduled to be published on Feb 8, 2014


It landed behind a runtime flag in mozilla:
https://bugzilla.mozilla.org/show_bug.cgi?id=841601

The flag is: layout.css.background-blend-mode.enabled

Nightly has it turned on by default.


We are in the process of porting over more tests but feel like that it is
ready to ship now and would like to flip the runtime flag to always be on.

*WebKit:*

The feature is current implemented with a prefix in WebKit:
https://bugs.webkit.org/show_bug.cgi?id=108546

The prefix will be dropped shortly (
https://bugs.webkit.org/show_bug.cgi?id=128270)

The property is not guarded by a runtime flag so it is part of all webkit
builds. (= a webkit client would have to go out of its way to remove it)


*Blink:*

This feature was landed behind the 'experimental features' flag in Chrome.

http://crbug.com/229166

http://www.chromestatus.com/features/5768037999312896

We are going to request an intent to ship there as well


*developers:*

We've not done a lot of evangelizing but lately we've seen some articles
that are covering this topic.

For instance: https://medium.com/p/6b51bf53743a with a set of nice example
files: http://codepen.io/collection/Kgshi/ (see
http://codepen.io/bennettfeely/pen/rxoAc for the typical use case of
background blending)


Re: Intent to ship background-blend-mode

2014-02-05 Thread Rik Cabanier
On Wed, Feb 5, 2014 at 9:08 PM, L. David Baron dba...@dbaron.org wrote:

 On Wednesday 2014-02-05 20:37 -0800, Rik Cabanier wrote:
  Spec
  http://www.w3.org/TR/compositing-1/#background-blend-mode
 
 
  *Summary*
 
  This new CSS property allows you to specify blending between background
 images
  and the background color.

 I'm a little concerned about shipping background-blend-mode without
 mix-blend-mode,


We would like to ship mix-blend-mode too. In our experiments, Firefox has
the most stable implementation. (We only have 1 open bug.)
However, we're taking a slow path today and it might take us a while to
implement the fast path.
We're facing similar problems on the other browsers so we're doing the same
staggered approach there.


 just because mix-blend-mode is the primary blending
 control, and background-blend-mode handles an edge case.  I'm
 worried that shipping support for the edge case first will encourage
 authors to contort what they're doing in order to fit that edge
 case, and, as a result, harm other things (for example, by making
 things that should be content be background images instead, and
 hurting accessibility).


That is a valid concern. Of course, one could say the same thing if we
shipped mix-blend-mode first :-)

Since background-blend-mode *only* applies to background images and not the
content, it doesn't feel likely that this will happen that often.
On the other hand, if it's semantically more correct to use backgrounds,
people should do so. As a bonus, since background blending consumes less
resources, it will result in a better experience.



 Otherwise I think this is reasonable, though.


Thanks!
I guess I didn't follow the exact guidelines of
https://wiki.mozilla.org/WebAPI/ExposureGuidelines#Intent_to_Ship
since I didn't mention a date.

When would be a good time to switch this on?