Re: The e10s throbber

2015-04-07 Thread andreas . gal

Counter-intuitively, having multiple content processes may use less memory than 
taking screenshots per tab. Especially if we use the same COW forking FFOS uses, 
the overhead of a content process should be very small, certainly less than a 
high-resolution screenshot kept around. Not sure to what degree we can 
replicate on Windows what we do on FFOS to launch content processes. Why don’t 
we use more than 1 content process by default right now?

Thanks,

Andreas

> On Apr 7, 2015, at 12:50 PM, Ted Mielczarek  wrote:
> 
> On Tue, Apr 7, 2015, at 11:05 AM, Bill McCloskey wrote:
>> On Tue, Apr 7, 2015 at 5:48 AM, Benjamin Smedberg 
>> wrote:
>> 
>>> With desktop e10s on there can be a noticeable delay after switching tabs
>>> where there is a throbber displayed before the page content.
>>> 
>> 
>> When the user switches tabs, we allow the content process 300ms to send
>> layer information to the parent. During that time, we show the previous
>> tab. If no layers are received after 300ms, we show the spinner.
>> 
>> 
>>> Is the duration of this delay measured in telemetry anywhere, and do we
>>> have criteria for how much delay is acceptable in this case? If e10s were
>>> off, do we expect that this same delay would occur but would just show up
>>> as a jank switching tabs? Or is this a perf problem unique to e10s?
>>> 
>> 
>> We don't have telemetry yet. I've done some measurements and haven't
>> found
>> any cases where tab switching consistently takes longer in e10s. However,
>> it's certainly possible that it does on average. Either way, it's hard to
>> investigate until we can reproduce the problem.
>> 
>> The switch is definitely more noticeable in e10s because non-e10s would
>> just jank. A spinner (especially a low-quality animated gif like the one
>> we
>> have) is easier to notice than jank. We've considered a couple options
> 
> So, assuming there's content script eating up cycles, right now the e10s
> case will essentially always be worse than the non-e10s case, right? In
> the non-e10s case the chrome JS for switching tabs will get hung up and
> result in jank, but then when it gets to run the repaint should be very
> fast because the layer tree is in-process. In the e10s case the chrome
> JS will run, then we wait on IPC for the layer tree which will be held
> up by the content JS, then we have to transmit the layer tree via IPC,
> and only then can we paint.
> 
> Short of doing the layer IPC on a different IPC channel this doesn't
> seem fixable. Alternately we could focus on running >1 content process
> as others have noted, since that limits the content script problem to
> tabs running in the same process as the tab you're switching to.
> 
> -Ted


Re: FYI: Serious windows bug affecting WebRTC when mem use is over 2GB

2015-03-26 Thread Andreas Gal

I guess we can add a command-line option to our executable that calls the 
function, prints the results, and exits, and then invoke ourselves in a new 
process and parse the output. What a silly bug.
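
A minimal sketch of that idea (the --list-adapters flag and the plain-text 
output format are made up for illustration, not an actual Firefox patch): a 
freshly spawned 32-bit child starts out well below 2GB, so its call to the 
affected API should succeed, and the parent only has to parse the child's stdout.

#include <winsock2.h>
#include <iphlpapi.h>
#include <windows.h>
#include <cstdio>
#include <cstring>
#include <vector>
#pragma comment(lib, "iphlpapi.lib")

// Child mode: call GetAdaptersAddresses while the process is still using far
// less than 2GB and print one adapter name per line for the parent to parse.
static int RunListAdaptersMode() {
  ULONG size = 16 * 1024;  // start small, grow on ERROR_BUFFER_OVERFLOW
  std::vector<unsigned char> buf(size);
  ULONG rv = GetAdaptersAddresses(AF_UNSPEC, 0, nullptr,
      reinterpret_cast<PIP_ADAPTER_ADDRESSES>(buf.data()), &size);
  if (rv == ERROR_BUFFER_OVERFLOW) {
    buf.resize(size);
    rv = GetAdaptersAddresses(AF_UNSPEC, 0, nullptr,
        reinterpret_cast<PIP_ADAPTER_ADDRESSES>(buf.data()), &size);
  }
  if (rv != NO_ERROR) return 1;
  for (auto* a = reinterpret_cast<PIP_ADAPTER_ADDRESSES>(buf.data()); a;
       a = a->Next) {
    std::printf("%s\n", a->AdapterName);  // adapter GUID string
  }
  return 0;
}

int main(int argc, char** argv) {
  // The real browser would CreateProcess() itself with this flag, read the
  // child's stdout over a pipe, and translate the lines back into its own
  // adapter structures.
  if (argc > 1 && std::strcmp(argv[1], "--list-adapters") == 0) {
    return RunListAdaptersMode();
  }
  return 0;
}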

Thanks,

Andreas

Sent from Mobile.

> On Mar 26, 2015, at 07:03, Daniel Stenberg  wrote:
> 
>> On Thu, 26 Mar 2015, Benjamin Smedberg wrote:
>> 
>> What is the largest buffer that we can expect to need? Since VM allocation 
>> happens in 64k boundaries, is it sufficient to just use a 64k buffer for 
>> this?
> 
> As per a recent comment in the bug however, it doesn't work to just reserve 
> some memory in the lower region:
> 
>  https://bugzilla.mozilla.org/show_bug.cgi?id=1107702#c20
> 
> -- 
> 
> / daniel.haxx.se


Re: FYI: Serious windows bug affecting WebRTC when mem use is over 2GB

2015-03-25 Thread andreas . gal
Force a buffer in <2GB memory and always copy into/out of that buffer?
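
For concreteness, a minimal sketch of that approach (illustrative only; as 
Daniel's follow-up citing bug 1107702 comment 20 points out, just reserving 
memory in the lower region did not turn out to be a workable fix):

#include <winsock2.h>
#include <iphlpapi.h>
#include <windows.h>
#include <cstdint>
#include <cstdio>
#pragma comment(lib, "iphlpapi.lib")

// Try to commit a scratch buffer at an explicit base address below the 2GB
// boundary; results would be copied out of it into normal allocations.
static void* AllocBelow2GB(SIZE_T size) {
  // Walk candidate bases under 0x80000000 in 1MB steps until one commits.
  // VirtualAlloc rounds the base down to the 64KB allocation granularity.
  for (uintptr_t base = 0x10000; base + size < 0x80000000u; base += 0x100000) {
    void* p = VirtualAlloc(reinterpret_cast<void*>(base), size,
                           MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    if (p) return p;
  }
  return nullptr;
}

int main() {
  ULONG size = 64 * 1024;  // the 64KB figure floated elsewhere in the thread
  auto* low = static_cast<IP_ADAPTER_ADDRESSES*>(AllocBelow2GB(size));
  if (!low) return 1;
  ULONG rv = GetAdaptersAddresses(AF_UNSPEC, 0, nullptr, low, &size);
  std::printf("GetAdaptersAddresses returned %lu\n", rv);
  // A real wrapper would copy the linked list out before releasing the buffer.
  VirtualFree(low, 0, MEM_RELEASE);
  return 0;
}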

Thanks,

Andreas

> On Mar 25, 2015, at 11:17 PM, Randell Jesup  wrote:
> 
> Thanks to detective work by a subscriber to dev.media (Tor-Einar
> Jarnbjo), we've found the cause of unexplained ICE (NAT-traversal)
> failures in WebRTC on Windows (bug 1107702).  This may also affect the
> code in netwerk that tracks the network link status.
> 
> It turns out that 32bit Windows programs with the large-address-aware
> flag (to use >2GB of memory) will fail calls to
> GetHostByName/GetHostByAddress/GetAdaptersInfo/GetAdaptersAddresses *if*
> you're currently using >2GB of process memory (likely it's if the buffer
> you pass in is above the 2GB point).
> 
> This bug (http://support.microsoft.com/en-us/kb/2588507) affects
> Vista/Server2008/Win7, and unfortunately seems to only be fixable by a
> user-installation of a hotfix.  :-(
> 
> We're pretty sure it's the cause of some previously-unexplained ICE
> failures in WebRTC/Hello.
> 
> Since switching all Windows64 Firefox installs to 64-bit isn't on a
> short-term roadmap, and there is no reasonable technical workaround at
> the code level we know of, our only other options are to drop >2GB
> support in Windows (ouch) or lean heavily on Microsoft to ship the fix
> to all Windows users affected in their automatic updates.
> 
> The only other idea I can think of would be to proxy all such uses to an
> entire separate *process*.  I can't tell you how sick it makes me to
> suggest that...  Anyone have a better idea?  Or does anyone want to drop
> above-2GB support?
> 
> -- 
> Randell Jesup, Mozilla Corp
> remove "news" for personal email


Re: Intent to deprecate: persistent permissions over HTTP

2015-03-06 Thread andreas . gal

> On Mar 6, 2015, at 6:18 PM, Ehsan Akhgari  wrote:
> 
> On 2015-03-06 1:14 PM, andreas@gmail.com wrote:
>> 
>>> On Mar 6, 2015, at 5:52 PM, Anne van Kesteren  wrote:
>>> 
>>> On Fri, Mar 6, 2015 at 6:33 PM,   wrote:
>>>> Is the threat model for all of these permissions significant enough to 
>>>> warrant the breakage?
>>> 
>>> What breakage do you envision?
>> 
>> I can no longer unblock popups on sites that use HTTP. The web is a big 
>> place. It will take a long time for everyone to move.
> 
> I think Anne is not proposing that.  He's proposing blocking persisting those 
> permissions.  IOW you would be able to still show popups from these websites, 
> but you won't be able to ask Firefox to remember your preference.

I know, but we will break the persisting. The user will be annoyed that popup 
unblocking doesn’t work as expected on HTTP sites.

I am all for securing dangerous permissions, but popups and notifications seem 
more like we are wagging our finger at the user in unhelpful ways. Most users 
will simply think Firefox is broken.

Thanks,

Andreas

> 
>>> Having said that:
>>> 
>>> * Geolocation allow for tracking the user
>>> * Fullscreen allows for impersonating the OS
>>> * Pointer Lock allows for spoofing
>> 
>> The two seem fairly trivial problems. The user will simply stop going to the 
>> spamming site. I don’t think it makes sense to treat them in the same bucket 
>> as the above 3.
> 
> I agree that the above three are more important problems to address, FWIW.
> 



Re: Intent to deprecate: persistent permissions over HTTP

2015-03-06 Thread andreas . gal

> On Mar 6, 2015, at 5:52 PM, Anne van Kesteren  wrote:
> 
> On Fri, Mar 6, 2015 at 6:33 PM,   wrote:
>> Is the threat model for all of these permissions significant enough to 
>> warrant the breakage?
> 
> What breakage do you envision?

I can no longer unblock popups on sites that use HTTP. The web is a big place. 
It will take a long time for everyone to move.

> 
> Having said that:
> 
> * Geolocation allow for tracking the user
> * Fullscreen allows for impersonating the OS
> * Pointer Lock allows for spoofing

The two seem fairly trivial problems. The user will simply stop going to the 
spamming site. I don’t think it makes sense to treat them in the same bucket as 
the above 3.

> * Popups allow for spamming the user
> * Notifications allow for spamming the user

Thanks,

Andreas

> 
> 
> -- 
> https://annevankesteren.nl/



Re: Intent to deprecate: persistent permissions over HTTP

2015-03-06 Thread andreas . gal
> 
> You might say that having a local network attacker able to see what
> your webcam is looking at is not scary, but I'm going to disagree.
> Also c.f. RFC 7258.

I asked for something very specific: popups. What is the threat model for the 
popup permission state?

Thanks,

Andreas


Re: Intent to deprecate: persistent permissions over HTTP

2015-03-06 Thread andreas . gal
Is the threat model for all of these permissions significant enough to warrant 
the breakage? Popups for example are annoying, but a spoofed origin to take 
advantage of whitelisted popups seems not terribly dangerous.

Thanks,

Andreas

> On Mar 6, 2015, at 5:27 PM, Anne van Kesteren  wrote:
> 
> A large number of permissions we currently allow users to store
> persistently for a given origin. I suggest we stop offering that
> functionality when there's no lock in the address bar. This will make
> it harder for a network attacker to abuse these permissions. This
> would affect UX for:
> 
> * Geolocation
> * Notification
> * Fullscreen
> * Pointer Lock
> * Popups
> 
> If you are interested in demos of how these function today:
> 
> * http://dontcallmedom.github.io/web-permissions-req/tests/geo-get.html
> * http://dontcallmedom.github.io/web-permissions-req/tests/notification.html
> * http://dontcallmedom.github.io/web-permissions-req/tests/fullscreen.html
> * http://dontcallmedom.github.io/web-permissions-req/tests/pointerlock.html
> * http://dontcallmedom.github.io/web-permissions-req/tests/popup.html
> 
> Note that we have already implemented this for getUserMedia(). You can
> contrast the UX for these two links:
> 
> * http://dontcallmedom.github.io/web-permissions-req/tests/gum-audiovideo.html
> * 
> https://dontcallmedom.github.io/web-permissions-req/tests/gum-audiovideo.html
> 
> This seems like a change we can make today that would be better for
> our users and nudge those that require persistence to do the right
> thing, without causing much harm.
> 
> 
> -- 
> https://annevankesteren.nl/


prebuilt libraries?

2014-11-25 Thread Andreas Gal

Would it make sense to check in some of the libraries we build that we very 
rarely change, and that don’t have a lot of configure dependencies people 
twiddle with? (icu, pixman, cairo, vp8, vp9). This could speed up build times 
in our infrastructure and for developers. This doesn’t have to be in 
mozilla-central. mach could pick up a matching binary for the current 
configuration from github or similar. Has anyone looked into this?

Thanks,

Andreas


Re: power use on Yosemite

2014-11-01 Thread Andreas Gal
> 
> Are we using the discrete GPU when Chrome is not?

That was my first guess as well. As far as I can tell we fall back to the 
integrated GPU just fine, according to Activity Monitor. Even App Nap seems 
to work when FF is occluded. Yet, our avg energy impact is 5x that of Chrome.

Andreas

> 
> - Kyle



power use on Yosemite

2014-11-01 Thread Andreas Gal

I am using Nightly on Yosemite and power use is pretty atrocious. The battery 
menu tags Firefox Nightly as a significant battery hog, and I can confirm this 
from the user experience perspective as well. My battery time is a fraction of 
using Chrome for the same tasks.

Not every kind of content seems to trigger this behavior, but Google Sheets in 
Firefox seems to be a pretty reliable way to drain my battery quickly.

I checked with Instruments and I don’t think CPU utilization per se is the 
problem. We use less CPU than Chrome, especially if you add up all the many 
Chrome processes. Our CPU utilization is better than Chrome’s but our battery 
behavior is much worse. That’s really odd.

Ideas welcome.

Thanks,

Andreas


Re: Firefox 2.2.0 and everything.me

2014-10-27 Thread Andreas Gal

> On Oct 27, 2014, at 1:16 AM, Karl Dubost  wrote:
> 
> Andreas,
> 
> On Oct 27, 2014, at 08:15, Andreas Gal  wrote:
>> What happens when a user types letters into the Google search box we ship by 
>> default in Firefox?
> 
> Do you mean desktop? Sorry I was not clear, but I was talking about Firefox 
> OS.
> But that seems unrelated to everything.me
> 
> 
> About Desktop:
> * For Google search box on Desktop, I suspect the one which is available in 
> the initial tab (center of the page) is also sending XHR requests to Google 
> for each letter.
> * For the URL bar, it seems it does not send anything. At least it doesn't 
> suggest anything. It will send to Google keywords which are not URIs, once 
> you have typed "enter"
> 
> At least that is my understanding.

So we send an XHR request for each letter to Google on Desktop (search box), 
and XHR requests to e.me on Firefox OS. How are these cases different? Are they 
both a problem? I am just trying to understand the exact nature of your concern.

Thanks,

Andreas

> 
> 
> -- 
> Karl Dubost, Mozilla
> http://www.la-grange.net/karl/moz
> 



Re: Firefox 2.2.0 and everything.me

2014-10-27 Thread Andreas Gal
What happens when a user types letters into the Google search box we ship by 
default in Firefox?

Thanks,

Andreas

> On Oct 27, 2014, at 12:08 AM, Karl Dubost  wrote:
> 
> In Firefox 2.2.0, each time you try to enter a letter, there are a list of 
> icons displayed which seems to be delivered by Everything.me.
> 
> I see two issues:
> 
> 1. experience-wise it is annoying to have the flickering of icons for each 
> letter typed.
> 
> 2. It seems to be a change of policy in terms of sharing data the user type 
> in the URL bar. Is there a possibility to put that off and not having 
> whatever you type up there to be sent somewhere the user does not expect?
> 
> Maybe there are bugs for this.
> Thanks.
> 
> -- 
> Karl Dubost, Mozilla
> http://www.la-grange.net/karl/moz
> 


Re: The worst piece of Mozilla code

2014-10-16 Thread Andreas Gal

I am glad to hear there is so much activity happening there. Kind of makes my 
point though: that code needed it :)

Andreas

On Oct 16, 2014, at 8:03 PM, Josh Matthews  wrote:

> I'm not certain that the image/src/ code is as bad as you make out any more. 
> bholley certainly is no longer the expert there; I took over a bunch of his 
> work to clean it up a year or two ago, and Seth is the benevolent dictator 
> now and has done some good cleanup work on it as well.
> 
> Cheers,
> Josh
> 
> On 2014-10-16 6:45 PM, Andreas Gal wrote:
>> 
>> The code is really bizarre, needlessly complex and impossible to understand 
>> and maintain. We could use a lot of improvements in this area to better 
>> decide what images to load when and how and when to retain or purge them. 
>> There is a lot of state machinery and multi-threading at work. I wouldn’t be 
>> surprised if we find a couple nasty correctness bugs if we ever decide to 
>> clean up this mess. bholley is the expert for this code I think. He can give 
>> you a better overview (full disclosure: this code used to be much worse 
>> before he went to town on it).
>> 
>> Andreas
>> 
>> On Oct 16, 2014, at 7:33 PM, Nicholas Nethercote  
>> wrote:
>> 
>>> On Fri, Oct 17, 2014 at 8:55 AM, Andreas Gal  wrote:
>>>> 
>>>> I would like to nominate image/src/* and in particular its class hierarchy 
>>>> which completely doesn’t make any sense what so ever. imgRequest, 
>>>> imgIRequest, we got it all.
>>> 
>>> Does this cause correctness problems, or is it just hard to read and
>>> thus modify? Is there a path that could be taken to gradually improve
>>> it?
>>> 
>>> Nick
>> 
> 


Re: The worst piece of Mozilla code

2014-10-16 Thread Andreas Gal

The code is really bizarre, needlessly complex and impossible to understand and 
maintain. We could use a lot of improvements in this area to better decide what 
images to load when and how and when to retain or purge them. There is a lot of 
state machinery and multi-threading at work. I wouldn’t be surprised if we find 
a couple nasty correctness bugs if we ever decide to clean up this mess. 
bholley is the expert for this code I think. He can give you a better overview 
(full disclosure: this code used to be much worse before he went to town on it).

Andreas

On Oct 16, 2014, at 7:33 PM, Nicholas Nethercote  wrote:

> On Fri, Oct 17, 2014 at 8:55 AM, Andreas Gal  wrote:
>> 
>> I would like to nominate image/src/* and in particular its class hierarchy 
>> which completely doesn’t make any sense what so ever. imgRequest, 
>> imgIRequest, we got it all.
> 
> Does this cause correctness problems, or is it just hard to read and
> thus modify? Is there a path that could be taken to gradually improve
> it?
> 
> Nick



Re: The worst piece of Mozilla code

2014-10-16 Thread Andreas Gal

I would like to nominate image/src/* and in particular its class hierarchy, 
which completely doesn’t make any sense whatsoever. imgRequest, imgIRequest, 
we got it all.

Andreas

On Oct 16, 2014, at 6:44 PM, Randell Jesup  wrote:

>> On Fri, Oct 17, 2014 at 1:32 AM, Nicholas Nethercote wrote:
>> 
>>> I was wondering what people think is the worst piece of code in the
>>> entire Mozilla codebase. I'll leave the exact meanings of "worst" and
>>> "piece of code" unspecified...
>>> 
>> 
>> Probably not the worst, but always deserves a mention:
>> http://dxr.mozilla.org/mozilla-central/source/layout/xul/nsSprocketLayout.cpp#632
>> 
>> // That's it!  If you made it this far without having a nervous
>> // breakdown, congratulations!  Go get yourself a beer.
>> 
>> ... which ties into the XUL thread :-)
> 
> Reminds me of the total rewrite I did of the table-layout code for
> SpyGlass eons ago, to actually work correctly (I think I even had it
> handle bug 10212 correct - height 100% when nesting tables).  Quite
> brain-warping, especially how changes can ripple through multiple times.
> 
> -- 
> Randell Jesup, Mozilla Corp
> remove "news" for personal email


Re: Breakdown of Firefox full installer

2014-10-14 Thread Andreas Gal

I looked at lzma2 a while ago for FFOS. I got pretty consistently 30% smaller 
omni.ja with that. We could add it pretty easily to our decompression code but 
it has slightly different memory behavior.

Andreas

On Oct 13, 2014, at 5:39 PM, Gregory Szorc  wrote:

> On 10/13/14 4:54 PM, Chris More wrote:
>> Does anyone know or could any of you create a breakdown of the major blocks 
>> of the Firefox installer and each of their respective sizes or percentage of 
>> the whole?
>> 
>> For example, the win32 installer for Firefox 32 is 34MB. The Firefox Growth 
>> team [1] like to know of that 34MB, what is the percentage or size of each 
>> of the components within the 34MB. As for the granularity of the breakdown, 
>> it would be by some logic way of breaking down Firefox. For example, 
>> SpiderMonkey, tools, XUL, etc. I'll leave the granularity up to you on what 
>> you consider a logic block to quantify together.
>> 
>> Why am I asking this?
>> 
>> The win32 Firefox full installer continues to grow (see attachment) each 
>> release and it has been on an increasing growth since Firefox 29. Like 
>> anything on the web, the time it takes to download something (webpage, 
>> binary file, etc.) affects the key conversion rate by some amount. The 
>> Firefox Growth team has a project to understand what features/changes in 
>> Firefox are contributing to the growth or size of the installer. We've asked 
>> a few times previously, but it doesn't look like the documentation or 
>> analysis exist.
>> 
>> Would anyone be able to take on this project as it would be very helpful to 
>> the team? I am imagining a pie chart of the the current installer and then a 
>> table of the name of each component, their size (KB or MB), and any 
>> additional meta data.
> 
> If you are looking for ideas on how to reduce download size, the way omni.ja 
> is included in the installer could be reduced by 4+ MB. Both omni.ja and 
> browser/omni.ja are zip archives, where each file has a separate compression 
> context. If you treat all files from those two archives as a single 
> compression context (like tar+bz2) and stuff them in a single archive, you 
> get size reductions of 4+ MB due to the compression engine sharing state 
> between files. We can't install omni.ja like this on client systems because 
> it is bad for performance (we want the ability to extract individual files 
> without reading a large compression context - this is a benefit of the zip 
> format). But we could ship the files optimized for size (or have installer's 
> compression handle the files individually) and have the installer re-encode 
> them to omni.ja so they are optimized for performance.
> 
> I'm not sure if this has been considered or attempted before. Things are 
> slightly complicated by the fact that omni.ja is a slightly customized zip 
> format. We'd need to ship code to encode omni.ja.


Re: Icon fonts in FxOS

2014-06-18 Thread Andreas Gal

On Jun 18, 2014, at 2:03 AM, Vivien Nicolas  wrote:

> 
> On 06/17/2014 09:18 PM, James Burke wrote:
>> On 6/17/14, 10:08 AM, Vivien Nicolas wrote:
>>> That's true. Actually there are many other hacks that depends on the fact 
>>> that application are certified. So even if I would like to have more apps 
>>> as privileged apps just for the principle, it's not that simple. And we may 
>>> need to reconsider the |privileged| status of the email app based on some 
>>> of the use case on some low end devices for now.
>>> 
>>> So one of the only reason the email app has been made |privileged| is for 
>>> some CSP compliance things, and because it does not needs APIs that are 
>>> certified-only. But we may need to keep it certified for perf reasons if 
>>> needed. It will depends on the impact of icon font there.
>> 
>> I appreciate there are always tradeoffs, but I also want to caution against 
>> just proceeding because there is an escape value via certified. We have a 
>> train model, and if it takes another train to avoid certified-only, all apps 
>> benefit. It is already disconcerting that just switching the type value in 
>> the manifest from certified to privileged, we see a 20ms slowdown[1].
> 
> TBH I'm not surpised. We (re)discovered last september/october that the CSP 
> in JS was consuming a lot of startup time. Not because the JS code was slow, 
> but because of 1 xpconnect call per file, and apps loads a lot of files.
> 
> So for certified app, we landed a fast path. It would be good to investigate 
> if this is purely related to the CSP or not, by simply disabling it 
> (security.csp.enable = false), and if yes, investigate if reducing the number 
> of files by aggregating them helps.

Please profile this. I am sure this can be optimized. We likely don’t need to 
involve xpconnect here for starters.

Thanks,

Andreas

> 
>> 
>> The greater concern is these certified escapes build up, and then taking the 
>> time to undo them later eats into the next certified escape that is wanted, 
>> so the gap will continue to grow. A good way to start fighting the issue is 
>> to stop adding to the pile.
>> 
> 
> It's true that this is a big concern. The greater concern is that the web 
> platform is not competitive / successful. So in order to mitigate some of the 
> issues, that are always solvable but takes time, we adding to the pile. We 
> know that at some point we have to pay the cost, and we always try to not 
> take this path - sadly sometimes there is no other choice for a given 
> release, and then we need to add to the pile.
> 
> 
>> On icon fonts, it would be good to make sure there implemented with 
>> accessibility in mind. This document[1] talks about that, and mentions a 
>> Firefox bug[2] about aria-hidden that may need some attention if icon fonts 
>> are used in buttons.
>> 
>> [1] https://bugzilla.mozilla.org/show_bug.cgi?id=1024005
>> [2] http://filamentgroup.com/lab/bulletproof_icon_fonts.html
>> [3] https://bugzilla.mozilla.org/show_bug.cgi?id=948540
>> 
>> James
>> 
> 


Re: OMTC on Windows

2014-05-30 Thread andreas . gal
Please read my email again. This kind of animation cannot be rendered with high 
FPS by any engine. It's simply conceptually expensive and inefficient for the 
DOM rendering model. We will work on matching other engines if we are slightly 
slower than we could be, but you will never reach solid performance on low end 
hardware with the current approach. While we work on squeezing out a few more 
FPS, please work on implementing a tab strip that can be rendered efficiently.

Andreas

Sent from Mobile.

> On May 30, 2014, at 10:46, Dao  wrote:
> 
>> On 30.05.2014 07:28, Matt Woodrow wrote:
>> I definitely agree with this, but we also need OMTAnimations to be
>> finished and enabled before any of the interesting parts of the UI can
>> be converted.
>> 
>> Given that, I don't think we can have this conversation at the expense
>> of trying to fix the current set of regressions from OMTC.
> 
> Even if off-main-thread animations worked and we somehow re-designed and 
> re-implemented the tab strip today, this still wouldn't wipe away the gist of 
> the regressions, which really isn't about the tab strip. The tab strip uses 
> web technology or derivative thereof (XUL flexbox, but I guess that's not at 
> fault here...). Telling web developers that they should only ever animate 
> transforms / opacity or use canvas is a flawed strategy when Gecko performs 
> worse than it used to and/or worse than other engines on animations involving 
> reflows.


Re: OMTC on Windows

2014-05-30 Thread Andreas Gal

There are likely two causes here.

First, until we have APZ enabled it’s very unlikely that we can ever maintain 
high frame-rate scrolling on low-end hardware. OMTC is a prerequisite for APZ 
(async pan/zoom). Low-end hardware is simply not fast enough to repaint and 
buffer-rotate at 60 FPS.

Now for Intel hardware being slow there could be a couple of reasons, and APZ 
might actually fix them. If I remember correctly Atom GPUs are PowerVR-based, 
which is a tile-based rendering architecture. It splits the frame buffer into 
small tiles and renders those. To do this efficiently it defers rendering for 
as long as possible. Other GPUs start rendering as soon as possible, whereas 
PowerVR waits until the entire frame is ready and only then renders it. We do a 
couple of operations while rendering that might force a pipeline flush, which 
likely forces PowerVR to render right away, which is very bad for PowerVR’s 
particular rendering model. If you can point us to some specific hardware we 
really suck on, we can definitely look into this.

Andreas

On May 30, 2014, at 6:25 AM, avi...@gmail.com wrote:

> On Friday, May 30, 2014 8:22:25 AM UTC+3, Matt Woodrow wrote:
>> Thanks Avi!
>> 
>> 
>> 
>> I can reproduce a regression like this (~100% slower on 
>> 
>> iconFade-close-DPIcurrent.all) with my machine forced to use the intel 
>> 
>> GPU, but not with the Nvidia one.
> 
> Indeed, and it's not the first time we notice that Firefox performs much 
> worse with Intel iGPUs compared to nvidia.
> 
> This comment: https://bugzilla.mozilla.org/show_bug.cgi?id=894128#c30 
> compares scrolling performance on a Wikipedia page, on a different system 
> than the one I used to produce these OMTC numbers with.
> 
> It suggests that we were already doing badly enough with intel iGPUs even 
> before OMTC (about 300% worse and much more noisy intervals than nvidia on a 
> Wikipedia page if we're to believe those numbers), and it looks as if with 
> OMTC the regression compared to nvidia increased even more (in relative 
> terms).
> 
> In comment 1 of the same bug 894128, I also compared the performance of 
> Chrome and IE on the same pages.
> 
> FWIW, IE is able to maintain 100% smooth scrolling on some really complex 
> pages even on a _very_ low end Atom system (Intel iGPU), while Firefox 
> doesn't come anywhere near it.
> 
> While scroll and tab animations are possibly different things, I do think 
> there's a line which connects these dots, and it's that for whatever reason, 
> Firefox does really badly on Intel iGPUs.
> 
> Which is unfortunate, because on many many systems these are the only 
> available GPUs, and they're already considered good enough for the majority 
> of users to not need a dedicated GPU.
> 
> - avih


Re: OMTC on Windows

2014-05-29 Thread Andreas Gal

I think we should shift the conversation to how we actually animate here. 
Animating by trying to reflow and repaint at 60fps is just a bad idea. This 
might work on very high-end hardware, but it will cause poor performance on the 
low-end Windows notebooks people buy these days. In other words, I am pretty 
sure our animation here was bad for a lot of our users pre-OMTC.

OMTC enables us to do smooth 60fps animations even under high load and even on 
very low-end hardware, as long as we do the animation right. So let’s focus on that 
and figure out how to draw a tab strip that doesn’t hit pathological repainting 
paths.

I see two options here. We have to change our UX such that we can execute a 
smooth animation on the compositor (transforms, opacity changes, filter 
effects, etc), or we should draw the tab strip with canvas, which is more 
suitable for complex custom animations than reflow.

Andreas

On May 29, 2014, at 10:14 PM, avi...@gmail.com wrote:

> So, wrt TART, I now took the time to carefully examine tab animation visually 
> on one system.
> 
> TL;DR:
> - I think OMTC introduces a clearly visible regression with tab animation 
> compared to without OMTC.
> - I _think_ it regresses more with tab close than with tab open animation.
> - The actual throughput regression is probably bigger than indicated by TART 
> numbers.
> 
> 
> The reason for the negative bias is that the TART results are an average of 
> 10 different animations, but only one of those is close to pure graphics perf 
> numbers, and when you look only on this test, the regression is bigger than 
> 50-100% (more like 100-400%).
> 
> The details:
> 
> System: Windows 8.1 x64, i7-4500u, using Intel's iGPU (HD4400), and with 
> official Firefox nightly 32bit (2014-05-29).
> 
> First, visually: both with and without ASAP mode, to my eyes, tab animation 
> with OMTC is less smooth, and seems to have lower frame rate than without 
> OMTC.
> 
> As for what TART measures, of all the TART subtests, there are 3 which are 
> most suitable for testing pure graphics performance - they test the css 
> fade-in and fade-out (that's the close/open animation) of a tab without 
> actually opening or closing a browser tab, so whatever performance it has, 
> the limit comes only from the animation itself and it doesn't include other 
> overheads.
> 
> These tests are the ones which have "fade" in their name, and only one of 
> them is enabled by default in talos - the other two are available only when 
> running TART locally and then manually selecting animations to run.
> 
> I'll focus on a single number which is the average frame interval of the 
> entire animation (these are the ".all" numbers), for the fade animation at 
> default DPI (which is 1 on my system - so the most common).
> 
> What TART measures locally on my system:
> 
> OMTC without ASAP mode (as out of the box config as it gets):
> iconFade-close-DPIcurrent.all Average (5): 18.91 stddev: 0.86
> iconFade-open-DPIcurrent.all Average (5): 17.61 stddev: 0.78
> 
> OMTC with ASAP:
> iconFade-close-DPIcurrent.all Average (5): 18.47 stddev: 0.46
> iconFade-open-DPIcurrent.all Average (5): 10.08 stddev: 0.46
> 
> While this is an average of only 5 runs, stddev shows it's reasonably 
> consistent, and the results are also consistent when I tried more.
> 
> We can already tell that close animation just doesn't get below ~18.5ms/frame 
> on this system, ASAP doesn't affect it at all. We can also see that the open 
> animation is around 60fps without ASAP (17.6 can happen with our inaccurate 
> interval timers) and with ASAP it goes down to about 10ms/frame.
> 
> Without OMTC and without ASAP:
> iconFade-close-DPIcurrent.all Average (5): 16.54 stddev: 0.16
> iconFade-open-DPIcurrent.all Average (5): 16.52 stddev: 0.12
> 
> Without OMTC and with ASAP:
> iconFade-close-DPIcurrent.all Average (5): 5.53 stddev: 0.07
> iconFade-open-DPIcurrent.all Average (5): 6.37 stddev: 0.08
> 
> The results are _much_ more stable (stddev), and quite lower (in ASAP) and 
> closer to 16.7 in "normal" mode.
> 
> While I obviously can't visually notice differences when the frame rate is 
> higher than my screen's 60hz, from what I've seen so far, both visually and 
> at the numbers, I think TART is not less reliable than before, it doesn't 
> look to me as if ASAP introduces very bad bias (I couldn't deduct any), and 
> OMTC does seem regress tab animations meaningfully.
> 
> - avih


Re: Oculus VR support & somewhat-non-free code in the tree

2014-04-15 Thread Andreas Gal

On Apr 15, 2014, at 9:00 PM, Robert O'Callahan  wrote:

> On Wed, Apr 16, 2014 at 11:14 AM, Vladimir Vukicevic 
> wrote:
> 
>> Note that for purposes of this discussion, "VR support" is minimal.. some
>> properties to read to get some info about the output device (resolution,
>> eye distance, distortion characteristics, etc) and some more to get the
>> orientation of the device.  This is not a highly involved API nor is it
>> specific to Oculus, but more as a first-put based on hardware that's easily
>> available.
>> 
> 
> A couple of related questions that might matter:
> 
> How much code are we talking about? (I'm too lazy to register to find out)

https://github.com/jdarpinian/LibOVR

It’s really not a lot of code. There is some signal processing math to do sensor 
filtering and fusion, but it’s only a few hundred lines. The rest is standard 
USB HID device access glue for osx/linux/win.
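
For a flavor of the kind of filtering involved, here is a generic 
complementary-filter sketch (not LibOVR's actual code; the names, constants, 
and sample values are made up for illustration):

#include <cmath>
#include <cstdio>

struct Orientation { double pitch = 0.0, roll = 0.0; };  // radians

// Integrate the gyro for responsiveness and slowly correct pitch/roll drift
// toward the gravity vector reported by the accelerometer.
void UpdateOrientation(Orientation& o,
                       double gyroPitchRate, double gyroRollRate,  // rad/s
                       double ax, double ay, double az,            // m/s^2
                       double dt, double alpha = 0.98) {
  // Gyro integration: fast, but drifts over time.
  double pitchGyro = o.pitch + gyroPitchRate * dt;
  double rollGyro  = o.roll  + gyroRollRate  * dt;
  // Accelerometer tilt estimate: noisy, but drift-free while mostly static.
  double pitchAcc = std::atan2(-ax, std::sqrt(ay * ay + az * az));
  double rollAcc  = std::atan2(ay, az);
  // Blend: mostly trust the gyro, nudge toward the accelerometer.
  o.pitch = alpha * pitchGyro + (1.0 - alpha) * pitchAcc;
  o.roll  = alpha * rollGyro  + (1.0 - alpha) * rollAcc;
}

int main() {
  Orientation o;
  // Fake a few 1kHz samples of a slow pitch-up while the device sits level.
  for (int i = 0; i < 1000; ++i) {
    UpdateOrientation(o, 0.1, 0.0, 0.0, 0.0, 9.81, 0.001);
  }
  std::printf("pitch=%.3f rad roll=%.3f rad\n", o.pitch, o.roll);
  return 0;
}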

Andreas

> 
> Any ideas on how Oculus might evolve their code in the future? Will it add
> new functionality we want?
> 
> Rob
> -- 
> Jtehsauts  tshaei dS,o n" Wohfy  Mdaon  yhoaus  eanuttehrotraiitny  eovni
> le atrhtohu gthot sf oirng iyvoeu rs ihnesa.r"t sS?o  Whhei csha iids  teoa
> stiheer :p atroa lsyazye,d  'mYaonu,r  "sGients  uapr,e  tfaokreg iyvoeunr,
> 'm aotr  atnod  sgaoy ,h o'mGee.t"  uTph eann dt hwea lmka'n?  gBoutt  uIp
> waanndt  wyeonut  thoo mken.o w


Re: Oculus VR support & somewhat-non-free code in the tree

2014-04-15 Thread Andreas Gal

On Apr 15, 2014, at 4:17 PM, Benoit Jacob  wrote:

> 
> 
> 
> 2014-04-15 18:28 GMT-04:00 Andreas Gal :
> 
> You can’t beat the competition by fast following the competition. Our 
> competition are native, closed, proprietary ecosystems. To beat them, the Web 
> has to be on the bleeding edge of technology. I would love to see VR support 
> in the Web platform before its available as a builtin capability in any major 
> native platform.
> 
> Can't we?   (referring to: "You can’t beat the competition by fast following 
> the competition.”)

Yes, we can. Look at some of the performance characteristics of FFOS on low-end 
hardware. We beat Android and other native systems on a regular basis on key 
performance metrics like startup performance by leveraging architectural 
advantages of the Web stack (lazy loading, etc). Or compare opening the App 
Store app on Mac OS X with going to a marketplace website like amazon.com. We 
load a rich content experience faster over the net than my high-end Mac with an 
SSD loads from disk, because the Web has evolved to a place where it has better 
capabilities for these tasks than native.

> 
> The Web has a huge advantage over the competition ("native, closed, 
> proprietary ecosystems"):
> 
> The web only needs to be good enough.

Aiming low is always wrong. Always. It is true that the Web has massive reach, 
but that’s not an excuse to be stagnant and reach for the “lowest common 
denominator” as you are proposing. The massive reach of the Web helps us to 
get innovation to people faster. It doesn’t remove the need to innovate.

> 
> Look at all the wins that we're currently scoring with Web games. (I mention 
> games because that's relevant to this thread). My understanding of this 
> year's GDC announcements is that we're winning. To achieve that, we didn't 
> really give the web any technical superiority over other platforms; in fact, 
> we didn't even need to achieve parity. We merely made it good enough. For 
> example, the competition is innovating with a completely new platform to "run 
> native code on the web", but with asm.js and emscripten we're showing that 
> javascript is in fact good enough, so we end up winning anyway.

We aren’t winning just yet. We barely got the foundation laid for Web gaming 
(even though I agree that we likely have tipped the scale now). In any case, we 
got here through technical excellence and innovation. asm.js is not merely good 
enough, as you are claiming. It is the fastest, most widely available way to 
deliver portable game code to devices, with performance rivaling native 
performance. That’s very different from “let’s just trail the market and do as 
little as we need to.”

> 
> What we need to ensure to keep winning is 1) that the Web remains good enough 
> and 2) that it remains true, that the Web only needs to be good enough.
> 
> In this respect, more innovation is not necessarily better, and in fact, the 
> cost of innovating in the wrong direction could be particularly high for the 
> Web compared to other platforms. We need to understand the above 2) point and 
> make sure that we don't regress it. 2) probably has something to do with the 
> fact that the Web is the one "write once, run anywhere" platform and, on top 
> of that, also offers "run forever". Indeed, compared to other platforms, we 
> care much more about portability and we are much more serious about 
> committing to long-term platform stability. Now my point is that we can only 
> do that by being picky with what we support. There's no magic here; we don't 
> get the above 2) point for free.

I think you get the history of the Web all wrong. The Web has always been and 
will always be like the Wild West. Innovation happens all over the place, and 
we iterate towards a stable, standardized point after innovation happened. This 
is the biggest strength of the Web. It’s not governed by a committee approving 
and managing the pace of innovation (or worse, by a single company controlling 
the ecosystem like Google or Apple). Nobody owns the Web and nobody can stop 
innovation. Of the 4 or so major browser vendors, if 2 move in some direction 
the other 2 have to follow suit or suffer the consequences of not being 
competitive on some characteristics. At the same time, nobody can go alone and 
fork the Web because nobody has enough market share to force a standard on 
their own. This is why Google’s proprietary extensions like NaCl and Dart are 
failing to get traction.

Innovation is the lifeblood of the Web and we need heretics like Vlad to push 
its boundaries. I remember when Vlad first started pushing for WebGL. A lot of 
people felt it was crazy talk to expose GL to the Web, and today we can’t imagine a 
Web without it. Knowing Vlad and hi

Re: Oculus VR support & somewhat-non-free code in the tree

2014-04-15 Thread Andreas Gal

You can’t beat the competition by fast following the competition. Our 
competition are native, closed, proprietary ecosystems. To beat them, the Web 
has to be on the bleeding edge of technology. I would love to see VR support in 
the Web platform before it’s available as a built-in capability in any major 
native platform.

Andreas

On Apr 15, 2014, at 2:57 PM, Robert O'Callahan  wrote:

> On Wed, Apr 16, 2014 at 3:14 AM, Benoit Jacob wrote:
> 
>> If VR is not yet a thing on the Web, could you elaborate on why you think
>> it should be?
>> 
>> I'm asking because the Web has so far mostly been a common denominator,
>> conservative platform. For example, WebGL stays at a distance behind the
>> forefront of OpenGL innovation. I thought of that as being intentional.
>> 
> 
> That is not intentional. There are historical and pragmatic reasons why the
> Web operates well in "fast follow" mode, but there's no reason why we can't
> lead as well. If the Web is going to be a strong platform it can't always
> be the last to get shiny things. And if Firefox is going to be strong we
> need to lead on some shiny things.
> 
> So we need to solve Vlad's problem.
> 
> Rob
> -- 
> Jtehsauts  tshaei dS,o n" Wohfy  Mdaon  yhoaus  eanuttehrotraiitny  eovni
> le atrhtohu gthot sf oirng iyvoeu rs ihnesa.r"t sS?o  Whhei csha iids  teoa
> stiheer :p atroa lsyazye,d  'mYaonu,r  "sGients  uapr,e  tfaokreg iyvoeunr,
> 'm aotr  atnod  sgaoy ,h o'mGee.t"  uTph eann dt hwea lmka'n?  gBoutt  uIp
> waanndt  wyeonut  thoo mken.o w


Re: Oculus VR support & somewhat-non-free code in the tree

2014-04-14 Thread Andreas Gal

Vlad asked a specific question in the first email. Are we comfortable using 
another open (albeit not open enough for MPL) license on trunk while we rewrite 
the library? Can we compromise on trunk in order to innovate faster and only 
ship to GA once the code is MPL friendly via re-licensing or re-writing? What 
is our view on this narrow question?

Andreas

On Apr 14, 2014, at 5:35 PM, Vladimir Vukicevic  wrote:

> On Monday, April 14, 2014 7:29:43 PM UTC-4, Ralph Giles wrote:
>>> The goal would be to remove LibOVR before we ship (or keep it in assuming 
>>> it gets relicensed, if appropriate), and replace it with a standard "Open 
>>> VR" library.
>> 
>> Can you dlopen the sdk, so it doesn't have to be in-tree? That still
>> leaves the problem of how to get it on a user's system, but perhaps an
>> add-on can do that part while the interface code in is-tree.
> 
> Unfortunately, no -- the interface is all C++, and the headers are licensed 
> under the same license.  A C layer could be written, but then we're back to 
> having to ship it separately via addon or plugin anyway.
> 
>> Finally, did you see Gerv's post at
>> 
>> http://blog.gerv.net/2014/03/mozilla-and-proprietary-software/
> 
> Yes -- perhaps unsurprisingly, I disagree with Gerv on some of the 
> particulars here.  Gerv's opinions are his own, and are not official Mozilla 
> policy.  That post I'm sure came out of a discussion regarding this very 
> issue here.  In particular, my stance is that we build open source software 
> because we believe there is value in that, and that it is the best way to 
> build innovative, secure, and meaningful software.  We don't build open 
> source software for the sake of building open source.
> 
>- Vlad


Re: Including Adobe CMaps

2014-02-26 Thread Andreas Gal

Could we compress major parts of omni.ja en bloc? We could, for example, stick 
all JS we load at startup into a zip with zero compression and then compress 
that into an outer zip. I think we already support nested containers like that. 
Assuming your math is correct, even without adding LZMA2, just sticking with zip 
we should get better compression and likely better load times. Wdyt?

Andreas

On Feb 27, 2014, at 12:25 AM, Mike Hommey  wrote:

> On Wed, Feb 26, 2014 at 08:56:37PM +0100, Andreas Gal wrote:
>> 
>> This randomly reminds me that it might be time to review zip as our
>> compression format for omni.ja.
>> 
>> ls -l omni.ja 
>> 
>> 7862939
>> 
>> ls -l omni.tar.xz (tar and then xz -z)
>> 
>> 4814416
>> 
>> LZMA2 is available as a public domain implementation. It uses a bit
>> more memory than zip, but its still within reason (the default level 6
>> is around 1MB to decode I believe). A fairly easy way to use it would
>> be to add support for a custom compression format for our version of
>> libjar.
> 
> IIRC, it's also slower both to compress and decompress. Note you're
> comparing oranges with apples, too.
> Jars are per-file compression. tar.xz is per-archive compression.
> This is what i get:
> 
> $ stat -c %s ../omni.ja
> 8609399
> 
> $ unzip -q ../omni.ja
> $ find -type f -not -name *.xz | while read f; do a=$(stat -c %s $f); xz 
> --keep -z $f; b=$(stat -c %s $f.xz); if [ "$a" -lt "$b" ]; then rm $f.xz; 
> else rm $f; fi; done
> # The above compresses each file individually, and keeps either the
> # decompressed file of the compressed file depending which is smaller,
> # which is essentially what we do when creating omni.ja
> 
> $ find -type f | while read f; do stat -c %s $f; done | awk '{t+=$1}END{print 
> t}'
> # Sum all file sizes, excluding directories that du would add.
> 7535827
> 
> That is, obviously, without jar headers.
> $ unzip -lv ../omni.ja 2>/dev/null | tail -1
> 27696753  8260243  70%  2068 files
> $ echo $((8609399 - 8260243))
> 349156
> 
> Thus, that same omni.ja that is 8609399, with xz compression would be
> 7884983. Not much of a win, and i doubt it's worth it considering the
> runtime implication.
> 
> However, there is probably room for improvement on the installer side.
> 
> Mike



Re: Including Adobe CMaps

2014-02-26 Thread Andreas Gal

This sounds like quite an opportunity to shorten download times and reduce CDN 
load. Who wants to file the bug? :)

Andreas

On Feb 26, 2014, at 9:44 PM, Benjamin Smedberg  wrote:

> On 2/26/2014 3:21 PM, Jonathan Kew wrote:
>> On 26/2/14 19:57, Andreas Gal wrote:
>>> 
>>> Lets turn this question around. If we had an on-demand way to load stuff 
>>> like this, what else would we want to load on demand?
>> 
>> A few examples:
>> 
>> Spell-checking dictionaries
>> Hyphenation tables
>> Fonts for additional scripts
> Yes!
> 
> Also maybe ICU data tables, although the current web-facing APIs don't 
> support asynchronous download very well.
> 
> --BDS
> 


Re: Including Adobe CMaps

2014-02-26 Thread Andreas Gal

Let’s turn this question around. If we had an on-demand way to load stuff like 
this, what else would we want to load on demand?

Andreas

On Feb 26, 2014, at 8:53 PM, Bobby Holley  wrote:

> That's still a ton for something that most of our users will not (or will
> rarely) use. I think we absolutely need to get an on-demand story for this
> kind of stuff. It isn't the first time it has come up.
> 
> bholley
> 
> 
> On Wed, Feb 26, 2014 at 11:38 AM, Brendan Dahl  wrote:
> 
>> Yury Delendik worked on reformatting the files a bit and was able to get
>> them down to 1.1MB binary which gzips to 990KB. This seems like a
>> reasonable size to me and involves a lot less work than setting up a
>> process for distributing these files via CDN.
>> 
>> Brendan
>> 
>> On Feb 24, 2014, at 10:14 PM, Rik Cabanier  wrote:
>> 
>>> 
>>> 
>>> 
>>> On Mon, Feb 24, 2014 at 5:01 PM, Andreas Gal 
>> wrote:
>>> 
>>> My assumption is that certain users only need certain CMaps because they
>> tend to read only documents in certain languages. This seems like something
>> we can really optimize and avoid ahead-of-time download cost for.
>>> 
>>> So, you'd only install the Korean CMaps if the language is Korean?
>>> The problem with that is that if a user might install a English version
>> of Firefox but still open Korean PDFs (which will then display as junk)
>>> 
>>> 
>>> The fact that we don't do this yet doesn't seem like a good criteria.
>> There is a lot of good things we aren't doing yet. You can be the first to
>> change that on this particular topic, if it technically makes sense.
>>> 
>>> Load-on-demand (with an option to download all of them) seems like a
>> nice solution. A large majority of users will never need CMaps or only a
>> very small subset.
>>> 
>>> On Feb 25, 2014, at 1:27 AM, Brendan Dahl  wrote:
>>> 
>>>> It's certainly possible to load dynamically. Do we currently do this
>> for any other Firefox resources?
>>>> 
>>>> From what I've seen, many PDF's use CMaps even if they don't
>> necessarily have CJK characters, so it may just be better to include them.
>> FWIW both Popper and Mupdf embed the CMaps.
>>>> 
>>>> Brendan
>>>> 
>>>> On Feb 24, 2014, at 3:01 PM, Andreas Gal 
>> wrote:
>>>> 
>>>>> Is this something we could load dynamically and offline cache?
>>>>> 
>>>>> Andreas
>>>>> 
>>>>> Sent from Mobile.
>>>>> 
>>>>>> On Feb 24, 2014, at 23:41, Brendan Dahl  wrote:
>>>>>> 
>>>>>> PDF.js plans to soon start including and using Adobe CMap files for
>> converting character codes to character id's(CIDs) and mapping character
>> codes to unicode values. This will fix a number of bugs in PDF.js and will
>> improve our support for Chinese, Korean, and Japanese(CJK) documents.
>>>>>> 
>>>>>> I wanted to inform dev-platform because there are quite a few files
>> and they are large. The files are loaded lazily as needed so they shouldn't
>> affect the size of Firefox when running, but they will affect the
>> installation size. There are 168 files with an average size of ~40KB, and
>> all of the files together are roughly:
>>>>>> 6.9M
>>>>>> 2.2M when gzipped
>>>>>> 
>>>>>> http://sourceforge.net/adobe/cmap/wiki/Home/
>>>>>> 


Re: Including Adobe CMaps

2014-02-26 Thread Andreas Gal

This randomly reminds me that it might be time to review zip as our compression 
format for omni.ja.

ls -l omni.ja 

7862939

ls -l omni.tar.xz (tar and then xz -z)

4814416

LZMA2 is available as a public domain implementation. It uses a bit more memory 
than zip, but it’s still within reason (the default level 6 is around 1MB to 
decode I believe). A fairly easy way to use it would be to add support for a 
custom compression format for our version of libjar.
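
For reference, a minimal sketch of what using liblzma's one-shot buffer API 
would look like (just the raw xz API; not actual libjar integration, and the 
sample data is made up). Build with something like: g++ sketch.cpp -llzma

#include <lzma.h>
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <vector>

int main() {
  const char* text = "some omni.ja file contents, repeated repeated repeated";
  std::vector<uint8_t> in(text, text + std::strlen(text));

  // One-shot compression at preset 6 (the default); higher presets trade
  // memory for ratio.
  std::vector<uint8_t> out(lzma_stream_buffer_bound(in.size()));
  size_t out_pos = 0;
  lzma_ret rv = lzma_easy_buffer_encode(6, LZMA_CHECK_CRC32, nullptr,
                                        in.data(), in.size(),
                                        out.data(), &out_pos, out.size());
  if (rv != LZMA_OK) return 1;
  std::printf("compressed %zu -> %zu bytes\n", in.size(), out_pos);

  // One-shot decompression; memlimit is where a real integration would cap
  // the decoder's memory use.
  std::vector<uint8_t> back(in.size());
  uint64_t memlimit = UINT64_MAX;  // no cap for this sketch
  size_t in_pos = 0, back_pos = 0;
  rv = lzma_stream_buffer_decode(&memlimit, 0, nullptr,
                                 out.data(), &in_pos, out_pos,
                                 back.data(), &back_pos, back.size());
  if (rv != LZMA_OK) return 1;
  std::printf("round-tripped %zu bytes\n", back_pos);
  return 0;
}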

Andreas

On Feb 26, 2014, at 8:38 PM, Brendan Dahl  wrote:

> Yury Delendik worked on reformatting the files a bit and was able to get them 
> down to 1.1MB binary which gzips to 990KB. This seems like a reasonable size 
> to me and involves a lot less work than setting up a process for distributing 
> these files via CDN.
> 
> Brendan
> 
> On Feb 24, 2014, at 10:14 PM, Rik Cabanier  wrote:
> 
>> 
>> 
>> 
>> On Mon, Feb 24, 2014 at 5:01 PM, Andreas Gal  wrote:
>> 
>> My assumption is that certain users only need certain CMaps because they 
>> tend to read only documents in certain languages. This seems like something 
>> we can really optimize and avoid ahead-of-time download cost for.
>> 
>> So, you'd only install the Korean CMaps if the language is Korean?
>> The problem with that is that if a user might install a English version of 
>> Firefox but still open Korean PDFs (which will then display as junk)
>> 
>> 
>> The fact that we don’t do this yet doesn’t seem like a good criteria. There 
>> is a lot of good things we aren’t doing yet. You can be the first to change 
>> that on this particular topic, if it technically makes sense.
>> 
>> Load-on-demand (with an option to download all of them) seems like a nice 
>> solution. A large majority of users will never need CMaps or only a very 
>> small subset.
>> 
>> On Feb 25, 2014, at 1:27 AM, Brendan Dahl  wrote:
>> 
>>> It’s certainly possible to load dynamically. Do we currently do this for 
>>> any other Firefox resources?
>>> 
>>> From what I’ve seen, many PDF’s use CMaps even if they don’t necessarily 
>>> have CJK characters, so it may just be better to include them. FWIW both 
>>> Popper and Mupdf embed the CMaps.
>>> 
>>> Brendan
>>> 
>>> On Feb 24, 2014, at 3:01 PM, Andreas Gal  wrote:
>>> 
>>>> Is this something we could load dynamically and offline cache?
>>>> 
>>>> Andreas
>>>> 
>>>> Sent from Mobile.
>>>> 
>>>>> On Feb 24, 2014, at 23:41, Brendan Dahl  wrote:
>>>>> 
>>>>> PDF.js plans to soon start including and using Adobe CMap files for 
>>>>> converting character codes to character id's(CIDs) and mapping character 
>>>>> codes to unicode values. This will fix a number of bugs in PDF.js and 
>>>>> will improve our support for Chinese, Korean, and Japanese(CJK) documents.
>>>>> 
>>>>> I wanted to inform dev-platform because there are quite a few files and 
>>>>> they are large. The files are loaded lazily as needed so they shouldn't 
>>>>> affect the size of Firefox when running, but they will affect the 
>>>>> installation size. There are 168 files with an average size of ~40KB, and 
>>>>> all of the files together are roughly:
>>>>> 6.9M
>>>>> 2.2M when gzipped
>>>>> 
>>>>> http://sourceforge.net/adobe/cmap/wiki/Home/
>>>>> 
>>>>> ___
>>>>> dev-platform mailing list
>>>>> dev-platform@lists.mozilla.org
>>>>> https://lists.mozilla.org/listinfo/dev-platform
>>> 
>> 
>> ___
>> dev-platform mailing list
>> dev-platform@lists.mozilla.org
>> https://lists.mozilla.org/listinfo/dev-platform
> 
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Including Adobe CMaps

2014-02-24 Thread Andreas Gal

My assumption is that certain users only need certain CMaps because they tend 
to read only documents in certain languages. This seems like something we can 
really optimize, avoiding the ahead-of-time download cost.

The fact that we don’t do this yet doesn’t seem like a good criterion. There are 
a lot of good things we aren’t doing yet. You can be the first to change that 
on this particular topic, if it technically makes sense.

Andreas

On Feb 25, 2014, at 1:27 AM, Brendan Dahl  wrote:

> It’s certainly possible to load dynamically. Do we currently do this for any 
> other Firefox resources? 
> 
> From what I’ve seen, many PDF’s use CMaps even if they don’t necessarily have 
> CJK characters, so it may just be better to include them. FWIW both Popper 
> and Mupdf embed the CMaps.
> 
> Brendan
> 
> On Feb 24, 2014, at 3:01 PM, Andreas Gal  wrote:
> 
>> Is this something we could load dynamically and offline cache?
>> 
>> Andreas
>> 
>> Sent from Mobile.
>> 
>>> On Feb 24, 2014, at 23:41, Brendan Dahl  wrote:
>>> 
>>> PDF.js plans to soon start including and using Adobe CMap files for 
>>> converting character codes to character id's(CIDs) and mapping character 
>>> codes to unicode values. This will fix a number of bugs in PDF.js and will 
>>> improve our support for Chinese, Korean, and Japanese(CJK) documents.
>>> 
>>> I wanted to inform dev-platform because there are quite a few files and 
>>> they are large. The files are loaded lazily as needed so they shouldn't 
>>> affect the size of Firefox when running, but they will affect the 
>>> installation size. There are 168 files with an average size of ~40KB, and 
>>> all of the files together are roughly:
>>> 6.9M
>>> 2.2M when gzipped
>>> 
>>> http://sourceforge.net/adobe/cmap/wiki/Home/
>>> 
>>> ___
>>> dev-platform mailing list
>>> dev-platform@lists.mozilla.org
>>> https://lists.mozilla.org/listinfo/dev-platform
> 

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Including Adobe CMaps

2014-02-24 Thread Andreas Gal
Is this something we could load dynamically and offline cache?

Andreas

Sent from Mobile.

> On Feb 24, 2014, at 23:41, Brendan Dahl  wrote:
>
> PDF.js plans to soon start including and using Adobe CMap files for 
> converting character codes to character id's(CIDs) and mapping character 
> codes to unicode values. This will fix a number of bugs in PDF.js and will 
> improve our support for Chinese, Korean, and Japanese(CJK) documents.
>
> I wanted to inform dev-platform because there are quite a few files and they 
> are large. The files are loaded lazily as needed so they shouldn't affect the 
> size of Firefox when running, but they will affect the installation size. 
> There are 168 files with an average size of ~40KB, and all of the files 
> together are roughly:
> 6.9M
> 2.2M when gzipped
>
> http://sourceforge.net/adobe/cmap/wiki/Home/
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: We live in a memory-constrained world

2014-02-22 Thread Andreas Gal

On Feb 22, 2014, at 1:12 PM, David Rajchenbach-Teller  
wrote:

> So, I'm wondering how much effort we should put in reducing the number
> of ChromeWorkers. On Desktop, we have a PageThumbsWorker and a
> SessionWorker, which are both useful but not strictly necessary. I seem
> to remember that they each take 1Mb+ of RAM.
> 
> On all platforms, we have the OS.File ChromeWorker, which also takes
> 1Mb+ RAM. We could completely get rid of it, but that might be 1+ year
> of work. Note that B2G deactivates this ChromeWorker in children
> processes after a few seconds.

Even on FFOS we can easily afford 1 MB for a temporary task. Firing up a 
ChromeWorker to do some IO is not a problem. Keeping it around when idle is 
probably a bad idea on FFOS and desktop.

We should continue to use JS in chrome code where it makes sense. It's often 
easier and faster to write some functionality in JS (and sometimes also safer), 
and it tends to be more compact when we ship it. In addition to purging caches 
and stopping idle workers, the JS team is also working on making workers (and 
JS) more memory efficient in general. Naveed might want to chime in here.

Happy memory saving.

Andreas

> 
> Cheers,
> David
> 
> On 2/21/14 10:38 PM, Nicholas Nethercote wrote:
>> Greetings,
>> 
>> We now live in a memory-constrained world. By "we", I mean anyone
>> working on Mozilla platform code. When desktop Firefox was our only
>> product, this wasn't especially true -- bad leaks and the like were a
>> problem, sure, but ordinary usage wasn't much of an issue. But now
>> with Firefox on Android and particularly Firefox OS, it is most
>> definitely true.
>> 
>> In particular, work is currently underway to get Firefox OS working on
>> devices that only have 128 MiB of RAM. The codename for these devices
>> is Tarako (https://wiki.mozilla.org/FirefoxOS/Tarako). In case it's
>> not obvious, the memory situation on these devices is *tight*.
>> 
>> Optimizations that wouldn't have been worthwhile in the desktop-only
>> days are now worthwhile. For example, an optimization that saves 100
>> KiB of memory per process is pretty worthwhile for Firefox OS. On a
>> phone, memory consumption can be the difference between something
>> working and something not working in a much more clear-cut way than is
>> typical on desktop.
>> 
>> Conversely, landing new features that use extra memory need to be
>> considered carefully. If you're planning a new feature and it will
>> "only" use a few MiB on Firefox OS, think again. We probably cannot
>> afford it.
>> 
>> https://bugzilla.mozilla.org/show_bug.cgi?id=975367 is the bug that
>> motivated me to write this email, but I'm also reminded of Firefox 4,
>> which was a release with very high memory consumption partly because
>> several groups independently made decisions that using extra memory
>> was ok for a particular sub-system and the combined effect was bad.
>> 
>> We live in a memory-constrained world.
>> 
>> Nick
>> ___
>> dev-platform mailing list
>> dev-platform@lists.mozilla.org
>> https://lists.mozilla.org/listinfo/dev-platform
>> 
> 
> 
> -- 
> David Rajchenbach-Teller, PhD
> Performance Team, Mozilla
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: We live in a memory-constrained world

2014-02-21 Thread Andreas Gal

Caches are fine in the child process as long as they are reasonably sized and 
purge themselves in response to a low-memory notification.

On FFOS the parent process never dies, so leaks or long-lived memory allocation 
there tends to hurt. Responding to low-memory notifications is really important 
here as well.
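
For reference, the pattern I mean is just the standard observer-service hookup; 
a minimal sketch (MyCache and Purge() are placeholders):

// Sketch: a cache that drops its contents on the "memory-pressure" notification.
#include <cstring>
#include "nsCOMPtr.h"
#include "nsIObserver.h"
#include "nsIObserverService.h"
#include "mozilla/Services.h"

class MyCache final : public nsIObserver {
public:
  NS_DECL_ISUPPORTS
  NS_DECL_NSIOBSERVER

  void Init() {
    nsCOMPtr<nsIObserverService> obs = mozilla::services::GetObserverService();
    if (obs) {
      obs->AddObserver(this, "memory-pressure", false);
    }
  }

private:
  ~MyCache() {}
  void Purge() { /* drop cached entries here */ }
};

NS_IMPL_ISUPPORTS(MyCache, nsIObserver)

NS_IMETHODIMP
MyCache::Observe(nsISupports*, const char* aTopic, const char16_t*)
{
  if (!strcmp(aTopic, "memory-pressure")) {
    Purge();
  }
  return NS_OK;
}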

Andreas

On Feb 21, 2014, at 11:40 PM, Brian Smith  wrote:

> On Fri, Feb 21, 2014 at 1:38 PM, Nicholas Nethercote
>  wrote:
>> Optimizations that wouldn't have been worthwhile in the desktop-only
>> days are now worthwhile. For example, an optimization that saves 100
>> KiB of memory per process is pretty worthwhile for Firefox OS.
> 
> Do you mean 100KiB per child process? How worthwhile is it to cut
> 100KiB from the parent process? I ask because we have various caches
> that we use for TLS security (TLS session cache, HSTS, OCSP response
> cache), and I'd like to know how much memory we can can budge for
> these features, given that they all affect only a single process (the
> parent). In some cases, we can throw the information away and
> re-retrieve and/or recompute it, but only at a cost of reduced
> security and/or significantly reduced performance. In some cases, we
> could try to leave the data on disk and avoid loading it into memory,
> but then we run into main-thread-I/O and/or socket-thread-disk-I/O,
> both of which are important to avoid for responsiveness/perf reasons.
> 
> Also, I am concerned that, AFAICT, no TLS-related testing is being
> done for AWSY. Thus, a lot of my work isn't being measured there. Who
> is the best person to talk to about getting TLS functionality added to
> AWSY?
> 
> Cheers,
> Brian
> -- 
> Mozilla Networking/Crypto/Security (Necko/NSS/PSM)
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Possible to make all (CSS) transitions 0/1ms to speed up FxOS Gaia tests?

2014-02-16 Thread Andreas Gal

We could easily add a time multiplier pref and you could set that during your 
test. This is probably cheap enough and useful enough to do in production 
builds.
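
Roughly what I mean, as a sketch (the pref name and the exact hook point in the 
transition code are made up; Preferences::GetInt is the real API):

// Sketch: scale computed transition/animation durations by a test-only pref.
#include "mozilla/Preferences.h"
#include "mozilla/TimeStamp.h"

static mozilla::TimeDuration
ApplyTestTimeMultiplier(const mozilla::TimeDuration& aDuration)
{
  // Hypothetical pref; 100 means "unchanged", 1 makes transitions near-instant.
  int32_t percent =
    mozilla::Preferences::GetInt("ui.transition.time_multiplier_percent", 100);
  if (percent == 100) {
    return aDuration;
  }
  return mozilla::TimeDuration::FromMilliseconds(
      aDuration.ToMilliseconds() * (percent / 100.0));
}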

Andreas

On Feb 16, 2014, at 7:24 PM, Andrew Sutherland  
wrote:

> In Gaia, the system and many of the apps use transitions/animations with a 
> non-trivial duration, particularly for card metaphor stuff where logic may be 
> blocked on the completion of the animation. (Values seem to vary between 0.3s 
> and 0.5s... no one tell UX!)
> 
> Our Gaia tests currently take an absurdly long amount of time for what is 
> being tested.  There are varying reasons for this (gratuitous teardown of the 
> b2g-desktop process, timeouts that time out but don't fail, etc.).  I believe 
> one thing we could do to speed things up would be to make all transitions 
> super short.  We still want transitionend to fire for logic reasons, but 
> otherwise the unit test infrastructure probably does not really care to 
> actually watch the transition happen.
> 
> Is it possible / advisable to have Gecko support some preference or other 
> magical mechanism to cause transitions and non-infinite animations to 
> effectively complete in some extremely small time interval / on the next turn 
> of the event loop, at least for durations originally less than 1second/other 
> short time?  I briefly looked for such an existing mechanism, but was unable 
> to find one.
> 
> The alternative would be to use a build-time mechanism in Gaia to transform 
> all CSS to effect this change in that fashion.  Gaia already has build steps 
> like this so introducing it would not be particularly difficult, but it's 
> always nice if what the tests run is minimally different from what the 
> devices run.
> 
> (Additionally, an in-Gecko mechanism could produce slightly more correct 
> results since it could realistically emulate the ordering of when the 
> transitionend events would fire before disregarding those numbers and firing 
> them all in succession.  Although an automated mechanism I suppose could just 
> map observed values to sequentially ordered small timeouts.)
> 
> Andrew
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Cairo being considered as the basis of a standard C++ drawing API

2014-02-09 Thread Andreas Gal

It seems to me that we have arrived at the conclusion that a good drawing API 
should be mostly stateless (like Moz2D) rather than stateful like Cairo's. As a 
result, we are currently removing all uses of the Cairo API and we will 
eventually remove Cairo from our codebase altogether (in favor of D2D via Moz2D 
on Windows and Skia via Moz2D elsewhere).
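
A toy illustration of the difference, filling the same rectangle both ways 
(assumes an existing cairo_t and Moz2D DrawTarget; not code from our tree):

#include "cairo.h"
#include "mozilla/gfx/2D.h"

void FillRedRect(cairo_t* aCr, mozilla::gfx::DrawTarget* aDT)
{
  // cairo: the source color and the current path are hidden context state.
  cairo_set_source_rgba(aCr, 1.0, 0.0, 0.0, 1.0);
  cairo_rectangle(aCr, 10, 10, 100, 50);
  cairo_fill(aCr);

  // Moz2D: every input is an explicit argument to the draw call.
  using namespace mozilla::gfx;
  aDT->FillRect(Rect(10, 10, 100, 50),
                ColorPattern(Color(1.0f, 0.0f, 0.0f, 1.0f)),
                DrawOptions(1.0f));
}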

The C++ standards committee is free to standardize whatever it likes, but if 
they ask for our opinion we probably want to point them at Moz2D instead of 
Cairo.

That said, for maximum performance there is a fair amount of detailed knowledge 
callers need to have about the innards of Moz2D backends. You sometimes want to 
explicitly wrap a draw target around a texture and then use that texture in the 
compositor, etc. So I am not sure you can have a nice, non-leaky abstraction for 
2D graphics in the language.

My best guess is that whatever they standardize will end up not being useful 
for high-performance applications. In that case, I wonder why bother. A library 
really might do just fine here.

Just my 2c. roc or Bas might have a more detailed opinion here.

Andreas

On Feb 9, 2014, at 11:29 AM, Botond Ballo  wrote:

> The C++ Standards Committee is aiming to standardize a 2D 
> drawing API in the post-C++14 timeframe. A study group 
> (SG 13 - Graphics [1]) has been created to investigate 
> possible approaches.
> 
> SG 13 is considering using Cairo as the basis for a 
> lightweight C++ drawing API [2] [3]. The idea would be to
> automatically wrap Cairo's C API into a C++ API without
> changing the semantics of the operations (see the last
> two pages of [2] for details).
> 
> This sounds like the sort of thing we might be interested
> in / have an opinion on.
> 
> Any thoughts?
> 
> SG 13 will be meeting in Issaquah next week as part of the
> larger C++ Standards Committee meeting, which I will be
> attending. If anyone has thoughts on this proposal, I am
> happy to convey them at the study group's meeting.
> 
> Thanks,
> Botond
> 
> 
> [1] http://isocpp.org/std/the-committee
> [2] http://isocpp.org/files/papers/N3825.pdf
> [3] http://lists.cairographics.org/archives/cairo/2013-December/024858.html
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: HWA and OMTC on Linux

2013-11-07 Thread Andreas Gal

On Nov 7, 2013, at 3:06 PM, "L. David Baron"  wrote:

> On Thursday 2013-11-07 13:24 -0800, Andreas Gal wrote:
>> On Nov 7, 2013, at 1:19 PM, Karl Tomlinson  wrote:
>>> Will any MoCo developers be permitted to spend some time fixing
>>> these or the already-known issues?
>> 
>> Its not a priority to fix Linux/X11. We will happily take contributed 
>> patches, and people are welcome to fix issues they see, as long its not at 
>> the expense of the things that matter.
> 
> I think having Linux/X11 be working and in good shape is important
> for attracting contributors to the Mozilla project, particularly
> those who write code.  (Though I haven't seen recent data on OS use
> of Mozilla contributors who aren't paid to work on Mozilla.  I'd be
> very surprised if it wasn't a much higher proportion of developers
> than users, though.)

I don't think anyone disagrees with you here, except if you are saying that 
somehow keeping the non-OMTC Linux code is critical to attract contributors to 
Mozilla. I don't think that's the case, and I don't think you are trying to say 
that. That's what the post was all about: we want to get rid of the old non-OMTC 
code because it's blocking making OMTC better everywhere, including Linux.

Andreas

> 
> -David
> 
> -- 
> 𝄞   L. David Baron http://dbaron.org/   𝄂
> 𝄢   Mozilla  https://www.mozilla.org/   𝄂
> Before I built a wall I'd ask to know
> What I was walling in or walling out,
> And to whom I was like to give offense.
>   - Robert Frost, Mending Wall (1914)

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: HWA and OMTC on Linux

2013-11-07 Thread Andreas Gal

On Nov 7, 2013, at 1:48 PM, Karl Tomlinson  wrote:

> Andreas Gal writes:
> 
>> Its not a priority to fix Linux/X11. We will happily take
>> contributed patches, and people are welcome to fix issues they
>> see, as long its not at the expense of the things that matter.
> 
> Do bugs in B2G Desktop on Linux/X11 matter?
> 
> I assume glitches and perf issues that are not on the device don't
> really matter.  How about crashes and security bugs?

Nobody said Linux/X11 doesn't matter. The proposal was to focus on OMTC on all 
platforms, including Linux/X11.

Andreas

> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: HWA and OMTC on Linux

2013-11-07 Thread Andreas Gal

On Nov 7, 2013, at 1:19 PM, Karl Tomlinson  wrote:

> Nicholas Cameron writes:
> 
>> Currently on Linux our only 'supported' graphics backend is the
>> main-thread software backend (basic layers).
> 
> FWIW basic layers is predominantly GPU-based compositing
> (not-softwared) on most X11 systems.
> 
>> 5) We would love to spend time making OMTC OpenGL on Linux work
>> perfectly, but it is not a priority for the graphics team due to
>> the low number of users.
> 
> I would have assumed the number of Linux/X11 users is greater
> than the number of Linux/Android users.

Please think about the potential user base, not just the current user base. The 
potential desktop Linux user base is tiny and mostly stagnant. The potential 
mobile Linux (Android, FFOS) user base is massive and growing explosively.

> 
>> After removing MT OGL, our proposal is that Linux users on nightly
>> or who manually set MOZ_USE_OMTC will get OMTC OpenGL. That means
>> they will get the same(-ish) performance, but a possibly worse
>> experience. Other Linux users will get main thread basic layers,
>> whether they force on HWA or not. Their performance will be
>> degraded (sometimes), but the rest of the experience should not
>> (this is the current default configuration for Linux).
> 
> The most significant performance degrading I expect here is for
> users that want to use WebGL maps or games.  Flicking the pref
> for OpenGL layers is the current workaround a get reasonable
> performance there.
> 
> The new workaround will also require setting an environment variable
> off nightly, but that's not so much more awkward than about:config. 
> 
>> So, does anyone have any objections to this plan? Or ways we could
>> do this better? If you use HWA please be aware of these changes
>> and report any bugs you find. And a heads up that we might be
>> seeing a few bugs filed around this by users when it happens and
>> at subsequent uplifts.
> 
> Will any MoCo developers be permitted to spend some time fixing
> these or the already-known issues?

It's not a priority to fix Linux/X11. We will happily take contributed patches, 
and people are welcome to fix issues they see, as long as it's not at the 
expense of the things that matter.

> 
> If MoCo is prepared to use some of the developer time saved
> through removing MT OGL on fixing these issues, then that
> hopefully will be a net positive move.
> 
> How about first moving Nightly OGL users to OMTC, and then remove
> MT OGL after the next uplift?

The old OGL code is a tremendous maintenance nightmare and is actively hurting 
our ability to make progress on the OMTC code. We would like to delete it asap.

Andreas

> 
> Then we have some time to discover whether there are any
> show-stopper OMTC issues, and we reduce the time frame for other
> branches between paying the price and reaping the returns.
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Measuring power usage

2013-11-05 Thread Andreas Gal
If you can access the remaining battery status of a large enough
population over time, it should be easy to use telemetry to measure
this pre- and post-patch.
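
Something along these lines, as a sketch (the histogram ID is hypothetical and
would need an entry in Histograms.json; the hal and Telemetry calls are the
existing APIs):

// Sketch: sample the remaining battery level into telemetry.
#include "mozilla/Hal.h"
#include "mozilla/Telemetry.h"

static void ReportBatteryLevel()
{
  mozilla::hal::BatteryInformation info;
  mozilla::hal::GetCurrentBatteryInformation(&info);
  // info.level() is 0.0 .. 1.0; BATTERY_LEVEL_PERCENT is a made-up histogram.
  uint32_t percent = static_cast<uint32_t>(info.level() * 100);
  mozilla::Telemetry::Accumulate(mozilla::Telemetry::BATTERY_LEVEL_PERCENT,
                                 percent);
}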

Andreas

Sent from Mobile.

> On Nov 5, 2013, at 16:46, David Rajchenbach-Teller  
> wrote:
>
> Context: I am currently working on patches designed to improve
> performance of some subsystems in Firefox Desktop by decreasing disk
> I/O, but I hope that they will also have an effect (hopefully
> beneficial) on power/battery usage. I'd like to confirm/infirm that
> hypothesis.
>
> Measuring and collecting performance improvement is relatively easy,
> thanks to Telemetry. Measuring power usage, though? That looks harder.
>
> So, here are my questions:
> - do we already have a good way to measure power usage by some thread
> between two points in time?
> - if not, would there be interest in developing a library for this
> purpose ? Note that I don't even know if that's possible in userland.
> - do we already have a good way to measure total power usage by a
> xpcshell test, perhaps by interfacing with powertop or Intel Power Gadget?
>
> Cheers,
> David
>
> --
> David Rajchenbach-Teller, PhD
> Performance Team, Mozilla
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Killing the Moz Audio Data API

2013-10-17 Thread Andreas Gal

Looks like the comms app has some residual use of the old audio API:

apps/communications/dialer/js/keypad.js:this._audio.mozSetup(1, 
this._sampleRate);
apps/system/emergency-call/js/keypad.js:   this._audio.mozSetup(2, 
this._sampleRate);

Should be easy to replace. I will file a bug and make sure we do this for 1.3.

Andreas

On Oct 17, 2013, at 3:02 AM, Benoit Jacob  wrote:

> The other day, while testing some B2G v1.2 stuff, I noticed the Moz Audio
> Data deprecation warning flying in adb logcat. So you probably need to
> check with B2G/Gaia people about the timing to kill this API.
> 
> Benoit
> 
> 
> 2013/10/16 Ehsan Akhgari 
> 
>> I'd like to write a patch to kill Moz Audio Data in Firefox 28 in favor of
>> Web Audio.  We added a deprecation warning for this API in Firefox 23 (bug
>> 855570).  I'm not sure what our usual process for this kind of thing is,
>> should we just take the patch, and evangelize on the broken websites enough
>> times so that we're able to remove the feature in a stable build?
>> 
>> Thanks!
>> --
>> Ehsan
>> 
>> ___
>> dev-platform mailing list
>> dev-platform@lists.mozilla.org
>> https://lists.mozilla.org/listinfo/dev-platform
>> 
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: unified shader for layer rendering

2013-10-16 Thread Andreas Gal

The experiment here is quite a bit different from what the current patch is 
proposing (6 shader programs, only drive swizzle and alpha/no-alpha via 
uniforms). Benoit is redoing the measurements for that scenario. More data 
coming shortly.

Andreas

On Oct 16, 2013, at 7:00 AM, Benoit Jacob  wrote:

> 2013/10/10 Benoit Jacob 
> 
>> this is the kind of work that would require very careful performance
>> measurements
>> 
> 
> Here is a benchmark:
> http://people.mozilla.org/~bjacob/webglbranchingbenchmark/webglbranchingbenchmark.html
> 
> Some results:
> http://people.mozilla.org/~bjacob/webglbranchingbenchmark/webglbranchingbenchmarkresults.txt
> 
> Benoit
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


unified shader for layer rendering

2013-10-10 Thread Andreas Gal
Hi,

we currently have a zoo of shaders to render layers:

  RGBALayerProgramType,
  BGRALayerProgramType,
  RGBXLayerProgramType,
  BGRXLayerProgramType,
  RGBARectLayerProgramType,
  RGBXRectLayerProgramType,
  BGRARectLayerProgramType,
  RGBAExternalLayerProgramType,
  ColorLayerProgramType,
  YCbCrLayerProgramType,
  ComponentAlphaPass1ProgramType,
  ComponentAlphaPass1RGBProgramType,
  ComponentAlphaPass2ProgramType,
  ComponentAlphaPass2RGBProgramType,

(I have just eliminated the Copy2D variants, so omitted here.)

Next, I would like to replace everything but the YCbCr and ComponentAlpha 
shaders with one unified shader (attached below).

Rationale:

Most of our shader programs only differ minimally in cycle count, and we are 
generally memory bound, not GPU cycle bound (even on mobile). In addition, GPUs 
are actually very efficient at branching, as long as the branch is uniform and 
doesn't change direction per pixel or vertex (the driver essentially compiles 
variants and runs the appropriate one). Last but not least, switching shaders 
tends to be expensive.

Proposed approach:

We use a single shader to replace the current 8 layer shaders. I verified with 
the Mali shader compiler that the shortest path (color layer) stays pretty close 
to the old color shader (now 3 cycles, due to the opacity multiplication, versus 
1 before). For a lot of scenes we will be able to render without ever switching 
shaders, so that should more than make up for the extra cycles, especially since 
we are memory bound anyway.

More uniforms have to be set per shader invocation, but that should be pretty 
cheap.

I completely dropped the distinction between 2D and 3D masks. 3D masks should be 
able to handle the 2D case, and the cycle savings are minimal and, as mentioned 
before, irrelevant.

An important advantage is that with this approach we can now easily add 
additional layer effects to the pipeline without exponentially exploding the 
number of programs 
(RGBXRectLayerProgramWithGrayscaleAndWithoutOpacityButMaskAndNotMask3D…).

Also, last but not least, this reduces code complexity quite a bit.

Feedback welcome.

Thanks,

Andreas

---

// Base color (will be rendered if layer texture is not read).
uniform vec4 uColor;

// Layer texture (disabled for color layers).
uniform bool uTextureEnabled;
varying vec2 vTexCoord; // interpolated texture coordinate from the vertex shader
uniform vec2 uTexCoordMultiplier;
uniform bool uTextureBGRA; // Default is RGBA.
uniform bool uTextureNoAlpha;
uniform float uTextureOpacity;
uniform sampler2D uTexture;
uniform bool uTextureUseExternalOES;
uniform samplerExternalOES uTextureExternalOES;
#ifndef GL_ES
uniform bool uTextureUseRect;
uniform sampler2DRect uTextureRect;
#endif

// Masking (optional)
uniform bool uMaskEnabled;
varying vec3 vMaskCoord;
uniform sampler2D uMaskTexture;

void main()
{
  vec4 color = uColor;
  if (uTextureEnabled) {
vec2 texCoord = vTexCoord * uTexCoordMultiplier;
if (uTextureUseExternalOES) {
  color = texture2D(uTextureExternalOES, texCoord);
#ifndef GL_ES
} else if (uTextureUseRect) {
  color = texture2DRect(uTexture, texCoord);
#endif
} else {
  color = texture2D(uTexture, texCoord);
}
if (uTextureBGRA) {
  color = color.bgra;
}
if (uTextureNoAlpha) {
  color = vec4(color.rgb, 1.0);
}
color *= uTextureOpacity;
  }
  if (uMaskEnabled) {
color *= texture2D(uMaskTexture, vMaskCoord.xy / vMaskCoord.z).r;
  }
  gl_FragColor = color;
}
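
For illustration, setting these uniforms from the compositor would look roughly 
like this per layer (locations would of course be cached at link time rather 
than looked up every draw; values are illustrative):

// Sketch: per-layer uniform setup for the unified program.
#include <GLES2/gl2.h>

static void SetupBGRATextureLayer(GLuint aProg, float aOpacity)
{
  glUseProgram(aProg);
  glUniform1i(glGetUniformLocation(aProg, "uTextureEnabled"), 1);
  glUniform1i(glGetUniformLocation(aProg, "uTextureBGRA"), 1);
  glUniform1i(glGetUniformLocation(aProg, "uTextureNoAlpha"), 0);
  glUniform1i(glGetUniformLocation(aProg, "uTextureUseExternalOES"), 0);
  glUniform1f(glGetUniformLocation(aProg, "uTextureOpacity"), aOpacity);
  glUniform2f(glGetUniformLocation(aProg, "uTexCoordMultiplier"), 1.0f, 1.0f);
  glUniform1i(glGetUniformLocation(aProg, "uTexture"), 0);  // texture unit 0
  glUniform1i(glGetUniformLocation(aProg, "uMaskEnabled"), 0);
  // bind the layer texture to unit 0 and draw the quad here
}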

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Implementing Pepper since Google is dropping NPAPI for good

2013-09-23 Thread Andreas Gal

Pepper is not an API; it's basically a huge set of Chromium guts exposed for you 
to link against. The only documentation is the source, and that source keeps 
constantly changing. I don't think it's viable for anyone to implement Pepper 
without also pulling in most or all of Chromium. Pepper is Chrome, and Chrome 
is Pepper. This is the reason we won't-fixed bug 729481, and nothing has 
changed since then. I don't think we should spend energy on getting onto 
Google's Pepper treadmill. We should instead continue to accelerate the decline 
of plugins by offering powerful new HTML5 capabilities that obsolete plugins.

Andreas

On Sep 23, 2013, at 1:29 PM, Hubert Figuière  wrote:

> Hi all,
> 
> Today Google said they'd drop NPAPI for good.
> 
> http://news.cnet.com/8301-1023_3-57604242-93/google-begins-barring-browser-plug-ins-from-chrome/
> 
> Bug 729481 was WONTFIXED a while ago. tl;dr : implement Pepper plugin API
> 
> I think it might be worth the revisit that decision before it is too late.
> 
> 
> Hub
> 
> PS: I truly believe that we should drop plugin support all together, but
> that's not what I'm discussing here.
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: No more Makefile.in boilerplate

2013-09-04 Thread Andreas Gal


Can you delete this boilerplate from existing makefiles if not already 
done? That will prevent people from adding it back, since people look at 
existing files for examples when adding new makefiles.


Andreas

Mike Hommey wrote:

Hi,

Assuming it sticks, bug 912293 made it unnecessary to start Makefile.in
files with the usual boilerplate:

   DEPTH = @DEPTH@
   topsrcdir = @top_srcdir@
   srcdir = @srcdir@
   VPATH = @srcdir@
   relativesrcdir = @relativesrcdir@

   include $(DEPTH)/config/autoconf.mk

All of the above can now be skipped. Directories that do require a
different value for e.g. VPATH or relativesrcdir can still place a value
that will be taken instead of the default. It is not recommended to do
that in new Makefile.in files, or to change existing files to do that,
but the existing files that did require such different values still do
use those different values.

Also, if the last line of a Makefile.in is:

   include $(topsrcdir)/config/rules.mk

That can be skipped as well.

Mike
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: partial GL buffer swap

2013-08-31 Thread Andreas Gal
Experiments and calculations show that the previous SoC we had for Flatfish
(tablet) could only fill about 3x the frame buffer size at 60 fps. Without
culling occluded layers the homescreen would only pan at 25 fps or so. So
yes, this is very much motivated by concrete hardware problems. Tablets
tend to have underpowered GPUs in comparison to their screen resolution. We
will need all three: we have to cull occluded layers, we should use partial
buffer swaps, and we should use 2D composition hardware, since in certain
cases it can bypass writing to the frame buffer, saving that bandwidth.

Also, I am a bit skeptical of your math. You are using the maximum fill
rate of the Adreno. In practice the fill rate might be further limited by the
memory the device uses. We have seen substantial differences in memory
bandwidth between devices. We can ask our hardware friends for an explanation
of why this is the case.

I am also working on a set of CSS benchmarks to measure fill rate (CSS
allows us to measure both the 2D compositor and the GPU). That should shed
some light on this variance between devices.

Andreas

Sent from Mobile.

On Aug 31, 2013, at 17:40, Benoit Jacob  wrote:




2013/8/31 Andreas Gal 

>
> Soon we will be using GL (and its Windows equivalent) on most platforms to
> implement a hardware accelerated compositor. We draw into a back buffer and
> with up to 60hz we perform a buffer swap to display the back buffer and
> make the front buffer the new back buffer (double buffering). As a result,
> we have to recomposite the entire window with up to 60hz, even if we are
> only animating a single pixel.
>

Do you have a particular device in mind?

Knowing whether we are fill-rate bound on any device that we care about is
an important prerequisite before we can decide whether this kind of
optimization is worth the added complexity.

As an example maybe showing why it is not out of hand obvious that we'd be
fill-rate bound anywhere: the ZTE Open phone has a MSM7225A chipset with
the "enhanced" variant of the Adreno 200 GPU, which has a fill-rate of 432M
pixels per second (Source: http://en.wikipedia.org/wiki/Adreno). While that
metric is hard to give a precise meaning, it should be enough for an
order-of-magnitude computation. This device has a 320x480 screen
resolution, so we compute:

(320*480*60)/432e+6 = 0.02

So unless that computation is wrong, on the ZTE Open, refreshing the entire
screen 60 times per second consumes about 2% of the possible fill-rate.

On the original (not "enhanced") version of the Adreno 200, that figure
would be 7%.

By all means, it would be interesting to have numbers from an actual
experiment as opposed to the above naive, abstract computation. For that
experiment, a simple WebGL page with scissor/clearColor/clear calls would
suffice (scissor and clearColor calls preventing any short-circuiting).

Benoit



>
> On desktop, this is merely bad for battery life. On mobile, this can
> genuinely hit hardware limits and we won't hit 60 fps because we waste a
> lot of time recompositing pixels that don't change, sucking up memory
> bandwidth.
>
> Most platforms support some way to only update a partial rect of the frame
> buffer (AGL_SWAP_RECT on Mac, eglPostSubBufferNVfor Linux, setUpdateRect
> for Gonk/JB).
>
> I would like to add a protocol to layers to indicate that the layer has
> changed since the last composition (or not). I propose the following API:
>
> void ClearDamage(); // called by the compositor after the buffer swap
> void NotifyDamage(Rect); // called for every update to the layer, in
> window coordinate space (is that a good choice?)
>
> I am using Damage here to avoid overloading Invalidate. Bike shedding
> welcome. I would put these directly on Layer. When a color layer changes,
> we damage the whole layer. Thebes layers receive damage as the underlying
> buffer is updated.
>
> The compositor accumulates damage rects during composition and then does a
> buffer swap of that rect only, if supported by the driver.
>
> Damage rects could also be used to shrink the scissor rect when drawing
> the layer. I am not sure yet whether its easily doable to take advantage of
> this, but we can try as a follow-up patch.
>
> Feedback very welcome.
>
> Thanks,
>
> Andreas
>
> PS: Does anyone know how this works on Windows?
> __**_
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/**listinfo/dev-platform<https://lists.mozilla.org/listinfo/dev-platform>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


partial GL buffer swap

2013-08-31 Thread Andreas Gal


Soon we will be using GL (and its Windows equivalent) on most platforms 
to implement a hardware-accelerated compositor. We draw into a back 
buffer, and at up to 60 Hz we perform a buffer swap to display the back 
buffer and make the front buffer the new back buffer (double buffering). 
As a result, we have to recomposite the entire window at up to 60 Hz, 
even if we are only animating a single pixel.


On desktop, this is merely bad for battery life. On mobile, this can 
genuinely hit hardware limits and we won't hit 60 fps because we waste a 
lot of time recompositing pixels that don't change, sucking up memory 
bandwidth.


Most platforms support some way to only update a partial rect of the 
frame buffer (AGL_SWAP_RECT on Mac, eglPostSubBufferNV for Linux, 
setUpdateRect for Gonk/JB).


I would like to add a protocol to layers to indicate that the layer has 
changed since the last composition (or not). I propose the following API:


void ClearDamage(); // called by the compositor after the buffer swap
void NotifyDamage(Rect); // called for every update to the layer, in 
window coordinate space (is that a good choice?)


I am using Damage here to avoid overloading Invalidate. Bike shedding 
welcome. I would put these directly on Layer. When a color layer 
changes, we damage the whole layer. Thebes layers receive damage as the 
underlying buffer is updated.
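
The Layer-side bookkeeping would be a one-liner each; a sketch (written as a 
standalone helper here rather than on Layer itself):

// Sketch: damage tracking. mDamage is the union of all damage reported since
// the last composition, in window coordinate space.
#include "nsRect.h"

class LayerDamageTracker {
public:
  void NotifyDamage(const nsIntRect& aRect) { mDamage.UnionRect(mDamage, aRect); }
  void ClearDamage() { mDamage.SetEmpty(); }
  const nsIntRect& GetDamage() const { return mDamage; }
private:
  nsIntRect mDamage;
};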


The compositor accumulates damage rects during composition and then does 
a buffer swap of that rect only, if supported by the driver.
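
On the compositor end this would boil down to something like the following 
sketch (EGL_NV_post_sub_buffer path; the FrameState struct is illustrative, not 
real layers code):

// Sketch: push only the accumulated damage rect to the screen if supported.
#include <EGL/egl.h>
#include "nsRect.h"

typedef EGLBoolean (*PostSubBufferNVFn)(EGLDisplay, EGLSurface,
                                        EGLint, EGLint, EGLint, EGLint);

struct FrameState {
  EGLDisplay display;
  EGLSurface surface;
  nsIntRect damage;   // union of all NotifyDamage(Rect) calls this frame
};

static void SwapDamagedRegion(FrameState& aFrame)
{
  PostSubBufferNVFn postSubBuffer =
    (PostSubBufferNVFn)eglGetProcAddress("eglPostSubBufferNV");

  if (postSubBuffer && !aFrame.damage.IsEmpty()) {
    postSubBuffer(aFrame.display, aFrame.surface,
                  aFrame.damage.x, aFrame.damage.y,
                  aFrame.damage.width, aFrame.damage.height);
  } else {
    eglSwapBuffers(aFrame.display, aFrame.surface);  // full swap fallback
  }
  aFrame.damage.SetEmpty();  // this is where ClearDamage() would go out to layers
}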


Damage rects could also be used to shrink the scissor rect when drawing 
the layer. I am not sure yet whether it's easily doable to take advantage 
of this, but we can try as a follow-up patch.


Feedback very welcome.

Thanks,

Andreas

PS: Does anyone know how this works on Windows?
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Rethinking build defaults

2013-08-16 Thread Andreas Gal


First of all, thanks for raising this. It's definitely a problem that 
needs fixing.


I am not convinced by your approach, though. A few months from now, asking 
to disable WebRTC will be like asking for the DOM or JS or CSS to be disabled 
in local developer builds. It will be a natural part of the core Web 
platform.


I would like to propose the opposite approach:

- Remove all conditional feature configuration from configure. WebRTC et 
al are always on. Features should be disabled dynamically (prefs), if at 
all.

- Reduce configure settings to choice of OS and release or developer.
- Require triple super-reviews (hand signed, in blood) for any changes 
to configure.
- Make parts of the code base more modular and avoid super-include files 
that cross modules (hello LayoutUtils.h).


Rationale:

It's not slow for you to build WebRTC. It's slow for you to have it build 
over and over. Almost every time I pull from mozilla-central, someone 
touched configure and I have to rebuild from scratch, which is 
infuriating (argh!). Minimizing changes to configure and banning static 
defines for feature management would solve that. If we make sure major 
subsystems like WebRTC can stand on their own, you will take a hit 
building it one time, and then occasionally as the team lands changes. 
It's a pretty small team, so the amount of code they can possibly check 
in is actually pretty small. You will see churn all over the tree when 
you pull, but you won't have to rebuild the entire universe every time 
you pull.


What do you think?

Andreas

Mike Hommey wrote:

Hi everyone,

There's been a lot of traction recently about our builds getting slower
and what we could do about it, and what not.

Starting with bug 904979, I would like to change the way we're thinking
about default flags and options. And I am therefore opening a discussion
about it.

The main thing bug 904979 does is to make release engineering builds (as
well as linux distros, palemoon, icecat, you name it) use a special
--enable-release configure flag to use flags that we deem necessary for
a build of Firefox, the product. The flip side is that builds without
this flag, which matches builds from every developer, obviously, would
use flags that make the build faster. For the moment, on Linux systems,
this means disabling identical code folding and dead code removal (which,
while they make the binary size smaller, increase link time), and
forcing the use of the gold linker when it's available but is not system
default. With bug 905646, it will mean enabling -gsplit-dwarf when it's
available, which make link time on linux really very much faster (<4s
on my machine instead of 30s). We could and should do the same kind
of things for other platforms, with the goal of making linking
libxul.so/xul.dll/XUL faster, making edit-compile-edit cycles faster.
If that works reliably, for instance, we should for instance use
incremental linking. Please feel free to file Core::Build Config bugs
for what you think would help on your favorite build platform (and if
you do, for better tracking, make them depend on bug 904979).

That being said, this is not the discussion I want to have here, that
was merely an introduction.

The web has grown in the past few years, and so has our code base, to
support new technologies. As Nathan noted on his blog[1] disabling
webrtc calls for great build time improvements. And I think it's
something we should address by a switch in strategy.

- How many people are working on webrtc code?
- How many people are working on peripheral code that may affect webrtc?
- How many people are building webrtc code they're not working on and
   not using?

I'm fairly certain the answer to the above is that the latter population
is much bigger than the other two, by probably more than an order of
magnitude.

So here's the discussion opener: why not make things like webrtc (I'm
sure we can find many more[2]) opt-in instead of opt-out, for local,
developer builds? What do you think are good candidates for such a
switch?

Mike

1. 
https://blog.mozilla.org/nfroyd/2013/08/15/better-build-times-through-configury/
2. and we can already start with ICU, because it's built and not even
used. And to add injury to pain, it's currently built without
parallelism (and the patch to make it not do so was backed out).
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal: Email individual patch authors who improve performance

2013-08-13 Thread Andreas Gal
+1

Sent from Mobile.

On Aug 14, 2013, at 9:25, Chris Peterson  wrote:

> We could also send a weekly congratulations to the person who removed the 
> most lines of code that week. :)
>
> chris
>
>
> On 8/13/13 1:57 PM, Jet Villegas wrote:
>> This is awesome! Is it possible to see a log of the recipients/patches?
>>
>> --Jet
>>
>> - Original Message -
>> From: "Matt Brubeck" 
>> To: dev-platform@lists.mozilla.org
>> Sent: Monday, August 12, 2013 3:14:20 PM
>> Subject: Proposal: Email individual patch authors who improve performance
>>
>> I've posted a patch that would change how the graph server sends email
>> when a performance *improvement* is detected:
>> https://bugzilla.mozilla.org/show_bug.cgi?id=904250
>>
>> Emails about regressions would be unchanged.  Emails about improvements
>> would now be sent to individual patch authors using the same logic as
>> regression emails:
>>
>> * If there are up to five authors in the regression range, email
>> dev-tree-management and each of the patch authors.
>> * If there are more than five authors, just email dev-tree-management.
>>
>> I've spoken to other developers in a few places, and most were in favor
>> of this change.  Those who don't want the extra emails can filter them
>> out if they want to.  Motivations for this change:
>>
>> - When you are expecting a patch to affect performance, the email is
>> useful confirmation.  Receiving it directly saves you the work of
>> searching through the mailing list archives or manually inspecting graphs.
>>
>> - When you're not expecting a performance improvement, it may be
>> important to know that one has happened so you can figure out why.
>> (Sometimes it's a sign of a bug, for example if you accidentally
>> disabled some code.)
>>
>> - Warm fuzzy feeling for developers who write perf wins!
>>
>> I think the accuracy of the analysis script is now good enough that this
>> would not generate too many spurious emails.  If you have any opinions
>> or suggestions about this change, please share them here or in bug 904250!
>> ___
>> dev-platform mailing list
>> dev-platform@lists.mozilla.org
>> https://lists.mozilla.org/listinfo/dev-platform
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: reminder: content processes (e10s) are now used by desktop Firefox

2013-08-02 Thread Andreas Gal


We are working on ways to make add-ons like Adblock work with e10s on 
desktop without major changes to the add-on. That mechanism might work 
for the thumbnail case. Gavin can reach out to trev and discuss whether 
this is something we should try to make work. I do agree this isn't 
super high priority right now, though, and we can live with this behavior 
for thumbnails. Using e10s for this is really cool and greatly improves 
responsiveness, after all.


Andreas

t...@adblockplus.org wrote:

On 02.08.2013 03:35, Gavin Sharp wrote:

The experiment you're referring to was Adblock running in Firefox with
remote tabs enabled, I think. I'm not up to date with how that
experiment was progressing, but I think there are some fundamental
differences between that scenario and the background content processes
being used for the background thumbnailing service that might not make
the two cases directly comparable.

It would be valuable for an adblockplus developer to investigate, certainly.


Unless I missed something, this is about Adblock Plus supporting the original 
incarnation of Firefox Mobile, the one with two separate processes for chrome 
and content. This code is long gone in the current Adblock Plus versions - it 
was a real pain to support due to lots of unavoidable code duplication. The 
last version still having it is Adblock Plus 1.3.10.

The code in question was explicitly running in Firefox Mobile only. It used 
messageManager.loadFrameScript() API to inject code into the content process of new tabs - I 
doubt that it will work the same here, Adblock Plus would probably need to look explicitly for 
these  elements (is there an event when they are 
created?).

Altogether, supporting this in Adblock Plus should be possible - but it will 
require significant amounts of additional code and introduce quite a bit of new 
complexity. I also have doubts whether this is work that should receive 
priority.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: XPIDL & Promises

2013-07-30 Thread Andreas Gal


What's the main pain point? Whether promises are resolved immediately or 
from a future event loop iteration?


Andreas

Gavin Sharp wrote:

On Tue, Jul 30, 2013 at 11:17 AM, Boris Zbarsky  wrote:

On 7/30/13 11:13 AM, Dave Townsend wrote:

The JS promise implementation came out of a desire to use promises in
add-ons and devtools amongst others. I believe the C++ implementation came
out of the DOM spec. I'm not sure why we need both.

OK.  Given that there is also a desire to be able to use the DOM Promises in
b2g (see bug 897913), how do people feel about enabling the Promise API in
at least chrome globals (and via Xrays), and setting up Promise in things
like JS component globals as well?  This shouldn't be too difficult to do...
Then anyone who wants to use Promises in Chrome code can use the DOM ones.


Sounds great. We'll need to investigate whether the implementations
are compatible, though - we've been going through various existing JS
consumers switching them to a different promise implementation
(Promise.jsm, https://bugzilla.mozilla.org/show_bug.cgi?id=856878
tracks several instances) and have been running into issues with
different behavior related to promise resolution. There is a
significant amount of existing chrome code using one of the two
existing JS implementations (Promise.jsm and core/promise.js from the
Add-on SDK), and porting them to DOM promises will take some effort.

Gavin
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: XPIDL & Promises

2013-07-30 Thread Andreas Gal


Yeah, I just saw that grepping through the tree. Both completely 
independent, too. On the upside, this might solve Jan's problem.


Andreas

Boris Zbarsky wrote:

On 7/30/13 7:36 AM, Andreas Gal wrote:

For that we would have to implement Promise via IDL. Definitely
possible. All you need is a bit IDL and some JS that implements it. It
would be a lot slower than the jsm since it wraps into C++ objects that
call into JS, but in most cases that doesn't really matter.


Wait.  Why do we have multiple Promise implementations now?  :(

-Boris
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: XPIDL & Promises

2013-07-30 Thread Andreas Gal


For that we would have to implement Promise via IDL. Definitely 
possible. All you need is a bit of IDL and some JS that implements it. It 
would be a lot slower than the JSM since it wraps into C++ objects that 
call into JS, but in most cases that doesn't really matter.


Andreas

janjongb...@gmail.com wrote:

Hi,

 From code I can use Cu.import("resource://gre/modules/Promise.jsm"); to use 
promises. But is it possible to have a promise as a return type in my .idl file (b2g)?

Something like Promise blah(); won't just work and I'm a bit lost :-)
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Implementing CSS/SVG filters

2013-05-01 Thread Andreas Gal

On May 1, 2013, at 1:06 AM, "Robert O'Callahan"  wrote:

> On Wed, May 1, 2013 at 6:41 PM, Andreas Gal  wrote:
> Both Skia/SkiaGL and D2D support basically all the effects and filters we 
> want.
> 
> D2D does not support GLSL custom filters. We'd need ANGLE/GLContext 
> integration there.
>  
> The upside would be that there is exactly one path for effects/filters for 
> content and compositor. To add new filters or to maintain code or features we 
> don't have to go and update 3 different shader representations in 3 text 
> files for 3 platforms plus the content fallback path.
>  
> I don't think this would affect the amount of work required for filters all 
> that much. I proposed having GLContext-based and a CPU-based filters 
> implementation. If Skia(GL) does everything we need, then we can use it as 
> the basis of those implementations; if it doesn't, that's work we have to do 
> either way. Integrating a filter implementation into each compositor 
> shouldn't be much work.

We should probably start with the CPU-based fallback path. We can then try that 
with SkiaGL to see what the performance looks like (the GLContext-based 
implementation, essentially). Should we file a couple bugs? I might volunteer 
for the CPU fallback to get something started.
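
For the CPU path I am thinking of the simplest possible thing first, e.g. for 
feColorMatrix (scalar, no SIMD, non-premultiplied input; purely illustrative):

// Sketch: apply a 4x5 SVG color matrix (row-major, 4 rows of 5) to RGBA pixels.
#include <stdint.h>
#include <algorithm>

static void ApplyColorMatrix(uint8_t* aPixels, int aCount, const float aMatrix[20])
{
  for (int i = 0; i < aCount; ++i) {
    uint8_t* p = aPixels + i * 4;
    float r = p[0] / 255.0f, g = p[1] / 255.0f, b = p[2] / 255.0f, a = p[3] / 255.0f;
    for (int row = 0; row < 4; ++row) {
      const float* m = aMatrix + row * 5;
      float v = m[0] * r + m[1] * g + m[2] * b + m[3] * a + m[4];
      v = std::min(1.0f, std::max(0.0f, v));   // clamp to [0, 1]
      p[row] = static_cast<uint8_t>(v * 255.0f + 0.5f);
    }
  }
}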

Andreas

> 
> Rob
> -- 
> “If you love those who love you, what credit is that to you? Even sinners 
> love those who love them. And if you do good to those who are good to you, 
> what credit is that to you? Even sinners do that."

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Implementing CSS/SVG filters

2013-05-01 Thread Andreas Gal

On May 1, 2013, at 12:14 AM, Nicholas Cameron  wrote:

> This sounds like an awful lot of work, a lot more than some glue code and 
> code deletion. It sounds like you are proposing to make Moz2D pretty much a 
> general purpose 2D and 3D graphics library,

Minus support for 3D transforms, which boils down to 3 lines of shader math, why 
do you think that Moz2D would have to become a 3D graphics library?

> touch (to some effect) the whole of the graphics code, and switch to using 
> new libraries which have not been tested for this purpose and at this scale.

My understanding is that we are betting on SkiaGL and D2D as the content 
acceleration paths of the future. If that is true, why do we think that those 
paths are ok for content rendering, but not for compositing?

> All for implementing a new feature which is only "fairly" important. Given 
> how much time/pain the shadow layers refactoring cost and the time/pain it 
> has taken to get SkiaGL going, this all seems like quite a risk.

I absolutely don't think it's worth refactoring like this for filters. I am 
asking whether our gfx stack should be architected like this in general.

> 
> Surely there is a less drastic way to implement the filters?
> 
> Perhaps it is worth refactoring the graphics code in this way, but that seems 
> like a different conversation.

That's really the conversation I would like to have, actually. Filters are the 
occasion, but not the sole purpose.

Andreas
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Implementing CSS/SVG filters

2013-04-30 Thread Andreas Gal

roc and I took the discussion offline and there is another option that might be 
possible:

Instead of creating a separate content effect path and compositor effect path, 
we could add support for effects to Moz2D, and then implement our compositor 
via Moz2D.

In this world, there would only be two backends to Moz2D: Skia/SkiaGL and D2D. 
Both Skia/SkiaGL and D2D support basically all the effects and filters we want. 
Where D2D is not available, we can fall back onto SkiaGL via ANGLE or, if all 
else fails, Skia (CPU only, e.g. old XP).

The compositor is implemented on top of Moz2D and has no backend-specific code. 
layers/opengl and layers/d3* can mostly be deleted. gfx/gl can be deleted as 
well for the most part, minus what SkiaGL needs to bind to the platform GL code.

roc points out that we would have to add a couple extensions to Moz2D, 
including gralloc, tiling, component alpha, and 3D transform.

The upside would be that there is exactly one effects/filters path for content 
and the compositor. To add new filters or to maintain code or features, we 
don't have to go and update 3 different shader representations in 3 text files 
for 3 platforms plus the content fallback path. Almost all of our gfx code 
would live above the Moz2D layer and be platform independent.

Since both D2D and Skia/SkiaGL already support (and accelerate) all the filters 
we want to implement, this exercise would be mostly code deletion and writing 
some glue code around D2D and SkiaGL.
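
To make "glue code" a bit more concrete, compositing a single layer through 
Moz2D would look roughly like this (names illustrative; DrawSurface and 
DrawOptions are the actual Moz2D entry points):

// Sketch: compositing one layer through Moz2D instead of raw GL/D3D.
// aTarget would wrap the window back buffer (D2D or SkiaGL backed).
#include "mozilla/gfx/2D.h"

using namespace mozilla::gfx;

static void CompositeLayer(DrawTarget* aTarget, SourceSurface* aLayerSurface,
                           const Matrix& aTransform, float aOpacity)
{
  aTarget->SetTransform(aTransform);
  Rect bounds(0, 0, aLayerSurface->GetSize().width,
              aLayerSurface->GetSize().height);
  aTarget->DrawSurface(aLayerSurface, bounds, bounds,
                       DrawSurfaceOptions(),
                       DrawOptions(aOpacity, CompositionOp::OP_OVER));
}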

What do people think?

Andreas

On Apr 30, 2013, at 10:36 PM, "Robert O'Callahan"  wrote:

> On Wed, May 1, 2013 at 5:28 PM, Andreas Gal  wrote:
> Should we hide the temporary surface generation (when needed) within the API?
> 
> GLContext::Composite(Target, Source, EffectChain, Filters)
> 
> And if multiple shaders or passes are needed, we create a temporary surface 
> on the fly and then do the final composite with the given EffectChain.
> 
> It certainly should be hidden within the GLContext filter implementation. 
> That probably needs to live somewhere under gfx/gl, since Moz2D will need 
> access to it and Moz2D doesn't depend on layers.
> 
> Rob
> -- 
> “If you love those who love you, what credit is that to you? Even sinners 
> love those who love them. And if you do good to those who are good to you, 
> what credit is that to you? Even sinners do that."

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Implementing CSS/SVG filters

2013-04-30 Thread Andreas Gal

On Apr 30, 2013, at 10:36 PM, "Robert O'Callahan"  wrote:

> On Wed, May 1, 2013 at 5:28 PM, Andreas Gal  wrote:
> Should we hide the temporary surface generation (when needed) within the API?
> 
> GLContext::Composite(Target, Source, EffectChain, Filters)
> 
> And if multiple shaders or passes are needed, we create a temporary surface 
> on the fly and then do the final composite with the given EffectChain.
> 
> It certainly should be hidden within the GLContext filter implementation. 
> That probably needs to live somewhere under gfx/gl, since Moz2D will need 
> access to it and Moz2D doesn't depend on layers.

Why would Moz2D need access to that path? D2D provides built-in effects for 
pretty much all of the SVG filter spec (and more). If we want to abstract 
things at the EffectChain and Filter object level instead of expressing 
everything in GLSL, we might as well directly generate D2D-specific code from 
that. For accelerated GL I think we are homing in on using SkiaGL, which does 
its own internal implementation of effects/filters via GLSL. So Moz2D would 
basically map to D2D and SkiaGL.

We might actually want to do it the other way around and use SkiaGL to 
implement complex parts of the compositor pipeline. If we are comfortable with 
using SkiaGL via ANGLE on Windows, we could even do the Windows compositor 
effects/filters using SkiaGL. This would greatly reduce the amount of code we 
have to write (and give us one grand unified gfx pipeline). It's mostly just a 
bunch of glue code around SkiaGL.

Andreas

> 
> Rob

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Implementing CSS/SVG filters

2013-04-30 Thread Andreas Gal

On Apr 30, 2013, at 10:28 PM, Andreas Gal  wrote:

> 
> On Apr 30, 2013, at 9:56 PM, "Robert O'Callahan"  wrote:
> 
>> On Wed, May 1, 2013 at 4:11 PM, Andreas Gal  wrote:
>> I wonder whether we should focus on one fast GPU path via GLSL, and have one 
>> precise, working, I-don't-care-how-slow CPU fallback.
>> 
>> I agree that should be our top priority, and it may not be worth doing CPU 
>> SIMD at all. But if we can get it cheaply via Skia or somewhere else, maybe 
>> it's worth having. I didn't mean to imply CPU SIMD should be a priority.
>> 
>> As for the filters on GLContext, I wonder whether thats really the best 
>> approach. Don't most filter applications want to be injected into the shader 
>> pipeline instead? We would have to be able to compose filters for that and 
>> generate a composite GLSL program from that. So for example we want a 
>> BGRXTextureLayer, with Mask3D, with ColorMatrix, and we want GLSL source 
>> generated from that. So isn't really what we want a GLSL shader program 
>> generator (and cache) that we give an EffectChain and that gives us back a 
>> compiled shader? (with added effects including ColorMatrix, etc).
>> 
>> That's a good point: for optimal performance with simple filters we need to 
>> be able to combine the EffectChain with the filter. However I think adding 
>> filters to the EffectChain is probably not the right way to do that, because 
>> some filters require multiple passes with intermediate buffers and can't be 
>> compiled to a single program. I think to keep basic compositing simple we 
>> want an EffectChain to always compile to a single program. So I suspect the 
>> best approach might be to pass the EffectChain as an additional parameter to 
>> the GLContext filter implementation.
> 
> Should we hide the temporary surface generation (when needed) within the API?
> 
> GLContext::Composite(Target, Source, EffectChain, Filters)
> 
> And if multiple shaders or passes are needed, we create a temporary surface 
> on the fly and then do the final composite with the given EffectChain.

On second thought, I think this should really live in gfx/layers/opengl, not 
on GLContext. I just filed bug 867460 to move all the layers-specific shader 
program stuff out of gfx/gl (which we should probably slim down and mostly 
eliminate over time).

Andreas

> 
> Andreas
> 
>> 
>> Rob
> 

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Implementing CSS/SVG filters

2013-04-30 Thread Andreas Gal

On Apr 30, 2013, at 9:56 PM, "Robert O'Callahan"  wrote:

> On Wed, May 1, 2013 at 4:11 PM, Andreas Gal  wrote:
> I wonder whether we should focus on one fast GPU path via GLSL, and have one 
> precise, working, I-don't-care-how-slow CPU fallback.
> 
> I agree that should be our top priority, and it may not be worth doing CPU 
> SIMD at all. But if we can get it cheaply via Skia or somewhere else, maybe 
> it's worth having. I didn't mean to imply CPU SIMD should be a priority.
> 
> As for the filters on GLContext, I wonder whether thats really the best 
> approach. Don't most filter applications want to be injected into the shader 
> pipeline instead? We would have to be able to compose filters for that and 
> generate a composite GLSL program from that. So for example we want a 
> BGRXTextureLayer, with Mask3D, with ColorMatrix, and we want GLSL source 
> generated from that. So isn't really what we want a GLSL shader program 
> generator (and cache) that we give an EffectChain and that gives us back a 
> compiled shader? (with added effects including ColorMatrix, etc).
> 
> That's a good point: for optimal performance with simple filters we need to 
> be able to combine the EffectChain with the filter. However I think adding 
> filters to the EffectChain is probably not the right way to do that, because 
> some filters require multiple passes with intermediate buffers and can't be 
> compiled to a single program. I think to keep basic compositing simple we 
> want an EffectChain to always compile to a single program. So I suspect the 
> best approach might be to pass the EffectChain as an additional parameter to 
> the GLContext filter implementation.

Should we hide the temporary surface generation (when needed) within the API?

GLContext::Composite(Target, Source, EffectChain, Filters)

And if multiple shaders or passes are needed, we create a temporary surface on 
the fly and then do the final composite with the given EffectChain.
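
For illustration, a minimal sketch of that shape (hypothetical types and helper 
names, not real GLContext code), showing how the temporary surface stays hidden 
inside the call:

// Illustrative sketch only; these are not the real GLContext/layers types.
#include <vector>

struct Surface {};                     // stand-in for a texture/render target
struct EffectChain {};                 // mask, opacity, transform, ...
struct Filter { bool needsOwnPass; };  // e.g. a blur needs an intermediate pass

static bool NeedsIntermediate(const std::vector<Filter>& filters) {
  for (const Filter& f : filters) {
    if (f.needsOwnPass) return true;
  }
  return false;
}

// Single entry point: callers never see the temporary surface.
void Composite(Surface& target, Surface& source,
               const EffectChain& effects,
               const std::vector<Filter>& filters) {
  Surface scratch;
  Surface* src = &source;
  if (NeedsIntermediate(filters)) {
    // Multi-pass filters render into a scratch surface first...
    // ApplyFilters(scratch, source, filters);  // hypothetical helper
    src = &scratch;
  }
  // ...then one final draw with the (single-program) effect chain.
  // DrawQuad(target, *src, effects);           // hypothetical helper
  (void)target; (void)effects; (void)src;
}

int main() {
  Surface window, layer;
  Composite(window, layer, EffectChain{}, {{/*needsOwnPass=*/true}});
  return 0;
}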

Andreas

> 
> Rob

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Implementing CSS/SVG filters

2013-04-30 Thread Andreas Gal

OpenCL could definitely do this, and it interoperates pretty well with GL; 
however, it's basically not available anywhere.

Andreas

On Apr 30, 2013, at 9:26 PM, Kevin Gadd  wrote:

> Could you implement all the filters once in OpenCL to get adequate 
> performance on-CPU (with simd and threading) and adequate performance on-GPU? 
> It is my understanding that OpenCL can manipulate textures, but I don't know 
> what the constraints are (or whether ordinary users actually have a working 
> OpenCL implementation).
> 
> -kg
> 
> 
> On Tue, Apr 30, 2013 at 9:11 PM, Andreas Gal  wrote:
> 
> You propose SIMD optimization for the software fallback path. I wonder 
> whether we should focus on one fast GPU path via GLSL, and have one precise, 
> working, I-don't-care-how-slow CPU fallback. All hardware made the last few 
> years will have a GPU we support. Really old XP hardware might not, but it 
> will work (just not really fast). Skia happens to implement most of these 
> filters. Should we rely on Skia for this?
> 
> As for the filters on GLContext, I wonder whether thats really the best 
> approach. Don't most filter applications want to be injected into the shader 
> pipeline instead? We would have to be able to compose filters for that and 
> generate a composite GLSL program from that. So for example we want a 
> BGRXTextureLayer, with Mask3D, with ColorMatrix, and we want GLSL source 
> generated from that. So isn't really what we want a GLSL shader program 
> generator (and cache) that we give an EffectChain and that gives us back a 
> compiled shader? (with added effects including ColorMatrix, etc).
> 
> Andreas
> 
> On Apr 30, 2013, at 8:21 PM, "Robert O'Callahan"  wrote:
> 
> > This is a fairly important feature that people want to get working on soon,
> > but there are quite a few design issues to settle on before we go too far.
> >
> > I've tried to summarize the requirements, and my ideas about the design,
> > here:
> > https://wiki.mozilla.org/Gecko:AcceleratedFilters
> > Please tear this apart, or better still, constructively add to it :-).
> >
> > Thanks,
> > Rob
> > ___
> > dev-platform mailing list
> > dev-platform@lists.mozilla.org
> > https://lists.mozilla.org/listinfo/dev-platform
> 
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
> 

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Implementing CSS/SVG filters

2013-04-30 Thread Andreas Gal

You propose SIMD optimization for the software fallback path. I wonder whether 
we should focus on one fast GPU path via GLSL, and have one precise, working, 
I-don't-care-how-slow CPU fallback. All hardware made in the last few years will 
have a GPU we support. Really old XP hardware might not, but it will still work 
(just not really fast). Skia happens to implement most of these filters. Should 
we rely on Skia for this?

As for the filters on GLContext, I wonder whether that's really the best 
approach. Don't most filter applications want to be injected into the shader 
pipeline instead? We would have to be able to compose filters for that and 
generate a composite GLSL program from them. For example, we might want a 
BGRXTextureLayer, with Mask3D, with ColorMatrix, and want GLSL source generated 
from that. So isn't what we really want a GLSL shader program generator (and 
cache) that we hand an EffectChain and that gives us back a compiled shader 
(with added effects including ColorMatrix, etc.)?
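
Roughly what that generator-plus-cache could look like, as a minimal sketch 
(hypothetical effect bits and GLSL fragments, purely illustrative):

// Hypothetical sketch of an EffectChain -> compiled-program cache.
#include <cstdint>
#include <cstdio>
#include <string>
#include <unordered_map>

enum EffectBits : uint32_t {
  EFFECT_BGRX_TEXTURE = 1 << 0,
  EFFECT_MASK_3D      = 1 << 1,
  EFFECT_COLOR_MATRIX = 1 << 2,
};

struct Program { std::string fragmentSource; };  // would hold a GLuint in real life

class ShaderGenerator {
 public:
  // Returns a cached program, generating (and, in real life, compiling) it the
  // first time a given effect combination is requested.
  const Program& Get(uint32_t effects) {
    auto it = mCache.find(effects);
    if (it != mCache.end()) return it->second;
    Program p;
    p.fragmentSource = "vec4 c = texture2D(uSource, vUV);\n";
    if (effects & EFFECT_COLOR_MATRIX)
      p.fragmentSource += "c = uColorMatrix * c + uColorOffset;\n";
    if (effects & EFFECT_MASK_3D)
      p.fragmentSource += "c *= texture2D(uMask, vMaskUV).a;\n";
    p.fragmentSource += "gl_FragColor = c;\n";
    return mCache.emplace(effects, std::move(p)).first->second;
  }
 private:
  std::unordered_map<uint32_t, Program> mCache;  // keyed by effect combination
};

int main() {
  ShaderGenerator gen;
  const Program& p = gen.Get(EFFECT_BGRX_TEXTURE | EFFECT_COLOR_MATRIX);
  std::printf("%s", p.fragmentSource.c_str());
  return 0;
}

On a cache hit this hands back the previously built program, so composing 
effects stays cheap after the first use of a given combination.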

Andreas

On Apr 30, 2013, at 8:21 PM, "Robert O'Callahan"  wrote:

> This is a fairly important feature that people want to get working on soon,
> but there are quite a few design issues to settle on before we go too far.
> 
> I've tried to summarize the requirements, and my ideas about the design,
> here:
> https://wiki.mozilla.org/Gecko:AcceleratedFilters
> Please tear this apart, or better still, constructively add to it :-).
> 
> Thanks,
> Rob
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Storage in Gecko

2013-04-26 Thread Andreas Gal
We filed a bug for this and I am working on the patch.

Andreas

Sent from Mobile.

On Apr 26, 2013, at 16:06, Mounir Lamouri  wrote:

> On 26/04/13 11:17, Gregory Szorc wrote:
>> Anyway, I just wanted to see if others have thought about this. Do
>> others feel it is a concern? If so, can we formulate a plan to address
>> it? Who would own this?
>
> As others, I believe that we should use IndexedDB for Gecko internal
> storage. I opened a bug regarding this quite a while ago:
> https://bugzilla.mozilla.org/show_bug.cgi?id=766057
>
> We could easily imagine an XPCOM component that would expose a simple
> key/value storage available from JS or C++ using IndexedDB in the backend.
>
> --
> Mounir
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Storage in Gecko

2013-04-26 Thread Andreas Gal

Preferences are, as the name implies, intended for preferences. There is no sane 
use case for storing data in preferences. I would give any patch I come across 
doing that an automatic sr- for poor taste and general insanity.

SQLite is definitely not cheap, and we should look at more suitable backends 
for our storage needs, but done properly, off the main thread, it is definitely 
a saner way to go than (1).

While (2) is a foot-gun, (3) is a guaranteed foot-nuke. While it's easy to use 
SQLite wrong, it's almost guaranteed that you will get your own atomic storage 
file handling wrong across our N platforms.

Chrome is working on replacing SQLite with LevelDB for IndexedDB and most of 
their storage needs. Last time we looked it wasn't ready for prime time. Maybe 
it is now. This might be the best option.
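
Whatever the backend ends up being, the consumer-facing surface people are 
asking for in this thread is roughly a key/value store. A minimal sketch 
(hypothetical API with an in-memory stand-in backend; the real thing would be 
async and durable):

// Hypothetical sketch of the kind of API consumers actually want:
// get/set/remove on string keys -- no SQL, no schema, no PRAGMA statements.
#include <iostream>
#include <optional>
#include <string>
#include <unordered_map>

class KeyValueStore {
 public:
  void Set(const std::string& key, const std::string& value) {
    mData[key] = value;  // real backend: queue an async, durable write
  }
  std::optional<std::string> Get(const std::string& key) const {
    auto it = mData.find(key);
    if (it == mData.end()) return std::nullopt;
    return it->second;
  }
  void Remove(const std::string& key) { mData.erase(key); }
 private:
  std::unordered_map<std::string, std::string> mData;  // in-memory stand-in
};

int main() {
  KeyValueStore store;
  store.Set("app.lastSync", "2013-04-26T11:17:00Z");
  if (auto v = store.Get("app.lastSync")) std::cout << *v << "\n";
  return 0;
}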

Andreas

On Apr 26, 2013, at 11:17 AM, Gregory Szorc  wrote:

> I'd like to start a discussion about the state of storage in Gecko.
> 
> Currently when you are writing a feature that needs to store data, you
> have roughly 3 choices:
> 
> 1) Preferences
> 2) SQLite
> 3) Manual file I/O
> 
> Preferences are arguably the easiest. However, they have a number of
> setbacks:
> 
> a) Poor durability guarantees. See bugs 864537 and 849947 for real-life
> issues. tl;dr writes get dropped!
> b) Integers limited to 32 bit (JS dates overflow b/c milliseconds since
> Unix epoch).
> c) I/O is synchronous.
> d) The whole method for saving them to disk is kind of weird.
> e) The API is awkward. See Preferences.jsm for what I'd consider a
> better API.
> f) Doesn't scale for non-trivial data sets.
> g) Clutters about:config (all preferences aren't config options).
> 
> We have SQLite. You want durability: it's your answer. However, it too
> has setbacks:
> 
> a) It eats I/O operations for breakfast. Multiple threads. Lots of
> overhead compared to prefs. (But hard to lose data.)
> b) By default it's not configured for optimal performance (you need to
> enable the WAL, muck around with other PRAGMA).
> c) Poor schemas can lead to poor performance.
> d) It's often overkill.
> e) Storage API has many footguns (use Sqlite.jsm to protect yourself).
> f) Lots of effort to do right. Auditing code for 3rd party extensions
> using SQLite, many of them aren't doing it right.
> 
> And if one of those pre-built solutions doesn't offer what you need, you
> can roll your own with file I/O. But that also has setbacks:
> 
> a) You need to roll your own. (How often do I flush? Do I use many small
> files or fewer large files? Different considerations for mobile (slow
> I/O) vs desktop?)
> b) You need to roll your own. (Listing it twice because it's *really*
> annoying, especially for casual developers that just want to implement
> features - think add-on developers.)
> c) Easy to do wrong (excessive flushing/fsyncing, too many I/O
> operations, inefficient appends, poor choices for mobile, etc).
> d) Wheel reinvention. Atomic operations/transactions. Data marshaling. etc.
> 
> I believe there is a massive gap between the
> easy-but-not-ready-for-prime-time preferences and
> the-massive-hammer-solving-the-problem-you-don't-have-and-introducing-many-new-ones
> SQLite. Because this gap is full of unknowns, I'm arguing that
> developers tend to avoid it and use one of the extremes instead. And,
> the result is features that have poor durability and/or poor
> performance. Not good. What's worse is many developers (including
> myself) are ignorant of many of these pitfalls. Yes, we have code review
> for core features. But code review isn't perfect and add-ons likely
> aren't subjected to the same level of scrutiny. The end result is the
> same: Firefox isn't as awesome as it could be.
> 
> I think there is an opportunity for Gecko to step in and provide a
> storage subsystem that is easy to use, somewhere between preferences and
> SQLite in terms of durability and performance, and "just works." I don't
> think it matters how it is implemented under the hood. If this were to
> be built on top of SQLite, I think that would be fine. But, please don't
> make consumers worry about things like SQL, schema design, and PRAGMA
> statements. So, maybe I'm advocating a generic key-value store. Maybe
> something like DOM Storage? Maybe SQLite 4 (which is emphasizing
> key-value storage and speed)? Just... something. Please.
> 
> Anyway, I just wanted to see if others have thought about this. Do
> others feel it is a concern? If so, can we formulate a plan to address
> it? Who would own this?
> 
> Gregory
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Improving Mac OS X 10.6 test wait times by reducing 10.7 load

2013-04-25 Thread Andreas Gal

How many 10.7 machines do we operate in that pool?

Andreas

On Apr 25, 2013, at 10:30 AM, "Armen Zambrano G."  wrote:

> (please follow up through mozilla.dev.planning)
> 
> Hello all,
> I have recently been looking into our Mac OS X test wait times which have 
> been bad for many months and progressively getting worst.
> Less than 80% of test jobs on OS X 10.6 and 10.7 are able to start
> within 15 minutes of being requested.
> This slows down getting tests results for OS X and makes tree closures longer 
> if we have Mac OS X test back logs.
> Unfortunately, we can't buy any more revision 4 Mac minis (they're not sold 
> anymore) as Apple discontinues old hardware as new ones comes out.
> 
> In order to improve the turnaround time for Mac testing, we have to look into 
> reducing our test load in one of these two OSes (both of them run on revision 
> 4 minis).
> We have over a third of our OS X users running 10.6. Eventually, down the 
> road, we could drop 10.6 but we still have a significant amount of our users 
> there; even though Mac stopped serving them major updates since July 2011 [1].
> 
> Our current Mac OS X distribution looks like this:
> * 10.6 - 43%
> * 10.7 - 30%
> * 10.8 - 27%
> OS X 10.8 is the only version that is growing.
> 
> In order to improve our wait times, I propose that we stop testing on tbpl 
> per-checkin [2] on OS X 10.7 and re-purpose the 10.7 machines as 10.6 to 
> increase our capacity.
> 
> Please let us know if this plan is unacceptable and needs further discussion.
> 
> best regards,
> Armen Zambrano - Mozilla's Release Engineering
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Rethinking the amount of system JS we use in Gecko on B2G

2013-04-22 Thread Andreas Gal
JS is a big advantage for rapid implementation of features, and it's
easier to avoid exploitable mistakes. Also, in many cases JS code
(bytecode, not data) should be slimmer than C++. Using JS for
infrequently executed code should be a memory win. I think I would
like to hear from the JS team on reducing the memory use of JS and
disabling any compilation for infrequently running code before we give
up on it.

Andreas

Sent from Mobile.

On Apr 22, 2013, at 7:05, Justin Lebar  wrote:

> Of course attachments don't work great on newsgroups.  I've uploaded
> the about:memory dumps I tried to attach to people.m.o:
>
> http://people.mozilla.org/~jlebar/downloads/merged.json.xz
> http://people.mozilla.org/~jlebar/downloads/unmerged.json.xz
>
> On Sun, Apr 21, 2013 at 7:51 PM, Justin Lebar  wrote:
>> I think we should consider using much less JS in the parts of Gecko that are
>> used in B2G.  I'd like us to consider writing new modules in C++ where
>> possible, and I'd like us to consider rewriting existing modules in C++.
>>
>> I'm only proposing a change for modules which are enabled for B2G.  For 
>> modules
>> which aren't enabled on B2G, I'm not proposing any change.
>>
>> What I'd like to come out of this thread is a consensus one way or another as
>> to whether we continue along our current path of writing many features that 
>> are
>> enabled on B2G in JS, or whether we change course.
>>
>> Since most of these features implemented in JS seem to be DOM features, I'm
>> particularly interested in the opinions of the DOM folks.  I'm also 
>> interested
>> in the opinions of JS folks, particularly those who know about the memory 
>> usage
>> of our new JITs.
>>
>> In the remainder of this e-mail I'll first explain where our JS memory is
>> going.  Then I'll address two arguments that might be made against my 
>> proposal
>> to use more C++.  Finally, I'll conclude by suggesting a plan of action.
>>
>> === Data ===
>>
>> Right now about 50% (16mb) of the memory used by the B2G main process
>> immediately after rebooting is JS.   It is my hypothesis that we could 
>> greatly
>> reduce this by converting modules to C++.
>>
>> On our 256mb devices, we have about 120mb available to Gecko, so this 16mb
>> represents 13% of all memory available to B2G.
>>
>> To break down the 16mb of JS memory, 8mb is from four workers: ril_worker,
>> net_worker, wifi_worker (x2).  5mb of the 8mb is under "unused-arenas"; this 
>> is
>> fragmentation in the JS heap.  Based on my experience tackling fragmentation 
>> in
>> the jemalloc heap, I suspect reducing this would be difficult.  But even if 
>> we
>> eliminated all of the fragmentation, we'd still be spending 3mb on these four
>> workers, which I think is likely far more than we need.
>>
>> The other 8mb is everything else in the system compartment (all our JSMs,
>> XPCOM components, etc).  In a default B2G build you don't get a lot of 
>> insight
>> into this, because most of the system compartments are squished together to 
>> save
>> memory (bug 798491).  If I set jsloader.reuseGlobal to false, the amount of
>> memory used increases from 8mb to 15mb, but now we can see where it's going.
>>
>> The list of worst offenders follows, but because this data was collected with
>> reuseGlobal turned off, apply generous salt.
>>
>>  0.74 MB modules/Webapps.jsm
>>  0.59 MB anonymous sandbox from devtools/dbg-server.jsm:41
>>  0.53 MB components/SettingsManager.js
>>  0.53 MB chrome://browser/content/shell.xul
>>  0.49 MB components/WifiWorker.js
>>  0.43 MB modules/DOMRequestHelper.jsm
>>  0.38 MB modules/XPCOMUtils.jsm
>>  0.34 MB RadioInterfaceLayer.js
>>  0.31 MB AppsUtils.jsm
>>  0.27 MB Webapps.js
>>  0.22 MB BrowserElementParent.jsm
>>  0.21 MB app://system.gaiamobile.org/index.html
>>
>> Many (but certainly not all) of these modules could be rewritten in C++.
>>
>> Beyond this list, it's death by a thousand cuts; there are 100 compartments 
>> in
>> there, and they each cost a small amount.
>>
>> I've attached two about:memory dumps collected on my hamachi device soon 
>> after
>> reboot, so you can examine the situation more closely, if you like.
>> merged.json was collected with the default config, and unmerged.json was
>> collected with jsloader.reuseGlobal set to false.
>>
>> Download and extract these files and then open them with the button at
>> the bottom
>> of about:memory in Nightly.
>>
>> (Before you ask: Most of the heap-unclassified in these dumps is
>> graphics memory,
>> allocated in drivers.)
>>
>> === Should we use JS because it's nicer than C++? ===
>>
>> I recognize that in many ways JS is a more convenient language than C++.  But
>> that's besides the point here.  The point is that in the environment we're
>> targeting, our implementation of JS is too heavyweight.  We can either fix 
>> our
>> implementation or use less JS, but we can't continue using as much JS as we
>> like without doing one of these two things.
>>
>> === Why not just make JS slimmer? ===
>>
>

Re: WebP support

2013-04-08 Thread Andreas Gal
I assume all this data/reasoning will be posted in the bug. People
just didn't get around to it yet. The idea was to use the bug to
discuss the issue. There is definitely no decision yet to ship, just a
decision to take a look at an additional data point someone raised.

Andreas

Sent from Mobile.

On Apr 8, 2013, at 9:55, Ralph Giles  wrote:

> On 13-04-08 4:06 AM, Jeff Muizelaar wrote:
>> No decision has been made yet. We are still evaluating the format.
>
> I think the concern is that none of that re-evaluation has been on a
> public list or bug I've seen. Can you clarify what Andreas meant by,
> "new data that shows that WebP has valid use cases and advantages" in
> https://bugzilla.mozilla.org/show_bug.cgi?id=600919#c185 ?
>
> -r
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Soliciting advice on #650960 (replacement for print progress bars)

2013-02-25 Thread Andreas Gal

Do we actually need the tab, or just the document? If it's the latter, can we 
just keep the document around invisibly?

Andreas

On Feb 25, 2013, at 10:14 PM, Zack Weinberg  wrote:

> https://bugzilla.mozilla.org/show_bug.cgi?id=650960 seeks to replace the 
> existing print progress bars with something that isn't app-modal. Ignore 
> musings in the description and first few comments about getting rid of them 
> entirely and/or waiting for bug 629500.  The current thinking is that we need 
> *some* indication that a print job is in progress, because we need to prevent 
> the user from closing the tab or window until the print job has been 
> completely handed off to the OS. However, the way this is implemented now is 
> inconvenient (it's been shoehorned into the nsIWebProgressListener interface, 
> which is not really fit for the purpose, and it involves some really icky 
> [that's a technical term] back-and-forth between C++ and JS) and app-modal 
> anything is Just Wrong.
> 
> The existing patches in the bug have been vetoed because doorhanger 
> notifications aren't even universally available within Firefox, never mind 
> other applications.  I am not aware of any universal alternative, and I know 
> very little about XUL.  I *think* that the low-level approach in the bug, of 
> firing special chrome events at the window (plus some docshell goo to do the 
> actual close suppression), is still viable, and I think doorhangers are 
> appropriate for this when they're available.  But I would like some help 
> figuring out what a good universal-backstop *receiver* of those chrome events 
> would look like, both in UX terms and implementation-wise.
> 
> Thanks,
> zw
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: OpenVG Azure backend

2013-02-25 Thread Andreas Gal

On Feb 25, 2013, at 8:22 PM, George Wright  wrote:

> On 02/23/2013 04:00 PM, Andreas Gal wrote:
>> OpenVG is a Khronos standard API for GPU accelerated 2D rendering. Its very 
>> similar to OpenGL in design. In fact, its an alternative API to OpenGL ES on 
>> top of EGL. It looks like that OpenVG is supported on most Android devices 
>> and is used there by Flash (or well used to be used). B2G devices have 
>> OpenVG support as well. There are also a number of open source 
>> implementations of OpenVG that use OpenGL to accelerate 2D operations that 
>> might be interesting for the desktop. OpenVG is pretty similar to Cairo and 
>> Skia when it comes to the actual operations offered. The biggest drawback of 
>> OpenVG is that it doesn't mix well with OpenGL. Its possible to render with 
>> OpenVG to a texture and then composite that with OpenGL, but its not 
>> possible to do mixed rendering with VG and GL to the same surface. That 
>> having said, I still think we should consider adding an OpenVG backend. It 
>> would potentially significantly speed up rendering on mobile hardware. 
>> OpenVG is quite a bit more seasoned than Skia/SkiaGL, and explicitly targets 
>> mobile, whereas SkiaGL seems to be 
>> mostly optimized for the desktop (at least so far). A particular advantage 
>> of OpenVG is that it can take advantage of dedicated 2D acceleration 
>> hardware (Mali and Adreno both have special 2D hardware OpenVG uses). SkiaGL 
>> on the other hand is limited to using GLES to accelerate 2D drawing 
>> operations. What do people think? Should we add an OpenVG backend?
>> 
> 
> My (very limited) knowledge about OpenVG is that people in the industry
> at the vendor level tend to not care about it. I think the reason for
> this is basically that there aren't any significant users of OpenVG
> (WebKit used to have a VG backend but it only had one user and has now
> been disowned). As a result of this indifference from the vendors, my
> understanding is that there aren't any actual hardware accelerated VG
> implementations. I think Qualcomm used to have one, but if I remember
> correctly they didn't really want to keep maintaining it.

I am trying to get feedback from 3 different chipset vendors. We should know 
pretty soon what the support level looks like these days.

Andreas

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: OpenVG Azure backend

2013-02-23 Thread Andreas Gal

On Feb 23, 2013, at 11:15 PM, Benoit Jacob  wrote:

> The biggest disadvantage of OpenVG is that it is a whole new surface of 
> contact with drivers. We're only just now digesting the surface of OpenGL and 
> gralloc drivers, in terms of driver bugs and other device-dependent behavior. 
> So the prospect of throwing OpenVG into the mix is scary and unexciting, and 
> so the bar has to be very high. In addition to performance comparisons I 
> would have to hear a good reason to believe that OpenGL+software can't be as 
> fast as OpenVG -- is it because OpenGL doesn't efficiently expose the 
> hardware features, or is it because there is OpenVG-specific hardware?

Scary and unexciting are not strong reasons to miss out on potential gfx 
performance wins, especially on low-end mobile hardware. From reading up on two 
chipset specs, both Adreno and Mali seem to have special dedicated hardware for 
OpenVG (for tessellation, I am guessing). If we find OpenVG too unstable across 
the broad set of Android hardware, we can always limit OpenVG use to B2G 
devices, where we can work with the vendor.

Andreas

> 
> Benoit
> 
> 2013/2/23 Andreas Gal 
> 
> OpenVG is a Khronos standard API for GPU accelerated 2D rendering. Its very 
> similar to OpenGL in design. In fact, its an alternative API to OpenGL ES on 
> top of EGL. It looks like that OpenVG is supported on most Android devices 
> and is used there by Flash (or well used to be used). B2G devices have OpenVG 
> support as well. There are also a number of open source implementations of 
> OpenVG that use OpenGL to accelerate 2D operations that might be interesting 
> for the desktop. OpenVG is pretty similar to Cairo and Skia when it comes to 
> the actual operations offered. The biggest drawback of OpenVG is that it 
> doesn't mix well with OpenGL. Its possible to render with OpenVG to a texture 
> and then composite that with OpenGL, but its not possible to do mixed 
> rendering with VG and GL to the same surface. That having said, I still think 
> we should consider adding an OpenVG backend. It would potentially 
> significantly speed up rendering on mobile hardware. OpenVG is quite a bit 
> more seasoned than Skia/SkiaGL, and explicitly targets mobile, whereas SkiaGL 
> seems to be 
> mostly optimized for the desktop (at least so far). A particular advantage of 
> OpenVG is that it can take advantage of dedicated 2D acceleration hardware 
> (Mali and Adreno both have special 2D hardware OpenVG uses). SkiaGL on the 
> other hand is limited to using GLES to accelerate 2D drawing operations. What 
> do people think? Should we add an OpenVG backend?
> 
> Andreas
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
> 

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: OpenVG Azure backend

2013-02-23 Thread Andreas Gal

On Feb 23, 2013, at 10:15 PM, Kevin Gadd  wrote:

> Are there any benchmarks that demonstrate OpenVG being faster on
> mobile hardware than say, SkiaGL? Does it have the full feature set

There are benchmarks showing that the OpenVG backend of cairo sped up WebKit 
significantly, including SVG.

> that Gecko tends to need in 2D scenarios?

From a casual comparison of the API surface, OpenVG supports all of the 
functionality Azure needs.

> Having to render to a
> separate render target and then composite back into your OpenGL scene
> seems like it could cause some really bad overhead if you end up
> having to fall back to GL for things that VG can't do, because you'd
> end up bouncing between render targets repeatedly. It'd also increase
> the memory usage for each canvas since you'd need two RTs instead of
> one.

Again, as far as I can tell, anything we do via Azure we can do in OpenVG. We 
can't run shaders in OpenVG, so the more advanced shader stuff coming to CSS we 
will have to do via a layer, but that's the plan anyway.

Nvidia has a proposal to integrate OpenVG-like path drawing with OpenGL 
(NV_path_rendering). That approach is strictly superior, but probably not 
available broadly for quite a while. Over time we will want to switch over to 
this if it becomes more common, but for now, especially for mobile, OpenVG 
seems attractive.

Andreas

> 
> -kg
> 
> On Sat, Feb 23, 2013 at 1:00 PM, Andreas Gal  wrote:
>> 
>> OpenVG is a Khronos standard API for GPU accelerated 2D rendering. Its very 
>> similar to OpenGL in design. In fact, its an alternative API to OpenGL ES on 
>> top of EGL. It looks like that OpenVG is supported on most Android devices 
>> and is used there by Flash (or well used to be used). B2G devices have 
>> OpenVG support as well. There are also a number of open source 
>> implementations of OpenVG that use OpenGL to accelerate 2D operations that 
>> might be interesting for the desktop. OpenVG is pretty similar to Cairo and 
>> Skia when it comes to the actual operations offered. The biggest drawback of 
>> OpenVG is that it doesn't mix well with OpenGL. Its possible to render with 
>> OpenVG to a texture and then composite that with OpenGL, but its not 
>> possible to do mixed rendering with VG and GL to the same surface. That 
>> having said, I still think we should consider adding an OpenVG backend. It 
>> would potentially significantly speed up rendering on mobile hardware. 
>> OpenVG is quite a bit more seasoned than Skia/SkiaGL, and explicitly targets 
>> mobile, whereas SkiaGL seems to be 
>> mostly optimized for the desktop (at least so far). A particular advantage 
>> of OpenVG is that it can take advantage of dedicated 2D acceleration 
>> hardware (Mali and Adreno both have special 2D hardware OpenVG uses). SkiaGL 
>> on the other hand is limited to using GLES to accelerate 2D drawing 
>> operations. What do people think? Should we add an OpenVG backend?
>> 
>> Andreas
>> ___
>> dev-platform mailing list
>> dev-platform@lists.mozilla.org
>> https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


OpenVG Azure backend

2013-02-23 Thread Andreas Gal

OpenVG is a Khronos standard API for GPU-accelerated 2D rendering. It's very 
similar to OpenGL in design; in fact, it's an alternative API to OpenGL ES on 
top of EGL. It looks like OpenVG is supported on most Android devices and is 
used there by Flash (or at least used to be). B2G devices have OpenVG support 
as well. There are also a number of open source implementations of OpenVG that 
use OpenGL to accelerate 2D operations, which might be interesting for the 
desktop. OpenVG is pretty similar to Cairo and Skia when it comes to the actual 
operations offered.

The biggest drawback of OpenVG is that it doesn't mix well with OpenGL. It's 
possible to render with OpenVG to a texture and then composite that with 
OpenGL, but it's not possible to do mixed rendering with VG and GL to the same 
surface. That having been said, I still think we should consider adding an 
OpenVG backend. It would potentially significantly speed up rendering on mobile 
hardware. OpenVG is quite a bit more seasoned than Skia/SkiaGL, and explicitly 
targets mobile, whereas SkiaGL seems to be mostly optimized for the desktop (at 
least so far). A particular advantage of OpenVG is that it can take advantage 
of dedicated 2D acceleration hardware (Mali and Adreno both have special 2D 
hardware that OpenVG uses). SkiaGL, on the other hand, is limited to using GLES 
to accelerate 2D drawing operations.

What do people think? Should we add an OpenVG backend?

Andreas
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Increase in memory utilization on Mac OSX 10.7+ due to history swipe animations

2013-02-12 Thread Andreas Gal

I think people are legitimately concerned about the memory use of this feature. 
I don't think anyone is trying to dismiss anything here. I still don't fully 
understand why we need a full-size screenshot of the last N pages. Would the 
last page be sufficient, with the rest redrawn while we animate? I am sure you 
guys considered this, so I am curious why it was excluded.
Thanks,

Andreas

On Feb 12, 2013, at 11:29 PM, Asa Dotzler  wrote:

> On 2/12/2013 8:05 PM, Andreas Gal wrote:
>> Hey Asa,
>> 
>> where does the magic 20 pages deep history number come from? Why not 1? Or 
>> 999?
>> 
>> Andreas
> 
> The goal of the feature is to work when ever the user engages it and not just 
> "for the first couple of pages".
> 
> Just have one and too many users would fall off of their back stack very 
> quickly. Nine hundred ninety nine and we'd be wasting memory for all but a 
> very small minority of users who accumulate and navigate through that much 
> history.
> 
> We have Test Pilot data we could use to fine tune this, maybe we only need 
> 15, or maybe we actually need 25.
> 
> My point isn't to quarrel about the depth of that back stack, but to say that 
> it is not OK to simply dismiss a new feature because it increases memory 
> footprint. Features vs footprint is a balancing act. Both sides must be 
> weighed and that wasn't what I saw happening here. What I saw happening was 
> the out of hand dismissal of a feature based on no consideration other than 
> increased memory footprint. That cannot be how we roll.
> 
> - A
> 
> 
>> 
>> On Feb 12, 2013, at 9:40 PM, Asa Dotzler  wrote:
>> 
>>> On 2/12/2013 3:08 PM, Ed Morley wrote:
>>>> On 12 February 2013 22:11:12, Stephen Pohl wrote:
>>>>> I wanted to give a heads up that we're in the process of finalizing
>>>>> the patch for bug 678392 which will give us history swipe animations
>>>>> on Mac OSX 10.7+. Since we will be taking snapshots of the 20
>>>>> most-recently visited pages, this will undoubtedly lead to an increase
>>>>> in memory utilization on these platforms.
>>>> 
>>>> To save everyone having to look at the graph - the initial landing
>>>> showed a consistent 20% regression in trace malloc maxheap. If this were
>>>> a 1-5% regression, then I think it would be worth discussing the
>>>> trade-off. At 20%, I really don't see how we can take this, sorry! :-(
>>>> 
>>>> Ed
>>> 
>>> I don't see how we can *not* take this. Of course it's going to mean using 
>>> more memory.  If it doesn't leak, and it doesn't put us over some magic 
>>> limit where a significant portion of our users end up paging or something 
>>> like that, then I don't see how we can reject it.
>>> 
>>> Without context, 1-5% or 20% growth are just meaningless numbers. The 
>>> context here is not some accidental regression or a feature doing something 
>>> horribly wrong with memory. This is simply a memory-expensive feature and 
>>> it's a feature we *must* land.
>>> 
>>> I'm all for smart people looking for ways to get this memory usage as low 
>>> as it can be without undermining the value of the feature, but if we cannot 
>>> find those wins, we should land this as it is.
>>> 
>>> 
>>> - A
>>> ___
>>> dev-platform mailing list
>>> dev-platform@lists.mozilla.org
>>> https://lists.mozilla.org/listinfo/dev-platform
>> 
> 
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Increase in memory utilization on Mac OSX 10.7+ due to history swipe animations

2013-02-12 Thread Andreas Gal
Hey Asa,

where does the magic 20 pages deep history number come from? Why not 1? Or 999?

Andreas

On Feb 12, 2013, at 9:40 PM, Asa Dotzler  wrote:

> On 2/12/2013 3:08 PM, Ed Morley wrote:
>> On 12 February 2013 22:11:12, Stephen Pohl wrote:
>>> I wanted to give a heads up that we're in the process of finalizing
>>> the patch for bug 678392 which will give us history swipe animations
>>> on Mac OSX 10.7+. Since we will be taking snapshots of the 20
>>> most-recently visited pages, this will undoubtedly lead to an increase
>>> in memory utilization on these platforms.
>> 
>> To save everyone having to look at the graph - the initial landing
>> showed a consistent 20% regression in trace malloc maxheap. If this were
>> a 1-5% regression, then I think it would be worth discussing the
>> trade-off. At 20%, I really don't see how we can take this, sorry! :-(
>> 
>> Ed
> 
> I don't see how we can *not* take this. Of course it's going to mean using 
> more memory.  If it doesn't leak, and it doesn't put us over some magic limit 
> where a significant portion of our users end up paging or something like 
> that, then I don't see how we can reject it.
> 
> Without context, 1-5% or 20% growth are just meaningless numbers. The context 
> here is not some accidental regression or a feature doing something horribly 
> wrong with memory. This is simply a memory-expensive feature and it's a 
> feature we *must* land.
> 
> I'm all for smart people looking for ways to get this memory usage as low as 
> it can be without undermining the value of the feature, but if we cannot find 
> those wins, we should land this as it is.
> 
> 
> - A
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Off-main-thread Painting

2013-02-12 Thread Andreas Gal

There are definitely a lot of prerequisites we can work on first. Making all the 
display list (DL) code self-contained means eliminating much of 
nsCSSRendering.cpp, which is a fair amount of refactoring work we can start on 
right now.

Andreas

On Feb 12, 2013, at 12:24 PM, Jet Villegas  wrote:

> I'm not too concerned about the landing schedule here as I don't expect to 
> see OMTP landing too soon (though mwoodrow has surprised us before.) We 
> should have plenty of runway between the required Layers work and OMTP. 
> 
> Let's put OMTP (and prerequisites) on the next Rendering call agenda. This 
> should be a lively discussion.
> 
> --Jet
> 
> - Original Message -
> From: "Milan Sreckovic" 
> To: "Matt Woodrow" 
> Cc: "Robert O'Callahan" , dev-platform@lists.mozilla.org
> Sent: Tuesday, February 12, 2013 6:50:42 AM
> Subject: Re: Off-main-thread Painting
> 
> 
> I think we need a stronger statement than "worthwhile" in this:
> 
> It would be worthwhile to wait for the Layers refactoring to be completed to 
> avoid too many conflicts.
> 
> when it comes to actually landing code.  Something like "we should" or even 
> "we must" comes to mind :-)  That doesn't preclude conversations, design, 
> etc., we just can't afford to delay the layers refactoring.
> 
> Milan
> 
> On 2013-02-12, at 2:21 AM, Matt Woodrow  wrote:
> 
>> Hi All
>> 
>> As an effort to improve both performance and responsiveness of the browser, 
>> we are planning on moving painting to happen on a separate thread.
>> 
>> My initial draft plan to do this can be found here: 
>> https://wiki.mozilla.org/Gecko:OffMainThreadPainting
>> 
>> Some of the details still need to be worked out, in particular coordinating 
>> with other platform teams to make sure everything we need is available off 
>> the main thread.
>> 
>> Please let me know if you have an queries, input or want to be involved.
>> 
>> - Matt
>> ___
>> dev-platform mailing list
>> dev-platform@lists.mozilla.org
>> https://lists.mozilla.org/listinfo/dev-platform
> 
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Off-main-thread Painting

2013-02-12 Thread Andreas Gal

On Feb 12, 2013, at 9:50 AM, Milan Sreckovic  wrote:

> 
> I think we need a stronger statement than "worthwhile" in this:
> 
> It would be worthwhile to wait for the Layers refactoring to be completed to 
> avoid too many conflicts.
> 
> when it comes to actually landing code.  Something like "we should" or even 
> "we must" comes to mind :-)  That doesn't preclude conversations, design, 
> etc., we just can't afford to delay the layers refactoring.

I am a little concerned that we are putting so many things on hold for the 
refactoring. Do we have a somewhat established schedule for that now, or is the 
timing still fairly open-ended? Is there any chance that we can start landing 
some parts of it incrementally?

Andreas

> 
> Milan
> 
> On 2013-02-12, at 2:21 AM, Matt Woodrow  wrote:
> 
>> Hi All
>> 
>> As an effort to improve both performance and responsiveness of the browser, 
>> we are planning on moving painting to happen on a separate thread.
>> 
>> My initial draft plan to do this can be found here: 
>> https://wiki.mozilla.org/Gecko:OffMainThreadPainting
>> 
>> Some of the details still need to be worked out, in particular coordinating 
>> with other platform teams to make sure everything we need is available off 
>> the main thread.
>> 
>> Please let me know if you have an queries, input or want to be involved.
>> 
>> - Matt
>> ___
>> dev-platform mailing list
>> dev-platform@lists.mozilla.org
>> https://lists.mozilla.org/listinfo/dev-platform
> 
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Scrolling, Scrolling and more Scrolling

2013-01-28 Thread Andreas Gal
Hi Anthony,

thanks for bringing this up. I completely agree that we have to unify all that 
scrolling code. Chris Jones and Doug Sherk wrote most of the current C++ async 
scrolling code, and we should definitely unify around that. Once we support 
multiple concurrent scrollable regions instead of just one level of scrolling, 
BrowserElementScrolling.js won't be needed anymore, I think. The Java code 
should definitely die. Since you have worked a bunch on this code, I don't 
think anyone would try to stop you if you want to propose a plan, and I am sure 
Chris would be happy to help draft it.

Andreas

On Jan 28, 2013, at 8:14 PM, Anthony Jones  wrote:

> I've spent several weeks fixing scrolling and zooming bugs on b2g. You
> may have enjoyed bug 831973 (and it's duplicates) over the last week. In
> bug 811950 it took -9 lines to introduce the bug and a further -27 lines
> to fix it. Perhaps we have too much scrolling code:
> 
>   * AsyncPanZoomController.cpp
>   * BrowserElementScrolling.js
>   * PanZoomController.java
>   * nsFrameLoader.cpp
> 
> There is a fair amount of duplication here. Ideally we should have a
> single async implementation with a good set of tests. Removing the
> duplication is going to require some collaboration.
> 
> Are there other scrolling implementations that I've missed?
> 
> Who is responsible for each of these implementations?
> 
> What will it take to get to a single implementation?
> 
> Anthony
> 
> 
> 
> 
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Use of "instanceof SomeDOMInterface" in chrome and extensions

2012-12-30 Thread Andreas Gal

"In terms of implementation complexity on our end, it's trivial as long as we 
don't follow the WebIDL spec and make things like HTMLAnchorElement actual 
Function instances."

You said you want to make it a function. I am just trying to say that's OK, as 
long as it says "object" for typeof.

Andreas

On Dec 31, 2012, at 8:44 AM, Boris Zbarsky  wrote:

> On 12/30/12 10:34 PM, Andreas Gal wrote:
>> In this sea of terrible choices, how about making HTMLAnchorElement an 
>> actual function, but having it return "object" for typeof?
> 
> Apart from being an ES violation (which we're in the business of anyway, at 
> the moment), what does that actually buy us?
> 
> Right now, in my tree, HTMLAnchorElement is an object with a [[Call]], so 
> typeof returns "function" and it has a JSClass that lets us override the 
> behavior of instanceof.
> 
> But if it were an actual function then we couldn't change how instanceof 
> behaves (it would just look up the proto chain of the LHS for the value of 
> HTMLAnchorElement.prototype), right?  What do we get from making its typeof 
> return "object"?
> 
> Feel like I'm missing something,
> Boris
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Use of "instanceof SomeDOMInterface" in chrome and extensions

2012-12-30 Thread Andreas Gal

On Dec 31, 2012, at 8:08 AM, Boris Zbarsky  wrote:

> On 12/30/12 2:16 PM, Robert O'Callahan wrote:
>> How bad would it be to make " instanceof > WebIDL interface>" special-cased to re-map the RHS to the appropriate
>> WebIDL interface object for ?
> 
> In terms of implementation complexity on our end, it's trivial as long as we 
> don't follow the WebIDL spec and make things like HTMLAnchorElement actual 
> Function instances.
> 
> In terms of specs and getting other UAs to go along... I don't know.

In this sea of terrible choices, how about making HTMLAnchorElement an actual 
function, but having it return "object" for typeof? That should be reasonably 
Web-compatible.

Andreas

> 
> -Boris
> 
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Use of "instanceof SomeDOMInterface" in chrome and extensions

2012-12-30 Thread Andreas Gal

I think it would be extremely surprising to chrome JS authors if instanceof 
worked differently in content and chrome, resulting in very hard-to-diagnose 
bugs.

Andreas

On Dec 31, 2012, at 12:16 AM, "Robert O'Callahan"  wrote:

> On Fri, Dec 28, 2012 at 8:20 PM, Boris Zbarsky  wrote:
> 
>> Well, it really has to change as exposed to web content (or we have to
>> convince every single other browser to change behavior and get the
>> ECMAScript spec changed and so forth).
>> 
> 
> How bad would it be to make " instanceof  WebIDL interface>" special-cased to re-map the RHS to the appropriate
> WebIDL interface object for ?
> 
> Rob
> -- 
> Jesus called them together and said, “You know that the rulers of the
> Gentiles lord it over them, and their high officials exercise authority
> over them. Not so with you. Instead, whoever wants to become great among
> you must be your servant, and whoever wants to be first must be your
> slave — just
> as the Son of Man did not come to be served, but to serve, and to give his
> life as a ransom for many.” [Matthew 20:25-28]
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Gecko 17 as the base for B2G v1

2012-08-01 Thread Andreas Gal

We have test coverage using emulators, and actual hardware (panda boards) is 
being set up. Reporting of the results and integration is still very lacking, 
and until those pieces fall into place (e.g. try integration), the developer 
experience is going to suck a lot if we enforce the rule below (backout). We 
might want to try a "coordinate" approach instead of automatic backouts: if 
some change breaks B2G, we try to negotiate with the patch author whether we 
back it out and land it later, or back it out, help fix the B2G regression, and 
then re-land, etc. If you break B2G, the feedback is not immediate, and there 
is almost no fair and reasonable way for general m-c committers to predict and 
test for B2G regressions (the A-team is working really hard on fixing this).

Andreas

On Aug 1, 2012, at 9:30 PM, Boris Zbarsky wrote:

> On 8/1/12 5:47 PM, Alex Keybl wrote:
>> any desktop/mobile change that negatively impacts B2G builds in a 
>> significant way will be backed out (and vice versa).
> 
> Do we have any sort of B2G test coverage?  Ideally on try?
> 
> -Boris
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: How to get Docshell for plugin?

2012-07-23 Thread Andreas Gal

If you disable NPAPI for B2G with a fatal configure warning, I think I could 
live with that. Alternatively, we could disallow instantiating plugins in a 
(packaged) app context.

Plugins delenda est.

Andreas

On Jul 23, 2012, at 9:36 PM, Jason Duell wrote:

> Do we have any way within our NPAPI functions to determine what docshell the 
> plugin is displaying content in?  I don't see an obvious one.
> 
> For both per-tab private browsing (bug 722850)  and B2G application "cookie 
> jars" (bug 756648) we need to implement separate simultaneous cookie 
> databases, i.e. have separate cookie namespaces.  For most requests we can 
> get the AppId and/or private browsing mode from the necko channel's callbacks 
> (which now implement nsILoadContext on both parent/child in e10s).   But the 
> cookie API allows callers to skip passing in a channel to 
> get/setCookieString, and that's what NPAPI does
> 
> http://mxr.mozilla.org/mozilla-central/source/dom/plugins/base/nsNPAPIPlugin.cpp#2736
> 
> So I'm wondering how I can get the info some other way.
> 
> thanks,
> 
> Jason
> 
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform