Re: Changes to tab min-width

2017-10-06 Thread Gabor Krizsanits
On Fri, Oct 6, 2017 at 3:52 PM, Nicolas B. Pierron <
nicolas.b.pier...@mozilla.com> wrote:

>
> I will add that 91% of the session on release have 12 or fewer tabs, and
> thus would not be concerned at all by these changes.  So among the 9%
> remaining, 33% of them are using 20 tabs or fewer, and 67% are using more
> than 20 tabs.
>
>
It's worth mentioning that these numbers tend to look very different if
we focus on our heavy users (defined by number of page loads and various
other factors). If I recall correctly, Benjamin ran an experiment a while
back, and the outcome was that for this particular group, 12 is closer
to the average. And the experience of this group is probably very
important.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to ship: Throttling timeouts in background using execution budget.

2017-10-05 Thread Gabor Krizsanits
Thanks a lot for working on this, it sounds awesome! Can I ask if there are
any related telemetry probes that will be used later on to evaluate the
results and maybe tune these constants? Do you plan to expose any of these
parameters to web extensions?

Gabor

On Thu, Oct 5, 2017 at 5:27 PM, Andreas Farre  wrote:

> Hi all!
>
> After Bug 1377766 lands, we will increase the amount by which timeouts
> executing in background tabs are throttled, based on an execution
> budget. This budget is continuously regenerating, and is decreased
> when timeouts execute. If the budget becomes negative, timeouts will
> not be allowed to run until the budget is positive again. This
> punishes pages that behave poorly while not being in the foreground.
>
> This feature has been developed behind
> "dom.timeout.enable_budget_timer_throttling". Other relevant prefs
> are:
>
> * dom.timeout.background_budget_regeneration_rate
>   The rate of budget regeneration; the time in milliseconds that it
> takes to regenerate 1 millisecond
>
> * dom.timeout.background_throttling_max_budget
>   The maximum budget. Budget is clamped to this.
>
> * dom.timeout.budget_throttling_max_delay
>   Effectively the minimum budget. The maximum delay of a throttled timeout
>
> * dom.timeout.foreground_budget_regeneration_rate
>   The same as the background variant, but for foreground tabs. Only
> applicable for testing.
>
> * dom.timeout.foreground_throttling_max_budget
>   The same as the background variant, but for foreground tabs. Only
> applicable for testing.
>
> * dom.timeout.throttling_delay
>   The amount of time we require to pass after a page has completely
> loaded until we start throttling.
>
> The default values of these prefs are:
>
> dom.timeout.background_budget_regeneration_rate: 100
>
> dom.timeout.background_throttling_max_budget: 50
>
> dom.timeout.budget_throttling_max_delay: 15000
>
> dom.timeout.foreground_budget_regeneration_rate: 1
>
> dom.timeout.foreground_throttling_max_budget: -1
>
> dom.timeout.throttling_delay: 30000
>
> This is read as: the budget regenerates at 10 ms per second and will
> never grow beyond 50 ms. If the execution budget is negative, a timeout
> will not run until the budget becomes positive again, which happens at
> that rate, but it will also not be delayed by more than 15 seconds.
> Throttling for foreground tabs is effectively turned off. Throttling
> doesn't commence until 30 seconds after page load.
>
> Google has a similar feature[1] for timer throttling.
>
> Turning on this feature is tracked in Bug 1377766 [2]
>
> It is inherently difficult to test this feature without false
> negatives due to the timing dependency of the feature. Bug 1378402
> tracks adding tests for testing throttling, but they suffer from being
> a bit too intermittent still. At least for debug builds.
>
> My hope by getting this in early in the release cycle is to be able to
> actively evaluate the feature, so I hope that you take the time to
> report those bugs! :)
>
> My hope is that this will land on Monday October 9th if there are no
> objections.
>
> Cheers, Andreas
>
> [1] https://developers.google.com/web/updates/2017/03/
> background_tabs#budget-based_background_timer_throttling
> [2] https://bugzilla.mozilla.org/show_bug.cgi?id=1377766
> [3] https://bugzilla.mozilla.org/show_bug.cgi?id=1378402
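The budget mechanism described above can be sketched as a small model of the pref semantics. This is illustrative Python only; the class and method names are hypothetical, and the real implementation is C++ in Gecko's timeout handling:

```python
class ThrottlingBudget:
    """Sketch of budget-based timeout throttling (illustrative only).

    regeneration_rate_ms: wall-clock milliseconds needed to regenerate
    1 ms of budget (background default: 100, i.e. 10 ms per second).
    max_budget_ms: the budget is clamped to this (background default: 50).
    """

    def __init__(self, regeneration_rate_ms=100, max_budget_ms=50.0):
        self.regeneration_rate_ms = regeneration_rate_ms
        self.max_budget_ms = max_budget_ms
        self.budget_ms = max_budget_ms

    def regenerate(self, elapsed_ms):
        # The budget regenerates continuously, clamped to the maximum.
        self.budget_ms = min(
            self.max_budget_ms,
            self.budget_ms + elapsed_ms / self.regeneration_rate_ms,
        )

    def run_timeout(self, execution_cost_ms):
        # A timeout may only run while the budget is non-negative;
        # running it decreases the budget by its execution time.
        if self.budget_ms < 0:
            return False
        self.budget_ms -= execution_cost_ms
        return True


budget = ThrottlingBudget()
assert budget.run_timeout(60)      # allowed: budget was 50, now -10
assert not budget.run_timeout(1)   # blocked: budget is negative
budget.regenerate(1000)            # 1 s elapses -> regains 10 ms
assert budget.run_timeout(1)       # budget back to 0, allowed again
```

With the background defaults this reproduces the "pages that behave poorly get punished" effect: an expensive timeout drives the budget negative, and further timeouts are held until regeneration catches up (bounded by the max-delay pref, which this sketch omits).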


Re: Creating a content process during shutdown...

2017-09-21 Thread Gabor Krizsanits
I guess the question is how you define "after we've entered shutdown".

For the preallocated process manager, both "profile-change-teardown" and
"xpcom-shutdown" will prevent any further process spawning. For the
preloaded browser in tabbrowser.xml, instead of creating a process for
the hidden about:blank page we usually select an existing one; however,
if there is none around (the browser has only some chrome pages open
when shutdown is initiated), it can create a process, and I'm not sure
anything would prevent that process creation, now that I think about it.

I think we should set a flag when one of these shutdown-related
notifications fires and, based on that, prevent any new content process
creation somewhere in ContentParent::LaunchSubprocess, since I don't see
any legitimate use case for it and I'm pretty sure it usually ends up in
crashes or hangs.
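The proposed guard can be sketched as follows, assuming a hypothetical observer/launcher pair (illustrative Python only; the real check would live in C++ around ContentParent::LaunchSubprocess):

```python
# Sketch: observe shutdown notifications, set a flag, and refuse to
# launch new content processes afterwards. Class and topic handling
# are illustrative; only the topic strings come from the thread above.

SHUTDOWN_TOPICS = {"profile-change-teardown", "xpcom-shutdown"}

class ProcessLauncher:
    def __init__(self):
        self.shutting_down = False

    def observe(self, topic):
        # Called for each observer-service notification we registered for.
        if topic in SHUTDOWN_TOPICS:
            self.shutting_down = True

    def launch_subprocess(self):
        # Refuse to spawn a content process once shutdown has begun.
        if self.shutting_down:
            raise RuntimeError("refusing to launch process during shutdown")
        return "launched"

launcher = ProcessLauncher()
assert launcher.launch_subprocess() == "launched"
launcher.observe("profile-change-teardown")
try:
    launcher.launch_subprocess()
    raise AssertionError("should have refused")
except RuntimeError:
    pass
```

The point of centralizing the check at the launch site, rather than in each caller, is that it also covers paths like the preloaded-browser case that might otherwise slip through.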

Gabor

On Wed, Sep 20, 2017 at 8:38 PM, Milan Sreckovic 
wrote:

> I've spoken to some of you about this, but at this point need a larger
> audience to make sure we're covering all the bases.
>
> Do we have code in Firefox that would cause us to create a new content
> process, after we've entered shutdown?
>
> I understand the possibility of user action that would create a new
> content process followed quickly by one that would cause shutdown, and the
> timing making it so that the new content process initialization can then
> happen out of order, but I'm more asking about the explicit scenario.
>
> Thanks!
>
> --
> - Milan (mi...@mozilla.com)
>


Re: Kris Maglione and Andrew McCreight are now XPConnect Peers

2017-09-07 Thread Gabor Krizsanits
Totally deserved. Congrats to both of you!

Gabor

On Thu, Sep 7, 2017 at 1:47 AM, Bobby Holley  wrote:

> XPConnect is not exactly a fun module. When I took the reins from mrbkap
> 4.5 years ago, Peter wiped a tear from his eye and said "Congratulations,
> Blake."
>
> Still, it's at the heart of Gecko, and needs ongoing attention from deep
> hackers. With the old hands busy with other things recently, Kris and
> Andrew have stepped up and done a lot of the heavy lifting, especially
> around Quantum Flow.
>
> Andrew became a peer in May, and Kris became a peer a few minutes ago.
> Please be sure to congratulate me and offer your condolences to Kris and
> Andrew. ;-)
>
> bholley


Re: disabled non-e10s tests on trunk

2017-08-09 Thread Gabor Krizsanits
On Wed, Aug 9, 2017 at 3:36 AM, Boris Zbarsky  wrote:

>
> Hmm.  Do we load about:blank from the url bar in a content process?
>
>
Yes.

I agree; I find it annoying too that we have to rely on
MOZ_DEBUG_CHILD_PROCESS or MOZ_DEBUG_CHILD_PAUSE, and that I have to be
clever all the time about how to hit the right process at the right time
with the debugger. I never switched back to non-e10s, though, since I
don't trust that everything will work the same, and I don't think that
should be the solution. Switching back to a single content process for
debugging should come with fewer side effects, though... Also, this is
not just an e10s/e10s-multi related issue; we're adding all kinds of
processes (extension/gpu/plugin/etc.).

I hadn't filed a bug about this, but I've been trying to find a decent
solution, and it seems it's not trivial in any debugger (MSVC, gdb,
lldb). Or maybe I was just hoping to find something better than what
seems to be achievable. Anyway, let's start with the bug: Bug 1388693.

Gabor


Re: Extensions and Gecko specific APIs

2017-07-25 Thread Gabor Krizsanits
In my mind at least, the concept is to share the API across all browsers
where we can, but WebExtensions should not be limited to APIs that are
accepted and implemented by all browser vendors. Google extensions have
some Google-app-specific APIs that we might never implement because of
technical limitations, and we certainly plan to add APIs that Google
will never implement, for policy reasons for example.

I hope one day we can even have decent specs around some of the common
APIs (https://browserext.github.io/), and for new APIs I guess we should
try to work with other vendors as much as possible, IF that makes sense.
But if it's about an API that is needed to port a popular
Firefox-specific extension, and it's totally out of policy for Google
extensions anyway, then we should "only" care about security and the
extra cost it might cause for US to keep supporting it. And since we're
trying to move away from manual review
(http://www.agmweb.ca/2017-07-11-manual-review/), we should probably be
quite conservative about what we accept.

Gabor

On Tue, Jul 25, 2017 at 11:14 AM, smaug  wrote:

> Hi all,
>
>
> recently in couple of bugs there has been requests to add Gecko specific
> APIs for extensions.
> It isn't clear to me why, and even less clear to me is what the plan is
> there.
> I thought WebExtensions should work in several browsers, but the more we
> add Gecko specific APIs, the less likely
> that will be.
>
> Could someone familiar WebExtensions clarify this a bit? Do we have some
> policy here. Should we be as strict as with web APIs, or allow some
> exceptions or do whatever people want/need?
>
>
>
>
> -Olli


Re: More Rust code

2017-07-11 Thread Gabor Krizsanits
On Mon, Jul 10, 2017 at 7:51 PM, Kris Maglione 
wrote:

> Combined with the fact that I would have needed to find and dig through
> various scattered mailing list posts and wiki pages, and then pester a
> bunch of people by email or IRC just to get started, I've always given up
> the idea pretty quickly.
>
>
This is by far the biggest obstacle for me. I guess the right approach
here is to take the time to walk someone through from the very beginning
and document the areas where it was not trivial to make progress
(talking about Gecko development in Rust, not Rust in general). And then
iterate on that a few times. The lack of experience in the area and the
sub-optimal tooling will result in a huge overhead in developer time for
anyone new to this area, especially for remote contributors. Trying to
minimize that overhead is important. Convincing managers to encourage
developers to pay that overhead is also a requirement (tight deadlines
will not help there).

I have been flooded with work for a while, and it has been difficult to
find time to improve my Rust skills in general. Encouraging people to
pick up Rust-related goals and make it a priority to learn more Rust
would also be important.

Gabor


Re: e10s-multi update and tests

2017-04-13 Thread Gabor Krizsanits
Hi,

On Thu, Apr 13, 2017 at 6:09 AM, Fischer Liu  wrote:

> We are using dom.ipc.processCount to limit the number of processes.
> After updating dom.ipc.processCount, do we still need to restart Firefox?
>
>
It depends. It will prevent any new processes from being launched beyond
the new limit right away, but it will not shut down any existing
processes if the current number is above the limit. For that you must
either close tabs or restart the browser.
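That behavior can be modeled roughly like this (illustrative Python; the class and method names are hypothetical, not Gecko APIs):

```python
# Model of the dom.ipc.processCount behaviour described above:
# lowering the limit blocks *new* process launches over the limit,
# but does not shut down processes that already exist.

class ProcessPool:
    def __init__(self, limit):
        self.limit = limit
        self.processes = []

    def set_limit(self, new_limit):
        # Takes effect immediately, but only for future launches.
        self.limit = new_limit

    def maybe_launch(self):
        if len(self.processes) >= self.limit:
            return None  # at the limit: reuse an existing process instead
        proc = object()
        self.processes.append(proc)
        return proc

pool = ProcessPool(limit=4)
for _ in range(4):
    assert pool.maybe_launch() is not None
pool.set_limit(2)
assert len(pool.processes) == 4      # existing processes survive
assert pool.maybe_launch() is None   # but no new ones are launched
```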


> As far as I know, in PreallocatedProcessManagerImpl::AllocateNow [2] and
> PreallocatedProcessManagerImpl::RereadPrefs [1], the code reads the pref
> to check the max count before creating or closing a process.
> [1]
> https://dxr.mozilla.org/mozilla-central/rev/f40e24f40b4c4556944c762d4764ea
> ce261297f5/dom/ipc/PreallocatedProcessManager.cpp#143
>  [2]
> https://dxr.mozilla.org/mozilla-central/rev/f40e24f40b4c4556944c762d4764ea
> ce261297f5/dom/ipc/PreallocatedProcessManager.cpp#200
>
>
The preallocated process manager is not yet enabled by default on any
channel; I'm still struggling with some failing tests and other issues
with it. Hopefully that'll be fixed soon (Bug 1341008); I wish I had
more time for it... Once it's turned on, it will detect the pref change
and shut down the preallocated process if necessary, just as you
described.

Gabor


Re: e10s-multi on Aurora

2017-04-12 Thread Gabor Krizsanits
We are going to run a beta experiment when 54 hits the beta channel. If
everything meets the release criteria right away we might ship with 54,
but that is more of a 'bonus' scenario. The official plan is, as Chris
said, to release it with 55.

On Wed, Apr 12, 2017 at 8:45 AM, Chris Peterson 
wrote:

> On 2017-04-11 10:31 PM, Salvador de la Puente wrote:
>
>> How does this relate to Project Dawn and the end of the Aurora channel?
>> Will e10s-multi be enabled when shifting from Nightly to Beta?
>>
>
> There is no connection between Project Dawn and enabling multiple e10s
> content processes in the Aurora channel. e10s-multi is expected to ride the
> trains to release with Firefox 55.
>
>
> On Wed, Apr 5, 2017 at 1:34 AM, Blake Kaplan  wrote:
>>
>> Hey all,
>>>
>>> We recently enabled 4 content processes by default on Aurora. We're
>>> still tracking several bugs that we are planning to fix in the near
>>> future as well as getting more memory measurements in Telemetry as we
>>> look towards a staged rollout in Beta and beyond.
>>>
>>> We were able to turn on in Aurora thanks to a bunch of work from
>>> bkelly, baku, Gabor, and a bunch of other folks.
>>>
>>> Here's looking forward to riding more trains!
>>> --
>>> Blake


Re: e10s-multi update and tests

2017-03-23 Thread Gabor Krizsanits
On Thu, Mar 23, 2017 at 8:56 AM, tapper  wrote:


> Hi, is a11y working yet?
> Last time I tried it out, it kept crashing.
>

Hi, do you have a bug filed for that? As far as I can tell, a11y only
blocks e10s on XP [1] on Nightly, so 4 content processes should be
enabled by default for a11y users on all other platforms. If it keeps
crashing, that's a pretty big deal, but so far we have not received any
bug reports about it. So I would like to think it's working.

[1]:
http://searchfox.org/mozilla-central/source/toolkit/xre/nsAppRunner.cpp#4950

Gabor


Re: e10s-multi update and tests

2017-03-23 Thread Gabor Krizsanits
On Thu, Mar 23, 2017 at 2:36 AM, Nicholas Nethercote  wrote:

> On Thu, Mar 23, 2017 at 12:12 PM, Andrew McCreight wrote:
>
> >
> > Though maybe you are asking which processes count against the limit of 4.
> >
>
> Yes, that's what I am asking.
>
> Nick
>
>
That is a good question, thanks for bringing it up. The GMP and other
non-content process types were already addressed in previous messages,
so I won't cover them here. The limit of 4 content processes applies to
the "web" content process type.
We have different content process pools with different limits on them.
Currently we have these types: [1]

Web, File, Extension, Large allocation.

It's possible to set a limit for each type via
dom.ipc.processCount.[type]. (Note: dom.ipc.processCount sets the limit
for the default "web" type; we kept it for backward compatibility, but
if you set dom.ipc.processCount.web, dom.ipc.processCount will be
ignored.)

[1]: http://searchfox.org/mozilla-central/source/dom/ipc/ContentParent.h#40
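The fallback rule in that note can be sketched like this (illustrative Python; the helper name and the dict-based pref store are hypothetical, only the pref names come from the message above):

```python
# Sketch: each content process type has its own limit pref,
# dom.ipc.processCount.<type>; for the default "web" type the per-type
# pref, when present, overrides the legacy dom.ipc.processCount.

def max_processes(prefs, process_type):
    per_type = prefs.get("dom.ipc.processCount." + process_type)
    if per_type is not None:
        return per_type  # per-type pref always wins when set
    if process_type == "web":
        # Backward compatibility: the old pref only governs "web".
        return prefs.get("dom.ipc.processCount", 1)
    return 1  # illustrative fallback for unconfigured types

prefs = {"dom.ipc.processCount": 4, "dom.ipc.processCount.extension": 1}
assert max_processes(prefs, "web") == 4        # legacy pref applies
assert max_processes(prefs, "extension") == 1  # per-type pref
prefs["dom.ipc.processCount.web"] = 8
assert max_processes(prefs, "web") == 8        # per-type pref wins
```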

Gabor


Re: The future of commit access policy for core Firefox

2017-03-11 Thread Gabor Krizsanits
On Sat, Mar 11, 2017 at 7:23 AM, Nicholas Nethercote  wrote:

>
> Depending on the relative timezones of the reviewer and reviewee, that
> could delay landing by 24 hours or even a whole weekend.
>
>
As someone working from Europe, and 90% of the time with people on the
West Coast, thank you very much for bringing up the timezone argument.
r+-with-minor-fixes is an absolute must for some of us.

Gabor


Re: Content process launch time distribution

2017-02-08 Thread Gabor Krizsanits
On Wed, Feb 8, 2017 at 11:18 AM, Gabriele Svelto 
wrote:

>
> I was also thinking about it. I'm not sure what shape pre-allocated
> process support is in, since it hasn't been tested for a year (only
> Firefox OS used it AFAIK), but it's a good way to take process startup
> out of the critical path at the cost of some extra memory consumption.
>
>  Gabriele
>
>
I've added back a simplified version as a start in bug 1324428; feel
free to experiment with the flag :) But yeah, it's really under-tested,
and before this patch I don't think it ever worked on desktop.

Gabor


Re: Content process launch time distribution

2017-02-07 Thread Gabor Krizsanits
Thanks a lot for these numbers! Also, is there a way to get numbers for
the time from process launch to the point when we start loading the
first URL in the new content process? We run a lot of frame and process
scripts before we get there, and I'm afraid we would get even worse
numbers that way, which will affect user experience. (I'm not sure how
to filter the related telemetry based on whether the new tab is starting
up a new content process or not.)

As a temporary workaround until we can speed up content process startup
and initialization, we might want to use the pre-allocated process
manager for the e10s-multi case. Maybe even force picking one of the
existing processes unless there is a preallocated one already available.

Gabor

On Tue, Feb 7, 2017 at 6:36 PM, Benjamin Smedberg 
wrote:

> https://telemetry.mozilla.org/new-pipeline/dist.html#!
> cumulative=0&end_date=2017-02-06&keys=__none__!__none__!__
> none__&max_channel_version=aurora%252F53&measure=CONTENT_
> PROCESS_LAUNCH_TIME_MS&min_channel_version=null&product=
> Firefox&sanitize=1&sort_keys=submissions&start_date=2017-
> 01-26&table=0&trim=1&use_submission_date=0
>
> This shows the distribution of times to launch a content process from the
> time we initially ask for it to the time we get back the first
> initialization message through IPDL. So this covers actual process launch
> in the OS, XPCOM startup, and other bootstrap.
>
> At first glance, this appears worrying to me: almost 25% of content process
> startups take more than 1 second, and the median is >700ms. And this is on
> nightly/aurora, which users typically have faster computers.
>
> There's a lot of potential noise here: we don't know what else is going on
> on the computer (maybe it's near boot and there's still a lot of system
> churn). But this time definitely can have an impact on how quickly Firefox
> is ready to load pages and therefore the impression that users have of its
> total speed.
>
> Soliciting everyone's opinion, but Harald's in particular: is it important
> to dive into this in more detail soon (before Firefox 57)?
>
> This metric is currently exploratory, and so I need guidance about whether
> it's important to keep this metric around for e.g. a release-health
> dashboard or to prevent regressions.
>
> --BDS


Re: Multiple content processes in Nightly

2017-01-24 Thread Gabor Krizsanits
Hello everyone,

After quite a few attempts and some more bug fixing, the patch finally
seems to have stuck. Thanks for all the help! \o/

Gabor

On Thu, Nov 10, 2016 at 11:46 AM, Gabor Krizsanits 
wrote:

> The patch has been backed out because of the merge. To be continued on
> next Monday.
> More info about the back-out: https://bugzilla.mozilla.org/
> show_bug.cgi?id=1303113#c9
>
> -Gabor
>
> On Thu, Nov 10, 2016 at 1:29 AM, Blake Kaplan  wrote:
>
>> Hello everyone,
>>
>> We've been working on the e10s-multi project for a while now and are
>> looking at turning on multiple content processes in Nightly. Our plan
>> is to start with two content processes (compared to the single content
>> process we currently use) and if that goes well, we'll start ramping
>> up the number of content processes we use as well as playing with more
>> exciting process allocation strategies. We are not yet planning to
>> ride the trains.
>>
>> In order to turn on in Nightly, Gabor Krizsanits has been doing a ton
>> of work to make sure our tests are green. We've had to disable a
>> couple of them in e10s mode and for others we've been forcing them to
>> use a single content process (to be clear, the tests themselves are
>> broken with multiple content processes and the underlying code is
>> not). We will be working on fixing the tests as we can as well as
>> turning the disabled tests back on [1].
>>
>> We have two known bugs that we're enabling with:
>>   * Service workers for the same origin can run simultaneously in
>> multiple processes [2]. We expect the user-visible aspects of this bug
>> to be limited to desktop notifications being duplicated (for each
>> content process that has a bogus service worker running in it). bkelly
>> is leading a team to fix this.
>>
>>   * DOM storage doesn't properly propagate changes to other processes
>> [3]. This could cause web sites to misbehave. janv is working on
>> fixing this.
>>
>> Let Gabor or me know if you have any concerns or comments and needinfo
>> us on bugs that you run into.
>>
>> [1] https://bugzilla.mozilla.org/show_bug.cgi?id=1315042
>> [2] https://bugzilla.mozilla.org/show_bug.cgi?id=1231208
>> [3] https://bugzilla.mozilla.org/show_bug.cgi?id=666724
>> --
>> Blake


Re: Multiple content processes in Nightly

2016-11-10 Thread Gabor Krizsanits
The patch has been backed out because of the merge. To be continued on next
Monday.
More info about the back-out:
https://bugzilla.mozilla.org/show_bug.cgi?id=1303113#c9

-Gabor

On Thu, Nov 10, 2016 at 1:29 AM, Blake Kaplan  wrote:

> Hello everyone,
>
> We've been working on the e10s-multi project for a while now and are
> looking at turning on multiple content processes in Nightly. Our plan
> is to start with two content processes (compared to the single content
> process we currently use) and if that goes well, we'll start ramping
> up the number of content processes we use as well as playing with more
> exciting process allocation strategies. We are not yet planning to
> ride the trains.
>
> In order to turn on in Nightly, Gabor Krizsanits has been doing a ton
> of work to make sure our tests are green. We've had to disable a
> couple of them in e10s mode and for others we've been forcing them to
> use a single content process (to be clear, the tests themselves are
> broken with multiple content processes and the underlying code is
> not). We will be working on fixing the tests as we can as well as
> turning the disabled tests back on [1].
>
> We have two known bugs that we're enabling with:
>   * Service workers for the same origin can run simultaneously in
> multiple processes [2]. We expect the user-visible aspects of this bug
> to be limited to desktop notifications being duplicated (for each
> content process that has a bogus service worker running in it). bkelly
> is leading a team to fix this.
>
>   * DOM storage doesn't properly propagate changes to other processes
> [3]. This could cause web sites to misbehave. janv is working on
> fixing this.
>
> Let Gabor or me know if you have any concerns or comments and needinfo
> us on bugs that you run into.
>
> [1] https://bugzilla.mozilla.org/show_bug.cgi?id=1315042
> [2] https://bugzilla.mozilla.org/show_bug.cgi?id=1231208
> [3] https://bugzilla.mozilla.org/show_bug.cgi?id=666724
> --
> Blake


Re: Guidance wanted: checking whether a channel (?) comes from a particular domain

2016-07-06 Thread Gabor Krizsanits
I don't think we have exactly what you need, but here is what we have,
which could be extended to match your use case.

1. What we have is ExpandedPrincipal, which is essentially an array of
principals (it subsumes all those principals). What we don't have is a
wildcard principal, which you could use for the subdomain part (details,
if anyone cares to implement it:
https://bugzilla.mozilla.org/show_bug.cgi?id=723627#c42).

2. So what you could use instead is what we have for web extensions:
https://bugzilla.mozilla.org/show_bug.cgi?id=1180921

It probably needs some hacking, since it is designed for add-ons, and in
your case you want to associate the principal with a plugin... Roughly,
it works like this: for add-on scopes we add some extra data to their
principals, and based on that, during the load access check, we do a JS
callback and some chrome JS code does the validation. So you can write
the logic in JS.

That's all we have I'm afraid.
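Setting principals aside, the plain suffix match the original question asks for can be sketched like this (illustrative only; a real implementation should compare registrable domains / eTLD+1 rather than raw string suffixes):

```python
# Sketch: a host matches if it equals a listed domain or is a
# subdomain of one. The domain list mirrors the example in the quoted
# message below; the function name is hypothetical.

ALLOWED = ["foo.com", "baz.com"]

def host_matches(host, allowed=ALLOWED):
    # Requiring the "." before the domain rejects hosts like
    # "evilfoo.com" that merely end with the same characters.
    return any(host == d or host.endswith("." + d) for d in allowed)

assert host_matches("foo.com")
assert host_matches("subd.foo.com")
assert host_matches("subd.baz.com")
assert not host_matches("www.bar.com")
assert not host_matches("evilfoo.com")
```

Doing this string check per document load over 15-20 domains is cheap; the principal-based route above is what buys you correct handling of schemes and origin semantics.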

- Gabor

On Tue, Jul 5, 2016 at 4:50 PM, Benjamin Smedberg 
wrote:

> As part of plugin work, I'm implementing code in
> nsDocument::StartDocumentLoad which is supposed to check whether this
> document is being loaded from a list of domains or any subdomains. So e.g.
> my list is:
>
> ["foo.com", "baz.com"] // expect 15-20 domains in this list, maybe more
> later
>
> And I want the following documents to match:
>
> http://foo.com/...
> https://foo.com/...
> https://subd.foo.com
> http://subd.baz.com
>
> But http://www.bar.com would not match.
>
> The existing domain and security checks in nsDocument::StartDocumentLoad
> all operate on the nsIChannel, so I suppose that's the right starting
> point.
>
> I couldn't find an existing API on nsContentUtils to do the check that I
> care about. I'm sure that there is a way to do what I want using
> nsIScriptSecurityManager, but I'm not sure whether that's the "right" thing
> to do or whether this code already exists somewhere.
>
> Reading the APIs, I imagine that I want to do something like this:
>
> contentPrincipal = ssm.getChannelResultPrincipal(channel);
> testPrincipal = ssm.createCodebasePrincipalFromOrigin(origin); // Is it ok
> that this is scheme-less?
> if (testPrincipal.subsumes(contentPrincipal)) -> FOUND A MATCH
>
> Is this the right logic, or is there a simpler way to do this that doesn't
> involve creating a bunch of principal objects on every document load? Is
> running this logic on every document load a potential perf problem?
>
> --BDS


Re: Searchfox (new code search tool)

2016-06-07 Thread Gabor Krizsanits
Wow, this is amazing. A million thanks for this, especially for the
speed, the easy blame walking, and the highlighting. I always wanted a
tool for traversing history like that. One thing I miss is seeing the
revision number / contributor name for each line in the blame-view
sidebar. One question: is the source code public?

By the way, I don't believe we all want the same thing from a code
search tool, so I see nothing wrong in someone implementing a
personalized version to his own taste, and some of us will benefit from
it greatly.

- Gabor

On Tue, Jun 7, 2016 at 6:35 AM, Bill McCloskey 
wrote:

> Hi everyone,
>
> I would like to announce a new tool I've been working on for source
> code searching called Searchfox (http://searchfox.org). If you use MXR
> or DXR, I recommend you try Searchfox. Here are some of the benefits:
>
> - Besides C++ code, Searchfox indexes JavaScript, XBL, and IDL. You
>   can search by property name and, in some cases, qualified property
>   name (e.g., SessionStore.duplicateTab). IDL files link to both JS
>   and C++ implementations and users.
>
> - Blame in Searchfox is fast and easy. Every file includes blame
>   information in a gray bar on the left side, and walking through the
>   blame history takes only one click per revision. Each file in the
>   blame chain downloads quickly, and blame goes all the way back to
>   1998. Say goodbye to the frustration of reaching "Free the
>   (distributed) Lizard" at hg.mozilla.org and finding that GitHub
>   blame times out!
>
> - Searchfox jumps to the actual definition of methods rather than the
>   header file declaration.
>
> - C++ template handling is a little better, files download a little
>   quicker, and other smaller improvements.
>
> If you would like to try out Searchfox, I recommend that you change
> your keyword searches to point to it. Otherwise it's too easy to
> forget and revert to muscle memory.
>
> Keyword search:
> http://searchfox.org/mozilla-central/search?q=%s
>
> Keyword search to find a particular file:
> http://searchfox.org/mozilla-central/search?q=&path=%s
>
> Some help on using Searchfox can be found at
> http://searchfox.org. Also, you can see some screenshots at my blog:
> https://billmccloskey.wordpress.com/2016/06/07/searchfox/
>
> Also, here are some reasons not to use Searchfox:
>
> - You frequently look at repositories besides
>   mozilla-central. Searchfox only handles m-c.
>
> - You like MXR's ability to sorta index all platforms. Like DXR,
>   Searchfox uses a clang plugin that only analyzes Linux64 debug
>   builds. I'm very eager to fix this problem, but it will take some
>   time. Full-text search finds everything, of course.
>
> -Bill


Re: PSA: Cancel your old Try pushes

2016-04-26 Thread Gabor Krizsanits
As someone who was high on the list of try server usage for two weeks:
my problem was a test I tried to fix for both e10s and non-e10s, and it
timed out _sometimes_ on _some_ platforms, even depending on
debug/release build. It was a whack-a-mole game of fiddling with the
test and a complex patch. I did stop old builds, but I did not run only
the test in question; I ran the rest of them as well, because of the
invasive nature of the patch the whole thing was sitting on. Probably I
could have been smarter, BUT...

What would have helped me a lot in this case, and in most cases when I rely
on the try server, is the ability to push a new changeset on top of my
previous one and tell the server to reuse the previous session instead of
doing a full rebuild (if only the tests changed, that's even better: no
rebuild at all), and then tell the server exactly which tests I want to
re-run with those changes (as the set can be empty, this could also be used
to trigger additional tests for a previous push). This could all be done by
an extension to the try syntax like -continue [hash]. In addition, this
follow-up push would also kill the previous job.
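The follow-up-push idea could look something like this in practice (the
-continue flag is the hypothetical extension being proposed here, not an
existing try feature; -b/-p/-u are the existing try flags):

```
# initial push: full builds and a broad test run
try: -b do -p all -u all

# hypothetical follow-up push on top of the same work:
# reuse the build artifacts of push a1b2c3d4, re-run only one suite,
# and cancel whatever is still pending from that push
try: -continue a1b2c3d4 -u mochitest-e10s-1
```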

Maybe such functionality is already available and I'm just not aware of it
(I would be so happy if that were the case, and would feel bad for the
machine hours I wasted...); if so, please let me know.

- Gabor


On Fri, Apr 15, 2016 at 5:47 PM, Ryan VanderMeulen <
rvandermeu...@mozilla.com> wrote:

> I'm sure most of you have experienced the pain of long backlogs on Try
> (Windows in particular). While we'd all love to have larger pools of test
> machines (and our Ops people are actively working on improving that!), one
> often-overlooked thing people can do to help with the backlog Right Now is
> to cancel pending jobs on pushes they no longer need (i.e. newer push to
> Try, broken patch, already pushed to inbound, etc).
>
> Treeherder makes it easy to do this - just hit the little circle with an X
> icon on the right hand side adjacent to the "XX% - Y in progress" text
> along the top bar of the push. You will be prompted whether you really want
> to cancel all jobs on the push. Just hit OK and you're done.
>
> Killing off unnecessary jobs can have a significant impact on wait times
> and backlog, so your consideration is greatly appreciated!
>
> Thanks,
> Ryan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: MacOS 10.6-10.8 support

2016-03-14 Thread Gabor Krizsanits
On Mon, Mar 14, 2016 at 8:51 PM, Benjamin Smedberg 
wrote:

>
>
> On 3/12/2016 7:19 PM, Gabor Krizsanits wrote:
>
>>
>> Seems like a tough decision for such a short time...  There were some
>> great
>> points on both sides so far, but I'm missing the math. To evaluate the
>> cost/benefit for a decision like this we should be able to estimate how
>> much engineering time does it take for us to gain 1.2% new users and how
>> much does it cost to keep the support. My personal estimation for the
>> first
>> is pretty high :(
>>
>
> The math is pretty striking: the problem is not so much about user
> acquisition but about retention and user engagement. We have no problem
> getting new Firefox users: we still have amazingly high brand recognition
> and get many downloads. In terms of retention, what kind of engineering
> effort do we need to do to keep users one, two, four, eight weeks after
> they've tried Firefox? In terms of ongoing engagement, the Firefox product
> strategy has us measuring and optimizing how many days (per week) Firefox
> users use Firefox.
>
> Our basic product strategy is that by focusing our engineering efforts on
> engagement/retention of new users, we'll end up in a much better spot, both
> in terms of overall product quality and our position in the market, than if
> we focus on keeping small cohorts of existing users. That tradeoff of
> existing users for new-user engagement is driving our strategy with e10s,
> extensions, and other engineering priorities, and is the basis for this
> decision.
>
> --BDS
>

Thanks for the explanation; that makes perfect sense to me. By the way,
since the question is never whether we drop support for a system but when
we do it, would it make sense to track / measure how much work went into
maintaining a particular system in the last x months (or will go into it in
the next months, as in the e10s case)? I guess that data would help to make
this decision easier and with fewer debates (although there are of course
always exceptional cases for fundamental releases like e10s).

- Gabor
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: MacOS 10.6-10.8 support

2016-03-12 Thread Gabor Krizsanits
On Thu, Mar 10, 2016 at 7:03 PM, Benjamin Smedberg 
wrote:

> This will affect approximately 1.2% of our current release population.
> Here are the specific breakdowns by OS version:
>
>
Seems like a tough decision for such a short time... There were some great
points on both sides so far, but I'm missing the math. To evaluate the
cost/benefit of a decision like this, we should be able to estimate how
much engineering time it takes for us to gain 1.2% new users and how much
it costs to keep the support. My personal estimate for the first is pretty
high :(

We might also miss the opportunity to gain new users on these systems, and
we risk bad press as well, but I'm a little less concerned about these. I
just feel like there should be some other way to save engineering time that
costs fewer users, but without the metrics I can only guess.

- Gabor
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Improving Platform quality

2016-03-10 Thread Gabor Krizsanits
While the other thread about fuzzing friendly Gecko is an interesting
option I would like to go back to the original topic, and start another
thread to collect other ideas too, that might help getting better on the
performance front. Here are some of my thoughts after spending some time
with the profiler and Talos tests in the past couple of weeks.

Probably most regressions happen where we don't detect them, because of the
lack of perf test coverage. It should be easy and straightforward to add a
new Talos test (it isn't right now). There is ongoing work on this, I
think, but I don't know where that work is being tracked. We clearly need
more tests. A lot more. Especially if we want to ship features with huge
impact like multi-process Firefox or removing XUL. I don't think we have
all the metrics we need yet to make the best decisions.

We do have some explanations of each Talos test at
https://wiki.mozilla.org/Buildbot/Talos/Tests and I'm thankful for that,
but some of the tests need more explanation, and some of them do not have
any. We could further improve that; it would save a lot of engineering time
(this wiki rocks, by the way).

The great thing about Talos tests is that they can be profiled. What would
be even better is if I could compare two runs at the profiler level as
easily as I can compare the results now on Treeherder Compare. It would be
simpler to assign perf bugs, and also less time-consuming to fix them, if I
knew instantly where a test used to spend its time and where it is spending
it right now. And, most importantly, where I have to tune my module to make
the biggest impact on performance, even if there is no regression. (Backing
out all the features that cause regressions is not always an option, or the
best way to gain back performance.)

I don't think the goal is to optimize Gecko. We need to optimize our end
products. So performance tests that go through the entire browser and
reproduce common user stories are the most important, I think (we need
more). Gecko and Firefox are often so deeply intertwined that a joint
effort between Platform and Firefox team folks has the best chance of
tackling the more complex cases. I don't want to end up with a super fast
engine and a slow browser because we put our focus on the wrong goal.

Add-ons. The last number I heard is that 40% of our users use some add-on.
We have access to these add-ons' code, yet we don't have any performance
tests using them. It should be our responsibility to make sure that if we
regress the experience of users of some of the most popular add-ons, we at
least give the authors a heads-up and help them address the problem. I know
resources are limited, but maybe there is some low-hanging fruit here that
would make a huge impact.

These are my two cents off the top of my head; I hope others have even
better ideas to share.

- Gabor

On Wed, Mar 9, 2016 at 12:09 AM, David Bryant  wrote:

> Platform Peeps,
>
> Improving release quality is one of the three fundamental goals Platform
> Engineering committed to this year. To this end, lmandel built a Bugzilla
> dashboard that allows us to track regressions found in any given release
> cycle. This dashboard is up on monitors in many of the offices and can also
> be found at: http://mozilla.github.io/releasehealth/
>
> While this metric might not be perfect, it does expose the number of
> newly-discovered regressions we would ship in a release. As of Monday*, we
> had *58* new regressions in Firefox 45 Beta -- this is the version that was
> released today. Of these bugs, 43 of them are unassigned**. Both of these
> things are unacceptable, and we will not continue to operate this way.
>
> Starting in release 46, we will *not* ship unless all new regressions are
> triaged and are either fixed or explicitly deferred by release management
> (working with Firefox engineering and Platform Engineering leads). We will
> hold a triage meeting every Monday at 2pm PT in the ReleaseCoordination
> Vidyo room, open to all of engineering, to stay on top of the overall
> regression list, and our first such meeting was yesterday. Bugs will be
> assigned by engineering managers and treated as release blockers.
>
> Engineering managers own the triage of their team's components. Please
> work with them and also let Johnny, Doug, or me know if you need help.
>
> All of us, together, are accountable for adopting this fundamental change
> in how we work. This is one of several changes that we’ll be making, so
> more to come.
>
>
> Thanks,
> David
>
>
> * Yes, I am aware that dougt, dveditz, and jst triaged on Monday, so this
> number may be slightly lower now.  Still it isn’t zarro.
>
> ** Yes, I am aware that some of the teams don’t always assign bugs, but I
> am asking everyone to start doing this. Unowned bugs will signal to us that
> they need help. Basically, we want the assignee field to be someone that is
> directly responsible for moving the bug to the next state. That might 

Re: Talos e10s dashboard

2016-03-02 Thread Gabor Krizsanits
I've just visited guardian.co.uk (Bug 1252822); scrolling seems quite
bad... :(

On Wed, Mar 2, 2016 at 12:29 AM, Chris Peterson 
wrote:

> On 3/1/16 9:57 AM, William Lachance wrote:
>
>> Also, mconley suggested being able to compare the results of individual
>> subtests. You can access this view for any given talos test by hovering
>> over the line in the comparison and selecting "subtests". This sometimes
>> give interesting data, for instance on "tps" some pages are clearly
>> causing more problems than others:
>>
>>
>> https://treeherder.allizom.org/perf.html#/e10s_comparesubtest?baseSignature=fe016968d213834efd424ca88680cfa7490b6c09&e10sSignature=5c199ff7bd97284c5f3820ba908f92275620cd8b
>>
>>
>> (notice how aljaazera.net has a consistent ~450% regression!)
>>
>
> This looks great, Will. Good catch on the aljazeera.net problem. The
> other outliers are mail.ru (at ~410% regression) and guardian.co.uk (at
> ~380% regression). We should probably file bugs for those individual sites.
> :)
>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++

2015-07-08 Thread Gabor Krizsanits
On Wed, Jul 8, 2015 at 1:05 AM, Bobby Holley  wrote:

> On Tue, Jul 7, 2015 at 3:59 PM, Eric Rahm  wrote:
>
> > I'm not a huge fan of the 'aFoo' style, but I am a huge fan of
> > consistency. So if we want to change the style guide we should update our
> > codebase, and I don't think we can reasonably do that automatically
> without
> > introducing shadowing issues.
> >
>
> This a great point, and perhaps the most useful one to be raised in this
> thread.
>
> The priority is to automatically rewrite our source with a unified style.
> foo -> aFoo is reasonably safe, whereas aFoo->foo is not, at least with the
> current tools. So we either need to combine the rewrite tools with static
> analysis, or just go with aFoo.

+1 for consistency. Any volunteers willing to get rid of aFoo EVERYWHERE,
and someone else willing to review that work? If not, then we should do it
the other way around.
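The shadowing issue that makes the aFoo -> foo rewrite unsafe can be seen
in a tiny example. This is an illustrative sketch (the `Widget` class is
made up, not from the tree): renaming foo -> aFoo is mechanical and safe,
while the reverse rewrite can silently turn a member assignment into a
self-assignment.

```cpp
#include <cassert>

struct Widget {
  int foo = 0;  // member

  // Mozilla style: parameter carries the 'a' prefix, so the member
  // and the parameter can never collide.
  void Set(int aFoo) {
    foo = aFoo;  // unambiguous: member on the left, parameter on the right
  }

  // After a mechanical aFoo -> foo rewrite of the same method:
  void SetRewritten(int foo) {
    foo = foo;  // BUG: the parameter shadows the member; this is a
                // self-assignment and the member is never updated
  }
};
```

Calling `Set(5)` updates the member as intended, while `SetRewritten(7)`
leaves it untouched, which is exactly the class of bug a rewrite tool
without static analysis could introduce.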
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Excessive inbound bustage

2015-04-20 Thread Gabor Krizsanits
"then it's fine to land it on m-i without try."

Maybe it's because I usually work on core code, where such confidence is
hard to reach, but I'd like to think that at least a try run checking that
the patch builds on all platforms, plus a full test run on at least one
platform, is not too much of a sacrifice of one's time.

Personally I think that "my time" is cheaper than "everyone's time". Try is
slow. It is annoying. But holding up ALL the other patches/developers is
expensive, and hence a risky option. So I suggest everyone be very
conservative about that feeling of confidence.

I'm all for autolanding, but with all the intermittent failures we have, I
guess it's not easy.

On Mon, Apr 20, 2015 at 11:47 PM, Eric Rescorla  wrote:

> I think perhaps part of the question is what the purpose of m-i versus try
> is.
>
> My general algorithm is that you should get your patch to the point
> where you have tested it locally and have reasonable confidence that there
> are no portability issues and then it's fine to land it on m-i without try.
> And if you think there are likely to be portability issues or you don't
> want to/can't run tests locally you should push to try.
>
> In answer to the question of why I avoid try, the answer is simple: it's
> slow.
>
> With that said, I think the right fix isn't to make try faster (though that
> would
> also be good) but to make autolanding work. That way people could just
> fire and forget without inconveniencing others if their patch failed.
>
> -Ekr
>
>
> On Mon, Apr 20, 2015 at 1:54 PM, Aaron Klotz  wrote:
>
> > Do I have terrible timing when it comes to landing patches, or has
> inbound
> > been closed due to bustage far too often over the past couple of months?
> At
> > first I thought maybe it was the former, but now I'm believing that it is
> > the latter.
> >
> > As of late, when I check to see if inbound is open, I just assume it is
> > going to be closed before I even check its status.
> >
> > I can only infer that patches are not being tested extensively enough on
> > try. To me this is symptomatic of a problem: What is it about try that we
> > are avoiding it, and what can we do to improve the situation?
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Review problems

2015-03-19 Thread Gabor Krizsanits
On Wed, Mar 18, 2015 at 4:47 PM, Gavin Sharp  wrote:

> This is a difficult problem to discuss in the abstract.
>
> It should never be the case that you are "waiting for weeks/months" -
> you should either be getting reviews within a week (at worst), or be
> getting responses saying "can't spend time reviewing this now". Where
> that is not happening, the escalation path should be something like:
>

If it never happened, I would not have written this email. But I agree
that it should not. It's a very nice gesture when someone at least reacts
to the request and tells me an ETA, or gives me a heads-up about the delay,
and I can only encourage people to do so, but that does not fix the issue.
And escalation can sometimes cause more trouble than it helps...

Let's say I'm on the other side, and all of a sudden I have to review
dozens of huge patches I did not see coming, while I'm working on P1
security-critical blockers, or I have 100 other patches to review; what can
I do? But let's forget about extreme cases and long delays for a second...

I've seen core developers on Bugzilla closing their review queues...
Examples would be smaug or bz, who probably have more reviews to do than I
could count. I seriously have no idea how they can do any other work. All
my gratitude for their amazing job with reviews, and for juggling all the
things they do at once. It's incredible to me that someone can do 100
reviews a week... I just think it would be nice if there were more people
who could lift some reviews off their shoulders. It might be a stupid
idea... Based on the reactions, or the lack of them, it probably is.

 - Gabor
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Review problems

2015-03-18 Thread Gabor Krizsanits
I think I'm not the only one who has experienced issues with reviews, from
one side or the other. I'm wondering if we could make some improvements
here for everyone's sake.

Here are the issues the way I see it:
* some parts of the code need more peers
  - we should identify the areas
  - we should select candidates
  - there should be a clear path to become a peer
(reading up code/spec, asking for sr's first and starting with easier
ones, etc)
* reviews are not part of our goal system
  - it makes no sense to work on something if the reviewer
  will likely take several weeks or even months to get to
  the review (for various but foreseeable reasons)
  - people who are flooded with reviews cannot focus on the
  actual goals they signed up for, or have to block people
  by not doing reviews

Maybe I'm just not well informed, and all these issues are already being
taken care of... But if not, I would like to hear what others think.

- Gabor
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Announcing early any changes on the try server and the exact build envs

2014-06-03 Thread Gabor Krizsanits
> 
> It's pretty rare that things such OS, Compiler, SDK change on our build
> systems. We do tend to make noise about them when that happens, too. Do
> you have specific examples to point at?

Where can I follow these changes? One specific example is bug 1002729 and
the like... Currently m-c does not build with gcc 4.6 on Ubuntu because of
something similar. After updating to 4.8 I got some warnings in WebRTC
code, so I had to turn off warnings-as-errors.

In the past we changed the required Windows SDK unannounced, too... That
being said, I can totally imagine that I don't follow the right channels
where these kinds of changes are announced. For example, do we have a way
to tell the exact configurations on the try server? Like the exact gcc
version / Linux version...

> 
> In any case, it's good to keep in mind that Try follows mozilla-central
> in terms of build configuration and requirements -- which means it's
> inherently unstable. We make no guarantees about build requirements
> staying static for any period of time.

That is understandable. But let's say I want to work on a configuration
similar to the Linux box on try; where can I check its exact environment to
reproduce something similar? And once I have it, how will I know when I
should update something on it to match what we have on try?

Thanks,
Gabor
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Announcing early any changes on the try server and the exact build envs

2014-06-03 Thread Gabor Krizsanits
From time to time, no matter what platform I use, the build configuration
on the try server changes, and from that point on it's just a matter of
time before my build gets broken. When you're about to work on some urgent
fixes, it can be very frustrating to have to fix the build instead...

I think we only support the exact configurations that we are testing
against on the try server. Anything else (that is, anything untested) will
get broken. We should fix those breakages... but they will happen from time
to time. So one thing we can do is communicate clearly what the exact
environment on the try server is (OS version, gcc version, Windows SDK
version, etc.) and warn people when it changes. Or we have to run automated
tests for more environments (which, resource-wise, won't be easy/feasible).

I really cannot blame people for breaking the build on an environment we
are not testing against on the try server (especially not with
warnings-as-errors turned on...), for obvious reasons. But I think we could
do better about announcing any changes to it early, so we could all update
our systems in time accordingly and avoid having to deal with these kinds
of problems at the worst possible time.

 - Gabor Krizsanits
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Please do not add new web-exposed XPCOM objects

2014-03-18 Thread Gabor Krizsanits
> On Mon, Mar 17, 2014 at 10:25 AM, Benjamin Smedberg
> wrote:
> 
> > Maybe this means we should consider exposing some kind of structured-clone
> > system for calling untrusted code, plus a safer way to call functions which
> > may return arbitrary results?
> >
> 
> This actually already exists. Cu.exportFunction (also see Cu.cloneInto).
> 
> Gabor, do we have docs for them anywhere on MDN? If not, can you coordinate
> with the docs team to make that happen?
> 

I already talked about this with Will during the devtools workweek. It's
not there yet. I think the best thing would be if I wrote a quick sketch
and then Will could quickly turn it into a doc...

Gabor
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform